numerically stable way to multiply log probability matrices in numpy
|
I need to take the matrix product of two NumPy matrices (or other 2d arrays) containing log probabilities. The naive way np.log(np.dot(np.exp(a), np.exp(b))) is not preferred for obvious reasons.
Using
from scipy.misc import logsumexp

res = np.zeros((a.shape[0], b.shape[1]))
for n in range(b.shape[1]):
    # broadcast b[:,n] over rows of a, sum columns
    res[:, n] = logsumexp(a + b[:, n].T, axis=1)
works but runs about 100 times slower than np.log(np.dot(np.exp(a), np.exp(b)))
Using
logsumexp((tile(a, (b.shape[1],1)) + repeat(b.T, a.shape[0], axis=0)).reshape(b.shape[1],a.shape[0],a.shape[1]), 2).T
or other combinations of tile and reshape also work but run even slower than the loop above due to the prohibitively large amounts of memory required for realistically sized input matrices.
I am currently considering writing a NumPy extension in C to compute this, but of course I'd rather avoid that. Is there an established way to do this, or does anybody know of a less memory intensive way of performing this computation?
EDIT:
Thanks to larsmans for this solution (see below for derivation):
def logdot(a, b):
    max_a, max_b = np.max(a), np.max(b)
    exp_a, exp_b = a - max_a, b - max_b
    np.exp(exp_a, out=exp_a)
    np.exp(exp_b, out=exp_b)
    c = np.dot(exp_a, exp_b)
    np.log(c, out=c)
    c += max_a + max_b
    return c
A quick comparison of this method to the method posted above (logdot_old) using IPython's %timeit magic yields the following:
In [1] a = np.log(np.random.rand(1000,2000))
In [2] b = np.log(np.random.rand(2000,1500))
In [3] x = logdot(a, b)
In [4] y = logdot_old(a, b) # this takes a while
In [5] np.any(np.abs(x-y) > 1e-14)
Out [5] False
In [6] %timeit logdot_old(a, b)
1 loops, best of 3: 1min 18s per loop
In [7] %timeit logdot(a, b)
1 loops, best of 3: 264 ms per loop
Obviously larsmans' method obliterates mine!
|
logsumexp works by evaluating the right-hand side of the equation
log(∑ exp[a]) = max(a) + log(∑ exp[a - max(a)])
I.e., it pulls out the max before starting to sum, to prevent overflow in exp. The same can be applied before doing vector dot products:
log(exp[a] ⋅ exp[b])
    = log(∑ exp[a] × exp[b])
    = log(∑ exp[a + b])
    = max(a + b) + log(∑ exp[a + b - max(a + b)])    { this is logsumexp(a + b) }
but by taking a different turn in the derivation, we obtain
log(∑ exp[a] × exp[b])
    = max(a) + max(b) + log(∑ exp[a - max(a)] × exp[b - max(b)])
    = max(a) + max(b) + log(exp[a - max(a)] ⋅ exp[b - max(b)])
The final form has a vector dot product in its innards. It also extends readily to matrix multiplication, so we get the algorithm
def logdotexp(A, B):
    max_A = np.max(A)
    max_B = np.max(B)
    C = np.dot(np.exp(A - max_A), np.exp(B - max_B))
    np.log(C, out=C)
    C += max_A + max_B
    return C
This creates two A-sized temporaries and two B-sized ones, but one of each can be eliminated by
exp_A = A - max_A
np.exp(exp_A, out=exp_A)
and similarly for B. (If the input matrices may be modified by the function, all the temporaries can be eliminated.)
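For completeness, here is a minimal sketch of the fully in-place variant the last sentence alludes to, assuming the caller allows A and B to be overwritten (the function name is mine, not from the answer):

import numpy as np

def logdotexp_inplace(A, B):
    # reuse A and B as the scratch space for exp(A - max_A) and exp(B - max_B)
    max_A = np.max(A)
    max_B = np.max(B)
    A -= max_A
    np.exp(A, out=A)
    B -= max_B
    np.exp(B, out=B)
    C = np.dot(A, B)
    np.log(C, out=C)
    C += max_A + max_B
    return C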
|
django change default runserver port
|
I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini.
|
Create a bash script with the following content:
#!/bin/bash
exec ./manage.py runserver 0.0.0.0:<your_port>
save it as runserver in the same dir as manage.py
chmod +x runserver
and run it as
./runserver
|
Disable a method in a ViewSet, django-rest-framework
|
ViewSets have automatic methods to list, retrieve, create, update, delete, ...
I would like to disable some of those, and the solution I came up with is probably not a good one, since OPTIONS still states those as allowed.
Any idea on how to do this the right way?
class SampleViewSet(viewsets.ModelViewSet):
    queryset = api_models.Sample.objects.all()
    serializer_class = api_serializers.SampleSerializer

    def list(self, request):
        return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)

    def create(self, request):
        return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
|
The definition of ModelViewSet is:
class ModelViewSet(mixins.CreateModelMixin,
                   mixins.RetrieveModelMixin,
                   mixins.UpdateModelMixin,
                   mixins.DestroyModelMixin,
                   mixins.ListModelMixin,
                   GenericViewSet)
So rather than extending ModelViewSet, why not just use whatever you need? So for example:
from rest_framework import viewsets, mixins
class SampleViewSet(mixins.RetrieveModelMixin,
                    mixins.UpdateModelMixin,
                    mixins.DestroyModelMixin,
                    viewsets.GenericViewSet):
    ...
With this approach, the router should only generate routes for the included methods.
Reference:
ModelViewSet
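As an illustration, wiring the trimmed-down viewset into a router would then only expose the retrieve/update/destroy routes; a minimal sketch, assuming the viewset defines a queryset (the 'samples' prefix is made up):

from rest_framework import routers

router = routers.DefaultRouter()
router.register(r'samples', SampleViewSet)
urlpatterns = router.urls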
|
How to find all possible regex matches in python?
|
I am trying to find all possible word/tag pairs or other nested combinations with python and its regular expressions.
sent = '(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))'
import re

def checkBinary(sentence):
    n = re.findall("\([A-Za-z-0-9\s\)\(]*\)", sentence)
    print(n)
checkBinary(sent)
Output:
['(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))']
looking for:
['(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))',
'(NNP Hoi)',
'(NN Hallo)',
'(NN Hey)',
'(NNP (NN Ciao) (NN Adios))',
'(NN Ciao)',
'(NN Adios)']
I think the regex formula could find the nested parenthesis word/tag pairs as well, but it doesn't return them. How should I do this?
|
It's actually not possible to do this using regular expressions alone: a regular expression describes a regular language, which is recognized by a finite automaton in which matching is represented by a finite number of states. To match arbitrarily nested parentheses, you would need to keep track of an unbounded nesting depth, and that would require an automaton with an infinite number of states.
To cope with that, we use what is called a push-down automaton, which is what recognizes a context-free grammar.
So if your regex does not capture the nested parentheses, it is because the finite automaton it describes simply cannot track the nesting in your input.
As a reference, please have a look at MIT's courses on the topic:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-045j-automata-computability-and-complexity-spring-2011/lecture-notes/MIT6_045JS11_lec04.pdf
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-005-elements-of-software-construction-fall-2011/lecture-notes/MIT6_005F11_lec05.pdf
http://www.saylor.org/site/wp-content/uploads/2012/01/CS304-2.1-MIT.pdf
So one of the ways to parse your string efficiently is to build a grammar for nested parentheses with pyparsing (pip install pyparsing first):
>>> import pyparsing
>>> strings = pyparsing.Word(pyparsing.alphanums)
>>> parens = pyparsing.nestedExpr( '(', ')', content=strings)
>>> parens.parseString('(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))').asList()
[['NP', ['NNP', 'Hoi'], ['NN', 'Hallo'], ['NN', 'Hey'], ['NNP', ['NN', 'Ciao'], ['NN', 'Adios']]]]
N.B.: there exist a few regular expression engines that do implement nested parenthesis matching using a push-down mechanism. The default Python re engine is not one of them, but an alternative engine exists, called regex (pip install regex), that can do recursive matching (which makes the engine effectively context free); cf. this code snippet:
>>> import regex
>>> res = regex.search(r'(?<rec>\((?:[^()]++|(?&rec))*\))', '(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))')
>>> res.captures('rec')
['(NNP Hoi)', '(NN Hallo)', '(NN Hey)', '(NN Ciao)', '(NN Adios)', '(NNP (NN Ciao) (NN Adios))', '(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))']
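If you prefer to stay in the standard library, a small stack-based scan can also collect every parenthesized group; a minimal sketch, assuming the input is well-formed:

def all_groups(s):
    # push the index of every '(' and emit a group whenever its matching ')' closes
    stack, groups = [], []
    for i, ch in enumerate(s):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            start = stack.pop()
            groups.append(s[start:i + 1])
    return groups

print(all_groups('(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))'))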
|
Printing boolean values True/False with the format() method in Python
|
I was trying to print a truth table for Boolean expressions. While doing this, I stumbled upon the following:
>>> format(True, "") # shows True in a string representation, same as str(True)
'True'
>>> format(True, "^") # centers True in the middle of the output string
'1'
As soon as I specify a format specifier, format() converts True to 1. I know that bool is a subclass of int, so that True evaluates to 1:
>>> format(True, "d") # shows True in a decimal format
'1'
But why does using the format specifier change 'True' to 1 in the first example?
I turned to the docs for clarification. The only thing it says is:
A general convention is that an empty format string ("") produces the same result as if you had called str() on the value. A non-empty format string typically modifies the result.
So the string gets modified when you use a format specifier. But why the change from True to 1 if only an alignment operator (e.g. ^) is specified?
|
Excellent question! I believe I have the answer. This requires digging around through the Python source code in C, so bear with me.
First, format(obj, format_spec) is just syntactic sugar for obj.__format__(format_spec). For specifically where this occurs, you'd have to look in abstract.c, in the function:
PyObject *
PyObject_Format(PyObject* obj, PyObject *format_spec)
{
    PyObject *empty = NULL;
    PyObject *result = NULL;
    ...
    if (PyInstance_Check(obj)) {
        /* We're an instance of a classic class */
HERE -> PyObject *bound_method = PyObject_GetAttrString(obj, "__format__");
        if (bound_method != NULL) {
            result = PyObject_CallFunctionObjArgs(bound_method,
                                                  format_spec,
                                                  NULL);
    ...
}
To find the exact call, we have to look in intobject.c:
static PyObject *
int__format__(PyObject *self, PyObject *args)
{
    PyObject *format_spec;
    ...
    return _PyInt_FormatAdvanced(self,                      <- LET'S FIND THIS
                                 PyBytes_AS_STRING(format_spec),
                                 PyBytes_GET_SIZE(format_spec));
    ...
}
_PyInt_FormatAdvanced is actually defined as a macro in formatter_string.c as a function found in formatter.h:
static PyObject*
format_int_or_long(PyObject* obj,
                   STRINGLIB_CHAR *format_spec,
                   Py_ssize_t format_spec_len,
                   IntOrLongToString tostring)
{
    PyObject *result = NULL;
    PyObject *tmp = NULL;
    InternalFormatSpec format;

    /* check for the special case of zero length format spec, make
       it equivalent to str(obj) */
    if (format_spec_len == 0) {
        result = STRINGLIB_TOSTR(obj);    <- EXPLICIT CAST ALERT!
        goto done;
    }
    ...  /* Otherwise, format the object as if it were an integer */
}
And therein lies your answer. A simple check for whether format_spec_len is 0, and if it is, convert obj into a string. As you well know, str(True) is 'True', and the mystery is over!
|
Does enumerate() produce a generator object?
|
As a complete Python newbie, it certainly looks that way. Running the
following...
x = enumerate(['fee', 'fie', 'foe'])
x.next()
# Out[1]: (0, 'fee')
list(x)
# Out[2]: [(1, 'fie'), (2, 'foe')]
list(x)
# Out[3]: []
... I notice that: (a) x does have a next method, as seems to be
required for generators, and (b) x can only be iterated over once, a
characteristic of generators emphasized in this famous python-tag
answer.
On the other hand, the two most highly-upvoted answers to this
question
about how to determine whether an object is a generator would seem to
indicate that enumerate() does not return a generator.
import types
import inspect
x = enumerate(['fee', 'fie', 'foe'])
isinstance(x, types.GeneratorType)
# Out[4]: False
inspect.isgenerator(x)
# Out[5]: False
... while a third poorly-upvoted answer to that question would seem to indicate that enumerate() does in fact return a generator:
def isgenerator(iterable):
return hasattr(iterable,'__iter__') and not hasattr(iterable,'__len__')
isgenerator(x)
# Out[8]: True
So what's going on? Is x a generator or not? Is it in some sense
"generator-like", but not an actual generator? Does Python's use of
duck-typing mean that the test outlined in the final code block above
is actually the best one?
Rather than continue to write down the possibilities running through my
head, I'll just throw this out to those of you who will immediately
know the answer.
|
While the Python documentation says that enumerate is functionally equivalent to:
def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
The real enumerate function returns an iterator, but not an actual generator. You can see this if you call help(x) after creating an enumerate object:
>>> x = enumerate([1,2])
>>> help(x)
class enumerate(object)
| enumerate(iterable[, start]) -> iterator for index, value of iterable
|
| Return an enumerate object. iterable must be another object that supports
| iteration. The enumerate object yields pairs containing a count (from
| start, which defaults to zero) and a value yielded by the iterable argument.
| enumerate is useful for obtaining an indexed list:
| (0, seq[0]), (1, seq[1]), (2, seq[2]), ...
|
| Methods defined here:
|
| __getattribute__(...)
| x.__getattribute__('name') <==> x.name
|
| __iter__(...)
| x.__iter__() <==> iter(x)
|
| next(...)
| x.next() -> the next value, or raise StopIteration
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __new__ = <built-in method __new__ of type object>
| T.__new__(S, ...) -> a new object with type S, a subtype of T
In Python, generators are basically a specific type of iterator that's implemented by using a yield to return data from a function. However, enumerate is actually implemented in C, not pure Python, so there's no yield involved. You can find the source here: http://hg.python.org/cpython/file/2.7/Objects/enumobject.c
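To make the distinction concrete, here is a small check, sketched with the Python 2 names used in the question: the object passes the iterator test but fails the generator tests.

import collections
import inspect
import types

x = enumerate(['fee', 'fie', 'foe'])
print(isinstance(x, collections.Iterator))   # True: it is an iterator
print(isinstance(x, types.GeneratorType))    # False: but not a generator
print(inspect.isgenerator(x))                # False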
|
Use Python to write CSV output to STDOUT
|
I know I can write a CSV file with something like:
with open('some.csv', 'w', newline='') as f:
How would I instead write that output to STDOUT?
|
sys.stdout is a file object corresponding to the program's standard output. You can use its write() method. Note that it's probably not necessary to use the with statement, because stdout does not have to be opened or closed.
So, if you need to create a csv.writer object, you can just say:
import sys
spamwriter = csv.writer(sys.stdout)
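A minimal end-to-end sketch (the column names and rows are made up):

import csv
import sys

writer = csv.writer(sys.stdout)
writer.writerow(['name', 'age'])
writer.writerow(['Alice', 30])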
|
Difference between setUpClass and setUp in Python unittest
|
What is the difference between setUpClass and setUp in the Python unittest framework? Why not do the setup in setUp instead of setUpClass?
I want to understand what part of the setup is done in the setUp and setUpClass functions, and likewise for tearDown and tearDownClass.
|
The difference manifests itself when you have more than one test method in your class. setUpClass and tearDownClass are run once for the whole class; setUp and tearDown are run before and after each test method.
For example:
class Example(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print("setUpClass")

    def setUp(self):
        print("setUp")

    def test1(self):
        print("test1")

    def test2(self):
        print("test2")

    def tearDown(self):
        print("tearDown")

    @classmethod
    def tearDownClass(cls):
        print("tearDownClass")
When you run this test, it prints:
setUpClass
setUp
test1
tearDown
.setUp
test2
tearDown
.tearDownClass
(The dots (.) are unittest's default output when a test passes.) Observe that setUp and tearDown appear before and after test1 and test2, whereas setUpClass and tearDownClass appear only once, at the beginning and end of the whole test case.
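A common reason to prefer setUpClass is to share one expensive resource across all the test methods instead of rebuilding it for every test; a minimal sketch, where make_connection() is a hypothetical stand-in for whatever costly setup you have:

import unittest

class DatabaseTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # hypothetical expensive resource, created once for the whole class
        cls.connection = make_connection()

    @classmethod
    def tearDownClass(cls):
        cls.connection.close()

    def test_query(self):
        self.assertIsNotNone(self.connection)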
|
pandas joining multiple dataframes on columns
|
I have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person.
How can I "join" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name?
The join() function in pandas specifies that I need a multiindex, but I'm confused about what a hierarchical indexing scheme has to do with making a join based on a single index.
|
Assumed imports (note that in Python 3, reduce lives in functools):
import pandas as pd
from functools import reduce
John Galt's answer is basically a reduce operation. If I have more than a handful of dataframes, I'd put them in a list like this (generated via list comprehensions or loops or whatnot):
dfs = [df0, df1, df2, dfN]
Assuming they have some common column, like name in your example, I'd do the following:
df_final = reduce(lambda left,right: pd.merge(left,right,on='name'), dfs)
That way, your code should work with whatever number of dataframes you want to merge.
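A small end-to-end sketch of the same idea, with three made-up frames sharing a 'name' column:

from functools import reduce
import pandas as pd

df0 = pd.DataFrame({'name': ['ann', 'bob'], 'age': [30, 40]})
df1 = pd.DataFrame({'name': ['ann', 'bob'], 'city': ['Oslo', 'Lima']})
df2 = pd.DataFrame({'name': ['ann', 'bob'], 'score': [1.5, 2.5]})

dfs = [df0, df1, df2]
df_final = reduce(lambda left, right: pd.merge(left, right, on='name'), dfs)
print(df_final)   # one row per name, with all the attribute columns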
|
Get permutation with specified degree by index number
|
I've been working on this for hours but couldn't figure it out.
Define a permutation's degree to be the minimum number of transpositions that need to be composed to create it. So the degree of (0, 1, 2, 3) is 0, the degree of (0, 1, 3, 2) is 1, the degree of (1, 0, 3, 2) is 2, etc.
Look at the space Snd as the space of all permutations of a sequence of length n that have degree d.
I want two algorithms. One that takes a permutation in that space and assigns it an index number, and another that takes an index number of an item in Snd and retrieves its permutation. The index numbers should obviously be successive (i.e. in the range 0 to len(Snd)-1, with each permutation having a distinct index number.)
I'd like this implemented in O(sane); which means that if you're asking for permutation number 17, the algorithm shouldn't go over all the permutations between 0 and 16 to retrieve your permutation.
Any idea how to solve this?
(If you're going to include code, I prefer Python, thank you.)
Update:
I want a solution in which
The permutations are ordered according to their lexicographic order (and not by manually ordering them, but by an efficient algorithm that gives them with lexicographic order to begin with) and
I want the algorithm to accept a sequence of different degrees as well, so I could say "I want permutation number 78 out of all permutations of degrees 1, 3 or 4 out of the permutation space of range(5)". (Basically the function would take a tuple of degrees.) This'll also affect the reverse function that calculates index from permutation; based on the set of degrees, the index would be different.
I've tried solving this for the last two days and I was not successful. If you could provide Python code, that'd be best.
|
The permutations of length n and degree d are exactly those that can be written as a composition of k = n - d cycles that partition the n elements. The number of such permutations is given by the Stirling numbers of the first kind, written n atop k in square brackets.
Stirling numbers of the first kind satisfy a recurrence relation
[n]             [n - 1]   [n - 1]
[ ]  =  (n - 1) [     ] + [     ]
[k]             [  k  ]   [k - 1]
which means, intuitively, the number of ways to partition n elements into k cycles is to partition n - 1 non-maximum elements into k cycles and splice in the maximum element in one of n - 1 ways, or put the maximum element in its own cycle and partition the n - 1 non-maximum elements into k - 1 cycles. Working from a table of recurrence values, it's possible to trace the decisions down the line.
memostirling1 = {(0, 0): 1}

def stirling1(n, k):
    if (n, k) not in memostirling1:
        if not (1 <= k <= n): return 0
        memostirling1[(n, k)] = (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)
    return memostirling1[(n, k)]

def unrank(n, d, i):
    k = n - d
    assert 0 <= i <= stirling1(n, k)
    if d == 0:
        return list(range(n))
    threshold = stirling1(n - 1, k - 1)
    if i < threshold:
        perm = unrank(n - 1, d, i)
        perm.append(n - 1)
    else:
        (q, r) = divmod(i - threshold, stirling1(n - 1, k))
        perm = unrank(n - 1, d - 1, r)
        perm.append(perm[q])
        perm[q] = n - 1
    return perm
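As a quick sanity check of the sketch above, enumerating every degree-1 permutation of range(4) (there should be stirling1(4, 3) = 6 of them, one per transposition):

for i in range(stirling1(4, 3)):
    print(i, unrank(4, 1, i))   # each line is a permutation one transposition away from (0, 1, 2, 3)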
|
In Python, is object() equal to anything besides itself?
|
If I have the code my_object = object() in Python, will my_object be equal to anything except for itself?
I suspect the answer lies in the __eq__ method of the default object returned by object(). What is the implementation of __eq__ for this default object?
EDIT: I'm using Python 2.7, but am also interested in Python 3 answers. Please clarify whether your answer applies to Python 2, 3, or both.
|
object().__eq__ returns the NotImplemented singleton:
print(object().__eq__(3))
NotImplemented
By the reflexive rules of rich comparisons, when NotImplemented is returned, the "reflected" operation is tried. So if you have an object on the RHS that returns True for that comparison, then you can get a True response even though the LHS did not implement the comparison.
class EqualToEverything(object):
    def __eq__(self, other):
        return True

ete = EqualToEverything()

ete == object()   # we implemented `ete.__eq__`, so this is obviously True
Out[74]: True

object() == ete   # still True due to the reflexive rules of rich comparisons
Out[75]: True
Python 2 specific bit: if neither object implements __eq__, then Python moves on to check whether either implements __cmp__. Equivalent reflexive rules apply here.
class ComparableToEverything(object):
    def __cmp__(self, other):
        return 0

cte = ComparableToEverything()

cte == object()
Out[5]: True

object() == cte
Out[6]: True
__cmp__ is gone in python 3.
In both python 2 and 3, when we exhaust all of these comparison operators and all are NotImplemented, the final fallback is checking identity. (a is b)
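To see that identity fallback directly, here is a tiny sketch: two fresh object() instances are only ever equal to themselves.

a, b = object(), object()
print(a == a)   # True: same identity
print(a == b)   # False: both __eq__ calls return NotImplemented, so Python falls back to identity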
|
'pip' is not recognized as an internal or external command
|
I'm running into a weird error trying to install Django on my computer.
This is the sequence that I've typed into my command line:
C:\Python34>python get-pip.py
Requirement already up-to-date: pip in c:\python34\lib\site-packages
Cleaning up...
C:\Python34>pip install Django
'pip' is not recognized as an internal or external command,
operable program or batch file.
C:\Python34>lib\site-packages\pip install Django
'lib\site-packages\pip' is not recognized as an internal or external command,
operable program or batch file.
What could be causing this?
EDIT ___________________
As requested this is what I get when I type in echo %PATH%
C:\Python34>echo %PATH%
C:\Program Files\ImageMagick-6.8.8-Q16;C:\Program Files (x86)\Intel\iCLS Client\;
C:\Program Files\Intel\iCLS Client\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;
C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Live\Shared;
C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64;
C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;
C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;
C:\Program Files (x86)\nodejs\;C:\Program Files (x86)\Heroku\bin;C:\Program Files (x86)\git\cmd;
C:\RailsInstaller\Ruby2.0.0\bin;C:\RailsInstaller\Git\cmd;C:\RailsInstaller\Ruby1.9.3\bin;C:\Users\Javi\AppData\Roaming\npm
|
You need to add the path of your pip installation to your PATH system variable. By default, pip is installed to C:\Python34\Scripts\pip (pip now comes bundled with new versions of python), so the path "C:\Python34\Scripts" needs to be added to your PATH variable.
To check if it is already in your PATH variable, type echo %PATH% at the CMD prompt
To add the path of your pip installation to your PATH variable, you can use the Control Panel or the setx command. For example:
setx PATH "%PATH%;C:\Python34\Scripts"
Note:
According to the official documentation, "[v]ariables set with setx variables are available in future command windows only, not in the current command window". In particular, you will need to start a new cmd.exe instance after entering the above command in order to utilize the new environment variable.
Thanks to Scott Bartell for pointing this out.
|
pretty print json in python (pythonic way)
|
I've seen the pprint lib. However, I'm always retrieving JSON data, and I wonder if there is any easy and fast way to achieve simple, decent printing.
import requests
r = requests.get('http://server.com/api/2/....')
r.json()
(no pretty printing)
The pprint way,
>>> import pprint
>>> import requests
>>> r = requests.get('http://server.com/api/2/....')
>>> pp = pprint.PrettyPrinter(indent=4)
>>> pp.pprint(r.json())
edit
So it looks better now,
>>> import requests
>>> from pprint import pprint
>>> r = requests.get('http://server.com/api/2/....')
>>> pprint(r.json())
|
Python's builtin JSON module can handle that for you:
>>> import json
>>> a = {'hello': 'world', 'a': [1, 2, 3, 4], 'foo': 'bar'}
>>> print(json.dumps(a, indent=2))
{
  "hello": "world",
  "a": [
    1,
    2,
    3,
    4
  ],
  "foo": "bar"
}
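Applied to the requests example from the question (the URL is the question's placeholder), that becomes a one-liner:

import json
import requests

r = requests.get('http://server.com/api/2/....')
print(json.dumps(r.json(), indent=2, sort_keys=True))   # sort_keys is optional but keeps the output stable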
|
Set up Python 3 build system with Sublime Text 3
|
I want to configure Sublime Text 3 to build Python 3, but I don't seem to understand how the builds work. Many tutorials have told me to make a build file containing code such as:
{
    'cmd': ['/usr/bin/python3', '-u', '$file'],
    'file_regex': '^[ ]*File "(...*?)", line ([0-9]*)',
    'selector': 'source.python'
}
and save it as a file called Python.sublime-build or python3.sublime-build (much of the information I found was conflicting). One tutorial suggested creating a new folder in the ST3 Packages folder called Python and add the build file in there, whilst other tutorials suggested leaving it in the folder called User.
One tutorial explained how I had to change the Environment Variable path on my operating system to get it to work. That didn't seem to help either.
I added a folder Python to Packages (since it wasn't there already) and added in a build file with the name Python.sublime_build which featured only the code I posted above in it. Now when I attempt to run Sublime Text it gives me this error:
Error trying to parse build system:
Expected value in Packages\Python\Python.sublime-build:2:5
|
The reason you're getting the error is that you have a Unix-style path to the python executable, when you're running Windows. Change /usr/bin/python3 to C:/Python32/python.exe (make sure you use the forward slashes / and not Windows-style back slashes \). Once you make this change, you should be all set.
Also, you need to change the single quotes ' to double quotes " like so:
{
    "cmd": ["c:/Python32/python.exe", "-u", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python"
}
The .sublime-build file needs to be valid JSON, which requires strings be wrapped in double quotes, not single.
|
Install py2exe for python 2.7 over pip: this package requires Python 3.3 or later
|
>>> python -c "import sys; print sys.version"
2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)]
>>> pip --version
pip 1.5.5 from C:\Python27\lib\site-packages (python 2.7)
>>> pip install py2exe
<mumble grumble..>
RuntimeError: This package requires Python 3.3 or later
though official py2exe download page says they have exactly what I need:
So how to install py2exe over pip?
|
It is missing from PyPI: if you click on the 0.6.9 link it brings you to the 0.9.2.0 Python 3 package, and there seems to be no 0.6.9 package available to download.
Try using pip install http://sourceforge.net/projects/py2exe/files/latest/download?source=files
|
locale.getpreferredencoding() - why does this reset string.letters?
|
>>> import string
>>> import locale
>>> string.letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> locale.getpreferredencoding()
'UTF-8'
>>> string.letters
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
Any workarounds for this?
Platform: Linux
Python2.6.7 and Python2.7.3 seem to be affected, Works fine in Python3 (with ascii_letters)
|
Note: what the OP did to solve the issue was to pass encoding='UTF-8' to the open call. If you run into this issue and are just looking for a fix, that works. The rest of this post explains why it happens.
What happens
As Lukas said, the docs specify:
On some systems, it is necessary to invoke setlocale() to obtain the user preferences
Initially, string.letters is set to returning lowercase + uppercase:
lowercase = 'abcdefghijklmnopqrstuvwxyz'
uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
letters = lowercase + uppercase
However, when you call getpreferredencoding(), the _locale module overrides it by calling PyDict_SetItemString(string, "letters", ulo); after it generates them inside fixup_ulcase(void) with the following:
/* create letters string */
n = 0;
for (c = 0; c < 256; c++) {
    if (isalpha(c))
        ul[n++] = c;
}
ulo = PyString_FromStringAndSize((const char *)ul, n);
if (!ulo)
    return;
if (string)
    PyDict_SetItemString(string, "letters", ulo);
Py_DECREF(ulo);
In turn, this is called in PyLocale_setlocale which is indeed setlocale, which is called by getpreferredencoding - code here http://hg.python.org/cpython/file/07a6fca7ff42/Lib/locale.py#l612 :
def getpreferredencoding(do_setlocale = True):
    """Return the charset that the user is likely using,
    according to the system configuration."""
    if do_setlocale:
        oldloc = setlocale(LC_CTYPE)
        try:
            setlocale(LC_CTYPE, "")
        except Error:
            pass
        result = nl_langinfo(CODESET)
        setlocale(LC_CTYPE, oldloc)
        return result
    else:
        return nl_langinfo(CODESET)
How do I avoid it?
Try getpreferredencoding(False)
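A quick sketch of that workaround in action (Python 2, assuming a UTF-8 locale as in the question): skipping the setlocale call leaves string.letters untouched.

import locale
import string

before = string.letters
locale.getpreferredencoding(False)   # do_setlocale=False: no setlocale side effect
assert string.letters == before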
Why does it not happen in windows?
Windows uses different code for getting the locale, as you can see here.
In Python 3
In Python 3, getdefaultlocale does not accept a boolean setlocale variable and does not call setlocale itself as you can see here.
|
Pandas DataFrame to list
|
I am pulling a subset of data from a column based on conditions in another column being met.
I can get the correct values back but it is in pandas.core.frame.DataFrame. How do I convert that to list?
import pandas as pd
tst = pd.read_csv('C:\\SomeCSV.csv')
lookupValue = tst['SomeCol'] == "SomeValue"
ID = tst[lookupValue][['SomeCol']]
#How To convert ID to a list
|
Use .values to get a numpy.array and then .tolist() to get a list.
For example:
import pandas as pd
df = pd.DataFrame({'a': [1, 3, 5, 7, 4, 5, 6, 4, 7, 8, 9],
                   'b': [3, 5, 6, 2, 4, 6, 7, 8, 7, 8, 9]})
Result:
>>> df['a'].values.tolist()
[1, 3, 5, 7, 4, 5, 6, 4, 7, 8, 9]
or you can just use
>>> df['a'].tolist()
[1, 3, 5, 7, 4, 5, 6, 4, 7, 8, 9]
To drop duplicates you can do one of the following:
>>> df['a'].drop_duplicates().values.tolist()
[1, 3, 5, 7, 4, 6, 8, 9]
>>> list(set(df['a'])) # as pointed out by EdChum
[1, 3, 4, 5, 6, 7, 8, 9]
|
Create conda package across many versions
|
I have a very simple Pure Python package on PyPI that I'd like to make available on binstar. My package targets Python 2.6+ and 3.2+ with a single codebase. I also expect that it works equally well on Windows as well as Unix.
Is there a simple way to comprehensively build my package and upload it to binstar for many Python versions?
I've tried a naive use of conda skeleton pypi as suggested in this article. I'd like to replicate this process across many different versions.
|
If you want to build recipes for many different versions of the package, use the --version flag to conda skeleton pypi. I recommend using package-version as a naming convention for the recipes.
If you want to build the same package for many different Python versions, use the --py flag to conda build, like conda build --py 3.4 recipe. The flag can be specified multiple times, like conda build --py 3.3 --py 3.4 recipe, or you can use --py all to build against Python 2.6, 2.7, 3.3, and 3.4.
To convert the package to other platforms, use conda convert (see conda convert -h for usage; be sure to run conda update conda-build, as the API changed a little bit recently).
The easiest way to get the binstar uploading is to run conda config --set binstar_upload yes. This will cause conda build and conda convert to upload the packages to binstar automatically when they are done.
|
Processing Large Files in Python [ 1000 GB or More]
|
Let's say I have a text file of 1000 GB. I need to find how many times a phrase occurs in the text.
Is there any faster way to do this than the one I am using below?
How long would it take to complete the task?
phrase = "how fast it is"
count = 0
with open('bigfile.txt') as f:
    for line in f:
        count += line.count(phrase)
If I am right, and I do not have this file in memory, I would need to wait till the PC loads the file each time I do the search, and this should take at least 4000 sec for a 250 MB/sec hard drive and a file of 1000 GB.
|
I used file.read() to read the data in chunks; in the current examples the chunks were of size 100 MB, 500 MB, 1 GB and 2 GB respectively. The size of my text file is 2.1 GB.
Code:
from functools import partial

def read_in_chunks(size_in_bytes):
    s = 'Lets say i have a text file of 1000 GB'
    with open('data.txt', 'r+b') as f:
        prev = ''
        count = 0
        f_read = partial(f.read, size_in_bytes)
        for text in iter(f_read, ''):
            if not text.endswith('\n'):
                # if the file contains a partial line at the end, then don't
                # use it when counting the substring count.
                text, rest = text.rsplit('\n', 1)
                # pre-pend the previous partial line if any.
                text = prev + text
                prev = rest
            else:
                # if the text ends with a '\n' then simply pre-pend the
                # previous partial line.
                text = prev + text
                prev = ''
            count += text.count(s)
        count += prev.count(s)
        print count
Timings:
read_in_chunks(104857600)
$ time python so.py
10000000
real 0m1.649s
user 0m0.977s
sys 0m0.669s
read_in_chunks(524288000)
$ time python so.py
10000000
real 0m1.558s
user 0m0.893s
sys 0m0.646s
read_in_chunks(1073741824)
$ time python so.py
10000000
real 0m1.242s
user 0m0.689s
sys 0m0.549s
read_in_chunks(2147483648)
$ time python so.py
10000000
real 0m0.844s
user 0m0.415s
sys 0m0.408s
On the other hand the simple loop version takes around 6 seconds on my system:
def simple_loop():
    s = 'Lets say i have a text file of 1000 GB'
    with open('data.txt') as f:
        print sum(line.count(s) for line in f)
$ time python so.py
10000000
real 0m5.993s
user 0m5.679s
sys 0m0.313s
Results of @SlaterTyranus's grep version on my file:
$ time grep -o 'Lets say i have a text file of 1000 GB' data.txt|wc -l
10000000
real 0m11.975s
user 0m11.779s
sys 0m0.568s
Results of @woot's solution:
$ time cat data.txt | parallel --block 10M --pipe grep -o 'Lets\ say\ i\ have\ a\ text\ file\ of\ 1000\ GB' | wc -l
10000000
real 0m5.955s
user 0m14.825s
sys 0m5.766s
Got best timing when I used 100 MB as block size:
$ time cat data.txt | parallel --block 100M --pipe grep -o 'Lets\ say\ i\ have\ a\ text\ file\ of\ 1000\ GB' | wc -l
10000000
real 0m4.632s
user 0m13.466s
sys 0m3.290s
Results of woot's second solution:
$ time python woot_thread.py # CHUNK_SIZE = 1073741824
10000000
real 0m1.006s
user 0m0.509s
sys 0m2.171s
$ time python woot_thread.py #CHUNK_SIZE = 2147483648
10000000
real 0m1.009s
user 0m0.495s
sys 0m2.144s
System Specs: Core i5-4670, 7200 RPM HDD
|
Django serving media files (user uploaded files ) in openshift
|
I have successfully deployed my Django project on OpenShift, but I need to be able to serve files that are uploaded by users. I use MEDIA_ROOT and MEDIA_URL for that. I followed this tutorial here, but nothing happened. I had to change MEDIA_ROOT because the one suggested there isn't correct, I think. So my MEDIA_ROOT looks like
MEDIA_ROOT = os.path.join(os.environ.get('OPENSHIFT_DATA_DIR', ''),'media')
MEDIA_URL = '/media/'
I added the .htaccess in /wsgi folder with as it says in the article
RewriteEngine On
RewriteRule ^application/media/(.+)$ /static/$1 [L]
and created the build script to make symbolic link of the media in static as the article says.
#!/bin/bash
if [ ! -d $OPENSHIFT_DATA_DIR/media ]; then
    mkdir $OPENSHIFT_DATA_DIR/media
fi
ln -sf $OPENSHIFT_DATA_DIR/media $OPENSHIFT_REPO_DIR/wsgi/static/media
In my urls.py I have added the
urlpatterns += static(settings.MEDIA_ROOT, document_root=settings.MEDIA_URL)
but I still can't serve them. I also tried not to include the django static method in urls.py but the same result.
In another tutorial, .htaccess is placed inside the static folder. Am I doing something wrong?
|
Just so others know, I solved my problem by correcting the RewriteRule, adding the media folder to the second part of the rule, so it became:
RewriteEngine On
RewriteRule ^application/media/(.+)$ /static/media/$1 [L]
Hope it helps others.
|
python abstract attribute (not property)
|
What's the best practice to define an abstract instance attribute, but not as a property?
I would like to write something like:
class AbstractFoo(metaclass=ABCMeta):
    @property
    @abstractmethod
    def bar(self):
        pass

class Foo(AbstractFoo):
    def __init__(self):
        self.bar = 3
Instead of:
class Foo(AbstractFoo):
    def __init__(self):
        self._bar = 3

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def setbar(self, bar):
        self._bar = bar

    @bar.deleter
    def delbar(self):
        del self._bar
Properties are handy, but for a simple attribute requiring no computation they are overkill. This is especially important for abstract classes which will be subclassed and implemented by the user (I don't want to force someone to use @property when they could just have written self.foo = foo in __init__).
The question Abstract attributes in Python proposes, as its only answer, to use @property and @abstractmethod: it doesn't answer my question.
http://code.activestate.com/recipes/577761-simple-abstract-constants-to-use-when-abstractprop/ may be the right way, but I am not sure. It also only works with class attributes and not instance attributes.
|
If you really want to enforce that a subclass define a given attribute, you can use metaclass. Personally, I think it may be overkill and not very pythonic, but you could do something like this:
class AbstractFooMeta(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call Foo(*args, **kwargs)."""
        obj = type.__call__(cls, *args, **kwargs)
        obj.check_bar()
        return obj

class AbstractFoo(object):
    __metaclass__ = AbstractFooMeta
    bar = None

    def check_bar(self):
        if self.bar is None:
            raise NotImplementedError('Subclasses must define bar')

class GoodFoo(AbstractFoo):
    def __init__(self):
        self.bar = 3

class BadFoo(AbstractFoo):
    def __init__(self):
        pass
Basically the metaclass redefines __call__ to make sure check_bar is called after __init__ on an instance.
GoodFoo()  # ok
BadFoo()   # raises NotImplementedError
|
Python: select one of multiple installed module versions
|
On my system, I have several modules installed multiple times. To give an example, numpy 1.6.1 is installed in the standard path at /usr/lib/python2.7/dist-packages, and I have an updated version of numpy 1.8.0 installed at /local/python/lib/python2.7/site-packages/.
The reason I cannot simply remove the old version is that I do not have permissions to change anything on my work computer. I however need to use the new numpy version.
I have added /local/python/lib/python2.7/site-packages/ to my PYTHONPATH. Unfortunately, this does not help, since /usr/lib/python2.7/dist-packages is inserted into the path first and therefore, numpy 1.6.1 will be loaded. Here's an example:
>>> import os
>>> print os.environ['PYTHONPATH']
/local/python/lib/python2.7/site-packages
>>> import pprint
>>> import sys
>>> pprint.pprint(sys.path)
['',
'/local/python/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg',
'/local/python/lib/python2.7/site-packages/pyparsing-2.0.1-py2.7.egg',
'~/.local/lib/python2.7/site-packages/setuptools-3.4.4-py2.7.egg',
'~/.local/lib/python2.7/site-packages/mpldatacursor-0.5_dev-py2.7.egg',
'/usr/lib/python2.7/dist-packages',
'/local/python/lib/python2.7/site-packages',
'/usr/lib/python2.7',
...,
'~/.local/lib/python2.7/dist-packages',
...]
So, it seems that the import order is
current directory
eggs from PYTHONPATH
eggs from local module path (~/.local/lib/python2.7/site-packages/*.egg)
system-wide module path (/usr/lib/python2.7/dist-packages/)
directories from PYTHONPATH
intermediate paths (omitted for brevity)
userbase directory (~/.local/lib/python2.7/site-packages/)
My problem is that I would need to put item 5. before items 3. and 4. for my code to work properly. Right now, if I import a module that was compiled against numpy 1.8.0 from the /local/* directory, and this module imports numpy, it will still take numpy from the /usr/* directory and fail.
I have circumvented this problem by placing something like this in my scripts:
import sys
sys.path.insert(0, '/local/python/lib/python2.7/site-packages/')
Thereby I can force Python to use the right import order, but of course this is not a solution, since I would have to do this in every single script.
|
Besides the suggestions already given in the comment section, have you thought about using virtualenv? This would give you fine-grained control over every module that you want to use. If you're not familiar with virtualenv you'll want to read the documentation to get a feel for how it works.
Purely for example, you could install and set it up, like so (virtualenv-1.11.6 looks to be the most recent version currently):
$ curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.11.6.tar.gz
$ tar xvfz virtualenv-1.11.6.tar.gz
$ cd virtualenv-1.11.6
$ python virtualenv.py ../numpyvenv
$ cd ../numpyvenv
$ source ./bin/activate
(numpyvenv) $ pip install numpy
# downloads, compiles, and installs numpy into the virtual environment
(numpyvenv) $ python
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.version.version
'1.9.1'
>>> quit()
(numpyvenv) $ deactivate
$ # the virtual environment has been deactivated
Above, we created a virtual environment named "numpyvenv", activated the environment, installed numpy, printed the numpy version (to show it works), quit python, and deactivated the environment. Next time you activate the environment, numpy will be there along with whatever other modules you install. You may run into hiccups while trying this, but it should get you started.
|
what is the difference between 'transform' and 'fit_transform' in sklearn
|
In the sklearn Python toolbox, there are two functions, transform and fit_transform, on sklearn.decomposition.RandomizedPCA. The descriptions of the two functions are as follows.
But what is the difference between them?
|
Here is the difference:
You can use pca.transform only if you have already computed the PCA on a matrix:
In [12]: pc2 = RandomizedPCA(n_components=3)
In [13]: pc2.transform(X) # can't transform because it does not know how to do it.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-e3b6b8ea2aff> in <module>()
----> 1 pc2.transform(X)
/usr/local/lib/python3.4/dist-packages/sklearn/decomposition/pca.py in transform(self, X, y)
714 # XXX remove scipy.sparse support here in 0.16
715 X = atleast2d_or_csr(X)
--> 716 if self.mean_ is not None:
717 X = X - self.mean_
718
AttributeError: 'RandomizedPCA' object has no attribute 'mean_'
In [14]: pc2.ftransform(X)
pc2.fit pc2.fit_transform
In [14]: pc2.fit_transform(X)
Out[14]:
array([[-1.38340578, -0.2935787 ],
[-2.22189802, 0.25133484],
[-3.6053038 , -0.04224385],
[ 1.38340578, 0.2935787 ],
[ 2.22189802, -0.25133484],
[ 3.6053038 , 0.04224385]])
If you want to use .transform, you need to teach the transformation rule to your PCA first:
In [20]: pca = RandomizedPCA(n_components=3)
In [21]: pca.fit(X)
Out[21]:
RandomizedPCA(copy=True, iterated_power=3, n_components=3, random_state=None,
whiten=False)
In [22]: pca.transform(z)
Out[22]:
array([[ 2.76681156, 0.58715739],
[ 1.92831932, 1.13207093],
[ 0.54491354, 0.83849224],
[ 5.53362311, 1.17431479],
[ 6.37211535, 0.62940125],
[ 7.75552113, 0.92297994]])
In [23]:
In particular, PCA's transform applies the change of basis obtained from the PCA decomposition of the matrix X to the matrix Z.
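As a small sketch of the relationship (using sklearn's plain PCA here, since RandomizedPCA was later deprecated): fit_transform(X) is equivalent to calling fit(X) and then transform(X) on the same data.

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(6, 4)

a = PCA(n_components=2).fit_transform(X)   # learn the basis and project in one step

pca = PCA(n_components=2).fit(X)           # learn the basis...
b = pca.transform(X)                       # ...then project

print(np.allclose(a, b))                   # should print True, up to numerical precision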
|
Using Python 3 in virtualenv
|
Using virtualenv, I run my projects with the default version of Python (2.7). On one project, I need to use Python 3.4.
I used brew install python3 to install it on my Mac. Now, how do I create a virtualenv that uses the new version?
e.g. sudo virtualenv envPython3
If I try:
virtualenv -p python3 test
I get:
Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python3/3.4.0_1/Frameworks/Python.framework/Versions/3.4'
New python executable in test/bin/python3.4
Also creating executable in test/bin/python
Failed to import the site module
Traceback (most recent call last):
File "/Users/user/Documents/workspace/test/test/bin/../lib/python3.4/site.py", line 67, in <module>
import os
File "/Users/user/Documents/workspace/test/test/bin/../lib/python3.4/os.py", line 634, in <module>
from _collections_abc import MutableMapping
ImportError: No module named '_collections_abc'
ERROR: The executable test/bin/python3.4 is not functioning
ERROR: It thinks sys.prefix is '/Users/user/Documents/workspace/test' (should be '/Users/user/Documents/workspace/test/test')
ERROR: virtualenv is not compatible with this system or executable
|
Simply run:
virtualenv -p python3 envname
Update after OP's edit:
There was a bug in the OP's version of virtualenv, as described here. The problem was fixed by running:
pip install --upgrade virtualenv
|
What is the difference between a stack and a frame?
|
Under what situations would I want to use one over the other?
What is the difference between:
>>> import inspect
>>> print(inspect.getouterframes(inspect.currentframe()))
[(<frame object at 0x8fc262c>, '<stdin>', 1, '<module>', None, None)]
And:
>>> import traceback
>>> traceback.extract_stack()
[('<stdin>', 1, '<module>', None)]
Update:
Another:
>>> import sys
>>> print(sys._getframe().f_trace,sys._getframe().f_code)
(None, <code object <module> at 0x8682a88, file "<stdin>", line 1>)
I do not understand the nuances here:
Stack Frame
Frame Object
Stack Trace
|
Alright, since this appears to be more about what stack frames/call stacks are in general, let's go through this:
def f():
    try:
        g()
    except:
        # WE WILL DO THINGS HERE
        pass

def g():
    h()

def h():
    raise Exception('stuff')

# CALL
f()
When we're in h(), there are 4 frames on the call stack.
[top level]
[f()]
[g()]
[h()] #<-- we're here
(if we tried to put more than sys.getrecursionlimit() frames on the stack, we would get a RuntimeError, which is python's version of StackOverflow ;-))
"Outer" refers to everything above us (literally: the direction "up") in the call stack. So in order, g, then f, then the top (module) level. Likewise, "inner" refers to everything downwards in the call stack. If we catch an exception in f(), that traceback object will have references to all of the inner stack frames that were unwound to get us to that point.
def f():
    try:
        g()
    except:
        import inspect
        import sys
        # the third (last) item in sys.exc_info() is the current traceback object
        return inspect.getinnerframes(sys.exc_info()[-1])
This gives:
[(<frame object at 0xaad758>, 'test.py', 3, 'f', [' g()\n'], 0),
(<frame object at 0x7f5edeb23648>, 'test.py', 10, 'g', [' h()\n'], 0),
(<frame object at 0x7f5edeabdc50>, 'test.py', 13, 'h', [" raise Exception('stuff')\n"], 0)]
As expected, the three inner frames f, g, and h. Now, we can take that last frame object (the one from h()) and ask for its outer frames:
[(<frame object at 0x7f6e996e6a48>, 'test.py', 13, 'h', [" raise Exception('stuff')\n"], 0),
(<frame object at 0x1bf58b8>, 'test.py', 10, 'g', [' h()\n'], 0),
(<frame object at 0x7f6e99620240>, 'test.py', 7, 'f', [' return inspect.getinnerframes(sys.exc_info()[-1])\n'], 0),
(<frame object at 0x7f6e99725438>, 'test.py', 23, '<module>', ['print(inspect.getouterframes(f()[-1][0]))\n'], 0)]
So, there you go, that's all that's going on: we're simply navigating the call stack. For comparison, here's what traceback.extract_stack(f()[-1][0]) gives:
[('test.py', 23, '<module>', 'print(traceback.extract_stack(f()[-1][0]))'),
('test.py', 7, 'f', 'return inspect.getinnerframes(sys.exc_info()[-1])'),
('test.py', 10, 'g', 'h()'),
('test.py', 13, 'h', "raise Exception('stuff')")]
Notice the inverted order here compared to getouterframes, and the reduced output. In fact, if you squint your eyes, this basically looks like a regular traceback (and hey, it is, with just a little bit more formatting).
Summing up: both inspect.getouterframes and traceback.extract_stack contain all the information to reproduce what you generally see in your everyday traceback; extract_stack just removes the references to the stack frames, since it is very common to no longer need them once you get to the point of formatting your stack trace from-a-given-frame-outwards.
|
Python Pandas: How to read only first n rows of CSV files in?
|
I have a very large data set and I can't afford to read the entire data set in. So, I'm thinking of reading only one chunk of it to train but I have no idea how to do it. Any thought will be appreciated.
|
If you only want to read the first 999,999 (non-header) rows:
read_csv(..., nrows=999999)
If you only want to read rows 1,000,000 ... 1,999,999
read_csv(..., skiprows=1000000, nrows=999999)
nrows : int, default None Number of rows of file to read. Useful for
reading pieces of large files*
skiprows : list-like or integer
Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file
and for large files, you'll probably also want to use chunksize:
chunksize : int, default None
Return TextFileReader object for iteration
pandas.io.parsers.read_csv documentation
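If you go the chunksize route, a minimal sketch of grabbing only the first chunk for training (the file name and chunk size are placeholders):

import pandas as pd

reader = pd.read_csv('train.csv', chunksize=100000)   # TextFileReader; nothing is loaded yet
first_chunk = next(iter(reader))                      # DataFrame with the first 100,000 rows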
|
Django finds tests but fail to import them
|
I am getting weird errors, where a call to ./manage.py test will find my tests but complain that they cannot be imported.
Versions
Python 3.4
Django 1.7b4
My file structure
looks like this (just the relevant bits):
inkasso
├── db.sqlite3
├── functional_tests
│   ├── base.py
│   ├── base.pyc
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── __pycache__
│   ├── test_login.py
│   └── test_login.pyc
├── __init__.py
├── inkasso
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── migrations
│   ├── models.py
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── static
│   └── ...
├── templates
│   └── ...
└── web
    ├── admins.py
    ├── tests
    │   ├── __init__.py
    │   ├── test_forms.py
    │   ├── test_models.py
    │   └── test_views.py
    ├── urls.py
    └── views.py
The stack-trace
So when I run ./manage.py test I get the following stak-trace:
$ ./manage.py test
Creating test database for alias 'default'...
EEEE
======================================================================
ERROR: inkasso.functional_tests.test_login (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/case.py", line 57, in testPartExecutor
yield
File "/usr/lib/python3.4/unittest/case.py", line 574, in run
testMethod()
File "/usr/lib/python3.4/unittest/loader.py", line 32, in testFailure
raise exception
ImportError: Failed to import test module: inkasso.functional_tests.test_login
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/loader.py", line 312, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
__import__(name)
ImportError: No module named 'inkasso.functional_tests'
======================================================================
ERROR: inkasso.web.tests.test_forms (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/case.py", line 57, in testPartExecutor
yield
File "/usr/lib/python3.4/unittest/case.py", line 574, in run
testMethod()
File "/usr/lib/python3.4/unittest/loader.py", line 32, in testFailure
raise exception
ImportError: Failed to import test module: inkasso.web.tests.test_forms
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/loader.py", line 312, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
__import__(name)
ImportError: No module named 'inkasso.web'
======================================================================
ERROR: inkasso.web.tests.test_models (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/case.py", line 57, in testPartExecutor
yield
File "/usr/lib/python3.4/unittest/case.py", line 574, in run
testMethod()
File "/usr/lib/python3.4/unittest/loader.py", line 32, in testFailure
raise exception
ImportError: Failed to import test module: inkasso.web.tests.test_models
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/loader.py", line 312, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
__import__(name)
ImportError: No module named 'inkasso.web'
======================================================================
ERROR: inkasso.web.tests.test_views (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/case.py", line 57, in testPartExecutor
yield
File "/usr/lib/python3.4/unittest/case.py", line 574, in run
testMethod()
File "/usr/lib/python3.4/unittest/loader.py", line 32, in testFailure
raise exception
ImportError: Failed to import test module: inkasso.web.tests.test_views
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/loader.py", line 312, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
__import__(name)
ImportError: No module named 'inkasso.web'
----------------------------------------------------------------------
Ran 4 tests in 0.001s
FAILED (errors=4)
Destroying test database for alias 'default'...
So the test runner finds my tests, but for some reason, they are not imported. I have no idea what it is going on. The stack-trace is not very helpful to me :(
Since the root folder is called inkasso and it has a module of the same name, I tried putting print(os.getcwd()) and print(sys.path) into manage.py, and all it showed was that the CWD and the path were both set to point at the root folder, so it should be all good, no? The app itself runs as expected. It is only the tests that are not working.
For giggles, I tried creating an empty module 'web' inside inkasso.inkasso, with the result that instead of complaining that inkasso.web doesn't exist, it now complains that inkasso.web.tests does not exist. So this shows that instead of looking in the root 'inkasso' folder, it looks in 'inkasso.inkasso'. So that is the issue. How can I fix it?
|
Yeah... there's a problem with running ./manage.py, in that it adds the current directory to the PYTHONPATH.
This problem happens when you put an __init__.py inside the root folder.
One solution would be, in this case, to never use manage.py, but only django-admin.py <commands> --settings=inkasso.inkasso.settings - of course, this assumes either that when running this command, you're one level up, outside your root folder inkasso, or you have your main package installed in site-packages.
For example, if the complete path to your settings.py file is /home/user/projects/inkasso/inkasso/settings.py, you need to be in /home/user/projects when running this command.
If however you've got your package installed inside your site-packages, the above restriction changes: you can be anywhere except /home/user/projects/inkasso or any of its subfolders or so on.
Another solution is to edit your manage.py file to add this line:
if __name__ == '__main__':  # line already present
    # this will make the python interpreter see your packages as inkasso.inkasso.whatever
    os.chdir('..')  # <<<--- This is what you want to add
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mvod.dev_settings")
    ....
|
Why is it slower to iterate over a small string than a small list?
|
I was playing around with timeit and noticed that doing a simple list comprehension over a small string took longer than doing the same operation on a list of small single character strings. Any explanation? It's almost 1.35 times as much time.
>>> from timeit import timeit
>>> timeit("[x for x in 'abc']")
2.0691067844831528
>>> timeit("[x for x in ['a', 'b', 'c']]")
1.5286479570345861
What's happening on a lower level that's causing this?
|
TL;DR
The actual speed difference is closer to 70% (or more) once a lot of the overhead is removed, for Python 2.
Object creation is not at fault. Neither method creates a new object, as one-character strings are cached.
The difference is unobvious, but is likely created from a greater number of checks on string indexing, with regards to the type and well-formedness. It is also quite likely thanks to the need to check what to return.
List indexing is remarkably fast.
>>> python3 -m timeit '[x for x in "abc"]'
1000000 loops, best of 3: 0.388 usec per loop
>>> python3 -m timeit '[x for x in ["a", "b", "c"]]'
1000000 loops, best of 3: 0.436 usec per loop
This disagrees with what you've found...
You must be using Python 2, then.
>>> python2 -m timeit '[x for x in "abc"]'
1000000 loops, best of 3: 0.309 usec per loop
>>> python2 -m timeit '[x for x in ["a", "b", "c"]]'
1000000 loops, best of 3: 0.212 usec per loop
Let's explain the difference between the versions. I'll examine the compiled code.
For Python 3:
import dis
def list_iterate():
[item for item in ["a", "b", "c"]]
dis.dis(list_iterate)
#>>> 4 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d06b118a0, file "", line 4>)
#>>> 3 LOAD_CONST 2 ('list_iterate.<locals>.<listcomp>')
#>>> 6 MAKE_FUNCTION 0
#>>> 9 LOAD_CONST 3 ('a')
#>>> 12 LOAD_CONST 4 ('b')
#>>> 15 LOAD_CONST 5 ('c')
#>>> 18 BUILD_LIST 3
#>>> 21 GET_ITER
#>>> 22 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
#>>> 25 POP_TOP
#>>> 26 LOAD_CONST 0 (None)
#>>> 29 RETURN_VALUE
def string_iterate():
[item for item in "abc"]
dis.dis(string_iterate)
#>>> 21 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d06b17150, file "", line 21>)
#>>> 3 LOAD_CONST 2 ('string_iterate.<locals>.<listcomp>')
#>>> 6 MAKE_FUNCTION 0
#>>> 9 LOAD_CONST 3 ('abc')
#>>> 12 GET_ITER
#>>> 13 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
#>>> 16 POP_TOP
#>>> 17 LOAD_CONST 0 (None)
#>>> 20 RETURN_VALUE
You see here that the list variant is likely to be slower due to the building of the list each time.
This is the
9 LOAD_CONST 3 ('a')
12 LOAD_CONST 4 ('b')
15 LOAD_CONST 5 ('c')
18 BUILD_LIST 3
part. The string variant only has
9 LOAD_CONST 3 ('abc')
You can check that this does seem to make a difference:
def string_iterate():
[item for item in ("a", "b", "c")]
dis.dis(string_iterate)
#>>> 35 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d068be660, file "", line 35>)
#>>> 3 LOAD_CONST 2 ('string_iterate.<locals>.<listcomp>')
#>>> 6 MAKE_FUNCTION 0
#>>> 9 LOAD_CONST 6 (('a', 'b', 'c'))
#>>> 12 GET_ITER
#>>> 13 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
#>>> 16 POP_TOP
#>>> 17 LOAD_CONST 0 (None)
#>>> 20 RETURN_VALUE
This produces just
9 LOAD_CONST 6 (('a', 'b', 'c'))
as tuples are immutable. Test:
>>> python3 -m timeit '[x for x in ("a", "b", "c")]'
1000000 loops, best of 3: 0.369 usec per loop
Great, back up to speed.
For Python 2:
def list_iterate():
[item for item in ["a", "b", "c"]]
dis.dis(list_iterate)
#>>> 2 0 BUILD_LIST 0
#>>> 3 LOAD_CONST 1 ('a')
#>>> 6 LOAD_CONST 2 ('b')
#>>> 9 LOAD_CONST 3 ('c')
#>>> 12 BUILD_LIST 3
#>>> 15 GET_ITER
#>>> >> 16 FOR_ITER 12 (to 31)
#>>> 19 STORE_FAST 0 (item)
#>>> 22 LOAD_FAST 0 (item)
#>>> 25 LIST_APPEND 2
#>>> 28 JUMP_ABSOLUTE 16
#>>> >> 31 POP_TOP
#>>> 32 LOAD_CONST 0 (None)
#>>> 35 RETURN_VALUE
def string_iterate():
[item for item in "abc"]
dis.dis(string_iterate)
#>>> 2 0 BUILD_LIST 0
#>>> 3 LOAD_CONST 1 ('abc')
#>>> 6 GET_ITER
#>>> >> 7 FOR_ITER 12 (to 22)
#>>> 10 STORE_FAST 0 (item)
#>>> 13 LOAD_FAST 0 (item)
#>>> 16 LIST_APPEND 2
#>>> 19 JUMP_ABSOLUTE 7
#>>> >> 22 POP_TOP
#>>> 23 LOAD_CONST 0 (None)
#>>> 26 RETURN_VALUE
The odd thing is that we have the same building of the list, but it's still faster for this. Python 2 is acting strangely fast.
Let's remove the comprehensions and re-time. The _ = is to prevent it getting optimised out.
>>> python3 -m timeit '_ = ["a", "b", "c"]'
10000000 loops, best of 3: 0.0707 usec per loop
>>> python3 -m timeit '_ = "abc"'
100000000 loops, best of 3: 0.0171 usec per loop
We can see that initialization is not significant enough to account for the difference between the versions (those numbers are small)! We can thus conclude that Python 3 has slower comprehensions. This makes sense as Python 3 changed comprehensions to have safer scoping.
Well, now improve the benchmark (I'm just removing overhead that isn't iteration). This removes the building of the iterable by pre-assigning it:
>>> python3 -m timeit -s 'iterable = "abc"' '[x for x in iterable]'
1000000 loops, best of 3: 0.387 usec per loop
>>> python3 -m timeit -s 'iterable = ["a", "b", "c"]' '[x for x in iterable]'
1000000 loops, best of 3: 0.368 usec per loop
>>> python2 -m timeit -s 'iterable = "abc"' '[x for x in iterable]'
1000000 loops, best of 3: 0.309 usec per loop
>>> python2 -m timeit -s 'iterable = ["a", "b", "c"]' '[x for x in iterable]'
10000000 loops, best of 3: 0.164 usec per loop
We can check if calling iter is the overhead:
>>> python3 -m timeit -s 'iterable = "abc"' 'iter(iterable)'
10000000 loops, best of 3: 0.099 usec per loop
>>> python3 -m timeit -s 'iterable = ["a", "b", "c"]' 'iter(iterable)'
10000000 loops, best of 3: 0.1 usec per loop
>>> python2 -m timeit -s 'iterable = "abc"' 'iter(iterable)'
10000000 loops, best of 3: 0.0913 usec per loop
>>> python2 -m timeit -s 'iterable = ["a", "b", "c"]' 'iter(iterable)'
10000000 loops, best of 3: 0.0854 usec per loop
No. No it is not. The difference is too small, especially for Python 3.
So let's remove yet more unwanted overhead... by making the whole thing slower! The aim is just to have a longer iteration so the time hides overhead.
>>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' '[x for x in iterable]'
100 loops, best of 3: 3.12 msec per loop
>>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' '[x for x in iterable]'
100 loops, best of 3: 2.77 msec per loop
>>> python2 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' '[x for x in iterable]'
100 loops, best of 3: 2.32 msec per loop
>>> python2 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' '[x for x in iterable]'
100 loops, best of 3: 2.09 msec per loop
This hasn't actually changed much, but it's helped a little.
So remove the comprehension. It's overhead that's not part of the question:
>>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'for x in iterable: pass'
1000 loops, best of 3: 1.71 msec per loop
>>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'for x in iterable: pass'
1000 loops, best of 3: 1.36 msec per loop
>>> python2 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'for x in iterable: pass'
1000 loops, best of 3: 1.27 msec per loop
>>> python2 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'for x in iterable: pass'
1000 loops, best of 3: 935 usec per loop
That's more like it! We can get slightly faster still by using deque to iterate. It's basically the same, but it's faster:
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 777 usec per loop
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 405 usec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 805 usec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 438 usec per loop
What impresses me is that Unicode is competitive with bytestrings. We can check this explicitly by trying bytes and unicode in both:
bytes
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = b"".join(chr(random.randint(0, 127)).encode("ascii") for _ in range(100000))' 'deque(iterable, maxlen=0)' :(
1000 loops, best of 3: 571 usec per loop
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)).encode("ascii") for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 394 usec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = b"".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 757 usec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 438 usec per loop
Here you see Python 3 actually faster than Python 2.
unicode
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = u"".join( chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 800 usec per loop
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = [ chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 394 usec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = u"".join(unichr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 1.07 msec per loop
>>> python2 -m timeit -s 'import random; from collections import deque; iterable = [unichr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 469 usec per loop
Again, Python 3 is faster, although this is to be expected (str has had a lot of attention in Python 3).
In fact, this unicode-bytes difference is very small, which is impressive.
So let's analyse this one case, seeing as it's fast and convenient for me:
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 777 usec per loop
>>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)'
1000 loops, best of 3: 405 usec per loop
We can actually rule out Tim Peters' 10-times-upvoted answer!
>>> foo = iterable[123]
>>> iterable[36] is foo
True
These are not new objects!
But this is worth mentioning: indexing costs. The difference will likely be in the indexing, so remove the iteration and just index:
>>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'iterable[123]'
10000000 loops, best of 3: 0.0397 usec per loop
>>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'iterable[123]'
10000000 loops, best of 3: 0.0374 usec per loop
The difference seems small, but at least half of the cost is overhead:
>>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'iterable; 123'
100000000 loops, best of 3: 0.0173 usec per loop
so the speed difference is sufficient to decide to blame it. I think.
So why is indexing a list so much faster?
Well, I'll come back to you on that, but my guess is that it's down to the check for interned strings (or cached characters if it's a separate mechanism). This will be less fast than optimal. But I'll go check the source (although I'm not comfortable in C...) :).
So here's the source:
static PyObject *
unicode_getitem(PyObject *self, Py_ssize_t index)
{
void *data;
enum PyUnicode_Kind kind;
Py_UCS4 ch;
PyObject *res;
if (!PyUnicode_Check(self) || PyUnicode_READY(self) == -1) {
PyErr_BadArgument();
return NULL;
}
if (index < 0 || index >= PyUnicode_GET_LENGTH(self)) {
PyErr_SetString(PyExc_IndexError, "string index out of range");
return NULL;
}
kind = PyUnicode_KIND(self);
data = PyUnicode_DATA(self);
ch = PyUnicode_READ(kind, data, index);
if (ch < 256)
return get_latin1_char(ch);
res = PyUnicode_New(1, ch);
if (res == NULL)
return NULL;
kind = PyUnicode_KIND(res);
data = PyUnicode_DATA(res);
PyUnicode_WRITE(kind, data, 0, ch);
assert(_PyUnicode_CheckConsistency(res, 1));
return res;
}
Walking from the top, we'll have some checks. These are boring. Then some assigns, which should also be boring. The first interesting line is
ch = PyUnicode_READ(kind, data, index);
but we'd hope that is fast, as we're reading from a contiguous C array by indexing it. The result, ch, will be less than 256 so we'll return the cached character in get_latin1_char(ch).
So we'll run (dropping the first checks)
kind = PyUnicode_KIND(self);
data = PyUnicode_DATA(self);
ch = PyUnicode_READ(kind, data, index);
return get_latin1_char(ch);
Where
#define PyUnicode_KIND(op) \
(assert(PyUnicode_Check(op)), \
assert(PyUnicode_IS_READY(op)), \
((PyASCIIObject *)(op))->state.kind)
(which is boring because asserts get ignored in debug [so I can check that they're fast] and ((PyASCIIObject *)(op))->state.kind) is (I think) an indirection and a C-level cast);
#define PyUnicode_DATA(op) \
(assert(PyUnicode_Check(op)), \
PyUnicode_IS_COMPACT(op) ? _PyUnicode_COMPACT_DATA(op) : \
_PyUnicode_NONCOMPACT_DATA(op))
(which is also boring for similar reasons, assuming the macros (Something_CAPITALIZED) are all fast),
#define PyUnicode_READ(kind, data, index) \
((Py_UCS4) \
((kind) == PyUnicode_1BYTE_KIND ? \
((const Py_UCS1 *)(data))[(index)] : \
((kind) == PyUnicode_2BYTE_KIND ? \
((const Py_UCS2 *)(data))[(index)] : \
((const Py_UCS4 *)(data))[(index)] \
) \
))
(which involves indexes but really isn't slow at all) and
static PyObject*
get_latin1_char(unsigned char ch)
{
PyObject *unicode = unicode_latin1[ch];
if (!unicode) {
unicode = PyUnicode_New(1, ch);
if (!unicode)
return NULL;
PyUnicode_1BYTE_DATA(unicode)[0] = ch;
assert(_PyUnicode_CheckConsistency(unicode, 1));
unicode_latin1[ch] = unicode;
}
Py_INCREF(unicode);
return unicode;
}
Which confirms my suspicion that:
This is cached:
PyObject *unicode = unicode_latin1[ch];
This should be fast. The if (!unicode) is not run, so it's literally equivalent in this case to
PyObject *unicode = unicode_latin1[ch];
Py_INCREF(unicode);
return unicode;
Honestly, after testing the asserts are fast (by disabling them [I think it works on the C-level asserts...]), the only plausibly-slow parts are:
PyUnicode_IS_COMPACT(op)
_PyUnicode_COMPACT_DATA(op)
_PyUnicode_NONCOMPACT_DATA(op)
Which are:
#define PyUnicode_IS_COMPACT(op) \
(((PyASCIIObject*)(op))->state.compact)
(fast, as before),
#define _PyUnicode_COMPACT_DATA(op) \
(PyUnicode_IS_ASCII(op) ? \
((void*)((PyASCIIObject*)(op) + 1)) : \
((void*)((PyCompactUnicodeObject*)(op) + 1)))
(fast if the macro IS_ASCII is fast), and
#define _PyUnicode_NONCOMPACT_DATA(op) \
(assert(((PyUnicodeObject*)(op))->data.any), \
((((PyUnicodeObject *)(op))->data.any)))
(also fast as it's an assert plus an indirection plus a cast).
So we're down (the rabbit hole) to:
PyUnicode_IS_ASCII
which is
#define PyUnicode_IS_ASCII(op) \
(assert(PyUnicode_Check(op)), \
assert(PyUnicode_IS_READY(op)), \
((PyASCIIObject*)op)->state.ascii)
Hmm... that seems fast too...
Well, OK, but let's compare it to PyList_GetItem. (Yeah, thanks Tim Peters for giving me more work to do :P.)
PyObject *
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
if (!PyList_Check(op)) {
PyErr_BadInternalCall();
return NULL;
}
if (i < 0 || i >= Py_SIZE(op)) {
if (indexerr == NULL) {
indexerr = PyUnicode_FromString(
"list index out of range");
if (indexerr == NULL)
return NULL;
}
PyErr_SetObject(PyExc_IndexError, indexerr);
return NULL;
}
return ((PyListObject *)op) -> ob_item[i];
}
We can see that on non-error cases this is just going to run:
PyList_Check(op)
Py_SIZE(op)
((PyListObject *)op) -> ob_item[i]
Where PyList_Check is
#define PyList_Check(op) \
PyType_FastSubclass(Py_TYPE(op), Py_TPFLAGS_LIST_SUBCLASS)
(TABS! TABS!!!) (issue21587) That got fixed and merged in 5 minutes. Like... yeah. Damn. They put Skeet to shame.
#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size)
#define PyType_FastSubclass(t,f) PyType_HasFeature(t,f)
#ifdef Py_LIMITED_API
#define PyType_HasFeature(t,f) ((PyType_GetFlags(t) & (f)) != 0)
#else
#define PyType_HasFeature(t,f) (((t)->tp_flags & (f)) != 0)
#endif
So this is normally really trivial (two indirections and a couple of boolean checks) unless Py_LIMITED_API is on, in which case... ???
Then there's the indexing and a cast (((PyListObject *)op) -> ob_item[i]) and we're done.
So there are definitely fewer checks for lists, and the small speed differences certainly imply that it could be relevant.
I think in general, there's just more type-checking and indirection (->) for Unicode. It seems I'm missing a point, but what?
|
filter items in a python dictionary where keys contain a specific string
|
I'm a C coder developing something in python. I know how to do the following in C (and hence in C-like logic applied to python), but I'm wondering what the 'Python' way of doing it is.
I have a dictionary d, and I'd like to operate on a subset of the items, only those who's key (string) contains a specific substring.
i.e. the C logic would be:
for key in d:
    if filter_string in key:
        # do something
    else:
        # do nothing, continue
I'm imagining the python version would be something like
filtered_dict = crazy_python_syntax(d, substring)
for key,value in filtered_dict.iteritems():
# do something
I've found a lot of posts on here regarding filtering dictionaries, but couldn't find one which involved exactly this.
My dictionary is not nested and i'm using python 2.7
|
How about a dict comprehension:
filtered_dict = {k:v for k,v in d.iteritems() if filter_string in k}
Once you see it, it should be self-explanatory, as it reads like English pretty well.
This syntax requires Python 2.7 or greater.
In Python 3, there is only dict.items(), not iteritems() so you would use:
filtered_dict = {k:v for (k,v) in d.items() if filter_string in k}
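For illustration, here is a quick run with a made-up dictionary (the key order of the result may differ, since dicts are unordered here):
>>> d = {'apple_pie': 3, 'apple_cake': 1, 'banana_bread': 5}
>>> {k: v for k, v in d.items() if 'apple' in k}
{'apple_cake': 1, 'apple_pie': 3}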
|
ImportError: No module named win32com.client
|
I am currently using Python 2.7 and trying to open an Excel sheet.
When using the code below:
import os
from win32com.client import Dispatch
xlApp = win32com.client.Dispatch("Excel.Application")
xlApp.Visible = True
# Open the file we want in Excel
workbook = xlApp.Workbooks.Open('example.xls')
I get this error:
ImportError: No module named win32com.client
Is there any possibility that I am getting the error because I am using a 64-bit Windows machine?
Please help me with this.
|
pip install pywin32 didn't work for me but pypiwin32 did.
|
Filter list of strings, ignoring substrings of other items
|
How can I filter a list containing strings and substrings so that it returns only the longest strings? (If any item in the list is a substring of another, return only the longer string.)
I have this function. Is there a faster way?
def filterSublist(lst):
uniq = lst
for elem in lst:
uniq = [x for x in uniq if (x == elem) or (x not in elem)]
return uniq
lst = ["a", "abc", "b", "d", "xy", "xyz"]
print filterSublist(lst)
> ['abc', 'd', 'xyz']
> Function time: 0.000011
|
A simple quadratic time solution would be this:
res = []
n = len(lst)
for i in xrange(n):
if not any(i != j and lst[i] in lst[j] for j in xrange(n)):
res.append(lst[i])
But we can do much better:
Let $ be a character that does not appear in any of your strings and has a lower value than all your actual characters.
Let S be the concatenation of all your strings, with $ in between. In your example, S = a$abc$b$d$xy$xyz.
You can build the suffix array of S in linear time. You can also use a much simpler O(n log^2 n) construction algorithm that I described in another answer.
Now for every string in lst, check if it occurs in the suffix array exactly once. You can do two binary searches to find the locations of the substring, they form a contiguous range in the suffix array. If the string occurs more than once, you remove it.
With LCP information precomputed, this can be done in linear time as well.
Example O(n log^2 n) implementation, adapted from my suffix array answer:
def findFirst(lo, hi, pred):
""" Find the first i in range(lo, hi) with pred(i) == True.
Requires pred to be a monotone. If there is no such i, return hi. """
while lo < hi:
mid = (lo + hi) // 2
if pred(mid): hi = mid;
else: lo = mid + 1
return lo
# uses the algorithm described in http://stackoverflow.com/a/21342145/916657
class SuffixArray(object):
def __init__(self, s):
""" build the suffix array of s in O(n log^2 n) where n = len(s). """
n = len(s)
log2 = 0
while (1<<log2) < n:
log2 += 1
rank = [[0]*n for _ in xrange(log2)]
for i in xrange(n):
rank[0][i] = s[i]
L = [0]*n
for step in xrange(1, log2):
length = 1 << step
for i in xrange(n):
L[i] = (rank[step - 1][i],
rank[step - 1][i + length // 2] if i + length // 2 < n else -1,
i)
L.sort()
for i in xrange(n):
rank[step][L[i][2]] = \
rank[step][L[i - 1][2]] if i > 0 and L[i][:2] == L[i-1][:2] else i
self.log2 = log2
self.rank = rank
self.sa = [l[2] for l in L]
self.s = s
self.rev = [0]*n
for i, j in enumerate(self.sa):
self.rev[j] = i
def lcp(self, x, y):
""" compute the longest common prefix of s[x:] and s[y:] in O(log n). """
n = len(self.s)
if x == y:
return n - x
ret = 0
for k in xrange(self.log2 - 1, -1, -1):
if x >= n or y >= n:
break
if self.rank[k][x] == self.rank[k][y]:
x += 1<<k
y += 1<<k
ret += 1<<k
return ret
def compareSubstrings(self, x, lx, y, ly):
""" compare substrings s[x:x+lx] and s[y:y+yl] in O(log n). """
l = min((self.lcp(x, y), lx, ly))
if l == lx == ly: return 0
if l == lx: return -1
if l == ly: return 1
return cmp(self.s[x + l], self.s[y + l])
def count(self, x, l):
""" count occurences of substring s[x:x+l] in O(log n). """
n = len(self.s)
cs = self.compareSubstrings
lo = findFirst(0, n, lambda i: cs(self.sa[i], min(l, n - self.sa[i]), x, l) >= 0)
hi = findFirst(0, n, lambda i: cs(self.sa[i], min(l, n - self.sa[i]), x, l) > 0)
return hi - lo
def debug(self):
""" print the suffix array for debugging purposes. """
for i, j in enumerate(self.sa):
print str(i).ljust(4), self.s[j:], self.lcp(self.sa[i], self.sa[i-1]) if i >0 else "n/a"
def filterSublist(lst):
splitter = "\x00"
s = splitter.join(lst) + splitter
sa = SuffixArray(s)
res = []
offset = 0
for x in lst:
if sa.count(offset, len(x)) == 1:
res.append(x)
offset += len(x) + 1
return res
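For reference, feeding it the sample list from the question (Python 2 print syntax, matching the rest of the answer) should reproduce the expected output:
lst = ["a", "abc", "b", "d", "xy", "xyz"]
print filterSublist(lst)
# ['abc', 'd', 'xyz']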
However, the interpretation overhead likely causes this to be slower than the O(n^2) approaches unless S is really large (in the order of 10^5 characters or more).
|
How to setup Django project in PyCharm
|
I'm new in this area, so I have a question. Recently I started working with Python and Django. I installed PyCharm Community Edition as my IDE, but I'm unable to create a Django project. I looked for some tutorials, and there is an option to select "project type", but in the latest version this option is missing. Can someone tell me how to do this?
|
You can simply go to Run -> Edit Configurations and create a new configuration with:
Script: path_to/manage.py
Script parameters: runserver
|
OSError - Errno 13 Permission denied
|
I am trying to upload an image through the admin page, but it keeps saying:
[Errno 13] Permission denied: '/path/to/my/site/media/userfolder/2014/05/26'
the folders userfolder/2014/05/26 are created dynamically while uploading.
In the traceback, I found that the error is occurring during this call:
In /usr/lib64/python2.6/os.py, line 157, while calling
mkdir(name, mode)
meaning it cannot create any folder, as it does not have the permission to do so.
The server runs openSUSE. In httpd.conf, I have this:
<Directory /path/to/my/site/media>
Order allow,deny
Allow from all
</Directory>
Do I have to chmod or chown something?
|
You need to change the directory's permissions so that the web server process can write to it.
To change ownership of the directory, use chown:
chown -R user-id:group-id /path/to/the/directory
To see which user owns the web server process (change httpd accordingly):
ps aux | grep httpd | grep -v grep
OR
ps -efl | grep httpd | grep -v grep
|
Using JSON Type with Flask-sqlalchemy & Postgresql
|
Background: I am building a Flask App and I have stored my data into a postgresql database and within a JSON column type.
Task: In my view functions, I would like to order a database query by {Key:Value} from JSON column
Accomplished: I have been successful in performing this query at the psql command-line by using the following command for example:
select * from target where cast(product->>'profit' as float) > 100 order by cast(product->>'salesrank' as integer) asc;
Problem: I can not replicate this query in my code (see code for Model below in Extra Info Section)
from app import app, db
from models import Target
data = Target.query.order_by(Target.product['salesrank'])
Error received - ProgrammingError: (ProgrammingError) could not identify an ordering operator for type json
LINE 2: FROM target ORDER BY target.product -> 'salesrank'
^
HINT: Use an explicit ordering operator or modify the query.
'SELECT target.id AS target_id, target.store AS target_store, target.product AS target_product, target.asin AS target_asin, target.date AS target_date \nFROM target ORDER BY target.product -> %(product_1)s \n LIMIT %(param_1)s' {'product_1': 'salesrank', 'param_1': 1}
Extra Info
My Target model was set up as such:
#models.py
from app import db
from sqlalchemy.dialects.postgresql import JSON
import datetime
class Target(db.Model):
__tablename__ = 'target'
id = db.Column(db.Integer)
store = db.Column(db.String())
product = db.Column(JSON)
asin = db.Column(db.String(), primary_key=True)
date = db.Column(db.DateTime, default=datetime.datetime.utcnow())
My App.py file where I define Flask and Sqlalchemy
from flask import Flask
import os
from flask.ext.sqlalchemy import SQLAlchemy
from flask_bootstrap import Bootstrap
app = Flask(__name__)
app.config.from_object(os.environ['APP_SETTINGS'])
db = SQLAlchemy(app)
Bootstrap(app)
import views
from app import app
from models import Result
if __name__ == '__main__':
app.run(host='192.168.1.5', port=5000, debug=True)
Thank you for any help you can provide!
|
Looking at the SQLAlchemy documentation for the JSON data type it appears that you should be able to use the .cast method:
from sqlalchemy.types import Integer
from app import app, db
from models import Target
data = Target.query.order_by(Target.product['salesrank'].cast(Integer))
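Building on that, the full psql query from the question could be expressed the same way - an untested sketch that assumes the same .cast approach also works inside filter():
from sqlalchemy.types import Float, Integer
data = (Target.query
        .filter(Target.product['profit'].cast(Float) > 100)
        .order_by(Target.product['salesrank'].cast(Integer).asc())
        .all())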
|
Argument Unpacking wastes Stack Frames
|
When a function is called by unpacking arguments, it seems to increase the recursion depth twice. I would like to know why this happens.
Normally:
depth = 0
def f():
global depth
depth += 1
f()
try:
f()
except RuntimeError:
print(depth)
#>>> 999
With an unpacking call:
depth = 0
def f():
global depth
depth += 1
f(*())
try:
f()
except RuntimeError:
print(depth)
#>>> 500
In theory both should reach about 1000:
import sys
sys.getrecursionlimit()
#>>> 1000
This happens on CPython 2.7 and CPython 3.3.
On PyPy 2.7 and PyPy 3.3 there is a difference, but it is much smaller (1480 vs 1395 and 1526 vs 1395).
As you can see from the disassembly, there is little difference between the two, other than the type of call (CALL_FUNCTION vs CALL_FUNCTION_VAR):
import dis
def f():
f()
dis.dis(f)
#>>> 34 0 LOAD_GLOBAL 0 (f)
#>>> 3 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
#>>> 6 POP_TOP
#>>> 7 LOAD_CONST 0 (None)
#>>> 10 RETURN_VALUE
def f():
f(*())
dis.dis(f)
#>>> 47 0 LOAD_GLOBAL 0 (f)
#>>> 3 BUILD_TUPLE 0
#>>> 6 CALL_FUNCTION_VAR 0 (0 positional, 0 keyword pair)
#>>> 9 POP_TOP
#>>> 10 LOAD_CONST 0 (None)
#>>> 13 RETURN_VALUE
|
The exception message actually offers you a hint. Compare the non-unpacking option:
>>> import sys
>>> sys.setrecursionlimit(4) # to get there faster
>>> def f(): f()
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded
with:
>>> def f(): f(*())
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded while calling a Python object
Note the addition of the while calling a Python object. This exception is specific to the PyObject_CallObject() function. You won't see this exception when you set an odd recursion limit:
>>> sys.setrecursionlimit(5)
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded
because that is the specific exception raised in the ceval.c frame evaluation code inside PyEval_EvalFrameEx():
/* push frame */
if (Py_EnterRecursiveCall(""))
return NULL;
Note the empty message there. This is a crucial difference.
For your 'regular' function (no variable arguments), what happens is that an optimized path is picked; a Python function that doesn't need tuple or keyword argument unpacking support is handled directly in the fast_function() function of the evaluation loop. A new frameobject with the Python bytecode object for the function is created, and run. This is one recursion check.
But for a function call with variable arguments (tuple or dictionary or both), the fast_function() call cannot be used. Instead, ext_do_call() (extended call) is used, which handles the argument unpacking, then uses PyObject_Call() to invoke the function. PyObject_Call() does a recursion limit check, and 'calls' the function object. The function object is invoked via the function_call() function, which calls PyEval_EvalCodeEx(), which calls PyEval_EvalFrameEx(), which makes the second recursion limit check.
TL;DR version
Python functions calling Python functions are optimised and bypass the PyObject_Call() C-API function, unless argument unpacking takes place. Both Python frame execution and PyObject_Call() make recursion limit tests, so bypassing PyObject_Call() avoids incrementing the recursion limit check per call.
More places with 'extra' recursion depth checks
You can grep the Python source code for Py_EnterRecursiveCall for other locations where recursion depth checks are made; various libraries, such as json and pickle use it to avoid parsing structures that are too deeply nested or recursive, for example. Other checks are placed in the list and tuple __repr__ implementations, rich comparisons (__gt__, __lt__, __eq__, etc.), handling the __call__ callable object hook and handling __str__ calls.
As such, you can hit the recursion limit much faster still:
>>> class C:
... def __str__(self):
... global depth
... depth += 1
... return self()
... def __call__(self):
... global depth
... depth += 1
... return str(self)
...
>>> depth = 0
>>> sys.setrecursionlimit(10)
>>> C()()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 9, in __call__
File "<stdin>", line 5, in __str__
RuntimeError: maximum recursion depth exceeded while calling a Python object
>>> depth
2
|
upload file to my dropbox from python script
|
I want to upload a file from my python script to my dropbox account automatically. I can't find any way to do this with just a user/pass. Everything I see in the Dropbox SDK is related to an app having user interaction. I just want to do something like this:
https://api-content.dropbox.com/1/files_put//?user=me&pass=blah
Please help, thanks!
|
Thanks to @smarx for the answer above! I just wanted to clarify for anyone else trying to do this.
Make sure you install the dropbox module first of course, pip install dropbox.
Create an app under your own dropbox account in the "App Console". (https://www.dropbox.com/developers/apps)
Just for the record I created my App with the following:
a. App Type as "Dropbox API APP".
b. Type of data access as "Files & Datastores"
c. Folder access as "My app needs access to files already on Dropbox". (ie: Permission Type as "Full Dropbox".)
Then click the "generate access token" button and cut/paste into the python example below in place of <auth_token>:
import dropbox
client = dropbox.client.DropboxClient(<auth_token>)
print 'linked account: ', client.account_info()
f = open('working-draft.txt', 'rb')
response = client.put_file('/magnum-opus.txt', f)
print 'uploaded: ', response
folder_metadata = client.metadata('/')
print 'metadata: ', folder_metadata
f, metadata = client.get_file_and_metadata('/magnum-opus.txt')
out = open('magnum-opus.txt', 'wb')
out.write(f.read())
out.close()
print metadata
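As a small stylistic variant, the upload part can use a context manager so the file handle is closed automatically (same client object and put_file call as above):
with open('working-draft.txt', 'rb') as f:
    response = client.put_file('/magnum-opus.txt', f)
print 'uploaded: ', response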
|
Celery parallel distributed task with multiprocessing
|
I have a CPU intensive Celery task. I would like to use all the processing power (cores) across lots of EC2 instances to get this job done faster (a celery parallel distributed task with multiprocessing - I think).
Threading, multiprocessing, distributed computing, and distributed parallel processing are all terms I'm trying to understand better.
Example task:
@app.task
def process_everything():  # hypothetical wrapper added so the snippet is valid Python
    for item in list_of_millions_of_ids:
        id = item  # do some long complicated equation here very CPU heavy!!!!!!!
        database.objects(newid=id).save()
Using the code above (with an example if possible), how would one go about distributing this task using Celery, so that this one task can be split up to utilise all the CPU power across all the available machines in the cloud?
|
Your goals are:
Distribute your work to many machines (distributed
computing/distributed parallel processing)
Distribute the work on a given machine across all CPUs
(multiprocessing/threading)
Celery can do both of these for you fairly easily. The first thing to understand is that each celery worker is configured by default to run as many tasks as there are CPU cores available on a system:
Concurrency is the number of prefork worker process used to process
your tasks concurrently, when all of these are busy doing work new
tasks will have to wait for one of the tasks to finish before it can
be processed.
The default concurrency number is the number of CPUs on that machine
(including cores), you can specify a custom number using -c option.
There is no recommended value, as the optimal number depends on a
number of factors, but if your tasks are mostly I/O-bound then you can
try to increase it, experimentation has shown that adding more than
twice the number of CPUs is rarely effective, and likely to degrade
performance instead.
This means each individual task doesn't need to worry about using multiprocessing/threading to make use of multiple CPUs/cores. Instead, celery will run enough tasks concurrently to use each available CPU.
With that out of the way, the next step is to create a task that handles processing some subset of your list_of_millions_of_ids. You have a couple of options here - one is to have each task handle a single ID, so you run N tasks, where N == len(list_of_millions_of_ids). This will guarantee that work is evenly distributed amongst all your tasks, since there will never be a case where one worker finishes early and is just waiting around; if it needs work, it can pull an id off the queue. You can do this (as mentioned by John Doe) using a celery group.
tasks.py:
@app.task
def process_id(item):
id = item #long complicated equation here
database.objects(newid=id).save()
And to execute the tasks:
from celery import group
from tasks import process_id
jobs = group(process_id.s(item) for item in list_of_millions_of_ids)
result = jobs.apply_async()
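If you need to block until every task in the group has finished and collect the return values (here just None, since process_id only writes to the database), the group result supports that - a short sketch, which assumes a result backend is configured:
values = result.get()  # blocks until all tasks in the group are done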
Another option is to break the list into smaller pieces, and distribute the pieces to your workers. This approach runs the risk of wasting some cycles, because you may end up with some workers waiting around while others are still doing work. However, the celery documentation notes that this concern is often unfounded:
Some may worry that chunking your tasks results in a degradation of
parallelism, but this is rarely true for a busy cluster and in
practice since you are avoiding the overhead of messaging it may
considerably increase performance.
So, you may find that chunking the list and distributing the chunks to each task performs better, because of the reduced messaging overhead. You can probably also lighten the load on the database a bit this way, by calculating each id, storing it in a list, and then adding the whole list into the DB once you're done, rather than doing it one id at a time. The chunking approach would look something like this
tasks.py:
@app.task
def process_ids(items):
for item in items:
id = item #long complicated equation here
database.objects(newid=id).save() # Still adding one id at a time, but you don't have to.
And to start the tasks:
from tasks import process_ids
jobs = process_ids.chunks(list_of_millions_of_ids, 30) # chunks of 30 ids per task. Experiment with what number works best here.
jobs.apply_async()
You can experiment a bit with what chunking size gives you the best result. You want to find a sweet spot where you're cutting down messaging overhead while also keeping the size small enough that you don't end up with workers finishing their chunk much faster than another worker, and then just waiting around with nothing to do.
|
How do I get the return value when using Python exec on the code object of a function?
|
For testing purposes I want to directly execute a function defined inside of another function.
I can get to the code object of the child function, through the code (func_code) of the parent function, but when I exec it, I get no return value.
Is there a way to get the return value from the exec'ed code?
|
Yes, you need to have the assignment within the exec statement:
>>> def foo():
... return 5
...
>>> exec("a = foo()")
>>> a
5
This probably isn't relevant for your case since it's being used in controlled testing, but be careful when using exec with user-defined input.
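If you'd rather not have exec assign into your real namespace, a variation on the same idea (a small sketch) is to pass an explicit dictionary and read the value back out of it:
>>> ns = {'foo': foo}
>>> exec("a = foo()", ns)
>>> ns['a']
5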
|
Django REST Framework and FileField absolute url
|
I've defined a simple Django app that includes the following model:
class Project(models.Model):
name = models.CharField(max_length=200)
thumbnail = models.FileField(upload_to='media', null=True)
(Technically yes, that could have been an ImageField.)
In a template, it's easy enough to include the MEDIA_URL value (duly coded in settings.py) as a prefix to the thumbnail URL. The following works fine:
<div id="thumbnail"><img src="{{ MEDIA_URL }}{{ current_project.thumbnail }}" alt="thumbnail" width="400" height="300" border="0" /></div>
Using DRF, I've defined a HyperlinkedModelSerializer descendant called ProjectSerializer:
class ProjectSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Project
fields = ( 'id' ,'url', 'name', 'thumbnail')
And I've defined a very straightforward ModelViewSet descendant:
class ProjectViewSet(viewsets.ModelViewSet):
queryset = Project.objects.all()
serializer_class = ProjectSerializer
A sample of the resulting JSON looks like this:
{
"id": 1,
"url": "http://localhost:8000/api/v1/projects/1/",
"name": "Institutional",
"thumbnail": "media/institutional_thumb_1.jpg"
}
I have not yet been able to figure out how to provide a thumbnail field that includes the full url to the image in my project's JSON representation.
I would think that I would need to create a custom field in the ProjectSerializer, but have not been successful.
|
Try SerializerMethodField
Example (untested):
class MySerializer(serializers.ModelSerializer):
thumbnail_url = serializers.SerializerMethodField('get_thumbnail_url')
def get_thumbnail_url(self, obj):
        return self.context['request'].build_absolute_uri(obj.thumbnail.url)  # use the model's file URL, not the serializer attribute
The request must be available to the serializer, so it can build the full absolute URL for you. One way is to explicitly pass it in when the serializer is created, similar to this:
serializer = MySerializer(account, context={'request': request})
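Note that when the serializer is instantiated by a DRF generic view or ViewSet (such as the ProjectViewSet above), the request is normally added to the serializer context for you, so the explicit context argument should only be needed when you construct the serializer yourself.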
|
Cx-Freeze Error - Python 34
|
I have a Cx_Freeze setup file that I am trying to make work. What is terribly frustrating is that it used to Freeze appropriately. Now, however, I get the following error:
Edit: the error that shows up is not a Python exception in the console, but a crash report when attempting to launch the resulting exe file generated by the freeze.
'File 'notetest.py', line 1, in
_find_and_load importlib_bootstrap.py, line 2214
....
AttributeError 'module' object has no attribute '_fix_up_module'
My setup.py file follows:
import sys
import os
from cx_Freeze import setup, Executable
build_exe_options = {'packages': [], 'excludes' : []}
base = 'Win32GUI'
exe = Executable(
script = 'notetest.py',
initScript = None,
base = 'Win32GUI',
targetName = 'MedicaidAid.exe',
compress = True,
appendScriptToExe = True,
appendScriptToLibrary = True,
icon = None
)
setup( name = 'MedicaidAid',
version = '0.85',
description = 'MedicaidAid Software',
options = {'build_exe': build_exe_options},
executables = [Executable('notetest.py', base = base)])
|
You should install cx_freeze from this site. It contains an important patch that solves the problem (see this discussion for details).
|
"Could not run curl-config: [Errno 2] No such file or directory" when installing pycurl
|
I'm trying to install pycurl via:
sudo pip install pycurl
It downloaded fine, but when when it runs setup.py I get the following traceback:
Downloading/unpacking pycurl
Running setup.py egg_info for package pycurl
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Any idea why this is happening and how to get around it?
|
On Debian I needed the following package to fix this.
sudo apt-get install libcurl4-openssl-dev
|
TypeError: method() takes 1 positional argument but 2 were given
|
If I have a class ...
class MyClass:
def method(arg):
print(arg)
... which I use to create an object ...
my_object = MyClass()
... on which I call method("foo") like so ...
>>> my_object.method("foo")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method() takes exactly 1 positional argument (2 given)
... why does Python tell me I gave it two arguments, when I only gave one?
|
In Python, this:
my_object.method("foo")
... is syntactic sugar, which the interpreter translates behind the scenes into:
MyClass.method(my_object, "foo")
... which, as you can see, does indeed have two arguments - it's just that the first one is implicit, from the point of view of the caller.
This is because most methods do some work with the object they're called on, so there needs to be some way for that object to be referred to inside the method. By convention, this first argument is called self inside the method definition:
class MyNewClass:
def method(self, arg):
print(self)
print(arg)
If you call method("foo") on an instance of MyNewClass, it works as expected:
>>> my_new_object = MyNewClass()
>>> my_new_object.method("foo")
<__main__.MyNewClass object at 0x29045d0>
foo
Occasionally (but not often), you really don't care about the object that your method is bound to, and in that circumstance, you can decorate the method with the builtin staticmethod() function to say so:
class MyOtherClass:
@staticmethod
def method(arg):
print(arg)
... in which case you don't need to add a self argument to the method definition, and it still works:
>>> my_other_object = MyOtherClass()
>>> my_other_object.method("foo")
foo
|
How to convert int to Enum in python?
|
Using the new Enum feature (via backport enum34) with python 2.7.6.
Given the following definition, how can I convert an int to the corresponding Enum value?
from enum import Enum
class Fruit(Enum):
Apple = 4
Orange = 5
Pear = 6
I know I can hand craft a series of if-statements to do the conversion but is there an easy pythonic way to convert? Basically, I'd like a function ConvertIntToFruit(int) that returns an enum value.
My use case is I have a csv file of records where I'm reading each record into an object. One of the file fields is an integer field that represents an enumeration. As I'm populating the object I'd like to convert that integer field from the file into the corresponding Enum value in the object.
|
'Call' the Enum class:
Fruit(5)
to turn 5 into Fruit.Orange:
>>> from enum import Enum
>>> class Fruit(Enum):
... Apple = 4
... Orange = 5
... Pear = 6
...
>>> Fruit(5)
<Fruit.Orange: 5>
From the Programmatic access to enumeration members and their attributes section of the documentation:
Sometimes it's useful to access members in enumerations
programmatically (i.e. situations where Color.red won't do because the
exact color is not known at program-writing time). Enum allows such
access:
>>> Color(1)
<Color.red: 1>
>>> Color(3)
<Color.blue: 3>
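For the CSV use case, remember the field is read in as a string, so convert it to int first; a value that isn't defined on the enum raises ValueError, which you may want to catch - a small sketch:
>>> Fruit(int('6'))
<Fruit.Pear: 6>
>>> Fruit(7)
Traceback (most recent call last):
  ...
ValueError: 7 is not a valid Fruit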
|
TypeError when converting dictionary to JSON array
|
How do I take a Python dictionary where the keys and values are strings and convert it into a JSON string?
This is what I have right now:
import json
def create_simple_meeting(subject, startDate, endDate, location, body):
info = dict()
if(subject != ""):
info["subject"] = subject
if(startDate != ""):
info["startDate"] = startDate
if(endDate != ""):
info["endDate"] = endDate
if(body != ""):
info["body"] = body
if(location != ""):
info["location"] = location
print(json.dumps(dict))
create_simple_meeting("This is the subject of our meeting.","2014-05-29 11:00:00","2014-05-29 12:00:00", "Boca Raton", "We should definitely meet up, man")
And it gives me this error
File "/Users/bens/Documents/workspace/Copy of ws1 for py java playing/opias/robot/libs/playing.py", line 15, in create_simple_meeting
print(json.dumps(dict))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <type 'dict'> is not JSON serializable
|
You are trying to serialise the type object, dict, instead of info. Dump the right variable:
print(json.dumps(info))
|
Slow access to Django's request.body
|
Sometimes this line of a Django app (hosted using Apache/mod_wsgi) takes a lot of time to execute (e.g. 99% of a 6-second request, as measured by New Relic) when the request is submitted by some mobile clients:
raw_body = request.body
(where request is an incoming request)
The questions I have:
What could have slowed down access to request.body so much?
What would be the correct configuration for Apache to wait before invoking Django until client sends whole payload? Maybe the problem is in Apache configuration.
Django's body attribute on HttpRequest is a property, so the question really comes down to what is actually being done there and whether it can be made to happen outside of the Django app. I want Apache to wait for the full request before handing it to the Django app.
|
Regarding (1), Apache passes control to the mod_wsgi handler as soon as the request's headers are available, and mod_wsgi then passes control on to Python. The internal implementation of request.body then calls the read() method which eventually calls the implementation within mod_wsgi, which requests the request's body from Apache and, if it hasn't been completely received by Apache yet, blocks until it is available.
Regarding (2), this is not possible with mod_wsgi alone. At least, the hook processing incoming requests doesn't provide a mechanism to block until the full request is available. Another poster suggested to use nginx as a proxy in a response to this duplicate question.
|
Plotting with seaborn using the matplotlib object-oriented interface
|
I strongly prefer using matplotlib in OOP style:
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(...)
axarr[1].plot(...)
This makes it easier to keep track of multiple figures and subplots.
Question: How to use seaborn this way? Or, how to change this example to OOP style? How to tell seaborn plotting functions like lmplot which Figure or Axes it plots to?
|
It depends a bit on which seaborn function you are using.
The plotting functions in seaborn are broadly divided into two classes
"Axes-level" functions, including regplot, boxplot, kdeplot, and many others
"Figure-level" functions, including lmplot, factorplot, jointplot and one or two others
The first group is identified by taking an explicit ax argument and returning an Axes object. As this suggests, you can use them in an "object oriented" style by passing your Axes to them:
f, (ax1, ax2) = plt.subplots(2)
sns.regplot(x, y, ax=ax1)
sns.kdeplot(x, ax=ax2)
Axes-level functions will only draw onto an Axes and won't otherwise mess with the figure, so they can coexist perfectly happily in an object-oriented matplotlib script.
The second group of functions (Figure-level) are distinguished by the fact that the resulting plot can potentially include several Axes which are always organized in a "meaningful" way. That means that the functions need to have total control over the figure, so it isn't possible to plot, say, an lmplot onto one that already exists. Calling the function always initializes a figure and sets it up for the specific plot it's drawing.
However, once you've called lmplot, it will return an object of the type FacetGrid. This object has some methods for operating on the resulting plot that know a bit about the structure of the plot. It also exposes the underlying figure and array of axes at the FacetGrid.fig and FacetGrid.axes attributes. The jointplot function is very similar, but it uses a JointGrid object. So you can still use these functions in an object-oriented context, but all of your customization has to come after you've called the function.
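So for the figure-level functions the pattern looks roughly like this - a sketch with a hypothetical tips DataFrame, where lmplot creates its own figure and you tweak it afterwards through the returned FacetGrid:
g = sns.lmplot(x="total_bill", y="tip", data=tips, col="smoker")
g.fig.set_size_inches(10, 5)
for ax in g.axes.flat:
    ax.set_xlabel("Total bill ($)")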
|
How do I revert to a previous package in Anaconda?
|
If I do
conda info pandas
I can see all of the packages available.
I updated my pandas to the latest this morning, but I need to revert to a prior version now. I tried
conda update pandas 0.13.1
but that didn't work. How do I specify which version to use?
|
I had to use the install function instead:
conda install pandas=0.13.1
|
Expressing IS NOT NULL without causing a PEP8 error
|
At a certain point in my project, I need to query a SQLAlchemy object for columns that are NOT NULL. In my code, I do:
session.query(MyModel).filter(MyModel.my_column != None).all()
...and it works great. Problem is, whenever I run pep8 or one of the other linters on the file, it raises an error E711: Comparison to None should be if cond is not None:. I agree with the spirit of this guideline, and as such I don't want to silence the warning just because of one little line.
Is there another way to write this? Preferably one where we don't have to dig into the func module?
|
PEP8 isn't meant to be followed to the letter.
You're recommended to use is None instead of == None because is cannot be overloaded (unlike ==):
>>> class Bad(object):
... def __eq__(self, other):
... return True
...
>>> instance = Bad()
>>> instance == None
True
>>> instance is None
False
In your case, you want to use the overloaded == operator, so the warning is incorrect. The SQLAlchemy docs give you an alternative notation if you don't want to disable warnings on that line:
session.query(MyModel).filter(MyModel.my_column.isnot(None)).all()
Although I'd just use Flake8 and disable warnings for that particular line:
session.query(MyModel).filter(MyModel.my_column != None).all() # noqa
|
Why isn't this a syntax error in python?
|
I noticed a line in our codebase today which I thought surely would have failed the build with a syntax error, but the tests were passing, so apparently it is actually valid Python (in both 2.x and 3).
Whitespace is sometimes not required in the conditional expression:
>>> 1if True else 0
1
It doesn't work if the LHS is a variable:
>>> x = 1
>>> xif True else 0
File "<stdin>", line 1
xif True else 0
^
SyntaxError: invalid syntax
But it does seem to still work with other types of literals:
>>> {'hello'}if False else 'potato'
'potato'
What's going on here, is it intentionally part of the grammar for some reason? Is this odd quirk a known/documented behaviour?
|
Whitespace between tokens
Except at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens. Whitespace is needed between two tokens only if their concatenation could otherwise be interpreted as a different token (e.g., ab is one token, but a b is two tokens).
So in this case, 1if is not a valid token, so the whitespace is optional. The 1 is interpreted as an integer literal of which the if is not a part. So if is interpreted separately and recognized as a keyword.
In xif however, an identifier is recognized, so Python is not able to see that you wanted to do x if there.
|
Function definition in Python takes a lot of time
|
Why is python trying to calculate the value of p during definition? It takes ages to define this function.
def f():
raise Exception('Some error')
p = 2322111239**42322222334923492304923
print 'Defined!'
Also if the value of p is being calculated during definition, why is it possible to define this function without errors?
def f():
return 4
p = 11/0
This one obviously works fine because constants are not involved:
def f():
raise Exception('Some error')
x=42322222334923492304923
p = 2322111239**x
print 'Defined!'
|
It is the peephole optimizer:
http://hg.python.org/cpython/file/eabff2a97b5b/Python/peephole.c#l88
See in particular lines 104-106
case BINARY_POWER:
newconst = PyNumber_Power(v, w, Py_None);
break;
The intention is to speed up runtime execution of the function at the cost of slower definition (read: import) time. It makes sense because you only need to compile the code for the function once, but you may need to call it many times.
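You can watch the folding happen with dis: the exponentiation has already been reduced to a constant by the time the code object exists (output from CPython 2.7; the exact constant index may vary):
>>> import dis
>>> dis.dis(compile("2**10", "<string>", "eval"))
  1           0 LOAD_CONST               2 (1024)
              3 RETURN_VALUE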
I believe the optimizer was written by Raymond Hettinger, who is quite active on SO, maybe he can confirm my claims.
|
django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment varia
|
I've tried everything that I could find to fix this issue, and I'm starting to tear my hair out a little.
I'm getting this error:
django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
My scripts run fine when I do:
python3 ./manage.py runserver
However, whenever I try to run tests, I get the above error...
I use a virtualenv that inherits nothing globally; everything is installed (with the correct versions), and within my manage.py I have set:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<appname>.settings")
I'm using PyCharm Professional to develop, and I've tried running the tests both in the IDE and in the shell.
Within the shell I'm using :
python3 manage.py test
The shell is finding no tests. The test is basic, and I'm not really all that bothered about the content of it currently as it's the environment I'm struggling with. UPDATE: I have solved the issue with the shell. Tests must be defined by:
def test_<name>():
However this hasn't solved my issue with PyCharm.
I have also called:
settings.configure()
Which told me that it was already configured.
Please note that I am not using any database with Django, and I have commented the appropriate things out of the settings.
The full error is:
Traceback (most recent call last):
File "/root/kiloenv/lib/python3.4/site-packages/django/conf/__init__.py", line 38, in _setup
settings_module = os.environ[ENVIRONMENT_VARIABLE]
File "/usr/lib/python3.4/os.py", line 631, in __getitem__
raise KeyError(key) from None
KeyError: 'DJANGO_SETTINGS_MODULE'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/<username>/Documents/<dirname>/<appname>/tests.py", line 1, in <module>
from django.test.utils import setup_test_environment
File "/root/virtualenv/lib/python3.4/site-packages/django/test/__init__.py", line 5, in <module>
from django.test.client import Client, RequestFactory
File "/root/virtualenv/lib/python3.4/site-packages/django/test/client.py", line 11, in <module>
from django.contrib.auth import authenticate, login, logout, get_user_model
File "/root/virtualenv/lib/python3.4/site-packages/django/contrib/auth/__init__.py", line 6, in <module>
from django.middleware.csrf import rotate_token
File "/root/virtualenv/lib/python3.4/site-packages/django/middleware/csrf.py", line 14, in <module>
from django.utils.cache import patch_vary_headers
File "/root/virtualenv/lib/python3.4/site-packages/django/utils/cache.py", line 26, in <module>
from django.core.cache import get_cache
File "/root/virtualenv/lib/python3.4/site-packages/django/core/cache/__init__.py", line 69, in <module>
if DEFAULT_CACHE_ALIAS not in settings.CACHES:
File "/root/virtualenv/lib/python3.4/site-packages/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/root/virtualenv/lib/python3.4/site-packages/django/conf/__init__.py", line 47, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
I have ran PyCharm under sudo to ensure that the issue wasn't permissions as I'm storing the env under root.
EDIT: I've just discovered that my tests which do not use Django are running fine, but PyCharm is still throwing several failures. These failures aren't the individual tests, they're just the error which I have mentioned here (there are 341 tests that aren't Django related). I only have one test which uses Django, and it will not get past initialization before throwing the mentioned error.
Hope I've been explanatory
|
Use this
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
instead of
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<appname>.settings")
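If you run test code directly (outside manage.py test), the same idea applies, but the variable has to be set before anything from django.test is imported - a minimal sketch, assuming your settings module really is <appname>.settings:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = '<appname>.settings'  # must come before the django.test import

from django.test.utils import setup_test_environment
setup_test_environment()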
|
Is it good practice to use `import __main__`?
|
I'm working on a relatively large Python application, and there are several resources that I would like to keep as global variables accessible throughout several different modules. These values are things like the version number, version date, the global configuration, and some static paths to resources. I've also included a DEBUG flag that gets set by a command line option so that I can run my application in a debug mode without needing the full environment.
The values I'm importing I've been careful to ensure are ones that do not change over the course of running the program, and I've documented them as global constant variables that should not be touched. My code looks essentially like
# Main.py
import wx
from gui import Gui
DEBUG = False
GLOBAL_CONFIG = None
VERSION = '1.0'
ICON_PATH = 'some/path/to/the/app.ico'
def main():
global DEBUG, GLOBAL_CONFIG
# Simplified
import sys
DEBUG = '--debug' in sys.argv
GLOBAL_CONFIG = load_global_config()
# Other set-up for the application, e.g. setting up logging, configs, etc
app = wx.App()
gui = Gui()
app.MainLoop()
if __name__ == '__main__':
main()
# gui.py
import wx
from __main__ import DEBUG, GLOBAL_CONFIG, ICON_PATH
import controller
class Gui(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None)
icon = wx.Icon(ICON_PATH, wx.BITMAP_TYPE_ICO)
self.SetIcon(icon)
# Always make a copy so we don't accidentally modify it
conf = GLOBAL_CONFIG.copy()
self.controller = controller.Controller(conf)
# More setup, building the layout, etc
# controller.py
from __main__ import DEBUG
import logging
log = logging.getLogger('controller')
class Controller(object):
def __init__(self, conf):
if DEBUG:
log.info("Initializing controller in DEBUG mode")
self.conf = conf
# Other setup ...
This is obviously far stripped down from what my application actually is, and neglects error handling, documentation, and basically all implementation details.
Now, I've seen it said that this is a bad idea, but without explanation for why. Since most results when googling for variants of "python import __main__" are questions about what if __name__ == '__main__' means, it's hard to find solid information on this topic. So far I've had no problems with it, and it's actually been really convenient.
So is this considered good Python practice, or is there a reason I should avoid this design?
|
I think there are two main (ha ha) reasons one might prescribe an avoidance of this pattern.
It obfuscates the origin of the variables you're importing.
It breaks (or at least it's tough to maintain) if your program has multiple entry points. Imagine if someone, very possibly you, wanted to extract some subset of your functionality into a standalone library--they'd have to delete or redefine every one of those orphaned references to make the thing usable outside of your application.
If you have total control over the application and there will never be another entry point or another use for your features, and you're sure you don't mind the ambiguity, I don't think there's any objective reason why the from __main__ import foo pattern is bad. I don't like it personally, but again, it's basically for the two reasons above.
I think a more robust/developer-friendly solution may be something like this, creating a special module specifically for holding these super-global variables. You can then import the module and refer to module.VAR anytime you need the setting. Essentially, just creating a special module namespace in which to store super-global runtime configuration.
# conf.py (for example)
# This module holds all the "super-global" stuff.
def init(args):
global DEBUG
DEBUG = '--debug' in args
# set up other global vars here.
You would then use it more like this:
# main.py
import conf
import app
if __name__ == '__main__':
import sys
conf.init(sys.argv[1:])
app.run()
# app.py
import conf
def run():
if conf.DEBUG:
print('debug is on')
Note the use of conf.DEBUG rather than from conf import DEBUG. This construction means that you can alter the variable during the life of the program, and have that change reflected elsewhere (assuming a single thread/process, obviously).
Another upside is that this is a fairly common pattern, so other developers will readily recognize it. It's easily comparable to the settings.py file used by various popular apps (e.g. django), though I avoided that particular name because settings.py is conventionally a bunch of static objects, not a namespace for runtime parameters. Other good names for the configuration namespace module described above might be runtime or params, for example.
|
add column with constant value to pandas dataframe
|
I don't know why this puts NaN into 'new' column?
df['new'] = pd.Series([0 for x in range(len(df.index))])
EDIT:
df['new'] = 0
works :)
|
The reason this puts NaN into a column is because df.index and the Index of your right-hand-side object are different. @zach shows the proper way to assign a new column of zeros. In general, pandas tries to do as much alignment of indices as possible. One downside is that when indices are not aligned you get NaN wherever they aren't aligned. Play around with the reindex and align methods to gain some intuition for how alignment works with objects that have partially aligned, totally aligned, and completely unaligned indices. For example, here's how DataFrame.align() works with partially aligned indices:
In [9]: df = DataFrame({'a': randint(3, size=10)})
In [10]:
In [10]: df
Out[10]:
a
0 0
1 2
2 0
3 1
4 0
5 0
6 0
7 0
8 0
9 0
In [11]: s = df.a[:5]
In [12]: dfa, sa = df.align(s, axis=0)
In [13]: dfa
Out[13]:
a
0 0
1 2
2 0
3 1
4 0
5 0
6 0
7 0
8 0
9 0
In [14]: sa
Out[14]:
0 0
1 2
2 0
3 1
4 0
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
Name: a, dtype: float64
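For completeness, the assignment from the question also works if the right-hand side is constructed with a matching index, so there is nothing for alignment to drop (df['new'] = 0 remains the simpler spelling):
df['new'] = pd.Series(0, index=df.index)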
|
set multi index of an existing data frame in pandas
|
I have a dataframe that looks like this
Emp1 Empl2 date Company
0 0 0 2012-05-01 apple
1 0 1 2012-05-29 apple
2 0 1 2013-05-02 apple
3 0 1 2013-11-22 apple
18 1 0 2011-09-09 google
19 1 0 2012-02-02 google
20 1 0 2012-11-26 google
21 1 0 2013-05-11 google
I want to pass the company and date for setting the multiIndex for this dataframe. Currently it has the default index. I am using
df.set_index(['Company', 'date'], inplace=True)
Code
df = pd.DataFrame()
for c in company_list:
row = pd.DataFrame([dict(company = '%s' %s, date = datetime.date(2012, 05, 01))])
df = df.append(row, ignore_index = True)
for e in emp_list:
dataset = pd.read_sql("select company, emp_name, date(date), count(*) from company_table where = '"+s+"' and emp_name = '"+b+"' group by company, date, name LIMIT 5 ", con)
if len(dataset) == 0:
row = pd.DataFrame([dict(sitename='%s' %s, name = '%s' %b, date = datetime.date(2012, 05, 01), count = np.nan)])
dataset = dataset.append(row, ignore_index=True)
dataset = dataset.rename(columns = {'count': '%s' %b})
dataset = dataset.groupby(['company', 'date', 'emp_name'], as_index = False).sum()
dataset = dataset.drop('emp_name', 1)
df = pd.merge(df, dataset, how = '')
df = df.sort('date', ascending = True)
df.fillna(0, inplace = True)
df.set_index(['Company', 'date'], inplace=True)
print df
But when I print this dataframe, it prints None. I saw this solution on StackOverflow itself. Is this not the correct way of doing it? Also I want to shuffle the positions of the columns company and date so that company becomes the first index, and date becomes the second in the hierarchy. Any ideas on this?
|
When you pass inplace=True, the changes are made on the original dataframe and the method returns None rather than the modified dataframe.
is_none = df.set_index(['Company', 'date'], inplace=True)
df # the dataframe you want
is_none # has the value None
so when you have a line like:
df = df.set_index(['Company', 'date'], inplace=True)
it first modifies df... but then it sets df to None!
That is, you should just use the line:
df.set_index(['Company', 'date'], inplace=True)
|
Python: Numpy standard deviation error
|
This is a simple test
import numpy as np
data = np.array([-1,0,1])
print data.std()
>> 0.816496580928
I don't understand how this result was generated. Obviously:
( (1^0.5 + 1^0.5 + 0^0.5)/(3-1) )^0.5 = 1
and in matlab it gives me std([-1,0,1]) = 1. Could you help me understand how numpy.std() works?
|
First, you need to divide by N (3), not N-1. As larsmans pointed out, python will use the population variance, not the sample variance. Second, you need to take the square root, not the square. (Variance is the square of the standard deviation, not the other way around.)
So the real answer is sqrt(2/3) which is exactly that: 0.8164965...
To use a different value (than N) for the degrees of freedom, use the keyword argument ddof with a value other than 0:
np.std(data, ddof=1)
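A quick check of both conventions on the data from the question (the exact values are sqrt(2/3) and 1.0):
import numpy as np

data = np.array([-1, 0, 1])
print data.std()            # 0.816496580928  (population std, ddof=0)
print np.std(data, ddof=1)  # 1.0             (sample std, like MATLAB)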
|
How to receive json data using HTTP POST request in Django 1.6?
|
I am learning Django 1.6.
I want to post some JSON using HTTP POST request and I am using Django for this task for learning.
I tried to use request.POST['data'], request.raw_post_data, request.body but none are working for me.
my views.py is
import json
from django.http import StreamingHttpResponse
def main_page(request):
if request.method=='POST':
received_json_data=json.loads(request.POST['data'])
#received_json_data=json.loads(request.body)
return StreamingHttpResponse('it was post request: '+str(received_json_data))
return StreamingHttpResponse('it was GET request')
I am posting JSON data using requests module.
import requests
import json
url = "http://localhost:8000"
data = {'data':[{'key1':'val1'}, {'key2':'val2'}]}
headers = {'content-type': 'application/json'}
r=requests.post(url, data=json.dumps(data), headers=headers)
r.text
r.text should print that message and the posted data, but I am not able to solve this simple problem. Please tell me how to collect posted data in Django 1.6.
|
You're confusing form-encoded and JSON data here. request.POST['foo'] is for form-encoded data. You are posting raw JSON, so you should use request.body.
received_json_data=json.loads(request.body)
|
How to truncate a string using str.format in Python?
|
How to truncate a string using str.format in Python? Is it even possible?
There is a width parameter mentioned in the Format Specification Mini-Language:
format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]
...
width ::= integer
...
But specifying it apparently only works for padding, not truncating:
>>> '{:5}'.format('aaa')
'aaa '
>>> '{:5}'.format('aaabbbccc')
'aaabbbccc'
So it's more a minimal width than width really.
I know I can slice strings as lists, but the data I process here is completely dynamic including the format string and the args that go in. I cannot just go and explicitly slice one.
|
Use .precision instead:
>>> '{:.5}'.format('aaabbbccc')
'aaabb'
According to the documentation Format Specification Mini-Language:
The precision is a decimal number indicating how many digits should be
displayed after the decimal point for a floating point value formatted
with 'f' and 'F', or before and after the decimal point for a floating
point value formatted with 'g' or 'G'. For non-number types the field
indicates the maximum field size - in other words, how many characters
will be used from the field content. The precision is not allowed for
integer values.
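Precision also combines with a minimum width, in case you want truncation and padding at the same time:
>>> '{:10.5}'.format('aaabbbccc')
'aaabb     '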
|
How to hide *.pyc files in atom editor
|
As in the topic. I am using https://atom.io/ for Python/Django development and would like to hide all the *.pyc files from the sidebar. How do I configure that?
|
The method for hiding files that you do not want showing up in the Tree View (which is what most people mean when they ask this question) depends on whether or not you've added the files to your .gitignore. If you have, then all you have to do is:
Open Settings
Scroll down the list on the left to find the Tree View package
Click on it to bring up the package-specific settings
Ensure Hide Vcs Ignored Files is checked
If you want to hide certain files in the Tree View whether you have a Git project open or not:
Open Settings
Add the file mask to the comma-separated list under Ignored Names (add *.pyc in your case)
Scroll down the list on the left to find the Tree View package
Click on it to bring up the package-specific settings
Ensure Hide Ignored Names is checked
Also note that when you add a file mask to the list of Ignored Names that files matching that mask will not show up in other parts of Atom like the fuzzy-finder:find-file (Cmd+T on OS X and Ctrl+T on Windows/Linux by default) command.
|
pandas dataframe groupby datetime month
|
Consider a csv file:
string,date,number
a string,2/5/11 9:16am,1.0
a string,3/5/11 10:44pm,2.0
a string,4/22/11 12:07pm,3.0
a string,4/22/11 12:10pm,4.0
a string,4/29/11 11:59am,1.0
a string,5/2/11 1:41pm,2.0
a string,5/2/11 2:02pm,3.0
a string,5/2/11 2:56pm,4.0
a string,5/2/11 3:00pm,5.0
a string,5/2/14 3:02pm,6.0
a string,5/2/14 3:18pm,7.0
I can read this in, and reformat the date column into datetime format:
b=pd.read_csv('b.dat')
b['date']=pd.to_datetime(b['date'],format='%m/%d/%y %I:%M%p')
I have been trying to group the data by month. It seems like there should be an obvious way of accessing the month and grouping by that. But I can't seem to do it. Does anyone know how?
What I am currently trying is re-indexing by the date:
b.index=b['date']
I can access the month like so:
b.index.month
However I can't seem to find a function to lump together by month.
|
Managed to do it:
pd.groupby(b,by=[b.index.month,b.index.year])
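In more recent pandas versions the module-level pd.groupby spelling is gone; the equivalent is to call groupby on the frame itself - a sketch, assuming the datetime index from the question is already set:
b.groupby([b.index.month, b.index.year])['number'].sum()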
|
Segmentation fault and crashing when trying to import opencv
|
I have mac os x 10.9. I downloaded opencv-python using homebrew and I have both the python 2.7 and python 3.4 versions off of the main python site, downloaded the usual way with macs. I need to use opencv, but they do not have a download package for macs so I used homebrew:
brew tap homebrew/science
brew install opencv
When I type python into the terminal, and then type import cv, I get a segmentation error and python quits unexpectedly. The same thing happens in IDLE. For python 3.4, I get a no module error. Here it is:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/aurora_alvarez-buylla/cv.py", line 1, in <module>
from cv2.cv import *
ImportError: No module named 'cv2'
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv
Segmentation fault: 11
and then it quits unexpectedly. Does anyone know what is going on? Frankly I have spent way too much time on this opencv issue and am finding it very frustrating. Thank you!
|
Chris Muktar pointed out in this thread that the problem is caused by the conflict between the system Python and brewed Python. Following that idea, I resolved this issue by removing the system python and re-installing opencv:
$ cd /usr/bin
$ sudo mv python python.bak
$ brew uninstall opencv
$ brew install opencv
|
Why do backslashes appear twice?
|
When I create a string containing backslashes, they get duplicated:
>>> my_string = "why\does\it\happen?"
>>> my_string
'why\\does\\it\\happen?'
Why?
|
What you are seeing is the representation of the my_string created by its __repr__() method. If you print it, you can see that you've actually got single backslashes, just as you intended:
>>> print(my_string)
why\does\it\happen?
You can get the standard representation of a string (or any other object) with the repr() built-in function:
>>> print(repr(my_string))
'why\\does\\it\\happen?'
Python represents backslashes in strings as \\ because the backslash is an escape character - for instance, \n represents a newline, and \t represents a tab.
This can sometimes get you into trouble:
>>> print("this\text\is\not\what\it\seems")
this ext\is
ot\what\it\seems
Because of this, there needs to be a way to tell Python you really want the two characters \n rather than a newline, and you do that by escaping the backslash itself, with another one:
>>> print("this\\text\is\what\you\\need")
this\text\is\what\you\need
When Python returns the representation of a string, it plays safe, escaping all backslashes (even if they wouldn't otherwise be part of an escape sequence), and that's what you're seeing. However, the string itself contains only single backslashes.
More information about Python's string literals can be found at: String and Bytes literals in the Python documentation.
|
Best way to run Julia code in an IPython notebook (or Python code in an IJulia notebook)
|
My goal is to run only a few lines of Julia in an IPython notebook where the majority of the code will be Python for some experiments ...
I found a nice example notebook here:
http://nbviewer.ipython.org/github/JuliaLang/IJulia.jl/blob/master/python/doc/JuliaMagic.ipynb
And now I am wondering how I would install the IPython extension for Julia (I am primarily using IPython 2.1) so that I can load it via
%load_ext julia.magic
I am also very new to julia and I am wondering if there is a performance benefit of "mixing numpy and julia" as shown in this notebook (over regular Python numpy or regular Julia code)
When I understand the concept correctly, I would use IJulia notebooks (which I set up successfully) if I am only interested in running Julia code?
I installed IJulia, and i can also run IJulia notebooks, but I actually only wanted to have a small portion of Julia code in my notebook, the rest should be Python / Cython.
Unfortunately, I read that magic functions are not yet fully supported: "One difference from IPython is that the IJulia kernel currently does not support "magics", which are special commands prefixed with % or %% to execute code in a different language"
Is there a way to run Python code in IJulia notebooks?
|
Run Julia inside an IPython notebook
Hack
In order to run Julia snippets (or other language) inside an IPython notebook, I just append the string 'julia' to the default list in the _script_magics_default method from the ScriptMagics class in:
/usr/lib/python3.4/site-packages/IPython/core/magics/script.py or
/usr/lib/python2.7/site-packages/IPython/core/magics/script.py.
Example:
# like this:
defaults = [
'sh',
'bash',
'perl',
'ruby',
'python',
'python2',
'python3',
'pypy',
'julia', # add your own magic
]
Example notebook (using Python3)
Julia Magic (Bi-directional)
To use %load_ext julia.magic, you would need to run the setup.py here:
Update (09/04/2014): the setup.py file has been moved to pyjulia.jl:
https://github.com/JuliaLang/pyjulia
Which you get when Pkg.add("IJulia") clones the repo in your filesystem:
cd ~/.julia/v0.3/IJulia/python/
sudo python2 setup.py install
Currently this only works for me in Python2. Python3 complains about:
ImportError: No module named 'core'
when I try to load the extension, but it installs without complaint.
After installing it you can also do this from inside Python2:
from julia import Julia
j = Julia()
arr = j.run('[1:10]')
type(arr) # numpy.ndarray
http://blog.leahhanson.us/julia-calling-python-calling-julia.html
Running a script from your system shell
Use the shell mode syntax in a notebook cell:
!julia my_script.jl
Run Python inside IJulia notebook
Using PyCall
It's not really running python code in the context you want, but you can also use Python libraries from within Julia:
using PyCall
@pyimport math
println(math.pi)
https://github.com/stevengj/PyCall.jl
Running a script from your system shell
Use the shell mode syntax in a notebook cell:
;python my_script.py
http://julia.readthedocs.org/en/latest/manual/interacting-with-julia/?highlight=shell#shell-mode
|
"The C extension could not be compiled' error. - while installing Flask
|
I get the error below when installing flask in a virtualenv on debian 7. apt-get install tells me I already have GCC. I tried apt-get install libpcre3-dev but then reinstalled flask with pip install Flask-scss --force-reinstall -I and still got the same error. How do I fix this so that the speedups are used?
The Error:
markupsafe/_speedups.c:12:20: fatal error: Python.h: No such file or directory
compilation terminated.
==========================================================================
WARNING: The C extension could not be compiled, speedups are not enabled.
Failure information, if any, is above.
Retrying the build without the C extension now.
==========================================================================
WARNING: The C extension could not be compiled, speedups are not enabled.
Plain-Python installation succeeded.
==========================================================================
|
You need to install the python-dev package too; it depends on the right package that contains Python.h:
apt-get install python-dev
|
Developing Python applications in Qt Creator
|
I've developed a few Qt projects in C++ using Qt Creator in the past, but now I want to experiment with the Python implementation of Qt. I discovered that Qt Creator 2.8 and higher support Python, but I haven't been able to figure out how to create a Qt application in Python with it so far. Online documentation about it appears to be scarce.
How do I set up such a project in Qt Creator? Ideally I'm looking for a simple "Hello World" project that I can open in Qt Creator and use that as a starting point to build something.
|
Currently, Qt Creator allows you to create Python files (not projects) and run them. It also has syntax highlighting, but it lacks more complex features such as autocomplete.
Running scripts requires some configuration (I used this tutorial). Open Qt Creator and go to Tools->Options->Environment->External Tools. Click Add->Add category and create a new category (for example, Python). Then, select the created category and click Add->Add Tool to create a new tool - RunPy for example. Select the created tool and fill the fields on the right:
Description - any value
Executable - path to python.exe
Arguments - %{CurrentDocument:FilePath}
Working directory - %{CurrentDocument:Path}
Environment - QT_LOGGING_TO_CONSOLE=1
You get something like this:
Now, go to File->New File or Project->Python and select Python source file. To run the created script: Tools->External->Python->RunPy.
You can also add pyuic to it the same way:
Click again on the Add->Add Tool button to create a new tool - PyUic now. Select it again and fill the fields on the right:
Description - any value
Executable - path to pyuic5
Arguments - -o UI%{CurrentDocument:FileBaseName}.py -x %{CurrentDocument:FilePath}
Working directory - %{CurrentDocument:Path}
Environment - QT_LOGGING_TO_CONSOLE=1
Then you should have PyUic connected as well.
|
Finding median of list in Python
|
How do you find the median of a list in Python? The list can be of any size and the numbers are not guaranteed to be in any particular order.
If the list contains an even number of elements, the function should return the average of the middle two.
Here are some examples (sorted for display purposes):
median([1]) == 1
median([1, 1]) == 1
median([1, 1, 2, 4]) == 1.5
median([0, 2, 5, 6, 8, 9, 9]) == 6
median([0, 0, 0, 0, 4, 4, 6, 8]) == 2
|
Python 3.4 has statistics.median:
Return the median (middle value) of numeric data.
When the number of data points is odd, return the middle data point.
When the number of data points is even, the median is interpolated by taking the average of the two middle values:
>>> median([1, 3, 5])
3
>>> median([1, 3, 5, 7])
4.0
Usage:
import statistics
items = [1, 2, 3, 6, 8]
statistics.median(items)
#>>> 3
It's pretty careful with types, too:
statistics.median(map(float, items))
#>>> 3.0
from decimal import Decimal
statistics.median(map(Decimal, items))
#>>> Decimal('3')
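If you're on a Python version older than 3.4 and can't use the statistics module, a minimal sketch that matches the examples above is:
def median(values):
    values = sorted(values)
    n = len(values)
    mid = n // 2
    if n % 2:                                     # odd count: middle element
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2.0  # even count: average the two middle elements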
|
Beautiful Soup: 'ResultSet' object has no attribute 'find_all'?
|
I am trying to scrape a simple table using Beautiful Soup. Here is my code:
import requests
from bs4 import BeautifulSoup
url = 'https://gist.githubusercontent.com/anonymous/c8eedd8bf41098a8940b/raw/c7e01a76d753f6e8700b54821e26ee5dde3199ab/gistfile1.txt'
r = requests.get(url)
soup = BeautifulSoup(r.text)
table = soup.find_all(class_='dataframe')
first_name = []
last_name = []
age = []
preTestScore = []
postTestScore = []
for row in table.find_all('tr'):
col = table.find_all('td')
column_1 = col[0].string.strip()
first_name.append(column_1)
column_2 = col[1].string.strip()
last_name.append(column_2)
column_3 = col[2].string.strip()
age.append(column_3)
column_4 = col[3].string.strip()
preTestScore.append(column_4)
column_5 = col[4].string.strip()
postTestScore.append(column_5)
columns = {'first_name': first_name, 'last_name': last_name, 'age': age, 'preTestScore': preTestScore, 'postTestScore': postTestScore}
df = pd.DataFrame(columns)
df
However, whenever I run it, I get this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-116-a900c2872793> in <module>()
14 postTestScore = []
15
---> 16 for row in table.find_all('tr'):
17 col = table.find_all('td')
18
AttributeError: 'ResultSet' object has no attribute 'find_all'
I have read around a dozen StackOverflow questions about this error, and I cannot figure out what I am doing wrong.
|
The table variable contains an array. You would need to call find_all on its members (even though you know it's an array with only one member), not on the entire thing.
>>> type(table)
<class 'bs4.element.ResultSet'>
>>> type(table[0])
<class 'bs4.element.Tag'>
>>> len(table[0].find_all('tr'))
6
>>>
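So the minimal fix for the loop in the question is to grab that single member first (a sketch of just the relevant part; note that the cells should also be read per row rather than per table):
table = soup.find_all(class_='dataframe')[0]   # or: soup.find(class_='dataframe')
for row in table.find_all('tr'):
    col = row.find_all('td')                   # the header row yields an empty list here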
|
Finding k closest numbers to a given number
|
Say I have a list [1,2,3,4,5,6,7]. I want to find the 3 closest numbers to, say, 6.5. Then the returned value would be [5,6,7].
Finding one closest number is not that tricky in python, which can be done using
min(myList, key=lambda x:abs(x-myNumber))
But I am trying not to put a loop around this to find k closest numbers. Is there a pythonic way to achieve the above task?
|
The heapq.nsmallest() function will do this neatly and efficiently:
>>> from heapq import nsmallest
>>> s = [1,2,3,4,5,6,7]
>>> nsmallest(3, s, key=lambda x: abs(x-6.5))
[6, 7, 5]
Essentially this says, "Give me the three input values that have the lowest absolute difference from the number 6.5".
The algorithm for nsmallest makes a single pass over the data, keeping no more than the n best values in memory at any time (that means that it works with any input iterator, is cache-efficient, and space-efficient).
The algorithm only adds new values to the heap when a new "best" value is found. Accordingly, it minimizes the number of comparisons made. For example, if you are looking for the 100 best values out of 1,000,000 random inputs, it typically makes fewer than 1,008,000 comparisons (about 0.8% more compares than using min() to find the single best value).
The key functions for min(), nsmallest(), and sorted() all guarantee that the key function is called exactly once per value in the input iterable. That means that this technique will be efficient for even more complex and interesting examples of the n-closest value problem (i.e. words that sound the most alike, closest colors, smallest diffs, fewest genetic mutations, Euclidean distance, etc).
Both nsmallest() and sorted() will return a list rank ordered by nearness (ties are settled by which value was seen first).
For those who are interested, there is a somewhat involved analysis of expected number of comparisons here and here. Quick summary:
Average case for random inputs: n + k * (log(k, 2) * log(n/k) + log(k, 2) + log(n/k))
Best case for ascending inputs: n + k * log(k, 2)
Worst case for descending inputs: n * log(k, 2)
|
pandas ValueError: numpy.dtype has the wrong size, try recompiling
|
I took a new clean install of OSX 10.9.3 and installed pip, and then did
pip install pandas
pip install numpy
Both installs seemed to be perfectly happy, and ran without any errors (though there were a zillion warnings). When I tried to run a python script with import pandas, I got the following error:
numpy.dtype has the wrong size, try recompiling Traceback (most recent call last):
File "./moen.py", line 7, in import pandas File "/Library/Python/2.7/site-packages/pandas/__init__.py", line 6, in from . import hashtable, tslib, lib
File "numpy.pxd", line 157, in init pandas.hashtable (pandas/hashtable.c:22331)
ValueError: numpy.dtype has the wrong size, try recompiling
How do I fix this error and get pandas to load properly?
|
You can install a previous version of pandas.
pip uninstall numpy
pip uninstall pandas
pip install pandas==0.13.1
In my situation it solved the problem...
|
pyvenv-3.4 returned non-zero exit status 1
|
I'm on Kubuntu 14.04 and I want to create a virtualenv with python3.4. I did this with python2.7 before in another folder. But when I try:
pyvenv-3.4 venv
I've got:
Error: Command '['/home/fmr/projects/ave/venv/bin/python3.4', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1
|
I got a solution by installing python-virtualenv
sudo apt-get install python-virtualenv
and using
virtualenv --python=/usr/bin/python3.4 venv
|
Recursive feature elimination on Random Forest using scikit-learn
|
I'm trying to perform recursive feature elimination using scikit-learn and a random forest classifier, with OOB ROC as the method of scoring each subset created during the recursive process.
However, when I try to use the RFECV method, I get an error saying AttributeError: 'RandomForestClassifier' object has no attribute 'coef_'
Random Forests don't have coefficients per se, but they do have rankings by Gini score. So, I'm wondering how to get around this problem.
Please note that I want to use a method that will explicitly tell me what features from my pandas DataFrame were selected in the optimal grouping as I am using recursive feature selection to try to minimize the amount of data I will input into the final classifier.
Here's some example code:
from sklearn import datasets
import pandas as pd
from pandas import Series
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
iris = datasets.load_iris()
x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4'])
y=pd.Series(iris.target, name='target')
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
rfecv = RFECV(estimator=rf, step=1, cv=10, scoring='ROC', verbose=2)
selector=rfecv.fit(x, y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 336, in fit
ranking_ = rfe.fit(X_train, y_train).ranking_
File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 148, in fit
if estimator.coef_.ndim > 1:
AttributeError: 'RandomForestClassifier' object has no attribute 'coef_'
|
Here's what I've done to adapt RandomForestClassifier to work with RFECV:
class RandomForestClassifierWithCoef(RandomForestClassifier):
def fit(self, *args, **kwargs):
super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs)
self.coef_ = self.feature_importances_
Just using this class does the trick if you use 'accuracy' or 'f1' score. For 'roc_auc', RFECV complains that multiclass format is not supported. Changing it to two-class classification with the code below, the 'roc_auc' scoring works. (Using Python 3.4.1 and scikit-learn 0.15.1)
y=(pd.Series(iris.target, name='target')==2).astype(int)
Plugging into your code:
from sklearn import datasets
import pandas as pd
from pandas import Series
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
class RandomForestClassifierWithCoef(RandomForestClassifier):
def fit(self, *args, **kwargs):
super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs)
self.coef_ = self.feature_importances_
iris = datasets.load_iris()
x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4'])
y=(pd.Series(iris.target, name='target')==2).astype(int)
rf = RandomForestClassifierWithCoef(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
rfecv = RFECV(estimator=rf, step=1, cv=2, scoring='roc_auc', verbose=2)
selector=rfecv.fit(x, y)
|
Parse XML from URL into python object
|
The goodreads website has this API for accessing a user's 'shelves:' https://www.goodreads.com/review/list/20990068.xml?key=nGvCqaQ6tn9w4HNpW8kquw&v=2&shelf=toread
It returns XML. I'm trying to create a django project that shows books on a shelf from this API. I'm looking to find out how to write my view (or whether there is a better way) so I can pass an object to my template. Currently, this is what I'm doing:
import urllib2
def homepage(request):
file = urllib2.urlopen('https://www.goodreads.com/review/list/20990068.xml?key=nGvCqaQ6tn9w4HNpW8kquw&v=2&shelf=toread')
data = file.read()
file.close()
dom = parseString(data)
I'm not entirely sure how to manipulate this object if I'm doing this correctly. I'm following this tutorial.
|
I'd use xmltodict to make a python dictionary out of the XML data structure and pass this dictionary to the template inside the context:
import urllib2
import xmltodict
def homepage(request):
file = urllib2.urlopen('https://www.goodreads.com/review/list/20990068.xml?key=nGvCqaQ6tn9w4HNpW8kquw&v=2&shelf=toread')
data = file.read()
file.close()
data = xmltodict.parse(data)
return render_to_response('my_template.html', {'data': data})
|
uWSGI request timeout in Python
|
Trying to set the timeout for requests in uWSGI, I'm not sure of the correct setting. There seem to be multiple timeout options (socket, interface, etc.) and it's not readily evident which setting to configure or where to set it.
The behavior I'm looking for is to extend the time a request to the resource layer of a REST application can take.
|
You're probably looking for the harakiri parameter - if a request takes longer than the specified harakiri time (in seconds), the request will be dropped and the corresponding worker recycled.
For standalone uwsgi (ini config):
[uwsgi]
http = 0.0.0.0:80
harakiri = 30
...
If you have nginx proxy before uwsgi you have to increase timeout as well:
location / {
proxy_pass http://my_uwsgi_upstream;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
}
If you want (for some strange reason) higher timeout than 60s, you might consider communication over uwsgi protocol. Configuration is quite similar nginx site:
location / {
uwsgi_read_timeout 120s;
uwsgi_send_timeout 120s;
uwsgi_pass my_upstream;
include uwsgi_params;
}
uwsgi:
[uwsgi]
socket = 0.0.0.0:80
protocol = uwsgi
harakiri = 120
...
|
How to include a local table of contents into Sphinx doc?
|
How to include a local table of contents into Sphinx doc?
I tried
.. toc::
But that doesn't seem to have any effect: nothing is inserted in the document.
Basically I need links to the sections in the current page to be placed at a certain location of each page.
Is this possible?
Thanks!
|
I'm not 100% sure this is what you're looking for, but the .. contents:: directive may help. By default, it'll give you the headings for the whole page, wherever you put the directive. With :local: specified, it will generate a local TOC for the headings below where you put the directive (handy for sub-section tocs).
More details here: http://docutils.sourceforge.net/docs/ref/rst/directives.html#table-of-contents
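For the use case in the question - section links at a fixed spot on each page - the directive would look something like this (the caption text is just an example):
.. contents:: On this page
   :local: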
|
Convert generator object to list for debugging
|
When I'm debugging in Python using IPython, I sometimes hit a break-point and I want to examine a variable that is currently a generator. The simplest way I can think of doing this is converting it to a list, but I'm not clear on what's an easy way of doing this in one line in ipdb, since I'm so new to Python.
|
Simply call list on the generator.
lst = list(gen)
lst
Be aware that this affects the generator which will not return any further items.
You also cannot directly call list in IPython, as it conflicts with a command for listing lines of code.
Tested on this file:
def gen():
yield 1
yield 2
yield 3
yield 4
yield 5
import ipdb
ipdb.set_trace()
g1 = gen()
text = "aha" + "bebe"
mylst = range(10, 20)
which when run:
$ python code.py
> /home/javl/sandbox/so/debug/code.py(10)<module>()
9
---> 10 g1 = gen()
11
ipdb> n
> /home/javl/sandbox/so/debug/code.py(12)<module>()
11
---> 12 text = "aha" + "bebe"
13
ipdb> lst = list(g1)
ipdb> lst
[1, 2, 3, 4, 5]
ipdb> q
Exiting Debugger.
General method for escaping function/variable/debugger name conflicts
There are debugger commands p and pp that will print and prettyprint any expression following them.
So you could use it as follows:
$ python code.py
> /home/javl/sandbox/so/debug/code.py(10)<module>()
9
---> 10 g1 = gen()
11
ipdb> n
> /home/javl/sandbox/so/debug/code.py(12)<module>()
11
---> 12 text = "aha" + "bebe"
13
ipdb> p list(g1)
[1, 2, 3, 4, 5]
ipdb> c
There is also an exec command, called by prefixing your expression with !, which forces the debugger to treat your expression as a Python one.
ipdb> !list(g1)
[]
For more details see help p, help pp and help exec when in debugger.
ipdb> help exec
(!) statement
Execute the (one-line) statement in the context of
the current stack frame.
The exclamation point can be omitted unless the first word
of the statement resembles a debugger command.
To assign to a global variable you must always prefix the
command with a 'global' command, e.g.:
(Pdb) global list_options; list_options = ['-l']
|
X and Y axis labels for Bokeh figure
|
Does anyone know how to add x and y axis title/labels for a Bokeh figure? E.g. X-axis: time, Y-axis: stock price.
Thanks a lot!
|
Bokeh 0.11.1
As @bigreddot kindly points out, the user's guide section on axes now shows how to edit properties of existing axes. The way to do it is the same as before:
p = figure(width=300, height=300, x_axis_label='Initial xlabel')
p.xaxis.axis_label = 'New xlabel'
Bokeh 0.11
It has been added to the documentation, although it still seems to be missing from the docs for figure. You can set labels in a call to figure, or you can set them on an existing figure (dead link, see update for 0.11.1 above). The following demonstrates both:
p = figure(width=300, height=300, x_axis_label='Initial xlabel')
p.xaxis.axis_label = 'New xlabel'
Bokeh 0.5.0
In Bokeh 0.5.0, you can also use x_axis_label and y_axis_label, as in this example. Use it like so:
from bokeh.plotting import figure
figure(title = "My figure",
x_axis_label = "Time",
y_axis_label = "Stock price")
They can also be used in the same manner in bokeh.plotting.circle(). I believe, but have not tested it, that they work in the other plotting methods too.
At any rate, this will be added to documentation soon. (See this issue).
|
FieldError at /admin/ - Unknown field(s) (added_on) specified for UserProfile
|
I'm using a custom user model in Django. The model works fine and is able to create a user. But when I try to access the admin page it throws me the error
FieldError at /admin/
Unknown field(s) (added_on) specified for UserProfile
The UserProfile has an added_on attribute. I can't think of any reason why this would show. If I remove the added_on attribute from the admin.py file, the admin panel works.
Here is my models.py
from django.db import models
from django.contrib.auth.models import User, BaseUserManager, AbstractBaseUser
from django.conf import settings
class UserProfileManager(BaseUserManager):
def create_user(self, email, username, name, password=None):
if not email:
raise ValueError('Users must have an email address')
user = self.model(
username=username,
name=name,
email=self.normalize_email(email),
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, username, name, password):
user = self.create_user(email=email,
password=password,
username=username,
name=name
)
user.is_admin = True
user.save(using=self._db)
return user
class UserProfile(AbstractBaseUser):
SHOPPER = 1
TECH_ENTHU = 2
TECH_JUNKIE = 3
TECH_NINJA = 4
TECH_GURU = 5
LEVELS = (
(SHOPPER, 'Shopper'),
(TECH_ENTHU, 'Tech Enthusiast'),
(TECH_JUNKIE, 'Tech Junkie'),
(TECH_NINJA, 'Tech Ninja'),
(TECH_GURU, 'Tech Guru')
)
email = models.EmailField(max_length=255, unique=True)
username = models.CharField(max_length=100, unique=True)
name = models.CharField(max_length=255)
location = models.CharField(max_length=255, blank=True, null=True)
website = models.CharField(max_length=255, blank=True, null=True)
image_1 = models.CharField(max_length=255, blank=True, null=True)
image_2 = models.CharField(max_length=255, blank=True, null=True)
image_3 = models.CharField(max_length=255, blank=True, null=True)
points = models.PositiveIntegerField(default=0)
level = models.PositiveSmallIntegerField(choices=LEVELS, default=SHOPPER)
added_on = models.DateTimeField(auto_now_add=True)
is_active = models.BooleanField(default=True)
is_admin = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username', 'name']
objects = UserProfileManager()
def get_full_name(self):
return self.name
def get_short_name(self):
return self.name
def __unicode__(self):
return self.email
def has_perm(self, perm, obj=None):
return True
def has_module_perms(self, app_label):
return True
@property
def is_staff(self):
return self.is_admin
class OldUser(models.Model):
old_user_id = models.BigIntegerField()
user = models.ForeignKey(settings.AUTH_USER_MODEL)
converted = models.BooleanField(default=False)
Here is my admin.py
from django import forms
from django.contrib import admin
from django.contrib.auth.models import Group
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import ReadOnlyPasswordHashField
from users.models import UserProfile
class UserCreationForm(forms.ModelForm):
"""A form for creating new users. Includes all the required
fields, plus a repeated password."""
password1 = forms.CharField(label='Password', widget=forms.PasswordInput)
password2 = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)
class Meta:
model = UserProfile
fields = ('username', 'name')
def clean_password2(self):
# Check that the two password entries match
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return password2
def save(self, commit=True):
# Save the provided password in hashed format
user = super(UserCreationForm, self).save(commit=False)
user.set_password(self.cleaned_data["password1"])
if commit:
user.save()
return user
class UserChangeForm(forms.ModelForm):
"""A form for updating users. Includes all the fields on
the user, but replaces the password field with admin's
password hash display field.
"""
password = ReadOnlyPasswordHashField()
class Meta:
model = UserProfile
fields = ('email', 'password', 'username', 'name', 'location', 'website', 'image_1', 'image_2', 'image_3',
'points', 'level', 'added_on', 'is_active', 'is_admin')
def clean_password(self):
# Regardless of what the user provides, return the initial value.
# This is done here, rather than on the field, because the
# field does not have access to the initial value
return self.initial["password"]
class UserProfileAdmin(UserAdmin):
# The forms to add and change user instances
form = UserChangeForm
add_form = UserCreationForm
# The fields to be used in displaying the User model.
# These override the definitions on the base UserAdmin
# that reference specific fields on auth.User.
list_display = ('email', 'username', 'name', 'points', 'level', 'is_admin')
list_filter = ('is_admin',)
fieldsets = (
(None, {'fields': ('email', 'password')}),
('Personal info', {'fields': ('username', 'name', 'location', 'website', 'image_1', 'image_2', 'image_3',
'points', 'level', 'added_on')}),
('Permissions', {'fields': ('is_admin',)}),
)
# add_fieldsets is not a standard ModelAdmin attribute. UserAdmin
# overrides get_fieldsets to use this attribute when creating a user.
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'username', 'name', 'password1', 'password2')}
),
)
search_fields = ('email',)
ordering = ('email',)
filter_horizontal = ()
admin.site.register(UserProfile, UserProfileAdmin)
# Since we're not using Django's built-in permissions,
# unregister the Group model from admin.
admin.site.unregister(Group)
Here is the traceback
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/admin/
Django Version: 1.6.2
Python Version: 2.7.3
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'users')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
101. resolver_match = resolver.resolve(request.path_info)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in resolve
318. for pattern in self.url_patterns:
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in url_patterns
346. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in urlconf_module
341. self._urlconf_module = import_module(self.urlconf_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py" in import_module
40. __import__(name)
File "/home/jaskaran/coding/buyingiq/authentication/authentication/urls.py" in <module>
4. admin.autodiscover()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/admin/__init__.py" in autodiscover
29. import_module('%s.admin' % app)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py" in import_module
40. __import__(name)
File "/home/jaskaran/coding/buyingiq/authentication/users/admin.py" in <module>
36. class UserChangeForm(forms.ModelForm):
File "/usr/local/lib/python2.7/dist-packages/django/forms/models.py" in __new__
292. raise FieldError(message)
Exception Type: FieldError at /admin/
Exception Value: Unknown field(s) (added_on) specified for UserProfile
|
Your problem is the auto_now_add=True on that field. See the notes on the documentation for DateField:
Note that the current date is always used; it's not just a default value that you can override.
and
As currently implemented, setting auto_now or auto_now_add to True will cause the field to have editable=False and blank=True set.
Since editable=False, you can't include it in the list of fields for that form (you could put it in readonly_fields, of course).
If you want the value to take the creation date as a default, but still allow it to be edited and overridden, you should use default instead:
added_on = models.DateTimeField(default=datetime.datetime.now)
(side note, you should always use the callable for the default value, without the calling parentheses).
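For that default to work, datetime must of course be imported in models.py; with USE_TZ enabled, Django's timezone-aware helper is the usual choice instead - a sketch, either callable serves the purpose here:
import datetime
from django.utils import timezone

added_on = models.DateTimeField(default=timezone.now)   # or: default=datetime.datetime.now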
|
How do I create test and train samples from one dataframe with pandas?
|
I have a fairly large dataset in the form of a dataframe and I was wondering how I would be able to split the dataframe into two random samples (80% and 20%) for training and testing.
Thanks!
|
SciKit Learn's train_test_split is a good one.
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
train, test = train_test_split(df, test_size = 0.2)
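In newer scikit-learn releases the same function lives in model_selection (the cross_validation module was deprecated and later removed), so the import becomes:
from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.2)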
|
cx_Oracle doesn't connect when using SID instead of service name on connection string
|
I have a connection string that looks like this
con_str = "myuser/mypass@oracle.sub.example.com:1521/ora1"
Where ora1 is the SID of my database. Using this information in SQL Developer works fine, meaning that I can connect and query without problems.
However, if I attempt to connect to Oracle using this string, it fails.
cx_Oracle.connect(con_str)
DatabaseError: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
This connection string format works if the ora1 is a service name, though.
I have seen other questions that seem to have the reverse of my problem (it works with SID, but not Service name)
Using Oracle Service Names with SQLAlachemy
Oracle SID and Service name; connection problems
cx_Oracle & Connecting to Oracle DB Remotely
What is the proper way to connect to Oracle, using cx_Oracle, using an SID and not a service name? How do I do this without the need to adjust the TNSNAMES.ORA file? My application is distributed to many users internally and making changes to the TNSNAMES file is less than ideal when dealing with users without administrator privileges on their Windows machines. Additionally, when I use service name, I don't need to touch this file at all and would like it keep it that way.
|
In a similar scenario, I was able to connect to the database by using cx_Oracle.makedsn() to create a dsn string with a given SID (instead of the service name):
dsnStr = cx_Oracle.makedsn("oracle.sub.example.com", "1521", "ora1")
This returns something like
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle.sub.example.com)(PORT=1521)))(CONNECT_DATA=(SID=ora1)))
which can then be used with cx_Oracle.connect() to connect to the database:
con = cx_Oracle.connect(user="myuser", password="mypass", dsn=dsnStr)
print con.version
con.close()
|
Length of string in Jinja/Flask
|
Jinja unfortunately does not support executing arbitrary Python code, such as
{% if len(some_var)>1 %} ... {% endif %}
My current workaround is to use the deprecated, ugly, double-underscore method:
{% if some_var.__len__()>1 %} ... {% endif %}
Although this works, I'm afraid that some future implementation of strings might break this code. Is there a better way to do this?
|
You can use the length filter:
{% if some_var|length > 1 %}
|
Unsupported command-line flag: --ignore-certificate-errors
|
Using Python 2.7.5, python module selenium (2.41.0) and chromedriver (2.9).
When Chrome starts it displays a message in a yellow popup bar: "You are using an unsupported command-line flag: --ignore-certificate-errors. Stability and security will suffer." This simple example reproduces the problem.
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("http://google.com/")
How do I remove this command-line flag in python selenium?
|
This extra code removes the --ignore-certificate-errors command-line flag for me. In my opinion the arguments that can be added to webdriver.Chrome() could (and should) be better documented somewhere; I found this solution in a comment on the chromedriver issues page (see post #25).
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["ignore-certificate-errors"])
browser = webdriver.Chrome(chrome_options=options)
browser.get("http://google.com/")
|
Python TypeError: non-empty format string passed to object.__format__
|
I hit this TypeError exception recently, which I found very difficult to debug. I eventually reduced it to this small test case:
>>> "{:20}".format(b"hi")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: non-empty format string passed to object.__format__
This is very non-obvious, to me anyway. The workaround for my code was to decode the byte string into unicode:
>>> "{:20}".format(b"hi".decode("ascii"))
'hi '
What is the meaning of this exception? Is there a way it can be made more clear?
|
bytes objects do not have a __format__ method of their own, so the default from object is used:
>>> bytes.__format__ is object.__format__
True
>>> '{:20}'.format(object())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: non-empty format string passed to object.__format__
It just means that you cannot use anything other than straight up, unformatted unaligned formatting on these. Explicitly convert to a string object (as you did by decoding bytes to str) to get format spec support.
You can make the conversion explicit by using the !s string conversion:
>>> '{!s:20s}'.format(b"Hi")
"b'Hi' "
>>> '{!s:20s}'.format(object())
'<object object at 0x1100b9080>'
object.__format__ explicitly rejects format strings to avoid implicit string conversions, specifically because formatting instructions are type specific.
|
Can PyCharm drop into debug when py.test tests fail
|
When running tests with py.test there is a --pdb option to enter pdb on failure.
Is there a similar way to enter the debugger when running the same test from within PyCharm?
|
There is a py.test plugin, pytest-pycharm, that will halt the PyCharm debugger when a test emits an uncaught exception.
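It is an ordinary py.test plugin, so installing it into the interpreter that PyCharm uses should be all that's needed:
pip install pytest-pycharm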
|
Cmake is not able to find Python-libraries
|
Getting this error:
sudo: unable to resolve host coderw@ll
-- Could NOT find PythonLibs (missing: PYTHON_LIBRARIES PYTHON_INCLUDE_DIRS)
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108
(message):
Could NOT find PythonInterp (missing: PYTHON_EXECUTABLE)
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315
(_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-2.8/Modules/FindPythonInterp.cmake:139
(FIND_PACKAGE_HANDLE_STANDARD_ARGS)
Code/cmake/Modules/FindNumPy.cmake:10 (find_package)
CMakeLists.txt:114 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/coderwall/Desktop/rdkit/build/CMakeFiles/CMakeOutput.log".
See also "/home/coderwall/Desktop/rdkit/build/CMakeFiles/CMakeError.log".
I have already installed:
sudo apt-get install python-dev
Environment variable are already set as follow:
PYTHON_INCLUDE_DIRS=/usr/include/python2.7
PYTHON_LIBRARIES=/usr/lib/python2.7/config/libpython2.7.so
Location of python.h : /usr/lib/include/python2.7/python.h
Location of python libs: /usr/lib/python2.7/
How to solve this?
|
I was facing this problem while trying to compile OpenCV 3 on a Xubuntu 14.04 Thrusty Tahr system.
With all the dev packages of Python installed, the configuration process was always returning the message:
Could NOT found PythonInterp: /usr/bin/python2.7 (found suitable version "2.7.6", minimum required is "2.7")
Could NOT find PythonLibs (missing: PYTHON_INCLUDE_DIRS) (found suitable exact version "2.7.6")
Found PythonInterp: /usr/bin/python3.4 (found suitable version "3.4", minimum required is "3.4")
Could NOT find PythonLibs (missing: PYTHON_LIBRARIES) (Required is exact version "3.4.0")
The CMake version available in the Trusty Tahr repositories is 2.8.
Some posts inspired me to upgrade CMake.
I've added a PPA CMake repository which installs CMake version 3.2.
After the upgrade everything ran smoothly and the compilation was successful.
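For reference, a rough sketch of the upgrade steps I mean (the PPA name is an assumption from memory, so double-check it before using it):
sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
sudo apt-get install cmake
cmake --version   # should now report 3.x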
|
How to change default install location for pip
|
I'm trying to install Pandas using pip, but I'm having a bit of trouble. I just ran sudo pip install pandas which successfully downloaded pandas. However, it did not get downloaded to the location that I wanted. Here's what I see when I use pip show pandas:
---
Name: pandas
Version: 0.14.0
Location: /Library/Python/2.7/site-packages/pandas-0.14.0-py2.7-macosx-10.9-intel.egg
Requires: python-dateutil, pytz, numpy
So it is installed. But I was confused when I created a new Python Project and searched under System Libs/lib/python for pandas, because it didn't show up. Some of the other packages that I've downloaded in the past did show up, however, so I tried to take a look at where those were. Running pip show numpy (which I can import with no problem) yielded:
---
Name: numpy
Version: 1.6.2
Location: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
Requires:
Which is in a completely different directory. For the sake of confirming my error, I ran pip install pyquery to see where it would be downloaded to, and got:
Name: pyquery
Version: 1.2.8
Location: /Library/Python/2.7/site-packages
Requires: lxml, cssselect
So the same place as pandas...
How do I change the default download location for pip so that these packages are downloaded to the same location that numpy is in?
Note: There were a few similar questions that I saw when searching for a solution, but I didn't see anything that mentioned permanently changing the default location.
|
According to pip documentation at
http://pip.readthedocs.org/en/stable/user_guide/#configuration
You will need to specify the default install location within a pip configuration file, which, according to the same page, is usually located as follows:
On Unix and Mac OS X the configuration file is: $HOME/.pip/pip.conf
On Windows, the configuration file is: %HOME%\pip\pip.ini
On Windows, %HOME% is typically C:\Users\Bob, assuming your username is Bob.
On Linux, $HOME is simply your home directory (cd ~ takes you there).
You may have to create the configuration file yourself if it does not already exist. Within your pip.ini (or pip.conf) you will then need to put something like the following (Windows example):
[global]
target=C:\Users\Bob\Desktop
Except that you would replace C:\Users\Bob\Desktop with whatever path you desire. If you are on Linux you would replace it with something like /usr/local/your/path
After saving the command would then be
pip install pandas
However, the program you install might assume it will be installed in a certain directory and might not work as a result of being installed elsewhere.
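For a one-off install you can also pass the target directory on the command line instead of editing the configuration file (the path is just an example):
pip install --target=C:\Users\Bob\Desktop pandas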
|
Argparse: Required arguments listed under "optional arguments"?
|
I use the following simple code to parse some arguments; note that one of them is required. Unfortunately, when the user runs the script without providing the argument, the displayed usage/help text does not indicate that there is a non-optional argument, which I find very confusing. How can I get python to indicate that an argument is not optional?
Here is the code:
import argparse
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Foo')
parser.add_argument('-i','--input', help='Input file name', required=True)
parser.add_argument('-o','--output', help='Output file name', default="stdout")
args = parser.parse_args()
print ("Input file: %s" % args.input )
print ("Output file: %s" % args.output )
When running above code without providing the required argument, I get the following output:
usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
Input file name
-o OUTPUT, --output OUTPUT
Output file name
|
Parameters starting with - or -- are usually considered optional. All other parameters are positional parameters and as such required by design (like positional function arguments). It is possible to require optional arguments, but this is a bit against their design. Since they are still part of the non-positional arguments, they will still be listed under the confusing header "optional arguments" even if they are required. The missing square brackets in the usage part however show that they are indeed required.
See also the documentation:
In general, the argparse module assumes that flags like -f and --bar indicate optional arguments, which can always be omitted at the command line.
Note: Required options are generally considered bad form because users expect options to be optional, and thus they should be avoided when possible.
That being said, the headers "positional arguments" and "optional arguments" in the help are generated by two argument groups into which the arguments are automatically separated. Now, you could "hack into it" and change the name of the optional ones, but a far more elegant solution would be to create another group for "required named arguments" (or whatever you want to call them):
parser = argparse.ArgumentParser(description='Foo')
parser.add_argument('-o', '--output', help='Output file name', default='stdout')
requiredNamed = parser.add_argument_group('required named arguments')
requiredNamed.add_argument('-i', '--input', help='Input file name', required=True)
parser.parse_args(['-h'])
usage: [-h] [-o OUTPUT] -i INPUT
Foo
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
required named arguments:
-i INPUT, --input INPUT
Input file name
|
Matplotlib: Move ticklabels between ticks
|
I want to create a visualization of a confusion matrix using matplotlib.
Parameters to the methods shown below are the class labels (alphabet),
the classification results as a list of lists (conf_arr) and an output filename.
I am pretty happy with the result so far, with one last problem:
I am not able to center the axis tick labels between the gridlines.
If I pass the extent parameter to the imshow method as below,
the grid is aligned as I would like it to be.
If I comment it out, the grid is misaligned but the labels are where I would like
them to be.
I think I need a way to move the ticklabel between the associated tick and the next tick
but I do not know if and how this is possible.
To summarize, I want the grid/ticks like in the left image, but the ticklabels
positioned like in the right image:
def create_confusion_matrix(alphabet, conf_arr, outputname):
norm_conf = []
width = len(conf_arr)
height = len(conf_arr[0])
for i in conf_arr:
a = 0
tmp_arr = []
a = sum(i, 0)
for j in i:
tmp_arr.append(float(j)/float(a))
norm_conf.append(tmp_arr)
fig = plt.figure(figsize=(14,14))
#fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
ax.set_aspect(1)
ax.grid(which='major')
res = ax.imshow(np.array(norm_conf), cmap=plt.cm.binary,
interpolation='none', aspect='1', vmax=1,
##Commenting out this line sets labels correctly,
##but the grid is off
extent=[0, width, height, 0]
)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.2)
cb = fig.colorbar(res, cax=cax)
#Axes
ax.set_xticks(range(width))
ax.set_xticklabels(alphabet, rotation='vertical')
ax.xaxis.labelpad = 0.5
ax.set_yticks(range(height))
ax.set_yticklabels(alphabet, rotation='horizontal')
#plt.tight_layout()
plt.savefig(outputname, format='png')
The produced image looks like this:
|
As you've noticed, they're centered by default and you're overriding the default behavior by specifying extent=[0, width, height, 0].
There are a number of ways to handle this. One is to use pcolor and set the edgecolors and linestyles to look like the gridlines (you actually need pcolor and not pcolormesh for this to work). However, you'll have to change the extents to get the ticks in the center as imshow does by default.
import matplotlib.pyplot as plt
import numpy as np
data = np.random.random((10,10))
labels = 'abcdefghij'
fig, ax = plt.subplots()
im = ax.pcolor(data, cmap='gray', edgecolor='black', linestyle=':', lw=1)
fig.colorbar(im)
# Shift ticks to be at 0.5, 1.5, etc
for axis in [ax.xaxis, ax.yaxis]:
axis.set(ticks=np.arange(0.5, len(labels)), ticklabels=labels)
plt.show()
Alternatively, you could turn on the minor grid and place it at the pixel boundaries. Because you want fixed labels, we'll just set everything manually. Otherwise, a MultipleLocator would make more sense:
import matplotlib.pyplot as plt
import numpy as np
data = np.random.random((10,10))
labels = 'abcdefghij'
fig, ax = plt.subplots()
im = ax.imshow(data, cmap='gray', interpolation='none')
fig.colorbar(im)
# Set the major ticks at the centers and minor tick at the edges
locs = np.arange(len(labels))
for axis in [ax.xaxis, ax.yaxis]:
axis.set_ticks(locs + 0.5, minor=True)
axis.set(ticks=locs, ticklabels=labels)
# Turn on the grid for the minor ticks
ax.grid(True, which='minor')
plt.show()
|
Reset color cycle in Matplotlib
|
Say I have data about 3 trading strategies, each with and without transaction costs. I want to plot, on the same axes, the time series of each of the 6 variants (3 strategies * 2 trading costs). I would like the "with transaction cost" lines to be plotted with alpha=1 and linewidth=1 while I want the "no transaction costs" to be plotted with alpha=0.25 and linewidth=5. But I would like the color to be the same for both versions of each strategy.
I would like something along the lines of:
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
for c in with_transaction_frame.columns:
ax.plot(with_transaction_frame[c], label=c, alpha=1, linewidth=1)
****SOME MAGIC GOES HERE TO RESET THE COLOR CYCLE
for c in no_transaction_frame.columns:
ax.plot(no_transaction_frame[c], label=c, alpha=0.25, linewidth=5)
ax.legend()
What is the appropriate code to put on the indicated line to reset the color cycle so it is "back to the start" when the second loop is invoked?
|
You can reset the colorcycle to the original with Axes.set_color_cycle. Looking at the code for this, there is a function to do the actual work:
def set_color_cycle(self, clist=None):
if clist is None:
clist = rcParams['axes.color_cycle']
self.color_cycle = itertools.cycle(clist)
And a method on the Axes which uses it:
def set_color_cycle(self, clist):
"""
Set the color cycle for any future plot commands on this Axes.
*clist* is a list of mpl color specifiers.
"""
self._get_lines.set_color_cycle(clist)
self._get_patches_for_fill.set_color_cycle(clist)
This basically means you can call the set_color_cycle with None as the only argument, and it will be replaced with the default cycle found in rcParams['axes.color_cycle'].
I tried this with the following code and got the expected result:
import matplotlib.pyplot as plt
import numpy as np
for i in range(3):
plt.plot(np.arange(10) + i)
plt.gca().set_color_cycle(None)
for i in range(3):
plt.plot(np.arange(10, 1, -1) + i)
plt.show()
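On newer Matplotlib releases (1.5 and later) set_color_cycle is deprecated in favour of the property-cycle API; as far as I know the equivalent reset is:
plt.gca().set_prop_cycle(None)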
|
How can I get the IP address of eth0 in Python?
|
When an error occurs in a Python script on Unix, an email is sent.
I have been asked to add {Testing Environment} to the subject line of the email if the IP address is 192.168.100.37 which is the testing server. This way we can have one version of a script and a way to tell if the email is coming from messed up data on the testing server.
However, when I google I keep finding this code:
import socket
socket.gethostbyname(socket.gethostname())
However, that's giving me the IP address of 127.0.1.1. When I use ifconfig I get this
eth0 Link encap:Ethernet HWaddr 00:1c:c4:2c:c8:3e
inet addr:192.168.100.37 Bcast:192.168.100.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:75760697 errors:0 dropped:411180 overruns:0 frame:0
TX packets:23166399 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:59525958247 (59.5 GB) TX bytes:10142130096 (10.1 GB)
Interrupt:19 Memory:f0500000-f0520000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:25573544 errors:0 dropped:0 overruns:0 frame:0
TX packets:25573544 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:44531490070 (44.5 GB) TX bytes:44531490070 (44.5 GB)
Firstly, I don't know where it got 127.0.1.1 from, but either way that's not what I want. When I google I keep coming to the same syntax, Bash scripts or netifaces and I'm trying to use standard libraries.
So how can I get the IP address of eth0 in Python?
|
Two methods:
Method #1 (use external package)
You need to ask for the IP address that is bound to your eth0 interface. This is available from the netifaces package
import netifaces as ni
ni.ifaddresses('eth0')
ip = ni.ifaddresses('eth0')[2][0]['addr']
print ip # should print "192.168.100.37"
You can also get a list of all available interfaces via
ni.interfaces()
Method #2 (no external package)
Here's a way to get the IP address without using a python package:
import socket
import fcntl
import struct
def get_ip_address(ifname):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
return socket.inet_ntoa(fcntl.ioctl(
s.fileno(),
0x8915, # SIOCGIFADDR
struct.pack('256s', ifname[:15])
)[20:24])
get_ip_address('eth0') # '192.168.0.110'
Note: detecting the IP address to determine what environment you are using is quite a hack. Almost all frameworks provide a very simple way to set/modify an environment variable to indicate the current environment. Try and take a look at your documentation for this. It should be as simple as doing
if app.config['ENV'] == 'production':
#send production email
else:
#send development email
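If no framework is involved, a plain environment variable read with the standard library works just as well; a minimal sketch, where APP_ENV is simply a name I made up:
import os
# APP_ENV is a hypothetical variable you would set differently on each server
if os.environ.get('APP_ENV') == 'production':
    subject = 'Script error'
else:
    subject = '{Testing Environment} Script error'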
|
pymongo.errors.CursorNotFound: cursor id '...' not valid at server
|
I am trying to fetch some ids that exist in a mongo database with the following code:
client = MongoClient('xx.xx.xx.xx', xxx)
db = client.test_database
db = client['...']
collection = db.test_collection
collection = db["..."]
for cursor in collection.find({ "$and" : [{ "followers" : { "$gt" : 2000 } }, { "followers" : { "$lt" : 3000 } }, { "list_followers" : { "$exists" : False } }] }):
print cursor['screenname']
print cursor['_id']['uid']
id = cursor['_id']['uid']
However, after a little while I am receiving: pymongo.errors.CursorNotFound: cursor id '...' not valid at server. I found this article which refers to that problem. Nevertheless it is not clear to me which solution to take. Is it possible to use find().batch_size(30)? What exactly does the above command do? Can I take all the database ids using batch_size?
|
You're getting this error because the cursor is timing out on the server (after 10 minutes of inactivity).
From the pymongo documentation:
Cursors in MongoDB can timeout on the server if they've been open for
a long time without any operations being performed on them. This can
lead to an CursorNotFound exception being raised when attempting to
iterate the cursor.
When you call the collection.find method it queries a collection and it returns a cursor to the documents. To get the documents you iterate the cursor. When you iterate over the cursor the driver is actually performing requests to the MongoDB server to fetch more data from the server. The amount of data returned in each request to fetch more data is set by the batch_size() method.
From the documentation:
Limits the number of documents returned in one batch. Each batch
requires a round trip to the server. It can be adjusted to optimize
performance and limit data transfer.
Setting the batch_size to a lower value will help you with the timeout errors, but it will increase the number of times you're going to access the MongoDB server to get all the documents.
The default batch size:
For most queries, the first batch returns 101 documents or just enough
documents to exceed 1 megabyte. Batch size will not exceed the maximum BSON document size (16 MB).
There is no universal "right" batch size. You should test with different values and see what is the appropriate value for your use case i.e. how many documents can you process in a 10 minute window.
As a last resort you can set timeout=False when creating the cursor, but then you need to be sure that the cursor is closed after you finish processing the data.
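For reference, a minimal sketch of the batch_size approach with a query equivalent to the one in the question (30 is an arbitrary value; tune it to what you can process within the timeout window):
query = {"followers": {"$gt": 2000, "$lt": 3000}, "list_followers": {"$exists": False}}
for doc in collection.find(query).batch_size(30):
    print doc['_id']['uid']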
|
How can I get tweets older than a week (using tweepy or other python libraries)
|
I have been trying to figure this out but it is really frustrating. I'm trying to get tweets with a certain hashtag (a great amount of tweets) using Tweepy. But this doesn't go back more than one week. I need to go back at least two years for a period of a couple of months. Is this even possible, and if so, how?
Just for the check here is my code
import tweepy
import csv
consumer_key = '####'
consumer_secret = '####'
access_token = '####'
access_token_secret = '####'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# Open/Create a file to append data
csvFile = open('tweets.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)
for tweet in tweepy.Cursor(api.search,q="#ps4",count=100,\
lang="en",\
since_id=2014-06-12).items():
print tweet.created_at, tweet.text
csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8')])
|
You cannot use the twitter search API to collect tweets from two years ago. Per the docs:
Also note that the search results at twitter.com may return historical results while the Search API usually only serves tweets from the past week. - Twitter documentation.
If you need a way to get old tweets, you can get them from individual users because collecting tweets from them is limited by number rather than time (so in many cases you can go back months or years). A third-party service that collects tweets like Topsy may be useful in your case as well (shut down as of July 2016, but other services exist).
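As a rough sketch of the per-user route (the screen name is a placeholder; user_timeline only reaches back roughly the most recent 3200 tweets per account):
for tweet in tweepy.Cursor(api.user_timeline, screen_name="some_user", count=200).items():
    csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8')])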
|
Django CommandError: App 'polls' has migrations
|
Trying to follow the tutorial at Django project.
The problem I've come across is that when performing the command:
python manage.py sql polls I'm given the error:
CommandError: App 'polls' has migrations. only the sqlmigrate and sqlflush commands can be used when an app has migrations
So far I can't seem to find any guide on the internet, or on this website, for a solution to the problem.
|
You can either run python manage.py makemigrations followed by python manage.py migrate, or just delete the migrations folder inside the polls app.
|
Adding a new pandas column with mapped value from a dictionary
|
I'm trying to do something that should be really simple in pandas, but it seems anything but. I'm trying to add a column to an existing pandas dataframe that is a mapped value based on another (existing) column. Here is a small test case:
import pandas as pd
equiv = {7001:1, 8001:2, 9001:3}
df = pd.DataFrame( {"A": [7001, 8001, 9001]} )
df["B"] = equiv(df["A"])
print(df)
I was hoping the following would result:
A B
0 7001 1
1 8001 2
2 9001 3
Instead, I get an error telling me that equiv is not a callable function. Fair enough, it's a dictionary, but even if I wrap it in a function I still run into errors. So I tried to use a map function, which seems to work with other operations, but it is also defeated by the use of a dictionary:
df["B"] = df["A"].map(lambda x:equiv[x])
In this case I just get KeyError: 8001. I've read through documentation and previous posts, but have yet to come across anything that suggests how to mix dictionaries with pandas dataframes. Any suggestions would be greatly appreciated.
|
The right way of doing it will be df["B"] = df["A"].map(equiv).
In [55]:
import pandas as pd
equiv = {7001:1, 8001:2, 9001:3}
df = pd.DataFrame( {"A": [7001, 8001, 9001]} )
df["B"] = df["A"].map(equiv)
print(df)
A B
0 7001 1
1 8001 2
2 9001 3
[3 rows x 2 columns]
And it will handle the situation when the key does not exist very nicely, considering the following example:
In [56]:
import pandas as pd
equiv = {7001:1, 8001:2, 9001:3}
df = pd.DataFrame( {"A": [7001, 8001, 9001, 10000]} )
df["B"] = df["A"].map(equiv)
print(df)
A B
0 7001 1
1 8001 2
2 9001 3
3 10000 NaN
[4 rows x 2 columns]
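If you would rather have a default value than NaN for unmapped keys, one option is to chain fillna (the 0 here is just an example default):
df["B"] = df["A"].map(equiv).fillna(0)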
|
Changing User Agent in Python 3 for urrlib.request.urlopen
|
I want to open a url using urllib.request.urlopen('someurl'):
with urllib.request.urlopen('someurl') as url:
b = url.read()
I keep getting the following error:
urllib.error.HTTPError: HTTP Error 403: Forbidden
I understand the error to be due to the site not letting Python access it, to stop bots wasting their network resources, which is understandable. I went searching and found that you need to change the user agent for urllib. However, all the guides and solutions I have found for changing the user agent are for urllib2, and since I am using Python 3, none of them work.
How can I fix this problem with python 3?
|
From the Python docs:
import urllib.request
req = urllib.request.Request(
url,
data=None,
headers={
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
}
)
f = urllib.request.urlopen(req)
print(f.read().decode('utf-8'))
|
Does Google App Engine support Python 3?
|
I started learning Python 3.4 and would like to start using libraries as well as Google App Engine, but the majority of Python libraries only support Python 2.7 and the same with Google App Engine.
Should I learn 2.7 instead or is there an easier way? (Is it possible to have 2 Python versions on my machine at the same time?)
|
No, it doesn't.
Google App Engine (GAE) uses a sandboxed Python 2.7 runtime for Python applications. That is the normal App Engine Hosting. However, in GAE you can use Managed VM Hosting.
The Managed VM Hosting lets you run GAE applications on configurable Google Compute Engine Virtual Machines, giving you more flexibility. Managed VMs, at the moment in the Alpha phase, only support the Java 7, Python 2.7 and Go 1.4 runtime environments. To get other runtimes (like Python 3 or Node.js) you can create a user-configurable custom runtime.
Note: With Managed VMs you won't have the capabilities of the Python 2.7 GAE libraries.
If you insist on using GAE, since Python 3+ is not viable, I would suggest learning 2.7 and switching to 3+ when the GAE libraries get ported to Python 3+. You can easily switch to the other if you learn one of the versions.
If you insist on using Python 3+, you can use Heroku or Microsoft Azure. Both of them support Python 2.7 and 3.4.
|
About the changing id of a Python immutable string
|
Something about the id of objects of type str (in python 2.7) puzzles me. The str type is immutable, so I would expect that once it is created, it will always have the same id. I believe I don't phrase myself so well, so instead I'll post an example of input and output sequence.
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
so in the meanwhile, it changes all the time. However, after having a variable pointing at that string, things change:
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
So it looks like it freezes the id, once a variable holds that value. Indeed, after del so and del not_so, the output of id('so') start changing again.
This is not the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same id; still, I am trying to figure out the source of this behaviour. I believe that someone who is familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
Trying the same with a different string gave different results...
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
Now it is equal...
|
CPython does not intern strings by default, but in practice, a lot of places in the Python codebase do reuse already-created string objects. A lot of Python internals use (the C-equivalent of) the intern() function call to explicitly intern Python strings, but in general, a Python string literal creates a new string object each time.
Python is also free to reuse memory locations, and Python will also make immutable values constant by storing them once, at compile time, with the bytecode in code objects. The Python REPL (interactive interpreter) also stores the most recent expression result in the _ name, which muddles up things some more.
As such, you will see the same id crop up from time to time.
Running just the line id(<string literal>) in the REPL goes through several steps:
The line is compiled, which includes creating a constant for the string object:
>>> compile("id('foo')", '<stdin>', 'single').co_consts
('foo', None)
This shows the stored constants with the compiled bytecode; in this case a string 'foo' and the None singleton.
On execution, the string is loaded from the code constants, and id() returns the memory location. The resulting int value is bound to _, as well as printed:
>>> import dis
>>> dis.dis(compile("id('foo')", '<stdin>', 'single'))
1 0 LOAD_NAME 0 (id)
3 LOAD_CONST 0 ('foo')
6 CALL_FUNCTION 1
9 PRINT_EXPR
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
The code object is not referenced by anything, reference count drops to 0 and the code object is deleted. As a consequence, so is the string object.
Python can then perhaps reuse the same memory location for a new string object, if you re-run the same code. This usually leads to the same memory address being printed if you repeat this code. This does depend on what else you do with your Python memory.
ID reuse is not predictable; if in the meantime the garbage collector runs to clear circular references, other memory could be freed and you'll get new memory addresses.
Next, the Python compiler will also intern any Python string stored as a constant, provided it is a valid identifier. The Python code object factory function PyCode_New will intern any string object that contains only letters, digits or an underscore:
/* Intern selected string constants */
for (i = PyTuple_Size(consts); --i >= 0; ) {
PyObject *v = PyTuple_GetItem(consts, i);
if (!PyString_Check(v))
continue;
if (!all_name_chars((unsigned char *)PyString_AS_STRING(v)))
continue;
PyString_InternInPlace(&PyTuple_GET_ITEM(consts, i));
}
Since you created strings that fit that criteria, they are interned, which is why you see the same ID being used for the 'so' string, even when recreated and bound to different identifiers.
Incidentally, your new name so = 'so' binds a string to a name that contains the same characters. In other words, you are creating a global whose name and value are equal. As Python interns both identifiers and qualifying constants, you end up using the same string object for both the identifier and its value:
>>> compile("so = 'so'", '<stdin>', 'single').co_names[0] is compile("so = 'so'", '<stdin>', 'single').co_consts[0]
True
If you create strings that are either not code object constants, or contain characters outside of the letters + numbers + underscore range, you'll see the id() value not being reused:
>>> some_var = 'Look ma, spaces and punctuation!'
>>> some_other_var = 'Look ma, spaces and punctuation!'
>>> id(some_var)
4493058384
>>> id(some_other_var)
4493058456
>>> foo = 'Concatenating_' + 'also_helps_if_long_enough'
>>> bar = 'Concatenating_' + 'also_helps_if_long_enough'
>>> foo is bar
False
>>> foo == bar
True
The Python peephole optimizer does pre-calculate the results of simple expressions, but if this results in a sequence longer than 20 the output is ignored (to prevent bloating code objects and memory use); so concatenating shorter strings consisting only of name characters can still lead to interned strings if the result is 20 characters or shorter.
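For completeness, you can also force interning yourself with the intern() built-in (sys.intern() in Python 3), in which case even such strings end up sharing one object:
>>> a = intern('Look ma, spaces and punctuation!')
>>> b = intern('Look ma, spaces and punctuation!')
>>> a is b
True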
|
Pandas read_csv low_memory and dtype options
|
When calling
df = pd.read_csv('somefile.csv')
I get:
/Users/josh/anaconda/envs/py27/lib/python2.7/site-packages/pandas/io/parsers.py:1130:
DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype
option on import or set low_memory=False.
Why is the dtype option related to low_memory, and why would making it False help with this problem?
|
The deprecated low_memory option
The low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently[source]
The reason you get this low_memory warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column.
Dtype Guessing (very bad)
Pandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value.
Consider the example of one file which has a column called user_id.
It contains 10 million rows where the user_id is always numbers.
Since pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file.
Specifying dtypes (should always be done)
adding
dtype={'user_id': int}
to the pd.read_csv() call will make pandas know when it starts reading the file, that this is only integers.
Also worth noting is that if the last line in the file had "foobar" written in the user_id column, the loading would crash if the above dtype was specified.
Example of broken data that breaks when dtypes are defined
import pandas as pd
from StringIO import StringIO
csvdata = """user_id,username
1,Alice
3,Bob
foobar,Caesar"""
sio = StringIO(csvdata)
pd.read_csv(sio, dtype={"user_id": int, "username": object})
ValueError: invalid literal for long() with base 10: 'foobar'
dtypes are typically a numpy thing, read more about them here:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html
Gotchas, caveats, notes
Setting dtype=object will silence the above warning, but will not make it more memory efficient, only process efficient if anything.
Setting dtype=unicode will not do anything, since to numpy, a unicode is represented as object.
Usage of converters
@sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. This is because the read_csv process is a single process.
CSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that python does not support. But this is a different story.
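For illustration, a minimal sketch of the converters route applied to the broken-data example above (the NaN fallback is an arbitrary choice):
def to_int_or_nan(value):
    try:
        return int(value)
    except ValueError:
        return float('nan')

pd.read_csv(StringIO(csvdata), converters={"user_id": to_int_or_nan})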
|
How do you call an instance of a class in Python?
|
This is inspired by a question I just saw, "Change what is returned by calling class instance", but was quickly answered with __repr__ (and accepted, so the questioner did not actually intend to call the instance).
Now calling an instance of a class can be done like this:
instance_of_object = object()
instance_of_object()
but we'll get an error, something like TypeError: 'object' object is not callable.
So to ensure we have this question on Stackoverflow:
How do you actually call an instance of a class in Python?
|
You call an instance of a class as in the following:
o = object() # create our instance
o() # call the instance
But this will typically give us an error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'object' object is not callable
How can we call the instance as intended, and perhaps get something useful out of it?
We have to implement the Python special method __call__!
class Knight(object):
def __call__(self, foo, bar, baz=None):
print(foo)
print(bar)
print(bar)
print(bar)
print(baz)
Instantiate the class:
a_knight = Knight()
Now we can call the class instance:
a_knight('ni!', 'ichi', 'pitang-zoom-boing!')
which prints:
ni!
ichi
ichi
ichi
pitang-zoom-boing!
And we have now actually, and successfully, called an instance of the class!
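As a side note, the callable() built-in tells you up front whether an object supports being called at all:
>>> callable(a_knight)
True
>>> callable(object())
False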
|
Permanently set Python path for Anaconda within Cygwin
|
I'm trying to install Anaconda on my Windows 7 machine. I often use Cygwin for my command-line work, and I would like to manage Anaconda from there. I've worked through the graphical installer without any issues, and checked the necessary boxes to reset my default path to this install of Python. I go ahead to check where python is and initially I get this...
$ which python
/usr/bin/python
From here python works fine...
$ python
Python 2.7.5 (default, Oct 2 2013, 22:34:09)
[GCC 4.8.1] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
But I'm trying to work from anaconda, so I should just need to redefine my path...
$ export PATH=/cygdrive/c/anaconda:$PATH
$ which python
/cygdrive/c/anaconda/python
And now I should be good to go, but when I try and step into python, it just hangs
$ python
Any idea why this might be happening? verbose return, below...
$ python -v
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# C:\anaconda\lib\site.pyc matches C:\anaconda\lib\site.py
import site # precompiled from C:\anaconda\lib\site.pyc
# C:\anaconda\lib\os.pyc matches C:\anaconda\lib\os.py
import os # precompiled from C:\anaconda\lib\os.pyc
import errno # builtin
import nt # builtin
# C:\anaconda\lib\ntpath.pyc matches C:\anaconda\lib\ntpath.py
import ntpath # precompiled from C:\anaconda\lib\ntpath.pyc
# C:\anaconda\lib\stat.pyc matches C:\anaconda\lib\stat.py
import stat # precompiled from C:\anaconda\lib\stat.pyc
# C:\anaconda\lib\genericpath.pyc matches C:\anaconda\lib\genericpath.py
import genericpath # precompiled from C:\anaconda\lib\genericpath.pyc
# C:\anaconda\lib\warnings.pyc matches C:\anaconda\lib\warnings.py
import warnings # precompiled from C:\anaconda\lib\warnings.pyc
# C:\anaconda\lib\linecache.pyc matches C:\anaconda\lib\linecache.py
import linecache # precompiled from C:\anaconda\lib\linecache.pyc
# C:\anaconda\lib\types.pyc matches C:\anaconda\lib\types.py
import types # precompiled from C:\anaconda\lib\types.pyc
# C:\anaconda\lib\UserDict.pyc matches C:\anaconda\lib\UserDict.py
import UserDict # precompiled from C:\anaconda\lib\UserDict.pyc
# C:\anaconda\lib\_abcoll.pyc matches C:\anaconda\lib\_abcoll.py
import _abcoll # precompiled from C:\anaconda\lib\_abcoll.pyc
# C:\anaconda\lib\abc.pyc matches C:\anaconda\lib\abc.py
import abc # precompiled from C:\anaconda\lib\abc.pyc
# C:\anaconda\lib\_weakrefset.pyc matches C:\anaconda\lib\_weakrefset.py
import _weakrefset # precompiled from C:\anaconda\lib\_weakrefset.pyc
import _weakref # builtin
# C:\anaconda\lib\copy_reg.pyc matches C:\anaconda\lib\copy_reg.py
import copy_reg # precompiled from C:\anaconda\lib\copy_reg.pyc
# C:\anaconda\lib\traceback.pyc matches C:\anaconda\lib\traceback.py
import traceback # precompiled from C:\anaconda\lib\traceback.pyc
# C:\anaconda\lib\sysconfig.pyc matches C:\anaconda\lib\sysconfig.py
import sysconfig # precompiled from C:\anaconda\lib\sysconfig.pyc
# C:\anaconda\lib\re.pyc matches C:\anaconda\lib\re.py
import re # precompiled from C:\anaconda\lib\re.pyc
# C:\anaconda\lib\sre_compile.pyc matches C:\anaconda\lib\sre_compile.py
import sre_compile # precompiled from C:\anaconda\lib\sre_compile.pyc
import _sre # builtin
# C:\anaconda\lib\sre_parse.pyc matches C:\anaconda\lib\sre_parse.py
import sre_parse # precompiled from C:\anaconda\lib\sre_parse.pyc
# C:\anaconda\lib\sre_constants.pyc matches C:\anaconda\lib\sre_constants.py
import sre_constants # precompiled from C:\anaconda\lib\sre_constants.pyc
# C:\anaconda\lib\locale.pyc matches C:\anaconda\lib\locale.py
import locale # precompiled from C:\anaconda\lib\locale.pyc
import encodings # directory C:\anaconda\lib\encodings
# C:\anaconda\lib\encodings\__init__.pyc matches C:\anaconda\lib\encodings\__init__.py
import encodings # precompiled from C:\anaconda\lib\encodings\__init__.pyc
# C:\anaconda\lib\codecs.pyc matches C:\anaconda\lib\codecs.py
import codecs # precompiled from C:\anaconda\lib\codecs.pyc
import _codecs # builtin
# C:\anaconda\lib\encodings\aliases.pyc matches C:\anaconda\lib\encodings\aliases.py
import encodings.aliases # precompiled from C:\anaconda\lib\encodings\aliases.pyc
import operator # builtin
# C:\anaconda\lib\functools.pyc matches C:\anaconda\lib\functools.py
import functools # precompiled from C:\anaconda\lib\functools.pyc
import _functools # builtin
import _locale # builtin
# C:\anaconda\lib\encodings\cp1252.pyc matches C:\anaconda\lib\encodings\cp1252.py
import encodings.cp1252 # precompiled from C:\anaconda\lib\encodings\cp1252.pyc
# zipimport: found 13 names in C:\anaconda\lib\site-packages\runipy-0.1.0-py2.7.egg
# zipimport: found 144 names in C:\anaconda\lib\site-packages\setuptools-3.6-py2.7.egg
Python 2.7.7 |Anaconda 2.0.1 (64-bit)| (default, Jun 11 2014, 10:40:02) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and htt
Another (possibly related) issue I'm having is needing to reset the path every time I close/open cygwin. I've entered the following text into .bashrc and .profile to try and set the path permanently:
# Set path to python from anaconda install
export PATH=/cygdrive/c/anaconda:$PATH
After opening and closing cygwin, I return to:
$ which python
/usr/bin/python
Could this be related to setting certain system environment variables?
|
To work with the interactive Python shell in Cygwin I use the -i option.
To get it from the Anaconda install, I used the steps suggested above:
$ export PATH=/cygdrive/c/anaconda:$PATH
$ which python
/cygdrive/c/anaconda/python
Then I launch python within Cygwin with the -i option:
$ python -i
Python 2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Jul 2 2014, 15:12:11) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>>>
The libraries are also working fine. For instance the pandas library (which has been installed through Anaconda) can be imported now.
>>>> import pandas
>>>> pandas.DataFrame
<class 'pandas.core.frame.DataFrame'>
Now to make this permanent I added the path in my bashrc file:
$ echo 'export PATH=/cygdrive/c/anaconda:$PATH' >> .bashrc
$ source .bashrc
$ which python
/cygdrive/c/anaconda/python
I hope this helps.
|
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
|
I usually install Python packages through pip.
For Google App Engine, I need to install packages to another target directory.
So, I've tried
pip install -I flask-restful --target ./lib
but it always fails with
must supply either home or prefix/exec-prefix -- not both
Any good suggestions?
|
Are you using OS X and Homebrew? The Homebrew python page https://github.com/Homebrew/brew/blob/master/docs/Homebrew-and-Python.md calls out a known issue with pip and a workaround.
Worked for me.
You can make this "empty prefix" the default by adding a
~/.pydistutils.cfg file with the following contents:
[install]
prefix=
Edit: Do not use this Homebrew recommended option, it will break normal pip operations.
|
What are the differences between slices and partitions of RDDs?
|
I am using Spark's Python API and running Spark 0.8.
I am storing a large RDD of floating point vectors and I need to perform calculations of one vector against the entire set.
Is there any difference between slices and partitions in an RDD?
When I create the RDD, I pass it 100 as a parameter which causes it to store the RDD as 100 slices and create 100 tasks when performing the calculations. I want to know if partitioning the data would improve performance beyond the slicing by enabling the system to process the data more efficiently (i.e. is there a difference between performing operations over a partition versus over just operating over every element in the sliced RDD).
For example, is there any significant difference between these two pieces of code?
rdd = sc.textFile("demo.txt", 100)
vs
rdd = sc.textFile("demo.txt")
rdd.partitionBy(100)
|
I believe slices and partitions are the same thing in Apache Spark.
However, there is a subtle but potentially significant difference between the two pieces of code you posted.
This code will attempt to load demo.txt directly into 100 partitions using 100 concurrent tasks:
rdd = sc.textFile('demo.txt', 100)
For uncompressed text, it will work as expected. But if instead of demo.txt you had a demo.gz, you will end up with an RDD with only 1 partition. Reads against gzipped files cannot be parallelized.
On the other hand, the following code will first open demo.txt into an RDD with the default number of partitions, then it will explicitly repartition the data into 100 partitions that are roughly equal in size.
rdd = sc.textFile('demo.txt')
rdd = rdd.repartition(100)
So in this case, even with a demo.gz you will end up with an RDD with 100 partitions.
As a side note, I replaced your partitionBy() with repartition() since that's what I believe you were looking for. partitionBy() requires the RDD to be an RDD of tuples. Since repartition() is not available in Spark 0.8.0, you should instead be able to use coalesce(100, shuffle=True).
Spark can run 1 concurrent task for every partition of an RDD, up to the number of cores in your cluster. So if you have a cluster with 50 cores, you want your RDDs to have at least 50 partitions (and probably 2-3 times that).
As of Spark 1.1.0, you can check how many partitions an RDD has as follows:
rdd.getNumPartitions() # Python API
rdd.partitions.size // Scala API
Before 1.1.0, the way to do this with the Python API was rdd._jrdd.splits().size().
|