| instruction | input | output |
|---|---|---|
Why is my object properly removed from a list when __eq__ isn't being called?
|
I have the following code, which is making me scratch my head -
class Element:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

def eq(self, other):
    print('comparing {} to {} ({})'.format(self.name,
                                            other.name,
                                            self.name == other.name))
    return self.name == other.name

Element.__eq__ = eq

elements = [
    Element('a'),
    Element('b'),
    Element('c'),
    Element('d')
]
print('before {}'.format(elements))
elements.remove(elements[3])
print('after {}'.format(elements))
Which yields the following output -
before [a, b, c, d]
comparing a to d (False)
comparing b to d (False)
comparing c to d (False)
after [a, b, c]
Why isn't eq() outputting comparing d to d (True)?
The reason I'm monkey patching __eq__ instead of simply implementing it in my Element class is because I'm testing how monkey patching works before I implement it with one of the libraries I'm using.
|
The fourth element is exactly the same object as the one the code is passing to remove (elements[3]).
In other words:
>>> elements[3] is elements[3]
True
>>> elements[3] == elements[3]
True
So there is no need to check equality, because they are one and the same (identical) object.
Equality check will happen if they are not identical. For example, __eq__ will be called if the code passes another object with the same value:
elements.remove(Element('d'))
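In that case the identity check fails for every element, so __eq__ runs for each one; the output would then look something like this (a sketch of the expected output):
comparing a to d (False)
comparing b to d (False)
comparing c to d (False)
comparing d to d (True)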
|
How does one ignore unexpected keyword arguments passed to a function?
|
Suppose I have some function, f:
def f(a=None):
    print a
Now, if I have a dictionary such as dct = {"a":"Foo"}, I may call f(**dct) and get the result Foo printed.
However, suppose I have a dictionary dct2 = {"a":"Foo", "b":"Bar"}. If I call f(**dct2) I get a
TypeError: f() got an unexpected keyword argument 'b'
Fair enough. However, is there any way to, in the definition of f or in the calling of it, tell Python to just ignore any keys that are not parameter names? Preferably a method that allows defaults to be specified.
|
As an extension to the answer posted by @Bas, I would suggest adding kwargs (variable-length keyword arguments) as the second parameter to the function:
>>> def f(a=None, **kwargs):
...     print a
>>> dct2 = {"a":"Foo", "b":"Bar"}
>>> f(**dct2)
Foo
This suffices for the case of
to just ignore any keys that are not parameter names
However, it lacks the default values of parameters, which is a nice feature to keep.
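If you also need to filter an arbitrary dictionary down to just the keys a function accepts, without changing that function's signature or defaults, one possible sketch in Python 3 uses the inspect module (the helper name call_with_accepted_kwargs is hypothetical):
import inspect

def call_with_accepted_kwargs(func, kwargs):
    # keep only the keys that correspond to func's named parameters
    accepted = inspect.signature(func).parameters
    return func(**{k: v for k, v in kwargs.items() if k in accepted})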
|
multiprocessing.Pool: What's the difference between map_async and imap?
|
I'm trying to learn how to use Python's multiprocessing package, but I don't understand the difference between map_async and imap.
I noticed that both map_async and imap are executed asynchronously. So when should I use one over the other? And how should I retrieve the result returned by map_async?
Should I use something like this?
def test():
    result = pool.map_async()
    pool.close()
    pool.join()
    return result.get()

result = test()
for i in result:
    print i
|
There are two key differences between imap/imap_unordered and map/map_async:
The way they consume the iterable you pass to them.
The way they return the result back to you.
map consumes your iterable by converting the iterable to a list (assuming it isn't a list already), breaking it into chunks, and sending those chunks to the worker processes in the Pool. Breaking the iterable into chunks performs better than passing each item in the iterable between processes one item at a time - particularly if the iterable is large. However, turning the iterable into a list in order to chunk it can have a very high memory cost, since the entire list will need to be kept in memory.
imap doesn't turn the iterable you give it into a list, nor does it break it into chunks (by default). It will iterate over the iterable one element at a time, and send each one to a worker process. This means you don't take the memory hit of converting the whole iterable to a list, but it also means the performance is slower for large iterables, because of the lack of chunking. This can be mitigated by passing a chunksize argument larger than the default of 1, however.
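For example (a sketch; func, the iterable size, and the chunksize value are just placeholders):
# process a large iterable lazily, but still amortize IPC overhead via chunking
for result in pool.imap(func, range(10**6), chunksize=100):
    handle(result)  # handle() is a hypothetical consumer of each result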
The other major difference between imap/imap_unordered and map/map_async is that with imap/imap_unordered, you can start receiving results from workers as soon as they're ready, rather than having to wait for all of them to be finished. With map_async, an AsyncResult is returned right away, but you can't actually retrieve results from that object until all of them have been processed, at which point it returns the same list that map does (map is actually implemented internally as map_async(...).get()). There's no way to get partial results; you either have the entire result, or nothing.
imap and imap_unordered both return iterables right away. With imap, the results will be yielded from the iterable as soon as they're ready, while still preserving the ordering of the input iterable. With imap_unordered, results will be yielded as soon as they're ready, regardless of the order of the input iterable. So, say you have this:
import multiprocessing
import time

def func(x):
    time.sleep(x)
    return x + 2

if __name__ == "__main__":
    p = multiprocessing.Pool()
    start = time.time()
    for x in p.imap(func, [1,5,3]):
        print("{} (Time elapsed: {}s)".format(x, int(time.time() - start)))
This will output:
3 (Time elapsed: 1s)
7 (Time elapsed: 5s)
5 (Time elapsed: 5s)
If you use p.imap_unordered instead of p.imap, you'll see:
3 (Time elapsed: 1s)
5 (Time elapsed: 3s)
7 (Time elapsed: 5s)
If you use p.map or p.map_async().get(), you'll see:
3 (Time elapsed: 5s)
7 (Time elapsed: 5s)
5 (Time elapsed: 5s)
So, the primary reasons to use imap/imap_unordered over map_async are:
Your iterable is large enough that converting it to a list would cause you to run out of/use too much memory.
You want to be able to start processing the results before all of them are completed.
|
Generate temporary file names without creating actual file in Python
|
Question 10501247 on Stack Overflow gives an answer for how to create a temporary file in Python.
In my case, I only need to have the temporary file name.
Calling tempfile.NamedTemporaryFile() returns a file handle after actually creating the file.
Is there a way to get the file name only?
# Trying to get temp file path
tf = tempfile.NamedTemporaryFile()
temp_file_name = tf.name
tf.close()
# Here is my real purpose to get the temp_file_name
f = gzip.open(temp_file_name ,'wb')
...
|
If you want a temp file name only, you can call the internal tempfile function _get_candidate_names():
import tempfile
temp_name = next(tempfile._get_candidate_names())
# e.g. px9cp65s
Calling next() again will return another name, and so on. This does not give you the path to the temp folder. To get the default temporary directory, use:
default_tmp_dir = tempfile._get_default_tempdir()
# results in: /tmp
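If you need a full path rather than just a name, one possible sketch combines the public tempfile.gettempdir() with a candidate name (keeping in mind that _get_candidate_names() is a private API and may change between versions):
import os
import tempfile

# build a full path from the default temp dir and a candidate name
temp_path = os.path.join(tempfile.gettempdir(), next(tempfile._get_candidate_names()))
# e.g. /tmp/px9cp65s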
|
Delete every non utf-8 symbols froms string
|
I have a large number of files and a parser. What I have to do is strip all non-UTF-8 symbols and put the data into MongoDB.
Currently I have code like this.
with open(fname, "r") as fp:
    for line in fp:
        line = line.strip()
        line = line.decode('utf-8', 'ignore')
        line = line.encode('utf-8', 'ignore')
somehow I still get an error
bson.errors.InvalidStringData: strings in documents must be valid UTF-8:
1/b62010montecassianomcir\xe2\x86\x90ta0\xe2\x86\x90008923304320733/290066010401040101506055soccorin
I don't get it. Is there some simple way to do it?
UPD: it seems that Python and Mongo don't agree on the definition of a valid UTF-8 string.
|
Try the line below instead of the last two lines. Hope it helps:
line=line.decode('utf-8','ignore').encode("utf-8")
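Applied to the loop from the question, that would look roughly like this (decoding with 'ignore' drops the invalid byte sequences, so re-encoding the resulting unicode string always yields valid UTF-8):
with open(fname, "r") as fp:
    for line in fp:
        line = line.strip()
        # decode drops bad bytes; the re-encode is then guaranteed valid UTF-8
        line = line.decode('utf-8', 'ignore').encode('utf-8')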
|
Is there a way to delete created variables, functions, etc from the memory of the interpreter?
|
I've been searching for the accurate answer to this question for a couple of days now but haven't got anything good. I'm not a complete beginner in programming, but not yet even on the intermediate level.
When I'm in the shell of Python, I type: dir() and I can see all the names of all the objects in the current scope (main block), there are 6 of them:
['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__']
Then, when I declare a variable, for example x = 10, it is automatically added to that list of objects, and when I type dir() again it now shows:
['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'x']
The same goes for functions, classes and so on.
How do I delete all those new objects without erasing the standard 6 which were available at the beginning?
I've read here about "memory cleaning", "cleaning of the console", which erases all the text from the command prompt window:
>>> import os
>>> clear = lambda: os.system('cls')
>>> clear()
But all this has nothing to do with what I'm trying to achieve, it doesn't clean out all used objects.
|
You can delete individual names with del:
del x
or you can remove them from the globals() object:
for name in dir():
if not name.startswith('_'):
del globals()[name]
This is just an example loop; it defensively only deletes names that do not start with an underscore, making a (not unreasoned) assumption that you only used names without an underscore at the start in your interpreter. You could use a hard-coded list of names to keep instead (whitelisting) if you really wanted to be thorough. There is no built-in function to do the clearing for you, other than just exit and restart the interpreter.
Modules you've imported (import os) are going to remain imported because they are referenced by sys.modules; subsequent imports will reuse the already imported module object. You just won't have a reference to them in your current global namespace.
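A sketch of the whitelisting variant mentioned above, keeping only the six names listed in the question (anything not on the list is assumed to be yours and gets deleted):
keep = {'__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'}
for name in list(globals()):
    if name not in keep:
        del globals()[name]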
|
How to check if a pymongo cursor has query results
|
I need to check if a find statement returns a non-empty query.
What I was doing was the following:
query = collection.find({"string": field})
if not query: #do something
Then I realized that my if statement was never executed, because find() returns a cursor whether or not the query result is empty.
Therefore I checked the documentation and I find two methods that can help me:
count(with_limit_and_skip=False) which (from the description):
Returns the number of documents in the results set for this query.
It seems like a good way to check, but does this mean that I need to count
all the results in the cursor just to know whether it is zero or not? A little bit expensive?
retrieved which (from the description):
The number of documents retrieved so far.
I tested it on an empty query set and it returns zero, but it's not
clear what it does and I don't know if it's right for me.
So, which is the best way (best practice) to check if a find() query returns an empty set or not? Is one of the methods described above right for this purpose? And what about performance? Are there other ways to do it?
Just to be clear: I need to know if the query is empty and I'd like to find the best way with the cursor with respect to performance and being pythonic.
|
.count() is the correct way to find the number of results that are returned in the query. The count() method does not exhaust the iterator for your cursor, so you can safely do a .count() check before iterating over the items in the result set.
Performance of the count method was greatly improved in MongoDB 2.4. The only thing that could slow down your count is whether or not the query has an index on it. To find out if you have an index on the query, you can do something like
query = collection.find({"string": field})
print query.explain()
If you see BasicCursor in the result, you need an index on your string field for this query.
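So a minimal emptiness check could look like this (a sketch; note that newer PyMongo versions also offer collection.count_documents(filter) for the same purpose):
query = collection.find({"string": field})
if query.count() == 0:
    # no matching documents
    handle_empty_result()  # hypothetical handler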
|
Selenium Python: how to wait until the page is loaded?
|
I want to scrape all the data of a page implemented by a infinite scroll. The following python code works.
for i in range(100):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(5)
This means every time I scroll down to the bottom, I need to wait for 5 seconds, which is generally enough for the page to finish loading the newly generated contents. But this may not be time efficient: the page may finish loading the new contents within 5 seconds. How can I detect whether the page has finished loading the new contents every time I scroll down? If I can detect this, I can scroll down again to see more contents as soon as the loading finishes. This is more time efficient.
|
As @user227215 said you should use WebDriverWait to wait for an element located in your page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
browser = webdriver.Firefox()
browser.get("url")
delay = 3 # seconds
try:
    WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.ID, 'IdOfMyElement')))
    print "Page is ready!"
except TimeoutException:
    print "Loading took too much time!"
I have used it for checking alerts. You can use any of the other locator strategies (By.CLASS_NAME, By.XPATH, etc.) to find your element.
EDIT 1:
I should mention that the webdriver will wait for a page to load by default. It does not wait for loading inside frames or for AJAX requests. It means that when you use .get('url'), your browser will wait until the page is completely loaded and then go on to the next command in the code. But when you are posting an AJAX request, webdriver does not wait, and it's your responsibility to wait an appropriate amount of time for the page, or a part of the page, to load; that is what the expected_conditions module is for.
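For the infinite-scroll case from the question, a possible sketch is to wait until document.body.scrollHeight grows instead of sleeping a fixed 5 seconds (driver is the question's webdriver instance; the 10-second timeout is an arbitrary choice):
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    try:
        # WebDriverWait also accepts any callable that takes the driver as its argument
        WebDriverWait(driver, 10).until(
            lambda d: d.execute_script("return document.body.scrollHeight") > last_height)
    except TimeoutException:
        break  # no new content was loaded, assume we reached the end
    last_height = driver.execute_script("return document.body.scrollHeight")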
|
Resource u'tokenizers/punkt/english.pickle' not found
|
My Code:
import nltk.data
tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')
ERROR Message:
[ec2-user@ip-172-31-31-31 sentiment]$ python mapper_local_v1.0.py
Traceback (most recent call last):
File "mapper_local_v1.0.py", line 16, in <module>
tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')
File "/usr/lib/python2.6/site-packages/nltk/data.py", line 774, in load
opened_resource = _open(resource_url)
File "/usr/lib/python2.6/site-packages/nltk/data.py", line 888, in _open
return find(path_, path + ['']).open()
File "/usr/lib/python2.6/site-packages/nltk/data.py", line 618, in find
raise LookupError(resource_not_found)
LookupError:
Resource u'tokenizers/punkt/english.pickle' not found. Please
use the NLTK Downloader to obtain the resource:
>>>nltk.download()
Searched in:
- '/home/ec2-user/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- u''
I'm trying to run this program on a Unix machine.
As per the error message, I opened a Python shell on my Unix machine and ran the commands below:
import nltk
nltk.download()
and then I downloaded everything available using the d (download) and l (list) options, but the problem still persists.
I tried my best to find a solution on the internet, but all I found were the same steps I already described above.
|
To add to alvas' answer, you can download only the punkt corpus:
nltk.download('punkt')
Downloading all sounds like overkill to me. Unless that's what you want.
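If the downloaded data is not being picked up (for example on a server where NLTK searches the paths listed in the error), a sketch that downloads punkt to one of those searched locations explicitly:
import nltk

# /home/ec2-user/nltk_data is one of the paths listed in the error message
nltk.download('punkt', download_dir='/home/ec2-user/nltk_data')
nltk.data.path.append('/home/ec2-user/nltk_data')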
|
How can I get 2.x-like sorting behaviour in Python 3.x?
|
I'm trying to replicate (and if possible improve on) Python 2.x's sorting behaviour in 3.x, so that mutually orderable types like int, float etc. are sorted as expected, and mutually unorderable types are grouped within the output.
Here's an example of what I'm talking about:
>>> sorted([0, 'one', 2.3, 'four', -5]) # Python 2.x
[-5, 0, 2.3, 'four', 'one']
>>> sorted([0, 'one', 2.3, 'four', -5]) # Python 3.x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: str() < int()
My previous attempt at this, using a class for the key parameter to sorted() (see
Why does this key class for sorting heterogeneous sequences behave oddly?) is fundamentally broken, because its approach of
Trying to compare values, and
If that fails, falling back to comparing the string representation of their types
can lead to intransitive ordering, as explained by BrenBarn's excellent answer.
A naïve approach, which I initially rejected without even trying to code it, would be to use a key function that returns a (type, value) tuple:
def motley(value):
    return repr(type(value)), value
However, this doesn't do what I want. In the first place, it breaks the natural ordering of mutually orderable types:
>>> sorted([0, 123.4, 5, -6, 7.89])
[-6, 0, 5, 7.89, 123.4]
>>> sorted([0, 123.4, 5, -6, 7.89], key=motley)
[7.89, 123.4, -6, 0, 5]
Secondly, it raises an exception when the input contains two objects of the same intrinsically unorderable type:
>>> sorted([{1:2}, {3:4}], key=motley)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: dict() < dict()
... which admittedly is the standard behaviour in both Python 2.x and 3.x, but ideally I'd like such types to be grouped together (I don't especially care about their ordering, but it would seem in keeping with Python's guarantee of stable sorting that they retain their original order).
I can work around the first of these problems for numeric types by special-casing them:
from numbers import Real
from decimal import Decimal
def motley(value):
    numeric = Real, Decimal
    if isinstance(value, numeric):
        typeinfo = numeric
    else:
        typeinfo = type(value)
    return repr(typeinfo), value
... which works as far as it goes:
>>> sorted([0, 'one', 2.3, 'four', -5], key=motley)
[-5, 0, 2.3, 'four', 'one']
... but doesn't account for the fact that there may be other distinct (possibly user-defined) types which are mutually orderable, and of course still fails with intrinsically unorderable types:
>>> sorted([{1:2}, {3:4}], key=motley)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: dict() < dict()
Is there another approach which solves both the problem of arbitrary, distinct-but-mutually-orderable types and that of intrinsically unorderable types?
|
The actual Python 2 implementation is quite involved, but object.c's default_3way_compare does the final fallback after instances have been given a chance to implement normal comparison rules.
Implementing that function as pure Python in a wrapper gives us the same sorting semantics in Python 3:
from numbers import Number

# decorator for type to function mapping special cases
def per_type_cmp(type_):
    try:
        mapping = per_type_cmp.mapping
    except AttributeError:
        mapping = per_type_cmp.mapping = {}
    def decorator(cmpfunc):
        mapping[type_] = cmpfunc
        return cmpfunc
    return decorator

class python2_sort_key(object):
    _unhandled_types = {complex}

    def __init__(self, ob):
        self._ob = ob

    def __lt__(self, other):
        self, other = self._ob, other._ob  # we don't care about the wrapper
        # default_3way_compare is used only if direct comparison failed
        try:
            return self < other
        except TypeError:
            pass
        # special casing for types
        for type_, special_cmp in per_type_cmp.mapping.items():
            if isinstance(self, type_) and isinstance(other, type_):
                return special_cmp(self, other)
        # explicitly raise again, Python 2 won't sort these either
        if type(self) in python2_sort_key._unhandled_types:
            raise TypeError('no ordering relation is defined for {}'.format(
                type(self).__name__))
        if type(other) in python2_sort_key._unhandled_types:
            raise TypeError('no ordering relation is defined for {}'.format(
                type(other).__name__))
        # same type but no ordering defined, go by id
        if type(self) is type(other):
            return id(self) < id(other)
        # None always comes first
        if self is None:
            return True
        if other is None:
            return False
        # Sort by typename, but numbers are sorted before other types
        self_tname = '' if isinstance(self, Number) else type(self).__name__
        other_tname = '' if isinstance(other, Number) else type(other).__name__
        if self_tname != other_tname:
            return self_tname < other_tname
        # same typename, or both numbers, but different type objects, order
        # by the id of the type object
        return id(type(self)) < id(type(other))

@per_type_cmp(dict)
def dict_cmp(a, b):
    if len(a) != len(b):
        return len(a) < len(b)
    adiff = min(k for k in a if a.get(k) != b.get(k))
    bdiff = min(k for k in b if b.get(k) != a.get(k))
    if adiff != bdiff:
        return adiff < bdiff
    return a[adiff] < b[bdiff]
I incorporated handling dictionary sorting as implemented in Python 2, since that'd be supported by the type itself.
I've also added special casing for complex numbers, as Python 2 raises an exception when you try to sort these:
>>> sorted([0.0, 1, (1+0j), False, (2+3j)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: no ordering relation is defined for complex numbers
You may have to add more special cases if you want to emulate Python 2 behaviour exactly.
If you wanted to sort complex numbers anyway you'll need to consistently put them with the non-numbers group; e.g.:
# Sort by typename, but numbers are sorted before other types
if isinstance(self, Number) and not isinstance(self, complex):
    self_tname = ''
else:
    self_tname = type(self).__name__
if isinstance(other, Number) and not isinstance(other, complex):
    other_tname = ''
else:
    other_tname = type(other).__name__
Some test cases:
>>> sorted([0, 'one', 2.3, 'four', -5], key=python2_sort_key)
[-5, 0, 2.3, 'four', 'one']
>>> sorted([0, 'one', 2.3, 'four', -5], key=python2_sort_key)
[-5, 0, 2.3, 'four', 'one']
>>> sorted([0, 123.4, 5, -6, 7.89], key=python2_sort_key)
[-6, 0, 5, 7.89, 123.4]
>>> sorted([{1:2}, {3:4}], key=python2_sort_key)
[{1: 2}, {3: 4}]
>>> sorted([{1:2}, None, {3:4}], key=python2_sort_key)
[None, {1: 2}, {3: 4}]
|
Can't install Scipy through pip
|
When installing scipy through pip with :
pip install scipy
Pip fails to build scipy and throws the following error:
Cleaning up...
Command /Users/administrator/dev/KaggleAux/env/bin/python2.7 -c "import setuptools, tokenize;__file__='/Users/administrator/dev/KaggleAux/env/build/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/zl/7698ng4d4nxd49q1845jd9340000gn/T/pip-eO8gua-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/administrator/dev/KaggleAux/env/bin/../include/site/python2.7 failed with error code 1 in /Users/administrator/dev/KaggleAux/env/build/scipy
Storing debug log for failure in /Users/administrator/.pip/pip.log
How can I get scipy to build successfully? This may be a new issue with OSX Yosemite since I just upgraded and haven't had issues installing scipy before.
Debug log:
Cleaning up...
Removing temporary dir /Users/administrator/dev/KaggleAux/env/build...
Command /Users/administrator/dev/KaggleAux/env/bin/python2.7 -c "import setuptools, tokenize;__file__='/Users/administrator/dev/KaggleAux/env/build/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/zl/7698ng4d4nxd49q1845jd9340000gn/T/pip-eO8gua-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/administrator/dev/KaggleAux/env/bin/../include/site/python2.7 failed with error code 1 in /Users/administrator/dev/KaggleAux/env/build/scipy
Exception information:
Traceback (most recent call last):
File "/Users/administrator/dev/KaggleAux/env/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/Users/administrator/dev/KaggleAux/env/lib/python2.7/site-packages/pip/commands/install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/Users/administrator/dev/KaggleAux/env/lib/python2.7/site-packages/pip/req.py", line 1435, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "/Users/administrator/dev/KaggleAux/env/lib/python2.7/site-packages/pip/req.py", line 706, in install
cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
File "/Users/administrator/dev/KaggleAux/env/lib/python2.7/site-packages/pip/util.py", line 697, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command /Users/administrator/dev/KaggleAux/env/bin/python2.7 -c "import setuptools, tokenize;__file__='/Users/administrator/dev/KaggleAux/env/build/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/zl/7698ng4d4nxd49q1845jd9340000gn/T/pip-eO8gua-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/administrator/dev/KaggleAux/env/bin/../include/site/python2.7 failed with error code 1 in /Users/administrator/dev/KaggleAux/env/build/scipy
|
After opening up an issue with the SciPy team, we found that you need to upgrade pip with:
pip install --upgrade pip
And in Python 3 this works:
python3 -m pip install --upgrade pip
for SciPy to install properly. Why? Because:
Older versions of pip have to be told to use wheels, IIRC with --use-wheel. Or you can upgrade pip itself, then it should pick up the wheels.
Upgrading pip solves the issue, but you might be able to just use the --use-wheel flag as well.
|
Why is Flask application not creating any logs when hosted by Gunicorn?
|
I'm trying to add logging to a web application which uses Flask.
When hosted using the built-in server (i.e. python3 server.py), logging works. When hosted using Gunicorn, the log file is not created.
The simplest code which reproduces the problem is this one:
#!/usr/bin/env python
import logging
from flask import Flask

flaskApp = Flask(__name__)

@flaskApp.route('/')
def index():
    flaskApp.logger.info('Log message')
    print('Direct output')
    return 'Hello World\n'

if __name__ == "__main__":
    logHandler = logging.FileHandler('/var/log/demo/app.log')
    logHandler.setLevel(logging.INFO)
    flaskApp.logger.addHandler(logHandler)
    flaskApp.logger.setLevel(logging.INFO)
    flaskApp.run()
The application is called using:
gunicorn server:flaskApp -b :80 -w 4 \
    --access-logfile /var/log/demo/access.log \
    --error-logfile /var/log/demo/error.log
When doing a request to the home page of the site, the following happens:
I receive the expected HTTP 200 "Hello World\n" in response.
There is a trace of the request in /var/log/demo/access.log.
/var/log/demo/error.log stays the same (there are just the boot events).
There is the "Direct output" line in the terminal.
There is no '/var/log/demo/app.log'. If I create the file prior to launching the application, the file is not modified.
Note that:
The directory /var/log/demo can be accessed (read, write, execute) by everyone, so this is not the permissions issue.
If I add a StreamHandler as a second handler, there is still no trace of the "Log message" message, either in the terminal or in the Gunicorn log files.
Gunicorn is installed using pip3 install gunicorn, so there shouldn't be any mismatch with Python versions.
What's happening?
|
When you use python3 server.py you are running the server.py script.
When you use gunicorn server:flaskApp ... you are running the Gunicorn startup script, which then imports the module server and looks for the variable flaskApp in that module.
Since server.py is being imported, the __name__ var will contain "server", not "__main__", and therefore your log handler setup code is not being run.
You could simply move the log handler setup code outside of the if __name__ == "__main__": stanza. But ensure that you keep flaskApp.run() in there since you do not want that to be run when gunicorn imports server.
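Based on the code in the question, that would look roughly like this (the handler setup now runs at import time, so it also runs when Gunicorn imports the module):
logHandler = logging.FileHandler('/var/log/demo/app.log')
logHandler.setLevel(logging.INFO)
flaskApp.logger.addHandler(logHandler)
flaskApp.logger.setLevel(logging.INFO)

if __name__ == "__main__":
    flaskApp.run()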
More about What does if __name__ == "__main__": do?
|
Return and yield in the same function
|
What exactly happens, when yield and return are used in the same function in Python, like this?
def find_all(a_str, sub):
    start = 0
    while True:
        start = a_str.find(sub, start)
        if start == -1: return
        yield start
        start += len(sub)  # use start += 1 to find overlapping matches
Is it still a generator?
|
Yes, it's still a generator. The return is (almost) equivalent to raising StopIteration.
PEP 255 spells it out:
Specification: Return
A generator function can also contain return statements of the form:
"return"
Note that an expression_list is not allowed on return statements in
the body of a generator (although, of course, they may appear in the
bodies of non-generator functions nested within the generator).
When a return statement is encountered, control proceeds as in any
function return, executing the appropriate finally clauses (if any
exist). Then a StopIteration exception is raised, signalling that the
iterator is exhausted. A StopIteration exception is also raised if
control flows off the end of the generator without an explicit return.
Note that return means "I'm done, and have nothing interesting to
return", for both generator functions and non-generator functions.
Note that return isn't always equivalent to raising StopIteration:
the difference lies in how enclosing try/except constructs are
treated. For example,
>>> def f1():
...     try:
...         return
...     except:
...         yield 1
>>> print list(f1())
[]
because, as in any function, return simply exits, but
>>> def f2():
...     try:
...         raise StopIteration
...     except:
...         yield 42
>>> print list(f2())
[42]
because StopIteration is captured by a bare "except", as is any
exception.
|
Preserve custom attributes when pickling subclass of numpy array
|
I've created a subclass of numpy ndarray following the numpy documentation. In particular, I have added a custom attribute by modifying the code provided.
I'm manipulating instances of this class within a parallel loop, using Python multiprocessing. As I understand it, the way that the scope is essentially 'copied' to multiple threads is using pickle.
The problem I am now coming up against relates to the way that numpy arrays are pickled. I can't find any comprehensive documentation about this, but some discussions between the dill developers suggest that I should be focusing on the __reduce__ method, which is being called upon pickling.
Can anyone shed any more light on this? The minimal working example is really just the numpy example code I linked to above, copied here for completeness:
import numpy as np

class RealisticInfoArray(np.ndarray):
    def __new__(cls, input_array, info=None):
        # Input array is an already formed ndarray instance
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        # add the new attribute to the created instance
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # see InfoArray.__array_finalize__ for comments
        if obj is None: return
        self.info = getattr(obj, 'info', None)
Now here is the problem:
import pickle
obj = RealisticInfoArray([1, 2, 3], info='foo')
print obj.info # 'foo'
pickle_str = pickle.dumps(obj)
new_obj = pickle.loads(pickle_str)
print new_obj.info # raises AttributeError
Thanks.
|
np.ndarray uses __reduce__ to pickle itself. We can take a look at what it actually returns when you call that function to get an idea of what's going on:
>>> obj = RealisticInfoArray([1, 2, 3], info='foo')
>>> obj.__reduce__()
(<built-in function _reconstruct>, (<class 'pick.RealisticInfoArray'>, (0,), 'b'), (1, (3,), dtype('int64'), False, '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00'))
So, we get a 3-tuple back. The docs for __reduce__ describe what each element is doing:
When a tuple is returned, it must be between two and five elements
long. Optional elements can either be omitted, or None can be provided
as their value. The contents of this tuple are pickled as normal and
used to reconstruct the object at unpickling time. The semantics of
each element are:
A callable object that will be called to create the initial version of
the object. The next element of the tuple will provide arguments for
this callable, and later elements provide additional state information
that will subsequently be used to fully reconstruct the pickled data.
In the unpickling environment this object must be either a class, a
callable registered as a "safe constructor" (see below), or it must
have an attribute __safe_for_unpickling__ with a true value.
Otherwise, an UnpicklingError will be raised in the unpickling
environment. Note that as usual, the callable itself is pickled by
name.
A tuple of arguments for the callable object.
Optionally, the object's state, which will be passed to the object's
__setstate__() method as described in section Pickling and unpickling normal class instances. If the object has no __setstate__() method,
then, as above, the value must be a dictionary and it will be added to
the object's __dict__.
So, _reconstruct is the function called to rebuild the object, (<class 'pick.RealisticInfoArray'>, (0,), 'b') are the arguments passed to that function, and (1, (3,), dtype('int64'), False, '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00')) gets passed to the class' __setstate__. This gives us an opportunity; we could override __reduce__ and provide our own tuple to __setstate__, and then additionally override __setstate__, to set our custom attribute when we unpickle. We just need to make sure we preserve all the data the parent class needs, and call the parent's __setstate__, too:
class RealisticInfoArray(np.ndarray):
    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        if obj is None: return
        self.info = getattr(obj, 'info', None)

    def __reduce__(self):
        # Get the parent's __reduce__ tuple
        pickled_state = super(RealisticInfoArray, self).__reduce__()
        # Create our own tuple to pass to __setstate__
        new_state = pickled_state[2] + (self.info,)
        # Return a tuple that replaces the parent's __setstate__ tuple with our own
        return (pickled_state[0], pickled_state[1], new_state)

    def __setstate__(self, state):
        self.info = state[-1]  # Set the info attribute
        # Call the parent's __setstate__ with the other tuple elements.
        super(RealisticInfoArray, self).__setstate__(state[0:-1])
Usage:
>>> obj = pick.RealisticInfoArray([1, 2, 3], info='foo')
>>> pickle_str = pickle.dumps(obj)
>>> pickle_str
"cnumpy.core.multiarray\n_reconstruct\np0\n(cpick\nRealisticInfoArray\np1\n(I0\ntp2\nS'b'\np3\ntp4\nRp5\n(I1\n(I3\ntp6\ncnumpy\ndtype\np7\n(S'i8'\np8\nI0\nI1\ntp9\nRp10\n(I3\nS'<'\np11\nNNNI-1\nI-1\nI0\ntp12\nbI00\nS'\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x03\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\np13\nS'foo'\np14\ntp15\nb."
>>> new_obj = pickle.loads(pickle_str)
>>> new_obj.info
'foo'
|
Reset ipython kernel
|
I was wondering if there is a way to restart the ipython kernel without closing it, like the kernel restart function that exists in the notebook. I tried %reset but that doesn't seem to clear the imports.
|
Even though it would be handy if %reset would clear the namespace and the cache for the imports (as in the notebook) one can explicitly reload a previously imported module using importlib.reload in python3.4 or imp.reload in python3.0-3.3 (and if needed reset the kernel in a second step).
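A minimal sketch (mymodule is just a placeholder for whatever module you imported earlier):
import importlib
import mymodule

importlib.reload(mymodule)  # re-executes the module's code in place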
|
Python Pandas: How to get the row names from index of a dataframe?
|
So assume I have a dataframe with rownames that aren't a column of their own per se such as the following:
X Y
Row 1 0 5
Row 2 8 1
Row 3 3 0
How would I extract these row names as a list, if I have their index?
For example, it would look something like:
function_name(dataframe[indices])
['Row 1' 'Row 2']
Thanks for your help!
|
df.index
That will output the row names as pandas Index object. You can get it as a pure list via:
list(df.index)
Finally, the index also supports positional indexing and slicing. E.g.:
df.index[1:3], df.index[[0, 1]], etc...
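For example, with the DataFrame from the question and positional indices [0, 1] (a small sketch):
df.index[[0, 1]].tolist()
# ['Row 1', 'Row 2']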
|
Pandas join issue: columns overlap but no suffix specified
|
I have following 2 data frames:
df_a =
mukey DI PI
0 100000 35 14
1 1000005 44 14
2 1000006 44 14
3 1000007 43 13
4 1000008 43 13
df_b =
mukey niccdcd
0 190236 4
1 190237 6
2 190238 7
3 190239 4
4 190240 7
When I try to join these 2 dataframes:
join_df = LS_sgo.join(MSU_pi,on='mukey',how='left')
I get the error:
*** ValueError: columns overlap but no suffix specified: Index([u'mukey'], dtype='object')
Why is this so? The dataframes do have common 'mukey' values.
|
Your error is a little cryptic. On the snippet of data you posted, the 'mukey' values don't actually overlap, so the joined columns come back as NaN; and because the 'mukey' column exists in both DataFrames, join requires you to supply a suffix for the left- and right-hand sides:
In [173]:
df_a.join(df_b, on='mukey', how='left', lsuffix='_left', rsuffix='_right')
Out[173]:
mukey_left DI PI mukey_right niccdcd
index
0 100000 35 14 NaN NaN
1 1000005 44 14 NaN NaN
2 1000006 44 14 NaN NaN
3 1000007 43 13 NaN NaN
4 1000008 43 13 NaN NaN
merge works because it doesn't have this restriction:
In [176]:
df_a.merge(df_b, on='mukey', how='left')
Out[176]:
mukey DI PI niccdcd
0 100000 35 14 NaN
1 1000005 44 14 NaN
2 1000006 44 14 NaN
3 1000007 43 13 NaN
4 1000008 43 13 NaN
|
NumPy array is not JSON serializable
|
After creating a NumPy array, and saving it as a Django context variable, I receive the following error when loading the webpage:
array([ 0, 239, 479, 717, 952, 1192, 1432, 1667], dtype=int64) is not JSON serializable
What does this mean?
|
I regularly "jsonify" np.arrays. Try using the ".tolist()" method on the arrays first, like this:
import numpy as np
import codecs, json
a = np.arange(10).reshape(2,5) # a 2 by 5 array
b = a.tolist() # nested lists with same data, indices
file_path = "/path.json" ## your path variable
json.dump(b, codecs.open(file_path, 'w', encoding='utf-8'), separators=(',', ':'), sort_keys=True, indent=4) ### this saves the array in .json format
In order to "unjsonify" the array use:
obj_text = codecs.open(file_path, 'r', encoding='utf-8').read()
b_new = json.loads(obj_text)
a_new = np.array(b_new)
|
How to show PIL Image in ipython notebook
|
This is my code
from PIL import Image
pil_im = Image.open('data/empire.jpg')
I would like to do some image manipulation on it, and then show it on screen.
I am having a problem showing a PIL Image in an IPython notebook.
I have tried:
print pil_im
And just
pil_im
But both just give me:
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=569x800 at 0x10ECA0710>
|
You can use IPython's display module to load and show the image. You can read more in the documentation.
from IPython.display import Image
Image(filename='data/empire.jpg')
Updated
As the OP's requirement is to use PIL, if you want to show the image inline, you can use matplotlib.pyplot.imshow with numpy.asarray like this too:
from matplotlib.pyplot import imshow
import numpy as np
from PIL import Image
%matplotlib inline
pil_im = Image.open('data/empire.jpg', 'r')
imshow(np.asarray(pil_im))
If you only require a preview rather than an inline image, you may just use show like this:
pil_im = Image.open('data/empire.jpg', 'r')
pil_im.show()
|
How to do Python package management?
|
Coming from a Node.js + npm background, it is really nightmarish trying to understand all the things related to Python package management. After a few hours of research, I've stumbled upon all those keywords:
easy_install
virtualenv
pip
setuptools
distutils
pypi
wheel
egg
site-packages
Can someone help me decipher those terms and put them in historical context? For example, "distutils was the first package manager but it was superseded by X in Y because Z".
I absolutely love Python (the language) but package management seems like a real nightmare to learn for someone who has been using the amazing npm for the last few years.
|
Types of Packages
Egg vs Wheel vs Neither. What's meant by neither is that a python package can be installed from its "source" without being packaged as an egg or wheel.
Packaging Utilities
There are several libraries which provide utilities for packaging python applications, including distutils and setuptools. There is already an excellent post on this.
easy_install
Part of setuptools, allows building and installing python packages. Often discouraged in favor of Pip. Designed to make installation of packages easy, doing the chore of downloading and moving them to the correct place for you (hence the name).
Pip
A package manager for python packages, and a replacement for easy_install! See here for some reasons why people prefer it over easy_install. Can do neat things like install a package directly from a git repository or compile C extensions on the target machine. The latter is debatable as to whether or not it's desirable, but nonetheless it's a nice feature to have if you want it.
PyPI
The python package index, where easy_install and Pip search for available packages, by default. Basically a giant online repository of modules that are accepted by the community.
virtualenv
A way of hacking your environment variables to "isolate" an installation of Python and its related modules. Prefers pip, because Ian Bicking wrote them both. Basically, you use pip to install virtualenv system-wide, which then allows you to create Python virtual environments, each with their own copy of python, pip, and assorted modules. This lets you have multiple versions of Python, or install a module just for testing, without mucking up your system-wide Python install.
virtualenvwrapper
A really handy shell script that makes creating and tearing down virtual environments easier.
site-packages
One of the supported locations for installing python modules into. Lives someplace like /usr/lib/pythonX.X/site-packages. There are other supported locations, like dist-packages or user specific locations.
What does all this mean for you?
I'd recommend you don't pay any attention to easy_install and just use pip. Please also always use virtualenv. Usually, the only python modules you should install system-wide on your workstation are pip and virtualenv. I've completely ignored eggs and wheels, but if you plan to distribute packages professionally or host them on PyPI, you probably want to investigate those. Also, if you are creating python packages, you will need to learn to write a setup script, with setuptools. My recommendation is to never use distutils.
Some more Reading
A page on python.org about packaging which covers a lot of these topics
Python packaging is a nightmare
A great post that goes against the most common recommendations, including mine!
|
How do I print the key-value pairs of a dictionary in python
|
I want to output my key value pairs from a python dictionary as such:
key1 \t value1
key2 \t value2
I thought I could maybe do it like this:
for i in d:
    print d.keys(i), d.values(i)
but obviously that's not how it goes as the keys() and values() don't take an argument...
Thanks.
|
Your existing code just needs a little tweak. i is the key, so you would just need to use it:
for i in d:
    print i, d[i]
You can also get an iterator that contains both keys and values. In Python 2, d.items() returns a list of (key, value) tuples, while d.iteritems() returns an iterator that provides the same:
for k, v in d.iteritems():
    print k, v
In Python 3, d.items() returns the iterator; to get a list, you need to pass the iterator to list() yourself.
for k, v in d.items():
    print(k, v)
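To get exactly the tab-separated output asked for in the question:
for k, v in d.items():
    print('{}\t{}'.format(k, v))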
|
Why can I not create a wheel in python?
|
Here are the commands I am running:
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
Also, I am running a mac with homebrewed python
Here is my setup.py script:
https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
Install the wheel package first:
pip install wheel
The documentation isn't overly clear on this, but "the wheel project provides a bdist_wheel command for setuptools" actually means "the wheel package...".
|
Install pywin32 with pip in Windows 7 does not work in python 3.4.2
|
Hi everybody,
I've tried to install pywin32 via pip (1.5.6) with Python 3.4.2 under Windows 7, but I always get the following error message:
Could not find any downloads that satisfy the requirement pywin32
Some externally hosted files were ignored (use --allow-external pywin32 to all
ow).
Cleaning up...
No distributions at all found for pywin32
Storing debug log for failure in C:\Users\tonka\pip\pip.log
All other installations with pip work absolutely fine. On pypi.org there is a package for pywin32, so I don't really understand why this is happening.
I've also tried the --allow-external flag, but then I got the following error:
Could not find any downloads that satisfy the requirement pywin32
Some insecure and unverifiable files were ignored (use --allow-unverified pywi
n32 to allow).
Cleaning up...
No distributions at all found for pywin32
Storing debug log for failure in C:\Users\tonka\pip\pip.log
Then I used the --allow-unverified flag and I get this error:
Could not find any downloads that satisfy the requirement pywin32
Cleaning up...
No distributions at all found for pywin32
Storing debug log for failure in C:\Users\tonka\pip\pip.log
Is there a package in pip for pywin32 for 3.4? pypi-link
Does anybody have the same problem/behavior?
Edit:
For anybody searching for a way to install pywin32 via pip: use the package pypiwin32 instead of pywin32. It is the same package, but it is pip-installable.
Greetings
Tonka
|
In answer to your first question, it appears that there is not a pip pywin32 package for Python 3.4. If you take a look at this page: https://pypi.python.org/simple/pywin32/, you can see that there is a 214_download_url. This indicates that version 214 is the latest available, and the 214 download directory here does not contain an installer for anything later than Python 3.1.
In answer to your second question, yes, I see the same problem/behaviour with python 3.4.1. As mentioned by @sk11 in a comment, you can download an installer from here. (I can't add a comment because I don't have the rep.) I've just tried the installer and it appears to work fine.
|
Django Model() vs Model.objects.create()
|
What is the difference between running these two commands:
foo = FooModel()
and
bar = BarModel.objects.create()
Does the second one immediately create a BarModel in the database, while for FooModel the save() method has to be called explicitly to add it?
|
https://docs.djangoproject.com/en/dev/topics/db/queries/#creating-objects
To create and save an object in a single step, use the create() method.
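In other words, yes: create() builds the instance and saves it in one step. Roughly (a sketch, not Django's exact implementation; the field name and value are placeholders):
bar = BarModel.objects.create(name='x')
# is roughly equivalent to
foo = BarModel(name='x')
foo.save(force_insert=True)  # hits the database immediately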
|
Matplotlib: ValueError: x and y must have same first dimension
|
I am trying to fit a linear line of best fit to my matplotlib graph. I keep getting the error that x and y do not have the same first dimension. But they both have lengths of 15. What am I doing wrong?
import matplotlib.pyplot as plt
from scipy import stats
import numpy as np
x = [0.46,0.59,0.68,0.99,0.39,0.31,1.09,0.77,0.72,0.49,0.55,0.62,0.58,0.88,0.78]
y = [0.315,0.383,0.452,0.650,0.279,0.215,0.727,0.512,0.478,0.335,0.365,0.424,0.390,0.585,0.511]
xerr = [0.01]*15
yerr = [0.001]*15
plt.rc('font', family='serif', size=13)
m, b = np.polyfit(x, y, 1)
plt.plot(x,y,'s',color='#0066FF')
plt.plot(x, m*x + b, 'r-') #BREAKS ON THIS LINE
plt.errorbar(x,y,xerr=xerr,yerr=0,linestyle="None",color='black')
plt.xlabel('$\Delta t$ $(s)$',fontsize=20)
plt.ylabel('$\Delta p$ $(hPa)$',fontsize=20)
plt.autoscale(enable=True, axis=u'both', tight=False)
plt.grid(False)
plt.xlim(0.2,1.2)
plt.ylim(0,0.8)
plt.show()
|
You should make x and y numpy arrays, not lists:
x = np.array([0.46,0.59,0.68,0.99,0.39,0.31,1.09,
0.77,0.72,0.49,0.55,0.62,0.58,0.88,0.78])
y = np.array([0.315,0.383,0.452,0.650,0.279,0.215,0.727,0.512,
0.478,0.335,0.365,0.424,0.390,0.585,0.511])
With this change, it produces the expected plot. If they are lists, m * x will not produce the result you expect, but an empty list. Note that m is a numpy.float64 scalar, not a standard Python float.
I actually consider this a bit dubious behavior of Numpy. In normal Python, multiplying a list with an integer just repeats the list:
In [42]: 2 * [1, 2, 3]
Out[42]: [1, 2, 3, 1, 2, 3]
while multiplying a list with a float gives an error (as I think it should):
In [43]: 1.5 * [1, 2, 3]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-43-d710bb467cdd> in <module>()
----> 1 1.5 * [1, 2, 3]
TypeError: can't multiply sequence by non-int of type 'float'
The weird thing is that multiplying a Python list with a Numpy scalar apparently works:
In [45]: np.float64(0.5) * [1, 2, 3]
Out[45]: []
In [46]: np.float64(1.5) * [1, 2, 3]
Out[46]: [1, 2, 3]
In [47]: np.float64(2.5) * [1, 2, 3]
Out[47]: [1, 2, 3, 1, 2, 3]
So it seems that the float gets truncated to an int, after which you get the standard Python behavior of repeating the list, which is quite unexpected behavior. The best thing would have been to raise an error (so that you would have spotted the problem yourself instead of having to ask your question on Stack Overflow) or to just perform the expected element-wise multiplication (in which case your code would have just worked). Interestingly, addition between a list and a Numpy scalar does work:
In [69]: np.float64(0.123) + [1, 2, 3]
Out[69]: array([ 1.123, 2.123, 3.123])
|
Django SMTPAuthenticationError
|
I am new to Django and am developing a web application with it. I have successfully set up the signup functionality using Userena in my web application, and I can register as a user with a verification email.
Here are my SMTP settings from my settings.py file:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'raihncse@gmail.com'
DEFAULT_FROM_EMAIL = 'raihncse@gmail.com'
SERVER_EMAIL = 'raihncse@gmail.com'
EMAIL_HOST_PASSWORD = '**************'
Everything was fine before I changed my EMAIL_HOST_PASSWORD.
In fact, for some reason I had to change the password of that SERVER_EMAIL (raihncse@gmail.com), and I have already edited EMAIL_HOST_PASSWORD to match the new password.
But now, if I try to register as a new user, I am facing the following error:
SMTPAuthenticationError at /accounts/signup/
(534, '5.7.14 <https://accounts.google.com/ContinueSignIn?sarp=1&scc=1&plt=AKgnsbvNq\n5.7.14 S3l1pFXENupDa_SdPphNHrnzeLPUOyf6O0l1s31w7h_UARx11P89AxPeeZ6xBi2KeQRjsw\n5.7.14 nvpxZhPVv771W9ljEDyeWnqpqv3_nakuPo36BEl3IlYj9qVujNB5cm0TYFf9LQAxRjFdda\n5.7.14 xh-y5spA9zIQONDsvRRgN3e0DXoIkgxTO3Mu75IaACi-XlvFtFfPBiQ81gUrBZ_PhZsBmh\n5.7.14 ajsSf-flNEmoSydyOTNdmwdB0__8> Please log in via your web browser and\n5.7.14 then try again.\n5.7.14 Learn more at\n5.7.14 https://support.google.com/mail/bin/answer.py?answer=78754 40sm12125121qgi.47 - gsmtp')
|
A relatively recent change in Google's authentication system means you're going to have to "allow less secure app access" to your Google account, in order for this to work.
In your error, you are recommended to visit this link: https://support.google.com/mail/answer/78754
On that page:
Step #2 asks you to try Displaying an Unlock Captcha
Step #3 explains how to allow less secure app access. In summary:
Go to Allow less secure apps and choose "Allow" to let less secure apps access your Google account. We don't recommend this option because it may make it easier for someone to gain access to your account.
|
Is it possible to do additive blending with matplotlib?
|
When dealing with overlapping high density scatter or line plots of different colors it can be convenient to implement additive blending schemes, where the RGB colors of each marker add together to produce the final color in the canvas. This is a common operation in 2D and 3D render engines.
However, in Matplotlib I've only found support for alpha/opacity blending. Is there any roundabout way of doing it or am I stuck with rendering to bitmap and then blending them in some paint program?
Edit: Here's some example code and a manual solution.
This will produce two partially overlapping random distributions:
x1 = randn(1000)
y1 = randn(1000)
x2 = randn(1000) * 5
y2 = randn(1000)
scatter(x1,y1,c='b',edgecolors='none')
scatter(x2,y2,c='r',edgecolors='none')
This will produce in matplotlib the following:
As you can see, there are some overlapping blue points that are occluded by red points and we would like to see them. By using alpha/opacity blending in matplotlib, you can do:
scatter(x1,y1,c='b',edgecolors='none',alpha=0.5)
scatter(x2,y2,c='r',edgecolors='none',alpha=0.5)
Which will produce the following:
But what I really want is the following:
I can do it manually by rendering each plot independently to a bitmap:
xlim = plt.xlim()
ylim = plt.ylim()
scatter(x1,y1,c='b',edgecolors='none')
plt.xlim(xlim)
plt.ylim(ylim)
scatter(x2,y2,c='r',edgecolors='none')
plt.xlim(xlim)
plt.ylim(ylim)
plt.savefig(r'scatter_blue.png',transparent=True)
plt.savefig(r'scatter_red.png',transparent=True)
Which gives me the following images:
What you can do then is load them as independent layers in Paint.NET/PhotoShop/gimp and just additive blend them.
Now, ideally, I would like to be able to do this programmatically in Matplotlib, since I'll be processing hundreds of these!
|
If you only need an image as the result, you can get the canvas buffer as a numpy array, and then do the blending, here is an example:
from matplotlib import pyplot as plt
import numpy as np
fig, ax = plt.subplots()
ax.scatter(x1,y1,c='b',edgecolors='none')
ax.set_xlim(-4, 4)
ax.set_ylim(-4, 4)
ax.patch.set_facecolor("none")
ax.patch.set_edgecolor("none")
fig.canvas.draw()
w, h = fig.canvas.get_width_height()
img = np.frombuffer(fig.canvas.buffer_rgba(), np.uint8).reshape(h, w, -1).copy()
ax.clear()
ax.scatter(x2,y2,c='r',edgecolors='none')
ax.set_xlim(-4, 4)
ax.set_ylim(-4, 4)
ax.patch.set_facecolor("none")
ax.patch.set_edgecolor("none")
fig.canvas.draw()
img2 = np.frombuffer(fig.canvas.buffer_rgba(), np.uint8).reshape(h, w, -1).copy()
img[img[:, :, -1] == 0] = 0
img2[img2[:, :, -1] == 0] = 0
fig.clf()
plt.imshow(np.maximum(img, img2))
plt.subplots_adjust(0, 0, 1, 1)
plt.axis("off")
plt.show()
the result:
|
Django 1.7 migrations won't recreate a dropped table, why?
|
Using Django 1.7 migrations.
I accidentally dropped a table in my database. I assumed that by running migration again this would recreate the table but no, Django states "No migrations to apply".
How to I get Django to recreate the table?
I have run:
> makemigrations - No changes detected
> migrate - No migrations to apply.
I have tried making a change to the model and running a new migration, and it simply states that "Table 'x.test_customer' doesn't exist", which is correct, but what I was hoping is that it would recreate the table.
|
Go to your database and find the table django_migrations. Delete all the rows whose app column equals your app name.
Then run makemigrations and migrate again, and it will work.
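If you prefer doing that from Python rather than raw SQL, a sketch using the Django shell ('yourapp' is a placeholder for your app label):
from django.db import connection

# delete the recorded migrations for the app so Django re-applies them
with connection.cursor() as cursor:
    cursor.execute("DELETE FROM django_migrations WHERE app = %s", ["yourapp"])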
|
Django delete superuser
|
This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask:
Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's createsuperuser command?
|
There's no built in command but you can easily do this from the shell:
$ django-admin.py shell
>>> from django.contrib.auth.models import User
>>> User.objects.get(username="joebloggs", is_superuser=True).delete()
|
Convert Pandas DataFrame to dictionary
|
I have a DataFrame with four columns. I want to convert this DataFrame to a python dictionary. I want the elements of first column be keys and the elements of other columns in same row be values.
Dataframe:
ID A B C
0 p 1 3 2
1 q 4 3 2
2 r 4 0 9
Output should be like this:
Dictionary:
{'p': [1,3,2],'q': [4,3,2],'r': [4,0,9]}
|
The to_dict() method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this.
to_dict() also accepts an outtype keyword argument which you'll need in order to output a list of values for each column. Otherwise, a dictionary of the form {index:value} will be returned for each column.
These steps can be done with the following line:
>>> df.set_index('ID').T.to_dict('list')
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
|
PIL: ImportError: The _imaging extension was built for another version of pillow or PIL
|
I get the error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-0f6709e38f49> in <module>()
----> 1 from PIL import Image
C:\Anaconda\lib\site-packages\PIL\Image.py in <module>()
61 from PIL import _imaging as core
62 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
---> 63 raise ImportError("The _imaging extension was built for another "
64 " version of Pillow or PIL")
65
ImportError: The _imaging extension was built for another version of Pillow or PIL
Whenever I try to use the PIL library. I'm trying to load and work on a bunch of .gif's, and what I'm trying now, is the following:
from PIL import Image
Trying a different approach, through scipy with:
import scipy.ndimage as spnd
os.chdir('C:\\WeatherSink\\data\\')
spnd.imread('2014-11-03-0645.gif')
Fails with:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-3-23c383b79646> in <module>()
1 os.chdir('C:\\WeatherSink\\data\\')
----> 2 spnd.imread('2014-11-03-0645.gif')
C:\Anaconda\lib\site-packages\scipy\ndimage\io.pyc in imread(fname, flatten, mode)
36 from PIL import Image
37 except ImportError:
---> 38 raise ImportError("Could not import the Python Imaging Library (PIL)"
39 " required to load image files. Please refer to"
40 " http://pypi.python.org/pypi/PIL/ for installation"
ImportError: Could not import the Python Imaging Library (PIL) required to load image files. Please refer to http://pypi.python.org/pypi/PIL/ for installation instructions.
The first approach guides me towards the versions of PIL installed. I try emulating the getattr(...), and that returns None. So I'm not surprised that it's less than functioning.
But does anyone know how to 'fix' the errors?
I'm running on win7, managing python2.7 through conda. I've tried to remove and re-install the packages as well, without any change in the output.
Help is much appreciated.
|
This is only an installation issue.
First, install pip on your system if it is not already installed; it is available for Windows too: https://pip.pypa.io/en/latest/installing.html
Then upgrade numpy, pillow and scipy:
pip install -U numpy
pip install -U pillow
pip install -U scipy
The best option on Windows is to use Anaconda; pip is already included with conda. This will resolve the version mismatch on your system.
In [1]: from PIL import Image
In [2]: import scipy.ndimage as spnd
In [3]: x = spnd.imread('ppuf100X91.gif')
In [4]: print x
[[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
...,
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]]
|
PyPi description markdown doesn't work
|
I uploaded a package to PyPi using:
python setup.py register -r pypi
python setup.py sdist upload -r pypi
I'm trying to modify the description. I wrote (please don't edit the formatting of the following piece of code; I made it on purpose to demonstrate my problem):
**nose-docstring-plugin**
This plugin enables you to modify docstring of tests based on their attributes, for example:
```python
@attr(section='MySection', type='functional+', module='MyModule', id=1)
def test_function(self):
"""
This is the original docstring
"""
pass
```
However, the text appears as it is, without the markdown formatting. What am I doing wrong?
|
As @Martijn Pieters stated, PyPi does not support Markdown. I'm not sure where I learned the following trick, but you can use Pandoc and PyPandoc to convert your Markdown files into RestructuredText before uploading into PyPi. To accomplish this, add the following to your setup.py file:
try:
import pypandoc
long_description = pypandoc.convert('README.md', 'rst')
except(IOError, ImportError):
long_description = open('README.md').read()
setup(
name='blah',
version=find_version('blah.py'),
description='Short description',
long_description=long_description,
)
To install Pandoc on OS X, I used Homebrew:
brew install pandoc
To install PyPandoc, I used pip:
pip install pypandoc
|
How to signal slots in a GUI from a different process?
|
Context:
In Python a main thread spawns a 2nd process (using multiprocessing module) and then launches a GUI (using PyQt4). At this point the main thread blocks until the GUI is closed. The 2nd process is always processing and ideally should emit signal(s) to specific slot(s) in the GUI in an asynchronous manner.
Question:
Which approach/tools are available in Python and PyQt4 to achieve that and how? Preferably in a soft-interrupt manner rather than polling.
Abstractly speaking, the solution I can think of is a "tool/handler" instantiated in the main thread that grabs the available slots from the GUI instance and connects them with the grabbed signals from the 2nd process, assuming I provide this tool with some information about what to expect, or hard-code it. This could be instantiated in a 3rd process/thread.
|
This is an example Qt application demonstrating sending signals from a child process to slots in the mother process. I'm not sure this is right approach but it works.
I differentiate between processes as mother and child, because the word parent is already used in the Qt context.
The mother process has two threads. Main thread of mother process sends data to child process via multiprocessing.Queue. Child process sends processed data and signature of the signal to be sent to the second thread of mother process via multiprocessing.Pipe. The second thread of mother process actually emits the signal.
Python 2.X, PyQt4:
from multiprocessing import Process, Queue, Pipe
from threading import Thread
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class Emitter(QObject, Thread):
def __init__(self, transport, parent=None):
QObject.__init__(self,parent)
Thread.__init__(self)
self.transport = transport
def _emit(self, signature, args=None):
if args:
self.emit(SIGNAL(signature), args)
else:
self.emit(SIGNAL(signature))
def run(self):
while True:
try:
signature = self.transport.recv()
except EOFError:
break
else:
self._emit(*signature)
class Form(QDialog):
def __init__(self, queue, emitter, parent=None):
super(Form,self).__init__(parent)
self.data_to_child = queue
self.emitter = emitter
self.emitter.daemon = True
self.emitter.start()
self.browser = QTextBrowser()
self.lineedit = QLineEdit('Type text and press <Enter>')
self.lineedit.selectAll()
layout = QVBoxLayout()
layout.addWidget(self.browser)
layout.addWidget(self.lineedit)
self.setLayout(layout)
self.lineedit.setFocus()
self.setWindowTitle('Upper')
self.connect(self.lineedit,SIGNAL('returnPressed()'),self.to_child)
self.connect(self.emitter,SIGNAL('data(PyQt_PyObject)'), self.updateUI)
def to_child(self):
self.data_to_child.put(unicode(self.lineedit.text()))
self.lineedit.clear()
def updateUI(self, text):
text = text[0]
self.browser.append(text)
class ChildProc(Process):
def __init__(self, transport, queue, daemon=True):
Process.__init__(self)
self.daemon = daemon
self.transport = transport
self.data_from_mother = queue
def emit_to_mother(self, signature, args=None):
signature = (signature, )
if args:
signature += (args, )
self.transport.send(signature)
def run(self):
while True:
text = self.data_from_mother.get()
self.emit_to_mother('data(PyQt_PyObject)', (text.upper(),))
if __name__ == '__main__':
app = QApplication(sys.argv)
mother_pipe, child_pipe = Pipe()
queue = Queue()
emitter = Emitter(mother_pipe)
form = Form(queue, emitter)
ChildProc(child_pipe, queue).start()
form.show()
app.exec_()
And as convenience also Python 3.X, PySide:
from multiprocessing import Process, Queue, Pipe
from threading import Thread
import sys
from PySide import QtGui, QtCore
class Emitter(QtCore.QObject, Thread):
def __init__(self, transport, parent=None):
QtCore.QObject.__init__(self, parent)
Thread.__init__(self)
self.transport = transport
def _emit(self, signature, args=None):
if args:
self.emit(QtCore.SIGNAL(signature), args)
else:
self.emit(QtCore.SIGNAL(signature))
def run(self):
while True:
try:
signature = self.transport.recv()
except EOFError:
break
else:
self._emit(*signature)
class Form(QtGui.QDialog):
def __init__(self, queue, emitter, parent=None):
super().__init__(parent)
self.data_to_child = queue
self.emitter = emitter
self.emitter.daemon = True
self.emitter.start()
self.browser = QtGui.QTextBrowser()
self.lineedit = QtGui.QLineEdit('Type text and press <Enter>')
self.lineedit.selectAll()
layout = QtGui.QVBoxLayout()
layout.addWidget(self.browser)
layout.addWidget(self.lineedit)
self.setLayout(layout)
self.lineedit.setFocus()
self.setWindowTitle('Upper')
self.lineedit.returnPressed.connect(self.to_child)
self.connect(self.emitter, QtCore.SIGNAL('data(PyQt_PyObject)'), self.updateUI)
def to_child(self):
self.data_to_child.put(self.lineedit.text())
self.lineedit.clear()
def updateUI(self, text):
self.browser.append(text[0])
class ChildProc(Process):
def __init__(self, transport, queue, daemon=True):
Process.__init__(self)
self.daemon = daemon
self.transport = transport
self.data_from_mother = queue
def emit_to_mother(self, signature, args=None):
signature = (signature, )
if args:
signature += (args, )
self.transport.send(signature)
def run(self):
while True:
text = self.data_from_mother.get()
self.emit_to_mother('data(PyQt_PyObject)', (text.upper(),))
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
mother_pipe, child_pipe = Pipe()
queue = Queue()
emitter = Emitter(mother_pipe)
form = Form(queue, emitter)
ChildProc(child_pipe, queue).start()
form.show()
app.exec_()
|
Using an OrderedDict in **kwargs
|
Is it possible to pass an OrderedDict instance to a function which uses the **kwargs syntax and retain the ordering?
What I'd like to do is :
def I_crave_order(**kwargs):
for k, v in kwargs.items():
print k, v
example = OrderedDict([('first', 1), ('second', 2), ('third', -1)])
I_crave_order(**example)
>> first 1
>> second 2
>> third -1
However the actual result is:
>> second 2
>> third -1
>> first 1
ie, typical random dict ordering.
I have other uses where setting the arguments explicitly is good, so I want to keep **kwargs and not just pass the OrderedDict as a regular argument
|
No, it is not possible. Or, rather, it is possible, but the OrderedDict is just going to get turned into a dict anyway.
The first thing to realize is that the value you pass in **example does not automatically become the value in **kwargs. Consider this case, where kwargs will only have part of example:
def f(a, **kwargs):
pass
example = {'a': 1, 'b': 2}
f(**example)
Or this case, where it will have more values than those in example:
example = {'b': 2}
f(a=1, c=3, **example)
Or even no overlap at all:
example = {'a': 1}
f(b=2, **example)
So, what you're asking for doesn't really make sense.
Still, it might be nice if there were some way to specify that you want an ordered **kwargs, no matter where the keywords came from: explicit keyword args in the order they appear, followed by all of the items of **example in the order they appear in example (which could be arbitrary if example were a dict, but could also be meaningful if it were an OrderedDict).
Defining all the fiddly details, and keeping the performance acceptable, turns out to be harder than it sounds. See PEP 468, and the linked threads, for some discussion on the idea. It seems to have stalled this time around, but if someone picks it up and champions it (and writes a reference implementation for people to play with, which depends on an OrderedDict accessible from the C API, but that will hopefully be there in 3.5+), I suspect it would eventually get into the language.
By the way, note that if this were possible, it would almost certainly be used in the constructor for OrderedDict itself. But if you try that, all you're doing is freezing some arbitrary order as the permanent order:
>>> d = OrderedDict(a=1, b=2, c=3)
>>> d
OrderedDict([('a', 1), ('c', 3), ('b', 2)])
Meanwhile, what options do you have?
Well, the obvious option is just to pass example as a normal argument instead of unpacking it:
def f(example):
pass
example = OrderedDict([('a', 1), ('b', 2)])
f(example)
Or, of course, you can use *args and pass the items as tuples, but that's generally uglier.
There might be some other workarounds in the threads linked from the PEP, but really, none of them are going to be better than this. (Except... IIRC, Li Haoyi came up with a solution based on his MacroPy to pass order-retaining keyword arguments, but I don't remember the details. MacroPy solutions in general are awesome if you're willing to use MacroPy and write code that doesn't quite read like Python, but that isn't always appropriate...)
|
Convert Pandas Column to DateTime
|
I have one field in a pandas DataFrame that was imported as string format.
It should be a datetime variable.
How do I convert it to a datetime column, and then filter based on date?
Example:
DataFrame Name: raw_data
Column Name: Mycol
Value
Format in Column: '05SEP2014:00:00:00.000'
|
Use the to_datetime function, specifying a format to match your data.
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
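Once the column holds real datetimes, the filtering the question asks about is a plain boolean comparison; for example (a sketch, with an arbitrary cutoff date):
import pandas as pd

raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'],
                                   format='%d%b%Y:%H:%M:%S.%f')

# Keep only the rows on or after the cutoff.
cutoff = pd.Timestamp('2014-09-01')
filtered = raw_data[raw_data['Mycol'] >= cutoff]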
|
Efficiently get indices of histogram bins in Python
|
Short Question
I have a large 10000x10000 elements image, which I bin into a few hundred different sectors/bins. I then need to perform some iterative calculation on the values contained within each bin.
How do I extract the indices of each bin to efficiently perform my calculation using the bins values?
What I am looking for is a solution which avoids the bottleneck of having to select every time "ind == j" from my large array. Is there a way to obtain directly, in one go, the indices of the elements belonging to every bin?
Detailed Explanation
1. Straightforward Solution
One way to achieve what I need is to use a code like the following (see e.g. THIS related answer), where I digitize my values and then have a j-loop selecting digitized indices equal to j like below
import numpy as np
# This function func() is just a place mark for a much more complicated function.
# I am aware that my problem could be easily speed up in the specific case of
# of the sum() function, but I am looking for a general solution to the problem.
def func(x):
y = np.sum(x)
return y
vals = np.random.random(1e8)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
result = [func(vals[ind == j]) for j in range(1, nbins)]
2. Using binned_statistics
The above approach turns out to be the same implemented in scipy.stats.binned_statistic, for the general case of a user-defined function. Using Scipy directly an identical output can be obtained with the following
import numpy as np
from scipy.stats import binned_statistic
vals = np.random.random(1e8)
results = binned_statistic(vals, vals, statistic=func, bins=100, range=[0, 1])[0]
3. Using labeled_comprehension
Another Scipy alternative is to use scipy.ndimage.measurements.labeled_comprehension. Using that function, the above example would become
import numpy as np
from scipy.ndimage import labeled_comprehension
vals = np.random.random(1e8)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
result = labeled_comprehension(vals, ind, np.arange(1, nbins), func, float, 0)
Unfortunately this form is also inefficient, and in particular it has no speed advantage over my original example.
4. Comparison with IDL language
To further clarify, what I am looking for is a functionality equivalent to the REVERSE_INDICES keyword in the HISTOGRAM function of the IDL language HERE. Can this very useful functionality be efficiently replicated in Python?
Specifically, using the IDL language the above example could be written as
vals = randomu(s, 1e8)
nbins = 100
bins = [0:1:1./nbins]
h = histogram(vals, MIN=bins[0], MAX=bins[-2], NBINS=nbins, REVERSE_INDICES=r)
result = dblarr(nbins)
for j=0, nbins-1 do begin
jbins = r[r[j]:r[j+1]-1] ; Selects indices of bin j
result[j] = func(vals[jbins])
endfor
The above IDL implementation is about 10 times faster than the Numpy one, due to the fact that the indices of the bins do not have to be selected for every bin. And the speed difference in favour of the IDL implementation increases with the number of bins.
|
If you don't really need the indices, but you just want to do some processing on the bins:
vals.sort()
bins = np.linspace(0, 1, nbins+1)
inds = np.searchsorted(vals, bins[1:-1])
result = [func(group) for group in np.split(vals, inds)]
Or, trying to be clever:
# faster than numpy.histogram, but only works in the range (0,1)
digitized = (nbins*vals).astype(int)
histogram = np.bincount(digitized)
offsets = np.cumsum(histogram[:-1])
grouped = np.partition(vals, offsets)
result = [func(group) for group in np.split(grouped, offsets)]
This isn't much faster than the first one, though, because partitioning the array takes quite some time. I also suspect the edge cases are handled incorrectly.
If you want to calculate the statistic on different data than you're binning, you can use argsort and argpartition in the code above respectively, but unfortunately this makes things quite a lot slower.
However, I found that a particular sparse matrix constructor does a very similar operation fairly efficiently. It's a bit obscure but we can abuse it for this purpose:
def binned_statistic(x, values, func, nbins, range):
'''The usage is approximately the same as the scipy one'''
from scipy.sparse import csr_matrix
N = len(values)
r0, r1 = range
digitized = (float(nbins) / (r1-r0) * (x-r0)).astype(int)
S = csr_matrix((values, [digitized, np.arange(N)]), shape=(nbins, N))
return [func(group) for group in np.split(S.data, S.indptr[1:-1])]
I avoided np.digitize because it doesn't use the fact that all bins are equal width and hence is slow, but the method I used instead may not handle all edge cases perfectly.
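If you do need the per-bin indices themselves (the REVERSE_INDICES behaviour asked about), one way to emulate it, a sketch not from the original answer, is to stable-sort the digitized labels once and then slice the sorted index array per bin:
import numpy as np

vals = np.random.random(10**6)
nbins = 100
bins = np.linspace(0, 1, nbins + 1)

ind = np.digitize(vals, bins)               # 1-based bin label of every element
order = np.argsort(ind, kind='mergesort')   # stable sort groups equal labels together
counts = np.bincount(ind, minlength=nbins + 2)
offsets = np.concatenate(([0], np.cumsum(counts)))  # group boundaries within `order`

# All indices of the elements falling in bin j, with no per-bin scan of `vals`:
j = 5
jbins = order[offsets[j]:offsets[j + 1]]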
|
pandas to_csv first extra column remove, how to?
|
I'm trying to create a csv with pandas, but when I export to csv it gives me an extra first column:
d = {'one' : pd.Series([1., 2., 3.]),'two' : pd.Series([1., 2., 3., 4.])}
df0_fa = pd.DataFrame(d)
df_csv = df0_fa.to_csv('revenue/data/test.csv',mode = 'w')
so my result is:
,one,two
0,1.0,1.0
1,2.0,2.0
2,3.0,3.0
3,,4.0
But what I want is
one,two
1.0,1.0
2.0,2.0
3.0,3.0
,4.0
|
What you are seeing is the index column. Just set index=False:
df_csv = df0_fa.to_csv('revenue/data/test.csv',mode = 'w', index=False)
|
super() and @staticmethod interaction
|
Is super() not meant to be used with staticmethods?
When I try something like
class First(object):
@staticmethod
def getlist():
return ['first']
class Second(First):
@staticmethod
def getlist():
l = super(Second).getlist()
l.append('second')
return l
a = Second.getlist()
print a
I get the following error
Traceback (most recent call last):
File "asdf.py", line 13, in <module>
a = Second.getlist()
File "asdf.py", line 9, in getlist
l = super(Second).getlist()
AttributeError: 'super' object has no attribute 'getlist'
If I change the staticmethods to classmethods and pass the class instance to super(), things work fine. Am I calling super(type) incorrectly here or is there something I'm missing?
|
The short answer to
Am I calling super(type) incorrectly here or is there something I'm missing?
is: yes, you're calling it incorrectly... AND (indeed, because) there is something you're missing.
But don't feel bad; this is an extremely difficult subject.
The documentation notes that
If the second argument is omitted, the super object returned is unbound.
The use case for unbound super objects is extremely narrow and rare. See these articles by Michele Simionato for his discussion on super():
Things to Know About Python Super [1 of 3]
Things to Know About Python Super [2 of 3] (this one specifically covers unbound super)
Things to Know About Python Super [3 of 3]
Also, he argues strongly for removing unbound super from Python 3 here.
I said you were calling it "incorrectly" (though correctness is largely meaningless without context, and a toy example doesn't give much context). Because unbound super is so rare, and possibly just flat-out unjustified, as argued by Simionato, the "correct" way to use super() is to provide the second argument.
In your case, the simplest way to make your example work is
class First(object):
@staticmethod
def getlist():
return ['first']
class Second(First):
@staticmethod
def getlist():
l = super(Second, Second).getlist() # note the 2nd argument
l.append('second')
return l
a = Second.getlist()
print a
If you think it looks funny that way, you're not wrong. But I think what most people are expecting when they see super(X) (or hoping for when they try it in their own code) is what Python gives you if you do super(X, X).
|
Why does PyCharm use 120 Character Lines even though PEP8 Specifies 79?
|
PEP8 clearly specifies 79 characters, however, PyCharm defaults to 120 and gives me the warning "PEP8: line too long (... > 120 characters)".
Did previous versions of PEP8 use 120 and PyCharm not update its PEP8 checker? I couldn't find any previous versions of the PEP8 Guide; however, I can easily find previous versions of the PEP8 Python scripts.
I'm starting a new Python project and I'm not sure which to use.
References:
http://legacy.python.org/dev/peps/pep-0008/
|
PyCharm is built on top of IntelliJ. IntelliJ has a default line length of 120 characters.
This is probably because you can't fit a common Java name like: @annotated public static MyObjectFactoryFactory enterpriseObjectFactoryFactoryBuilderPattern { in a mere 80 character line. (I'm poking fun, but Java names do tend to be longer by convention).
The pep8 checker is configurable, so you can specify a better max line length - like 79 characters.
The error is misleading because the pep8 checker formats the text with something like "PEP8: line too long(... > %s characters)" % max_line_setting. So it's using the pep8 checker, with a specific configuration, not claiming that pep8 specifies a 120 character line.
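As an aside (not part of the original answer): if you also run the command-line pep8 checker outside the IDE, you can pin the same limit in a config file at the project root so both tools agree, e.g. in setup.cfg or tox.ini:
[pep8]
max-line-length = 79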
|
Running cron python jobs within docker
|
I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
My cron file is my-crontab
* * * * * /test.py > /dev/console
and my Dockerfile is
FROM ubuntu:latest
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
RUN apt-get install -y python cron
ADD my-crontab /
ADD test.py /
RUN chmod a+x test.py
RUN crontab /my-crontab
ENTRYPOINT cron -f
What are the potential problems with this approach? Are there other approaches and what are their pros and cons?
|
Several issues that I faced while trying to get a cron job running in a docker container were:
time in the docker container is in UTC not local time;
the docker environment is not passed to cron;
as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based solution.
There are cron-specific issues and docker-specific issues in the list, but in any case they have to be addressed to get cron working.
To that end, my current working solution to the problem posed in the question is as follows:
Create a docker volume to which all scripts running under cron will write:
# Dockerfile for test-logs
# BUILD-USING: docker build -t test-logs .
# RUN-USING: docker run -d -v /t-logs --name t-logs test-logs
# INSPECT-USING: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash
FROM stackbrew/busybox:latest
# Create logs volume
VOLUME /var/log
CMD ["true"]
The script that will run under cron is test.py:
#!/usr/bin/env python
# python script which needs an environment variable and runs as a cron job
import datetime
import os
test_environ = os.environ["TEST_ENV"]
print "Cron job has run at %s with environment variable '%s'" %(datetime.datetime.now(), test_environ)
In order to pass the environment variable to the script that I want to run under cron, follow Thomas' suggestion and put a crontab fragment for each script (or group of scripts) that needs a docker environment variable in /etc/cron.d, with a placeholder XXXXXXX which must be set.
# placed in /etc/cron.d
# TEST_ENV is a docker environment variable that the script test.py needs
TEST_ENV=XXXXXXX
#
* * * * * root python /test.py >> /var/log/test.log
Instead of calling cron directly, wrap cron in a python script that does two things: it reads the environment variables from the docker environment and writes them into the appropriate crontab fragment, and then it runs cron.
#!/usr/bin/env python
# run-cron.py
# sets environment variable crontab fragments and runs cron
import os
from subprocess import call
import fileinput
# read docker environment variables and set them in the appropriate crontab fragment
environment_variable = os.environ["TEST_ENV"]
for line in fileinput.input("/etc/cron.d/cron-python",inplace=1):
print line.replace("XXXXXXX", environment_variable),  # trailing comma: `line` already ends with a newline
args = ["cron", "-f", "-L", "15"]
call(args)
The Dockerfile that for the container in which the cron jobs run is as follows:
# BUILD-USING: docker build -t test-cron .
# RUN-USING docker run --detach=true --volumes-from t-logs --name t-cron test-cron
FROM debian:wheezy
#
# Set correct environment variables.
ENV HOME /root
ENV TEST_ENV test-value
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
# Install Python Setuptools
RUN apt-get install -y python cron
RUN apt-get purge -y python-software-properties software-properties-common && apt-get clean -y && apt-get autoclean -y && apt-get autoremove -y && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD cron-python /etc/cron.d/
ADD test.py /
ADD run-cron.py /
RUN chmod a+x test.py run-cron.py
# Set the time zone to the local time zone
RUN echo "America/New_York" > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata
CMD ["/run-cron.py"]
Finally, create the containers and run them:
Create the log volume (test-logs) container: docker build -t test-logs .
Run log volume: docker run -d -v /t-logs --name t-logs test-logs
Create the cron container: docker build -t test-cron .
Run the cron container: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
To inspect the log files of the scripts running under cron: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash. The log files are in /var/log.
|
'collectstatic' command fails when WhiteNoise is enabled
|
I'm trying to serve static files through WhiteNoise as per Heroku's recommendation. When I run collectstatic in my development environment, this happens:
Post-processing 'css/iconic/open-iconic-bootstrap.css' failed!
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/core/management/base.py", line 533, in handle
return self.handle_noargs(**options)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle_noargs
collected = self.collect()
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 120, in collect
raise processed
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 242, in post_process
content = pattern.sub(converter, content)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 181, in converter
hashed_url = self.url(unquote(joined_result), force=True)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 128, in url
hashed_name = self.stored_name(clean_name)
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 277, in stored_name
cache_name = self.clean_name(self.hashed_name(name))
File "/home/Pieter/.virtualenvs/radiant/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 91, in hashed_name
(clean_name, self))
ValueError: The file 'css/fonts/open-iconic.eot' could not be found with <whitenoise.django.GzipManifestStaticFilesStorage object at 0x7f57fc5b1550>.
The static collection command runs without incident when I comment out this line in my settings:
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
What's going wrong here and how do I fix it? I already tried emptying my static file output folder. It runs smoothly until it starts processing one specific file.
|
The problem here is that css/iconic/open-iconic-bootstrap.css is referencing a file, open-iconic.eot, which doesn't exist in the expected location.
When you run collectstatic with that storage backend Django attempts to rewrite all the URLs in your CSS files so they reference the files by their new names e.g, css/iconic/open-iconic.8a7442ca6bed.eot. If it can't find the file it stops with that error.
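To fix it, make sure css/fonts/open-iconic.eot actually exists among your static files: the url() in the CSS resolves relative to the CSS file itself, so either move the font there or correct the path inside open-iconic-bootstrap.css. If the broken reference lives in third-party CSS you'd rather not edit, one workaround (a sketch, not part of the original answer; the class and module names are made up) is a storage subclass that skips unresolvable references:
# e.g. yourproject/storage.py; then set
# STATICFILES_STORAGE = 'yourproject.storage.ForgivingStaticFilesStorage'
from whitenoise.django import GzipManifestStaticFilesStorage

class ForgivingStaticFilesStorage(GzipManifestStaticFilesStorage):
    """Like the WhiteNoise storage, but references to missing files are
    left unchanged instead of aborting collectstatic with a ValueError."""
    def hashed_name(self, name, content=None):
        try:
            return super(ForgivingStaticFilesStorage, self).hashed_name(name, content)
        except ValueError:
            return name  # file not found: keep the original reference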
|
Pandas Replace NaN with blank/empty string
|
I have a Pandas Dataframe as shown below:
1 2 3
0 a NaN read
1 b l unread
2 c NaN read
I want to remove the NaN values with an empty string so that it looks like so:
1 2 3
0 a "" read
1 b l unread
2 c "" read
|
Slightly shorter is:
df = df.fillna('')
This will fill na's (e.g. NaN's) with ''.
Edit:
If you want to fill a single column, you can use:
df.column1 = df.column1.fillna('')
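One side effect worth knowing (an aside, not from the original answer): filling a numeric column with '' changes its dtype to object, since the empty string is not a number:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan]})
print df['a'].dtype              # float64
print df.fillna('')['a'].dtype   # object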
|
Opening a SSL socket connection in Python
|
I'm trying to establish a secure socket connection in Python, and I'm having a hard time with the SSL bit of it. I've found some code examples of how to establish a connection with SSL, but they all involve key files. The server I'm trying to connect with doesn't need to receive any keys or certificates. My question is how do I essentially wrap a python socket connection with SSL. I know for a fact that the cipher I'm supposed to use is ADH-AES256-SHA, and the protocol is TLSv1. This is what I've been trying:
import socket
import ssl
# SET VARIABLES
packet, reply = "<packet>SOME_DATA</packet>", ""
HOST, PORT = 'XX.XX.XX.XX', 4434
# CREATE SOCKET
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(10)
# WRAP SOCKET ???
ssl.wrap_socket(sock, ssl_version="TLSv1", ciphers="ADH-AES256-SHA")
# CONNECT AND PRINT REPLY
sock.connect((HOST, PORT))
sock.send(packet)
print sock.recv(1280)
# CLOSE SOCKET CONNECTION
sock.close()
When I run this code, I don't get any errors, but I get a blank response. When trying to debug this code in the command line, by typing python in the terminal and pasting in code line by line, I get what I'm assuming is a status code when running sock.send(packet). The integer response I get is 26. If anyone knows what this means, or can help in any way, it would be greatly appreciated. Thanks in advance!
|
Ok, I figured out what was wrong. It was kind of foolish of me. I had two problems with my code. My first mistake was when specifying the ssl_version I put in TLSv1 when it should have been ssl.PROTOCOL_TLSv1. The second mistake was that I wasn't referencing the wrapped socket, instead I was calling the original socket that I have created. The below code seemed to work for me.
import socket
import ssl
# SET VARIABLES
packet, reply = "<packet>SOME_DATA</packet>", ""
HOST, PORT = 'XX.XX.XX.XX', 4434
# CREATE SOCKET
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(10)
# WRAP SOCKET
wrappedSocket = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1, ciphers="ADH-AES256-SHA")
# CONNECT AND PRINT REPLY
wrappedSocket.connect((HOST, PORT))
wrappedSocket.send(packet)
print wrappedSocket.recv(1280)
# CLOSE SOCKET CONNECTION
wrappedSocket.close()
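One caveat worth adding (an aside, not in the original answer): ssl.wrap_socket called like this performs no certificate verification, which is fine for the anonymous ADH cipher used here, but not for an authenticated server. For that case you would pass CA material, for example (the bundle path is hypothetical):
import ssl
wrappedSocket = ssl.wrap_socket(sock,
                                cert_reqs=ssl.CERT_REQUIRED,
                                ca_certs="/path/to/ca-bundle.pem",  # hypothetical CA bundle
                                ssl_version=ssl.PROTOCOL_TLSv1)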
Hope this can help somebody!
|
sklearn agglomerative clustering linkage matrix
|
I'm trying to draw a complete-link scipy.cluster.hierarchy.dendrogram, and I found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering.
However, sklearn.AgglomerativeClustering doesn't return the distance between clusters or the number of original observations, which scipy.cluster.hierarchy.dendrogram needs. Is there a way to get them?
|
It's possible, but it isn't pretty. It requires (at a minimum) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method requires a number of imports, so it ends up getting a bit nasty looking. To add in this feature:
Insert the following line after line 748:
kwargs['return_distance'] = True
Replace line 752 with:
self.children_, self.n_components_, self.n_leaves_, parents, self.distance = \
This will give you a new attribute, distance, that you can easily call.
A couple things to note:
When doing this, I ran into this issue about the check_array function on line 711. This can be fixed by using check_arrays (from sklearn.utils.validation import check_arrays). You can modify that line to become X = check_arrays(X)[0]. This appears to be a bug (I still have this issue on the most recent version of scikit-learn).
Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source.
To make things easier for everyone, here is the full code that you will need to use:
from heapq import heapify, heappop, heappush, heappushpop
import warnings
import sys
import numpy as np
from scipy import sparse
from sklearn.base import BaseEstimator, ClusterMixin
from sklearn.externals.joblib import Memory
from sklearn.externals import six
from sklearn.utils.validation import check_arrays
from sklearn.metrics.pairwise import paired_distances, pairwise_distances  # used below
from sklearn.utils.sparsetools import connected_components
from sklearn.cluster import _hierarchical
from sklearn.cluster.hierarchical import ward_tree
from sklearn.cluster._feature_agglomeration import AgglomerationTransform
from sklearn.utils.fast_dict import IntFloatDict
def _fix_connectivity(X, connectivity, n_components=None,
affinity="euclidean"):
"""
Fixes the connectivity matrix
- copies it
- makes it symmetric
- converts it to LIL if necessary
- completes it if necessary
"""
n_samples = X.shape[0]
if (connectivity.shape[0] != n_samples or
connectivity.shape[1] != n_samples):
raise ValueError('Wrong shape for connectivity matrix: %s '
'when X is %s' % (connectivity.shape, X.shape))
# Make the connectivity matrix symmetric:
connectivity = connectivity + connectivity.T
# Convert connectivity matrix to LIL
if not sparse.isspmatrix_lil(connectivity):
if not sparse.isspmatrix(connectivity):
connectivity = sparse.lil_matrix(connectivity)
else:
connectivity = connectivity.tolil()
# Compute the number of nodes
n_components, labels = connected_components(connectivity)
if n_components > 1:
warnings.warn("the number of connected components of the "
"connectivity matrix is %d > 1. Completing it to avoid "
"stopping the tree early." % n_components,
stacklevel=2)
# XXX: Can we do without completing the matrix?
for i in xrange(n_components):
idx_i = np.where(labels == i)[0]
Xi = X[idx_i]
for j in xrange(i):
idx_j = np.where(labels == j)[0]
Xj = X[idx_j]
D = pairwise_distances(Xi, Xj, metric=affinity)
ii, jj = np.where(D == np.min(D))
ii = ii[0]
jj = jj[0]
connectivity[idx_i[ii], idx_j[jj]] = True
connectivity[idx_j[jj], idx_i[ii]] = True
return connectivity, n_components
# average and complete linkage
def linkage_tree(X, connectivity=None, n_components=None,
n_clusters=None, linkage='complete', affinity="euclidean",
return_distance=False):
"""Linkage agglomerative clustering based on a Feature matrix.
The inertia matrix uses a Heapq-based representation.
This is the structured version, that takes into account some topological
structure between samples.
Parameters
----------
X : array, shape (n_samples, n_features)
feature matrix representing n_samples samples to be clustered
connectivity : sparse matrix (optional).
connectivity matrix. Defines for each sample the neighboring samples
following a given structure of the data. The matrix is assumed to
be symmetric and only the upper triangular half is used.
Default is None, i.e, the Ward algorithm is unstructured.
n_components : int (optional)
Number of connected components. If None the number of connected
components is estimated from the connectivity matrix.
NOTE: This parameter is now directly determined
from the connectivity matrix and will be removed in 0.18
n_clusters : int (optional)
Stop early the construction of the tree at n_clusters. This is
useful to decrease computation time if the number of clusters is
not small compared to the number of samples. In this case, the
complete tree is not computed, thus the 'children' output is of
limited use, and the 'parents' output should rather be used.
This option is valid only when specifying a connectivity matrix.
linkage : {"average", "complete"}, optional, default: "complete"
Which linkage critera to use. The linkage criterion determines which
distance to use between sets of observation.
- average uses the average of the distances of each observation of
the two sets
- complete or maximum linkage uses the maximum distances between
all observations of the two sets.
affinity : string or callable, optional, default: "euclidean".
which metric to use. Can be "euclidean", "manhattan", or any
distance know to paired distance (see metric.pairwise)
return_distance : bool, default False
whether or not to return the distances between the clusters.
Returns
-------
children : 2D array, shape (n_nodes-1, 2)
The children of each non-leaf node. Values less than `n_samples`
correspond to leaves of the tree which are the original samples.
A node `i` greater than or equal to `n_samples` is a non-leaf
node and has children `children_[i - n_samples]`. Alternatively
at the i-th iteration, children[i][0] and children[i][1]
are merged to form node `n_samples + i`
n_components : int
The number of connected components in the graph.
n_leaves : int
The number of leaves in the tree.
parents : 1D array, shape (n_nodes, ) or None
The parent of each node. Only returned when a connectivity matrix
is specified, elsewhere 'None' is returned.
distances : ndarray, shape (n_nodes-1,)
Returned when return_distance is set to True.
distances[i] refers to the distance between children[i][0] and
children[i][1] when they are merged.
See also
--------
ward_tree : hierarchical clustering with ward linkage
"""
X = np.asarray(X)
if X.ndim == 1:
X = np.reshape(X, (-1, 1))
n_samples, n_features = X.shape
linkage_choices = {'complete': _hierarchical.max_merge,
'average': _hierarchical.average_merge,
}
try:
join_func = linkage_choices[linkage]
except KeyError:
raise ValueError(
'Unknown linkage option, linkage should be one '
'of %s, but %s was given' % (linkage_choices.keys(), linkage))
if connectivity is None:
from scipy.cluster import hierarchy # imports PIL
if n_clusters is not None:
warnings.warn('Partial build of the tree is implemented '
'only for structured clustering (i.e. with '
'explicit connectivity). The algorithm '
'will build the full tree and only '
'retain the lower branches required '
'for the specified number of clusters',
stacklevel=2)
if affinity == 'precomputed':
# for the linkage function of hierarchy to work on precomputed
# data, provide as first argument an ndarray of the shape returned
# by pdist: it is a flat array containing the upper triangular of
# the distance matrix.
i, j = np.triu_indices(X.shape[0], k=1)
X = X[i, j]
elif affinity == 'l2':
# Translate to something understood by scipy
affinity = 'euclidean'
elif affinity in ('l1', 'manhattan'):
affinity = 'cityblock'
elif callable(affinity):
X = affinity(X)
i, j = np.triu_indices(X.shape[0], k=1)
X = X[i, j]
out = hierarchy.linkage(X, method=linkage, metric=affinity)
children_ = out[:, :2].astype(np.int)
if return_distance:
distances = out[:, 2]
return children_, 1, n_samples, None, distances
return children_, 1, n_samples, None
if n_components is not None:
warnings.warn(
"n_components is now directly calculated from the connectivity "
"matrix and will be removed in 0.18",
DeprecationWarning)
connectivity, n_components = _fix_connectivity(X, connectivity)
connectivity = connectivity.tocoo()
# Put the diagonal to zero
diag_mask = (connectivity.row != connectivity.col)
connectivity.row = connectivity.row[diag_mask]
connectivity.col = connectivity.col[diag_mask]
connectivity.data = connectivity.data[diag_mask]
del diag_mask
if affinity == 'precomputed':
distances = X[connectivity.row, connectivity.col]
else:
# FIXME We compute all the distances, while we could have only computed
# the "interesting" distances
distances = paired_distances(X[connectivity.row],
X[connectivity.col],
metric=affinity)
connectivity.data = distances
if n_clusters is None:
n_nodes = 2 * n_samples - 1
else:
assert n_clusters <= n_samples
n_nodes = 2 * n_samples - n_clusters
if return_distance:
distances = np.empty(n_nodes - n_samples)
# create inertia heap and connection matrix
A = np.empty(n_nodes, dtype=object)
inertia = list()
# LIL seems to the best format to access the rows quickly,
# without the numpy overhead of slicing CSR indices and data.
connectivity = connectivity.tolil()
# We are storing the graph in a list of IntFloatDict
for ind, (data, row) in enumerate(zip(connectivity.data,
connectivity.rows)):
A[ind] = IntFloatDict(np.asarray(row, dtype=np.intp),
np.asarray(data, dtype=np.float64))
# We keep only the upper triangular for the heap
# Generator expressions are faster than arrays on the following
inertia.extend(_hierarchical.WeightedEdge(d, ind, r)
for r, d in zip(row, data) if r < ind)
del connectivity
heapify(inertia)
# prepare the main fields
parent = np.arange(n_nodes, dtype=np.intp)
used_node = np.ones(n_nodes, dtype=np.intp)
children = []
# recursive merge loop
for k in xrange(n_samples, n_nodes):
# identify the merge
while True:
edge = heappop(inertia)
if used_node[edge.a] and used_node[edge.b]:
break
i = edge.a
j = edge.b
if return_distance:
# store distances
distances[k - n_samples] = edge.weight
parent[i] = parent[j] = k
children.append((i, j))
# Keep track of the number of elements per cluster
n_i = used_node[i]
n_j = used_node[j]
used_node[k] = n_i + n_j
used_node[i] = used_node[j] = False
# update the structure matrix A and the inertia matrix
# a clever 'min', or 'max' operation between A[i] and A[j]
coord_col = join_func(A[i], A[j], used_node, n_i, n_j)
for l, d in coord_col:
A[l].append(k, d)
# Here we use the information from coord_col (containing the
# distances) to update the heap
heappush(inertia, _hierarchical.WeightedEdge(d, k, l))
A[k] = coord_col
# Clear A[i] and A[j] to save memory
A[i] = A[j] = 0
# Separate leaves in children (empty lists up to now)
n_leaves = n_samples
# # return numpy array for efficient caching
children = np.array(children)[:, ::-1]
if return_distance:
return children, n_components, n_leaves, parent, distances
return children, n_components, n_leaves, parent
# Matching names to tree-building strategies
def _complete_linkage(*args, **kwargs):
kwargs['linkage'] = 'complete'
return linkage_tree(*args, **kwargs)
def _average_linkage(*args, **kwargs):
kwargs['linkage'] = 'average'
return linkage_tree(*args, **kwargs)
_TREE_BUILDERS = dict(
ward=ward_tree,
complete=_complete_linkage,
average=_average_linkage,
)
def _hc_cut(n_clusters, children, n_leaves):
"""Function cutting the ward tree for a given number of clusters.
Parameters
----------
n_clusters : int or ndarray
The number of clusters to form.
children : list of pairs. Length of n_nodes
The children of each non-leaf node. Values less than `n_samples` refer
to leaves of the tree. A greater value `i` indicates a node with
children `children[i - n_samples]`.
n_leaves : int
Number of leaves of the tree.
Returns
-------
labels : array [n_samples]
cluster labels for each point
"""
if n_clusters > n_leaves:
raise ValueError('Cannot extract more clusters than samples: '
'%s clusters where given for a tree with %s leaves.'
% (n_clusters, n_leaves))
# In this function, we store nodes as a heap to avoid recomputing
# the max of the nodes: the first element is always the smallest
# We use negated indices as heaps work on smallest elements, and we
# are interested in largest elements
# children[-1] is the root of the tree
nodes = [-(max(children[-1]) + 1)]
for i in xrange(n_clusters - 1):
# As we have a heap, nodes[0] is the smallest element
these_children = children[-nodes[0] - n_leaves]
# Insert the 2 children and remove the largest node
heappush(nodes, -these_children[0])
heappushpop(nodes, -these_children[1])
label = np.zeros(n_leaves, dtype=np.intp)
for i, node in enumerate(nodes):
label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i
return label
class AgglomerativeClustering(BaseEstimator, ClusterMixin):
"""
Agglomerative Clustering
Recursively merges the pair of clusters that minimally increases
a given linkage distance.
Parameters
----------
n_clusters : int, default=2
The number of clusters to find.
connectivity : array-like or callable, optional
Connectivity matrix. Defines for each sample the neighboring
samples following a given structure of the data.
This can be a connectivity matrix itself or a callable that transforms
the data into a connectivity matrix, such as derived from
kneighbors_graph. Default is None, i.e, the
hierarchical clustering algorithm is unstructured.
affinity : string or callable, default: "euclidean"
Metric used to compute the linkage. Can be "euclidean", "l1", "l2",
"manhattan", "cosine", or 'precomputed'.
If linkage is "ward", only "euclidean" is accepted.
memory : Instance of joblib.Memory or string (optional)
Used to cache the output of the computation of the tree.
By default, no caching is done. If a string is given, it is the
path to the caching directory.
n_components : int (optional)
Number of connected components. If None the number of connected
components is estimated from the connectivity matrix.
NOTE: This parameter is now directly determined from the connectivity
matrix and will be removed in 0.18
compute_full_tree : bool or 'auto' (optional)
Stop early the construction of the tree at n_clusters. This is
useful to decrease computation time if the number of clusters is
not small compared to the number of samples. This option is
useful only when specifying a connectivity matrix. Note also that
when varying the number of clusters and using caching, it may
be advantageous to compute the full tree.
linkage : {"ward", "complete", "average"}, optional, default: "ward"
Which linkage criterion to use. The linkage criterion determines which
distance to use between sets of observation. The algorithm will merge
the pairs of cluster that minimize this criterion.
- ward minimizes the variance of the clusters being merged.
- average uses the average of the distances of each observation of
the two sets.
- complete or maximum linkage uses the maximum distances between
all observations of the two sets.
pooling_func : callable, default=np.mean
This combines the values of agglomerated features into a single
value, and should accept an array of shape [M, N] and the keyword
argument ``axis=1``, and reduce it to an array of size [M].
Attributes
----------
labels_ : array [n_samples]
cluster labels for each point
n_leaves_ : int
Number of leaves in the hierarchical tree.
n_components_ : int
The estimated number of connected components in the graph.
children_ : array-like, shape (n_nodes-1, 2)
The children of each non-leaf node. Values less than `n_samples`
correspond to leaves of the tree which are the original samples.
A node `i` greater than or equal to `n_samples` is a non-leaf
node and has children `children_[i - n_samples]`. Alternatively
at the i-th iteration, children[i][0] and children[i][1]
are merged to form node `n_samples + i`
"""
def __init__(self, n_clusters=2, affinity="euclidean",
memory=Memory(cachedir=None, verbose=0),
connectivity=None, n_components=None,
compute_full_tree='auto', linkage='ward',
pooling_func=np.mean):
self.n_clusters = n_clusters
self.memory = memory
self.n_components = n_components
self.connectivity = connectivity
self.compute_full_tree = compute_full_tree
self.linkage = linkage
self.affinity = affinity
self.pooling_func = pooling_func
def fit(self, X, y=None):
"""Fit the hierarchical clustering on the data
Parameters
----------
X : array-like, shape = [n_samples, n_features]
The samples a.k.a. observations.
Returns
-------
self
"""
X = check_arrays(X)[0]
memory = self.memory
if isinstance(memory, six.string_types):
memory = Memory(cachedir=memory, verbose=0)
if self.linkage == "ward" and self.affinity != "euclidean":
raise ValueError("%s was provided as affinity. Ward can only "
"work with euclidean distances." %
(self.affinity, ))
if self.linkage not in _TREE_BUILDERS:
raise ValueError("Unknown linkage type %s."
"Valid options are %s" % (self.linkage,
_TREE_BUILDERS.keys()))
tree_builder = _TREE_BUILDERS[self.linkage]
connectivity = self.connectivity
if self.connectivity is not None:
if callable(self.connectivity):
connectivity = self.connectivity(X)
connectivity = check_arrays(
connectivity, accept_sparse=['csr', 'coo', 'lil'])
n_samples = len(X)
compute_full_tree = self.compute_full_tree
if self.connectivity is None:
compute_full_tree = True
if compute_full_tree == 'auto':
# Early stopping is likely to give a speed up only for
# a large number of clusters. The actual threshold
# implemented here is heuristic
compute_full_tree = self.n_clusters < max(100, .02 * n_samples)
n_clusters = self.n_clusters
if compute_full_tree:
n_clusters = None
# Construct the tree
kwargs = {}
kwargs['return_distance'] = True
if self.linkage != 'ward':
kwargs['linkage'] = self.linkage
kwargs['affinity'] = self.affinity
self.children_, self.n_components_, self.n_leaves_, parents, \
self.distance = memory.cache(tree_builder)(X, connectivity,
n_components=self.n_components,
n_clusters=n_clusters,
**kwargs)
# Cut the tree
if compute_full_tree:
self.labels_ = _hc_cut(self.n_clusters, self.children_,
self.n_leaves_)
else:
labels = _hierarchical.hc_get_heads(parents, copy=False)
# copy to avoid holding a reference on the original array
labels = np.copy(labels[:n_samples])
# Reassign cluster numbers
self.labels_ = np.searchsorted(np.unique(labels), labels)
return self
Below is a simple example showing how to use the modified AgglomerativeClustering class:
import numpy as np
import AgglomerativeClustering # Make sure to use the new one!!!
d = np.array(
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
)
clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True,
affinity='euclidean', linkage='complete')
clustering.fit(d)
print clustering.distance
That example has the following output:
[ 5.19615242 10.39230485]
This can then be compared to a scipy.cluster.hierarchy.linkage implementation:
import numpy as np
from scipy.cluster.hierarchy import linkage
d = np.array(
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
)
print linkage(d, 'complete')
Output:
[[ 1. 2. 5.19615242 2. ]
[ 0. 3. 10.39230485 3. ]]
Just for kicks I decided to follow up on your statement about performance:
import AgglomerativeClustering
from scipy.cluster.hierarchy import linkage
import numpy as np
import time
l = 1000; iters = 50
d = [np.random.random(100) for _ in xrange(1000)]
t = time.time()
for _ in xrange(iters):
clustering = AgglomerativeClustering(n_clusters=l-1,
affinity='euclidean', linkage='complete')
clustering.fit(d)
scikit_time = (time.time() - t) / iters
print 'scikit-learn Time: {0}s'.format(scikit_time)
t = time.time()
for _ in xrange(iters):
linkage(d, 'complete')
scipy_time = (time.time() - t) / iters
print 'SciPy Time: {0}s'.format(scipy_time)
print 'scikit-learn Speedup: {0}'.format(scipy_time / scikit_time)
This gave me the following results:
scikit-learn Time: 0.566560001373s
SciPy Time: 0.497740001678s
scikit-learn Speedup: 0.878530077083
According to this, the SciPy implementation takes 0.88x the execution time of the scikit-learn implementation, i.e. SciPy's implementation is about 1.14x faster. It should be noted that:
I modified the original scikit-learn implementation
I only did a small number of iterations
I only tested a small number of test cases (both cluster size as well as number of items per dimension should be tested)
I ran SciPy second, so it had the advantage of obtaining more cache hits on the source data
The two methods don't exactly do the same thing.
With all of that in mind, you should really evaluate which method performs better for your specific application. There are also functional reasons to go with one implementation over the other.
|
mysql-python install fatal error
|
I am trying to pip install the mysql-python connector but it keeps erroring on me. It works fine on my Mac and another Windows machine, but not this one. I have downloaded Visual Studio C++ and tried it as both 32-bit and 64-bit. Does anyone have an idea of how to get around this?
_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h': No s
uch file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\VC\\BIN\\c
l.exe' failed with exit status 2
----------------------------------------
Cleaning up...
Command C:\Users\Admin1\Desktop\python\virtual\Scripts\python.exe -c "import set
uptools, tokenize;__file__='C:\\Users\\Admin1\\Desktop\\python\\virtual\\build\\
MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).r
ead().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\admin
1\appdata\local\temp\1\pip-6pmwrd-record\install-record.txt --single-version-ext
ernally-managed --compile --install-headers C:\Users\Admin1\Desktop\python\virtu
al\include\site\python2.7 failed with error code 1 in C:\Users\Admin1\Desktop\py
thon\virtual\build\MySQL-python
Storing debug log for failure in C:\Users\Admin1\pip\pip.log
|
For 64-bit Windows, install using a wheel:
pip install wheel
Then download the matching wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python and install it:
pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl
|
Deploying a local django app using openshift
|
I've built a webapp using django. In order to host it I'm trying to use openshift but am having difficulty in getting anything working. There seems to be a lack of step by steps for this. So far I have git working fine, the app works on the local dev environment and I've successfully created an app on openshift.
Following the URL on openshift once created I just get the standard page of "Welcome to your Openshift App".
I've followed this https://developers.openshift.com/en/python-getting-started.html#step1 to try changing the wsgi.py file. Changed it to hello world, pushed it and yet I still get the openshift default page.
Is there a good comprehensive resource anywhere for getting local Django apps up and running on Openshift? Most of what I can find on google are just example apps which aren't that useful as I already have mine built.
|
Edit: Remember this is a platform-dependent answer and, since the OpenShift platform serving Django may change, this answer could become invalid. As of Apr 1 2016, this answer remains valid in its entirety.
Many times this happened to me and, since I had to deploy at least 5 applications, I had to create my own lifecycle:
Don't use the Django cartridge, but the python 2.7 cartridge. Using the Django cartridge and trying to update the Django version brings many headaches that you avoid entirely if you set things up from scratch.
Clone your repository via git. You will get yourproject and...
# git clone yourrepo@rhcloud.com:app.git yourproject <- replace it with your actual openshift repo address
yourproject/
+---wsgi.py
+---setup.py
*---.openshift/ (with its contents - I omit them now)
Make a virtualenv for your brand-new repository cloned into your local machine. Activate it and install Django via pip along with all the dependencies you will need (e.g. a new Pillow package, the MySQL database package, ...). Create a django project there. Say, yourdjproject. Edit: Create, alongside, a wsgi/static directory with an empty dummy file (e.g. .gitkeep; the name is just convention: you can use any name you want).
#assuming you have virtualenv-wrapper installed and set-up
mkvirtualenv myenvironment
workon myenvironment
pip install Django[==x.y[.z]] #select your version; optional.
#creating the project inside the git repository
cd path/to/yourproject/
django-admin.py startproject yourjdproject .
#creating dummy wsgi/static directory for collectstatic
mkdir -p wsgi/static
touch wsgi/static/.gitkeep
Create a django app there. Say, yourapp. Include it in your project.
You will have something like this (django 1.7):
yourproject/
+---wsgi/
| +---static/
| +---.gitkeep
+---wsgi.py
+---setup.py
+---.openshift/ (with its contents - I omit them now)
+---yourdjproject/
| +----__init__.py
| +----urls.py
| +----settings.py
| +----wsgi.py
+---+yourapp/
+----__init__.py
+----models.py
+----views.py
+----tests.py
+----migrations
+---__init__.py
Set up your django application as you'd always do (I will not detail it here). Remember to include all the dependencies you installed, in the setup.py file accordingly (This answer is not the place to describe WHY, but the setup.py is the package installer and openshift uses it to reinstall your app on each deploy, so keep it up to date with the dependencies).
Create your migrations for your models.
Edit the openshift-given WSGI script as follows. You will be including the django WSGI application AFTER including the virtualenv (openshift creates one for python cartridges), so the pythonpath will be properly set up.
#!/usr/bin/python
import os
virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
pass
from yourdjproject.wsgi import application
Edit the hooks in .openshift/action_hooks to automatically perform db synchronization and media management:
build hook
#!/bin/bash
#this is .openshift/action/hooks/build
#remember to make it +x so openshift can run it.
if [ ! -d ${OPENSHIFT_DATA_DIR}media ]; then
mkdir -p ${OPENSHIFT_DATA_DIR}media
fi
ln -snf ${OPENSHIFT_DATA_DIR}media $OPENSHIFT_REPO_DIR/wsgi/static/media
######################### end of file
deploy hook
#!/bin/bash
#this one is the deploy hook .openshift/action_hooks/deploy
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
cd $OPENSHIFT_REPO_DIR
echo "Executing 'python manage.py migrate'"
python manage.py migrate
echo "Executing 'python manage.py collectstatic --noinput'"
python manage.py collectstatic --noinput
########################### end of file
Now you have the wsgi ready, pointing to the django wsgi by import, and you have your scripts running. It is time to consider the locations for static and media files we used in such scripts. Edit your Django settings to tell where did you want such files:
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourjdproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourjdproject', 'templates'),)
Create a sample view, a sample model, a sample migration, and PUSH everything.
Edit: Remember to put the right settings in place to cover both environments, so you can test and run in a local environment AND in OpenShift (usually this would involve having a local_settings.py, optionally imported if the file exists, but I will omit that part and put everything in the same file). Please read this file carefully, since things like yourlocaldbname are values you MUST set accordingly:
"""
Django settings for yourdjproject project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
ON_OPENSHIFT = False
if 'OPENSHIFT_REPO_DIR' in os.environ:
ON_OPENSHIFT = True
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '60e32dn-za#y=x!551tditnset(o9b@2bkh1)b$hn&0$ec5-j7'
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'yourapp',
#more apps here
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'yourdjproject.urls'
WSGI_APPLICATION = 'yourdjproject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
if ON_OPENSHIFT:
DEBUG = True
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['*']
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'youropenshiftgenerateddatabasename',
'USER': os.getenv('OPENSHIFT_MYSQL_DB_USERNAME'),
'PASSWORD': os.getenv('OPENSHIFT_MYSQL_DB_PASSWORD'),
'HOST': os.getenv('OPENSHIFT_MYSQL_DB_HOST'),
'PORT': os.getenv('OPENSHIFT_MYSQL_DB_PORT'),
}
}
else:
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', #If you want to use MySQL
'NAME': 'yourlocaldbname',
'USER': 'yourlocalusername',
'PASSWORD': 'yourlocaluserpassword',
'HOST': 'yourlocaldbhost',
'PORT': '3306', #this will be the case for MySQL
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'yr-LC'
TIME_ZONE = 'Your/Timezone/Here'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'templates'),)
Git add, commit, push, enjoy.
cd path/to/yourproject/
git add .
git commit -m "Your Message"
git push origin master # THIS COMMAND WILL TAKE LONG
# git enjoy
Your sample Django app is almost ready to go! But if your application has external dependencies, it will blow up for no apparent reason. This is the reason I told you to develop a simple application. Now it is time to make your dependencies work.
[untested!] You can edit the deploy hook and add a command after the command cd $OPENSHIFT_REPO_DIR, like this: pip install -r requirements.txt, assuming the requirements.txt file exists in your project. pip should exist in your virtualenv, but if it does not, you can see the next solution.
Alternatively, the setup.py is an approach that OpenShift already provides. What I did many times -assuming the requirements.txt file exists- is the following (a sketch of the resulting setup.py follows the steps below):
Open that file, read all its lines.
For each line, if it has a #, remove the # and everything after.
strip leading and trailing whitespaces.
Discard empty lines, and have the result (i.e. remaining lines) as an array.
That result must be assigned to the install_requires= keyword argument in the setup call in the setup.py file.
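A minimal sketch of that parsing, assuming the requirements.txt sits next to setup.py (the project name and exact layout here are hypothetical):
# setup.py -- sketch only; adapt the name and paths to your project.
import os
from setuptools import setup

here = os.path.dirname(os.path.abspath(__file__))

def read_requirements(path):
    """Return requirements.txt lines with comments and blanks stripped."""
    requirements = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()  # drop comments and whitespace
            if line:
                requirements.append(line)
    return requirements

setup(
    name='yourdjproject',  # hypothetical
    version='1.0',
    install_requires=read_requirements(os.path.join(here, 'requirements.txt')),
)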
I'm sorry I did not include this in the tutorial before! But you need to actually install Django on the server. Perhaps an obvious suggestion, and every Python developer could know that beforehand, but seizing this opportunity I remark: include the appropriate Django dependency in the requirements.txt (or in setup.py, depending on whether or not you use a requirements.txt file), as you would include any other dependency.
This should help you to mount a Django application, and it took me a lot of time to standardize the process. Enjoy it, and don't hesitate to contact me via a comment if something goes wrong.
Edit (for those with the same problem who don't expect to find the answer in this post's comments): Remember that if you edit the build or deploy hook files under Windows and you push the files, they will fly to the server with 0644 permissions, since Windows does not support this permission scheme Unix has, and has no way to assign permissions since these files do not have any extension. You will notice this because your scripts will not be executed when deploying. So try to deploy those files only from Unix-based systems.
Edit 2: You can use git hooks (e.g. pre_commit) to set permissions for certain files, like pipeline scripts (build, deploy, ...). See the comments by @StijndeWitt and @OliverBurdekin in this answer, and also this question for more details.
|
How to find the number of nested lists in a list?
|
The function takes a list and returns an int depending on how many lists are in the list not including the list itself. (For the sake of simplicity we can assume everything is either an integer or a list.)
For example:
x=[1,2,[[[]]],[[]],3,4,[1,2,3,4,[[]] ] ]
count_list(x) # would return 8
I think using recursion would help but I am not sure how to implement it, this is what I have so far.
def count_list(a,count=None, i=None):
if count==None and i==None:
count=0
i=0
if i>len(a)
return(count)
if a[i]==list
i+=1
count+=1
return(count_list(a[i][i],count))
else:
i+=1
return(count_list(a[i]))
|
You can do it with a recursion function :
def count(l):
return sum(1+count(i) for i in l if isinstance(i,list))
Demo:
>>> x=[1,2,[[[]]],[[]],3,4,[1,2,3,4,[[]] ] ]
>>> count(x)
8
|
OverflowError occurs when using cython with a large int
|
python 3.4, windows 10, cython 0.21.1
I'm compiling this function to c with cython
def weakchecksum(data):
"""
Generates a weak checksum from an iterable set of bytes.
"""
cdef long a, b, l
a = b = 0
l = len(data)
for i in range(l):
a += data[i]
b += (l - i)*data[i]
return (b << 16) | a, a, b
which produces this error:
"OverflowError: Python int too large to convert to C long"
I've also tried declaring them as unsigned longs. What type do I use to work with really large numbers? If it's too large for a c long are there any workarounds?
|
If you make sure that your calculations are in c (for instance, declare i to be long, and put the data element into a cdefed variable or cast it before calculation), you won't get this error. Your actual results, though, could vary depending on platform, depending (potentially) on the exact assembly code generated and the resulting treatment of overflows. There are better algorithms for this, as @cod3monk3y has noted (look at the "simple checksums" link).
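For illustration, a rough (untested) sketch of the function with the arithmetic kept in C, following the suggestion above; it assumes each individual element of data fits in a C long, as bytes do:
def weakchecksum(data):
    """
    Same checksum as in the question, but with the loop arithmetic done
    entirely on C longs, so no oversized Python int has to be converted
    mid-expression. Overflow then wraps silently instead of raising.
    """
    cdef long a, b, l, i, d
    a = b = 0
    l = len(data)
    for i in range(l):
        d = data[i]        # pull the element into a C long once
        a += d
        b += (l - i) * d
    return (b << 16) | a, a, b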
|
Checking whether data frame is copy or view in Pandas
|
Is there an easy way to check whether two data frames are different copies or views of the same data that doesn't involve manipulations? I'm trying to get a grip on when each is generated, and given how idiosyncratic the rules seem to be, I'd like an easy way to test.
For example, I thought "id(df.values)" would be stable across views, but it doesn't seem to be:
# Make two data frames that are views of same data.
df = pd.DataFrame([[1,2,3,4],[5,6,7,8]], index = ['row1','row2'],
columns = ['a','b','c','d'])
df2 = df.iloc[0:2,:]
# Demonstrate they are views:
df.iloc[0,0] = 99
df2.iloc[0,0]
Out[70]: 99
# Now try and compare the id on values attribute
# Different despite being views!
id(df.values)
Out[71]: 4753564496
id(df2.values)
Out[72]: 4753603728
# And we can of course compare df and df2
df is df2
Out[73]: False
Other answers I've looked up try to give rules, but they don't seem consistent, and they also don't answer this question of how to test:
What rules does Pandas use to generate a view vs a copy?
Pandas: Subindexing dataframes: Copies vs views
Understanding pandas dataframe indexing
Re-assignment in Pandas: Copy or view?
And of course:
- http://pandas.pydata.org/pandas-docs/version/0.15.0/indexing.html#returning-a-view-versus-a-copy
Edit: for clarity
UPDATE: Comments below seem to answer the question -- looking at the df.values.base attribute rather than df.values attribute does it, as does a reference to the df._is_copy attribute (though the latter is probably very bad form since it's an internal).
|
Answers from HYRY and Marius in comments!
One can check either by:
testing equivalence of the values.base attribute rather than the values attribute, as in:
df.values.base is df2.values.base instead of df.values is df2.values.
or using the (admittedly internal) _is_view attribute (df2._is_view returns True).
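For example, a quick sanity check along these lines (exact results may vary with the pandas version; the expected values follow the discussion above):
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
                  index=['row1', 'row2'], columns=['a', 'b', 'c', 'd'])
df2 = df.iloc[0:2, :]  # a view of the same data, as in the question
df3 = df.copy()        # an independent copy

print(df.values.base is df2.values.base)  # expected True  -> shared data
print(df.values.base is df3.values.base)  # expected False -> separate copy
print(df2._is_view)                       # expected True (internal attribute)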
Thanks everyone!
|
pandas create new column based on values from other columns
|
I've tried different methods from other questions but still can't seem to find the right answer for my problem. The critical piece of this is that if the person is counted as Hispanic they can't be counted as anything else. Even if they have a "1" in another ethnicity column they are still counted as Hispanic, not as two or more races. Similarly, if the sum of all the ERI columns is greater than 1, they are counted as two or more races and can't be counted as a unique ethnicity (except for Hispanic). Hopefully this makes sense. Any help will be greatly appreciated.
It's almost like doing a for loop through each row, and if each record meets a criterion it is added to one list and eliminated from the original.
From the dataframe below I need to calculate a new column based off of the following:
========================= CRITERIA ===============================
IF [ERI_Hispanic] = 1 THEN RETURN "Hispanic"
ELSE IF SUM([ERI_AmerInd_AKNatv] + [ERI_Asian] + [ERI_Black_Afr.Amer] + [ERI_HI_PacIsl] + [ERI_White]) > 1 THEN RETURN "Two or More"
ELSE IF [ERI_AmerInd_AKNatv] = 1 THEN RETURN "A/I AK Native"
ELSE IF [ERI_Asian] = 1 THEN RETURN "Asian"
ELSE IF [ERI_Black_Afr.Amer] = 1 THEN RETURN "Black/AA"
ELSE IF [ERI_HI_PacIsl] = 1 THEN RETURN "Haw/Pac Isl."
ELSE IF [ERI_White] = 1 THEN RETURN "White"
Comment: If the ERI Flag for Hispanic is True (1), then employee is classified as "Hispanic"
Comment: If more than 1 non-Hispanic ERI Flag is true, return "Two or More"
====================== DATAFRAME ===========================
In [13]: df1
Out [13]:
lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined
0 MOST JEFF E 0 0 0 0 0 1 White
1 CRUISE TOM E 0 0 0 1 0 0 White
2 DEPP JOHNNY 0 0 0 0 0 1 Unknown
3 DICAP LEO 0 0 0 0 0 1 Unknown
4 BRANDO MARLON E 0 0 0 0 0 0 White
5 HANKS TOM 0 0 0 0 0 1 Unknown
6 DENIRO ROBERT E 0 1 0 0 0 1 White
7 PACINO AL E 0 0 0 0 0 1 White
8 WILLIAMS ROBIN E 0 0 1 0 0 0 White
9 EASTWOOD CLINT E 0 0 0 0 0 1 White
|
OK, two steps to this - first is to write a function that does the translation you want - I've put an example together based on your pseudo-code:
def label_race (row):
if row['eri_hispanic'] == 1 :
return 'Hispanic'
if row['eri_afr_amer'] + row['eri_asian'] + row['eri_hawaiian'] + row['eri_nat_amer'] + row['eri_white'] > 1 :
return 'Two Or More'
if row['eri_nat_amer'] == 1 :
return 'A/I AK Native'
if row['eri_asian'] == 1:
return 'Asian'
if row['eri_afr_amer'] == 1:
return 'Black/AA'
if row['eri_hawaiian'] == 1:
return 'Haw/Pac Isl.'
if row['eri_white'] == 1:
return 'White'
return 'Other'
You may want to go over this, but it seems to do the trick - notice that the parameter going into the function is considered to be a Series object labelled "row".
Next, use the apply function in pandas to apply the function - e.g.
df.apply (lambda row: label_race (row),axis=1)
Note the axis=1 specifier, that means that the application is done at a row, rather than a column level. The results are here:
0 White
1 Hispanic
2 White
3 White
4 Other
5 White
6 Two Or More
7 White
8 Haw/Pac Isl.
9 White
If you're happy with those results, then run it again, posting the results into a new column in your original dataframe.
df['race_label'] = df.apply (lambda row: label_race (row),axis=1)
The resultant dataframe looks like this (scroll to the right to see the new column):
lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined race_label
0 MOST JEFF E 0 0 0 0 0 1 White White
1 CRUISE TOM E 0 0 0 1 0 0 White Hispanic
2 DEPP JOHNNY NaN 0 0 0 0 0 1 Unknown White
3 DICAP LEO NaN 0 0 0 0 0 1 Unknown White
4 BRANDO MARLON E 0 0 0 0 0 0 White Other
5 HANKS TOM NaN 0 0 0 0 0 1 Unknown White
6 DENIRO ROBERT E 0 1 0 0 0 1 White Two Or More
7 PACINO AL E 0 0 0 0 0 1 White White
8 WILLIAMS ROBIN E 0 0 1 0 0 0 White Haw/Pac Isl.
9 EASTWOOD CLINT E 0 0 0 0 0 1 White White
|
Python NLTK: SyntaxError: Non-ASCII character '\xc3' in file (Sentiment Analysis - NLP)
|
I am playing around with NLTK to do an assignment on sentiment analysis. I am using Python 2.7 with NLTK 3.0 and NumPy 1.9.1.
This is the code:
__author__ = 'karan'
import nltk
import re
import sys
def main():
print("Start");
# getting the stop words
stopWords = open("english.txt","r");
stop_word = stopWords.read().split();
AllStopWrd = []
for wd in stop_word:
AllStopWrd.append(wd);
print("stop words-> ",AllStopWrd);
# sample and also cleaning it
tweet1= 'Love, my new toyà½Ã¸Ã ½Ã¸#iPhone6. Its good http://t.co/sHY1cab7sx'
print("old tweet-> ",tweet1)
tweet1 = tweet1.lower()
tweet1 = ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)"," ",tweet1).split())
print(tweet1);
tw = tweet1.split()
print(tw)
#tokenize
sentences = nltk.word_tokenize(tweet1)
print("tokenized ->", sentences)
#remove stop words
Otweet =[]
for w in tw:
if w not in AllStopWrd:
Otweet.append(w);
print("sans stop word-> ",Otweet)
# get taggers for neg/pos/inc/dec/inv words
taggers ={}
negWords = open("neg.txt","r");
neg_word = negWords.read().split();
print("ned words-> ",neg_word)
posWords = open("pos.txt","r");
pos_word = posWords.read().split();
print("pos words-> ",pos_word)
incrWords = open("incr.txt","r");
inc_word = incrWords.read().split();
print("incr words-> ",inc_word)
decrWords = open("decr.txt","r");
dec_word = decrWords.read().split();
print("dec wrds-> ",dec_word)
invWords = open("inverse.txt","r");
inv_word = invWords.read().split();
print("inverse words-> ",inv_word)
for nw in neg_word:
taggers.update({nw:'negative'});
for pw in pos_word:
taggers.update({pw:'positive'});
for iw in inc_word:
taggers.update({iw:'inc'});
for dw in dec_word:
taggers.update({dw:'dec'});
for ivw in inv_word:
taggers.update({ivw:'inv'});
print("tagger-> ",taggers)
print(taggers.get('little'))
# get parts of speech
posTagger = [nltk.pos_tag(tw)]
print("posTagger-> ",posTagger)
main();
This is the error that I am getting when running my code:
SyntaxError: Non-ASCII character '\xc3' in file C:/Users/karan/PycharmProjects/mainProject/sentiment.py on line 19, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
How do I fix this error?
I also tried the code using Python 3.4.2 and with nltk 3.0 and NumPy 1.9.1 but then I get the error:
Traceback (most recent call last):
File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 80, in <module>
main();
File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 72, in main
posTagger = [nltk.pos_tag(tw)]
File "C:\Python34\lib\site-packages\nltk\tag\__init__.py", line 100, in pos_tag
tagger = load(_POS_TAGGER)
File "C:\Python34\lib\site-packages\nltk\data.py", line 779, in load
resource_val = pickle.load(opened_resource)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcb in position 0: ordinal not in range(128)
|
Add the following to the top of your file # coding=utf-8
If you go to the link in the error you can seen the reason why:
Defining the Encoding
Python will default to ASCII as standard encoding if no other
encoding hints are given.
To define a source code encoding, a magic comment must
be placed into the source files either as first or second
line in the file, such as:
# coding=<encoding name>
|
Django Rest Framework - Authentication credentials were not provided
|
I'm developing an API using Django Rest Framework. I'm trying to list or create an "Order" object, but when I try to access it, the console gives me this error:
{"detail": "Authentication credentials were not provided."}
Views:
from django.shortcuts import render
from rest_framework import viewsets
from django.contrib.auth.models import User
from rest_framework.renderers import JSONRenderer, YAMLRenderer
from rest_framework.response import Response
from rest_framework.views import APIView
from order.models import *
from API.serializers import *
from rest_framework.permissions import IsAuthenticated
class OrderViewSet(viewsets.ModelViewSet):
model = Order
serializer_class = OrderSerializer
permission_classes = (IsAuthenticated,)
Serializer:
class OrderSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Order
fields = ('field1', 'field2')
And my URLs:
# -*- coding: utf-8 -*-
from django.conf.urls import patterns, include, url
from django.conf import settings
from django.contrib import admin
from django.utils.functional import curry
from django.views.defaults import *
from rest_framework import routers
from API.views import *
admin.autodiscover()
handler500 = "web.views.server_error"
handler404 = "web.views.page_not_found_error"
router = routers.DefaultRouter()
router.register(r'orders', OrdersViewSet)
urlpatterns = patterns('',
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
url(r'^api-token-auth/', 'rest_framework.authtoken.views.obtain_auth_token'),
url(r'^api/', include(router.urls)),
)
And then I'm using this command in the console:
curl -X GET http://127.0.0.1:8000/api/orders/ -H 'Authorization: Token 12383dcb52d627eabd39e7e88501e96a2sadc55'
And the error say:
{"detail": "Authentication credentials were not provided."}
|
If you are running Django on Apache using mod_wsgi you have to add
WSGIPassAuthorization On
in your httpd.conf. Otherwise authorization header will be stripped out by mod_wsgi.
|
Optional dot in regex
|
Say I want to replace all the matches of Mr. and Mr with Mister.
I am using the following regex: \bMr(\.)?\b to match either Mr. or just Mr. Then, I use the re.sub() method to do the replacement.
What is puzzling me is that it is replacing Mr. with Mister.. Why is this keeping the dot . at the end? It looks like it is not matching the Mr\. case but just Mr.
import re
s="a rMr. Nobody Mr. Nobody is Mr Nobody and Mra Nobody."
re.sub(r"\bMr(\.)?\b","Mister", s)
Returns:
'a rMr. Nobody Mister. Nobody is Mister Nobody and Mra Nobody.'
I also tried with the following, but also without luck:
re.sub(r"\b(Mr\.|Mr)\b","Mister", s)
My desired output is:
'a rMr. Nobody Mister Nobody is Mister Nobody and Mra Nobody.'
^ ^
no dot this should be kept as it is
|
I think you want to capture 'Mr' followed by either a '.' or a word boundary:
r"\bMr(?:\.|\b)"
In use:
>>> import re
>>> re.sub(r"\bMr(?:\.|\b)", "Mister", "a rMr. Nobody Mr. Nobody is Mr Nobody and Mra Nobody.")
'a rMr. Nobody Mister Nobody is Mister Nobody and Mra Nobody.'
|
Which function in spark is used to combine two RDDs by keys
|
Let us say I have the following two RDDs, with the following key-value pairs.
rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]
and
rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]
Now, I want to join them by key values, so for example I want to return the following
ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3, value4, value7]) ]
How can I do this in Spark, using Python or Scala? One way is to use join, but join would create a tuple inside the tuple. But I want to only have one tuple per key-value pair.
|
Just use join and then map the resulting rdd.
rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }
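Since the question also asks about Python, a rough PySpark equivalent (a sketch, assuming the values really are lists as in the example): join yields (key, (left_value, right_value)) pairs, so you concatenate the two lists in the map step.
ret = rdd1.join(rdd2).map(lambda kv: (kv[0], kv[1][0] + kv[1][1]))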
|
How to check whether two lists are circularly identical in Python
|
For instance, I have lists:
a[0] = [1, 1, 1, 0, 0]
a[1] = [1, 1, 0, 0, 1]
a[2] = [0, 1, 1, 1, 0]
# and so on
They seem to be different, but if it is supposed that the start and the end are connected, then they are circularly identical.
The problem is, each list which I have has a length of 55 and contains only three ones and 52 zeros in it. Without circular condition, there are 26,235 (55 choose 3) lists. However, if the condition 'circular' exists, there are a huge number of circularly identical lists
Currently I check circular identity as follows:
def is_dup(a, b):
for i in range(len(a)):
if a == list(numpy.roll(b, i)): # shift b circularly by i
return True
return False
This function requires 55 cyclic shift operations at the worst case. And there are 26,235 lists to be compared with each other. In short, I need 55 * 26,235 * (26,235 - 1) / 2 = 18,926,847,225 computations. It's about nearly 20 Giga!
Is there any good way to do it with fewer computations? Or any data type that supports circularity?
|
First off, this can be done in O(n) in terms of the length of the list
You can notice that if you duplicate your list ([1, 2, 3] becomes [1, 2, 3, 1, 2, 3]), then the new list will definitely hold all possible cyclic rotations of the original.
So all you need is to check whether the list you are searching for is contained in the doubled version of your starting list. In Python you can achieve this in the following way (assuming that the lengths are the same).
list1 = [1, 1, 1, 0, 0]
list2 = [1, 1, 0, 0, 1]
print ' '.join(map(str, list2)) in ' '.join(map(str, list1 * 2))
Some explanation about my one-liner:
list * 2 will combine a list with itself, map(str, [1, 2]) converts all numbers to strings and ' '.join() will convert the array ['1', '2', '111'] into the string '1 2 111'.
As pointed out by some people in the comments, the one-liner can potentially give some false positives, so to cover all the possible edge cases:
def isCircular(arr1, arr2):
if len(arr1) != len(arr2):
return False
str1 = ' '.join(map(str, arr1))
str2 = ' '.join(map(str, arr2))
if len(str1) != len(str2):
return False
return str1 in str2 + ' ' + str2
P.S.1 when speaking about time complexity, it is worth noticing that O(n) will be achieved only if the substring can be found in O(n) time. It is not always so and depends on the implementation in your language (although it can potentially be done in linear time, with KMP for example).
P.S.2 for people who are wary of string operations and, because of that, think the answer is not good: what matters is complexity and speed. This algorithm potentially runs in O(n) time and O(n) space, which makes it much better than anything in the O(n^2) domain. To see this for yourself, you can run a small benchmark (it creates a random list, pops the first element and appends it to the end, thus creating a cyclic list; you are free to do your own manipulations):
from random import random
bigList = [int(1000 * random()) for i in xrange(10**6)]
bigList2 = bigList[:]
bigList2.append(bigList2.pop(0))
# then test how much time will it take to come up with an answer
from datetime import datetime
startTime = datetime.now()
print isCircular(bigList, bigList2)
print datetime.now() - startTime # please feel free to use timeit, but it will give similar results
0.3 seconds on my machine. Not really long. Now try to compare this with O(n^2) solutions. While it is comparing, you can travel from the US to Australia (most probably by a cruise ship).
|
Is Haskell's laziness an elegant alternative to Python's generators?
|
In a programming exercise, it was first asked to program the factorial function and then calculate the sum: 1! + 2! + 3! + ... + n! in O(n) multiplications (so we can't use the factorial directly). I am not searching for the solution to this specific (trivial) problem; I'm trying to explore Haskell's abilities, and this problem is a toy I would like to play with.
I thought Python's generators could be a nice solution to this problem. For example :
from itertools import islice
def ifact():
i , f = 1, 1
yield 1
while True:
f *= i
i += 1
yield f
def sum_fact(n):
return sum(islice(ifact(), n))
Then I've tried to figure out if there was something in Haskell having a similar behavior to this generator, and I thought that laziness does all the stuff without any additional concept.
For example, we could replace my Python ifact with
fact = scanl1 (*) [1..]
And then solve the exercise with the following :
sum n = foldl1 (+) (take n fact)
I wonder if this solution is really "equivalent" to Python's one regarding time complexity and memory usage. I would say that Haskell's solution never stores the whole list fact since its elements are used only once.
Am I right or totally wrong ?
EDIT :
I should have checked more precisely:
Prelude> foldl1 (+) (take 4 fact)
33
Prelude> :sprint fact
fact = 1 : 2 : 6 : 24 : _
So (my implementation of) Haskell stores the result, even if it's no longer used.
|
Indeed, lazy lists can be used this way. There are some subtle differences though:
Lists are data structures. So you can keep them after evaluating them, which can be both good and bad (you can avoid recomputation of values and do recursive tricks as @ChrisDrost described, at the cost of keeping memory unreleased).
Lists are pure. In generators you can have computations with side effects, you can't do that with lists (which is often desirable).
Since Haskell is a lazy language, laziness is everywhere and if you just convert a program from an imperative language to Haskell, the memory requirements can change considerably (as @RomanL describes in his answer).
But Haskell offers more advanced tools to accomplish the generator/consumer pattern. Currently there are three libraries that focus on this problem: pipes, conduit and iteratees. My favorite is conduit, it's easy to use and the complexity of its types is kept low.
They have several advantages, in particular that you can create complex pipelines and you can base them on a chosen monad, which allows you to say what side effects are allowed in a pipeline.
Using conduit, your example could be expressed as follows:
import Data.Functor.Identity
import Data.Conduit
import qualified Data.Conduit.List as C
ifactC :: (Num a, Monad m) => Producer m a
ifactC = loop 1 1
where
loop r n = let r' = r * n
in yield r' >> loop r' (n + 1)
sumC :: (Num a, Monad m) => Consumer a m a
sumC = C.fold (+) 0
main :: IO ()
main = (print . runIdentity) (ifactC $= C.isolate 5 $$ sumC)
-- alternatively running the pipeline in IO monad directly:
-- main = (ifactC $= C.isolate 5 $$ sumC) >>= print
Here we create a Producer (a conduit that consumes no input) that yields factorials indefinitely. Then we compose it with isolate, which ensures that no more than a given number of values are propagated through it, and then we compose it with a Consumer that just sums values and returns the result.
|
How to speed up multiple inner products in python
|
I have some simple code that does the following.
It iterates over all possible length n lists F with +-1 entries. For each one it iterates over all possible length 2n lists S with +-1 entries, where the first half of S is simply a copy of the second half. The code computes the inner product of F with each sublist of S of length n. For each F, S it counts the inner products that are zero until the first non-zero inner product.
Here is the code.
#!/usr/bin/python
from __future__ import division
import itertools
import operator
import math
n=14
m=n+1
def innerproduct(A, B):
assert (len(A) == len(B))
s = 0
for k in xrange(0,n):
s+=A[k]*B[k]
return s
leadingzerocounts = [0]*m
for S in itertools.product([-1,1], repeat = n):
S1 = S + S
for F in itertools.product([-1,1], repeat = n):
i = 0
while (i<m):
ip = innerproduct(F, S1[i:i+n])
if (ip == 0):
leadingzerocounts[i] +=1
i+=1
else:
break
print leadingzerocounts
The correct output for n=14 is
[56229888, 23557248, 9903104, 4160640, 1758240, 755392, 344800, 172320, 101312, 75776, 65696, 61216, 59200, 59200, 59200]
Using pypy this takes 1 min 18 seconds for n = 14. Unfortunately I would really like to run it for n = 16, 18, 20, 22, 24, 26. I don't mind using numba or cython but I would like to stay close to python if at all possible.
Any help speeding this up is very much appreciated.
I'll keep a record of the fastest solutions here. (Please let me know if I miss an updated answer.)
n = 22 at 9m35.081s by Eisenstat (C)
n = 18 at 1m16.344s by Eisenstat (pypy)
n = 18 at 2m54.998s by Tupteq (pypy)
n = 14 at 26s by Neil (numpy)
n = 14 at 11m59.192s by kslote1 (pypy)
|
This new code gets another order of magnitude speedup by taking advantage of the cyclic symmetry of the problem. This Python version enumerates necklaces with Duval's algorithm; the C version uses brute force. Both incorporate the speedups described below. On my machine, the C version solves n = 20 in 100 seconds! A back-of-the-envelope calculation suggests that, if you were to let it run for a week on a single core, it could do n = 26, and, as noted below, it's amenable to parallelism.
import itertools
def necklaces_with_multiplicity(n):
assert isinstance(n, int)
assert n > 0
w = [1] * n
i = 1
while True:
if n % i == 0:
s = sum(w)
if s > 0:
yield (tuple(w), i * 2)
elif s == 0:
yield (tuple(w), i)
i = n - 1
while w[i] == -1:
if i == 0:
return
i -= 1
w[i] = -1
i += 1
for j in range(n - i):
w[i + j] = w[j]
def leading_zero_counts(n):
assert isinstance(n, int)
assert n > 0
assert n % 2 == 0
counts = [0] * n
necklaces = list(necklaces_with_multiplicity(n))
for combo in itertools.combinations(range(n - 1), n // 2):
for v, multiplicity in necklaces:
w = list(v)
for j in combo:
w[j] *= -1
for i in range(n):
counts[i] += multiplicity * 2
product = 0
for j in range(n):
product += v[j - (i + 1)] * w[j]
if product != 0:
break
return counts
if __name__ == '__main__':
print(leading_zero_counts(12))
C version:
#include <stdio.h>
enum {
N = 14
};
struct Necklace {
unsigned int v;
int multiplicity;
};
static struct Necklace g_necklace[1 << (N - 1)];
static int g_necklace_count;
static void initialize_necklace(void) {
g_necklace_count = 0;
for (unsigned int v = 0; v < (1U << (N - 1)); v++) {
int multiplicity;
unsigned int w = v;
for (multiplicity = 2; multiplicity < 2 * N; multiplicity += 2) {
w = ((w & 1) << (N - 1)) | (w >> 1);
unsigned int x = w ^ ((1U << N) - 1);
if (w < v || x < v) goto nope;
if (w == v || x == v) break;
}
g_necklace[g_necklace_count].v = v;
g_necklace[g_necklace_count].multiplicity = multiplicity;
g_necklace_count++;
nope:
;
}
}
int main(void) {
initialize_necklace();
long long leading_zero_count[N + 1];
for (int i = 0; i < N + 1; i++) leading_zero_count[i] = 0;
for (unsigned int v_xor_w = 0; v_xor_w < (1U << (N - 1)); v_xor_w++) {
if (__builtin_popcount(v_xor_w) != N / 2) continue;
for (int k = 0; k < g_necklace_count; k++) {
unsigned int v = g_necklace[k].v;
unsigned int w = v ^ v_xor_w;
for (int i = 0; i < N + 1; i++) {
leading_zero_count[i] += g_necklace[k].multiplicity;
w = ((w & 1) << (N - 1)) | (w >> 1);
if (__builtin_popcount(v ^ w) != N / 2) break;
}
}
}
for (int i = 0; i < N + 1; i++) {
printf(" %lld", 2 * leading_zero_count[i]);
}
putchar('\n');
return 0;
}
You can get a bit of speedup by exploiting the sign symmetry (4x) and by iterating over only those vectors that pass the first inner product test (asymptotically, O(sqrt(n))x).
import itertools
n = 10
m = n + 1
def innerproduct(A, B):
s = 0
for k in range(n):
s += A[k] * B[k]
return s
leadingzerocounts = [0] * m
for S in itertools.product([-1, 1], repeat=n - 1):
S1 = S + (1,)
S1S1 = S1 * 2
for C in itertools.combinations(range(n - 1), n // 2):
F = list(S1)
for i in C:
F[i] *= -1
leadingzerocounts[0] += 4
for i in range(1, m):
if innerproduct(F, S1S1[i:i + n]):
break
leadingzerocounts[i] += 4
print(leadingzerocounts)
C version, to get a feel for how much performance we're losing to PyPy (16 for PyPy is roughly equivalent to 18 for C):
#include <stdio.h>
enum {
HALFN = 9,
N = 2 * HALFN
};
int main(void) {
long long lzc[N + 1];
for (int i = 0; i < N + 1; i++) lzc[i] = 0;
unsigned int xor = 1 << (N - 1);
while (xor-- > 0) {
if (__builtin_popcount(xor) != HALFN) continue;
unsigned int s = 1 << (N - 1);
while (s-- > 0) {
lzc[0]++;
unsigned int f = xor ^ s;
for (int i = 1; i < N + 1; i++) {
f = ((f & 1) << (N - 1)) | (f >> 1);
if (__builtin_popcount(f ^ s) != HALFN) break;
lzc[i]++;
}
}
}
for (int i = 0; i < N + 1; i++) printf(" %lld", 4 * lzc[i]);
putchar('\n');
return 0;
}
This algorithm is embarrassingly parallel because it's just accumulating over all values of xor. With the C version, a back-of-the-envelope calculation suggests that a few thousand hours of CPU time would suffice to calculate n = 26, which works out to a couple hundred dollars at current rates on EC2. There are undoubtedly some optimizations to be made (e.g., vectorization), but for a one-off like this I'm not sure how much more programmer effort is worthwhile.
|
What's the correct way to use a unix domain socket in requests framework?
|
Usually, doing a POST request using the requests framework is done by:
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.post("http://httpbin.org/post", data=payload)
But: How do I connect to a unix socket instead of doing a TCP connection?
On a related note, how do I encode the Unix domain socket path in the URL?
libcurl allows application to supply own socket on which to perform request
LDAP invented own scheme ldapi where socket name is %-encoded in host field
httpie uses http+unix scheme and %-encoded path in host field
These are some examples, but is there an RFC or established best practice?
|
There's no need to reinvent the wheel:
https://github.com/msabramo/requests-unixsocket
URL scheme is http+unix and socket path is percent-encoded into the host field:
import requests_unixsocket
session = requests_unixsocket.Session()
# Access /path/to/page from /tmp/profilesvc.sock
r = session.get('http+unix://%2Ftmp%2Fprofilesvc.sock/path/to/page')
assert r.status_code == 200
|
pandas unique values multiple columns
|
df = pd.DataFrame({'Col1' : ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
'Col2' : ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
'Col3' : np.random.random(5)})
What is the best way to return the unique values of 'Col1' and 'Col2'?
The desired output is:
'Bob', 'Joe', 'Bill', 'Mary', 'Steve'
|
One way is to select the columns and pass them to np.unique:
>>> np.unique(df[['Col1', 'Col2']])
array(['Bill', 'Bob', 'Joe', 'Mary', 'Steve'], dtype=object)
Note that some versions of Pandas/NumPy may require you to explicitly pass the values from the columns with the .values attribute:
np.unique(df[['Col1', 'Col2']].values)
A faster way is to use pd.unique. This function uses a hashtable-based algorithm instead of NumPy's sort-based algorithm. You will need to pass a 1D array using ravel():
>>> pd.unique(df[['Col1', 'Col2']].values.ravel())
array(['Bob', 'Joe', 'Steve', 'Bill', 'Mary'], dtype=object)
The difference in speed is significant for larger DataFrames:
>>> df1 = pd.concat([df]*100000) # DataFrame with 500000 rows
>>> %timeit np.unique(df1[['Col1', 'Col2']].values)
1 loops, best of 3: 619 ms per loop
>>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel())
10 loops, best of 3: 49.9 ms per loop
|
Solve Cross Origin Resource Sharing with Flask
|
For the following ajax post request for Flask (how can I use data posted from ajax in flask?):
$.ajax({
url: "http://127.0.0.1:5000/foo",
type: "POST",
contentType: "application/json",
data: JSON.stringify({'inputVar': 1}),
success: function( data ) {
alert( "success" + data );
}
});
I get a Cross Origin Resource Sharing (CORS) error:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'null' is therefore not allowed access.
The response had HTTP status code 500.
I tried solving it in the two following ways, but neither seems to work.
Using Flask-CORS
This is a Flask extension for handling CORS that should make cross-origin AJAX possible.
http://flask-cors.readthedocs.org/en/latest/
How to enable CORS in flask and heroku
Flask-cors wrapper not working when jwt auth wrapper is applied.
Javascript - No 'Access-Control-Allow-Origin' header is present on the requested resource
My pythonServer.py using this solution:
from flask import Flask
from flask.ext.cors import CORS, cross_origin
app = Flask(__name__)
cors = CORS(app, resources={r"/foo": {"origins": "*"}})
app.config['CORS_HEADERS'] = 'Content-Type'
@app.route('/foo', methods=['POST','OPTIONS'])
@cross_origin(origin='*',headers=['Content-Type','Authorization'])
def foo():
return request.json['inputVar']
if __name__ == '__main__':
app.run()
Using specific Flask Decorator
This is an official Flask code snippet defining a decorator that should allow CORS on the functions it decorates.
http://flask.pocoo.org/snippets/56/
Python Flask cross site HTTP POST - doesn't work for specific allowed origins
http://chopapp.com/#351l7gc3
My pythonServer.py using this solution:
from flask import Flask, make_response, request, current_app
from datetime import timedelta
from functools import update_wrapper
app = Flask(__name__)
def crossdomain(origin=None, methods=None, headers=None,
max_age=21600, attach_to_all=True,
automatic_options=True):
if methods is not None:
methods = ', '.join(sorted(x.upper() for x in methods))
if headers is not None and not isinstance(headers, basestring):
headers = ', '.join(x.upper() for x in headers)
if not isinstance(origin, basestring):
origin = ', '.join(origin)
if isinstance(max_age, timedelta):
max_age = max_age.total_seconds()
def get_methods():
if methods is not None:
return methods
options_resp = current_app.make_default_options_response()
return options_resp.headers['allow']
def decorator(f):
def wrapped_function(*args, **kwargs):
if automatic_options and request.method == 'OPTIONS':
resp = current_app.make_default_options_response()
else:
resp = make_response(f(*args, **kwargs))
if not attach_to_all and request.method != 'OPTIONS':
return resp
h = resp.headers
h['Access-Control-Allow-Origin'] = origin
h['Access-Control-Allow-Methods'] = get_methods()
h['Access-Control-Max-Age'] = str(max_age)
if headers is not None:
h['Access-Control-Allow-Headers'] = headers
return resp
f.provide_automatic_options = False
return update_wrapper(wrapped_function, f)
return decorator
@app.route('/foo', methods=['GET','POST','OPTIONS'])
@crossdomain(origin="*")
def foo():
return request.json['inputVar']
if __name__ == '__main__':
app.run()
Can you please give some indication of why that is?
|
@Matt:
It worked like a champ after a bit of modification to your code:
# initialization
app = Flask(__name__)
app.config['SECRET_KEY'] = 'the quick brown fox jumps over the lazy dog'
app.config['CORS_HEADERS'] = 'Content-Type'
cors = CORS(app, resources={r"/foo": {"origins": "localhost"}})
@app.route('/foo', methods=['POST'])
@cross_origin(origin='localhost',headers=['Content-Type','Authorization'])
def foo():
return request.json['inputVar']
if __name__ == '__main__':
app.run()
I replaced * with localhost since, as I read in many blogs and posts, you should only allow access for a specific domain.
|
why do perl, ruby use /dev/urandom
|
I strace'd a simple script using perl and bash.
$ strace perl -e 'echo "test";' 2>&1 | grep 'random'
open("/dev/urandom", O_RDONLY) = 3
$ strace bash 'echo "test"' 2>&1 | grep 'random'
$
Why does perl need the pseudorandom number generator for such a trivial script? I would expect opening /dev/urandom only after the first use of random data.
Edit: I also tested python and ruby
$ strace python -c 'print "test"' 2>&1 | grep random
$
$ strace ruby -e 'print "test\n"' 2>&1 | grep random
open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_CLOEXEC) = 3
Why do perl and ruby open it with different modes?
|
Try searching for "Denial of Service via Algorithmic Complexity Attacks".
In brief, if a Perl script accepts outside input (from a file, network, etc) and stores that data in a hash, an attacker who can influence the data can exploit the hashing algorithm to deteriorate hashes (O(1) lookups) into linked lists (O(N) lookups). To defend against this type of attack, certain parameters of the hashing algorithm are randomised at program start-up so that an attacker cannot construct a sequence of hash keys that will cause a problem.
This is obviously not specific to Perl. Any program which uses a hashing algorithm is potentially vulnerable to this type of attack.
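As an illustration of the same defence in another runtime (CPython enables hash randomisation by default since 3.3), the hash of a string changes from one interpreter run to the next, so an attacker cannot precompute colliding keys. This tiny sketch, run twice, prints different numbers unless the seed is pinned via the PYTHONHASHSEED environment variable:
# Run this script twice and compare the output.
print(hash("user-supplied key A"))
print(hash("user-supplied key B"))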
|
Efficiently sorting a numpy array in descending order?
|
I am surprised this specific question hasn't been asked before, but I really didn't find it on SO nor on the documentation of np.sort.
Say I have a random numpy array holding integers, e.g:
> temp = np.random.randint(1,10, 10)
> temp
array([2, 4, 7, 4, 2, 2, 7, 6, 4, 4])
If I sort it, I get ascending order by default:
> np.sort(temp)
array([2, 2, 2, 4, 4, 4, 4, 6, 7, 7])
but I want the solution to be sorted in descending order.
Now, I know I can always do:
reverse_order = np.sort(temp)[::-1]
but is this last statement efficient? Doesn't it create a copy in ascending order, and then reverses this copy to get the result in reversed order? If this is indeed the case, is there an efficient alternative? It doesn't look like np.sort accepts parameters to change the sign of the comparisons in the sort operation to get things in reverse order.
|
temp[::-1].sort() sorts the array in place, whereas np.sort(temp)[::-1] creates a new array.
In [25]: temp = np.random.randint(1,10, 10)
In [26]: temp
Out[26]: array([5, 2, 7, 4, 4, 2, 8, 6, 4, 4])
In [27]: id(temp)
Out[27]: 139962713524944
In [28]: temp[::-1].sort()
In [29]: temp
Out[29]: array([8, 7, 6, 5, 4, 4, 4, 4, 2, 2])
In [30]: id(temp)
Out[30]: 139962713524944
|
What is the difference between contiguous and non-contiguous arrays?
|
In the numpy manual about the reshape() function, it says
>>> a = np.zeros((10, 2))
# A transpose make the array non-contiguous
>>> b = a.T
# Taking a view makes it possible to modify the shape without modifying the
# initial object.
>>> c = b.view()
>>> c.shape = (20)
AttributeError: incompatible shape for a non-contiguous array
My questions are:
What are contiguous and non-contiguous arrays? Is it similar to the contiguous memory block in C, as in What is a contiguous memory block?
Is there any performance difference between these two? When should we use one or the other?
Why does transpose make the array non-contiguous?
Why does c.shape = (20) throw the error incompatible shape for a non-contiguous array?
Thanks for your answer!
|
A contiguous array is just an array stored in an unbroken block of memory: to access the next value in the array, we just move to the next memory address.
Consider the 2D array arr = np.arange(12).reshape(3,4). It looks like this:
In the computer's memory, the values of arr are stored like this:
This means arr is a C contiguous array because the rows are stored as contiguous blocks of memory. The next memory address holds the next row value. To move down a column, three blocks of memory have to be skipped.
Transposing the array with arr.T means that C contiguity is lost because adjacent row entries are no longer in adjacent memory addresses. However, arr.T is Fortran contiguous since the columns are in contiguous blocks of memory:
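If you want to check this yourself, the flags attribute reports contiguity; a small sketch:
import numpy as np

arr = np.arange(12).reshape(3, 4)

print(arr.flags['C_CONTIGUOUS'])    # True  -- rows sit in contiguous memory
print(arr.flags['F_CONTIGUOUS'])    # False

print(arr.T.flags['C_CONTIGUOUS'])  # False -- the transpose is just a view...
print(arr.T.flags['F_CONTIGUOUS'])  # True  -- ...and it is Fortran contiguous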
Performance-wise, it's better to have contiguous arrays because accessing memory addresses which are next to each other is often faster than accessing addresses which are more "spread out" across memory. This means that operations over contiguous arrays will often be quicker.
As a consequence of C contiguous memory layout, row-wise operations are usually faster than column-wise operations. For example, you'll typically find that
np.sum(arr, axis=1) # sum the rows
is slightly faster than:
np.sum(arr, axis=0) # sum the columns
Similarly, operations on columns will be slightly faster for Fortran contiguous arrays.
Finally, why can't we flatten the Fortran contiguous array by assigning a new shape?
>>> arr2 = arr.T
>>> arr2.shape = 12
AttributeError: incompatible shape for a non-contiguous array
In order for this to be possible NumPy would have to put the rows of arr.T together like this:
(Setting the shape attribute directly assumes C order - i.e. NumPy tries to perform the operation row-wise.)
This is impossible to do. For any axis, NumPy needs to have a constant stride length (the number of bytes to move) to get to the next element of the array. Flattening arr.T in this way would require skipping forwards and backwards in memory to retrieve consecutive values of the array.
If we wrote arr2.reshape(12) instead, NumPy would copy the values of arr2 into a new block of memory (since it can't return a view on to the original data for this shape).
|
Reduce a key-value pair into a key-list pair with Apache Spark
|
I am writing a Spark application and want to combine a set of Key-Value pairs (K, V1), (K, V2), ..., (K, Vn) into one Key-Multivalue pair (K, [V1, V2, ..., Vn]). I feel like I should be able to do this using the ReduceByKey function with something of the flavor:
My_KMV = My_KV.reduce(lambda a, b: a.append([b]))
The error that I get when this occurs is:
'NoneType' object has no attribute 'append'.
My keys are integers and values V1,...,Vn are tuples. My goal is to create a single pair with the key and a list of the values (tuples).
|
Map and ReduceByKey
Input type and output type of reduce must be the same, therefore if you want to aggregate a list, you have to map the input to lists. Afterwards you combine the lists into one list.
Combining lists
You'll need a method to combine lists into one list. Python provides some methods to combine lists.
append modifies the first list and will always return None.
x = [1, 2, 3]
x.append([4, 5])
# x is [1, 2, 3, [4, 5]]
extend does the same, but unwraps lists:
x = [1, 2, 3]
x.extend([4, 5])
# x is [1, 2, 3, 4, 5]
Both methods return None, but you'll need a method that returns the combined list, therefore just use the plus sign.
x = [1, 2, 3] + [4, 5]
# x is [1, 2, 3, 4, 5]
Spark
file = spark.textFile("hdfs://...")
counts = file.flatMap(lambda line: line.split(" ")) \
.map(lambda actor: (actor.split(",")[0], actor)) \
# transform each value into a list
.map(lambda nameTuple: (nameTuple[0], [ nameTuple[1] ])) \
# combine lists: ([1,2,3] + [4,5]) becomes [1,2,3,4,5]
.reduceByKey(lambda a, b: a + b)
CombineByKey
It's also possible to solve this with combineByKey, which is used internally to implement reduceByKey, but it's more complex and "using one of the specialized per-key combiners in Spark can be much faster". Your use case is simple enough for the solution above.
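For reference, a sketch of what that would look like (untested; pairs stands for a hypothetical RDD of (key, value) tuples):
combined = pairs.combineByKey(
    lambda v: [v],              # createCombiner: first value for a key -> list
    lambda acc, v: acc + [v],   # mergeValue: add a value within a partition
    lambda a, b: a + b          # mergeCombiners: concatenate lists across partitions
)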
GroupByKey
It's also possible to solve this with groupByKey, but it reduces parallelization and therefore could be much slower for big data sets.
|
Does virtualenv serve a purpose (in production) when using docker?
|
For development we use virtualenv to have an isolated development when it comes to dependencies. From this question it seems deploying Python applications in a virtualenv is recommended.
Now we're starting to use docker for deployment. This provides a more isolated environment so I'm questioning the use of virtualenv inside a docker container. In the case of a single application I do not think virtualenv has a purpose as docker already provides isolation. In the case where multiple applications are deployed on a single docker container, I do think virtualenv has a purpose as the applications can have conflicting dependencies.
Should virtualenv be used when a single application is deployed in a docker container?
Should docker contain multiple applications or only one application per container?
If so, should virtualenv be used when deploying a container with multiple applications?
|
Virtualenv was created long before docker. Today, I lean towards docker instead of virtualenv for these reasons:
Virtualenv still means people consuming your product need to download eggs. With docker, they get something which is "known to work". No strings attached.
Docker can do much more than virtualenv (like create a clean environment when you have products that need different Python versions).
The main drawback for Docker today is poor Windows support.
As for "how many apps per container", the usual policy is 1.
|
Changing the rotation of tick labels in Seaborn heatmap
|
I'm plotting a heatmap in Seaborn. The problem is that I have too many squares in my plot so the x and y labels are too close to each other to be useful. So I'm creating a list of xticks and yticks to use. However passing this list to the function rotates the labels in the plot. It would be really nice to have seaborn automatically drop some of the ticks, but barring that I would like to be able to have the yticks upright.
import pandas as pd
import numpy as np
import seaborn as sns
data = pd.DataFrame(np.random.normal(size=40*40).reshape(40,40))
yticks = data.index
keptticks = yticks[::int(len(yticks)/10)]
yticks = ['' for y in yticks]
yticks[::int(len(yticks)/10)] = keptticks
xticks = data.columns
keptticks = xticks[::int(len(xticks)/10)]
xticks = ['' for y in xticks]
xticks[::int(len(xticks)/10)] = keptticks
sns.heatmap(data,linewidth=0,yticklabels=yticks,xticklabels=xticks)
|
seaborn uses matplotlib internally, as such you can use matplotlib functions to modify your plots. I've modified the code below to use the plt.yticks function to set rotation=0 which fixes the issue.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.DataFrame(np.random.normal(size=40*40).reshape(40,40))
yticks = data.index
keptticks = yticks[::int(len(yticks)/10)]
yticks = ['' for y in yticks]
yticks[::int(len(yticks)/10)] = keptticks
xticks = data.columns
keptticks = xticks[::int(len(xticks)/10)]
xticks = ['' for y in xticks]
xticks[::int(len(xticks)/10)] = keptticks
sns.heatmap(data,linewidth=0,yticklabels=yticks,xticklabels=xticks)
# This sets the yticks "upright" with 0, as opposed to sideways with 90.
plt.yticks(rotation=0)
plt.show()
|
How can I use Django OAuth Toolkit with Python Social Auth?
|
I'm building an API using Django Rest Framework. Later this API is supposed to be consumed by iOS and Android devices. I want to allow my users to sign-up with oauth2-providers like Facebook and Google. In this case, they shouldn't have to create an account with my platform at all. But users should also be able to sign-up when not having a Facebook/Google account, for which I'm using django-oauth-toolkit, so I have my own oauth2-provider.
For external providers I'm using python-social-auth, which works fine and automatically creates the user objects.
I want the clients to authenticate by using bearer tokens, which works fine for users that signed up with my provider (django-oauth-toolkit provides authentication scheme and permission classes for Django REST Framework).
However, python-social-auth only implements session based authentication, so there is no straightforward way to make authenticated API requests on behalf of users that registered by an external oauth2 provider.
If I use an access_token that has been generated by django-oauth-toolkit, doing a request like this works:
curl -v -H "Authorization: Bearer <token_generated_by_django-oauth-toolkit>" http://localhost:8000/api/
However, the following doesn't work since there is no corresponding authentication scheme for Django REST Framework and the AUTHENTICATION_BACKENDS provided by python-social-auth only work for session-based authentication:
curl -v -H "Authorization: Bearer <token_stored_by_python-social-auth>" http://localhost:8000/api/
Using the browseable API provided by Django REST Framework after authenticating with python-social-auth works just fine, only API calls without a session cookie don't work.
I'm wondering what the best approach is for this problem. The way I see it, I have basically two options:
A: When a user signs up with an external oauth2 provider (handled by python-social-auth), hook into the process to create an oauth2_provider.models.AccessToken and continue to use 'oauth2_provider.ext.rest_framework.OAuth2Authentication', now authenticating also users that registered with an external provider. This approach is suggested here:
https://groups.google.com/d/msg/django-rest-framework/ACKx1kY7kZM/YPWFA2DP9LwJ
B: Use python-social-auth for API request authentication. I could get my own users into python-social-auth by writing a custom backend and using register_by_access_token. However, since API calls cannot utilize Django sessions this would mean I would have to write an authentication scheme for Django Rest Framework that utilizes the data stored by python-social-auth. Some pointers on how to do this can be found here:
http://psa.matiasaguirre.net/docs/use_cases.html#signup-by-oauth-access-token
http://blog.wizer.fr/2013/11/angularjs-facebook-with-a-django-rest-api/
http://cbdev.blogspot.it/2014/02/facebook-login-with-angularjs-django.html
However, the way I understand it python-social-auth only verifies the token when doing a login and relies on the Django session afterwards. This would mean I would have to find a way to prevent python-social-auth from doing the whole oauth2-flow for each stateless API request and rather check against the data stored in the DB, which isn't really optimized for querying since it's stored as JSON (I could use UserSocialAuth.objects.get(extra_data__contains=) though).
I would also have to take care of verifying the scopes of an access token and use them to check permissions, something django-oauth-toolkit already does (TokenHasScope, required_scopes etc).
At the moment, I'm leaning towards using option A, since django-oauth-toolkit provides good integration with Django Rest Framework and I get everything I need out of the box. The only drawback is that I have to "inject" the access_tokens retrieved by python-social-auth into the AccessToken model of django-oauth-toolkit, which feels wrong somehow, but would probably be by far the easiest approach.
Does anybody have any objections on doing that or has maybe tackled the same problem in a different way? Am I missing something obvious and making my life harder than necessary?
If anybody has already integrated django-oauth-toolkit with python-social-auth and external oauth2 providers I would be very thankful for some pointers or opinions.
|
A lot of the difficulty in implementing OAuth comes down to understanding how the authorization flow is supposed to work. This is mostly because this is the "starting point" for logging in, and when working with a third-party backend (using something like Python Social Auth) you are actually doing this twice: once for your API and once for the third-party API.
Authorizing requests using your API and a third-party backend
The authentication process that you need to go through is:
Mobile App -> Your API : Authorization redirect
Your API -> Django Login : Displays login page
Django Login -> Facebook : User signs in
Facebook -> Django Login : User authorizes your API
Django Login -> Your API : User signs in
Your API -> Mobile App : User authorizes mobile app
I'm using "Facebook" as the third-party backend here, but the process is the same for any backend.
From the perspective of your mobile app, you are only redirecting to the /authorize url provided by Django OAuth Toolkit. From there, the mobile app waits until the callback url is reached, just like in the standard OAuth authorization flow. Almost everything else (Django login, social login, etc.) is handled by either Django OAuth Toolkit or Python Social Auth in the background.
This will also be compatible with pretty much any OAuth libraries that you use, and the authorization flow will work the same no matter what third party backend is used. It will even handle the (common) case where you need to be able to support Django's authentication backend (email/username and password) as well as a third-party login.
Mobile App -> Your API : Authorization redirect
Your API -> Django Login : Displays login page
Django Login -> Your API : User signs in
Your API -> Mobile App : User authorizes mobile app
What's also important to note here is that the mobile app (which could be any OAuth client) never receives the Facebook/third-party OAuth tokens. This is incredibly important, as it makes sure your API acts as an intermediary between the OAuth client and your user's social accounts.
Mobile App -> Your API : Authorization redirect
Your API -> Mobile App : Receives OAuth token
Mobile App -> Your API : Requests the display name
Your API -> Facebook : Requests the full name
Facebook -> Your API : Sends back the full name
Your API -> Mobile App : Send back a display name
Otherwise, the OAuth client would be able to bypass your API and make requests on your behalf to the third-party APIs.
Mobile App -> Your API : Authorization redirect
Your API -> Mobile App : Receives Facebook token
Mobile App -> Facebook : Requests all of the followers
Facebook -> Mobile App : Sends any requested data
You'll notice that at this point you would have lost all control over the third-party tokens. This is especially dangerous because most tokens can access a wide range of data, which opens the door to abuse that eventually happens under your name. Most likely, those logging into your API/website did not intend to share their social information with the OAuth client, and were instead expecting you to keep that information private (as much as possible), but instead you are exposing that information to everyone.
Authenticating requests to your API
When the mobile application then uses your OAuth token to make requests to your API, all of the authentication happens through Django OAuth Toolkit (or your OAuth provider) in the background. All you see is that there is a User associated with your request.
Mobile App -> Your API : Sends request with OAuth token
Your API -> Django OAuth Toolkit : Verifies the token
Django OAuth Toolkit -> Your API : Returns the user who is authenticated
Your API -> Mobile App : Sends requested data back
This is important, because after the authorization stage it shouldn't make a difference if the user is coming from Facebook or Django's authentication system. Your API just needs a User to work with, and your OAuth provider should be able to handle the authentication and verification of the token.
This isn't much different from how Django REST framework authenticates the user when using session-backed authentication.
Web Browser -> Your API : Sends session cookie
Your API -> Django : Verifies session token
Django -> Your API : Returns session data
Your API -> Django : Verifies the user session
Django -> Your API : Returns the logged in user
Your API -> Web Browser : Returns the requested data
Again, all of this is handled by Django OAuth Toolkit and does not require extra work to implement.
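For reference, that setup is mostly configuration. Here is a minimal sketch of the Django settings involved, using the authentication class path already mentioned in the question (the exact module path may differ between django-oauth-toolkit versions):
# settings.py - a minimal sketch; the class path below is the one from
# the question and may vary across django-oauth-toolkit versions.
INSTALLED_APPS = (
    # ...
    'oauth2_provider',
    'rest_framework',
)

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'oauth2_provider.ext.rest_framework.OAuth2Authentication',
    ),
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
}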
Working with a native SDK
In most cases, you are going to be authenticating the user through your own website and using Python Social Auth to handle everything. But the one notable exception is when using a native SDK, as authentication and authorization are handled through the native system, which means you are bypassing your API entirely. This is great for applications which need to sign in with a third party, or applications which don't use your API at all, but it's a nightmare when both come together.
This is because your server can't validate the login and is forced to assume that the login is valid and genuine, which means it bypasses any and all security that Python Social Auth gives you.
Mobile App -> Facebook SDK : Opens the authorization prompt
Facebook SDK -> Mobile App : Gets the Facebook token
Mobile App -> Your API : Sends the Facebook token for authorization
Your API -> Django Login : Tries to validate the token
Django Login -> Your API : Returns a matching user
Your API -> Mobile App : Sends back an OAuth token for the user
You'll notice that this skips over your API during the authentication phase, and then forces your API to make assumptions about the token that is passed in. But there are definitely cases where this risk may be worth it, so you should evaluate that before throwing it out. It's a trade-off between quick, native logins for your users and potentially handling bad or malicious tokens.
|
Django check if a related object exists error: RelatedObjectDoesNotExist
|
I have a method has_related_object in my model that needs to check if a related object exists
class Business(base):
name = models.CharField(max_length=100, blank=True, null=True)
def has_related_object(self):
return (self.customer is not None) and (self.car is not None)
class Customer(base):
name = models.CharField(max_length=100, blank=True, null=True)
person = models.OneToOneField('Business', related_name="customer")
But I get the error:
Business.has_related_object()
RelatedObjectDoesNotExist: Business has no customer.
|
This is because the ORM has to go to the database to check to see if customer exists. Since it doesn't exist, it raises an exception.
You'll have to change your method to the following:
def has_related_object(self):
has_customer = False
try:
has_customer = (self.customer is not None)
except Customer.DoesNotExist:
pass
return has_customer and (self.car is not None)
I don't know the situation with self.car so I'll leave it to you to adjust it if it needs it.
Side note:
If you were doing this on a model that has the ForeignKeyField or OneToOneField on it, you would be able to do the following as a shortcut to avoid the database query.
def has_business(self):
return self.business_id is not None
|
Why does the 'is' operator say these methods aren't the same?
|
Consider this code:
class Person(object):
def sayHello(self):
return 'Hello'
print(Person().sayHello is Person().sayHello)
I would expect it to show True. Why does it show False?
|
Methods are bound to instances at runtime. When you run the following code:
print(Person().sayHello is Person().sayHello)
you create two separate instances, and each attribute access creates a new bound-method object, so each time you get an object at a different memory address.
>>> Person().sayHello
<bound method Person.sayHello of <__main__.Person object at 0x7fbe90640410>>
>>> Person().sayHello
<bound method Person.sayHello of <__main__.Person object at 0x7fbe90640490>>
Note: All we have in Python is runtime; there is no such thing as a separate compile time.
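If what you want to compare is the underlying function rather than the bound-method wrapper, you can do so explicitly. A small sketch, assuming Python 3 semantics (where Person.sayHello is a plain function):
p = Person()
print(p.sayHello is p.sayHello)                # False: a fresh bound method per access
print(p.sayHello == p.sayHello)                # True: == compares __self__ and __func__
print(p.sayHello.__func__ is Person.sayHello)  # True: same underlying function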
|
Yield from coroutine vs yield from task
|
Guido van Rossum, in his 2014 talk on Tulip/asyncio, shows this slide:
Tasks vs coroutines
Compare:
res = yield from some_coroutine(...)
res = yield from Task(some_coroutine(...))
Task can make progress without waiting for it
As long as you wait for something else
i.e. yield from
And I'm completely missing the point.
From my point of view both constructs are identical:
In the case of a bare coroutine, it gets scheduled, so a task is created anyway, because the scheduler operates with Tasks; the caller coroutine is then suspended until the callee is done and only then becomes free to continue execution.
In the case of a Task, it's all the same: a new task is scheduled and the caller coroutine waits for its completion.
What is the difference in the way the code is executed in both cases, and what impact does it have that a developer should consider in practice?
p.s.
Links to authoritative sources (GvR, PEPs, docs, core devs' notes) will be much appreciated.
|
For the calling coroutine, yield from coroutine() feels like a function call (i.e. it will gain control again when coroutine() finishes).
yield from Task(coroutine()) on the other hand feels more like creating a new thread. Task() returns almost instantly and very likely the caller gains control back before the coroutine() finishes.
The difference between f() and th = threading.Thread(target=f, args=()); th.start(); th.join() is obvious, right?
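A small sketch of the difference, assuming the Tulip/asyncio API of that time (the coroutine names are illustrative):
import asyncio

@asyncio.coroutine
def work():
    yield from asyncio.sleep(1)
    return 'done'

@asyncio.coroutine
def caller():
    # Bare coroutine: behaves like a function call; caller is
    # suspended until work() has finished.
    res = yield from work()

    # Task: work() is scheduled immediately and runs concurrently,
    # so caller can make progress before collecting the result.
    task = asyncio.Task(work())
    yield from asyncio.sleep(0.5)  # caller does something else meanwhile
    res = yield from task          # now wait for the task to finish
    return res

loop = asyncio.get_event_loop()
loop.run_until_complete(caller())
loop.close()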
|
How to use "setup.cfg" instead of setup.py with Python 2.7
|
It seemed to me, that instead of the whole plethora of named keyword-arguments for
the distutils.core.setup function, one can use a setup.cfg file in the same directory
as the setup.py file and all these keywords will be read from the setup.cfg file.
I thought I could simply create a setup.cfg with Python 2.7, but a minimal testing version just does not work. I did test it with setup.py --name, which just returns: UNKNOWN.
And as usual with Python packaging, the documentation is confusing as hell, as it is never clear which version it relates to or at least how old the documentation is.
My two setup files:
setup.py:
from distutils.core import setup
setup()
setup.cfg:
[metadata]
name = foo
version = 0.1
I looked into the distutils package and found that (besides being fugly as hell) it seems to use the email.message_from_file factory to read the setup.cfg.
As I am quite OK with a setup.py-only approach, I would not bother much longer with such nonsense, but I am still curious how to do it right, if it is possible at all.
Neither the official packaging doc nor the Packaging-Authority seems to be a big help here.
Almost every time i feel compelled to look into python's 2.x stdlib i am wondering if they
try to showcase how not to program. On the other hand the C-Code seems quite beautiful.
|
The problem is that the setup.cfg file does not do what you want. It does not provide parameters to the setup function. It is used to supply parameters to the commands that setup.py makes available. You can list the supported commands with setup.py --help-commands. You should see something like:
(env) gondolin/zender% ./setup.py --help-commands
Standard commands:
build build everything needed to install
build_py "build" pure Python modules (copy to build directory)
.....
install_data install data files
sdist create a source distribution (tarball, zip file, etc.)
This is the list of sections that you can put in a setup.cfg file. You can list the options that a command supports using setup.py --help <command>. For example, the sdist command supports the following options:
(env) gondolin/zender% ./setup.py --help sdist
Common commands: (see '--help-commands' for more)
....
Options for 'sdist' command:
--formats formats for source distribution (comma-separated list)
--keep-temp (-k) keep the distribution tree around after creating archive
file(s)
--dist-dir (-d) directory to put the source distribution archive(s) in
[default: dist]
--help-formats list available distribution formats
You can control what happens when a user runs ./setup.py sdist in your project by adding a setup.cfg file like the following.
[sdist]
keep-temp = 1
dist-dir = dist/source
So... setup.cfg simply configures the behavior of the various setup commands for your project. The setup function really needs to have the metadata supplied to it as keyword parameters. You could write your own version of the distutils.dist.Distribution class that pulls metadata from setup.cfg and provide it as the distclass= keyword parameter to setup.
The missing piece to the puzzle is that the standard Distribution class does not provide a way to pass the path parameter to the distutils.dist.DistributionMetadata initializer which does pretty much what you want - it reads the package information using the email parsing stuff that you mentioned. What you found is the code that is used to process a PEP-314/PEP-345 metadata file. This is not used by the setup function. Instead, it is used to parse the metadata embedded in a distributed package.
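If you only need a handful of values, a pragmatic workaround is to parse the [metadata] section yourself and forward it to setup(). This is a rough sketch, not a standard distutils mechanism:
# setup.py - a rough sketch, not a standard distutils feature
from distutils.core import setup
from ConfigParser import ConfigParser  # configparser on Python 3

parser = ConfigParser()
parser.read('setup.cfg')
metadata = dict(parser.items('metadata'))  # {'name': 'foo', 'version': '0.1'}

setup(**metadata)
With the setup.cfg from the question in place, setup.py --name then prints foo instead of UNKNOWN.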
|
pycharm and flask autoreload and breakpoints not working
|
I'm using Pycharm 4, with flask 0.10.1, python 3.4
It seems that when running a flask application from inside pycharm, if I run it with:
app.run(debug=True)
My breakpoints are ignored. After some googling, I've found that in order to make PyCharm stop on breakpoints, I should run flask with:
app.run(debug=True, use_reloader=False)
Now PyCharm correctly stops on breakpoints, but I miss the autoreloading feature.
Is there any way to make both work together?
EDIT: Using python 2.7 both things work
|
I'm going to start with the short answer: No, what you want cannot be done with any releases of PyCharm up to 4.0.1.
The problem is that when you use the reloader the Flask application runs in a child process, so the PyCharm debugger is attached to the master process and has no control over the child.
The best way to solve this problem, in my opinion, is to ask JetBrains to build a "restart on change" feature in their IDE. Then you don't need to use Werkzeug's reloader at all and you get the same functionality direct from PyCharm.
Until JetBrains decides to implement this, I can share my workaround, which is not terribly bad.
In the "Edit Configurations", set the configuration you are going to use to "Single Instance only" (check box in the top right of the dialog box)
Make sure the configuration is the active one.
Configure your Flask app to not use the Werkzeug reloader.
Press Ctrl-D to start debugging (on Mac, others may have a different shortcut)
Breakpoints should work just fine because the reloader isn't active.
Make any code changes you need.
When you are ready to restart, hit Ctrl-D again. The first time you do it you will get a confirmation prompt, something like "stop and restart?". Say yes, and check the "do not show again" checkbox.
Now you can hit Ctrl-D to quickly restart the debugger whenever you need to.
I agree it is not perfect, but once the Ctrl-D gets into your muscle memory you will not even think about it.
Good luck!
|
django application selenium testing no static files
|
I want to do some functional tests on my Django app. I'm using Selenium; the tests work, but the problem is with static files: the css/js files come back as 404 Not Found.
My tests are running on localhost:8081.
Example bootstrap.css:
<h1>Not Found</h1><p>The requested URL /static/frontend/bootstrap/3.3.0/css/bootstrap.css was not found on this server.</p>
I can't find any information on this; do I have to add some extra config for the selenium tests?
Traceback:
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/t/py/django/bid/src/venv/local/lib/python2.7/site-packages/django/test/testcases.py", line 1028, in __call__
return super(FSFilesHandler, self).__call__(environ, start_response)
File "/home/t/py/django/bid/src/venv/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 187, in __call__
response = self.get_response(request)
File "/home/t/py/django/bid/src/venv/local/lib/python2.7/site-packages/django/test/testcases.py", line 1011, in get_response
return self.serve(request)
File "/home/t/py/django/bid/src/venv/local/lib/python2.7/site-packages/django/test/testcases.py", line 1023, in serve
return serve(request, final_rel_path, document_root=self.get_base_dir())
File "/home/t/py/django/bid/src/venv/local/lib/python2.7/site-packages/django/views/static.py", line 50, in serve
fullpath = os.path.join(document_root, newpath)
File "/home/t/py/django/bid/src/venv/lib/python2.7/posixpath.py", line 77, in join
elif path == '' or path.endswith('/'):
AttributeError: 'NoneType' object has no attribute 'endswith'
|
Assuming you're using 1.7, you can use StaticLiveServerTestCase from django.contrib.staticfiles.testing, instead of LiveServerTestCase (from django.test).
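A minimal sketch of what such a test could look like (class and test names are illustrative):
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver

class FrontendTest(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        super(FrontendTest, cls).setUpClass()
        cls.browser = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super(FrontendTest, cls).tearDownClass()

    def test_homepage_loads(self):
        # static files are served by the test server here, so css/js resolve
        self.browser.get(self.live_server_url)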
|
How to surround selected text in PyCharm like with Sublime Text
|
Is there a way to configure PyCharm to be able to surround selected code with parenthesis by just typing on the parenthesis key, like when we use SublimText 2?
|
I think you want something like
Preferences | Editor | General | Smart Keys -> Surround selection on typing quote or brace
|
How to delete a record by id in Flask-SQLAlchemy
|
I have a users table in my MySQL database. This table has id, name and age fields.
How can I delete a record by its id?
Now I use the following code:
user = User.query.get(id)
db.session.delete(user)
db.session.commit()
But I don't want to make any query before the delete operation. Is there any way to do this? I know I can use db.engine.execute("delete from users where id=..."), but I would like to use the delete() method.
Thanks!
|
You can do this,
User.query.filter_by(id=123).delete()
or
User.query.filter(User.id == 123).delete()
Edit:
Make sure to commit for delete() to take effect.
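Putting it together, a minimal sketch:
# bulk delete by primary key without loading the object first
User.query.filter_by(id=123).delete()
db.session.commit()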
|
Automated docstring and comments spell check
|
Consider the following sample code:
# -*- coding: utf-8 -*-
"""Test module."""
def test():
"""Tets function"""
return 10
pylint gives it 10 of 10, flake8 doesn't find any warnings:
$ pylint test.py
...
Global evaluation
-----------------
Your code has been rated at 10.00/10
...
$ flake8 test.py
$
But, as you may see, there is a typo in the test function's docstring. And, your editor would probably highlight it automagically, for example, here's how Pycharm does it:
Thanks to the http://stackoverflow.com/questions/2151300/whats-the-best-way-to-spell-check-python-source-code topic, now I know that there is a relevant spell-checking library called PyEnchant that can be used to detect typos.
My end goal is to automatically detect typos in the project and make the spell check a part of a continuous build, test and code-quality check run.
Is there a way to achieve that with pylint? If not, I would also appreciate any hints on applying PyEnchant to docstrings and comments project-wide (in this case, a pylint or pyflakes plugin could be made out of it).
Please, also, let me know if I'm getting insanely concerned about the code quality.
|
Pylint just released 1.4.0, which includes a spell-checker. Here is the initial pull-request.
Note that, to make the checker work, you need to install pyenchant python module and have an enchant library installed system-wide. On mac, it can be installed via brew:
$ brew install enchant
By default, the spelling pylint checker is turned off. You can enable it either in the pylint rc configuration file, or from the command-line:
$ cat test.py
# I am the tyop
$ pylint --disable all --enable spelling --spelling-dict en_US test.py
C: 1, 0: Wrong spelling of a word 'tyop' in a comment:
# I am the tyop
^^^^
Did you mean: 'typo' or 'top' or 'tip' or 'topi'? (wrong-spelling-in-comment)
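To make the check part of every run instead of passing flags on the command line, you can turn it on in the pylint rc file. A sketch, with option names as of pylint 1.4:
[SPELLING]
spelling-dict=en_US
# words the checker should not flag, e.g. project-specific jargon
spelling-ignore-words=pyenchant,docstring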
|
Convert large csv to hdf5
|
I have a 100M line csv file (actually many separate csv files) totaling 84GB. I need to convert it to a HDF5 file with a single float dataset. I used h5py in testing without any problems, but now I can't do the final dataset without running out of memory.
How can I write to HDF5 without having to store the whole dataset in memory? I'm expecting actual code here, because it should be quite simple.
I was just looking into pytables, but it doesn't look like the array class (which corresponds to a HDF5 dataset) can be written to iteratively. Similarly, pandas has read_csv and to_hdf methods in its io_tools, but I can't load the whole dataset at one time so that won't work. Perhaps you can help me solve the problem correctly with other tools in pytables or pandas.
|
Use append=True in the call to to_hdf:
import numpy as np
import pandas as pd
filename = '/tmp/test.h5'
df = pd.DataFrame(np.arange(10).reshape((5,2)), columns=['A', 'B'])
print(df)
# A B
# 0 0 1
# 1 2 3
# 2 4 5
# 3 6 7
# 4 8 9
# Save to HDF5
df.to_hdf(filename, 'data', mode='w', format='table')
del df # allow df to be garbage collected
# Append more data
df2 = pd.DataFrame(np.arange(10).reshape((5,2))*10, columns=['A', 'B'])
df2.to_hdf(filename, 'data', append=True)
print(pd.read_hdf(filename, 'data'))
yields
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
0 0 10
1 20 30
2 40 50
3 60 70
4 80 90
Note that you need to use format='table' in the first call to df.to_hdf to make the table appendable. Otherwise, the format is 'fixed' by default, which is faster for reading and writing, but creates a table which can not be appended to.
Thus, you can process each CSV one at a time, use append=True to build the hdf5 file. Then overwrite the DataFrame or use del df to allow the old DataFrame to be garbage collected.
Alternatively, instead of calling df.to_hdf, you could append to a HDFStore:
import numpy as np
import pandas as pd
filename = '/tmp/test.h5'
store = pd.HDFStore(filename)
for i in range(2):
df = pd.DataFrame(np.arange(10).reshape((5,2)) * 10**i, columns=['A', 'B'])
store.append('data', df)
store.close()
store = pd.HDFStore(filename)
data = store['data']
print(data)
store.close()
yields
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
0 0 10
1 20 30
2 40 50
3 60 70
4 80 90
|
matplotlib.pyplot has no attribute 'style'
|
I am trying to set a style in matplotlib as per tutorial http://matplotlib.org/users/style_sheets.html
import matplotlib.pyplot as plt
plt.style.use('ggplot')
but what I get in return is:
AttributeError: 'module' object has no attribute 'style'
My matplotlib version is 1.1.1 (and I'm on a Mac running Mavericks). Where are the styles in this version?
thanks!
|
My matplotlib version is 1.1.1
There's your problem. The style package was added in version 1.4. You should update your version.
|
How to decode a QR-code image in (preferably pure) Python?
|
TL;DR: I need a way to decode a QR-code from an image file using (preferable pure) Python.
I've got a jpg file with a QR-code which I want to decode using Python. I've found a couple libraries which claim to do this:
PyQRCode (website here) which supposedly can decode qr codes from images by simply providing a path like this:
import sys, qrcode
d = qrcode.Decoder()
if d.decode('out.png'):
print 'result: ' + d.result
else:
print 'error: ' + d.error
So I simply installed it using sudo pip install pyqrcode. The thing I find strange about the example code above, however, is that it only imports qrcode (and not pyqrcode). Since I think qrcode refers to this library, which can only generate qr-code images, it kind of confused me. So I tried the code above with both pyqrcode and qrcode, but both fail at the second line saying AttributeError: 'module' object has no attribute 'Decoder'. Furthermore, the website refers to Ubuntu 8.10 (which came out more than 6 years ago) and I can't find a public (git or other) repository of it to check the latest commit. So I moved on to the next library:
ZBar (website here) claims to be "an open source software suite for reading bar codes from various sources, such as image files." So I tried installing it on Mac OSX running sudo pip install zbar. This fails with error: command 'cc' failed with exit status 1. I tried to suggestions in the answers to this SO question, but I can't seem to solve it. So I decided to move on again:
QRTools, which according to this blogpost can decode images easily by using the following code:
from qrtools import QR
myCode = QR(filename=u"/home/psutton/Documents/Python/qrcodes/qrcode.png")
if myCode.decode():
print myCode.data
print myCode.data_type
print myCode.data_to_string()
So I tried installing it using sudo pip install qrtools, which can't find anything. I also tried it with python-qrtools, qr-tools, python-qrtools and a couple more combinations, but unfortunately to no avail. I suppose it refers to this repo which says it is based on ZBar (see above). Although I want to run my code on Heroku (and thus prefer a pure Python solution) I successfully installed it on a Linux box (with sudo apt-get install python-qrtools) and tried running it:
from qrtools import QR
c = QR(filename='/home/kramer65/qrcode.jpg')
c.data # prints u'NULL'
c.data_type # prints u'text'
c.data_to_string() # prints '\xef\xbb\xbfNULL' where I expect an int (being `1234567890`)
Although this seems to decode it, it doesn't seem to do it correctly. Furthermore, it needs ZBar and is thus not pure Python. So I decided to find yet another library.
PyXing (website here) is supposedly a Python port of the popular Java ZXing library, but the initial and only commit is 6 years old and the project has no readme or documentation whatsoever.
For the rest I found a couple of qr-encoders (not decoders) and some API endpoints which can decode for you. Since I don't want this service to be dependent on other API endpoints, I would like to keep the decoding local.
So to conclude; would anybody know how I can decode QR-codes from images in (preferable pure) Python? All tips are welcome!
|
You can try the following steps and code using qrtools:
Create a qrcode file, if not already existing
I used pyqrcode for doing this, which can be installed using pip install pyqrcode
And then use the code:
>>> import pyqrcode
>>> qr = pyqrcode.create("HORN O.K. PLEASE.")
>>> qr.png("horn.png", scale=6)
Decode an existing qrcode file using qrtools
Install qrtools using sudo apt-get install python-qrtools
Now use the following code within your python prompt
>>> import qrtools
>>> qr = qrtools.QR()
>>> qr.decode("horn.png")
>>> print qr.data
u'HORN O.K. PLEASE.'
Here is the complete code in a single run:
In [2]: import pyqrcode
In [3]: qr = pyqrcode.create("HORN O.K. PLEASE.")
In [4]: qr.png("horn.png", scale=6)
In [5]: import qrtools
In [6]: qr = qrtools.QR()
In [7]: qr.decode("horn.png")
Out[7]: True
In [8]: print qr.data
HORN O.K. PLEASE.
Caveats
You might need to install PyPNG using pip install pypng for using pyqrcode
In case you have PIL installed, you might get IOError: decoder zip not available. In that case, try uninstalling and reinstalling PIL using:
pip uninstall PIL
pip install PIL
If that doesn't work, try using Pillow instead
pip uninstall PIL
pip install pillow
|
What does `ValueError: cannot reindex from a duplicate axis` mean?
|
I am getting a ValueError: cannot reindex from a duplicate axis when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.
Here is my session inside of ipdb trace. I have a DataFrame with string index, and integer columns, float values. However when I try to create sum index for sum of all columns I am getting ValueError: cannot reindex from a duplicate axis error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem, what could I be missing?
I don't really understand what ValueError: cannot reindex from a duplicate axis means. What does this error message mean? Maybe this will help me diagnose the problem, and this is the most answerable part of my question.
ipdb> type(affinity_matrix)
<class 'pandas.core.frame.DataFrame'>
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64')
ipdb> affinity_matrix.index
Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object')
ipdb> affinity_matrix.values.dtype
dtype('float64')
ipdb> 'sums' in affinity_matrix.index
False
Here is the error:
ipdb> affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
I tried to reproduce this with a simple example, but I failed
In [32]: import pandas as pd
In [33]: import numpy as np
In [34]: a = np.arange(35).reshape(5,7)
In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))
In [36]: df.values.dtype
Out[36]: dtype('int64')
In [37]: df.loc['sums'] = df.sum(axis=0)
In [38]: df
Out[38]:
10 11 12 13 14 15 16
x 0 1 2 3 4 5 6
y 7 8 9 10 11 12 13
u 14 15 16 17 18 19 20
z 21 22 23 24 25 26 27
w 28 29 30 31 32 33 34
sums 70 75 80 85 90 95 100
|
This error usually arises when you join / assign to a column while the index has duplicate values. Since you are assigning a new row with .loc, pandas has to reindex the row axis, and that fails because your index contains duplicates: note that u'047' appears twice in the affinity_matrix.index listing above.
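You should be able to reproduce it with a small frame whose index contains a duplicate label (a minimal sketch; the exact error message may vary slightly between pandas versions):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2),
                  index=['a', 'b', 'b'],   # duplicate label, like u'047' above
                  columns=[10, 11])
df.loc['sums'] = df.sum(axis=0)
# ValueError: cannot reindex from a duplicate axis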
|
Sending a file over TCP sockets in Python
|
I've successfully been able to copy the file contents (image) to a new file. However when I try the same thing over TCP sockets I'm facing issues. The server loop is not exiting. The client loop exits when it reaches the EOF, however the server is unable to recognize EOF.
Here's the code:
Server
import socket # Import socket module
s = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345 # Reserve a port for your service.
s.bind((host, port)) # Bind to the port
f = open('torecv.png','wb')
s.listen(5) # Now wait for client connection.
while True:
c, addr = s.accept() # Establish connection with client.
print 'Got connection from', addr
print "Receiving..."
l = c.recv(1024)
while (l):
print "Receiving..."
f.write(l)
l = c.recv(1024)
f.close()
print "Done Receiving"
c.send('Thank you for connecting')
c.close() # Close the connection
Client
import socket # Import socket module
s = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345 # Reserve a port for your service.
s.connect((host, port))
s.send("Hello server!")
f = open('tosend.png','rb')
print 'Sending...'
l = f.read(1024)
while (l):
print 'Sending...'
s.send(l)
l = f.read(1024)
f.close()
print "Done Sending"
print s.recv(1024)
s.close # Close the socket when done
Here's the screenshot:
Server
Client
Edit 1: Extra data is copied over, making the file incomplete.
The first column shows the image that has been received. It seems to be larger than the one sent. Because of this, I'm not able to open the image. It seems like a corrupted file.
Edit 2: This is how I do it in the console. The file sizes are the same here.
|
The client needs to notify the server that it has finished sending, using socket.shutdown (not socket.close, which closes both the reading and writing parts of the socket):
...
print "Done Sending"
s.shutdown(socket.SHUT_WR)
print s.recv(1024)
s.close()
UPDATE
The client sends Hello server! to the server, which ends up written into the file on the server side.
s.send("Hello server!")
Remove the above line to avoid it.
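As a side note, socket.send may transmit fewer bytes than it was given; socket.sendall is the safer choice when pushing file chunks. A small sketch of the client loop with that change:
l = f.read(1024)
while (l):
    s.sendall(l)   # sendall retries until the whole chunk is written
    l = f.read(1024)
f.close()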
|
Can Scrapy be replaced by pyspider?
|
I've been using the Scrapy web-scraping framework pretty extensively, but recently I've discovered that there is another framework/system called pyspider, which, according to its GitHub page, is fresh, actively developed and popular.
pyspider's home page lists several things being supported out-of-the-box:
Powerful WebUI with script editor, task monitor, project manager and result viewer
Javascript pages supported!
Task priority, retry, periodical and
recrawl by age or marks in index page (like update time)
Distributed architecture
These are the things that Scrapy itself doesn't provide, but they are possible with the help of portia (for the Web UI), scrapyjs (for js pages) and scrapyd (deploying and distributing through an API).
Is it true that pyspider alone can replace all of these tools? In other words, is pyspider a direct alternative to Scrapy? If not, then which use cases does it cover?
I hope I'm not crossing "too broad" or "opinion-based" line.
|
pyspider and Scrapy have the same purpose, web scraping, but take different views on how to do it.
A spider should never stop until the WWW is dead. (Information is changing, data is being updated on websites; a spider should have the ability and responsibility to scrape the latest data. That's why pyspider has a URL database, a powerful scheduler, @every, age, etc.)
pyspider is a service more than a framework. (Components run in isolated processes; the lite all-in-one version runs as a service too; you don't need a Python environment, just a browser; everything about fetching or scheduling is controlled by the script via an API, not startup parameters or global configs; resources/projects are managed by pyspider; etc.)
pyspider is a spider system. (Any component can be replaced, even developed in C/C++/Java or any other language, for better performance or larger capacity.)
and
on_start vs start_url
token bucket traffic control vs download_delay
return json vs class Item
message queue vs Pipeline
built-in url database vs set
Persistence vs In-memory
PyQuery + any third package you like vs built-in CSS/Xpath support
In fact, I have not borrowed much from Scrapy. pyspider is really different from Scrapy.
But why not try it yourself? pyspider is also fast, has an easy-to-use API, and you can try it without installing it.
|
Import psycopg2 Library not loaded: libssl.1.0.0.dylib
|
When I try to run the command:
import psycopg2
I get the error:
ImportError: dlopen(/Users/gwulfs/anaconda/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: libssl.1.0.0.dylib
Referenced from: /Users/gwulfs/anaconda/lib/python2.7/site-packages/psycopg2/_psycopg.so
Reason: image not found
So far I have tried brew install openssl and have referenced (with no luck):
psycopg2 installation error - Library not loaded: libssl.dylib
http://joshuakehn.com/2013/10/13/Postgresapp-and-psycopg2-on-OS-X.html
Psycopg2 image not found
|
Instead of playing with symlinks in system library dirs, set $DYLD_FALLBACK_LIBRARY_PATH to include the anaconda libraries, e.g.:
export DYLD_FALLBACK_LIBRARY_PATH=$HOME/anaconda/lib/:$DYLD_FALLBACK_LIBRARY_PATH
|
How to run Debug server for Django project in PyCharm Community Edition?
|
Has anyone had issues setting up a debug configuration for a Django project in PyCharm Community Edition?
The Community Edition of the IDE lacks the project type option on project setup, and when I set up a Debug or Run configuration it asks me for a script it should run. What script would it be for Django, manage.py?
Thanks in advance
|
Yes you can.
In Run -> Edit Configurations create new configuration
Script: path_to/manage.py
Script parameters: runserver
|
TypeError: coercing to Unicode: need string or buffer, int found
|
I have 2 APIs and I am fetching data from them. I want to assign particular parts of the data to variables so that life becomes easier while coding. Here is the code:
import urllib2
import json
urlIncomeStatement = 'http://dev.c0l.in:8888'
apiIncomeStatement = urllib2.urlopen(urlIncomeStatement)
dataIncomeStatement = json.load(apiIncomeStatement)
urlFinancialPosition = 'http://dev.c0l.in:9999'
apiFinancialPosition = urllib2.urlopen(urlFinancialPosition)
dataFinancialPositiont = json.load(apiFinancialPosition)
for item in dataIncomeStatement:
name = item['company']['name']
interestPayable = int(item['company']['interest_payable'])
interestReceivable = int(item['company']['interest_receivable'])
sales = int(item['company']['interest_receivable'])
expenses = int(item['company']['expenses'])
openingStock = int(item['company']['opening_stock'])
closingStock = int(item['company']['closing_stock'])
sum1 = sales + expenses
if item['sector'] == 'technology':
name + "'s interest payable - " + interestPayable
name + "'s interest receivable - " + interestReceivable
name + "'s interest receivable - " + sales
name + "'s interest receivable - " + expenses
name + "'s interest receivable - " + openingStock
name + "'s interest receivable - " + closingStock
print sum1
In result I get:
Traceback (most recent call last):
File "C:/Users/gnite_000/Desktop/test.py", line 25, in <module>
name + "'s interest payable - " + interestPayable
TypeError: coercing to Unicode: need string or buffer, int found
|
The problem might have to do with the fact that you are adding ints to strings here
if item['sector'] == 'technology':
name + "'s interest payable - " + interestPayable
name + "'s interest receivable - " + interestReceivable
name + "'s interest receivable - " + sales
name + "'s interest receivable - " + expenses
name + "'s interest receivable - " + openingStock
name + "'s interest receivable - " + closingStock
As far as I'm aware, the interpreter cannot implicitly convert an int to a string.
This might work, though,
str(name) + "'s interest receivable - " + str(closingStock)
str() converts the values explicitly and works on both Python 2 and 3.
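With string formatting the conversion is handled for you. A small sketch using the variables from the question:
if item['sector'] == 'technology':
    print name + "'s interest payable - " + str(interestPayable)
    # or, letting format() do the conversion:
    print "{0}'s interest payable - {1}".format(name, interestPayable)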
|
Why not generating the secret key every time Flask application starts?
|
When using sessions, Flask requires a secret key. In every example I've seen, the secret key is somehow generated and then stored either in source code or in a configuration file.
What is the reason to store it permanently? Why not simply generating it when Flask application starts?
app.secret_key = os.urandom(50)
I can only imagine two cases where this would be a problem:
When using a variant of a failover where the same user may be moved from server to server (since several machines will generate different keys),
When using aggressive continuous delivery: with several dozen deliveries per day, breaking the sessions of every user would quickly become issue number one.
Are there any other cases?
|
The secret key is used to sign the session cookie. If you had to restart your application, and regenerated the key, all the existing sessions would be invalidated. That's probably not what you want (or at least, not the right way to go about invalidating sessions). A similar case could be made for anything else that relies on the secret key, such as tokens generated by itsdangerous to provide reset password urls (for example).
The standard practice is to have some throwaway key commited to the repo (so that there's something there for dev machines) and then to set the key in the local config when deploying. This way, the key isn't leaked and doesn't need to be regenerated.
There's also the case of running secondary systems that depend on the app context, such as Celery for running background tasks. If each running instance of the application has different settings, they may not work together correctly in some cases.
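A sketch of that standard practice (the file name and the environment variable are illustrative, not Flask conventions):
# config.py, committed to the repo: throwaway key so dev machines work
SECRET_KEY = 'dev-key-do-not-use-in-production'

# application setup
import os
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')
# on deployment, point APP_SETTINGS at a local file that overrides SECRET_KEY
if 'APP_SETTINGS' in os.environ:
    app.config.from_envvar('APP_SETTINGS')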
|
Best way to access the Nth line of csv file
|
I have to access the Nth line in a CSV file.
Here's what I did:
import csv
the_file = open('path', 'r')
reader = csv.reader(the_file)
N = input('What line do you need? > ')
i = 0
for row in reader:
if i == N:
print("This is the line.")
print(row)
break
i += 1
the_file.close()
...but this does not feel optimal. Edit for precision: If the file is huge, I do not want to go through all the lines and I do not want to have to load the whole file into memory.
I do hope something like reader[N] exists, but I have not found it.
Edit for answer: This line (coming from the chosen answer) is what I was looking for:
next(itertools.islice(csv.reader(f), N, None))
|
It makes little difference but it is slightly cleaner to use enumerate rather than making your own counter variable.
for i, row in enumerate(reader):
if i == N:
print("This is the line.")
print(row)
break
You can also use itertools.islice which is designed for this type of scenario - accessing a particular slice of an iterable without reading the whole thing into memory. It should be a bit more efficient than looping through the unwanted rows.
import csv
import itertools

with open(path, 'r') as f:
    N = int(input('What line do you need? > '))
    print("This is the line.")
    print(next(itertools.islice(csv.reader(f), N, None)))
But if your CSV file is small, just read the entire thing into a list, which you can then access with an index in the normal way. This also has the advantage that you can access several different rows in random order without having to reset the csv reader.
my_csv_data = list(reader)
print(my_csv_data[N])
|
Python - how to count item in json data
|
How can I get the number of elements in a node of JSON data in Python? This is an example of the JSON:
{
"result":[
{
"run":[
{
"action":"stop"
},
{
"action":"start"
},
{
"action":"start"
}
],
"find":true
}
]
}
I need to get the number of elements from the node data['result'][0]['run']. It should be 3, but I can't find how to do it in Python. Can anybody help me?
|
import json
json_data = json.dumps({
"result":[
{
"run":[
{
"action":"stop"
},
{
"action":"start"
},
{
"action":"start"
}
],
"find": "true"
}
]
})
item_dict = json.loads(json_data)
print len(item_dict['result'][0]['run'])
Convert it to a dict, then take the len() of the list inside it.
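If your data is already a parsed dict (e.g. it came from json.load on a file or response), you can of course count directly:
# data = json.load(fp), matching the structure from the question
print len(data['result'][0]['run'])  # -> 3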
|
Why can I pass an instance method to multiprocessing.Process, but not a multiprocessing.Pool?
|
I am trying to write an application that applies a function concurrently with a multiprocessing.Pool. I would like this function to be an instance method (so I can define it differently in different subclasses). This doesn't seem to be possible; as I have learned elsewhere, apparently bound methods can't be pickled. So why does starting a multiprocessing.Process with a bound method as a target work? The following code:
import multiprocessing
def test1():
print "Hello, world 1"
def increment(x):
return x + 1
class testClass():
def process(self):
process1 = multiprocessing.Process(target=test1)
process1.start()
process1.join()
process2 = multiprocessing.Process(target=self.test2)
process2.start()
process2.join()
def pool(self):
pool = multiprocessing.Pool(1)
for answer in pool.imap(increment, range(10)):
print answer
print
for answer in pool.imap(self.square, range(10)):
print answer
def test2(self):
print "Hello, world 2"
def square(self, x):
return x * x
def main():
c = testClass()
c.process()
c.pool()
if __name__ == "__main__":
main()
Produces this output:
Hello, world 1
Hello, world 2
1
2
3
4
5
6
7
8
9
10
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python27\Lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "C:\Python27\Lib\threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Python27\Lib\multiprocessing\pool.py", line 319, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
Why can Processes handle bound methods, but not Pools?
|
The pickle module normally can't pickle instance methods:
>>> import pickle
>>> class A(object):
... def z(self): print "hi"
...
>>> a = A()
>>> pickle.dumps(a.z)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/pickle.py", line 1374, in dumps
Pickler(file, protocol).dump(obj)
File "/usr/local/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/usr/local/lib/python2.7/pickle.py", line 306, in save
rv = reduce(self.proto)
File "/usr/local/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle instancemethod objects
However, the multiprocessing module has a custom Pickler that adds some code to enable this feature:
#
# Try making some callable types picklable
#
from pickle import Pickler
class ForkingPickler(Pickler):
dispatch = Pickler.dispatch.copy()
@classmethod
def register(cls, type, reduce):
def dispatcher(self, obj):
rv = reduce(obj)
self.save_reduce(obj=obj, *rv)
cls.dispatch[type] = dispatcher
def _reduce_method(m):
if m.im_self is None:
return getattr, (m.im_class, m.im_func.func_name)
else:
return getattr, (m.im_self, m.im_func.func_name)
ForkingPickler.register(type(ForkingPickler.save), _reduce_method)
You can replicate this using the copy_reg module to see it work for yourself:
>>> import copy_reg
>>> def _reduce_method(m):
... if m.im_self is None:
... return getattr, (m.im_class, m.im_func.func_name)
... else:
... return getattr, (m.im_self, m.im_func.func_name)
...
>>> copy_reg.pickle(type(a.z), _reduce_method)
>>> pickle.dumps(a.z)
"c__builtin__\ngetattr\np0\n(ccopy_reg\n_reconstructor\np1\n(c__main__\nA\np2\nc__builtin__\nobject\np3\nNtp4\nRp5\nS'z'\np6\ntp7\nRp8\n."
When you use Process.start to spawn a new process on Windows, it pickles all the parameters you passed to the child process using this custom ForkingPickler:
#
# Windows
#
else:
# snip...
from pickle import load, HIGHEST_PROTOCOL
def dump(obj, file, protocol=None):
ForkingPickler(file, protocol).dump(obj)
#
# We define a Popen class similar to the one from subprocess, but
# whose constructor takes a process object as its argument.
#
class Popen(object):
'''
Start a subprocess to run the code of a process object
'''
_tls = thread._local()
def __init__(self, process_obj):
# create pipe for communication with child
rfd, wfd = os.pipe()
# get handle for read end of the pipe and make it inheritable
...
# start process
...
# set attributes of self
...
# send information to child
prep_data = get_preparation_data(process_obj._name)
to_child = os.fdopen(wfd, 'wb')
Popen._tls.process_handle = int(hp)
try:
dump(prep_data, to_child, HIGHEST_PROTOCOL)
dump(process_obj, to_child, HIGHEST_PROTOCOL)
finally:
del Popen._tls.process_handle
to_child.close()
Note the "send information to the child" section. It's using the dump function, which uses ForkingPickler to pickle the data, which means your instance method can be pickled.
Now, when you use methods on multiprocessing.Pool to send a method to a child process, it's using a multiprocessing.Pipe to pickle the data. In Python 2.7, multiprocessing.Pipe is implemented in C, and calls pickle.dumps directly, so it doesn't take advantage of the ForkingPickler. That means pickling the instance method doesn't work.
However, if you use copy_reg to register the instancemethod type, rather than a custom Pickler, all attempts at pickling will be affected. So you can use that to enable pickling instance methods, even via Pool:
import multiprocessing
import copy_reg
import types
def _reduce_method(m):
if m.im_self is None:
return getattr, (m.im_class, m.im_func.func_name)
else:
return getattr, (m.im_self, m.im_func.func_name)
copy_reg.pickle(types.MethodType, _reduce_method)
def test1():
print("Hello, world 1")
def increment(x):
return x + 1
class testClass():
def process(self):
process1 = multiprocessing.Process(target=test1)
process1.start()
process1.join()
process2 = multiprocessing.Process(target=self.test2)
process2.start()
process2.join()
def pool(self):
pool = multiprocessing.Pool(1)
for answer in pool.imap(increment, range(10)):
print(answer)
print
for answer in pool.imap(self.square, range(10)):
print(answer)
def test2(self):
print("Hello, world 2")
def square(self, x):
return x * x
def main():
c = testClass()
c.process()
c.pool()
if __name__ == "__main__":
main()
Output:
Hello, world 1
Hello, world 2
GOT (0, 0, (True, 1))
GOT (0, 1, (True, 2))
GOT (0, 2, (True, 3))
GOT (0, 3, (True, 4))
GOT (0, 4, (True, 5))
1GOT (0, 5, (True, 6))
GOT (0, 6, (True, 7))
2
GOT (0, 7, (True, 8))
3
GOT (0, 8, (True, 9))
GOT (0, 9, (True, 10))
4
5
6
7
8
9
10
GOT (1, 0, (True, 0))
0
GOT (1, 1, (True, 1))
1
GOT (1, 2, (True, 4))
4
GOT (1, 3, (True, 9))
9
GOT (1, 4, (True, 16))
16
GOT (1, 5, (True, 25))
25
GOT (1, 6, (True, 36))
36
GOT (1, 7, (True, 49))
49
GOT (1, 8, (True, 64))
64
GOT (1, 9, (True, 81))
81
GOT None
Also note that in Python 3.x, pickle can pickle instance method types natively, so none of this stuff matters any more. :)
|
Firefox Build does not work with Selenium
|
For my research, I did some source code modifications in Firefox and built it myself. In order to automate testing, I opted to use Selenium, but unfortunately my newly built Firefox seems not to support Selenium.
I did the following:
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
binary = FirefoxBinary("/path/to/firefox/binary")
d = webdriver.Firefox(firefox_binary=binary)
d.get("http://www.google.de")
The Firefox does open and is responsive (I can enter a website in the search bar). But after a while, the python script crashes with the following error message:
Traceback (most recent call last):
File "firefox.py", line 7, in <module>
d = webdriver.Firefox(firefox_binary=binary)
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/webdriver.py", line 59, in __init__
self.binary, timeout),
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 47, in __init__
self.binary.launch_browser(self.profile)
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 66, in launch_browser
self._wait_until_connectable()
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 109, in _wait_until_connectable
raise WebDriverException("Can't load the profile. Profile "
selenium.common.exceptions.WebDriverException: Message: Can't load the profile. Profile Dir: %s If you specified a log_file in the FirefoxBinary constructor, check it for details.
I did google that error message, and most solutions suggested that I should update Selenium since it does not support the Firefox version used. Unfortunately, I already installed the newest version of Selenium (2.44.0) and I even used an older version of Firefox (version 33) to rule out that point.
I also made sure that my code modifications are not the reason for the crash by building a clean, unmodified Firefox. Selenium doesn't work with this Firefox either.
If I don't specify a Firefox binary and let Selenium use the installed Firefox, everything works fine. So my guess is that something is wrong with the Firefox build, which I did exactly as described in the online documentation (e.g. ./mach build).
Does anyone have an idea what my mistake might be? Any help is greatly appreciated!
Some setup information:
Firefox 33
Selenium 2.44.0
Python 3.4 (also tried 2.7, doesn't work either)
Firefox built on Ubuntu 14.04
|
Ubuntu 14.04, firefox 36.0, selenium 2.44.0.
The same problem, was solved by:
sudo pip install -U selenium
Selenium 2.45.0 is OK with FF36.
update: Selenium 2.53+ is compatible with FF45
You can get older FF versions here
|
Convert word2vec bin file to text
|
From the word2vec site I can download GoogleNews-vectors-negative300.bin.gz. The .bin file (about 3.4GB) is a binary format not useful to me. Tomas Mikolov assures us that "It should be fairly straightforward to convert the binary format to text format (though that will take more disk space). Check the code in the distance tool, it's rather trivial to read the binary file." Unfortunately, I don't know enough C to understand http://word2vec.googlecode.com/svn/trunk/distance.c.
Supposedly gensim can do this also, but all the tutorials I've found seem to be about converting from text, not the other way.
Can someone suggest modifications to the C code or instructions for gensim to emit text?
|
I use this code to load the binary model and then save it to a text file:
from gensim.models import word2vec
model = word2vec.Word2Vec.load_word2vec_format('path/to/GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('path/to/GoogleNews-vectors-negative300.txt', binary=False)
References: API and nullege.
|
Python Pandas read_csv skip rows but keep header
|
I'm having trouble figuring out how to skip n rows in a csv file but keep the header, which is the first row.
What I want to do is iterate but keep the header from the first row. skiprows makes the header the first row after the skipped rows. What is the best way of doing this?
data = pd.read_csv('test.csv', sep='|', header=0, skiprows=10, nrows=10)
|
You can pass a list of row numbers to skiprows instead of an integer. The reader will then ignore those rows in the list.
By giving the function the integer 10, you're just skipping the first 10 lines.
To keep row 0 as the header and skip rows 1 through 9, you could write:
pd.read_csv('test.csv', sep='|', skiprows=range(1, 10))
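Applied to the call from the question (and keeping nrows), that would look like:
pd.read_csv('test.csv', sep='|', header=0, skiprows=range(1, 10), nrows=10)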
|
How do I fix 'ImportError: cannot import name IncompleteRead'?
|
When I try to install anything with pip or pip3, I get:
$ sudo pip3 install python3-tk
Traceback (most recent call last):
File "/usr/bin/pip3", line 9, in <module>
load_entry_point('pip==1.5.6', 'console_scripts', 'pip3')()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
['__name__'])
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 61, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/usr/lib/python3/dist-packages/pip/vcs/mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "/usr/lib/python3/dist-packages/pip/download.py", line 25, in <module>
from requests.compat import IncompleteRead
ImportError: cannot import name 'IncompleteRead'
I have a Ubuntu 14.10 system.
How can I fix this problem?
|
While this previous answer might be the reason, this snippet worked for me as a solution (on Ubuntu 14.04):
First remove the package from the package manager:
# apt-get remove python-pip
And then install the latest version by side:
# easy_install pip
(thanks to @Aufziehvogel, @JunchaoGu)
|
Decorator execution order
|
def make_bold(fn):
return lambda : "<b>" + fn() + "</b>"
def make_italic(fn):
return lambda : "<i>" + fn() + "</i>"
@make_bold
@make_italic
def hello():
return "hello world"
helloHTML = hello()
Output: "<b><i>hello world</i></b>"
I roughly understand about decorators and how it works with one of it in most examples.
In this example, there are 2 of it. From the output, it seems that @make_italic executes first, then @make_bold.
Does this mean that for decorated functions, it first applies the innermost decorator and then moves towards the top for the other decorators? Like @make_italic first, then @make_bold, instead of the opposite.
So this means that it is different from the normal top-down approach in most programming languages? Is that just the case for decorators? Or am I wrong?
|
Decorators wrap the function they are decorating. So make_bold decorated the result of the make_italic decorator, which decorated the hello function.
The @decorator syntax is really just syntactic sugar; the following:
@decorator
def decorated_function():
# ...
is really executed as:
def decorated_function():
# ...
decorated_function = decorator(decorated_function)
replacing the original decorated_function object with whatever decorator() returned.
Stacking decorators repeats that process outward.
So your sample:
@make_bold
@make_italic
def hello():
return "hello world"
can be expanded to:
def hello():
return "hello world"
hello = make_bold(make_italic(hello))
When you call hello() now, you are calling the object returned by make_bold(), really. make_bold() returned a lambda that calls the function make_bold wrapped, which is the return value of make_italic(), which is also a lambda that calls the original hello(). Expanding all these calls you get:
hello() = lambda : "<b>" + fn() + "</b>" # where fn() ->
lambda : "<i>" + fn() + "</i>" # where fn() ->
return "hello world"
so the output becomes:
"<b>" + ("<i>" + ("hello world") + "</i>") + "</b>"
|
How to connect PyCharm to a python interpreter located inside a Docker container?
|
I'm starting with Docker, but I don't know how to configure PyCharm to use a python interpreter located in a container.
It was easy to setup with Vagrant, but there's apparently no official way to do it with Docker yet.
Should I prepare special Docker image with exposed ssh port? How to do that more easily?
|
Here is how I solved the problem. My circumstances are that I was assigned to do an intervention on a specific area of a web app that used docker-compose to create a set of four containers. Docker-compose is a kind of meta docker that manages multiple docker containers from one command. I did not want to mangle their existing setup, since so many things depend on it.
But since I was working on one specific part in one of the images, I decided I would extend one of the containers with ssh so that I could debug from PyCharm. Further, I wanted the app to run as normal when started; only by forcing it to quit and then connecting to it from PyCharm would I have a debuggable component.
Here is what I did on my mac, which uses boot2docker (on VirtualBox) to set up docker correctly.
First, I need to extend the target container, called jqworker. I am going to use "supervisior" to do the heavy lifting of managing things.
FROM jqworker
# Get supervisor to control multiple processes, sshd to allow connections.
# And supervisor-stdout allows us to send the output to the main docker output.
RUN apt-get update && apt-get install -y supervisor openssh-server python-pip \
&& pip install supervisor-stdout \
&& mkdir -p /var/run/sshd \
&& mkdir -p /var/log/supervisor \
&& mkdir -p /etc/supervisor/conf.d
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Fix up SSH, probably should rip this out in real deploy situations.
RUN echo 'root:soup4nuts' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Expose SSH on 22, but this gets mapped to some other address.
EXPOSE 22
# Replace old entrypoint with supervisiord, starts both sshd and worker.py
ENTRYPOINT ["/usr/bin/supervisord"]
Supervisor lets me run multiple tasks from one command, in this case the original command and SSHD. Yes, everyone says that SSHD in docker is evil and containers should this and that and blah blah, but programming is about solving problems, not conforming to arbitrary dicta that ignore context. We need SSH to debug code and are not deploying this to the field, which is one reason we are extending the existing container instead of adding this in to the deployment structure. I am running it locally so that I can debug the code in context.
Here is the supervisord.conf file, note that I am using the supervisor-stdout package to direct output to supervisor instead of logging the data as I prefer to see it all in one place:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:worker]
command=python /opt/applications/myproject/worker.py -A args
directory=/opt/applications/myproject
stdout_events_enabled=true
stderr_events_enabled=true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
I have a build directory containing the above two files, and from a terminal in there I build the Dockerfile with:
docker build -t fgkrqworker .
This adds it so that I can call it from docker or docker-compose. Don't skip the trailing dot!
Since the app uses docker-compose to run a set of containers, the existing WORKER container will be replaced with one that solves my problems. But first I want to show that in another part of my docker-compose.yml I define a mapping from the containers to my local hard drive, this is one of a number of volumes being mapped:
volumes: &VOLUMES
? /Users/me/source/myproject:/opt/applications/myproject
Then the actual definition for my container, which references the above VOLUMES:
jqworker: &WORKER
image: fgkrqworker
privileged: true
stdin_open: true
detach: true
tty: true
volumes:
<<: *VOLUMES
ports:
- "7722:22"
This maps the SSH port to a known port that is available in the VM; recall I am using boot2docker, which rides on VirtualBox, but the port needs to be mapped out to where PyCharm can get at it. In VirtualBox, open the boot2docker VM and choose Adapter 1. Sometimes the "Attached to:" combo unselects itself, so watch for that. In my case it should have NAT selected.
Click "Port Forwarding" and map the inner port to the a port on localhost, I choose to use the same port number. It should be something like, Name: ssh_mapped; Protocol: TCP; Host IP:127.0.0.1; Host Port:7722; Guest IP:; Guest Port: 7722. Note: be careful not to change the boot2docker `ssh' setting or you will eventually be unable to start the VM correctly.
So, at this point we have a container that extends my target container. It runs ssh on port 22 and maps it to 7722 since other containers might want to use 22, and is visible in the VirtualBox environment. VirtualBox maps 7722 to 7722 to the localhost and you can ssh into the container with:
ssh root@localhost -p 7722
Which will then prompt for the password, 'soup4nuts' and you should be able to locate something specific to your container to verify that it is the right one and that everything works OK. I would not mess with root if I were deploying this anywhere but my local machine, so be warned. This is only for debugging locally and you should think twice or thrice about doing this on a live site.
At this point you can probably figure the rest of it out if you have used PyCharm's remote debugging. But here is how I set it up:
First, recall that I have docker-compose.yml mapping the project directory:
? /Users/me/source/myproject:/opt/applications/myproject
In my container /opt/applications/myproject is actually /Users/me/source/myproject on my local hard drive. So, this is the root of my project. My PyCharm sees this directory as the project root, and I want PyCharm to write the .pycharm_helpers here so that it persists between sessions. I am managing source code on the mac side of things, but PyCharm thinks it is a unixy box elsewhere. Yes, it is a bit of a kludge until JetBrains incorporates a Docker solution.
First, go to the Project X/Project Structure and create a Content Root of the local mapping, in my case that means /Users/me/source/myproject
Later, come back and add .pycharm_helpers to the excluded set, we don't want this to end up in source control or confuse PyCharm.
Go to the Build, Execution, Deployment tab, pick Deployment and create a new Deployment of SFTP type. The host is localhost, the port 7722, the root path is /opt/applications/myproject and the username is root and password is soup4nuts and I checked the option to save the password. I named my Deployment 'dockercompose' so that I would be able to pick it out later.
On the Deployment Mappings tab I set the local path to /Users/me/source/myproject and deployment and web path to a single '/' but since my code doesn't correspond to a URL and I don't use this to debug, it is a placeholder in the Web Path setting. I don't know how you might set yours.
On the Project X/Project Interpreter tab, create a new Remote Python Interpreter. You can pick the Deployment Configuration and choose the 'dockercompose' configuration we created above. The host URL should fill in as 'ssh://root@localhost:7722' and the Python Interpreter Path will likely be /usr/bin/python. We need to set the PyCharm Helpers Path as the default will not survive the container being redone. I actually went to my project local directory and created a .pycharm_helpers directory in the root, then set the path here as /opt/applications/myproject/.pycharm_helpers and when I hit the OK button it copied the files "up" to the directory. I don't know if it will create it automatically or not.
Don't forget that the .pycharm_helpers directory should probably be excluded on the project roots tab.
At this point you can go to the Build, Execution, Deployment tab, and under Console/Python Console, pick the remote interpreter we created above and set the working directory to /opt/applications/myproject and you can run your Python Console in the container if you like.
Now you need to create a Run Configuration so that you can remotely debug your python code. Make a new Python configuration and set the script to the one that used to start the python code in the container. Mine, from the supervisor setup above, is:
/opt/applications/myproject/worker.py -A args
So I set the script to /opt/applications/myproject/worker.py and the parameters to -A args.
Choose the remote interpreter we created above, and set the working directory as needed; for me it is /opt/applications/myproject, and that does the job.
Now I want to enter my container and stop the worker.py script so I can start up a debug version. Of course, if you like you can ignore running the script by default and only use the container for debugging.
I could open an ssh session to stop the script, but docker provides a useful command that will do the work for me by running it inside the container (substitute your container's name, which docker ps will show):
$> docker exec -i -t <container_name> supervisorctl stop worker
As my process is named 'worker'. Note that you can restart it by replacing the stop command with start.
Now, in PyCharm start a debug session with the Run Configuration created above. It should connect and start things up and give you console output in the window. Since we killed the one that Supervisor originally started, it is no longer connected.
This was a seat-of-the-pants operation, so there may be errors and incorrect assumptions I didn't notice. In particular, the PyCharm setup required a few iterations, so the order may be incorrect; try going through it again if it fails. This is a lot of stuff and it is easy to skip something critical.
|
Pycharm error Django is not importable in this environment
|
I'm getting the following errors in Pycharm.
Error: Django is not importable in this environment
It used to work well but I moved the project and virtualenv to another directory and the error appeared. It's weird because if I run the project, ignoring the configuration settings warning that I get, it works fine.
The problem is that now, Pycharm is not finding the project interpreter.
The command that Pycharm is running is:
/Users/plorenzo/../gestorSchools/venv/bin/python3.4 /Users/plorenzo/../gestorSchools/schoolsManager/manage.py runserver 8000
The virtualenv is activated.
My $PATH is:
/Users/plorenzo/../gestorSchools/venv/bin:/Library/Frameworks/Python.framework/Versions/3.4/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
and my $PYTHONPATH:
/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4
I spent a few days trying to find the solution but I couldn't find it.
|
I finally found the error. The problem was that one folder name in the virtualenv path had an accent.
This is the old path:
/Users/plorenzo/../telefónica/gestorSchools/venv
This is the new one that works:
/Users/plorenzo/../telefonica/gestorSchools/venv
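If you suspect the same problem, a quick way to check a path for non-ASCII characters is a couple of lines of Python (the path below is just an illustration):
path = "/Users/you/telefónica/gestorSchools/venv"  # hypothetical path
try:
    path.encode('ascii')
except UnicodeEncodeError as err:
    print("non-ASCII character in path:", err)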
|
python pandas: plot histogram of dates?
|
I've taken my Series and coerced it to a datetime column of dtype=datetime64[ns] (though I only need day resolution... not sure how to change that).
import pandas as pd
df = pd.read_csv('somefile.csv')
column = df['date']
column = pd.to_datetime(column, coerce=True)
but plotting doesn't work:
ipdb> column.plot(kind='hist')
*** TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('float64')
I'd like to plot a histogram that just shows the count of dates by week, month, or year.
Surely there is a way to do this in pandas?
|
Given this df:
date
0 2001-08-10
1 2002-08-31
2 2003-08-29
3 2006-06-21
4 2002-03-27
5 2003-07-14
6 2004-06-15
7 2003-08-14
8 2003-07-29
and, if it's not already the case:
df.date = df.date.astype("datetime64")
To show the count of dates by month:
df.groupby(df.date.dt.month).count().plot(kind="bar")
.dt allows you to access the datetime properties.
This will give you a bar chart of the counts per month.
You can replace month by year, day, etc..
If you want to distinguish year and month for instance, just do:
df.groupby([df.date.dt.year, df.date.dt.month]).count().plot(kind="bar")
This gives a bar chart with (year, month) pairs on the x-axis.
Was it what you wanted? Is this clear?
Hope this helps!
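If you would rather count by calendar week, an alternative sketch (assuming pandas 0.18+ for the groupby-like resample API, and that df.date is already datetime64) is to move the dates into the index and resample:
counts = df.set_index('date').resample('W').size()  # rows per week
counts.plot(kind="bar")
'W' can be swapped for 'M' (month) or 'A' (year) in the same way.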
|
Auto-import doesn't follow PEP8
|
Consider the following code:
from bs4 import BeautifulSoup
data = "<test>test text</test>"
soup = BeautifulSoup(data)
print(soup.find(text=re.compile(r'test$')))
It is missing an import re line and would fail with a NameError without it.
Now, I'm trying to use PyCharm's Auto-Import feature: focusing on re and hitting Alt+Enter, which opens up the following popup:
Now, if I choose Import 're' option, Pycharm would insert the new import line at the top of the script:
import re
from bs4 import BeautifulSoup
data = "<test>test text</test>"
soup = BeautifulSoup(data)
print(soup.find(text=re.compile(r'test$')))
Looks almost good, except that it doesn't follow PEP8 import guidelines:
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
In other words, there is a missing blank line between the two imports:
import re
from bs4 import BeautifulSoup
Question is: is it possible to tell Pycharm to follow the PEP8 guidelines and insert a new-line between the lines with different import types on auto-import?
As a workaround, I'm calling Optimize Imports after that organizes the imports correctly.
|
You can't, as far as I can tell. PyCharm's auto-import simply inserts the new import at the top of the file; it does not add the PEP8 blank line between standard-library and third-party groups, and there is no setting for it. If your PyCharm is outdated (the newest version is 4.0.2/4.2), upgrading is worth a try; if the behaviour persists, it looks like a bug worth reporting to JetBrains. You can also try reinstalling PyCharm, or simply make a habit of adding the blank line yourself. Until then, running Optimize Imports afterwards, as you already do, is the practical workaround. It does not matter whether you use from-imports, plain imports, or both.
|
scope of eval function in python
|
Consider the following example:
i=7
j=8
k=10
def test():
i=1
j=2
k=3
return dict((name,eval(name)) for name in ['i','j','k'])
It returns:
>>> test()
{'i': 7, 'k': 10, 'j': 8}
Why does eval not take into account the variables defined inside the function? From the documentation, you can optionally pass a globals and a locals dictionary. What does that mean? Finally, how can I modify this small example to make it work?
|
Generators are implemented as function scopes:
The scope of names defined in a class block is limited to the class
block; it does not extend to the code blocks of methods; this
includes generator expressions since they are implemented using a
function scope.
So, the generator inside the dict() constructor has its own locals() dictionary. Now let's take a look at Py_eval's source code, specifically the case when both globals and locals are None:
if (globals == Py_None) {
globals = PyEval_GetGlobals();
if (locals == Py_None)
locals = PyEval_GetLocals();
}
So, for your example PyEval_GetLocals() will be empty at the moment the loop is executing, and globals() will be the global dictionary. Note that i, j and k defined inside the function are not in the local scope of the generator; rather, they are in its enclosing scope:
>>> dict((name,eval(name, globals(), {})) for name in ['i', 'j', 'k'])
{'i': 7, 'k': 10, 'j': 8}
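To answer the last part of the question: capture locals() once, in the function scope and before the generator expression runs, then pass it to eval explicitly. A minimal sketch:
i, j, k = 7, 8, 10
def test():
    i, j, k = 1, 2, 3
    loc = locals()  # captured in the enclosing function scope
    return dict((name, eval(name, globals(), loc))
                for name in ['i', 'j', 'k'])
print(test())  # {'i': 1, 'j': 2, 'k': 3}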
|
Python 3.4: How to import a module given the full path?
|
How can I load a Python module in Python 3.4 given its full path?
A similar question How to import a module given the full path? covers Python versions before 3.4, but the conclusion is that support in Python 3.4 is deprecated for the presented answers, so any solution for Python 3.4 is appreciated.
Note that this question is not a duplicate of Import arbitrary python source file. (Python 3.3+), since the answers there also use loader.load_module(), which is deprecated in Python 3.4, as noted in that answer, with details in Consider leaving importlib.abc.Loader.load_module() and in the importlib documentation.
So a supported solution for module import by full path in Python 3.4 is needed.
|
This should work for all python files, regardless of file extension:
import importlib.machinery
modulename = importlib.machinery.SourceFileLoader('modulename', '/Path/To/module.py').load_module()
This method was mentioned in the deprecation message in the imp.load_module documentation.
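If you can later move to Python 3.5 or newer, the spec-based API is the documented replacement; note that module_from_spec was, as far as I know, only added in 3.5, so this sketch does not apply to 3.4 itself:
import importlib.util
spec = importlib.util.spec_from_file_location('modulename', '/Path/To/module.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)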
|