| instruction (stringlengths 12-148) | input (stringlengths 16-30.2k) | output (stringlengths 11-27.3k) |
|---|---|---|
Pelican 3.3 pelican-quickstart error "ValueError: unknown locale: UTF-8"
|
When I was trying to use Pelican 3.3, I typed the command "pelican-quickstart" and some errors showed up.
These are the errors:
(PelicanEnv)59-127-113-90:myblog Richo$ pelican-quickstart
Traceback (most recent call last):
File "/Users/Richo/Dropbox/Github/PelicanEnv/bin/pelican-quickstart", line 9, in <module>
load_entry_point('pelican==3.3', 'console_scripts', 'pelican-quickstart')()
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point
return ep.load()
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/__init__.py", line 16, in <module>
from pelican.generators import (ArticlesGenerator, PagesGenerator,
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/generators.py", line 20, in <module>
from pelican.readers import Readers
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/readers.py", line 11, in <module>
import docutils.core
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
import docutils.io
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/locale.py", line 513, in getdefaultlocale
return _parse_localename(localename)
File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/locale.py", line 445, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
My OS is OS X Mavericks.
|
You could try a solution posted here or here. Basically, add some lines to your ~/.bash_profile:
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
There is an outstanding bug report related to this issue. It appears that Python makes some assumptions about the format of locale names that aren't universally valid. Explicitly setting these environment vars is basically just a workaround for that bug.
[Edit:] As @asmeurer correctly points out, the above fix assumes English and the U.S. You should really pick your preferred locale from the list given by locale -a (generally one that ends in UTF-8).
|
Construct pandas DataFrame from list of tuples
|
I have a list of tuples like
data = [
('r1', 'c1', avg11, stdev11),
('r1', 'c2', avg12, stdev12),
('r2', 'c1', avg21, stdev21),
('r2', 'c2', avg22, stdev22)
]
and I would like to put them into a pandas DataFrame with rows named by the first column and columns named by the second column. It seems the way to take care of the row names is something like pandas.DataFrame([x[1:] for x in data], index = [x[0] for x in data]), but how do I take care of the columns to get a 2x2 matrix (the output from the previous snippet is 3x4)? Is there a more intelligent way of taking care of the row labels as well, instead of explicitly omitting them?
EDIT It seems I will need 2 DataFrames - one for averages and one for standard deviations, is that correct? Or can I store a list of values in each "cell"?
|
You can pivot your DataFrame after creating it:
>>> df = pd.DataFrame(data)
>>> df.pivot(index=0, columns=1, values=2)
# avg DataFrame
1 c1 c2
0
r1 avg11 avg12
r2 avg21 avg22
>>> df.pivot(index=0, columns=1, values=3)
# stdev DataFrame
1 c1 c2
0
r1 stdev11 stdev12
r2 stdev21 stdev22
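To answer the edit: you do not necessarily need two separate DataFrames. A small sketch (the column names and placeholder numbers are assumptions) that keeps both statistics in one frame with hierarchical columns:
import pandas as pd
data = [
    ('r1', 'c1', 0.11, 0.01),
    ('r1', 'c2', 0.12, 0.02),
    ('r2', 'c1', 0.21, 0.03),
    ('r2', 'c2', 0.22, 0.04),
]
df = pd.DataFrame(data, columns=['row', 'col', 'avg', 'stdev'])
# pivoting without `values` keeps every remaining column and builds hierarchical columns
wide = df.pivot(index='row', columns='col')
print(wide['avg'])    # the 2x2 matrix of averages
print(wide['stdev'])  # the 2x2 matrix of standard deviations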
|
Flask RESTful cross-domain issue with Angular: PUT, OPTIONS methods
|
I've developed a small write-only REST api with Flask Restful that accepts PUT request from a handful of clients that can potentially have changing IP addresses. My clients are embedded Chromium clients running an AngularJS front-end; they authenticate with my API with a simple magic key -- it's sufficient for my very limited scale.
I'm testing deploying my API now and I notice that the Angular clients are attempting to send an OPTIONS http methods to my Flask service. My API meanwhile is replying with a 404 (since I didn't write an OPTIONS handler yet, only a PUT handler). It seems that when sending cross-domain requests that are not POST or GET, Angular will send a pre-flight OPTIONS method at the server to make sure the cross-domain request is accepted before it sends the actual request. Is that right?
Anyway, how do I allow all cross-domain PUT requests to the Flask Restful API? I've used cross-domain decorators with a (non-restful) Flask instance before, but do I need to write an OPTIONS handler into my API as well?
|
I resolved the issue by rewriting my Flask backend to answer with an Access-Control-Allow-Origin header in my PUT response. Furthermore, I created an OPTIONS handler in my Flask app to answer the options method by following what I read in the http RFC.
The return on the PUT method looks like this:
return restful.request.form, 201, {'Access-Control-Allow-Origin': '*'}
My OPTIONS method handler looks like this:
def options(self):
    return {'Allow': 'PUT'}, 200, \
        {'Access-Control-Allow-Origin': '*',
         'Access-Control-Allow-Methods': 'PUT,GET'}
@tbicr is right: Flask DOES answer the OPTIONS method automatically for you. However, in my case it wasn't transmitting the Access-Control-Allow-Origin header with that answer, so my browser was getting a reply from the api that seemed to imply that cross-domain requests were not permitted. I overloaded the options request in my case and added the ACAO header, and the browser seemed to be satisfied with that, and followed up OPTIONS with a PUT that also worked.
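Putting that together, a minimal sketch of such a resource; the class name, route, and use of flask_restful's Api/Resource here are assumptions for illustration, not the original code:
from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class Message(Resource):  # hypothetical resource name
    def put(self):
        # echo the submitted form data, adding the CORS header to the response
        return request.form, 201, {'Access-Control-Allow-Origin': '*'}

    def options(self):
        # answer the preflight request with the allowed methods and origin
        return {'Allow': 'PUT'}, 200, \
            {'Access-Control-Allow-Origin': '*',
             'Access-Control-Allow-Methods': 'PUT,GET'}

api.add_resource(Message, '/message')

if __name__ == '__main__':
    app.run()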
|
Django import error - no module named django.conf.urls.defaults
|
I am trying to run statsd/graphite which uses django 1.6.
While accessing the graphite URL, I get a django module error:
File "/opt/graphite/webapp/graphite/urls.py", line 15, in
from django.conf.urls.defaults import *
ImportError: No module named defaults
However, I do not find defaults django package inside /Library/Python/2.7/site-packages/django/conf/urls/
Please help me fix this issue.
|
django.conf.urls.defaults has been removed in Django 1.6. If the problem was in your own code, you would fix it by changing the import to
from django.conf.urls import patterns, url, include
However, in your case the problem is in a third party app, graphite. The issue has been fixed in graphite's master branch and version 0.9.14+.
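For reference, a graphite-independent sketch of what that change looks like in a urls.py (the view path is a made-up placeholder):
# removed in Django 1.6:
# from django.conf.urls.defaults import patterns, url, include
# replacement that works on Django 1.6:
from django.conf.urls import patterns, url, include

urlpatterns = patterns('',
    url(r'^$', 'myapp.views.home', name='home'),  # hypothetical view
)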
|
Pandas: filling missing values by mean in each group
|
This should be straightforward, but the closest thing I've found is this post:
pandas: Filling missing values within a group, and I still can't solve my problem....
Suppose I have the following dataframe
df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3], 'name': ['A','A', 'B','B','B','B', 'C','C','C']})
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
and I'd like to fill in "NaN" with mean value in each "name" group, i.e.
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
I'm not sure where to go after:
grouped = df.groupby('name').mean()
Thanks a bunch.
|
One way would be to use transform:
>>> df
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
>>> df["value"] = df.groupby("name").transform(lambda x: x.fillna(x.mean()))
>>> df
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
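An equivalent formulation (a sketch of the same idea) fills only the value column with each group's mean:
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
                   'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C']})
# transform(np.mean) broadcasts each group's mean back to that group's rows
df["value"] = df["value"].fillna(df.groupby("name")["value"].transform(np.mean))
print(df)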
|
What's the difference between `from django.conf import settings` and `import settings` in a Django project
|
I'm reading up that most people do from django.conf import settings, but I don't understand the difference from simply doing import settings in a Django project file. Can anyone explain the difference?
|
import settings will import the first Python module named settings.py found on sys.path, which in a default Django setup is your project's settings file. It gives you access only to your site-defined settings, which override Django's default settings (django.conf.global_settings).
So, if you try to access a valid Django setting that is not specified in your settings file, you will get an error.
django.conf.settings is not a module but an object that abstracts the two concepts, Django's default settings and your site-specific settings. Django also performs additional checks when you use from django.conf import settings.
You can also find it in the django docs.
Hope this helps.
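A small sketch of the difference, assuming DJANGO_SETTINGS_MODULE is configured and APPEND_SLASH is not defined in your own settings.py (the default project template does not include it):
import settings
settings.APPEND_SLASH   # AttributeError: only names defined in your settings.py exist here

from django.conf import settings
settings.APPEND_SLASH   # True -- falls back to django.conf.global_settings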
|
django-rest-framework + django-polymorphic ModelSerialization
|
I was wondering if anyone had a Pythonic solution of combining Django REST framework with django-polymorphic.
Given:
class GalleryItem(PolymorphicModel):
gallery_item_field = models.CharField()
class Photo(GalleryItem):
custom_photo_field = models.CharField()
class Video(GalleryItem):
custom_image_field = models.CharField()
If I want a list of all GalleryItems in django-rest-framework it would only give me the fields of GalleryItem (the parent model), hence: id, gallery_item_field, and polymorphic_ctype. That's not what I want. I want the custom_photo_field if it's a Photo instance and custom_image_field if it's a Video.
|
So far I only tested this for GET request, and this works:
class PhotoSerializer(serializers.ModelSerializer):
class Meta:
model = models.Photo
class VideoSerializer(serializers.ModelSerializer):
class Meta:
model = models.Video
class GalleryItemModuleSerializer(serializers.ModelSerializer):
class Meta:
model = models.GalleryItem
def to_representation(self, obj):
"""
Because GalleryItem is Polymorphic
"""
if isinstance(obj, models.Photo):
return PhotoSerializer(obj, context=self.context).to_representation(obj)
elif isinstance(obj, models.Video):
return VideoSerializer(obj, context=self.context).to_representation(obj)
return super(GalleryItemModuleSerializer, self).to_representation(obj)
For POST and PUT requests you might want to do something similar, overriding the to_internal_value definition in the same way as to_representation.
|
What is Python's heapq module?
|
I tried "heapq" and arrived at the conclusion that my expectations differ from what I see on the screen. I need somebody to explain how it works and where it can be useful.
From the book Python Module of the Week under paragraph 2.2 Sorting it is written
If you need to maintain a sorted list as you add and remove values,
check out heapq. By using the functions in heapq to add or remove
items from a list, you can maintain the sort order of the list with
low overhead.
Here is what I do and get.
import heapq
heap = []
for i in range(10):
heap.append(i)
heap
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
heapq.heapify(heap)
heapq.heappush(heap, 10)
heap
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
heapq.heappop(heap)
0
heap
[1, 3, 2, 7, 4, 5, 6, 10, 8, 9] <<< Why doesn't the list remain sorted?
heapq.heappushpop(heap, 11)
1
heap
[2, 3, 5, 7, 4, 11, 6, 10, 8, 9] <<< Why is 11 put between 4 and 6?
So, as you see the "heap" list is not sorted at all, in fact the more you add and remove the items the more cluttered it becomes. Pushed values take unexplainable positions.
What is going on?
|
The heapq module maintains the heap invariant, which is not the same thing as maintaining the actual list object in sorted order.
Quoting from the heapq documentation:
Heaps are binary trees for which every parent node has a value less than or equal to any of its children. This implementation uses arrays for which heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that its smallest element is always the root, heap[0].
This means that it is very efficient to find the smallest element (just take heap[0]), which is great for a priority queue. After that, the next 2 values will be larger than (or equal to) the 1st, and the next 4 after that are going to be larger than their 'parent' node, then the next 8 are larger, etc.
You can read more about the theory behind the datastructure in the Theory section of the documentation. You can also watch this lecture from the MIT OpenCourseWare Introduction to Algorithms course, which explains the algorithm in general terms.
A heap can be turned back into a sorted list very efficiently:
def heapsort(heap):
return [heapq.heappop(heap) for _ in range(len(heap))]
by just popping the next element from the heap. Using sorted(heap) should be faster still, however, as the TimSort will take advantage of the partial ordering already present in a heap.
You'd use a heap if you are only interested in the smallest value, or the first n smallest values, especially if you are interested in those values on an ongoing basis; adding new items and removing the smallest is very efficient indeed, more so than resorting the list each time you added a value.
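For instance, a small sketch of keeping track of the smallest values as items arrive, without ever fully sorting the list:
import heapq

values = [5, 1, 9, 3, 7, 2, 8]
heap = []
for v in values:
    heapq.heappush(heap, v)

print(heap[0])                     # 1 -- the smallest item is always at index 0
print(heapq.nsmallest(3, values))  # [1, 2, 3]
print([heapq.heappop(heap) for _ in range(len(heap))])  # [1, 2, 3, 5, 7, 8, 9]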
|
Sorting Multi-Index to full depth (Pandas)
|
I have a dataframe which I'm loading from a CSV file and then setting the index to a few of its columns (usually two or three) with the set_index method. The idea is to then access parts of the dataframe using several key combinations, as such:
df.set_index(['fileName','phrase'])
df.ix['somePath','somePhrase']
Apparently, this type of selection with multiple keys is only possible if the MultiIndex of the dataframe is sorted to sufficient depth. In this case, since I'm supplying two keys, the .ix operation will only succeed if the dataframe's MultiIndex is sorted to a depth of at least 2.
For some reason, when I'm setting the index as shown, even though it seems to me that both levels are sorted, calling df.index.lexsort_depth returns 1, and I get the following error when trying to access with two keys: MultiIndex lexsort depth 1, key was length 2
Any help?
|
It's not really clear what you are asking. The MultiIndex docs are here.
The OP needs to set the index, then sort in place
df.set_index(['fileName','phrase'],inplace=True)
df.sortlevel(inplace=True)
Then access these levels via a tuple to get a specific result
df.ix[('somePath','somePhrase')]
Here is a toy example showing how to get a specific result.
In [1]: arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
   ...:           np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
In [2]: df = DataFrame(randn(8, 4), index=arrays)
In [3]: df
Out[3]:
0 1 2 3
bar one 1.654436 0.184326 -2.337694 0.625120
two 0.308995 1.219156 -0.906315 1.555925
baz one -0.180826 -1.951569 1.617950 -1.401658
two 0.399151 -1.305852 1.530370 -0.132802
foo one 1.097562 0.097126 0.387418 0.106769
two 0.465681 0.270120 -0.387639 -0.142705
qux one -0.656487 -0.154881 0.495044 -1.380583
two 0.274045 -0.070566 1.274355 1.172247
In [4]: df.index.lexsort_depth
Out[4]: 2
In [5]: df.ix[('foo','one')]
Out[5]:
0 1.097562
1 0.097126
2 0.387418
3 0.106769
Name: (foo, one), dtype: float64
In [6]: df.ix['foo']
Out[6]:
0 1 2 3
one 1.097562 0.097126 0.387418 0.106769
two 0.465681 0.270120 -0.387639 -0.142705
In [7]: df.ix[['foo']]
Out[7]:
0 1 2 3
foo one 1.097562 0.097126 0.387418 0.106769
two 0.465681 0.270120 -0.387639 -0.142705
In [8]: df.sortlevel(level=1)
Out[8]:
0 1 2 3
bar one 1.654436 0.184326 -2.337694 0.625120
baz one -0.180826 -1.951569 1.617950 -1.401658
foo one 1.097562 0.097126 0.387418 0.106769
qux one -0.656487 -0.154881 0.495044 -1.380583
bar two 0.308995 1.219156 -0.906315 1.555925
baz two 0.399151 -1.305852 1.530370 -0.132802
foo two 0.465681 0.270120 -0.387639 -0.142705
qux two 0.274045 -0.070566 1.274355 1.172247
In [10]: df.sortlevel(level=1).index.lexsort_depth
Out[10]: 0
|
What can multiprocessing and dill do together?
|
I would like to use the multiprocessing library in Python. Sadly multiprocessing uses pickle, which doesn't support functions with closures, lambdas, or functions in __main__. All three of these are important to me:
In [1]: import pickle
In [2]: pickle.dumps(lambda x: x)
PicklingError: Can't pickle <function <lambda> at 0x23c0e60>: it's not found as __main__.<lambda>
Fortunately there is dill a more robust pickle. Apparently dill performs magic on import to make pickle work
In [3]: import dill
In [4]: pickle.dumps(lambda x: x)
Out[4]: "cdill.dill\n_load_type\np0\n(S'FunctionType'\np1 ...
This is very encouraging, particularly because I don't have access to the multiprocessing source code. Sadly, I still can't get this very basic example to work
import multiprocessing as mp
import dill
p = mp.Pool(4)
print p.map(lambda x: x**2, range(10))
Why is this? What am I missing? Exactly what are the limitations on the multiprocessing+dill combination?
Temporary Edit for J.F Sebastian
mrockli@mrockli-notebook:~/workspace/toolz$ python testmp.py
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/mrockli/Software/anaconda/lib/python2.7/threading.py", line 808, in __bootstrap_inner
self.run()
File "/home/mrockli/Software/anaconda/lib/python2.7/threading.py", line 761, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/mrockli/Software/anaconda/lib/python2.7/multiprocessing/pool.py", line 342, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
^C
...lots of junk...
[DEBUG/MainProcess] cleaning up worker 3
[DEBUG/MainProcess] cleaning up worker 2
[DEBUG/MainProcess] cleaning up worker 1
[DEBUG/MainProcess] cleaning up worker 0
[DEBUG/MainProcess] added worker
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-5] child process calling self.run()
[INFO/PoolWorker-6] child process calling self.run()
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-7] child process calling self.run()
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-8] child process calling self.run()
|
multiprocessing makes some bad choices about pickling. Don't get me wrong, it makes some good choices that enable it to pickle certain types so they can be used in a pool's map function. However, since we have dill that can do the pickling, multiprocessing's own pickling becomes a bit limiting. Actually, if multiprocessing were to use pickle instead of cPickle... and also drop some of its own pickling overrides, then dill could take over and give a much fuller serialization for multiprocessing.
Until that happens, there's a fork of multiprocessing called pathos (the release version is a bit stale, unfortunately) that removes the above limitations. Pathos also adds some nice features that multiprocessing doesn't have, like multi-args in the map function. Pathos is due for a release, after some mild updating -- mostly conversion to python 3.x.
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> from pathos.multiprocessing import ProcessingPool
>>> pool = ProcessingPool(nodes=4)
>>> result = pool.map(lambda x: x**2, range(10))
>>> result
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
and just to show off a little of what pathos.multiprocessing can do...
>>> def busy_add(x,y, delay=0.01):
... for n in range(x):
... x += n
... for n in range(y):
... y -= n
... import time
... time.sleep(delay)
... return x + y
...
>>> def busy_squared(x):
... import time, random
... time.sleep(2*random.random())
... return x*x
...
>>> def squared(x):
... return x*x
...
>>> def quad_factory(a=1, b=1, c=0):
... def quad(x):
... return a*x**2 + b*x + c
... return quad
...
>>> square_plus_one = quad_factory(2,0,1)
>>>
>>> def test1(pool):
... print pool
... print "x: %s\n" % str(x)
... print pool.map.__name__
... start = time.time()
... res = pool.map(squared, x)
... print "time to results:", time.time() - start
... print "y: %s\n" % str(res)
... print pool.imap.__name__
... start = time.time()
... res = pool.imap(squared, x)
... print "time to queue:", time.time() - start
... start = time.time()
... res = list(res)
... print "time to results:", time.time() - start
... print "y: %s\n" % str(res)
... print pool.amap.__name__
... start = time.time()
... res = pool.amap(squared, x)
... print "time to queue:", time.time() - start
... start = time.time()
... res = res.get()
... print "time to results:", time.time() - start
... print "y: %s\n" % str(res)
...
>>> def test2(pool, items=4, delay=0):
... _x = range(-items/2,items/2,2)
... _y = range(len(_x))
... _d = [delay]*len(_x)
... print map
... res1 = map(busy_squared, _x)
... res2 = map(busy_add, _x, _y, _d)
... print pool.map
... _res1 = pool.map(busy_squared, _x)
... _res2 = pool.map(busy_add, _x, _y, _d)
... assert _res1 == res1
... assert _res2 == res2
... print pool.imap
... _res1 = pool.imap(busy_squared, _x)
... _res2 = pool.imap(busy_add, _x, _y, _d)
... assert list(_res1) == res1
... assert list(_res2) == res2
... print pool.amap
... _res1 = pool.amap(busy_squared, _x)
... _res2 = pool.amap(busy_add, _x, _y, _d)
... assert _res1.get() == res1
... assert _res2.get() == res2
... print ""
...
>>> def test3(pool): # test against a function that should fail in pickle
... print pool
... print "x: %s\n" % str(x)
... print pool.map.__name__
... start = time.time()
... res = pool.map(square_plus_one, x)
... print "time to results:", time.time() - start
... print "y: %s\n" % str(res)
...
>>> def test4(pool, maxtries, delay):
... print pool
... m = pool.amap(busy_add, x, x)
... tries = 0
... while not m.ready():
... time.sleep(delay)
... tries += 1
... print "TRY: %s" % tries
... if tries >= maxtries:
... print "TIMEOUT"
... break
... print m.get()
...
>>> import time
>>> x = range(18)
>>> delay = 0.01
>>> items = 20
>>> maxtries = 20
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> pool = Pool(nodes=4)
>>> test1(pool)
<pool ProcessingPool(ncpus=4)>
x: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
map
time to results: 0.0553691387177
y: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289]
imap
time to queue: 7.91549682617e-05
time to results: 0.102381229401
y: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289]
amap
time to queue: 7.08103179932e-05
time to results: 0.0489699840546
y: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289]
>>> test2(pool, items, delay)
<built-in function map>
<bound method ProcessingPool.map of <pool ProcessingPool(ncpus=4)>>
<bound method ProcessingPool.imap of <pool ProcessingPool(ncpus=4)>>
<bound method ProcessingPool.amap of <pool ProcessingPool(ncpus=4)>>
>>> test3(pool)
<pool ProcessingPool(ncpus=4)>
x: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
map
time to results: 0.0523059368134
y: [1, 3, 9, 19, 33, 51, 73, 99, 129, 163, 201, 243, 289, 339, 393, 451, 513, 579]
>>> test4(pool, maxtries, delay)
<pool ProcessingPool(ncpus=4)>
TRY: 1
TRY: 2
TRY: 3
TRY: 4
TRY: 5
TRY: 6
TRY: 7
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34]
|
scikit .predict() default threshold
|
I'm working on a classification problem with unbalanced classes (5% 1's). I want to predict the class, not the probability.
In a binary classification problem, is scikit's classifier.predict() using 0.5 by default?
If it doesn't, what's the default method? If it does, how do I change it?
In scikit some classifiers have the class_weight='auto' option, but not all do. With class_weight='auto', would .predict() use the actual population proportion as a threshold?
What would be the way to do this in a classifier like MultinomialNB that doesn't support class_weight? Other than using predict_proba() and then calculating the classes myself.
|
is scikit's classifier.predict() using 0.5 by default?
In probabilistic classifiers, yes. It's the only sensible threshold from a mathematical viewpoint, as others have explained.
What would be the way to do this in a classifier like MultinomialNB that doesn't support class_weight?
You can set the class_prior, which is the prior probability P(y) per class y. That effectively shifts the decision boundary. E.g.
# minimal dataset
>>> X = [[1, 0], [1, 0], [0, 1]]
>>> y = [0, 0, 1]
# use empirical prior, learned from y
>>> MultinomialNB().fit(X,y).predict([1,1])
array([0])
# use custom prior to make 1 more likely
>>> MultinomialNB(class_prior=[.1, .9]).fit(X,y).predict([1,1])
array([1])
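If you do want a nonstandard threshold for a probabilistic classifier, a sketch using predict_proba (the 0.3 cutoff here is an arbitrary assumption):
import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = [[1, 0], [1, 0], [0, 1]]
y = [0, 0, 1]
clf = MultinomialNB().fit(X, y)

threshold = 0.3                                  # assumed custom cutoff
proba = clf.predict_proba([[1, 1]])              # class probabilities, shape (n_samples, 2)
y_pred = (proba[:, 1] >= threshold).astype(int)  # predict class 1 whenever P(y=1) >= threshold
print(y_pred)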
|
How do I create a Python set with only one element?
|
If I have a string, and want to create a set that initially contains only that string, is there a more Pythonic approach than the following?
mySet = set()
mySet.add(myString)
The following gives me a set of the letters in myString:
mySet = set(myString)
|
For example, this easy way:
mySet = set([myString])
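In Python 2.7 and 3.x a set literal works as well; note that empty braces {} create a dict, not a set:
myString = "Hello"
mySet = {myString}   # {'Hello'} -- a one-element set, not the set of its letters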
|
Run an OLS regression with Pandas Data Frame
|
I have a pandas data frame and I would like to able to predict the values of column A from the values in columns B and C. Here is a toy example:
import pandas as pd
df = pd.DataFrame({"A": [10,20,30,40,50],
"B": [20, 30, 10, 40, 50],
"C": [32, 234, 23, 23, 42523]})
Ideally, I would have something like ols(A ~ B + C, data = df) but when I look at the examples from algorithm libraries like scikit-learn it appears to feed the data to the model with a list of rows instead of columns. This would require me to reformat the data into lists inside lists, which seems to defeat the purpose of using pandas in the first place. What is the most pythonic way to run an OLS regression (or any machine learning algorithm more generally) on data in a pandas data frame?
|
I think you can almost do exactly what you thought would be ideal, using the statsmodels package which is one of pandas' optional dependencies (it's used for a few things in pandas.stats.)
>>> import pandas as pd
>>> import statsmodels.formula.api as sm
>>> df = pd.DataFrame({"A": [10,20,30,40,50], "B": [20, 30, 10, 40, 50], "C": [32, 234, 23, 23, 42523]})
>>> result = sm.ols(formula="A ~ B + C", data=df).fit()
>>> print result.params
Intercept 14.952480
B 0.401182
C 0.000352
dtype: float64
>>> print result.summary()
OLS Regression Results
==============================================================================
Dep. Variable: A R-squared: 0.579
Model: OLS Adj. R-squared: 0.158
Method: Least Squares F-statistic: 1.375
Date: Thu, 14 Nov 2013 Prob (F-statistic): 0.421
Time: 20:04:30 Log-Likelihood: -18.178
No. Observations: 5 AIC: 42.36
Df Residuals: 2 BIC: 41.19
Df Model: 2
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept 14.9525 17.764 0.842 0.489 -61.481 91.386
B 0.4012 0.650 0.617 0.600 -2.394 3.197
C 0.0004 0.001 0.650 0.583 -0.002 0.003
==============================================================================
Omnibus: nan Durbin-Watson: 1.061
Prob(Omnibus): nan Jarque-Bera (JB): 0.498
Skew: -0.123 Prob(JB): 0.780
Kurtosis: 1.474 Cond. No. 5.21e+04
==============================================================================
Warnings:
[1] The condition number is large, 5.21e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
|
How to get POSTed json in Flask?
|
I'm trying to build a simple API using Flask, in which I now want to read some POSTed JSON. I do the post with the PostMan Chrome extension, and the JSON I post is simply {"text":"lalala"}. I try to read the JSON using the following method:
@app.route('/api/add_message/<uuid>', methods=['GET', 'POST'])
def add_message(uuid):
content = request.json
print content
return uuid
On the browser it correctly returns the uuid I put in the GET, but on the console, it just prints out None (where I expect it to print out the {"text":"lalala"}). Does anybody know how I can get the posted JSON from within the Flask method?
|
You need to set the request content type to application/json for the .json property to work; it'll be None otherwise. See the Flask Request documentation:
If the mimetype is application/json this will contain the parsed JSON data. Otherwise this will be None.
Flask 0.10 added the request.get_json() method, and you should use that method instead of the .json property. You can tell the method to skip the content type requirement by setting force=True.
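Applied to the view from the question, a minimal sketch using get_json (Flask 0.10+ assumed):
from flask import Flask, request

app = Flask(__name__)

@app.route('/api/add_message/<uuid>', methods=['GET', 'POST'])
def add_message(uuid):
    # force=True parses the body as JSON even without the application/json content type;
    # silent=True returns None instead of raising an error when there is no JSON body (e.g. on GET)
    content = request.get_json(force=True, silent=True)
    print(content)
    return uuid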
|
Print different precision by column with pandas.DataFrame.to_csv()?
|
Question
Is it possible to specify a float precision specifically for each column to be printed by the Python pandas package method pandas.DataFrame.to_csv?
Background
If I have a pandas dataframe that is arranged like this:
In [53]: df_data[:5]
Out[53]:
year month day lats lons vals
0 2012 6 16 81.862745 -29.834254 0.0
1 2012 6 16 81.862745 -29.502762 0.1
2 2012 6 16 81.862745 -29.171271 0.0
3 2012 6 16 81.862745 -28.839779 0.2
4 2012 6 16 81.862745 -28.508287 0.0
There is the float_format option that can be used to specify a precision, but that applies a single precision to all columns of the dataframe when printed.
When I use that like so:
df_data.to_csv(outfile, index=False,
header=False, float_format='%11.6f')
I get the following, where vals is given an inaccurate precision:
2012,6,16, 81.862745, -29.834254, 0.000000
2012,6,16, 81.862745, -29.502762, 0.100000
2012,6,16, 81.862745, -29.171270, 0.000000
2012,6,16, 81.862745, -28.839779, 0.200000
2012,6,16, 81.862745, -28.508287, 0.000000
|
Change the type of column "vals" prior to exporting the data frame to a CSV file
df_data['vals'] = df_data['vals'].map(lambda x: '%2.1f' % x)
df_data.to_csv(outfile, index=False, header=False, float_format='%11.6f')
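The same idea generalizes to giving every float column its own format before writing, so float_format is no longer needed (a sketch; outfile and the column names come from the question):
formats = {'lats': '%11.6f', 'lons': '%11.6f', 'vals': '%2.1f'}
df_out = df_data.copy()
for col, fmt in formats.items():
    df_out[col] = df_out[col].map(lambda x, fmt=fmt: fmt % x)  # bind fmt per column
df_out.to_csv(outfile, index=False, header=False)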
|
Install a package and write to requirements.txt with pip
|
I'm searching for a way to install a package with pip, and write that package's version information to my project's requirements.txt file. For those familiar with npm, it's essentially what npm install --save does.
Using pip freeze > requirements.txt works great, but I've found that I forget to run this, or I can accidentally include unused packages that I'd installed for testing but decided not to use.
So the following pseudocode:
$ pip install nose2 --save
Would result in a requirements.txt file with:
nose2==0.4.7
I guess I could munge the output of save to grab the version numbers, but I am hoping there is an easier way.
|
To get the version information, you can actually use pip freeze selectively after install. Here is a function that should do what you are asking for:
pip_install_save() {
package_name=$1
requirements_file=$2
if [[ -z $requirements_file ]]
then
requirements_file='./requirements.txt'
fi
pip install $package_name && pip freeze | grep -i $package_name >> $requirements_file
}
Note the -i to the grep command. Pip isn't case sensitive with package names, so you will probably want that.
|
What does "for x in y or z:" do in Python?
|
I'm trying to take apart and de-obfuscate this Mandelbrot-generating Python code:
_ = (
255,
lambda
V ,B,c
:c and Y(V*V+B,B, c
-1)if(abs(V)<6)else
( 2+c-4*abs(V)**-0.4)/i
) ;v, x=1500,1000;C=range(v*x
);import struct;P=struct.pack;M,\
j ='<QIIHHHH',open('M.bmp','wb').write
for X in j('BM'+P(M,v*x*3+26,26,12,v,x,1,24))or C:
i ,Y=_;j(P('BBB',*(lambda T:(T*80+T**9
*i-950*T **99,T*70-880*T**18+701*
T **9 ,T*i**(1-T**45*2)))(sum(
[ Y(0,(A%3/3.+X%v+(X/v+
A/3/3.-x/2)/1j)*2.5
/x -2.7,i)**2 for \
A in C
[:9]])
/9)
) )
source: http://preshing.com/20110926/high-resolution-mandelbrot-in-obfuscated-python/
I found a for loop with an interesting twist, and I'm not sure if it's just part of the obfuscation, or if it actually has a function. The middle line:
for X in j('BM'+P(M,v*x*3+26,26,12,v,x,1,24))or C:
What does the or do at the end?
I re-wrote the code as follows for clarity:
import struct
image_name = "M.bmp"
mandlebrot = lambda V,B,c :c and Y(V*V+B,B,c-1) if(abs(V)<6) else (2+c-4*abs(V)**-0.4)/i
y,x = 600,800
pixelrange = range(y*x)
package = struct.pack
image = open(image_name,'wb').write
for X in image('BM'+package('<QIIHHHH',y*x*3+26,26,12,y,x,1,24)) or pixelrange:
i = 255
Y = mandlebrot
image(package('BBB',*(lambda T:(T*80+T**9*i-950*T**99,T*70-880*T**18+701*T**9,T*i**(1-T**45*2)))(sum([Y(0,(A%3/3.+X%y+(X/y+A/3/3.-x/2)/1j)*2.5/x-2.7,i)**2 for A in pixelrange [:9]])/9)))
|
Doing:
for x in y or z:
is the same as:
for x in (y or z):
If y evaluates to True, the for-loop will iterate through it. Otherwise, it will iterate through z.
Below is a demonstration:
>>> y = [1, 2, 3]
>>> z = [4, 5, 6]
>>> for x in y or z:
... print x
...
1
2
3
>>> y = [] # Empty lists evaluate to False in Python
>>> for x in y or z:
... print x
...
4
5
6
>>>
|
How to 'turn off' blurry effect of imshow() in matplotlib?
|
I want to make a color plot of probabilities; however, imshow generates blurry values for points which have zero probability. How can I get rid of this blurry periphery around real grid points?
Example:
import numpy as np
import matplotlib.pyplot as plt
a=np.asarray([[ 0.00000000e+00 , 1.05824446e-01 , 2.05086136e-04, 0.00000000e+00],
[ 1.05824446e-01 , 3.15012305e-01 , 1.31255127e-01 , 1.05209188e-01],
[ 2.05086136e-04 , 1.31255127e-01 , 0.00000000e+00 , 0.00000000e+00],
[ 0.00000000e+00 ,1.05209188e-01 , 0.00000000e+00 , 0.00000000e+00]])
im=plt.imshow(a,extent=[0,4,0,4],origin='lower',alpha=1,aspect='auto')
plt.show()
|
By default (which will be changed in mpl 2.0), imshow interpolates the data (as you would want to do for an image). All you need to do is tell it to not interpolate:
im = plt.imshow(..., interpolation='none')
'nearest' will also work for what you want. See the question "smoothing between pixels of imagesc\imshow in matlab like the matplotlib imshow" for examples of all of the kinds of interpolation, and the imshow documentation.
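Applied to the example from the question, a minimal sketch (the array values are abbreviated from the original):
import numpy as np
import matplotlib.pyplot as plt

a = np.asarray([[0.0,      0.105824, 0.000205, 0.0],
                [0.105824, 0.315012, 0.131255, 0.105209],
                [0.000205, 0.131255, 0.0,      0.0],
                [0.0,      0.105209, 0.0,      0.0]])

# interpolation='none' (or 'nearest') keeps each cell a flat, sharply bounded square
im = plt.imshow(a, extent=[0, 4, 0, 4], origin='lower', alpha=1,
                aspect='auto', interpolation='none')
plt.show()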
|
Fitting a Normal distribution to 1D data
|
I have a one-dimensional array and I can compute the "mean" and "standard deviation" of this sample and plot the "Normal distribution", but I have a problem:
I want to plot the data and the Normal distribution in the same figure, like below:
I don't know how to plot both the "DATA" and the "Normal Distribution".
Any idea about the "Gaussian probability density function in scipy.stats"?
s = np.std(array)
m = np.mean(array)
plt.plot(norm.pdf(array,m,s))
|
You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
# Generate some data for this demonstration.
data = norm.rvs(10.0, 2.5, size=500)
# Fit a normal distribution to the data:
mu, std = norm.fit(data)
# Plot the histogram.
plt.hist(data, bins=25, normed=True, alpha=0.6, color='g')
# Plot the PDF.
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: mu = %.2f, std = %.2f" % (mu, std)
plt.title(title)
plt.show()
Here's the plot generated by the script:
|
Read file data without saving it in Flask
|
I am writing my first flask application. I am dealing with file uploads, and basically what I want is to read the data/content of the uploaded file without saving it and then print it on the resulting page. Yes, I am assuming that the user uploads a text file always.
Here is the simple upload function i am using:
@app.route('/upload/', methods=['GET', 'POST'])
def upload():
if request.method == 'POST':
file = request.files['file']
if file:
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
a = 'file uploaded'
return render_template('upload.html', data = a)
Right now, I am saving the file, but what I need is that 'a' variable to contain the content/data of the file .. any ideas?
|
FileStorage contains a stream field. This object must extend an IO or file object, so it must contain read and other similar methods. FileStorage also proxies the stream object's attributes, so you can just use file.read() instead of file.stream.read(). Also, you can call save with a StringIO or another IO or file object as its dst parameter to copy FileStorage.stream to another IO or file object.
See documentation: http://flask.pocoo.org/docs/api/#flask.Request.files and http://werkzeug.pocoo.org/docs/datastructures/#werkzeug.datastructures.FileStorage.
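A minimal sketch of the upload view from the question, reading the content into a instead of saving the file (assuming the upload is a text file):
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/upload/', methods=['GET', 'POST'])
def upload():
    if request.method == 'POST':
        file = request.files['file']
        if file:
            a = file.read()   # the uploaded file's contents; nothing is written to disk
            return render_template('upload.html', data=a)
    return render_template('upload.html', data='no file uploaded')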
|
Playing mp3 song on python
|
I want to play my song (MP3) from Python. Can you give me the simplest command to do that?
This is not correct:
import wave
w = wave.open("e:/LOCAL/Betrayer/Metalik Klinik1-Anak Sekolah.mp3","r")
|
Grab the VLC Python module, vlc.py, which provides full support for libVLC and pop that in site-packages. Then:
>>> import vlc
>>> p = vlc.MediaPlayer("file:///path/to/track.mp3")
>>> p.play()
And you can stop it with:
>>> p.stop()
That module offers plenty beyond that (like pretty much anything the VLC media player can do), but that's the simplest and most effective means of playing one MP3.
You could play with os.path a bit to get it to find the path to the MP3 for you, given the filename and possibly limiting the search directories.
Full documentation and pre-prepared modules are available here. Current versions are Python 3 compatible.
|
Vectorizing a Pandas dataframe for Scikit-Learn
|
Say I have a dataframe in Pandas like the following:
> my_dataframe
col1 col2
A foo
B bar
C something
A foo
A bar
B foo
where rows represent instances, and columns input features (not showing the target label, but this would be for a classification task), i.e. I am trying to build X out of my_dataframe.
How can I vectorize this efficiently using e.g. DictVectorizer ?
Do I need to convert each and every entry in my DataFrame to a dictionary first? (that's the way it is done in the example in the link above). Is there a more efficient way to do this?
|
First, I don't get where in your sample array the features are, and where the observations are.
Second, DictVectorizer holds no data; it is only a transformation utility and metadata store. After transformation it stores feature names and the mapping. It returns a numpy array, used for further computations. The numpy array (feature matrix) has one row per observation and one column per feature, with values equal to the feature value for that observation. So if you know your observations and features, you can create this array any other way you like.
In case you expect sklearn do it for you, you don't have to reconstruct dict manually, as it can be done with to_dict applied to transposed dataframe:
>>> df
col1 col2
0 A foo
1 B bar
2 C foo
3 A bar
4 A foo
5 B bar
>>> df.T.to_dict().values()
[{'col2': 'foo', 'col1': 'A'}, {'col2': 'bar', 'col1': 'B'}, {'col2': 'foo', 'col1': 'C'}, {'col2': 'bar', 'col1': 'A'}, {'col2': 'foo', 'col1': 'A'}, {'col2': 'bar', 'col1': 'B'}]
Since pandas 0.13.0 (Jan 3, 2014) there is a new 'records' orientation for the to_dict() method, so now you can simply use this method without additional manipulation:
>>> df = pandas.DataFrame({'col1': ['A', 'B', 'C', 'A', 'A', 'B'], 'col2': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar']})
>>> df
col1 col2
0 A foo
1 B bar
2 C foo
3 A bar
4 A foo
5 B bar
>>> df.to_dict('records')
[{'col2': 'foo', 'col1': 'A'}, {'col2': 'bar', 'col1': 'B'}, {'col2': 'foo', 'col1': 'C'}, {'col2': 'bar', 'col1': 'A'}, {'col2': 'foo', 'col1': 'A'}, {'col2': 'bar', 'col1': 'B'}]
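Those records can then be fed straight to DictVectorizer, for example (a sketch using the same toy frame):
import pandas
from sklearn.feature_extraction import DictVectorizer

df = pandas.DataFrame({'col1': ['A', 'B', 'C', 'A', 'A', 'B'],
                       'col2': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar']})

vec = DictVectorizer()
X = vec.fit_transform(df.to_dict('records'))  # sparse matrix, one row per observation
print(vec.get_feature_names())  # e.g. ['col1=A', 'col1=B', 'col1=C', 'col2=bar', 'col2=foo']
print(X.toarray())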
|
Inconsistent comprehension syntax?
|
I just stumbled over what seems to be a flaw in the python syntax-- or else I'm missing something.
See this:
[x for x in range(30) if x % 2 == 0]
But this is a syntax error:
[x for x in range(30) if x % 2 == 0 else 5]
If you have an else clause, you have to write:
[x if x % 2 == 0 else 5 for x in range (30)]
But this is a syntax error:
[x if x %2 == 0 for x in range(30)]
What am I missing? Why is this so inconsistent?
|
You are mixing syntax here. There are two different concepts at play here:
List comprehension syntax. Here if acts as a filter; include a value in the iteration or not. There is no else, as that is the 'don't include' case already.
A conditional expression. This must always return a value, either the outcome of the 'true' or the 'false' expression.
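The two can also be combined: the conditional expression maps every surviving item, while the trailing if filters items out. A small illustrative example:
>>> [x if x % 2 == 0 else 5 for x in range(10) if x % 3 == 0]
[0, 5, 6, 5]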
|
Apply Function on DataFrame Index
|
What is the best way to apply a function over the index of a Pandas DataFrame?
Currently I am using this verbose approach:
pd.DataFrame({"Month": df.reset_index().Date.apply(foo)})
where Date is the name of the index and foo is the name of the function that I am applying.
|
As already suggested by HYRY in the comments, Series.map is the way to go here. Just set the index to the resulting series.
Simple example:
df = pd.DataFrame({'d': [1, 2, 3]}, index=['FOO', 'BAR', 'BAZ'])
df
d
FOO 1
BAR 2
BAZ 3
df.index = df.index.map(lambda x: x.lower())
df
d
foo 1
bar 2
baz 3
|
Django CMS fails to syncdb or migrate
|
I'm very new to django and in turn the django-cms app.
I have followed the tutorial step by step:
(from the official website) on a new machine and have everything exactly as the tutorial does, yet I still can't get anywhere.
I get to the final step in the setup process and everything falls over. When I run:
python manage.py syncdb --all
I get the following error:
CommandError: One or more models did not validate:
cms.page: 'site' has a relation with model <class 'django.contrib.sites.models.Site'>, which has either not been installed or is abstract.
cms.globalpagepermission: 'sites' has an m2m relation with model <class 'django.contrib.sites.models.Site'>, which has either not been installed or is abstract.
To clarify this is happening at step 1.3.2.1. Fresh install in the tutorial
1.3.2.1. Fresh install
Run:
python manage.py syncdb --all
python manage.py migrate --fake
None of these commands work, both result in the above error.
I am not sure how to solve this or even find resources on what is causing my problem. I have spent a short amount of time on both Google and Stack Overflow looking for answers with no result; I haven't been able to pinpoint the problem, which is probably making things harder for me.
Any help is much appreciated.
My code is exactly the same as the tutorial's, which is why I haven't posted any.
|
I'm guessing that you are using the new Django 1.6. There the sites application is no longer included by default in your project. And as it seems the django-cms depends on it.
You can add it easily to the list of enabled applications in your settings.py file, in the INSTALLED_APPS list:
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites', # <----- here!
...
|
Python Pandas max value of selected columns
|
data = {'name' : ['bill', 'joe', 'steve'],
'test1' : [85, 75, 85],
'test2' : [35, 45, 83],
'test3' : [51, 61, 45]}
frame = pd.DataFrame(data)
I would like to add a new column that shows the max value for each row.
desired output:
name test1 test2 test3 HighScore
bill 75 75 85 85
joe 35 45 83 83
steve 51 61 45 61
Sometimes
frame['HighScore'] = max(data['test1'], data['test2'], data['test3'])
works but most of the time gives this error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Why does it only work sometimes? Is there another way of doing it?
|
>>> frame['HighScore'] = frame[['test1','test2','test3']].max(axis=1)
>>> frame
name test1 test2 test3 HighScore
0 bill 85 35 51 85
1 joe 75 45 61 75
2 steve 85 83 45 85
|
Insert a link inside a pandas table
|
I'd like to insert a link (to a web page) inside a pandas table, so that when it is displayed in an IPython notebook, I can click the link.
I tried the following:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame(range(5), columns=['a'])
In [3]: df['b'] = df['a'].apply(lambda x: 'http://example.com/{0}'.format(x))
In [4]: df
Out[4]:
a b
0 0 http://example.com/0
1 1 http://example.com/1
2 2 http://example.com/2
3 3 http://example.com/3
4 4 http://example.com/4
but the url is just displayed as text.
I also tried using ipython HTML object:
In [5]: from IPython.display import HTML
In [6]: df['b'] = df['a'].apply(lambda x:HTML('http://example.com/{0}'.format(x)))
In [7]: df
Out[7]:
a b
0 0 <IPython.core.display.HTML object at 0x0481E530>
1 1 <IPython.core.display.HTML object at 0x0481E770>
2 2 <IPython.core.display.HTML object at 0x0481E7B0>
3 3 <IPython.core.display.HTML object at 0x0481E810>
4 4 <IPython.core.display.HTML object at 0x0481EA70>
but it will only display the repr of the object.
Any other ideas?
EDIT:
alko got the right answer, just wanted to add that the cell width is limited by default, and long html code will be truncated, ie:
<a href="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa0">xxx</a>
will become this:
<a href="aaaaaaaaaaaaaaaaaaaaaa...
and won't be displayed correctly.
(even though the text xxx is short and can fit in the cell)
I've bypassed it by setting:
pd.set_printoptions(max_colwidth=1000)
|
I suppose you have to represent the whole pandas object as an HTML object, that is:
In [1]: from IPython.display import HTML
In [2]: df = pd.DataFrame(list(range(5)), columns=['a'])
In [3]: df['a'] = df['a'].apply(lambda x: '<a href="http://example.com/{0}">link</a>'.format(x))
In [4]: HTML(df.to_html(escape=False))
Sorry, I don't have IPython at hand right now, and can't check whether the output is correct.
|
What is the purpose of Flask's context stacks?
|
I've been using the request/application context for some time without fully understanding how it works or why it was designed the way it was. What is the purpose of the "stack" when it comes to the request or application context? Are these two separate stacks, or are they both part of one stack? Is the request context pushed onto a stack, or is it a stack itself? Am I able to push/pop multiple contexts on top of each other? If so, why would I want to do that?
Sorry for all the questions, but I'm still confused after reading the documentation for Request Context and Application Context.
|
Multiple Apps
The application context (and its purpose) is indeed confusing until you realize that Flask can have multiple apps. Imagine the situation where you want to have a single WSGI Python interpreter run multiple Flask applications. We're not talking Blueprints here, we're talking entirely different Flask applications.
You might set this up similar to the Flask documentation section on "Application Dispatching" example:
from werkzeug.wsgi import DispatcherMiddleware
from frontend_app import application as frontend
from backend_app import application as backend
application = DispatcherMiddleware(frontend, {
'/backend': backend
})
Notice that there are two completely different Flask applications being created "frontend" and "backend". In other words, the Flask(...) application constructor has been called twice, creating two instances of a Flask application.
Contexts
When you are working with Flask, you often end up using global variables to access various functionality. For example, you probably have code that reads...
from flask import request
Then, during a view, you might use request to access the information of the current request. Obviously, request is not a normal global variable; in actuality, it is a context local value. In other words, there is some magic behind the scenes that says "when I call request.path, get the path attribute from the request object of the CURRENT request." Two different requests will have different results for request.path.
In fact, even if you run Flask with multiple threads, Flask is smart enough to keep the request objects isolated. In doing so, it becomes possible for two threads, each handling a different request, to simultaneously call request.path and get the correct information for their respective requests.
Putting it Together
So we've already seen that Flask can handle multiple applications in the same interpreter, and also that because of the way that Flask allows you to use "context local" globals there must be some mechanism to determine what the "current" request is (in order to do things such as request.path).
Putting these ideas together, it should also make sense that Flask must have some way to determine what the "current" application is!
You probably also have code similar to the following:
from flask import url_for
Like our request example, the url_for function has logic that is dependent on the current environment. In this case, however, it is clear to see that the logic is heavily dependent on which app is considered the "current" app. In the frontend/backend example shown above, both the "frontend" and "backend" apps could have a "/login" route, and so url_for('/login') should return something different depending on if the view is handling the request for the frontend or backend app.
To answer your questions...
What is the purpose of the "stack" when it comes to the request or
application context?
From the Request Context docs:
Because the request context is internally maintained as a stack you
can push and pop multiple times. This is very handy to implement
things like internal redirects.
In other words, even though you typically will have 0 or 1 items on these stacks of "current" requests or "current" applications, it is possible that you could have more.
The example given is where you would have your request return the results of an "internal redirect". Let's say a user requests A, but you want to return B to the user. In most cases, you issue a redirect to the user, and point the user to resource B, meaning the user will run a second request to fetch B. A slightly different way of handling this would be to do an internal redirect, which means that while processing A, Flask will make a new request to itself for resource B, and use the results of this second request as the results of the user's original request.
Are these two separate stacks, or are they both part of one stack?
They are two separate stacks. However, this is an implementation detail. What's more important is not so much that there is a stack, but the fact that at any time you can get the "current" app or request (top of the stack).
Is the request context pushed onto a stack, or is it a stack itself?
A "request context" is one item of the "request context stack". Similarly with the "app context" and "app context stack".
Am I able to push/pop multiple contexts on top of each other? If so,
why would I want to do that?
In a Flask application, you typically would not do this. One example of where you might want to is for an internal redirect (described above). Even in that case, however, you would probably end up having Flask handle a new request, and so Flask would do all of the pushing/popping for you.
However, there are some cases where you'd want to manipulate the stack yourself.
Running code outside of a request
One typical problem people have is that they use the Flask-SQLAlchemy extension to set up a SQL database and model definition using code something like what is shown below...
app = Flask(__name__)
db = SQLAlchemy() # Initialize the Flask-SQLAlchemy extension object
db.init_app(app)
Then they use the app and db values in a script that should be run from the shell. For example, a "setup_tables.py" script...
from myapp import app, db
# Set up models
db.create_all()
In this case, the Flask-SQLAlchemy extension knows about the app application, but during create_all() it will throw an error complaining about there not being an application context. This error is justified; you never told Flask what application it should be dealing with when running the create_all method.
You might be wondering why you don't end up needing this with app.app_context() call when you run similar functions in your views. The reason is that Flask already handles the management of the application context for you when it is handling actual web requests. The problem really only comes up outside of these view functions (or other such callbacks), such as when using your models in a one-off script.
The resolution is to push the application context yourself, which can be done by doing...
from myapp import app, db
# Set up models
with app.app_context():
db.create_all()
This will push a new application context (using the application of app, remember there could be more than one application).
Testing
Another case where you would want to manipulate the stack is for testing. You could create a unit test that handles a request and you check the results:
import unittest
from flask import request
class MyTest(unittest.TestCase):
def test_thing(self):
with app.test_request_context('/?next=http://example.com/') as ctx:
# You can now view attributes on request context stack by using `request`.
# Now the request context stack is empty
|
Understanding NumPy's Convolve
|
When calculating a simple moving average, numpy.convolve appears to do the job.
Question: How is the calculation done when you use np.convolve(values, weights, 'valid')?
When the docs mention that the convolution product is only given for points where the signals overlap completely, what are the 2 signals referring to?
If any explanations can include examples and illustrations, it will be extremely useful.
window = 10
weights = np.repeat(1.0, window)/window
smas = np.convolve(values, weights, 'valid')
|
Convolution is a mathematical operator primarily used in signal processing. Numpy simply uses this signal processing nomenclature to define it, hence the "signal" references. An array in numpy is a signal. The convolution of two signals is defined as the integral of the second signal reversed sweeping over the first signal. It can more clearly be understood using the animations on wikipedia.
Convolutions have multiple definitions depending on the context. Some start the convolution when the overlap begins while others start when the overlap is only partial. In the case of numpy's "valid" mode, the overlap is specified to be always complete. It is called "valid" since every value given in the result is done without data extrapolation.
For instance, if your array X has a length of 2 and your array Y has a length of 4, the convolution of X with Y in mode "valid" will give you an array of length 3.
First step, for X = [4 3] and Y = [1 1 5 5]:
[3 4]
[1 1 5 5]
= 3 * 1 + 4 * 1 = 7
Second Step:
  [3 4]
[1 1 5 5]
= 3 * 1 + 4 * 5 = 23
Third step:
    [3 4]
[1 1 5 5]
= 3 * 5 + 4 * 5 = 35
The result of the convolution for mode "valid" would then be [7 23 35].
If the overlap is specified as one single data point (as is the case in mode "full"), the result would be an array of length 5. The first step being:
[3 4]
  [1 1 5 5]
= 3 * undefined (extrapolated as 0) + 4 * 1 = 4
And so on. More extrapolation modes exist.
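These numbers can be checked directly with NumPy (a small sketch):
import numpy as np

x = np.array([4, 3])
y = np.array([1, 1, 5, 5])

print(np.convolve(y, x, 'valid'))  # [ 7 23 35]
print(np.convolve(y, x, 'full'))   # [ 4  7 23 35 15]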
|
How to scrape a website that requires login first with Python
|
First of all, I think it's worth saying that I know there are a bunch of similar questions but NONE of them works for me...
I'm a newbie at Python, HTML and web scraping. I'm trying to scrape user information from a website which requires logging in first. In my tests I scrape my own email settings from GitHub as an example. The main page is 'https://github.com/login' and the target page is 'https://github.com/settings/emails'.
Here are a list of methods I've tried
##################################### Method 1
import mechanize
import cookielib
from BeautifulSoup import BeautifulSoup
import html2text
br = mechanize.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.addheaders = [('User-agent', 'Chrome')]
# The site we will navigate into, handling its session
br.open('https://github.com/login')
for f in br.forms():
print f
br.select_form(nr=0)
# User credentials
br.form['login'] = 'myusername'
br.form['password'] = 'mypwd'
# Login
br.submit()
br.open('github.com/settings/emails').read()
################ Method 2
import urllib, urllib2, cookielib
username = 'myusername'
password = 'mypwd'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'j_password' : password})
opener.open('https://github.com/login', login_data)
resp = opener.open('https://github.com/settings/emails')
print resp.read()
############# Method 3
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://myusername:mypwd@github.com/settings/emails').read()
########## Method 4
import mechanize
import cookielib
br = mechanize.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
#br.set_debug_http(True)
#br.set_debug_redirects(True)
#br.set_debug_responses(True)
br.addheaders = [('User-agent', 'Chrome')]
br.add_password('https://github.com/settings/emails', 'myusername', 'mypwd')
br.open('https://github.com/settings/emails')
print br.response().read()
############ Methods 5
from requests import session
payload = {
'action': 'login',
'username': 'myusername',
'password': 'mypwd'
}
with session() as c:
c.post('https://github.com/login', data=payload)
request = c.get('https://github.com/settings/emails')
print request.headers
print request.text
########### Method 6
import requests
from requests.packages.urllib3 import add_stderr_logger
import sys
from bs4 import BeautifulSoup as bs
add_stderr_logger()
s = requests.Session()
s.headers['User-Agent'] = 'Chrome'
username = 'myusername'
password = 'mypwd'
url = 'https://github.com/login'
# after examining the HTML of the website you're trying to log into
# set name_form to the name of the form element that contains the name and
# set password_form to the name of the form element that will contain the password
login = {'login': username, 'password': password}
login_response = s.post(url, data=login)
for r in login_response.history:
if r.status_code == 401: # 401 means authentication failed
print 'error!'
sys.exit(1) # abort
pdf_response = s.get('https://github.com/settings/emails') # Your cookies and headers are automatically included
soup = bs(pdf_response.content)
Also I've read some discussions about differences between HTTP Authentication and cookies. Still none of them worked.
Please help and any help would be appreciated. Thank you very much.
|
This works for me:
##################################### Method 1
import mechanize
import cookielib
from BeautifulSoup import BeautifulSoup
import html2text
# Browser
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.addheaders = [('User-agent', 'Chrome')]
# The site we will navigate into, handling its session
br.open('https://github.com/login')
# View available forms
for f in br.forms():
print f
# Select the second (index one) form (the first form is a search query box)
br.select_form(nr=1)
# User credentials
br.form['login'] = 'mylogin'
br.form['password'] = 'mypass'
# Login
br.submit()
print(br.open('https://github.com/settings/emails').read())
You were not far off at all!
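If you prefer requests over mechanize, the same approach works, but you also have to echo back the hidden CSRF token carried by the login form. The field names and the post URL below are assumptions read from GitHub's form at the time of writing and may have changed, so treat this as a sketch, not the definitive recipe:
import requests
from bs4 import BeautifulSoup

s = requests.Session()
s.headers['User-Agent'] = 'Chrome'
login_page = s.get('https://github.com/login')
soup = BeautifulSoup(login_page.text)
# Hidden CSRF token that must be sent back along with the credentials
token = soup.find('input', attrs={'name': 'authenticity_token'})['value']
payload = {'login': 'myusername', 'password': 'mypwd',
           'authenticity_token': token}
s.post('https://github.com/session', data=payload)  # the form's action URL (assumed)
print(s.get('https://github.com/settings/emails').text)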
|
Python multiprocessing's Pool process limit
|
In using the Pool object from the multiprocessing module, is the number of processes limited by the number of CPU cores? E.g. if I have 4 cores, even if I create a Pool with 8 processes, only 4 will be running at one time?
|
You can ask for as many processes as you like. Any limit that may exist will be imposed by your operating system, not by multiprocessing. For example,
p = multiprocessing.Pool(1000000)
is likely to suffer an ugly death on any machine. I'm trying it on my box as I type this, and the OS is grinding my disk to dust swapping out RAM madly - finally killed it after it had created about 3000 processes ;-)
As to how many will run "at one time", Python has no say in that. It depends on:
How many your hardware is capable of running simultaneously; and,
How your operating system decides to give hardware resources to all the processes on your machine currently running.
For CPU-bound tasks, it doesn't make sense to create more Pool processes than you have cores to run them on. If you're trying to use your machine for other things too, then you should create fewer processes than cores.
For I/O-bound tasks, it may make sense to create quite a few more Pool processes than cores, since the processes will probably spend most of their time blocked (waiting for I/O to complete).
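A minimal illustration of sizing the pool to the hardware (multiprocessing reports the core count for you; Pool() with no argument already defaults to it):
import multiprocessing

def work(n):
    return n * n

if __name__ == '__main__':
    cores = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=cores)  # same as Pool() with no arguments
    print(pool.map(work, range(10)))
    pool.close()
    pool.join()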
|
How to pip or easy_install tkinter
|
My Idle is throwing errors that and says tkinter can't be imported.
Is there a simple way to install tkinter via pip or easy_install?
There seem to be a lot of package names flying around for this...
This and other assorted variations with tkinter-pypy aren't working.
pip install python-tk
I'm on Windows with Python 2.7 and can't apt-get.
Thanks.
|
The Tkinter library is built in with every Python installation. And since you are on Windows, I believe you installed Python through the binaries on their website?
If so, then most likely you are typing the import wrong. It should be:
import Tkinter as tk
Note the capital T at the beginning of Tkinter.
For Python 3,
import tkinter as tk
|
Python doctests: test for None
|
Using Python 2.7 I'm trying to test that the result of a particular function call is None
I would expect these tests to pass (excuse the rather silly example)
def six_or_none(val):
"""
>>> six_or_none(6)
6
>>> six_or_none(4)
None
"""
if val == 6:
return 6
return None
However they yield the following result
Failed example:
six_or_none(4)
Expected:
None
Got nothing
What's the correct way to test for None in doctests?
|
The interactive Python interpreter doesn't echo None return values, and doctests mimic that behaviour.
Test for is None instead:
>>> six_or_none(4) is None
True
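Another option (not from the original answer) is to print the result, since print makes the None visible as doctest output:
>>> print(six_or_none(4))
None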
|
np.mean() vs np.average() in Python NumPy?
|
Title says it all.
I notice that
In [30]: np.mean([1, 2, 3])
Out[30]: 2.0
In [31]: np.average([1, 2, 3])
Out[31]: 2.0
However, there should be some differences, since after all they are two different functions.
What are the differences between them?
|
np.average takes an optional weights parameter. If it is not supplied, they are equivalent. Take a look at the source code.
np.mean:
try:
mean = a.mean
except AttributeError:
return _wrapit(a, 'mean', axis, dtype, out)
return mean(axis, dtype, out)
np.average:
...
if weights is None :
avg = a.mean(axis)
scl = avg.dtype.type(a.size/avg.size)
else:
#code that does weighted mean here
if returned: #returned is another optional argument
scl = np.multiply(avg, 0) + scl
return avg, scl
else:
return avg
...
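A quick illustration of the difference the weights parameter makes:
import numpy as np

print(np.mean([1, 2, 3]))                        # 2.0
print(np.average([1, 2, 3]))                     # 2.0, identical without weights
print(np.average([1, 2, 3], weights=[3, 1, 1]))  # (3*1 + 1*2 + 1*3) / 5 = 1.6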
|
list comprehension filtering - "the set() trap"
|
A reasonably common operation is to filter one list based on another list. People quickly find that this:
[x for x in list_1 if x in list_2]
is slow for large inputs - it's O(n*m). Yuck. How do we speed this up? Use a set to make filtering lookups O(1):
s = set(list_2)
[x for x in list_1 if x in s]
This gives nice overall O(n) behavior. I however often see even veteran coders fall into The Trap™:
[x for x in list_1 if x in set(list_2)]
Ack! This is again O(n*m) since python builds set(list_2) every time, not just once.
I thought that was the end of the story - python can't optimize it away to only build the set once. Just be aware of the pitfall. Gotta live with it. Hmm.
#python 3.3.2+
list_2 = list(range(20)) #small for demonstration purposes
s = set(list_2)
list_1 = list(range(100000))
def f():
return [x for x in list_1 if x in s]
def g():
return [x for x in list_1 if x in set(list_2)]
def h():
return [x for x in list_1 if x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19}]
%timeit f()
100 loops, best of 3: 7.31 ms per loop
%timeit g()
10 loops, best of 3: 77.4 ms per loop
%timeit h()
100 loops, best of 3: 6.66 ms per loop
Huh, python (3.3) can optimize away a set literal. It's even faster than f() in this case, presumably because it gets to replace a LOAD_GLOBAL with a LOAD_FAST.
#python 2.7.5+
%timeit h()
10 loops, best of 3: 72.5 ms per loop
Python 2 notably doesn't do this optimization. I've tried investigating further what python3 is doing but unfortunately dis.dis cannot probe the innards of comprehension expressions. Basically everything interesting turns into MAKE_FUNCTION.
So now I'm wondering - why can python 3.x optimize away the set literal to only build once, but not set(list_2)?
|
In order to optimize set(list_2), the interpreter needs to prove that list_2 (and all of its elements) does not change between iterations. This is a hard problem in the general case, and it would not surprise me if the interpreter does not even attempt to tackle it.
On the other hand a set literal cannot change its value between iterations, so the optimization is known to be safe.
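As an aside on the dis.dis limitation mentioned in the question: the comprehension's body is compiled into a nested code object that ends up in co_consts, so you can disassemble it directly. This is a sketch assuming Python 3 and the h defined in the question above; the position of the code object inside co_consts can vary, hence the search:
import dis

inner = [c for c in h.__code__.co_consts if hasattr(c, 'co_code')][0]
dis.dis(inner)  # the set literal shows up as a single frozenset constant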
|
How to remove gaps between subplots in matplotlib?
|
The code below produces gaps between the subplots. How do I remove the gaps between the subplots and make the image a tight grid?
import matplotlib.pyplot as plt
for i in range(16):
i = i + 1
ax1 = plt.subplot(4, 4, i)
plt.axis('on')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_aspect('equal')
plt.subplots_adjust(wspace=None, hspace=None)
plt.show()
|
You can use gridspec to control the spacing between axes. There's more information here.
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
plt.figure(figsize = (4,4))
gs1 = gridspec.GridSpec(4, 4)
gs1.update(wspace=0.025, hspace=0.05) # set the spacing between axes.
for i in range(16):
# i = i + 1 # grid spec indexes from 0
ax1 = plt.subplot(gs1[i])
plt.axis('on')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_aspect('equal')
plt.show()
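If you'd rather stay with plain subplot, note that the question's subplots_adjust(wspace=None, hspace=None) just means "use the defaults"; passing zeros is what actually removes the gaps. A small sketch, not part of the original answer:
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(4, 4))
for i in range(16):
    ax = plt.subplot(4, 4, i + 1)
    ax.set_xticklabels([])
    ax.set_yticklabels([])
plt.subplots_adjust(wspace=0, hspace=0)  # zero spacing, not None
plt.show()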
|
Installing PIL with pip
|
I am trying to install PIL (the Python Imaging Library) using the command:
sudo pip install pil
but I get the following message:
Downloading/unpacking PIL
You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files.
Downloading PIL-1.1.7.tar.gz (506kB): 506kB downloaded
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Installing collected packages: PIL
Running setup.py install for PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
--- using frameworks at /System/Library/Frameworks
building '_imaging' extension
clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch x86_64 -pipe -IlibImaging -I/System/Library/Frameworks/Python.framework/Versions/2.7/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _imaging.c -o build/temp.macosx-10.8-intel-2.7/_imaging.o
unable to execute clang: No such file or directory
error: command 'clang' failed with exit status 1
Complete output from command /usr/bin/python -c "import setuptools;__file__='/private/tmp/pip_build_root/PIL/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-AYrxVD-record/install-record.txt --single-version-externally-managed:
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
running install
running build
.
.
.
.
copying PIL/XVThumbImagePlugin.py -> build/lib.macosx-10.8-intel-2.7
running build_ext
--- using frameworks at /System/Library/Frameworks
building '_imaging' extension
creating build/temp.macosx-10.8-intel-2.7
creating build/temp.macosx-10.8-intel-2.7/libImaging
clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch x86_64 -pipe -IlibImaging -I/System/Library/Frameworks/Python.framework/Versions/2.7/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _imaging.c -o build/temp.macosx-10.8-intel-2.7/_imaging.o
unable to execute clang: No such file or directory
error: command 'clang' failed with exit status 1
----------------------------------------
Cleaning up...
Could you please help me to install PIL??
|
Install Xcode and Xcode Command Line Tools as mentioned.
Use Pillow instead, as PIL is basically dead. Pillow is a maintained fork of PIL.
https://pypi.python.org/pypi/Pillow/2.2.1
pip install Pillow
If you have both Pythons installed and want to install this for Python3:
$ python3 -m pip install Pillow
|
sqlalchemy, select using reverse-inclusive (not in) list of child column values
|
I have a typical Post / Tags (many tags associated with one post) relationship in flask-sqlalchemy, and I want to select posts which aren't tagged with any tag in a list I provide. First, the models I set up:
class Post(db.Model):
id = db.Column(db.Integer, primary_key=True)
tags = db.relationship('Tag', lazy='dynamic')
class Tag(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.Text(50))
post_id = db.Column(db.Integer, db.ForeignKey('post.id'))
Something like
db.session.query(Post).filter(Post.tags.name.notin_(['dont','want','these']))
fails with
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with Post.tags has an attribute 'name'
which I assume is because tags is a relationship and not a column. I had this working on another project when I was writing the actual SQL manually. This was the SQL that worked:
SELECT * FROM $posts WHERE id NOT IN (SELECT post_id FROM $tags WHERE name IN ('dont','want','these'))
How would I achieve this using the sqlalchemy API?
|
Pretty straightforward using negated any:
query = session.query(Post).filter(~Post.tags.any(Tag.name.in_(['dont', 'want', 'these'])))
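If you want something that mirrors the hand-written NOT IN subquery from the question, a sketch along these lines should also work (assuming the models above):
subq = db.session.query(Tag.post_id).filter(Tag.name.in_(['dont', 'want', 'these']))
query = db.session.query(Post).filter(Post.id.notin_(subq))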
|
Pandas dataframe get first row of each group
|
I have a pandas DataFrame like following.
df = pd.DataFrame({'id' : [1,1,1,2,2,3,3,3,3,4,4,5,6,6,6,7,7],
'value' : ["first","second","second","first",
"second","first","third","fourth",
"fifth","second","fifth","first",
"first","second","third","fourth","fifth"]})
I want to group this by ["id","value"] and get the first row of each group.
id value
0 1 first
1 1 second
2 1 second
3 2 first
4 2 second
5 3 first
6 3 third
7 3 fourth
8 3 fifth
9 4 second
10 4 fifth
11 5 first
12 6 first
13 6 second
14 6 third
15 7 fourth
16 7 fifth
Expected outcome
id value
1 first
2 first
3 first
4 second
5 first
6 first
7 fourth
I tried following which only gives the first row of the DataFrame. Any help regarding this is appreciated.
In [25]: for index, row in df.iterrows():
....: df2 = pd.DataFrame(df.groupby(['id','value']).reset_index().ix[0])
|
>>> df.groupby('id').first()
value
id
1 first
2 first
3 first
4 second
5 first
6 first
7 fourth
If you need id as column:
>>> df.groupby('id').first().reset_index()
id value
0 1 first
1 2 first
2 3 first
3 4 second
4 5 first
5 6 first
6 7 fourth
To get n first records, you can use head():
>>> df.groupby('id').head(2).reset_index(drop=True)
id value
0 1 first
1 1 second
2 2 first
3 2 second
4 3 first
5 3 third
6 4 second
7 4 fifth
8 5 first
9 6 first
10 6 second
11 7 fourth
12 7 fifth
|
Pandas good approach to get top n records within each group
|
Suppose I have pandas DataFrame like this:
>>> df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4],'value':[1,2,3,1,2,3,4,1,1]})
>>> df
id value
0 1 1
1 1 2
2 1 3
3 2 1
4 2 2
5 2 3
6 2 4
7 3 1
8 4 1
I want to get new DataFrame with top 2 records for each id, like this:
id value
0 1 1
1 1 2
3 2 1
4 2 2
7 3 1
8 4 1
I can do it with numbering records within group after group by:
>>> dfN = df.groupby('id').apply(lambda x:x['value'].reset_index()).reset_index()
>>> dfN
id level_1 index value
0 1 0 0 1
1 1 1 1 2
2 1 2 2 3
3 2 0 3 1
4 2 1 4 2
5 2 2 5 3
6 2 3 6 4
7 3 0 7 1
8 4 0 8 1
>>> dfN[dfN['level_1'] <= 1][['id', 'value']]
id value
0 1 1
1 1 2
3 2 1
4 2 2
7 3 1
8 4 1
But is there more effective/elegant approach to do this? And also is there more elegant approach to number records within each group (like SQL window function row_number()).
Thanks in advance.
|
Did you try df.groupby('id').head(2)
Output generated:
>>> df.groupby('id').head(2)
id value
id
1 0 1 1
1 1 2
2 3 2 1
4 2 2
3 7 3 1
4 8 4 1
(Keep in mind that you might need to order/sort before, depending on your data)
EDIT: As mentioned by the questioner, use df.groupby('id').head(2).reset_index(drop=True) to remove the MultiIndex and flatten the results.
>>> df.groupby('id').head(2).reset_index(drop=True)
id value
0 1 1
1 1 2
2 2 1
3 2 2
4 3 1
5 4 1
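For the row_number()-style numbering within each group that the question also asks about, groupby.cumcount() is the direct analogue (available in pandas 0.13+, so this is a sketch for newer versions):
df['rn'] = df.groupby('id').cumcount()
top2 = df[df['rn'] < 2].drop('rn', axis=1)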
|
No module named 'x' when reloading with os.execl()
|
I have a python script that is using the following to restart:
python = sys.executable
os.execl(python, python, * sys.argv)
Most the time this works fine, but occasionally the restart fails with a no module named error. Examples:
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
File "/usr/lib/python2.7/posixpath.py", line 17, in <module>
import warnings
File "/usr/lib/python2.7/warnings.py", line 6, in <module>
import linecache
ImportError: No module named linecache
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
File "/usr/lib/python2.7/posixpath.py", line 15, in <module>
import stat
ImportError: No module named stat
Edit: I attempted gc.collect() as suggested by andr0x and this did not work. I got the same error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
ImportError: No module named posixpath
Edit 2: I tried sys.stdout.flush() and I'm still getting the same error. I've noticed I am only ever getting between 1-3 successful restarts before an error occurs.
|
I believe you are hitting the following bug:
http://bugs.python.org/issue16981
As it is unlikely that these modules are disappearing, there must be another error that is actually at fault. The bug report lists 'too many open files' as prone to causing this issue; however, I am unsure whether any other errors will also trigger it.
I would make sure you are closing any file handles before hitting the restart code. You can also actually force the garbage collector to run manually with:
import gc
gc.collect()
http://docs.python.org/2/library/gc.html
You can try using that before hitting the restart code as well.
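Putting the pieces together, a restart helper that flushes and collects before exec'ing might look like this (a sketch based on the snippets above, not a guaranteed fix for the underlying bug):
import gc
import os
import sys

def restart():
    sys.stdout.flush()
    sys.stderr.flush()
    gc.collect()  # force any unreferenced file objects to be closed
    python = sys.executable
    os.execl(python, python, *sys.argv)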
|
Installing pyodbc fails on OSX 10.9 (Mavericks)
|
When running pip install pyodbc, I get
In file included from .../build/pyodbc/src/buffer.cpp:12:
.../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found
#include <sql.h>
^
1 error generated.
error: command 'cc' failed with exit status 1
It seems that Mavericks has no sql.h under /usr/include
Did anyone manage to install pyodbc? Is there a known workaround?
|
You can use Homebrew to install unixodbc, then pyodbc via pip in the usual fashion.
brew install unixodbc && pip install pyodbc
This works for me on Mavericks.
|
Is wordnet path similarity commutative?
|
I am using the wordnet API from nltk.
When I compare one synset with another I got None but when I compare them the other way around I get a float value.
Shouldn't they give the same value?
Is there an explanation or is this a bug of wordnet?
Example:
wn.synset('car.n.01').path_similarity(wn.synset('automobile.v.01')) # None
wn.synset('automobile.v.01').path_similarity(wn.synset('car.n.01')) # 0.06666666666666667
|
Technically without the dummy root, both car and automobile synsets would have no link to each other:
>>> from nltk.corpus import wordnet as wn
>>> x = wn.synset('car.n.01')
>>> y = wn.synset('automobile.v.01')
>>> print x.shortest_path_distance(y)
None
>>> print y.shortest_path_distance(x)
None
Now, let's look at the dummy root issue closely. Firstly, there is a neat function in NLTK that says whether a synset needs a dummy root:
>>> x._needs_root()
False
>>> y._needs_root()
True
Next, when you look at the path_similarity code (http://nltk.googlecode.com/svn-/trunk/doc/api/nltk.corpus.reader.wordnet-pysrc.html#Synset.path_similarity), you can see:
def path_similarity(self, other, verbose=False, simulate_root=True):
distance = self.shortest_path_distance(other, \
simulate_root=simulate_root and self._needs_root())
if distance is None or distance < 0:
return None
return 1.0 / (distance + 1)
So for automobile synset, this parameter simulate_root=simulate_root and self._needs_root() will always be True when you try y.path_similarity(x) and when you try x.path_similarity(y) it will always be False since x._needs_root() is False:
>>> True and y._needs_root()
True
>>> True and x._needs_root()
False
Now when path_similarity() passes down to shortest_path_distance() (https://nltk.googlecode.com/svn/trunk/doc/api/nltk.corpus.reader.wordnet-pysrc.html#Synset.shortest_path_distance) and then to hypernym_distances(), it will try to fetch a list of hypernyms to check their distances; without simulate_root=True, the automobile synset will not connect to the car synset and vice versa:
>>> y.hypernym_distances(simulate_root=True)
set([(Synset('automobile.v.01'), 0), (Synset('*ROOT*'), 2), (Synset('travel.v.01'), 1)])
>>> y.hypernym_distances()
set([(Synset('automobile.v.01'), 0), (Synset('travel.v.01'), 1)])
>>> x.hypernym_distances()
set([(Synset('object.n.01'), 8), (Synset('self-propelled_vehicle.n.01'), 2), (Synset('whole.n.02'), 8), (Synset('artifact.n.01'), 7), (Synset('physical_entity.n.01'), 10), (Synset('entity.n.01'), 11), (Synset('object.n.01'), 9), (Synset('instrumentality.n.03'), 5), (Synset('motor_vehicle.n.01'), 1), (Synset('vehicle.n.01'), 4), (Synset('entity.n.01'), 10), (Synset('physical_entity.n.01'), 9), (Synset('whole.n.02'), 7), (Synset('conveyance.n.03'), 5), (Synset('wheeled_vehicle.n.01'), 3), (Synset('artifact.n.01'), 6), (Synset('car.n.01'), 0), (Synset('container.n.01'), 4), (Synset('instrumentality.n.03'), 6)])
So theoretically, the right path_similarity is 0 / None, but because of the simulate_root=simulate_root and self._needs_root() parameter, nltk.corpus.wordnet.path_similarity() in NLTK's API is not commutative.
BUT the code is also not wrong/bugged: a comparison that goes through the dummy root always yields a constant, far-away value, since the position of the dummy *ROOT* never changes. So the best practice is to calculate path_similarity like this:
>>> from nltk.corpus import wordnet as wn
>>> x = wn.synset('car.n.01')
>>> y = wn.synset('automobile.v.01')
# When you NEVER want a non-zero value, since going to
# the *ROOT* will always get you some sort of distance
# from synset x to synset y
>>> max(wn.path_similarity(x,y), wn.path_similarity(y,x))
# when you can allow None in synset similarity comparison
>>> min(wn.path_similarity(x,y), wn.path_similarity(y,x))
|
Python: import module from another directory at the same level in project hierarchy
|
I've seen all sorts of examples and other similar questions, but I can't seem to find an example that exactly matches my scenario. I feel like a total goon asking this because there are so many similar questions, but I just can't seem to get this working "correctly." Here is my project:
user_management (package)
|
|------- __init__.py
|
|------- Modules/
| |
| |----- __init__.py
| |----- LDAPManager.py
| |----- PasswordManager.py
|
|------- Scripts/
| |
| |----- __init__.py
| |----- CreateUser.py
| |----- FindUser.py
If I move "CreateUser.py" to the main user_management directory, I can easily use: "import Modules.LDAPManager" to import LDAPManager.py --- this works. What I can't do (which I want to do), is keep CreateUser.py in the Scripts subfolder, and import LDAPManager.py. I was hoping to accomplish this by using "import user_management.Modules.LDAPManager.py". This doesn't work. In short, I can get Python files to easily look deeper in the hierarchy, but I can't get a Python script to reference up one directory and down into another.
Note that I am able to solve my problem using:
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
import Modules.LDAPManager as LDAPManager
I've heard that this is bad practice and discouraged.
The files in Scripts are intended to be executed directly (is the __init__.py in Scripts even necessary?). I've read that in this case, I should be executing CreateUser.py with the -m flag. I've tried some variations on this and just can't seem to get CreateUser.py to recognize LDAPManager.py.
|
If I move CreateUser.py to the main user_management directory, I can
easily use: import Modules.LDAPManager to import LDAPManager.py
--- this works.
Please, don't. In this way the LDAPManager module used by CreateUser will not be the same as the one imported via other imports. This can create problems when you have some global state in the module or during pickling/unpickling. Avoid imports that work only because the module happens to be in the same directory.
When you have a package structure you should either:
Use relative imports, i.e if the CreateUser.py is in Scripts/:
from ..Modules import LDAPManager
Note that this was (note the past tense) discouraged by PEP 8 only because old versions of python didn't support them very well, but this problem was solved years ago. The current version of PEP 8 does suggest them as an acceptable alternative to absolute imports. I actually like them inside packages.
Use absolute imports using the whole package name (CreateUser.py in Scripts/):
from user_management.Modules import LDAPManager
In order for the second one to work the package user_management should be installed inside the PYTHONPATH. During development you can configure the IDE so that this happens, without having to manually add calls to sys.path.append anywhere.
Also I find it odd that Scripts/ is a subpackage. Because in a real installation the user_management module would be installed under the site-packages found in the lib/ directory (whichever directory is used to install libraries in your OS), while the scripts should be installed under a bin/ directory (whichever contains executables for your OS).
In fact I believe Script/ shouldn't even be under user_management. It should be at the same level of user_management.
In this way you do not have to use -m, but you simply have to make sure the package can be found (this again is a matter of configuring the IDE, installing the package correctly or using PYTHONPATH=. python Scripts/CreateUser.py to launch the scripts with the correct path).
In summary, the hierarchy I would use is:
user_management (package)
|
|------- __init__.py
|
|------- Modules/
| |
| |----- __init__.py
| |----- LDAPManager.py
| |----- PasswordManager.py
|
Scripts/ (*not* a package)
|
|----- CreateUser.py
|----- FindUser.py
Then the code of CreateUser.py and FindUser.py should use absolute imports to import the modules:
from user_management.Modules import LDAPManager
During installation you make sure that user_management ends up somewhere in the PYTHONPATH, and the scripts inside the directory for executables so that they are able to find the modules. During development you either rely on IDE configuration, or you launch CreateUser.py adding the Scripts/ parent directory to the PYTHONPATH (I mean the directory that contains both user_management and Scripts):
PYTHONPATH=/the/parent/directory python Scripts/CreateUser.py
Or you can modify the PYTHONPATH globally so that you don't have to specify this each time. On unix OSes (linux, Mac OS X etc.) you can modify one of the shell scripts to define the PYTHONPATH external variable, on Windows you have to change the environmental variables settings.
Addendum: I believe that, if you are using Python 2, it's better to make sure to avoid implicit relative imports by putting:
from __future__ import absolute_import
at the top of your modules. In this way import X always means to import the toplevel module X and will never try to import the X.py file that's in the same directory (if that directory isn't in the PYTHONPATH). In this way the only way to do a relative import is to use the explicit syntax (the from . import X), which is better (explicit is better than implicit).
This will make sure you never happen to use the "bogus" implicit relative imports, since these would raise an ImportError clearly signalling that something is wrong. Otherwise you could use a module that's not what you think it is.
|
Replace non-ASCII characters with a single space
|
I need to replace all non-ASCII (\x00-\x7F) characters with a space. I'm surprised that this is not dead-easy in Python, unless I'm missing something. The following function simply removes all non-ASCII characters:
def remove_non_ascii_1(text):
return ''.join(i for i in text if ord(i)<128)
And this one replaces non-ASCII characters with as many spaces as there are bytes in the character (i.e. the € character is replaced with 3 spaces):
def remove_non_ascii_2(text):
return re.sub(r'[^\x00-\x7F]',' ', text)
How can I replace all non-ASCII characters with a single space?
Of the myriad of similar SO questions, none address character replacement as opposed to stripping, and additionally address all non-ascii characters not a specific character.
|
Your ''.join() expression is filtering, removing anything non-ASCII; you could use a conditional expression instead:
return ''.join([i if ord(i) < 128 else ' ' for i in text])
This handles characters one by one and would still use one space per character replaced.
Your regular expression should just replace consecutive non-ASCII characters with a space:
re.sub(r'[^\x00-\x7F]+',' ', text)
Note the + there.
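A quick demonstration of the difference between the two approaches (the per-character version keeps one space per replaced character, while the regex with + collapses runs into a single space):
import re

text = u'abc\u20ac\u20acdef'
print(''.join([c if ord(c) < 128 else ' ' for c in text]))  # 'abc  def' (two spaces)
print(re.sub(r'[^\x00-\x7F]+', ' ', text))                  # 'abc def'  (one space)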
|
Initializing a dictionary in python with a key value and no corresponding values
|
I was wondering if there was a way to initialize a dictionary in python with keys but no corresponding values until I set them. Such as:
Definition = {'apple': , 'ball': }
and then later i can set them:
Definition[key] = something
I only want to initialize keys but I don't know the corresponding values until I have to set them later. Basically I know what keys I want to add the values as they are found. Thanks.
|
Use the fromkeys function to initialize a dictionary with any default value. In your case, you will initialize with None since you don't have a default value in mind.
empty_dict = dict.fromkeys(['apple','ball'])
this will initialize empty_dict as:
empty_dict = {'apple': None, 'ball': None}
As an alternative, if you wanted to initialize the dictionary with some default value other than None, you can do:
default_value = 'xyz'
nonempty_dict = dict.fromkeys(['apple','ball'],default_value)
|
How to activate an Anaconda environment
|
I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong?
|
If this happens you would need to set the PATH for your environment (so that it gets the right Python from the environment and Scripts\ on Windows).
Imagine you have created an environment called py33 by using:
conda create -n py33 python=3.3 anaconda
Here the folders are created by default in Anaconda\envs, so you need to set the PATH as:
set PATH=C:\Anaconda\envs\py33\Scripts;C:\Anaconda\envs\py33;%PATH%
Now it should work in the command window:
activate py33
This is the Windows equivalent of the code that normally appears in the tutorials for Mac and Linux:
$ source activate py33
More info:
https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/8T8i11gO39U
Does anaconda create a separate PYTHONPATH variable for each new environment?
|
Unbalanced classification using RandomForestClassifier in sklearn
|
I have a dataset where the classes are unbalanced. The classes are either '1' or '0' where the ratio of class '1':'0' is 5:1. How do you calculate the prediction error for each class and the rebalance weights accordingly in sklearn with Random Forest, kind of like in the following link: http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#balance
|
You can pass the sample_weight argument to the Random Forest fit method:
sample_weight : array-like, shape = [n_samples] or None
Sample weights. If None, then samples are equally weighted. Splits
that would create child nodes with net zero or negative weight are
ignored while searching for a split in each node. In the case of
classification, splits are also ignored if they would result in any
single class carrying a negative weight in either child node.
In older versions there was a preprocessing.balance_weights method to generate balance weights for given samples, such that classes become uniformly distributed. It is still there, in the internal but still usable preprocessing._weights module, but it is deprecated and will be removed in future versions. I don't know the exact reasons for this.
Update
Some clarification, as you seem to be confused. sample_weight usage is straightforward, once you remember that its purpose is to balance target classes in the training dataset. That is, if you have X as observations and y as classes (labels), then len(X) == len(y) == len(sample_weight), and each element of the sample_weight 1-d array represents the weight for the corresponding (observation, label) pair. For your case, if class 1 is represented 5 times as often as class 0, and you want to balance the class distributions, you could use a simple
sample_weight = np.array([5 if i == 0 else 1 for i in y])
assigning weight of 5 to all 0 instances and weight of 1 to all 1 instances. See link above for a bit more crafty balance_weights weights evaluation function.
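Passing these weights to the forest is then just the following (a minimal sketch, assuming X and y hold your training data):
import numpy as np
from sklearn.ensemble import RandomForestClassifier

sample_weight = np.array([5 if i == 0 else 1 for i in y])
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X, y, sample_weight=sample_weight)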
|
'module' object has no attribute 'loads' while parsing JSON using python
|
I am trying to parse JSON from Python. I recently started working with Python so I followed some stackoverflow tutorial how to parse JSON using Python and I came up with below code -
#!/usr/bin/python
import json
j = json.loads('{"script":"#!/bin/bash echo Hello World"}')
print j['script']
But whenever I run the above code, I always get this error -
Traceback (most recent call last):
File "json.py", line 2, in <module>
import json
File "/cygdrive/c/ZookPython/json.py", line 4, in <module>
j = json.loads('{"script":"#!/bin/bash echo Hello World"}')
AttributeError: 'module' object has no attribute 'loads'
Any thoughts on what I am doing wrong here? I am running Cygwin on Windows and running my Python program from there. I am using Python 2.7.3.
And is there any better and efficient way of parsing the JSON as well?
Update:-
The code below doesn't work if I remove the single quotes, since I am getting the JSON from some other method:
#!/usr/bin/python
import json
jsonStr = {"script":"#!/bin/bash echo Hello World"}
j = json.loads(jsonStr)
shell_script = j['script']
print shell_script
So how do I make sure it is a string (with the quotes) before deserializing?
This is the error I get -
Traceback (most recent call last):
File "jsontest.py", line 7, in <module>
j = json.loads(jsonStr)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
|
File "json.py", line 2, in <module>
import json
This line is a giveaway: you have named your script "json", but you are trying to import the builtin module called "json", since your script is in the current directory, it comes first in sys.path, and so that's the module that gets imported.
You need to rename your script to something else, preferably not the name of a standard Python module.
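As for the updated snippet: jsonStr there is already a dict, not a string, which is why json.loads() raises "expected string or buffer" -- there is nothing to deserialize. If you really need to round-trip it, a small illustration:
import json

payload = {"script": "#!/bin/bash echo Hello World"}  # already a dict
as_text = json.dumps(payload)   # dict   -> JSON string
back = json.loads(as_text)      # string -> dict
print(back['script'])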
|
how to install pip for python3 on mac os x
|
OSX (Mavericks) has python2.7 stock installed. But I do all my own personal python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.
|
UPDATE: This is no longer necessary with Python3.4. It installs pip3 as part of the stock install.
I ended up posting this same question on the python mailing list, and got the following answer:
# download and install setuptools
curl -O https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
python3 ez_setup.py
# download and install pip
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
Which solved my question perfectly. After adding the following for my own:
cd /usr/local/bin
ln -s ../../../Library/Frameworks/Python.framework/Versions/3.3/bin/pip pip
So that I could run pip directly, I was able to:
# use pip to install
pip install pyserial
or:
# Don't want it?
pip uninstall pyserial
|
Improve pandas (PyTables?) HDF5 table write performance
|
I've been using pandas for research now for about two months to great effect. With large numbers of medium-sized trace event datasets, pandas + PyTables (the HDF5 interface) does a tremendous job of allowing me to process heterogeneous data using all the Python tools I know and love.
Generally speaking, I use the Fixed (formerly "Storer") format in PyTables, as my workflow is write-once, read-many, and many of my datasets are sized such that I can load 50-100 of them into memory at a time with no serious disadvantages. (NB: I do much of my work on Opteron server-class machines with 128GB+ system memory.)
However, for large datasets (500MB and greater), I would like to be able to use the more scalable random-access and query abilities of the PyTables "Tables" format, so that I can perform my queries out-of-memory and then load the much smaller result set into memory for processing. The big hurdle here, however, is the write performance. Yes, as I said, my workflow is write-once, read-many, but the relative times are still unacceptable.
As an example, I recently ran a large Cholesky factorization that took 3 minutes, 8 seconds (188 seconds) on my 48 core machine. This generated a trace file of ~2.2 GB - the trace is generated in parallel with the program, so there is no additional "trace creation time."
The initial conversion of my binary trace file into the pandas/PyTables format takes a decent chunk of time, but largely because the binary format is deliberately out-of-order in order to reduce the performance impact of the trace generator itself. This is also irrelevant to the performance loss when moving from the Storer format to the Table format.
My tests were initially run with pandas 0.12, numpy 1.7.1, PyTables 2.4.0, and numexpr 0.20.1. My 48 core machine runs at 2.8GHz per core, and I am writing to an ext3 filesystem which is probably (but not certainly) on a SSD.
I can write the entire dataset to a Storer format HDF5 file (resulting filesize: 3.3GB) in 7.1 seconds. The same dataset, written to the Table format (resulting file size is also 3.3GB), takes 178.7 seconds to write.
The code is as follows:
with Timer() as t:
store = pd.HDFStore('test_storer.h5', 'w')
store.put('events', events_dataset, table=False, append=False)
print('Fixed format write took ' + str(t.interval))
with Timer() as t:
store = pd.HDFStore('test_table.h5', 'w')
store.put('events', events_dataset, table=True, append=False)
print('Table format write took ' + str(t.interval))
and the output is simply
Fixed format write took 7.1
Table format write took 178.7
My dataset has 28,880,943 rows, and the columns are basic datatypes:
node_id int64
thread_id int64
handle_id int64
type int64
begin int64
end int64
duration int64
flags int64
unique_id int64
id int64
DSTL_LS_FULL float64
L2_DMISS float64
L3_MISS float64
kernel_type float64
dtype: object
...so I don't think there should be any data-specific issues with the write speed.
I've also tried adding BLOSC compression, to rule out any strange I/O issues that might affect one scenario or the other, but compression seems to decrease the performance of both equally.
Now, I realize that the pandas documentation says that the Storer format offers significantly faster writes, and slightly faster reads. (I do experience the faster reads, as a read of the Storer format seems to take around 2.5 seconds, while a read of the Table format takes around 10 seconds.) But it really seems excessive that the Table format write should take 25 times as long as the Storer format write.
Can any of the folks involved with PyTables or pandas explain the architectural (or otherwise) reasons why writing to the queryable format (which clearly requires very little extra data) should take an order of magnitude longer? And is there any hope for improving this in the future? I'd love to jump in to contributing to one project or the other, as my field is high performance computing and I see a significant use case for both projects in this domain.... but it would be helpful to get some clarification on the issues involved first, and/or some advice on how to speed things up from those who know how the system is built.
EDIT:
Running the former tests with %prun in IPython gives the following (somewhat reduced for readability) profile output for the Storer/Fixed format:
%prun -l 20 profile.events.to_hdf('test.h5', 'events', table=False, append=False)
3223 function calls (3222 primitive calls) in 7.385 seconds
Ordered by: internal time
List reduced from 208 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
6 7.127 1.188 7.128 1.188 {method '_createArray' of 'tables.hdf5Extension.Array' objects}
1 0.242 0.242 0.242 0.242 {method '_closeFile' of 'tables.hdf5Extension.File' objects}
1 0.003 0.003 0.003 0.003 {method '_g_new' of 'tables.hdf5Extension.File' objects}
46 0.001 0.000 0.001 0.000 {method 'reduce' of 'numpy.ufunc' objects}
and the following for the Tables format:
%prun -l 40 profile.events.to_hdf('test.h5', 'events', table=True, append=False, chunksize=1000000)
499082 function calls (499040 primitive calls) in 188.981 seconds
Ordered by: internal time
List reduced from 526 to 40 due to restriction <40>
ncalls tottime percall cumtime percall filename:lineno(function)
29 92.018 3.173 92.018 3.173 {pandas.lib.create_hdf_rows_2d}
640 20.987 0.033 20.987 0.033 {method '_append' of 'tables.hdf5Extension.Array' objects}
29 19.256 0.664 19.256 0.664 {method '_append_records' of 'tables.tableExtension.Table' objects}
406 19.182 0.047 19.182 0.047 {method '_g_writeSlice' of 'tables.hdf5Extension.Array' objects}
14244 10.646 0.001 10.646 0.001 {method '_g_readSlice' of 'tables.hdf5Extension.Array' objects}
472 10.359 0.022 10.359 0.022 {method 'copy' of 'numpy.ndarray' objects}
80 3.409 0.043 3.409 0.043 {tables.indexesExtension.keysort}
2 3.023 1.512 3.023 1.512 common.py:134(_isnull_ndarraylike)
41 2.489 0.061 2.533 0.062 {method '_fillCol' of 'tables.tableExtension.Row' objects}
87 2.401 0.028 2.401 0.028 {method 'astype' of 'numpy.ndarray' objects}
30 1.880 0.063 1.880 0.063 {method '_g_flush' of 'tables.hdf5Extension.Leaf' objects}
282 0.824 0.003 0.824 0.003 {method 'reduce' of 'numpy.ufunc' objects}
41 0.537 0.013 0.668 0.016 index.py:607(final_idx32)
14490 0.385 0.000 0.712 0.000 array.py:342(_interpret_indexing)
39 0.279 0.007 19.635 0.503 index.py:1219(reorder_slice)
2 0.256 0.128 10.063 5.031 index.py:1099(get_neworder)
1 0.090 0.090 119.392 119.392 pytables.py:3016(write_data)
57842 0.087 0.000 0.087 0.000 {numpy.core.multiarray.empty}
28570 0.062 0.000 0.107 0.000 utils.py:42(is_idx)
14164 0.062 0.000 7.181 0.001 array.py:711(_readSlice)
EDIT 2:
Running again with a pre-release copy of pandas 0.13 (pulled Nov 20 2013 at about 11:00 EST), write times for the Tables format improve significantly but still don't compare "reasonably" to the write speeds of the Storer/Fixed format.
%prun -l 40 profile.events.to_hdf('test.h5', 'events', table=True, append=False, chunksize=1000000)
499748 function calls (499720 primitive calls) in 117.187 seconds
Ordered by: internal time
List reduced from 539 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
640 22.010 0.034 22.010 0.034 {method '_append' of 'tables.hdf5Extension.Array' objects}
29 20.782 0.717 20.782 0.717 {method '_append_records' of 'tables.tableExtension.Table' objects}
406 19.248 0.047 19.248 0.047 {method '_g_writeSlice' of 'tables.hdf5Extension.Array' objects}
14244 10.685 0.001 10.685 0.001 {method '_g_readSlice' of 'tables.hdf5Extension.Array' objects}
472 10.439 0.022 10.439 0.022 {method 'copy' of 'numpy.ndarray' objects}
30 7.356 0.245 7.356 0.245 {method '_g_flush' of 'tables.hdf5Extension.Leaf' objects}
29 7.161 0.247 37.609 1.297 pytables.py:3498(write_data_chunk)
2 3.888 1.944 3.888 1.944 common.py:197(_isnull_ndarraylike)
80 3.581 0.045 3.581 0.045 {tables.indexesExtension.keysort}
41 3.248 0.079 3.294 0.080 {method '_fillCol' of 'tables.tableExtension.Row' objects}
34 2.744 0.081 2.744 0.081 {method 'ravel' of 'numpy.ndarray' objects}
115 2.591 0.023 2.591 0.023 {method 'astype' of 'numpy.ndarray' objects}
270 0.875 0.003 0.875 0.003 {method 'reduce' of 'numpy.ufunc' objects}
41 0.560 0.014 0.732 0.018 index.py:607(final_idx32)
14490 0.387 0.000 0.712 0.000 array.py:342(_interpret_indexing)
39 0.303 0.008 19.617 0.503 index.py:1219(reorder_slice)
2 0.288 0.144 10.299 5.149 index.py:1099(get_neworder)
57871 0.087 0.000 0.087 0.000 {numpy.core.multiarray.empty}
1 0.084 0.084 45.266 45.266 pytables.py:3424(write_data)
1 0.080 0.080 55.542 55.542 pytables.py:3385(write)
I noticed while running these tests that there are long periods where writing seems to "pause" (the file on disk is not actively growing), and yet there is also low CPU usage during some of these periods.
I begin to suspect that some known ext3 limitations may interact badly with either pandas or PyTables. Ext3 and other non-extent-based filesystems sometimes struggle to unlink large files promptly, and similar system performance (low CPU usage, but long wait times) is apparent even during a simple 'rm' of a 1GB file, for instance.
To clarify, in each test case, I made sure to remove the existing file, if any, before starting the test, so as not to incur any ext3 file removal/overwrite penalty.
However, when re-running this test with index=None, performance improves drastically (~50s vs the ~120 when indexing). So it would seem that either this process continues to be CPU-bound (my system has relatively old AMD Opteron Istanbul CPUs running @ 2.8GHz, though it does also have 8 sockets with 6 core CPUs in each, all but one of which, of course, sit idle during the write), or that there is some conflict between the way PyTables or pandas attempts to manipulate/read/analyze the file when already partially or fully on the filesystem that causes pathologically bad I/O behavior when the indexing is occurring.
EDIT 3:
@Jeff's suggested tests on a smaller dataset (1.3 GB on disk), after upgrading PyTables from 2.4 to 3.0.0, have gotten me here:
In [7]: %timeit f(df)
1 loops, best of 3: 3.7 s per loop
In [8]: %timeit f2(df) # where chunksize= 2 000 000
1 loops, best of 3: 13.8 s per loop
In [9]: %timeit f3(df) # where chunksize= 2 000 000
1 loops, best of 3: 43.4 s per loop
In fact, my performance seems to beat his in all scenarios except for when indexing is turned on (the default). However, indexing still seems to be a killer, and if the way I'm interpreting the output from top and ls as I run these tests is correct, there remain periods of time when there is neither significant processing nor any file writing happening (i.e., CPU usage for the Python process is near 0, and the filesize remains constant). I can only assume these are file reads. Why file reads would be causing slowdowns is hard for me to understand, as I can reliably load an entire 3+ GB file from this disk into memory in under 3 seconds. If they're not file reads, then what is the system 'waiting' on? (No one else is logged into the machine, and there is no other filesystem activity.)
At this point, with upgraded versions of the relevant python modules, the performance for my original dataset is down to the following figures. Of special interest are the system time, which I assume is at least an upper-bound on the time spent performing IO, and the Wall time, which seems to perhaps account for these mysterious periods of no write/no CPU activity.
In [28]: %time f(profile.events)
CPU times: user 0 ns, sys: 7.16 s, total: 7.16 s
Wall time: 7.51 s
In [29]: %time f2(profile.events)
CPU times: user 18.7 s, sys: 14 s, total: 32.7 s
Wall time: 47.2 s
In [31]: %time f3(profile.events)
CPU times: user 1min 18s, sys: 14.4 s, total: 1min 32s
Wall time: 2min 5s
Nevertheless, it would appear that indexing causes significant slowdown for my use case. Perhaps I should attempt limiting the fields indexed instead of simply performing the default case (which may very well be indexing on all of the fields in the DataFrame)? I am not sure how this is likely to affect query times, especially in the cases where a query selects based on a non-indexed field.
Per Jeff's request, a ptdump of the resulting file.
ptdump -av test.h5
/ (RootGroup) ''
/._v_attrs (AttributeSet), 4 attributes:
[CLASS := 'GROUP',
PYTABLES_FORMAT_VERSION := '2.1',
TITLE := '',
VERSION := '1.0']
/df (Group) ''
/df._v_attrs (AttributeSet), 14 attributes:
[CLASS := 'GROUP',
TITLE := '',
VERSION := '1.0',
data_columns := [],
encoding := None,
index_cols := [(0, 'index')],
info := {1: {'type': 'Index', 'names': [None]}, 'index': {}},
levels := 1,
nan_rep := 'nan',
non_index_axes :=
[(1, ['node_id', 'thread_id', 'handle_id', 'type', 'begin', 'end', 'duration', 'flags', 'unique_id', 'id', 'DSTL_LS_FULL', 'L2_DMISS', 'L3_MISS', 'kernel_type'])],
pandas_type := 'frame_table',
pandas_version := '0.10.1',
table_type := 'appendable_frame',
values_cols := ['values_block_0', 'values_block_1']]
/df/table (Table(28880943,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Int64Col(shape=(10,), dflt=0, pos=1),
"values_block_1": Float64Col(shape=(4,), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (4369,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False}
/df/table._v_attrs (AttributeSet), 15 attributes:
[CLASS := 'TABLE',
FIELD_0_FILL := 0,
FIELD_0_NAME := 'index',
FIELD_1_FILL := 0,
FIELD_1_NAME := 'values_block_0',
FIELD_2_FILL := 0.0,
FIELD_2_NAME := 'values_block_1',
NROWS := 28880943,
TITLE := '',
VERSION := '2.7',
index_kind := 'integer',
values_block_0_dtype := 'int64',
values_block_0_kind := ['node_id', 'thread_id', 'handle_id', 'type', 'begin', 'end', 'duration', 'flags', 'unique_id', 'id'],
values_block_1_dtype := 'float64',
values_block_1_kind := ['DSTL_LS_FULL', 'L2_DMISS', 'L3_MISS', 'kernel_type']]
and another %prun with the updated modules and the full dataset:
%prun -l 25 %time f3(profile.events)
CPU times: user 1min 14s, sys: 16.2 s, total: 1min 30s
Wall time: 1min 48s
542678 function calls (542650 primitive calls) in 108.678 seconds
Ordered by: internal time
List reduced from 629 to 25 due to restriction <25>
ncalls tottime percall cumtime percall filename:lineno(function)
640 23.633 0.037 23.633 0.037 {method '_append' of 'tables.hdf5extension.Array' objects}
15 20.852 1.390 20.852 1.390 {method '_append_records' of 'tables.tableextension.Table' objects}
406 19.584 0.048 19.584 0.048 {method '_g_write_slice' of 'tables.hdf5extension.Array' objects}
14244 10.591 0.001 10.591 0.001 {method '_g_read_slice' of 'tables.hdf5extension.Array' objects}
458 9.693 0.021 9.693 0.021 {method 'copy' of 'numpy.ndarray' objects}
15 6.350 0.423 30.989 2.066 pytables.py:3498(write_data_chunk)
80 3.496 0.044 3.496 0.044 {tables.indexesextension.keysort}
41 3.335 0.081 3.376 0.082 {method '_fill_col' of 'tables.tableextension.Row' objects}
20 2.551 0.128 2.551 0.128 {method 'ravel' of 'numpy.ndarray' objects}
101 2.449 0.024 2.449 0.024 {method 'astype' of 'numpy.ndarray' objects}
16 1.789 0.112 1.789 0.112 {method '_g_flush' of 'tables.hdf5extension.Leaf' objects}
2 1.728 0.864 1.728 0.864 common.py:197(_isnull_ndarraylike)
41 0.586 0.014 0.842 0.021 index.py:637(final_idx32)
14490 0.292 0.000 0.616 0.000 array.py:368(_interpret_indexing)
2 0.283 0.142 10.267 5.134 index.py:1158(get_neworder)
274 0.251 0.001 0.251 0.001 {method 'reduce' of 'numpy.ufunc' objects}
39 0.174 0.004 19.373 0.497 index.py:1280(reorder_slice)
57857 0.085 0.000 0.085 0.000 {numpy.core.multiarray.empty}
1 0.083 0.083 35.657 35.657 pytables.py:3424(write_data)
1 0.065 0.065 45.338 45.338 pytables.py:3385(write)
14164 0.065 0.000 7.831 0.001 array.py:615(__getitem__)
28570 0.062 0.000 0.108 0.000 utils.py:47(is_idx)
47 0.055 0.001 0.055 0.001 {numpy.core.multiarray.arange}
28570 0.050 0.000 0.090 0.000 leaf.py:397(_process_range)
87797 0.048 0.000 0.048 0.000 {isinstance}
|
Here is a similar comparison I just did. Its about 1/3 of the data 10M rows. The final size is abou 1.3GB
I define 3 timing functions:
Test the Fixed format (called Storer in 0.12). This writes in a PyTables Array format
def f(df):
store = pd.HDFStore('test.h5','w')
store['df'] = df
store.close()
Write in the Table format, using PyTables Table format. Do not create an index.
def f2(df):
store = pd.HDFStore('test.h5','w')
store.append('df',df,index=False)
store.close()
Same as f2, but create an index (which is normally done)
def f3(df):
store = pd.HDFStore('test.h5','w')
store.append('df',df)
store.close()
Create the frame
In [25]: df = concat([DataFrame(np.random.randn(10000000,10)),DataFrame(np.random.randint(0,10,size=50000000).reshape(10000000,5))],axis=1)
In [26]: df
Out[26]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000000 entries, 0 to 9999999
Columns: 15 entries, 0 to 4
dtypes: float64(10), int64(5)
v0.12.0
In [27]: %timeit f(df)
1 loops, best of 3: 14.7 s per loop
In [28]: %timeit f2(df)
1 loops, best of 3: 32 s per loop
In [29]: %timeit f3(df)
1 loops, best of 3: 40.1 s per loop
master/v0.13.0
In [5]: %timeit f(df)
1 loops, best of 3: 12.9 s per loop
In [6]: %timeit f2(df)
1 loops, best of 3: 17.5 s per loop
In [7]: %timeit f3(df)
1 loops, best of 3: 24.3 s per loop
Timing Runs with the same file as provided by the OP (link is below)
In [4]: df = pd.read_hdf('test.h5','df')
In [5]: df
Out[5]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 28880943 entries, 0 to 28880942
Columns: 14 entries, node_id to kernel_type
dtypes: float64(4), int64(10)
Like f1, Fixed format
In [6]: %timeit df.to_hdf('test.hdf','df',mode='w')
1 loops, best of 3: 36.2 s per loop
Like f2, Table format, no index
In [7]: %timeit df.to_hdf('test.hdf','df',mode='w',format='table',index=False)
1 loops, best of 3: 45 s per loop
In [8]: %timeit df.to_hdf('test.hdf','df',mode='w',format='table',index=False,chunksize=2000000)
1 loops, best of 3: 44.5 s per loop
Like f3, Table format with index
In [9]: %timeit df.to_hdf('test.hdf','df',mode='w',format='table',chunksize=2000000)
1 loops, best of 3: 1min 36s per loop
Like f3, Table format with index, compressed with blosc
In [10]: %timeit df.to_hdf('test.hdf','df',mode='w',format='table',chunksize=2000000,complib='blosc')
1 loops, best of 3: 46.5 s per loop
In [11]: %timeit pd.read_hdf('test.hdf','df')
1 loops, best of 3: 10.8 s per loop
Show original file (test.h5, and compressed, test.hdf)
In [13]: !ls -ltr test.h*
-rw-r--r-- 1 jreback users 3471518282 Nov 20 18:20 test.h5
-rw-rw-r-- 1 jreback users 649327780 Nov 20 21:17 test.hdf
Several points to note.
Not creating an index can make a non-trivial difference in time. I also believe that if you have a string based index it can substantially worsen write time. That said, you always want to create an index to make retrieval very fast.
You didn't include what your index is, nor whether it's sorted (though I only think this makes a small difference).
The write penalty in my examples is roughly 2x (though I have seen it be somewhat bigger when INCLUDING the index time). Thus your 7s (half of my time) for 3x the number of rows I am writing is quite suspect. I am using a reasonably fast disk array; if you were using a flash-based disk then this is possible, though.
master/v0.13.0 (to be released very soon) improves the write times on tables substantially.
You can try setting the chunksize parameter to a bigger number when you write the data (its default is 100000). The purpose of the relatively low default is to keep memory usage constant (e.g. if it is bigger you will use more memory; in theory it should write faster, though).
Tables offer 2 advantages over Fixed format: 1) query retrieval, and 2) appendability. Reading the entire table doesn't take advantage of either, so if you ONLY want to read the entire table, then Fixed format is recommended. (In my experience the flexibility of Tables greatly outweighs the write penalty, but YMMV)
Bottom line is to repeat the timings (use ipython as it will run multiple tests). If you can reproduce your results, then pls post a %prun and I'll take a look.
Update:
So the recommended way for a table of this size is to compress with blosc and use pandas master/0.13.0 along with PyTables 3.0.0.
|
Deprecation warning in scikit-learn svmlight format loader
|
I'm getting a new deprecation warning in an IPython notebook I wrote that I've not seen before. What I'm seeing is the following:
X,y = load_svmlight_file('./GasSensorArray/batch2.dat')
/Users/cpd/.virtualenvs/py27-ipython+pandas/lib/python2.7/site-packages/sklearn/datasets/svmlight_format.py:137: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
return _load_svmlight_file(f, dtype, multilabel, zero_based, query_id)
/Users/cpd/.virtualenvs/py27-ipython+pandas/lib/python2.7/site-packages/sklearn/datasets/svmlight_format.py:137: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
return _load_svmlight_file(f, dtype, multilabel, zero_based, query_id)
...
Any thoughts on what might be the issue here? I took another look at my data file and at first glance, I don't see any obvious issue. I'm not sure what I changed in my system setup that would have caused this. I've got v. 0.14.1 of scikit-learn installed.
|
You probably upgraded the numpy version, as this is a numpy 1.8.0 deprecation warning. Explained in this pull request. Continuation in this PR.
Briefly browsing the sklearn issue tracker, I haven't found any related issues.
You may want to search more thoroughly, and file a bug report if none exists.
|
Find unique values in a Pandas dataframe, irrespective of row or column location
|
I have a Pandas dataframe and I want to find all the unique values in that dataframe...irrespective of row/columns. If I have a 10 x 10 dataframe, and suppose they have 84 unique values, I need to find them - Not the count.
I can create a set and add the values of each rows by iterating over the rows of the dataframe. But, I feel that it may be inefficient (cannot justify that). Is there an efficient way to find it? Is there a predefined function?
|
In [1]: df = DataFrame(np.random.randint(0,10,size=100).reshape(10,10))
In [2]: df
Out[2]:
0 1 2 3 4 5 6 7 8 9
0 2 2 3 2 6 1 9 9 3 3
1 1 2 5 8 5 2 5 0 6 3
2 0 7 0 7 5 5 9 1 0 3
3 5 3 2 3 7 6 8 3 8 4
4 8 0 2 2 3 9 7 1 2 7
5 3 2 8 5 6 4 3 7 0 8
6 4 2 6 5 3 3 4 5 3 2
7 7 6 0 6 6 7 1 7 5 1
8 7 4 3 1 0 6 9 7 7 3
9 5 3 4 5 2 0 8 6 4 7
In [13]: Series(df.values.ravel()).unique()
Out[13]: array([9, 1, 4, 6, 0, 7, 5, 8, 3, 2])
NumPy's unique sorts, so it's faster to do it this way (and then sort afterwards if you need to)
In [14]: df = DataFrame(np.random.randint(0,10,size=10000).reshape(100,100))
In [15]: %timeit Series(df.values.ravel()).unique()
10000 loops, best of 3: 137 µs per loop
In [16]: %timeit np.unique(df.values.ravel())
1000 loops, best of 3: 270 µs per loop
|
Anaphoric list comprehension in Python
|
Consider the following toy example:
>>> def square(x): return x*x
...
>>> [square(x) for x in range(12) if square(x) > 50]
[64, 81, 100, 121]
I have to call square(x) twice in the list comprehension. The duplication is ugly, bug-prone (it's easy to change only one of the two calls when modifying the code), and inefficient.
Of course I can do this:
>>> squares = [square(x) for x in range(12)]
>>> [s for s in squares if s > 50]
[64, 81, 100, 121]
or this:
[s for s in [square(x) for x in range(12)] if s > 50]
These are both livable, but it feels as though there might be a way to do it all in a single statement without nesting the two list comprehensions, which I know I'll have to stare at for a while the next time I'm reading the code just to figure out what's going on. Is there a way?
I think a fair question to ask of me would be what I imagine such syntax could look like. Here are two ideas, but neither feels idiomatic in Python (nor do they work). They are inspired by anaphoric macros in Lisp.
[square(x) for x in range(12) if it > 50]
[it=square(x) for x in range(12) if it > 50]
|
You should use a generator:
[s for s in (square(x) for x in range(12)) if s > 50]
This avoids creating an intermediate unfiltered list of squares.
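For instance, with the example from the question (the result matches the nested-comprehension version):
>>> def square(x): return x*x
...
>>> [s for s in (square(x) for x in range(12)) if s > 50]
[64, 81, 100, 121]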
|
Celery task with a time_start attribute in 1970
|
An inspection of currently running Celery tasks reveals a weird time_start timestamp:
>> celery.app.control.inspect().active()
{u'celery@worker.hostname': [{u'acknowledged': True,
u'args': u'(...,)',
u'delivery_info': {u'exchange': u'celery',
u'priority': 0,
u'redelivered': None,
u'routing_key': u'celery'},
u'hostname': u'celery@worker.hostname',
u'id': u'3d92fdfd-524e-4ba1-98cb-cf83af2ad8e9',
u'kwargs': u'{}',
u'name': u'task_name',
u'time_start': 9636801.218162088,
u'worker_pid': 7931}]}
And the time_start attribute dates the task back to 1970:
>> from datetime import datetime
>> datetime.fromtimestamp(9636801.218162088)
datetime.datetime(1970, 4, 22, 13, 53, 21, 218162)
Am I misinterpreting the time_task attribute? Is my Celery app misconfigured?
I am using Celery 3.1.4 on Linux with a Django app and a Redis backend.
Tasks are run by a worker that is executed as follows:
./manage.py celery worker --loglevel=INFO --soft-time-limit=600 --logfile=/tmp/w1.log --pidfile=/tmp/w1.pid -n 'w1.%%h'
|
I found the answer to my own question by digging in the Celery and Kombu code: the time_start attribute of a task is computed by the kombu.five.monotonic function. (Ironically, the kombu code also refers to another StackOverflow question for reference) The timestamp returned by that function refers to a "monotonic" time computed by the clock_gettime system call.
As explained in the clock_gettime documentation, this monotonic time represents the time elapsed "since some unspecified starting point". The purpose of this function is to make sure that time increases monotonically, despite changes of other clock values.
Thus, in order to obtain the real datetime at which the task was started, we just need to compare the time_start attribute to the current value of the monotonic clock:
>> from datetime import datetime
>> from time import time
>> import kombu.five
>> datetime.fromtimestamp(time() - (kombu.five.monotonic() - 9636801.218162088))
datetime.datetime(2013, 11, 20, 9, 55, 56, 193768)
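If you need this conversion in more than one place, it can be wrapped in a small helper; this is just a sketch of the same idea (the function name is made up):
from datetime import datetime
from time import time
import kombu.five
def monotonic_to_datetime(monotonic_timestamp):
    # the task's time_start is on the monotonic clock; subtract its offset
    # from the current monotonic value to get back to wall-clock time
    return datetime.fromtimestamp(time() - (kombu.five.monotonic() - monotonic_timestamp))
monotonic_to_datetime(9636801.218162088)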
|
argparse subparser monolithic help output
|
My argparse has only 3 flags (store_true) on the top level, everything else is handled through subparsers. When I run myprog.py --help, the output shows a list of all subcommands like normal, {sub1, sub2, sub3, sub4, ...}. So, the default is working great...
I usually can't remember the exact subcommand name I need, and all of its options. So I end up doing 2 help lookups:
myprog.py --help
myprog.py sub1 --help
I do this so often, I decided to cram this into one step. I would rather have my toplevel help output a huge summary, and then I scroll through the list manually. I find it is much faster (for me at least).
I was using a RawDescriptionHelpFormatter, and typing the long help output by hand. But now I have lots of subcommands, and it's becoming a pain to manage.
Is there a way to get a verbose help output with just one program call?
If not, how can I iterate the subparsers of my argparse instance, and then retrieve the help output individually from each one (which I will then later glue together)?
Here is a quick outline of my argparse setup. I cleaned/stripped the code a fair bit, so this may not run without a bit of help.
parser = argparse.ArgumentParser(
prog='myprog.py',
formatter_class=argparse.RawDescriptionHelpFormatter,
description=textwrap.dedent(""" You can manually type Help here """) )
parser.add_argument('--debuglog', action='store_true', help='Verbose logging for debug purposes.')
parser.add_argument('--ipyonexit', action='store_true', help='Drop into an embeded Ipython session instead of exiting command.')
subparser = parser.add_subparsers()
### --- Subparser B
parser_b = subparser.add_parser('pdfreport', description="Used to output reports in PDF format.")
parser_b.add_argument('type', type=str, choices=['flatlist', 'nested', 'custom'],
help="The type of PDF report to generate.")
parser_b.add_argument('--of', type=str, default='',
help="Override the path/name of the output file.")
parser_b.add_argument('--pagesize', type=str, choices=['letter', '3x5', '5x7'], default='letter',
help="Override page size in output PDF.")
parser_b.set_defaults(func=cmd_pdf_report)
### ---- Subparser C
parser_c = subparser.add_parser('dbtables', description="Used to perform direct DB import/export using XLS files.")
parser_c.add_argument('action', type=str, choices=['push', 'pull', 'append', 'update'],
help="The action to perform on the Database Tables.")
parser_c.add_argument('tablename', nargs="+",
help="The name(s) of the DB-Table to operate on.")
parser_c.set_defaults(func=cmd_db_tables)
args = parser.parse_args()
args.func(args)
|
This is a bit tricky, as argparse does not expose a list of defined sub-parsers directly. But it can be done:
import argparse
# create the top-level parser
parser = argparse.ArgumentParser(prog='PROG')
parser.add_argument('--foo', action='store_true', help='foo help')
subparsers = parser.add_subparsers(help='sub-command help')
# create the parser for the "a" command
parser_a = subparsers.add_parser('a', help='a help')
parser_a.add_argument('bar', type=int, help='bar help')
# create the parser for the "b" command
parser_b = subparsers.add_parser('b', help='b help')
parser_b.add_argument('--baz', choices='XYZ', help='baz help')
# print main help
print(parser.format_help())
# retrieve subparsers from parser
subparsers_actions = [
action for action in parser._actions
if isinstance(action, argparse._SubParsersAction)]
# there will probably only be one subparser_action,
# but better safe than sorry
for subparsers_action in subparsers_actions:
# get all subparsers and print help
for choice, subparser in subparsers_action.choices.items():
print("Subparser '{}'".format(choice))
print(subparser.format_help())
This example should work for python 2.7 and python 3. The example parser is from Python 2.7 documentation on argparse sub-commands.
The only thing left to do is to add a new argument for the complete help, or to replace the built-in -h/--help.
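A possible way to wire that in is a custom argparse.Action (a sketch; the option name --help-all and the class name are made up):
class HelpAllAction(argparse.Action):
    def __init__(self, option_strings, dest, **kwargs):
        super(HelpAllAction, self).__init__(option_strings, dest, nargs=0, **kwargs)
    def __call__(self, parser, namespace, values, option_string=None):
        # top-level help first
        print(parser.format_help())
        # then the help of every subparser, using the same trick as above
        subparsers_actions = [
            action for action in parser._actions
            if isinstance(action, argparse._SubParsersAction)]
        for subparsers_action in subparsers_actions:
            for choice, subparser in subparsers_action.choices.items():
                print("Subparser '{}'".format(choice))
                print(subparser.format_help())
        parser.exit()
parser.add_argument('--help-all', action=HelpAllAction,
                    help='show help for all subcommands and exit')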
|
python: shift column in pandas dataframe up by one
|
I've got a pandas dataframe. I want to 'lag' one of my columns. Meaning, for example, shifting the entire column 'gdp' up by one, and then removing all the excess data at the bottom of the remaining rows so that all columns are of equal length again.
df =
y gdp cap
0 1 2 5
1 2 3 9
2 8 7 2
3 3 4 7
4 6 7 7
df_lag =
y gdp cap
0 1 3 5
1 2 7 9
2 8 4 2
3 3 7 7
Anyway to do this?
|
In [44]: df.gdp = df.gdp.shift(-1)
In [45]: df
Out[45]:
y gdp cap
0 1 3 5
1 2 7 9
2 8 4 2
3 3 7 7
4 6 NaN 7
In [46]: df[:-1]
Out[46]:
y gdp cap
0 1 3 5
1 2 7 9
2 8 4 2
3 3 7 7
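Put together, a small sketch that leaves the original frame untouched:
df_lag = df.copy()
df_lag['gdp'] = df_lag['gdp'].shift(-1)   # shift the column up by one
df_lag = df_lag[:-1]                      # drop the trailing NaN row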
|
pip install from github repo branch
|
Seems to be rather straightforward but I just can't get it right. Trying to pip install a repo's specific branch. Googling tells me to
pip install https://github.com/user/repo.git@branch
The branch's name is issue/34/oscar-0.6 so I did pip install https://github.com/tangentlabs/django-oscar-paypal.git@/issue/34/oscar-0.6 but it's returning a 404.
How do I install this branch?
|
Prepend the url prefix git+ (See VCS Support):
pip install git+https://github.com/tangentlabs/django-oscar-paypal.git@issue/34/oscar-0.6
And specify the branch name without the leading /.
|
How can I make a scatter plot colored by density in matplotlib?
|
I'd like to make a scatter plot where each point is colored by the spatial density of nearby points.
I've come across a very similar question, which shows an example of this using R:
R Scatter Plot: symbol color represents number of overlapping points
What's the best way to accomplish something similar in python using matplotlib?
|
In addition to hist2d or hexbin as @askewchan suggested, you can use the same method that the accepted answer in the question you linked to uses.
If you want to do that:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# Generate fake data
x = np.random.normal(size=1000)
y = x * 3 + np.random.normal(size=1000)
# Calculate the point density
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
fig, ax = plt.subplots()
ax.scatter(x, y, c=z, s=100, edgecolor='')
plt.show()
If you'd like the points to be plotted in order of density so that the densest points are always on top (similar to the linked example), just sort them by the z-values. I'm also going to use a smaller marker size here as it looks a bit better:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# Generate fake data
x = np.random.normal(size=1000)
y = x * 3 + np.random.normal(size=1000)
# Calculate the point density
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
fig, ax = plt.subplots()
ax.scatter(x, y, c=z, s=50, edgecolor='')
plt.show()
|
Passing a matplotlib figure to HTML (flask)
|
I am using matplotlib to render some figure in a web app. I've used fig.savefig() before when I'm just running scripts. However, I need a function to return an actual ".png" image so that I can call it with my HTML.
Some more (possibly unnecessary) info: I am using Python Flask. I figure I could use fig.savefig() and just stick the figure in my static folder and then call it from my HTML, but I'd rather not do that every time. It would be optimal if I could just create the figure, make an image out of it, return that image, and call it from my HTML, then it goes away.
The code that creates the figure works. However, it returns a figure, which doesn't work with HTML I guess.
Here's where I call the draw_polygon in the routing, draw_polygon is the method that returns the figure:
@app.route('/images/<cropzonekey>')
def images(cropzonekey):
fig = draw_polygons(cropzonekey)
return render_template("images.html", title=cropzonekey, figure = fig)
And here is the HTML where I am trying to generate the image.
<html>
<head>
<title>{{ title }} - image</title>
</head>
<body>
<img src={{ figure }} alt="Image Placeholder" height="100">
</body>
</html>
And, as you can probably guess, when I load the page, all I get is Image Placeholder. So, they didn't like the format I fed the figure in with.
Anyone know what matplotlib methods/work-arounds turn a figure into an actual image? I am all over these docs but I can't find anything. Thanks!
BTW: didn't think it was necessary to include the python code that makes the figure, but I can include it if You guys need to see it (just didn't want to clutter the question)
|
You have to separate the HTML and the image into two different routes.
Your /images/<cropzonekey> route will just serve the page, and in the HTML content of that page there will be a reference to the second route, the one that serves the image.
The image is served in its own route from a memory file that you generate with savefig().
I obviously didn't test this, but I believe the following example will work as is or will get you pretty close to a working solution:
from StringIO import StringIO  # in-memory image file (use io.BytesIO on Python 3)
from flask import send_file
@app.route('/images/<cropzonekey>')
def images(cropzonekey):
return render_template("images.html", title=cropzonekey)
@app.route('/fig/<cropzonekey>')
def fig(cropzonekey):
fig = draw_polygons(cropzonekey)
img = StringIO()
fig.savefig(img)
img.seek(0)
return send_file(img, mimetype='image/png')
Your images.html template then becomes:
<html>
<head>
<title>{{ title }} - image</title>
</head>
<body>
<img src="{{ url_for('fig', cropzonekey = title) }}" alt="Image Placeholder" height="100">
</body>
</html>
|
Removing index column in pandas
|
I have the following code which imports a CSV file. There are 3 columns and I want to set the first two of them to variables. When I set the second column to the variable "efficiency" the index column is also tacked on. How can I get rid of the index column?
df = pd.DataFrame.from_csv('Efficiency_Data.csv', header=0, parse_dates=False)
energy = df.index
efficiency = df.Efficiency
print efficiency
I tried using
del df['index']
after I set
energy = df.index
which I found in another post but that results in "KeyError: 'index' "
|
When writing to and reading from your csv file, include the appropriate arguments so the index is never written out or read back in. For example, when writing:
df.to_csv(filename, index=False)
and when reading from the csv (note that read_csv is a pandas function, not a DataFrame method):
pd.read_csv(filename, index_col=False)
This should prevent the issue so you don't need to fix it later.
|
How to make good reproducible pandas examples
|
Having spent a decent amount of time watching both the r and pandas tags on SO, the impression that I get is that pandas questions are less likely to contain reproducible data. This is something that the R community has been pretty good about encouraging, and thanks to guides like this, newcomers are able to get some help on putting together these examples. People who are able to read these guides and come back with reproducible data will often have much better luck getting answers to their questions.
How can we create good reproducible examples for pandas questions? Simple dataframes can be put together, e.g.:
import pandas as pd
df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice'],
'income': [40000, 50000, 42000]})
But many example datasets need more complicated structure, e.g.:
datetime indices or data
Multiple categorical variables (is there an equivalent to R's expand.grid() function, which produces all possible combinations of some given variables?)
MultiIndex or Panel data
For datasets that are hard to mock up using a few lines of code, is there an equivalent to R's dput() that allows you to generate copy-pasteable code to regenerate your datastructure?
|
Note: The ideas here are pretty generic for StackOverflow, indeed for questions in general.
Disclaimer: Writing a good question is HARD.
The Good:
do include small* example DataFrame, either as runnable code:
In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B'])
or make it "copy and pasteable" using pd.read_clipboard(sep='\s\s+'), you can format the text for StackOverflow highlight and use Ctrl+K (or prepend four spaces to each line):
In [2]: df
Out[2]:
A B
0 1 2
1 1 3
2 4 6
test pd.read_clipboard(sep='\s\s+') yourself.
* I really do mean small, the vast majority of example DataFrames could be fewer than 6 rows (citation needed), and I bet I can do it in 5 rows. Can you reproduce the error with df = df.head()? If not, fiddle around to see if you can make up a small DataFrame which exhibits the issue you are facing.
* Every rule has an exception, the obvious one is for performance issues (in which case definitely use %timeit and possibly %prun), where you should generate (consider using np.random.seed so we have the exact same frame): df = pd.DataFrame(np.random.randn(100000000, 10)). That said, "make this code fast for me" is not strictly on topic for the site...
write out the outcome you desire (similarly to above)
In [3]: iwantthis
Out[3]:
A B
0 1 5
1 4 6
Explain where the numbers come from: the 5 is the sum of the B column for the rows where A is 1.
do show the code you've tried:
In [4]: df.groupby('A').sum()
Out[4]:
B
A
1 5
4 6
But say what's incorrect: the A column is in the index rather than a column.
do show you've done some research (search the docs, search StackOverflow), give a summary:
The docstring for sum simply states "Compute sum of group values"
The groupby docs don't give any examples for this.
Aside: the answer here is to use df.groupby('A', as_index=False).sum().
if it's relevant that you have Timestamp columns, e.g. you're resampling or something, then be explicit and apply pd.to_datetime to them for good measure**.
df['date'] = pd.to_datetime(df['date']) # this column ought to be date..
** Sometimes this is the issue itself: they were strings.
The Bad:
don't include a MultiIndex, which we can't copy and paste (see above), this is kind of a grievance with pandas default display but nonetheless annoying:
In [11]: df
Out[11]:
C
A B
1 2 3
2 6
The correct way is to include an ordinary DataFrame with a set_index call:
In [12]: df = pd.DataFrame([[1, 2, 3], [1, 2, 6]], columns=['A', 'B', 'C']).set_index(['A', 'B'])
In [13]: df
Out[13]:
C
A B
1 2 3
2 6
do provide insight to what it is when giving the outcome you want:
B
A
1 1
5 0
Be specific about how you got the numbers (what are they)... double check they're correct.
If your code throws an error, do include the entire stacktrace (this can be edited out later if it's too noisy). Show the line number (and the corresponding line of your code which it's raising against).
The Ugly:
don't link to a csv we don't have access to (ideally don't link to an external source at all...)
df = pd.read_csv('my_secret_file.csv') # ideally with lots of parsing options
Most data is proprietary, we get that: make up similar data and see if you can reproduce the problem (something small).
don't explain the situation vaguely in words, like you have a DataFrame which is large, mention some of the column names in passing (be sure not to mention their dtypes). Try and go into lots of detail about something which is completely meaningless without seeing the actual context. Presumably no one is even going to read to the end of this paragraph.
Essays are bad, it's easier with small examples.
don't include 10+ (100+??) lines of data munging before getting to your actual question.
Please, we see enough of this in our day jobs. We want to help, but not like this....
Cut the intro, and just show the relevant DataFrames (or small versions of them) in the step which is causing you trouble.
Anyways, have fun learning python, numpy and pandas!
|
Turn Pandas Multi-Index into column
|
I have a dataframe with 2 index levels:
value
Trial measurement
1 0 13
1 3
2 4
2 0 NaN
1 12
3 0 34
Which I want to turn into this:
Trial measurement value
1 0 13
1 1 3
1 2 4
2 0 NaN
2 1 12
3 0 34
How can I best do this?
I need this because I want to aggregate the data as instructed here, but I can't select my columns like that if they are in use as indices.
|
The reset_index() is a pandas DataFrame method that will transfer index values into the DataFrame as columns. The default setting for the parameter is drop=False (which will keep the index values as columns).
All you have to do add .reset_index(inplace=True) after the name of the DataFrame:
df.reset_index(inplace=True)
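For example, a small sketch using a frame like the one in the question:
import numpy as np
import pandas as pd
idx = pd.MultiIndex.from_tuples(
    [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (3, 0)],
    names=['Trial', 'measurement'])
df = pd.DataFrame({'value': [13, 3, 4, np.nan, 12, 34]}, index=idx)
df.reset_index(inplace=True)   # Trial and measurement become ordinary columns
print(df)
#    Trial  measurement  value
# 0      1            0   13.0
# 1      1            1    3.0
# 2      1            2    4.0
# 3      2            0    NaN
# 4      2            1   12.0
# 5      3            0   34.0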
|
How to avoid [Errno 12] Cannot allocate memory errors caused by using subprocess module
|
Complete Working Test Case
Of course depending on your memory on the local and remote machines your array sizes will be different.
z1 = numpy.random.rand(300000000,2);
for i in range(1000):
print('*******************************************\n');
direct_output = subprocess.check_output('ssh blah@blah "ls /"', shell=True);
direct_output = 'a'*1200000;
a2 = direct_output*10;
print(len(direct_output));
Current Use Case
In case it helps my use case is as follows:
I issue db queries then store the resulting tables on the remote machine. I then want to transfer them across a network and do analysis. Thus far I have been doing something like the following in python:
#run a bunch of queries before hand with the results in remote files
....
counter = 0
mergedDataFrame = None
while NotDone:
output = subprocess.check_output('ssh blah@blah cat /data/file%08d'%(counter))
data = pandas.read_csv(...)
#do lots of analysis, append, merge, numpy stuff etc...
mergedDataFrame = pandas.merge(...)
counter += 1
At some point I receive the following error at the check_output command: [Errno 12] Cannot allocate memory
Background
Thanks to the below questions I think I have an idea of what is wrong. There are a number of solutions posted, and I am trying to determine which of the solutions will avoid the [Errno 12] Cannot allocate memory error associated with the subprocess implementation using fork/clone.
Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory" This gives the underlying diagnosis and suggests some workaround like spawning separate script etc...
Understanding Python fork and memory allocation errors Suggests using rfoo to circumvent the subprocess limitation of fork/clone and spawning child process and copy memory etc... This seems to imply a client-server model
What is the simplest way to SSH using Python? , but I have the additional constraints that I cannot use subprocess due to memory limitations and fork/clone implementation? The solutions suggests using paramiko or something built on top of it, others suggest subprocess (which I have found will not work in my case).
There were other similar questions but the answers often talked about file descriptors being the culprit (in this case they are not), adding more RAM to the system ( I cannot do this), upgrading to x64 ( I already am on x64). Some hint at the problem of ENOMEM. A few answers mention trying to determine if the subprocess.Popen (in my case check_output) is not properly cleaning the processes, but it looks like S. Lott and others agree that the subprocess code itself is properly cleaning up.
Python memory allocation error using subprocess.Popen
Python IOError cannot allocate memory although there is plenty
Cannot allocate memory on Popen commands
Python subprocess.Popen erroring with OSError: [Errno 12] Cannot allocate memory after period of time
I have searched through the source code on github https://github.com/paramiko/paramiko/search?q=Popen&type=Code and it appears to use subprocess in the proxy.py file.
Actual Questions
Does this mean that ultimately paramiko is using the Popen solution described above that will have problems when the python memory footprint grows and repeated Popen calls are made due to the clone/fork implementation?
If paramiko will not work is there another way to do what I am looking for with a client side only solution? Or will a client/server/socket solution be needed? If so will any of rfoo, tornado, or zeromq, http transfers work here?
Notes
I am running 64bit linux 8GB main memory. I do not want to pursue the options of buying more RAM.
|
If you are running out of memory, you may want to increase your swap memory. Or you might have no swap enabled at all. In Ubuntu (it should work for other distributions as well) you can check your swap by:
$ sudo swapon -s
if it is empty it means you don't have any swap enabled. To add a 1GB swap:
$ sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
Add the following line to the fstab to make the swap permanent.
$ sudo vim /etc/fstab
/swapfile none swap sw 0 0
Source and more information can be found here.
|
How to use valgrind with python?
|
I am trying to memcheck a C python extension I am writing, but I'm having trouble setting up valgrind to work with python. I would really appreciate some advice. Just for context, this is Ubuntu 13.10, python 2.7.5+, and valgrind 3.8.1.
As per recommendation from Readme.valgrind I did the following.
1) Downloaded the python source with
sudo apt-get build-dep python2.7
apt-get source python2.7
2) Applied the code patch, i.e. "Uncomment Py_USING_MEMORY_DEBUGGER in Objects/obmalloc.c".
3) Applied the suppression patch, i.e. "Uncomment the lines in Misc/valgrind-python.supp that suppress the warnings for PyObject_Free and PyObject_Realloc"
4) Compiled python with
./configure --prefix=/home/dejan/workspace/python --without-pymalloc
make -j4 install
Note that I did both 2 and 3, while README.valgrind says to do 2 or 3... more can't hurt.
Now, let's test this on some sample python code in test.py
print "Test"
Let's run valgrind on python with this script
valgrind --tool=memcheck --leak-check=full --suppressions=python2.7-2.7.5/Misc/valgrind-python.supp bin/python test.py
Unexpectedly, there is still loads of reports from valgrind, with the first one being (and many more following)
==27944== HEAP SUMMARY:
==27944== in use at exit: 857,932 bytes in 5,144 blocks
==27944== total heap usage: 22,766 allocs, 17,622 frees, 4,276,934 bytes allocated
==27944==
==27944== 38 bytes in 1 blocks are possibly lost in loss record 24 of 1,343
==27944== at 0x4C2A2DB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==27944== by 0x46B8DD: PyString_FromString (stringobject.c:143)
==27944== by 0x439631: PyFile_FromFile (fileobject.c:157)
==27944== by 0x4E9B4A: _PySys_Init (sysmodule.c:1383)
==27944== by 0x4E29E9: Py_InitializeEx (pythonrun.c:222)
==27944== by 0x4154B4: Py_Main (main.c:546)
==27944== by 0x577DDE4: (below main) (libc-start.c:260)
Am I doing something wrong? Is there a way to valgrind a python script that doesn't leak and get clean valgrind output?
|
I found the answer here.
Python also needs to be compiled in debug mode, i.e.
./configure --prefix=/home/dejan/workspace/python --without-pymalloc --with-pydebug --with-valgrind
In addition, numpy has a suppression file that gets rid of the extra valgrind warnings.
|
Scikit Learn SVC decision_function and predict
|
I'm trying to understand the relationship between decision_function and predict, which are instance methods of SVC (http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). So far I've gathered that decision function returns pairwise scores between classes. I was under the impression that predict chooses the class that maximizes its pairwise score, but I tested this out and got different results. Here's the code I was using to try and understand the relationship between the two. First I generated the pairwise score matrix, and then I printed out the class that has maximal pairwise score which was different than the class predicted by clf.predict.
result = clf.decision_function(vector)[0]
counter = 0
num_classes = len(clf.classes_)
pairwise_scores = np.zeros((num_classes, num_classes))
for r in xrange(num_classes):
for j in xrange(r + 1, num_classes):
pairwise_scores[r][j] = result[counter]
pairwise_scores[j][r] = -result[counter]
counter += 1
index_star = np.argmax(pairwise_scores)
class_star = index_star / num_classes
print class_star
print clf.predict(vector)[0]
Does anyone know the relationship between predict and decision_function?
|
I don't fully understand your code, but let's go through the example from the documentation page you referenced:
import numpy as np
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
from sklearn.svm import SVC
clf = SVC()
clf.fit(X, y)
Now let's apply both the decision function and predict to the samples:
clf.decision_function(X)
clf.predict(X)
The output we get is:
array([[-1.00052254],
[-1.00006594],
[ 1.00029424],
[ 1.00029424]])
array([1, 1, 2, 2])
And that is easy to interpret: the decision function tells us on which side of the hyperplane generated by the classifier we are (and how far we are from it). Based on that information, the estimator then labels the examples with the corresponding label.
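In other words, for this binary example the prediction can be reconstructed from the sign of the decision function (a small sketch; the class order is taken from clf.classes_):
import numpy as np
scores = clf.decision_function(X).ravel()
reconstructed = np.where(scores > 0, clf.classes_[1], clf.classes_[0])
print(reconstructed)   # [1 1 2 2], identical to clf.predict(X)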
|
Fast n-dimensional sparse array in Python / Cython
|
I have an application that involves large n-dimensional arrays which are very sparse. scipy.sparse has a useful 'vectorized getting and setting' feature, so that Cython can be used to populate a sparse matrix quickly.
Of course the scipy package can't handle n-dimensions. There are two packages I have found that do n-dimensional sparse arrays in python sparray and ndsparse. However it seems neither has the vectorized getting and setting feature.
So I need either:
a python package for n-dimensional arrays with vectorized get and set or
a c library for sparse arrays which I can easily access with Cython or
some 'roll your own' option which I guess would require a c equivalent to a python dict
For my purpose I think mapping the n-dimension coordinates back to 1 or two dimensions could work. What would be better though is to have a dict equivalent that i can access fast inside a Cython loop. I assume this rules out the python dict.
Wondering if someone could give me an example of how to use the c++ map object from within Cython?
|
If you decide to go with the C dict option, you can use the C++ STL's std::map. It's unlikely that you'll find faster or more robust native code that implements a dictionary/map.
cppmap.pyx:
# distutils: language = c++
cdef extern from "<map>" namespace "std":
cdef cppclass mymap "std::map<int, float>":
mymap()
float& operator[] (const int& k)
cdef mymap m = mymap()
cdef int i
cdef float value
for i in range(100):
value = 3.0 * i**2
m[i] = value
print m[10]
setup.py:
from distutils.core import setup
from Cython.Build import cythonize
setup(name = "cppmapapp"
ext_modules = cythonize('*.pyx'))
Command line:
$ python setup.py build
$ cd build/lib.macosx-10.5-x86_64-2.7
$ python -c 'import cppmap'
300.0
|
How bad is shadowing names defined in outer scopes?
|
I just switched to Pycharm and I am very happy about all the warnings and hints it provides me to improve my code. Except for this one which I don't understand:
This inspection detects shadowing names defined in outer scopes.
I know it is bad practice to access variable from the outer scope but what is the problem with shadowing the outer scope?
Here is one example, where Pycharm gives me the warning message:
data = [4, 5, 6]
def print_data(data): # <-- Warning: "Shadows 'data' from outer scope
print data
print_data(data)
|
No big deal in your above snippet, but imagine a function with a few more arguments and quite a few more lines of code. Then you decide to rename your data argument as yadda but miss one of the places it is used in the function's body... Now data refers to the global, and you start having weird behaviour - where you would have a much more obvious NameError if you didn't have a global name data.
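For instance, a minimal sketch of that rename scenario (the new name yadda is, of course, made up):
data = [4, 5, 6]
def print_data(yadda):   # argument renamed from 'data' to 'yadda'...
    print data           # ...but this line was missed; it now silently
                         # reads the global 'data' instead of the argument
print_data([1, 2, 3])    # prints [4, 5, 6] -- no error, just wrong output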
Also remember that in Python everything is an object (including modules, classes and functions) so there's no distinct namespaces for functions, modules or classes. Another scenario is that you import function foo at the top of your module, and use it somewhere in your function body. Then you add a new argument to your function and named it - bad luck - foo.
Finally, built-in functions and types also live in the same namespace and can be shadowed the same way.
None of this is much of a problem if you have short functions, good naming and a decent unittest coverage, but well, sometimes you have to maintain less than perfect code and being warned about such possible issues might help.
|
How to set default text for a Tkinter Entry widget
|
How do I set the default text for a Tkinter Entry widget in the constructor? I checked the documentation, but I do not see something like a "string=" option to set in the constructor.
There is a similar answer out there for using tables and lists, but this is for a simple Entry widget.
|
Use Entry.insert.
For example:
from Tkinter import *
root = Tk()
e = Entry(root)
e.insert(END, 'default text')
e.pack()
root.mainloop()
or use textvariable option:
from Tkinter import *
root = Tk()
v = StringVar(root, value='default text')
e = Entry(root, textvariable=v)
e.pack()
root.mainloop()
|
Why does bool(xml.etree.ElementTree.Element) evaluate to False?
|
import xml.etree.ElementTree as ET
e = ET.Element('Brock',Role="Bodyguard")
print bool(e)
Why is an xml.etree.ElementTree.Element considered False?
I know that I can do if e is not None to check for existence. But I would strongly expect bool(e) to return True.
|
As it turns out, Element objects are considered a False value if they have no children.
I found this in the source:
def __nonzero__(self):
warnings.warn(
"The behavior of this method will change in future versions. "
"Use specific 'len(elem)' or 'elem is not None' test instead.",
FutureWarning, stacklevel=2
)
return len(self._children) != 0 # emulate old behaviour, for now
Even the inline comment agrees with you -- this behavior is iffy ;)
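A quick demonstration of the current behaviour (using the element from your snippet; a FutureWarning may also be printed):
import xml.etree.ElementTree as ET
e = ET.Element('Brock', Role="Bodyguard")
print len(e)         # 0, so bool(e) is False
ET.SubElement(e, 'child')
print bool(e)        # True now that it has a child
print e is not None  # True -- the recommended existence test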
|
Matplotlib connect scatterplot points with line - Python
|
I have two lists, dates and values. I want to plot them using matplotlib. The following creates a scatter plot of my data.
import matplotlib.pyplot as plt
plt.scatter(dates,values)
plt.show()
plt.plot(dates, values) creates a line graph.
But what I really want is a scatterplot where the points are connected by a line.
Similar to in R:
plot(dates, values)
lines(dates, value, type="l")
, which gives me a scatterplot of points overlaid with a line connecting the points.
How do I do this in python?
|
I think @Evert has the right answer:
plt.scatter(dates,values)
plt.plot(dates, values)
plt.show()
Which is pretty much the same as
plt.plot(dates, values, '-o')
plt.show()
or whatever linestyle you prefer.
|
Using Sklearn's TfidfVectorizer transform
|
I am trying to get the tf-idf vector for a single document using Sklearn's TfidfVectorizer object. I create a vocabulary based on some training documents and use fit_transform to train the TfidfVectorizer. Then, I want to find the tf-idf vectors for any given testing document.
from sklearn.feature_extraction.text import TfidfVectorizer
self.vocabulary = "a list of words I want to look for in the documents".split()
self.vect = TfidfVectorizer(sublinear_tf=True, max_df=0.5, analyzer='word',
stop_words='english')
self.vect.fit_transform(self.vocabulary)
...
doc = "some string I want to get tf-idf vector for"
tfidf = self.vect.transform(doc)
The problem is that this returns a matrix with n rows where n is the size of my doc string. I want it to return just a single vector representing the tf-idf for the entire string. How can I make this see the string as a single document, rather than each character being a document? Also, I am very new to text mining so if I am doing something wrong conceptually, that would be great to know. Any help is appreciated.
|
If you want to compute tf-idf only for a given vocabulary, use the vocabulary argument to the TfidfVectorizer constructor:
vocabulary = "a list of words I want to look for in the documents".split()
vect = TfidfVectorizer(sublinear_tf=True, max_df=0.5, analyzer='word',
stop_words='english', vocabulary=vocabulary)
Then, to fit, i.e. calculate counts, with a given corpus, i.e. an iterable of documents, use fit:
vect.fit(corpus)
The fit_transform method is shorthand for
vect.fit(corpus)
corpus_tf_idf = vect.transform(corpus)
Last, the transform method accepts a corpus, so for a single document you should pass it as a list; otherwise it is treated as an iterable of symbols, each symbol being a document.
doc_tfidf = vect.transform([doc])
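Putting the pieces together, a minimal end-to-end sketch (the vocabulary and corpus below are made up):
from sklearn.feature_extraction.text import TfidfVectorizer
vocabulary = "cat dog mat log".split()
corpus = ["the cat sat on the mat", "the dog sat on the log"]
vect = TfidfVectorizer(analyzer='word', stop_words='english',
                       vocabulary=vocabulary)
vect.fit(corpus)
doc = "a cat chased the dog"
doc_tfidf = vect.transform([doc])   # note the list: one document -> one row
print(doc_tfidf.shape)              # (1, 4), a single tf-idf vector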
|
Format APNS-style JSON message in Python for use with Amazon SNS
|
I'm creating an iOS app, and for our push notifications, we're using Amazon's Simple Notification Service (SNS).
SNS is wonderful, but the documentation is pretty sparse. I'm using boto, Amazon's Python library, and I've figured out how to send plain-text push notifications:
device_arn = 'MY ENDPOINT ARN GOES HERE'
plain_text_message = 'a plaintext message'
sns.publish(message=plain_text_message,target_arn=device_arn)
However, what's not clear from the documentation is how to create an an Apple Push Notification Service (APNS) message. I need to send a sound and a badge along with the push notification, but can't figure out how to format the JSON for the message.
Here's my best guess so far:
message = {'default':'default message', 'message':{'APNS_SANDBOX':{'aps':{'alert':'inner message','sound':'mySound.caf'}}}}
messageJSON = json.dumps(message,ensure_ascii=False)
sns.publish(message=messageJSON,target_arn=device_arn,message_structure='json')
When I run this code, though, all I see on the notification is "default message" - which means that Amazon SNS rejected my message's format, and displayed the default instead.
How do I format this JSON correctly?
|
I figured it out!
Turns out, the APNS payload has to be encoded as a string within the larger payload - and it totally works.
Here's the final, working code:
apns_dict = {'aps':{'alert':'inner message','sound':'mySound.caf'}}
apns_string = json.dumps(apns_dict,ensure_ascii=False)
message = {'default':'default message','APNS_SANDBOX':apns_string}
messageJSON = json.dumps(message,ensure_ascii=False)
sns.publish(message=messageJSON,target_arn=device_arn,message_structure='json')
Here's a walkthrough of what's going on in this code:
First, create the python dictionary for APNS:
apns_dict = {'aps':{'alert':'inner message','sound':'mySound.caf'}}
Second, take that dictionary, and turn it into a JSON-formatted string:
apns_string = json.dumps(apns_dict,ensure_ascii=False)
Third, put that string into the larger payload:
message = {'default':'default message','APNS_SANDBOX':apns_string}
Next, we encode that in its own JSON-formatted string:
messageJSON = json.dumps(message,ensure_ascii=False)
The resulting string can then be published using boto:
sns.publish(message=messageJSON,target_arn=device_arn,message_structure='json')
|
how to draw directed graphs using networkx in python?
|
I have some nodes coming from a script that I want to map on to a graph. In the below, I want to use Arrow to go from A to D and probably have the edge colored too in (red or something).
This is basically like a path from A to D when all other nodes are present. You can imagine each node as a city, and travelling from A to D requires directions (with arrow heads).
This code below builds the graph
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
G = nx.Graph()
G.add_edges_from(
[('A', 'B'), ('A', 'C'), ('D', 'B'), ('E', 'C'), ('E', 'F'),
('B', 'H'), ('B', 'G'), ('B', 'F'), ('C', 'G')])
val_map = {'A': 1.0,
'D': 0.5714285714285714,
'H': 0.0}
values = [val_map.get(node, 0.25) for node in G.nodes()]
nx.draw(G, cmap = plt.get_cmap('jet'), node_color = values)
plt.show()
but I want something like shown in the image.
Arrow heads as in the first image, and the edges in red color as in the second image. Thanks.
|
Fully fleshed out example with arrows for only the red edges:
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
G = nx.DiGraph()
G.add_edges_from(
[('A', 'B'), ('A', 'C'), ('D', 'B'), ('E', 'C'), ('E', 'F'),
('B', 'H'), ('B', 'G'), ('B', 'F'), ('C', 'G')])
val_map = {'A': 1.0,
'D': 0.5714285714285714,
'H': 0.0}
values = [val_map.get(node, 0.25) for node in G.nodes()]
# Specify the edges you want here
red_edges = [('A', 'C'), ('E', 'C')]
edge_colours = ['black' if not edge in red_edges else 'red'
for edge in G.edges()]
black_edges = [edge for edge in G.edges() if edge not in red_edges]
# Need to create a layout when doing
# separate calls to draw nodes and edges
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, cmap=plt.get_cmap('jet'), node_color = values)
nx.draw_networkx_edges(G, pos, edgelist=red_edges, edge_color='r', arrows=True)
nx.draw_networkx_edges(G, pos, edgelist=black_edges, arrows=False)
plt.show()
|
python3 nested list comprehension scope
|
The best way to explain my question is with an example:
example.py:
class A(object):
integers = [1, 2, 3]
singles = [i for i in integers]
class B(object):
integers = [1, 2, 3]
pairs = [(i, j) for i in integers for j in integers]
When I run this under python 2 it works fine, but under python 3 I get a name error for class B (but not class A):
$ python example.py ~
Traceback (most recent call last):
File "example.py", line 6, in <module>
class B(object):
File "example.py", line 8, in B
pairs = [(i, j) for i in integers for j in integers]
File "example.py", line 8, in <listcomp>
pairs = [(i, j) for i in integers for j in integers]
NameError: global name 'integers' is not defined
Why does only class B raise a NameError and why only under Python 3?
|
Class scopes are a bit strange in Python 3, but it's for a good reason.
In Python 2, the iteration variables (i and j in your examples) leaked out of list comprehensions and would be included in the outside scope. This is because they were developed early in Python 2's design, and they were based on explicit loops. As an example of how this is unexpected, check the values of B.i and B.j in Python 2 where you didn't get an error!
In Python 3, list comprehensions were changed to prevent this leaking. They are now implemented with a function (which has its own scope) that is called to produce the list value. This makes them work the same as generator expressions, which have always been functions under the covers.
A consequence of this is that in a class, a list comprehension usually can't see any class variables. This is parallel to a method not being able to see class variables directly (only though self or the explicit class name). For example, calling the method in the class below will give the same NameError exception you are seeing in your list comprehension:
class Foo:
classvar = "bar"
def blah(self):
print(classvar) # raises "NameError: global name 'classvar' is not defined"
There is an exception however. The sequence being iterated over by the first for clause of a list comprehension is evaluated outside of the inner function. This is why your A class works in Python 3. It does this so that generators can catch non-iterable objects immediately (rather than only when next is called on them and their code runs).
But it doesn't work for the inner for clause in the two-level comprehension in class B.
You can see the difference if you disassemble some functions that create list comprehensions using the dis module:
def f(lst):
return [i for i in lst]
def g(lst):
return [(i, j) for i in lst for j in lst]
Here's the disassembly of f:
>>> dis.dis(f)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0x0000000003CCA1E0, file "<pyshell#374>", line 2>)
3 LOAD_CONST 2 ('f.<locals>.<listcomp>')
6 MAKE_FUNCTION 0
9 LOAD_FAST 0 (lst)
12 GET_ITER
13 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
16 RETURN_VALUE
The first three lines show f loading up a precompiled code block and creating a function out of it (it names it f.<locals>.<listcomp>). This is the function used to make the list.
The next two lines show the lst variable being loaded and an iterator being made from it. This is happening within f's scope, not the inner function's. Then the <listcomp> function is called with that iterator as its argument.
This is comparable to class A. It gets the iterator from the class variable integers, just like you can use other kinds of references to previous class members in the definition of a new member.
Now, compare the disassembly of g, which makes pairs by iterating over the same list twice:
>>> dis.dis(g)
2 0 LOAD_CLOSURE 0 (lst)
3 BUILD_TUPLE 1
6 LOAD_CONST 1 (<code object <listcomp> at 0x0000000003CCA810, file "<pyshell#377>", line 2>)
9 LOAD_CONST 2 ('g.<locals>.<listcomp>')
12 MAKE_CLOSURE 0
15 LOAD_DEREF 0 (lst)
18 GET_ITER
19 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
22 RETURN_VALUE
This time, it builds a closure with the code object, rather than a basic function. A closure is a function with some "free" variables that refer to things in the enclosing scope. For the <listcomp> function in g, this works just fine, since its scope is a normal one. However, when you try to use the same sort of comprehension in class B the closure fails, since classes don't let functions they contain see into their scopes in that way (as demonstrated with the Foo class above).
It's worth noting that inner sequence values are not the only thing that can cause this issue. As in the previous question linked to by BrenBarn in a comment, you'll have the same issue if a class variable is referred to elsewhere in the list comprehension:
class C:
num = 5
products = [i * num for i in range(10)] # raises a NameError about num
You don't, however, get an error from multi-level list comprehensions where the inner for (or if) clauses only refer to the results of the preceding loops. This is because those values aren't part of a closure, just local variables inside the <listcomp> function's scope.
class D:
nested = [[1, 2, 3], [4, 5, 6]]
flattened = [item for inner in nested for item in inner] # works!
Like I said, class scopes are a bit strange.
|
TF-IDF implementations in python
|
What are the standard tf-idf implementations/api available in python? I've come across the one in nltk. I want to know the other libraries that provide this feature.
|
There is a package called scikit-learn which calculates tf-idf scores.
You can refer to my answer to this question:
Python: tf-idf-cosine: to find document similarity
and also see the example code there.
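For a rough idea of the scikit-learn API, a minimal sketch (the toy documents are made up):
from sklearn.feature_extraction.text import TfidfVectorizer
docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
vect = TfidfVectorizer()
tfidf = vect.fit_transform(docs)   # sparse (n_docs x n_terms) matrix
print(vect.vocabulary_)            # term -> column index mapping
print(tfidf.toarray())             # dense tf-idf scores per document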
|
OpenCV-Python dense SIFT
|
OpenCV has very good documentation on generating SIFT descriptors, but this is a version of "weak SIFT", where the key points are detected by the original Lowe algorithm. The OpenCV example reads something like:
img = cv2.imread('home.jpg')
gray= cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT()
kp = sift.detect(gray,None)
kp,des = sift.compute(gray,kp)
What I'm looking for is strong/dense SIFT, which does not detect keypoints but instead calculates SIFT descriptors for a set of patches (e.g. 16x16 pixels, 8 pixels padding) covering an image as a grid. As I understand it, there are two ways to do this in OpenCV:
I could divide the image in a grid myself, and somehow convert those patches to KeyPoints
I could use a grid-based feature detector
In other words, I'd have to replace the sift.detect() line with something that gives me the keypoints I require.
My problem is that the rest of the OpenCV documentation, especially wrt Python, is severely lacking, so I have no idea how to achieve either of these things. I see in the C++ documentation that there are keypoint detectors for grid, but I don't know how to use these from Python.
The alternative is to switch to VLFeat, which has a very good DSift/PHOW implementation but means that I'll have to switch from python to matlab.
Any ideas? Thanks.
|
You can use Dense SIFT in OpenCV 2.4.6 and later.
Creates a feature detector by its name.
cv2.FeatureDetector_create(detectorType)
Then "Dense" string in place of detectorType
e.g.:
sift = cv2.SIFT()
dense = cv2.FeatureDetector_create("Dense")
kp = dense.detect(imgGray)
kp, des = sift.compute(imgGray, kp)
|
How do I find which attributes my tree splits on, when using scikit-learn?
|
I have been exploring scikit-learn, making decision trees with both entropy and gini splitting criteria, and exploring the differences.
My question is: how can I "open the hood" and find out exactly which attributes the trees are splitting on at each level, along with their associated information values, so I can see where the two criteria make different choices?
So far, I have explored the 9 methods outlined in the documentation. They don't appear to allow access to this information. But surely this information is accessible? I'm envisioning a list or dict that has entries for node and gain.
Thanks for your help and my apologies if I've missed something completely obvious.
|
Directly from the documentation ( http://scikit-learn.org/0.12/modules/tree.html ):
from StringIO import StringIO
out = StringIO()
out = tree.export_graphviz(clf, out_file=out)
There is also the tree_ attribute in your decision tree object, which allows the direct access to the whole structure.
And you can simply read it
clf.tree_.children_left #array of left children
clf.tree_.children_right #array of right children
clf.tree_.feature #array of nodes splitting feature
clf.tree_.threshold #array of nodes splitting points
clf.tree_.value #array of nodes values
for more details look at the source code of export method
In general you can use the inspect module
from inspect import getmembers
print( getmembers( clf.tree_ ) )
to get all the object's elements
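As a rough sketch, those arrays can also be walked recursively to print the splitting feature and threshold at each level (the helper name and output format here are made up):
def print_splits(clf, feature_names=None, node=0, depth=0):
    tree = clf.tree_
    indent = "  " * depth
    if tree.children_left[node] == -1:   # -1 marks a leaf in the tree arrays
        print("{}leaf: value={}".format(indent, tree.value[node]))
        return
    feat = tree.feature[node]
    name = feature_names[feat] if feature_names is not None else "X[{}]".format(feat)
    print("{}split on {} <= {}".format(indent, name, tree.threshold[node]))
    print_splits(clf, feature_names, tree.children_left[node], depth + 1)
    print_splits(clf, feature_names, tree.children_right[node], depth + 1)
print_splits(clf)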
|
How to qcut with non unique bin edges?
|
My question is the same as this previous one:
Binning with zero values in pandas
however, I still want to include the 0 values in a fractile. Is there a way to do this? In other words, if I have 600 values, 50% of which are 0, and the rest are let's say between 1 and 100, how would I categorize all the 0 values in fractile 1, and then the rest of the non-zero values in fractile labels 2 to 10 (assuming I want 10 fractiles). Could I convert the 0's to nan, qcut the remaining non nan data into 9 fractiles (1 to 9), then add 1 to each label (now 2 to 10) and label all the 0 values as fractile 1 manually? Even this is tricky, because in my data set in addition to the 600 values, I also have another couple hundred which may already be nan before I would convert the 0s to nan.
Update 1/26/14:
I came up with the following interim solution. The problem with this code though, is if the high frequency value is not on the edges of the distribution, then it inserts an extra bin in the middle of the existing set of bins and throws everything a little (or a lot) off.
def fractile_cut(ser, num_fractiles):
num_valid = ser.valid().shape[0]
remain_fractiles = num_fractiles
vcounts = ser.value_counts()
high_freq = []
i = 0
while vcounts.iloc[i] > num_valid/ float(remain_fractiles):
curr_val = vcounts.index[i]
high_freq.append(curr_val)
remain_fractiles -= 1
num_valid = num_valid - vcounts[i]
i += 1
curr_ser = ser.copy()
curr_ser = curr_ser[~curr_ser.isin(high_freq)]
qcut = pd.qcut(curr_ser, remain_fractiles, retbins=True)
qcut_bins = qcut[1]
all_bins = list(qcut_bins)
for val in high_freq:
bisect.insort(all_bins, val)
cut = pd.cut(ser, bins=all_bins)
ser_fractiles = pd.Series(cut.labels + 1, index=ser.index)
return ser_fractiles
|
You ask about binning with non-unique bin edges, for which I have a fairly simple answer. In the case of your example, your intent and the behavior of qcut diverge where in the pandas.tools.tile.qcut function where bins are defined:
bins = algos.quantile(x, quantiles)
Which, because your data is 50% 0s, causes bins to be returned with multiple bin edges at the value 0 for any value of quantiles greater than 2. I see two possible resolutions. In the first, the fractile space is divided evenly, binning all 0s, but not only 0s, in the first bin. In the second, the fractile space is divided evenly for values greater than 0, binning all 0s and only 0s in the first bin.
import numpy as np
import pandas as pd
import pandas.core.algorithms as algos
from pandas import Series
In both cases, I'll create some random sample data fitting your description of 50% zeroes and the remaining values between 1 and 100
zs = np.zeros(300)
rs = np.random.randint(1, 100, size=300)
arr=np.concatenate((zs, rs))
ser = Series(arr)
Solution 1: bin 1 contains both 0s and low values
bins = algos.quantile(np.unique(ser), np.linspace(0, 1, 11))
result = pd.tools.tile._bins_to_cuts(ser, bins, include_lowest=True)
The result is
In[61]: result.value_counts()
Out[61]:
[0, 9.3] 323
(27.9, 38.2] 37
(9.3, 18.6] 37
(88.7, 99] 35
(57.8, 68.1] 32
(68.1, 78.4] 31
(78.4, 88.7] 30
(38.2, 48.5] 27
(48.5, 57.8] 26
(18.6, 27.9] 22
dtype: int64
Solution 2: bin1 contains only 0s
mx = np.ma.masked_equal(arr, 0, copy=True)
bins = algos.quantile(arr[~mx.mask], np.linspace(0, 1, 11))
bins = np.insert(bins, 0, 0)
bins[1] = bins[1]-(bins[1]/2)
result = pd.tools.tile._bins_to_cuts(arr, bins, include_lowest=True)
The result is:
In[133]: result.value_counts()
Out[133]:
[0, 0.5] 300
(0.5, 11] 32
(11, 18.8] 28
(18.8, 29.7] 30
(29.7, 39] 35
(39, 50] 26
(50, 59] 31
(59, 71] 31
(71, 79.2] 27
(79.2, 90.2] 30
(90.2, 99] 30
dtype: int64
There is work that could be done to Solution 2 to make it a little prettier I think, but you can see that the masked array is a useful tool to approach your goals.
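For completeness: later pandas releases added a duplicates argument to qcut itself, which avoids the private _bins_to_cuts call entirely, at the cost of merging the repeated zero-valued edges into a single bin rather than keeping ten fractiles. A minimal sketch, assuming pandas >= 0.20 and reusing the ser built above:
codes = pd.qcut(ser, 10, labels=False, duplicates='drop')
print(codes.value_counts())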
|
Deploying Django project with Gunicorn and nginx
|
I am new to Django; I would like to know how to set up my Django project with nginx and gunicorn. I read this guide: http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
but it doesn't work for my project.
I think that it is due to the particular structure of my project, that is:
├── icecream
│   ├── settings
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── local.py
│   │   └── production.py
│   ├── __init__.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
I got this layout from: https://github.com/twoscoops/django-twoscoops-project.
Can anyone help me, please?
thank you
|
I'll just summarize the steps for deploying a django application with nginx & gunicorn here:
1. Install nginx and add this to /etc/nginx/sites-enabled/default
server {
server_name 127.0.0.1 yourhost.example.com;
access_log /var/log/nginx/domain-access.log;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
# This line is important as it tells nginx to channel all requests to port 8000.
# We will later run our wsgi application on this port using gunicorn.
proxy_pass http://127.0.0.1:8000/;
}
}
2. Install gunicorn
$ pip install gunicorn
3. Start your django project using gunicorn and the wsgi.py file
$ cd </path/to/djangoproject_subdirectory_with_wsgi.py>
$ gunicorn wsgi -b 127.0.0.1:8000 --pid /tmp/gunicorn.pid --daemon
# --daemon parameter tells gunicorn to run in the background
# So that gunicorn continues to run even if you close your ssh session
# (You cannot remain ssh-ed into your server all the time right!)
Please do not use "wsgi.py"; you just have to use wsgi without the ".py" extension when calling gunicorn. This will start your wsgi application in the background.
4. Visit "yourhost.example.com" in your browser
Now your application must be up and running on your instance. Visit:
http://yourhost.example.com/
and see if your application is running. Do not forget to replace yourhost.example.com in the above and in the nginx configuration file with your actual domain first.
5. (Optional) Additional Notes
In Step 1, if confused; remove all existing lines from the /etc/nginx/sites-enabled/default file and put the above code inside it. (Or delete and create a new blank file and add the code)
If you are using virtualenv and you did a pip install gunicorn inside the virtualenv in Step 2, then run the Step 3 command with the respective virtualenv activated.
The pid of the gunicorn process is stored in /tmp/gunicorn.pid; incase you want to kill the existing gunicorn process and restart it.
supervisord might be used in conjunction which helps in restarting the gunicorn daemon automatically in case it dies due to some reason. This is useful in production environments.
|
Celery and Django simple example
|
Let's take a simple Django example.
app/models.py
from django.db import models
from django.contrib.auth.models import User
class UserProfile(models.Model):
user = models.OneToOneField(User)
token = models.CharField(max_length=32)
app/views.py
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from forms import RegisterForm
from utils.utilities import create_user
@csrf_exempt
def register_view(request):
if request.method == 'POST':
form = RegisterForm(request.POST)
if form.is_valid():
create_user(form.cleaned_data)
return HttpResponse('success')
utils/utilities.py
def create_user(data):
user = User.objects.create_user(username=data['username'], email=None, password=data['password'])
user.save()
profile = UserProfile()
profile.user = user
profile.token = generate_token()
profile.save()
Can somebody provide an implementation of Celery in this example? Imagine this is a large project with hundreds of requests per sec.
|
Assuming you have both Python's celery and django-celery installed, create the following tasks.py file under your app:
utils/tasks.py
from celery import task
# other imports
@task()
def create_user(data):
user = User.objects.create_user(
username=data['username'], email=None, password=data['password']
)
user.save()
profile = UserProfile()
profile.user = user
profile.token = generate_token()
profile.save()
return None
Delete your utils/utilities.py file in your example above.
In your code in views.py change the create_user call from:
create_user(form.cleaned_data)
to:
create_user.delay(form.cleaned_data)
Basically create_user is now a celery task; if you have the right Python packages installed (as mentioned above), code-wise (the implementation you ask for) that's it. delay executes your function asynchronously - i.e. the HTTP response is returned without waiting for the asynchronous task to complete.
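For what it's worth, delay returns an AsyncResult handle, so the view could also keep track of the task if it ever needs to check on it - a small illustrative sketch (inspecting ready() or get() requires a result backend to be configured):
result = create_user.delay(form.cleaned_data)
print(result.id)       # the task id; can be stored and looked up later
print(result.ready())  # False until a worker has actually finished the task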
Locally you can run a celery daemon process using python manage.py celeryd.
In production you have to set up the celery process itself using for instance upstart, supervisor or any other tool to control the lifecycle of such process.
Further details documented here.
|
Argparse: how to handle variable number of arguments (nargs='*')
|
I thought that nargs='*' was enough to handle a variable number of arguments. Apparently it's not, and I don't understand the cause of this error.
The code:
p = argparse.ArgumentParser()
p.add_argument('pos')
p.add_argument('foo')
p.add_argument('--spam', default=24, type=int, dest='spam')
p.add_argument('vars', nargs='*')
p.parse_args('1 2 --spam 8 8 9'.split())
I think the resulting namespace should be Namespace(pos='1', foo='2', spam='8', vars=['8', '9']). Instead, argparse gives this error:
usage: prog.py [-h] [--spam SPAM] pos foo [vars [vars ...]]
error: unrecognized arguments: 9 8
Basically, argparse doesn't know where to put those additional arguments... Why is that?
|
http://bugs.python.org/issue15112
argparse: nargs='*' positional argument doesn't accept any items if preceded by an option and another positional
is the relevant Python bugs issue.
When argparse parses ['1', '2', '--spam', '8', '8', '9'] it first tries to match ['1','2'] with as many of the positional arguments as possible. With your arguments the pattern matching string is AAA*, 1 argument each for pos and foo, and zero arguments for vars (remember * means ZERO_OR_MORE).
['--spam','8'] are handled by your --spam argument. Since vars has already been set to [], there is nothing left to handle ['8','9'].
The proposed change to argparse checks for the case where zero argument strings would satisfy the pattern but there are still optionals to be parsed. It then defers the handling of that * argument.
You might be able to get around this by first parsing the input with parse_known_args, and then handling the remainder with another call to parse_args.
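Here is a minimal sketch of that two-pass idea with the parser from the question - I simply leave vars undeclared and collect the leftovers by hand, so nothing here is official argparse behaviour beyond parse_known_args itself:
import argparse

p = argparse.ArgumentParser()
p.add_argument('pos')
p.add_argument('foo')
p.add_argument('--spam', default=24, type=int, dest='spam')

ns, leftovers = p.parse_known_args('1 2 --spam 8 8 9'.split())
ns.vars = leftovers
print(ns)  # Namespace(foo='2', pos='1', spam=8, vars=['8', '9'])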
To have complete freedom in interspersing optionals among positionals, in http://bugs.python.org/issue14191, I propose using parse_known_args with just the optionals, followed by a parse_args that only knows about the positionals. The parse_intermixed_args function that I posted there could be implemented in an ArgumentParser subclass, without modifying the argparse.py code itself.
Here's a way of handling subparsers. I've taken the parse_known_intermixed_args function, simplified it for presentation sake, and then made it the parse_known_args function of a Parser subclass. I had to take an extra step to avoid recursion.
Finally I changed the _parser_class of the subparsers Action, so each subparser uses this alternative parse_known_args. An alternative would be to subclass _SubParsersAction, possibly modifying its __call__.
from argparse import ArgumentParser
def parse_known_intermixed_args(self, args=None, namespace=None):
# self - argparse parser
# simplified from http://bugs.python.org/file30204/test_intermixed.py
parsefn = super(SubParser, self).parse_known_args # avoid recursion
positionals = self._get_positional_actions()
for action in positionals:
# deactivate positionals
action.save_nargs = action.nargs
action.nargs = 0
namespace, remaining_args = parsefn(args, namespace)
for action in positionals:
# remove the empty positional values from namespace
if hasattr(namespace, action.dest):
delattr(namespace, action.dest)
for action in positionals:
action.nargs = action.save_nargs
# parse positionals
namespace, extras = parsefn(remaining_args, namespace)
return namespace, extras
class SubParser(ArgumentParser):
parse_known_args = parse_known_intermixed_args
parser = ArgumentParser()
parser.add_argument('foo')
sp = parser.add_subparsers(dest='cmd')
sp._parser_class = SubParser # use different parser class for subparsers
spp1 = sp.add_parser('cmd1')
spp1.add_argument('-x')
spp1.add_argument('bar')
spp1.add_argument('vars',nargs='*')
print parser.parse_args('foo cmd1 bar -x one 8 9'.split())
# Namespace(bar='bar', cmd='cmd1', foo='foo', vars=['8', '9'], x='one')
|
How to convert an OrderedDict into a regular dict in python3
|
I am struggling with the following problem:
I want to convert an OrderedDict like this:
OrderedDict([('method', 'constant'), ('data', '1.225')])
into a regular dict like this:
{'method': 'constant', 'data':1.225}
because I have to store it as a string in a database. After the conversion the order is not important anymore, so I can spare the ordered feature anyway.
Thanks for any hints or solutions,
Ben
|
>>> from collections import OrderedDict
>>> OrderedDict([('method', 'constant'), ('data', '1.225')])
OrderedDict([('method', 'constant'), ('data', '1.225')])
>>> dict(OrderedDict([('method', 'constant'), ('data', '1.225')]))
{'data': '1.225', 'method': 'constant'}
>>>
However, to store it in a database it'd be much better to convert it to a format such as JSON or Pickle. With Pickle you even preserve the order!
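For example, a quick sketch of the JSON route - the dumped string keeps the key order, and loading it back gives you a plain dict:
import json
from collections import OrderedDict

od = OrderedDict([('method', 'constant'), ('data', '1.225')])
as_text = json.dumps(od)        # '{"method": "constant", "data": "1.225"}' - store this string
restored = json.loads(as_text)  # a regular dict again
print(restored)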
|
Mac + virtualenv + pip + postgresql = Error: pg_config executable not found
|
I was trying to install postgres for a tutorial, but pip gives me an error:
pip install psycopg
A snip of error I get:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
Where is pg_config in my virtualenv? How to configure it? I'm using virtualenv because I do not want a system-wide installation of postgres.
|
On Mac, the solution is to install postgresql:
brew install postgresql
On CentOS, the solution is to install postgresql-devel:
sudo yum install postgresql-devel
pg_config is in postgresql-devel package
|
Why is my Flask app being detected as node.js on Heroku
|
I recently made some changes to the structure of my Flask app hosted on Heroku and now Heroku has decided to detect it as a Node.js app instead of a Python app. My application uses both Python (Flask) for the backend API and JavaScript for the front end.
The changes I made included integrating npm and bower into my application to streamline the javascript development of the app.
|
The problem was introduced when I added a package.json to my root directory when I started using npm. It seems that the build detection script runs the nodejs detection first (here), which leads to this code executing:
if [ -f $1/package.json ]; then
echo "Node.js" && exit 0
so Heroku thinks it's a nodejs app and exits before the python detection has a chance to run.
To solve this I had to manually tell Heroku that I wanted a python build using this command
heroku config:set BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-python.
|
python pprint dictionary on multiple lines
|
I'm trying to get a pretty print of a dictionary but I'm having no luck:
>>> import pprint
>>> a = {'first': 123, 'second': 456, 'third': {1:1, 2:2}}
>>> pprint.pprint(a)
{'first': 123, 'second': 456, 'third': {1: 1, 2: 2}}
I wanted the output to be on multiple lines, something like this:
{'first': 123,
'second': 456,
'third': {1: 1,
2: 2}
}
can pprint do this? If not then which module does it? I'm using python 2.7.3
|
Use width=1 or width=-1:
In [33]: pprint.pprint(a, width=1)
{'first': 123,
'second': 456,
'third': {1: 1,
2: 2}}
|
uWSGI: No request plugin is loaded, you will not be able to manage requests
|
I've loaded uWSGI v 1.9.20, built from source. I'm getting this error, but how do I tell which plugin is needed?
!!!!!!!!!!!!!! WARNING !!!!!!!!!!!!!!
no request plugin is loaded, you will not be able to manage requests.
you may need to install the package for your language of choice, or simply load
it with --plugin.
!!!!!!!!!!! END OF WARNING !!!!!!!!!!
Which plugin should be loaded?
|
I had this problem and was stuck for hours.
My issue is different from the answer listed: make sure you have plugins = python in your uwsgi ini file and that you install the uwsgi Python plugin:
sudo apt-get install uwsgi-plugin-python
After I did the above my application worked. Obviously this is for python projects, but a similar approach is required for other projects.
|
How to check version of python modules?
|
I just installed the python modules: construct and statlib with setuptools like this:
# Install setuptools to be able to download the following
sudo apt-get install python-setuptools
# Install statlib for lightweight statistical tools
sudo easy_install statlib
# Install construct for packing/unpacking binary data
sudo easy_install construct
I want to be able to (programmatically) check their versions. Is there an equivalent to python --version I can run from the command line?
My python version is 2.7.3.
|
I suggest using pip in place of easy_install. With pip, you can list all installed packages and their versions with
pip freeze
For an individual module, you can try the __version__ attribute; however, there are modules without it:
$ pip freeze | grep lxml
lxml==2.3
$ python -c "import lxml; print lxml.__version__"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute '__version__'
Last, as you run your command with the sudo prefix, I guess you're installing to the global Python environment. I strongly advise you to take a look at Python virtual environment managers, for example virtualenvwrapper.
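If you want the version programmatically, from inside Python rather than the shell, pkg_resources (which ships with setuptools) works even for packages that lack __version__ - a small sketch, assuming the two packages from the question are installed:
import pkg_resources

for pkg in ('construct', 'statlib'):
    print('{} {}'.format(pkg, pkg_resources.get_distribution(pkg).version))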
|
Python Multiprocessing a for loop
|
I have an array (called data_inputs) containing the names of hundreds of astronomy images files. These images are then manipulated. My code works and takes a few seconds to process each image. However, it can only do one image at a time because I'm running the array through a 'for' loop:
for name in data_inputs:
sci=fits.open(name+'.fits')
#image is manipulated
There is no reason why I have to modify an image before any other, so is it possible to utilise all 4 cores on my machine with each core running through the for loop on a different image?
I've read about the multiprocessing module but I'm unsure how to implement it in my case.
I'm keen to get multiprocessing to work because eventually I'll have to run this on 10,000+ images.
|
You can simply use multiprocessing.Pool:
from multiprocessing import Pool
def process_image(name):
sci=fits.open('{}.fits'.format(name))
    # <process the image here>
if __name__ == '__main__':
pool = Pool(processes=4) # process per core
pool.map(process_image, data_inputs) # process the data_inputs iterable with the pool
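A small variation on the same idea, in case you want the pool sized from the machine and an explicit shutdown - it reuses process_image and data_inputs from above:
from multiprocessing import Pool, cpu_count

if __name__ == '__main__':
    pool = Pool(processes=cpu_count())   # one worker per available core
    pool.map(process_image, data_inputs)
    pool.close()                         # no more tasks will be submitted
    pool.join()                          # wait for the workers to finish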
|
How to display line numbers in IPython Notebook code cell by default
|
I would like my default display for IPython notebook code cells to include line numbers.
I learned from ipython notebook line number that I can toggle this with ctrl-M L, which is great, but manual. In order to include line numbers by default, I would need to add something to my ipython_notebook_config.py file. Unless I've missed something, there is not an explanation of how to do this in the documentation.
|
(For Jupyter 4+) In the latest Jupyter versions, they have documented the place to make config changes. So basically, in the Jupyter update, they've removed the concept of profiles, so the custom.js file location is now .jupyter/custom/custom.js, depending on where your .jupyter folder is. So if you don't have a custom folder or the custom.js file, just create them, then put these lines into the newly created file:
define([
'base/js/namespace',
'base/js/events'
],
function(IPython, events) {
events.on("app_initialized.NotebookApp",
function () {
require("notebook/js/cell").Cell.options_default.cm_config.lineNumbers = true;
}
);
}
);
The above is for setting line numbers to all your cell types at the same time. Code, Markdown and Raw cells will all get line numbers if you do this. If you want line numbers only for code cells, there is a simpler approach. Select a code cell, open the Chrome/Firefox JavaScript console, type the following lines:
var cell = Jupyter.notebook.get_selected_cell();
var config = cell.config;
var patch = {
CodeCell:{
cm_config:{lineNumbers:true}
}
}
config.update(patch)
Then reload the page. These changes persist because Jupyter will create a json config file in .jupyter/nbconfig to store them. This method is from this page of the documentation, so read the docs for more config changes that you can make.
(old answer)
In the latest version of IPython Notebook (v3.1.0), go to ~/.ipython/<profile_name>/static/custom/custom.js and add these lines:
define([
'base/js/namespace',
'base/js/events'
],
function(IPython, events) {
events.on("app_initialized.NotebookApp",
function () {
IPython.Cell.options_default.cm_config.lineNumbers = true;
}
);
}
);
The IPython.Cell.options_default.cm_config.lineNumbers = true; line alone will not work as it needs to load the IPython.Cell object before it tries this. Adding this line alone will cause an undefined error in the console. You need to encase it in the event handler as shown.
@William-Denman's code might have worked for an earlier version, but now you will need to do this.
EDIT: The line of code right in the middle has to be changed to require("notebook/js/cell").Cell.options_default.cm_config.lineNumbers = true; for the latest version of IPython/Jupyter (IPython 4.0.0, Jupyter 4.0.6). The old IPython.Cell object will also work, but your web console will throw a deprecation warning, so you can expect the old line to not be supported in future versions.
Also, in the latest IPython/Jupyter, which I'm running using the WinPython portable, I couldn't find the custom.js file within the profile folder. I found it (after much searching) in WinPython-64bit-2.7.10.3\python-2.7.10.amd64\Lib\site-packages\notebook\static\custom. I don't know if this is a WinPython thing or a Jupyter thing. If someone has Jupyter (latest version) installed normally (using pip or whatever) and can still find the custom.js file in the profile folder, please comment.
|
Reading a JSON file using Python
|
I am getting a bit of a headache just because a simple-looking, easy statement is throwing some errors in my face.
I have a json file called strings.json like this:
"strings": [ {"-name": "city", "#text": "City"}, {"-name": "phone", "#text": "Phone"}, ..., {"-name": "address", "#text": "Address"} ]
I want to read the json file, just that for now. I have these statements which I found out, but it's not working:
import json
from pprint import pprint
with open('strings.json') as json_data:
d = json.loads(json_data)
json_data.close()
pprint(d)
The error spitted on the console was this:
Traceback (most recent call last):
File "/home/.../android/values/manipulate_json.py", line 5, in <module>
d = json.loads(json_data)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
[Finished in 0.1s with exit code 1]
EDITED
Changed from json.loads to json.load
and got this:
Traceback (most recent call last):
File "/home/.../android/values/manipulate_json.py", line 5, in <module>
d = json.load(json_data)
File "/usr/lib/python2.7/json/__init__.py", line 278, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 369, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 829 column 1 - line 829 column 2 (char 18476 - 18477)
[Finished in 0.1s with exit code 1]
|
The json.load() method (without "s" in "load") can read a file directly:
import json
with open('strings.json') as json_data:
d = json.load(json_data)
print(d)
You were using the json.loads() method, which is used for string arguments only.
Edit:
The new message is a totally different problem. In that case, there is some invalid json in that file. For that, I would recommend running the file through a json validator.
There are also solutions for fixing json like for example How do I automatically fix an invalid JSON string?.
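As a side note on that "Extra data" message: it means the parser finished one complete JSON value and then found more content - for example two top-level values in the same file - which fits the snippet in the question having no single enclosing object. A tiny illustration:
import json

json.loads('{"a": 1}')            # parses fine
json.loads('{"a": 1} {"b": 2}')   # ValueError: Extra data ... (a second top-level value)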
|
Pandas: Get duplicated indexes
|
Given a dataframe, I want to get the duplicated indexes, which do not have duplicate values in the columns, and see which values are different.
Specifically, I have this dataframe:
import pandas as pd
wget https://www.dropbox.com/s/vmimze2g4lt4ud3/alt_exon_repeatmasker_intersect.bed
alt_exon_repeatmasker = pd.read_table('alt_exon_repeatmasker_intersect.bed', header=None, index_col=3)
In [74]: alt_exon_repeatmasker.index.is_unique
Out[74]: False
And some of the indexes have duplicate values in the 9th column (the type of DNA repetitive element in this location), and I want to know what are the different types of repetitive elements for individual locations (each index = a genome location).
I'm guessing this will require some kind of groupby and hopefully some groupby ninja can help me out.
To simplify even further, if we only have the index and the repeat type,
genome_location1 MIR3
genome_location1 AluJb
genome_location2 Tigger1
genome_location3 AT_rich
So the output I'd like to see all duplicate indexes and their repeat types, as such:
genome_location1 MIR3
genome_location1 AluJb
EDIT: added toy example
|
df.groupby(level=0).filter(lambda x: len(x) > 1)['type']
We added filter method for this kind of operation. You can also use masking and transform for equivalent results, but this is faster, and a little more readable too.
Important:
The filter method was introduced in version 0.12, but it failed to work on DataFrames/Series with nonunique indexes. The issue -- and a related issue with transform on Series -- was fixed for version 0.13, which should be released any day now.
Clearly, nonunique indexes are the heart of this question, so I should point out that this approach will not help until you have pandas 0.13. In the meantime, the transform workaround is the way to go. Beware that if you try that on a Series with a nonunique index, it too will fail.
There is no good reason why filter and transform should not be applied to nonunique indexes; it was just poorly implemented at first.
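If you are on a newer pandas anyway, Index.duplicated gives the same selection without any groupby - a minimal sketch, assuming the repeat type sits in a column called 'type':
dup_mask = df.index.duplicated(keep=False)  # True for every row whose index label occurs more than once
print(df.loc[dup_mask, 'type'])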
|
sqlalchemy flask: AttributeError: 'Session' object has no attribute '_model_changes' on session.commit()
|
I've seen a lot of problems with SessionMaker, but this one is slightly different. Not sure why, but sqlalchemy won't let my session object commit.
In my app, I have some code that does:
views.py
rec = session.query(Records).filter(Records.id==r).first()
n = rec.checkoutRecord(current_user.id)
session.add(n)
session.commit()
models.py:
class Records(UserMixin, CRUDMixin, Base):
__table__ = Table('main_records', Base.metadata, autoload=True)
def checkoutRecord(self,uid):
self.editing_uid = uid
self.date_out = datetime.now()
return self
def checkinRecord(self,uid):
self.editing_uid = uid
self.date_in = datetime.now()
return self
The program craps out on the commit(), giving the above exception. Interestingly, some test code which does not import flask, but does import sqlalchemy works fine and lets me commit without error.
The full stack-trace:
Traceback (most recent call last):
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/flask_login.py", line 663, in decorated_view
return func(*args, **kwargs)
File "/Users/bhoward/projects/PeerCoUI/mk2/peercoui/app/records/views.py", line 65, in select_view
session.commit()
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 149, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 721, in commit
self.transaction.commit()
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 354, in commit
self._prepare_impl()
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 323, in _prepare_impl
self.session.dispatch.before_commit(self.session)
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/sqlalchemy/event.py", line 372, in __call__
fn(*args, **kw)
File "/Users/bhoward/Envs/py27/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 162, in session_signal_before_commit
d = session._model_changes
AttributeError: 'Session' object has no attribute '_model_changes'
Full code for the project is in github: https://github.com/bhoward00/peercoui
Any advice appreciated
|
Yes, this is exactly the problem when using Flask-SQLAlchemy models mixed with a pure SQLAlchemy session. The thing is that Flask-SQLAlchemy subclasses the base Session from SQLAlchemy and adds some internals, one of which is the _model_changes dict. This dict is used for model modification tracking.
So if you want to use flask-sqlalchemy based models with regular sqlalchemy session, one way would be to just add the dict to the session (this is just example code):
def create_session(config):
engine = create_engine(config['DATABASE_URI'])
Session = sessionmaker(bind=engine)
session = Session()
session._model_changes = {}
return session
I had the same exact problem as you, so hopefully this should help you.
UPDATE:
There is new version available, which should be fixing this behaviour, quoting the 2.0 docs:
Changed how the builtin signals are subscribed to skip non Flask-SQLAlchemy sessions. This will also fix the attribute error about model changes not existing.
Docs: http://flask-sqlalchemy.pocoo.org/2.0/changelog/#version-2-0
|
Why is the cmp parameter removed from sort/sorted in Python3.0?
|
from python wiki:
In Py3.0, the cmp parameter was removed entirely (as part of a larger effort to simplify and unify the language, eliminating the conflict between rich comparisons and the __cmp__ methods).
I do not understand the reasoning why cmp is removed in py3.0
consider this example:
>>> def numeric_compare(x, y):
return x - y
>>> sorted([5, 2, 4, 1, 3], cmp=numeric_compare)
[1, 2, 3, 4, 5]
and now consider this version (recommended and compatible with 3.0):
def cmp_to_key(mycmp):
'Convert a cmp= function into a key= function'
class K(object):
def __init__(self, obj, *args):
self.obj = obj
def __lt__(self, other):
return mycmp(self.obj, other.obj) < 0
def __gt__(self, other):
return mycmp(self.obj, other.obj) > 0
def __eq__(self, other):
return mycmp(self.obj, other.obj) == 0
def __le__(self, other):
return mycmp(self.obj, other.obj) <= 0
def __ge__(self, other):
return mycmp(self.obj, other.obj) >= 0
def __ne__(self, other):
return mycmp(self.obj, other.obj) != 0
return K
>>> sorted([5, 2, 4, 1, 3], key=cmp_to_key(reverse_numeric))
[5, 4, 3, 2, 1]
The latter is very verbose and the same purpose is achieved in the former with just one line. On another note, I am writing a custom class for which I want to write the __cmp__ method. From my little reading across the web, it is recommended to write __lt__, __gt__, __eq__, __le__, __ge__, __ne__ and not __cmp__.
Again, why this recommendation? Can I not just define __cmp__, making life simpler?
|
For two objects a and b, __cmp__ requires that one of a < b, a == b, and a > b is true. But that might not be the case: consider sets, where it's very common that none of those are true, e.g. {1, 2, 3} vs {4, 5, 6}.
So __lt__ and friends were introduced. But that left Python with two separate ordering mechanisms, which is kind of ridiculous, so the less flexible one was removed in Python 3.
You don't actually have to implement all six comparison methods. You can use the @total_ordering decorator and only implement __lt__ and __eq__.
edit: Also note that, in the case of sorting, key functions can be more efficient than cmp: in the example you gave, Python may have to call your Python comparison function O(n²) times. But a key function only needs to be called O(n) times, and if the return value is then a builtin type (as it very often is), the O(n²) pairwise comparisons go through C.
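Both pieces mentioned above already live in the standard library (Python 2.7 / 3.2+), so here is a short sketch in place of the hand-written recipe from the question:
from functools import cmp_to_key, total_ordering

def reverse_numeric(x, y):
    return y - x

print(sorted([5, 2, 4, 1, 3], key=cmp_to_key(reverse_numeric)))  # [5, 4, 3, 2, 1]

@total_ordering
class Version(object):
    def __init__(self, num):
        self.num = num
    def __eq__(self, other):
        return self.num == other.num
    def __lt__(self, other):
        return self.num < other.num

print(Version(1) < Version(2))   # True; __gt__, __le__, __ge__ are filled in by the decorator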
|
How to correctly parse UTF-8 encoded HTML to Unicode strings with BeautifulSoup?
|
I'm running a Python program which fetches a UTF-8-encoded web page, and I extract some text from the HTML using BeautifulSoup.
However, when I write this text to a file (or print it on the console), it gets written in an unexpected encoding.
Sample program:
import urllib2
from BeautifulSoup import BeautifulSoup
# Fetch URL
url = 'http://www.voxnow.de/'
request = urllib2.Request(url)
request.add_header('Accept-Encoding', 'utf-8')
# Response has UTF-8 charset header,
# and HTML body which is UTF-8 encoded
response = urllib2.urlopen(request)
# Parse with BeautifulSoup
soup = BeautifulSoup(response)
# Print title attribute of a <div> which uses umlauts (e.g. können)
print repr(soup.find('div', id='navbutton_account')['title'])
Running this gives the result:
# u'Hier k\u0102\u015bnnen Sie sich kostenlos registrieren und / oder einloggen!'
But I would expect a Python Unicode string to render ö in the word können as \xf6:
# u'Hier k\xf6nnen Sie sich kostenlos registrieren und / oder einloggen!'
I've tried passing the 'fromEncoding' parameter to BeautifulSoup, and trying to read() and decode() the response object, but it either makes no difference, or throws an error.
With the command curl www.voxnow.de | hexdump -C, I can see that the web page is indeed UTF-8 encoded (i.e. it contains 0xc3 0xb6) for the ö character:
20 74 69 74 6c 65 3d 22 48 69 65 72 20 6b c3 b6 | title="Hier k..|
6e 6e 65 6e 20 53 69 65 20 73 69 63 68 20 6b 6f |nnen Sie sich ko|
73 74 65 6e 6c 6f 73 20 72 65 67 69 73 74 72 69 |stenlos registri|
I'm beyond the limit of my Python abilities, so I'm at a loss as to how to debug this further. Any advice?
|
As justhalf points out above, my question here is essentially a duplicate of this question.
The HTML content reported itself as UTF-8 encoded and, for the most part it was, except for one or two rogue invalid UTF-8 characters.
This apparently confuses BeautifulSoup about which encoding is in use, and when trying to first decode as UTF-8 when passing the content to BeautifulSoup like
this:
soup = BeautifulSoup(response.read().decode('utf-8'))
I would get the error:
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 186812-186813:
invalid continuation byte
Looking more closely at the output, there was an instance of the character Ü which was wrongly encoded as the invalid byte sequence 0xe3 0x9c, rather than the correct 0xc3 0x9c.
As the currently highest-rated answer on that question suggests, the invalid UTF-8 characters can be removed while parsing, so that only valid data is passed to BeautifulSoup:
soup = BeautifulSoup(response.read().decode('utf-8', 'ignore'))
|
How can a pandas merge preserve order?
|
I have two DataFrames in pandas, trying to merge them. But pandas keeps changing the order. I've tried setting indexes, resetting them, no matter what I do, I can't get the returned output to have the rows in the same order. Is there a trick?
Note we start out with the loans order 'a,b,c' but after the merge, it's "a,c,b".
import pandas
loans = [ 'a', 'b', 'c' ]
states = [ 'OR', 'CA', 'OR' ]
x = pandas.DataFrame({ 'loan' : loans, 'state' : states })
y = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]})
z = x.merge(y, how='left', on='state')
But now the order is no longer the original 'a,b,c'. Any ideas? I'm using pandas version 0.11.
|
Hopefully someone will provide a better answer, but in case no one does, this will definitely work, so…
Zeroth, I'm assuming you don't want to just end up sorted on loan, but to preserve whatever original order was in x, which may or may not have anything to do with the order of the loan column. (Otherwise, the problem is easier, and less interesting.)
First, you're asking it to sort based on the join keys. As the docs explain, that's the default when you don't pass a sort argument.
Second, if you don't sort based on the join keys, the rows will end up grouped together, such that two rows that merged from the same source row end up next to each other, which means you're still going to get a, c, b.
You can work around this by getting the rows grouped together in the order they appear in the original x by just merging again with x (on either side, it doesn't really matter), or by reindexing based on x if you prefer. Like this:
x.merge(x.merge(y, how='left', on='state', sort=False))
Alternatively, you can cram an x-index in there with reset_index, then just sort on that, like this:
x.reset_index().merge(y, how='left', on='state', sort=False).sort('index')
Either way obviously seems a bit wasteful and clumsy… so, as I said, hopefully there's a better answer that I'm just not seeing at the moment. But if not, that works.
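One more note, hedged because it depends on your pandas version: newer releases document that a how='left' merge preserves the order of the left frame's keys, so on a current pandas the original one-liner may already come back in a, b, c order:
z = x.merge(y, how='left', on='state')
print(z['loan'].tolist())  # expected ['a', 'b', 'c'] on recent pandas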
|
How do I get Flask to run on port 80?
|
I have a Flask server running through port 5000, and it's fine. I can access it at http://example.com:5000
But is it possible to simply access it at http://example.com? I'm assuming that means I have to change the port from 5000 to 80. But when I try that on Flask, I get this error message when I run it.
Traceback (most recent call last):
File "xxxxxx.py", line 31, in <module>
app.run(host="0.0.0.0", port=int("80"), debug=True)
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 772, in run
run_simple(host, port, self, **options)
File "/usr/local/lib/python2.6/dist-packages/werkzeug/serving.py", line 706, in run_simple
test_socket.bind((hostname, port))
File "<string>", line 1, in bind
socket.error: [Errno 98] Address already in use
Running lsof -i :80 returns
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
apache2 467 root 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 4413 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14346 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14570 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14571 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14573 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
Do I need to kill these processes first? Is that safe? Or is there another way to keep Flask running on port 5000 but have the main website domain redirect somehow?
Thanks.
|
1- Stop other applications that are using port 80.
2- Run the application on port 80:
if __name__ == '__main__':
app.run(host='0.0.0.0', port=80)
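One caveat: ports below 1024 are privileged on Linux and OS X, so even with Apache stopped the bind fails with a permission error unless the process runs as root (or you keep Flask on 5000 behind a reverse proxy). A hedged sketch:
# run as root, e.g. `sudo python yourapp.py`, or expect "Permission denied" on most systems
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)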
|
Annoying white space in bar chart (matplotlib, Python)
|
It's probably a trivial question, but I am trying to plot a bar chart with matplotlib and with rotated text on the x axis.
The code I'm using is shown below:
fig = plt.figure()
x_labels_list = []
for i in range(0, pow(2, N)):
x_labels_list.append(str(f(i))) # The function f() converts i to a binary string
ax = plt.subplot(111)
width = 1.0
bins = map(lambda x: x-width, range(1,pow(2,N)+1))
ax.bar(bins, my_data, width=width)
ax.set_xticks(map(lambda x: x-width/2, range(1,pow(2,N)+1)))
ax.set_xticklabels(x_labels_list, rotation=90, rotation_mode="anchor", ha="right")
It works perfectly, but I obtain an annoying white space on the right of the x axis, as shown by the red ellipse in the following picture:
Do you know how I can remove it? Thanks in advance!
|
Try calling plt.xlim() with the number of bins, e.g.
plt.xlim([0,bins.size])
Here is an example:
#make some data
N = 22
data = np.random.randint(1,10,N)
bin = np.arange(N)
width = 1
#plot it
ax = plt.subplot(111)
ax.bar(bin, data, width, color='r')
plt.show()
Output without plt.xlim():
Now plot it with plt.xlim using the number of bins to define the size:
#plot it
ax = plt.subplot(111)
ax.bar(bin, data, width, color='r')
plt.xlim([0,bin.size])
plt.show()
The result:
There may be a better way, but this should work for you.
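On newer matplotlib versions you can also just switch off the automatic horizontal padding instead of hard-coding the limits - a sketch of the same plot:
ax = plt.subplot(111)
ax.bar(bin, data, width, color='r')
ax.margins(x=0)   # no extra padding to the left or right of the bars
plt.show()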
|
How to write to an existing excel file without overwriting data (using pandas)?
|
I use pandas to write to excel file in the following fashion:
import pandas
writer = pandas.ExcelWriter('Masterfile.xlsx')
data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
writer.save()
Masterfile.xlsx already consists of number of different tabs.
Pandas correctly writes to the "Main" sheet; unfortunately, it also deletes all other tabs.
|
The pandas docs say it uses openpyxl for xlsx files. A quick look through the code in ExcelWriter gives a clue that something like this might work out:
import pandas
from openpyxl import load_workbook
book = load_workbook('Masterfile.xlsx')
writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
writer.save()
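On newer pandas/openpyxl combinations there is also a supported route that avoids poking at the writer's internals - a sketch, assuming pandas >= 0.24 and the data_filtered frame from the question:
import pandas

with pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl', mode='a') as writer:
    # on pandas >= 1.3, also pass if_sheet_exists='replace' when 'Main' already exists
    data_filtered.to_excel(writer, sheet_name='Main', columns=['Diff1', 'Diff2'])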
|
how to extract the decision rules from scikit-learn decision-tree?
|
Can I extract the underlying decision-rules (or 'decision paths') from a trained tree in a decision tree - as a textual list ?
something like: "if A>0.4 then if B<0.2 then if C>0.8 then class='X' etc...
If anyone knows of a simple way to do so, it will be very helpful.
|
I created my own function to extract the rules from the decision trees created by sklearn:
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
# dummy data:
df = pd.DataFrame({'col1':[0,1,2,3],'col2':[3,4,5,6],'dv':[0,1,0,1]})
# create decision tree
dt = DecisionTreeClassifier(max_depth=5, min_samples_leaf=1)
dt.fit(df.ix[:,:2], df.dv)
This function first starts with the leaf nodes (identified by -1 in the child arrays) and then recursively finds the parents. I call this a node's 'lineage'. Along the way, I grab the values I need to create if/then/else SAS logic:
def get_lineage(tree, feature_names):
left = tree.tree_.children_left
right = tree.tree_.children_right
threshold = tree.tree_.threshold
features = [feature_names[i] for i in tree.tree_.feature]
# get ids of child nodes
idx = np.argwhere(left == -1)[:,0]
def recurse(left, right, child, lineage=None):
if lineage is None:
lineage = [child]
if child in left:
parent = np.where(left == child)[0].item()
split = 'l'
else:
parent = np.where(right == child)[0].item()
split = 'r'
lineage.append((parent, split, threshold[parent], features[parent]))
if parent == 0:
lineage.reverse()
return lineage
else:
return recurse(left, right, parent, lineage)
for child in idx:
for node in recurse(left, right, child):
print node
The sets of tuples below contain everything I need to create SAS if/then/else statements. I do not like using do blocks in SAS which is why I create logic describing a node's entire path. The single integer after the tuples is the ID of the terminal node in a path. All of the preceding tuples combine to create that node.
In [1]: get_lineage(dt, df.columns)
(0, 'l', 0.5, 'col1')
1
(0, 'r', 0.5, 'col1')
(2, 'l', 4.5, 'col2')
3
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'l', 2.5, 'col1')
5
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'r', 2.5, 'col1')
6
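If you are on a current scikit-learn (0.21 or later), the library now ships a helper that prints essentially the same if/then view directly from the fitted tree above:
from sklearn.tree import export_text

print(export_text(dt, feature_names=['col1', 'col2']))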
|