takeOrdered descending Pyspark
I would like to sort K/V pairs by values and then take the biggest five values. I managed to do this by reversing K/V with a first map, sorting in descending order with False, reversing key/value back to the original order with a second map, and then taking the first 5, which are the biggest. The code is this: RDD.map(lambda x:(x[1],x[0])).sortByKey(False).map(lambda x:(x[1],x[0])).take(5) I know there is a takeOrdered action in PySpark, but I only managed to sort on values (and not on keys), and I don't know how to get a descending sort: RDD.takeOrdered(5,key = lambda x: x[1])
Sort by keys (ascending): RDD.takeOrdered(5, key = lambda x: x[0]) Sort by keys (descending): RDD.takeOrdered(5, key = lambda x: -x[0]) Sort by values (ascending): RDD.takeOrdered(5, key = lambda x: x[1]) Sort by values (descending): RDD.takeOrdered(5, key = lambda x: -x[1])
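A quick self-contained demo of the descending-by-value form (a minimal sketch, assuming a SparkContext named sc is available; the sample pairs are made up for illustration):

rdd = sc.parallelize([("a", 3), ("b", 1), ("c", 5), ("d", 2), ("e", 4), ("f", 6)])
print rdd.takeOrdered(5, key=lambda x: -x[1])
# [('f', 6), ('c', 5), ('e', 4), ('a', 3), ('d', 2)]

Note that the negation trick only works when the sort key is numeric; for non-numeric keys you would need a different approach.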
Test if dict contained in dict
Testing for equality works fine like this for Python dicts: first = {"one":"un", "two":"deux", "three":"trois"} second = {"one":"un", "two":"deux", "three":"trois"} print(first == second) # Result: True But now my second dict contains some additional keys I want to ignore: first = {"one":"un", "two":"deux", "three":"trois"} second = {"one":"un", "two":"deux", "three":"trois", "foo":"bar"} Is there a simple way to test if the first dict is part of the second dict, with all its keys and values? EDIT 1: This question is suspected to be a duplicate of How to test if a dictionary contains certain keys, but I'm interested in testing keys and their values. Just containing the same keys does not make two dicts equal. EDIT 2: OK, I got some answers now using four different methods, and proved that all of them work. As I need a fast process, I tested each for execution time. I created three identical dicts with 1000 items; keys and values were random strings of length 10. The second and third got some extra key-value pairs, and the last non-extra key of the third got a new value. So, first is a subset of second, but not of third. Using module timeit with 10000 repetitions, I got:

Method                                                           Time [s]
first.viewitems() <= second.viewitems()                          0.9
set(first.items()).issubset(second.items())                      7.3
len(set(first.items()) & set(second.items())) == len(first)     8.5
all(first[key] == second.get(key, sentinel) for key in first)   6.0

I guessed the last method would be the slowest, but it's in place 2. Method 1 beats them all. Thanks for your answers!
You can use a dictionary view: # Python 2 if first.viewitems() <= second.viewitems(): # true only if `first` is a subset of `second` # Python 3 if first.items() <= second.items(): # true only if `first` is a subset of `second` Dictionary views are the standard in Python 3, in Python 2 you need to prefix the standard methods with view. They act like sets, and <= tests if one of those is a subset of (or is equal to) another. Demo in Python 3: >>> first = {"one":"un", "two":"deux", "three":"trois"} >>> second = {"one":"un", "two":"deux", "three":"trois", "foo":"bar"} >>> first.items() <= second.items() True >>> first['four'] = 'quatre' >>> first.items() <= second.items() False This works for non-hashable values too, as the keys make the key-value pairs unique already. The documentation is a little confusing on this point, but even with mutable values (say, lists) this works: >>> first_mutable = {'one': ['un', 'een', 'einz'], 'two': ['deux', 'twee', 'zwei']} >>> second_mutable = {'one': ['un', 'een', 'einz'], 'two': ['deux', 'twee', 'zwei'], 'three': ['trois', 'drie', 'drei']} >>> first_mutable.items() <= second_mutable.items() True >>> first_mutable['one'].append('ichi') >>> first_mutable.items() <= second_mutable.items() False You could also use the all() function with a generator expression; use object() as a sentinel to detect missing values concisely: sentinel = object() if all(first[key] == second.get(key, sentinel) for key in first): # true only if `first` is a subset of `second` but this isn't as readable and expressive as using dictionary views.
What does this: s[s[1:] == s[:-1]] do in numpy?
I've been looking for a way to efficiently check for duplicates in a numpy array and stumbled upon a question that contained an answer using this code. What does this line mean in numpy? s[s[1:] == s[:-1]] Would like to understand the code before applying it. Looked in the Numpy doc but had trouble finding this information.
The slices [1:] and [:-1] mean all but the first and all but the last elements of the array: >>> import numpy as np >>> s = np.array((1, 2, 2, 3)) # four element array >>> s[1:] array([2, 2, 3]) # last three elements >>> s[:-1] array([1, 2, 2]) # first three elements therefore the comparison generates an array of boolean comparisons between each element s[x] and its "neighbour" s[x+1], which will be one shorter than the original array (as the last element has no neighbour): >>> s[1:] == s[:-1] array([False, True, False], dtype=bool) and using that array to index the original array gets you the elements where the comparison is True, i.e. the elements that are the same as their neighbour: >>> s[s[1:] == s[:-1]] array([2]) Note that this only identifies adjacent duplicate values.
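Since the comparison only flags adjacent duplicates, a common follow-up (not part of the question's snippet) is to sort the array first so that all duplicates become adjacent; indexing the sliced array instead of the original keeps the mask and the indexed array the same length, which newer NumPy versions require:

>>> s = np.array((2, 1, 3, 2))
>>> s_sorted = np.sort(s)
>>> s_sorted[1:][s_sorted[1:] == s_sorted[:-1]]
array([2])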
Multiprocessing IOError: bad message length
I get an IOError: bad message length when passing large arguments to the map function. How can I avoid this? The error occurs when I set N=1500 or bigger. The code is: import numpy as np import multiprocessing def func(args): i=args[0] images=args[1] print i return 0 N=1500 #N=1000 works fine images=[] for i in np.arange(N): images.append(np.random.random_integers(1,100,size=(500,500))) iter_args=[] for i in range(0,1): iter_args.append([i,images]) pool=multiprocessing.Pool() print pool pool.map(func,iter_args) In the docs of multiprocessing there is the function recv_bytes that raises an IOError. Could it be because of this? (https://python.readthedocs.org/en/v2.7.2/library/multiprocessing.html) EDIT If I use images as a numpy array instead of a list, I get a different error: SystemError: NULL result without error in PyObject_Call. A bit different code: import numpy as np import multiprocessing def func(args): i=args[0] images=args[1] print i return 0 N=1500 #N=1000 works fine images=[] for i in np.arange(N): images.append(np.random.random_integers(1,100,size=(500,500))) images=np.array(images) #new iter_args=[] for i in range(0,1): iter_args.append([i,images]) pool=multiprocessing.Pool() print pool pool.map(func,iter_args) EDIT2 The actual function that I use is: def func(args): i=args[0] images=args[1] image=np.mean(images,axis=0) np.savetxt("image%d.txt"%(i),image) return 0 Additionally, the iter_args do not contain the same set of images: iter_args=[] for i in range(0,1): rand_ind=np.random.random_integers(0,N-1,N) iter_args.append([i,images[rand_ind]])
You're creating a pool and sending all the images at once to func(). If you can get away with working on a single image at a time, try something like this, which runs to completion with N=10000 in 35s with Python 2.7.10 for me: import numpy as np import multiprocessing def func(args): i = args[0] img = args[1] print "{}: {} {}".format(i, img.shape, img.sum()) return 0 N=10000 images = ((i, np.random.random_integers(1,100,size=(500,500))) for i in xrange(N)) pool=multiprocessing.Pool(4) pool.imap(func, images) pool.close() pool.join() The key here is to use iterators so you don't have to hold all the data in memory at once. For instance, I converted images from an array holding all the data to a generator expression that creates each image only when needed. You could modify this to load your images from disk or whatever. I also used pool.imap instead of pool.map. If you can, try to load the image data in the worker function. Right now you have to serialize all the data and ship it across to another process. If your image data is larger, this might be a bottleneck. [update now that we know func has to handle all images at once] You could do an iterative mean on your images. Here's a solution without using multiprocessing. To use multiprocessing, you could divide your images into chunks, and farm those chunks out to the pool (a rough sketch follows below). import numpy as np N=10000 shape = (500,500) def func(images): average = np.zeros(shape) for i, img in images: average += img / float(N) return average images = ((i, np.full(shape,i)) for i in range(N)) print func(images)
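A rough sketch of that chunking idea (the chunk size of 500 and the synthetic per-chunk images are illustrative assumptions, not part of the original code):

import numpy as np
import multiprocessing

N = 10000
shape = (500, 500)

def chunk_sum(bounds):
    # Sum one chunk of images; here the images are generated inside the
    # worker, but they could equally be loaded from disk by index.
    start, stop = bounds
    total = np.zeros(shape)
    for i in xrange(start, stop):
        total += np.full(shape, float(i))  # stand-in for a real image
    return total

if __name__ == '__main__':
    chunk = 500  # illustrative chunk size
    bounds = [(i, min(i + chunk, N)) for i in xrange(0, N, chunk)]
    pool = multiprocessing.Pool(4)
    partial_sums = pool.map(chunk_sum, bounds)
    pool.close()
    pool.join()
    average = sum(partial_sums) / N
    print average.shape  # (500, 500)

Each worker only ever receives two integers, so nothing large has to be pickled and shipped across process boundaries.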
How to get the index of an integer from a list if the list contains a boolean?
I am just starting with Python. How to get index of integer 1 from a list if the list contains a boolean True object before the 1? >>> lst = [True, False, 1, 3] >>> lst.index(1) 0 >>> lst.index(True) 0 >>> lst.index(0) 1 I think Python considers 0 as False and 1 as True in the argument of the index method. How can I get the index of integer 1 (i.e. 2)? Also what is the reasoning or logic behind treating boolean object this way in list? As from the solutions, I can see it is not so straightforward.
The documentation says that Lists are mutable sequences, typically used to store collections of homogeneous items (where the precise degree of similarity will vary by application). You shouldn't store heterogeneous data in lists. The implementation of list.index only performs the comparison using Py_EQ (the == operator). In your case that comparison returns a truthy value because True and False have the values of the integers 1 and 0, respectively (the bool class is a subclass of int after all). However, you could use a generator expression and the built-in next function (to get the first value from the generator) like this: In [4]: next(i for i, x in enumerate(lst) if not isinstance(x, bool) and x == 1) Out[4]: 2 Here we check if x is an instance of bool before comparing x to 1. Keep in mind that next can raise StopIteration; in that case it may be desirable to (re-)raise ValueError (to mimic the behavior of list.index). Wrapping this all in a function: def index_same_type(it, val): gen = (i for i, x in enumerate(it) if type(x) is type(val) and x == val) try: return next(gen) except StopIteration: raise ValueError('{!r} is not in iterable'.format(val)) from None Some examples: In [34]: index_same_type(lst, 1) Out[34]: 2 In [35]: index_same_type(lst, True) Out[35]: 0 In [37]: index_same_type(lst, 42) ValueError: 42 is not in iterable
Django DRF with oAuth2 using DOT (django-oauth-toolkit)
I am trying to make DRF work with oAuth2 (django-oauth-toolkit). I was focusing on http://httplambda.com/a-rest-api-with-django-and-oauthw-authentication/ First I followed that instruction, but later, after getting authentication errors, I set up this demo: https://github.com/felix-d/Django-Oauth-Toolkit-Python-Social-Auth-Integration The result was the same: I couldn't generate an access token using this curl: curl -X POST -d "grant_type=password&username=<user_name>&password=<password>" -u "<client_id>:<client_secret>" http://127.0.0.1:8000/o/token/ I got this error: {"error": "unsupported_grant_type"} The oAuth2 application was set with grant_type password. I changed grant_type to "client credentials" and tried this curl: curl -X POST -d "grant_type=client_credentials" -u "<client_id>:<client_secret>" http://127.0.0.1:8000/o/token/ This worked and I got a generated auth token. After that I tried to get a list of all beers: curl -H "Authorization: Bearer <auth_token>" http://127.0.0.1:8000/beers/ And I got this response: {"detail":"You do not have permission to perform this action."} This is the content of views.py that should show the beers: from beers.models import Beer from beers.serializer import BeerSerializer from rest_framework import generics, permissions class BeerList(generics.ListCreateAPIView): serializer_class = BeerSerializer permission_classes = (permissions.IsAuthenticated,) def get_queryset(self): user = self.request.user return Beer.objects.filter(owner=user) def perform_create(self, serializer): serializer.save(owner=self.request.user) I am not sure what the problem can be here, first with "unsupported grant type" and later with the other curl call. This also happened to me when I did the basic tutorial from django-oauth-toolkit. I am using Django 1.8.2 and python3.4. Thanks for all the help! My settings.py looks like this import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) SECRET_KEY = 'hd#x!ysy@y+^*%i+klb)o0by!bh&7nu3uhg+5r0m=$3x$a!j@9' DEBUG = True TEMPLATE_DEBUG = True ALLOWED_HOSTS = [] TEMPLATE_CONTEXT_PROCESSORS = ( 'django.contrib.auth.context_processors.auth', ) INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'oauth2_provider', 'rest_framework', 'beers', ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) AUTHENTICATION_BACKENDS = ( 'django.contrib.auth.backends.ModelBackend', ) ROOT_URLCONF = 'beerstash.urls' WSGI_APPLICATION = 'beerstash.wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True STATIC_URL = '/static/' REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'oauth2_provider.ext.rest_framework.OAuth2Authentication', ) } OAUTH2_PROVIDER = { # this is the list of available scopes 'SCOPES': {'read': 'Read scope', 'write': 'Write scope'} }
I have tried the demo you mentioned and everything was fine. $ curl -X POST -d "grant_type=password&username=superuser&password=123qwe" -u"xLJuHBcdgJHNuahvER9pgqSf6vcrlbkhCr75hTCZ:nv9gzOj0BMf2cdxoxsnYZuRYTK5QwpKWiZc7USuJpm11DNtSE9X6Ob9KaVTKaQqeyQZh4KF3oZS4IJ7o9n4amzfqKJnoL7a2tYQiWgtYPSQpY6VKFjEazcqSacqTx9z8" http://127.0.0.1:8000/o/token/ {"access_token": "jlLpKwzReB6maEnjuJrk2HxE4RHbiA", "token_type": "Bearer", "expires_in": 36000, "refresh_token": "DsDWz1LiSZ3bd7NVuLIp7Dkj6pbse1", "scope": "read write groups"} $ curl -H "Authorization: Bearer jlLpKwzReB6maEnjuJrk2HxE4RHbiA" http://127.0.0.1:8000/beers/ [] In your case, I think, you created the application with the wrong "Authorization grant type". Use these application settings: Name: just a name of your choice Client Type: confidential Authorization Grant Type: Resource owner password-based This https://django-oauth-toolkit.readthedocs.org/en/latest/rest-framework/getting_started.html#step-3-register-an-application helped me a lot. Here is the database file I've created: https://www.dropbox.com/s/pxeyphkiy141i1l/db.sqlite3.tar.gz?dl=0 You can try it yourself. No source code changed at all. Django admin username - superuser, password - 123qwe.
Installing new versions of Python on Cygwin does not install Pip?
While I am aware of the option of installing Pip from source, I'm trying to avoid going down that path so that updates to Pip will be managed by Cygwin's package management. I've recently learned that the latest versions of Python include Pip. However, even though I have recently installed the latest versions of Python from the Cygwin repos, Bash doesn't recognize a valid Pip install on the system.

$ python -V
Python 2.7.10
$ python3 -V
Python 3.4.3
$ pip
bash: pip: command not found
$ pip2
bash: pip2: command not found
$ pip3
bash: pip3: command not found

Note that the installed Python 2.7.10 and Python 3.4.3 are both recent enough that they should include Pip. Is there something that I might have overlooked? Could there be a new install of Pip that isn't in the standard binary directories referenced in the $PATH? If the Cygwin packages of Python do in fact lack an inclusion of Pip, is that something that's notable enough to warrant a bug report to the Cygwin project?
cel self-answered this question in a comment above. For posterity, let's convert this helpfully working solution into a genuine answer. Unfortunately, Cygwin currently fails to: Provide pip, pip2, or pip3 packages. Install the pip and pip2 commands when the python package is installed. Install the pip3 command when the python3 package is installed. It's time to roll up our grubby command-line sleeves and get it done ourselves. What's the Catch? Since no pip packages are currently available, the answer to the specific question of "Is pip installable as a Cygwin package?" is technically "Sorry, son." That said, pip is trivially installable via a one-liner. This requires manually re-running said one-liner to update pip but has the distinct advantage of actually working. (Which is more than we usually get in Cygwin Land.) pip3 Installation, Please To install pip3, the Python 3-specific version of pip, under Cygwin: $ python3 -m ensurepip This assumes the python3 Cygwin package to have been installed, of course. pip2 Installation, Please To install both pip and pip2, the Python 2-specific versions of pip, under Cygwin: $ python -m ensurepip This assumes the python Cygwin package to have been installed, of course.
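One extra note (an addition, not part of the original answer): even if the pip wrapper scripts don't end up on your PATH, you can always invoke pip through the interpreter itself, which sidesteps PATH issues entirely:

$ python3 -m pip install <package>
$ python -m pip install <package>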
Creating deb or rpm with setuptools - data_files
I have a Python 3 project.

MKC
├── latex
│   ├── macros.tex
│   └── main.tex
├── mkc
│   ├── cache.py
│   ├── __init__.py
│   └── __main__.py
├── README.md
├── setup.py
└── stdeb.cfg

On install, I would like to move my latex files to a known directory, say /usr/share/mkc/latex, so I've told setuptools to include data files: data_files=[("/usr/share/mkc/latex", ["latex/macros.tex", "latex/main.tex"])], Now when I run ./setup.py bdist --formats=rpm or ./setup.py --command-packages=stdeb.command bdist_deb I get the following error: error: can't copy 'latex/macros.tex': doesn't exist or not a regular file Running just ./setup.py bdist works fine, so the problem must be in package creation.
When creating a deb file (I guess the same goes for an rpm file), ./setup.py --command-packages=stdeb.command bdist_deb first creates a source distribution and uses that archive for further processing. But your LaTeX files are not included there, so they're not found. You need to add them to the source package. This can be achieved by adding a MANIFEST.in with the contents: recursive-include latex *.tex distutils (from Python 3.1 on) would automatically include the data_files in a source distribution, while setuptools apparently works very differently.
Pandas: Add multiple empty columns to DataFrame
This may be a stupid question, but how do I add multiple empty columns to a DataFrame from a list? I can do: df["B"] = None df["C"] = None df["D"] = None But I can't do: df[["B", "C", "D"]] = None KeyError: "['B' 'C' 'D'] not in index"
You could use df.reindex to add new columns:

In [18]: df = pd.DataFrame(np.random.randint(10, size=(5,1)), columns=['A'])

In [19]: df
Out[19]:
   A
0  4
1  7
2  0
3  7
4  6

In [20]: df.reindex(columns=list('ABCD'))
Out[20]:
   A   B   C   D
0  4 NaN NaN NaN
1  7 NaN NaN NaN
2  0 NaN NaN NaN
3  7 NaN NaN NaN
4  6 NaN NaN NaN

reindex will return a new DataFrame, with columns appearing in the order they are listed:

In [31]: df.reindex(columns=list('DCBA'))
Out[31]:
    D   C   B  A
0 NaN NaN NaN  4
1 NaN NaN NaN  7
2 NaN NaN NaN  0
3 NaN NaN NaN  7
4 NaN NaN NaN  6

The reindex method has a fill_value parameter as well:

In [22]: df.reindex(columns=list('ABCD'), fill_value=0)
Out[22]:
   A  B  C  D
0  4  0  0  0
1  7  0  0  0
2  0  0  0  0
3  7  0  0  0
4  6  0  0  0
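If you prefer method chaining, a hedged alternative (assuming a pandas version with DataFrame.assign, 0.16+; on older Pythons the keyword order, and hence the column order, isn't guaranteed) achieves the same thing and, like reindex, returns a new DataFrame:

df = df.assign(**{col: None for col in ['B', 'C', 'D']})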
psycopg2: AttributeError: 'module' object has no attribute 'extras'
In my code I use the DictCursor from psycopg2.extras like this dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor) However, all of a sudden I get the following error when I load the cursor: AttributeError: 'module' object has no attribute 'extras' Maybe something is dorked in my installation but I have no clue where to start looking. I made some updates with pip, but as far as I know none of them were dependencies of psycopg2.
You need to explicitly import psycopg2.extras: import psycopg2.extras Importing the top-level psycopg2 package does not automatically import its submodules, so psycopg2.extras is only available as an attribute once you have imported it yourself.
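A minimal end-to-end sketch of where that import fits (the DSN and query are placeholders):

import psycopg2
import psycopg2.extras  # submodules must be imported explicitly

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
dict_cur.execute("SELECT 1 AS answer")
print dict_cur.fetchone()['answer']  # rows support dict-style access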
Why does '12345'.count('') return 6 and not 5?
>>> '12345'.count('') 6 Why does this happen? If there are only 5 characters in that string, why is the count function returning one more? Also, is there a more effective way of counting characters in a string?
str.count returns the number of non-overlapping occurrences of a substring in a string. When you count occurrences of '' you get 6 because the empty string matches at the beginning of the string, at the end, and in the gap between each pair of adjacent characters: len(s) + 1 positions in total. Use the len function to find the length of a string.
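You can verify the relationship between the two directly:

>>> s = '12345'
>>> s.count('')      # one match before each character, plus one at the end
6
>>> len(s) + 1
6
>>> len(s)           # the usual way to count characters
5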
Why is globals() a function in Python?
Python offers the function globals() to access a dictionary of all global variables. Why is that a function and not a variable? The following works: g = globals() g["foo"] = "bar" print foo # Works and outputs "bar" What is the rationale behind hiding globals in a function? And is it better to call it only once and store a reference somewhere or should I call it each time I need it? IMHO, this is not a duplicate of Reason for globals() in Python?, because I'm not asking why globals() exist but rather why it must be a function (instead of a variable __globals__).
Because how much work it is to build that dictionary may depend on the Python implementation. In CPython, globals are kept in just another mapping, and calling the globals() function returns a reference to that mapping. But other Python implementations are free to create a separate dictionary for the object on demand, as needed. This mirrors the locals() function, which in CPython has to create a dictionary on demand because locals are normally stored in an array (local names are translated to array access in CPython bytecode). So you'd call globals() when you need access to the mapping of global names. Storing a reference to that mapping works in CPython, but don't count on this in other implementations.
Scrapy throws ImportError: cannot import name xmlrpc_client
After installing Scrapy via pip, and with Python 2.7.10: scrapy Traceback (most recent call last): File "/usr/local/bin/scrapy", line 7, in <module> from scrapy.cmdline import execute File "/Library/Python/2.7/site-packages/scrapy/__init__.py", line 48, in <module> from scrapy.spiders import Spider File "/Library/Python/2.7/site-packages/scrapy/spiders/__init__.py", line 10, in <module> from scrapy.http import Request File "/Library/Python/2.7/site-packages/scrapy/http/__init__.py", line 12, in <module> from scrapy.http.request.rpc import XmlRpcRequest File "/Library/Python/2.7/site-packages/scrapy/http/request/rpc.py", line 7, in <module> from six.moves import xmlrpc_client as xmlrpclib ImportError: cannot import name xmlrpc_client But I can import the module: Python 2.7.10 (default, Jun 10 2015, 19:42:47) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scrapy >>> What's going on?
I've just fixed this issue on my OS X. Please backup your files first. sudo rm -rf /Library/Python/2.7/site-packages/six* sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six* sudo pip install six Scrapy 1.0.0 is ready to go.
How does the class_weight parameter in scikit-learn work?
I am having a lot of trouble understanding how the class_weight parameter in scikit-learn's Logistic Regression operates. The Situation I want to use logistic regression to do binary classification on a very unbalanced data set. The classes are labelled 0 (negative) and 1 (positive) and the observed data is in a ratio of about 19:1 with the majority of samples having negative outcome. First Attempt: Manually Preparing Training Data I split the data I had into disjoint sets for training and testing (about 80/20). Then I randomly sampled the training data by hand to get training data in different proportions than 19:1; from 2:1 -> 16:1. I then trained logistic regression on these different training data subsets and plotted recall (= TP/(TP+FN)) as a function of the different training proportions. Of course, the recall was computed on the disjoint TEST samples which had the observed proportions of 19:1. Note, although I trained the different models on different training data, I computed recall for all of them on the same (disjoint) test data. The results were as expected: the recall was about 60% at 2:1 training proportions and fell off rather fast by the time it got to 16:1. There were several proportions 2:1 -> 6:1 where the recall was decently above 5%. Second Attempt: Grid Search Next, I wanted to test different regularization parameters and so I used GridSearchCV and made a grid of several values of the C parameter as well as the class_weight parameter. To translate my n:m proportions of negative:positive training samples into the dictionary language of class_weight I thought that I just specify several dictionaries as follows: { 0:0.67, 1:0.33 } #expected 2:1 { 0:0.75, 1:0.25 } #expected 3:1 { 0:0.8, 1:0.2 } #expected 4:1 and I also included None and auto. This time the results were totally wacked. All my recalls came out tiny (< 0.05) for every value of class_weight except auto. So I can only assume that my understanding of how to set the class_weight dictionary is wrong. Interestingly, the class_weight value of 'auto' in the grid search was around 59% for all values of C, and I guessed it balances to 1:1? My Questions 1) How do you properly use class_weight to achieve different balances in training data from what you actually give it? Specifically, what dictionary do I pass to class_weight to use n:m proportions of negative:positive training samples? 2) If you pass various class_weight dictionaries to GridSearchCV, during cross-validation will it rebalance the training fold data according to the dictionary but use the true given sample proportions for computing my scoring function on the test fold? This is critical since any metric is only useful to me if it comes from data in the observed proportions. 3) What does the auto value of class_weight do as far as proportions? I read the documentation and I assume "balances the data inversely proportional to their frequency" just means it makes it 1:1. Is this correct? If not, can someone clarify? Thank you very much, any clarification would be greatly appreciated!
First off, it might not be good to just go by recall alone. You can simply achieve a recall of 100% by classifying everything as the positive class. I usually suggest using AUC for selecting parameters, and then finding a threshold for the operating point (say a given precision level) that you are interested in. For how class-weight works: It penalizes mistakes in samples of class i with class_weight[i] instead of 1. So higher class-weight means you want to put more emphasis on a class. From what you say it seems class 0 is 19 times more frequent than class 1. So you should increase the class-weight of class 1 relative to class 0, say {0:.1, 1:.9}. If the class_weight doesn't sum to 1, it will basically change the regularization parameter. For how class_weight="auto" works, you can have a look at this discussion. In the dev version you can use class_weight="balanced", which is easier to understand: it basically means replicating the smaller class until you have as many samples as in the larger one, but in an implicit way.
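As a minimal sketch of how the dictionary is passed in practice (the toy data and the 0.1/0.9 weights are illustrative, mirroring the numbers above):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (rng.rand(200) < 0.05).astype(int)  # heavily imbalanced, roughly 19:1

# Mistakes on the rare positive class are penalized nine times as heavily
clf = LogisticRegression(class_weight={0: 0.1, 1: 0.9}).fit(X, y)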
Convert Rust vector of tuples to a C compatible structure
Following these answers, I've currently defined a Rust 1.0 function as follows, in order to be callable from Python using ctypes: use std::vec; extern crate libc; use libc::{c_int, c_float, size_t}; use std::slice; #[no_mangle] pub extern fn convert_vec(input_lon: *const c_float, lon_size: size_t, input_lat: *const c_float, lat_size: size_t) -> Vec<(i32, i32)> { let input_lon = unsafe { slice::from_raw_parts(input_lon, lon_size as usize) }; let input_lat = unsafe { slice::from_raw_parts(input_lat, lat_size as usize) }; let combined: Vec<(i32, i32)> = input_lon .iter() .zip(input_lat.iter()) .map(|each| convert(*each.0, *each.1)) .collect(); return combined } And I'm setting up the Python part like so: from ctypes import * class Int32_2(Structure): _fields_ = [("array", c_int32 * 2)] rust_bng_vec = lib.convert_vec_py rust_bng_vec.argtypes = [POINTER(c_float), c_size_t, POINTER(c_float), c_size_t] rust_bng_vec.restype = POINTER(Int32_2) This seems to be OK, but I'm: Not sure how to transform combined (a Vec<(i32, i32)>) to a C-compatible structure, so it can be returned to my Python script. Not sure whether I should be returning a reference (return &combined?) and how I would have to annotate the function with the appropriate lifetime specifier if I did
The most important thing to note is that there is no such thing as a tuple in C. C is the lingua franca of library interoperability, and you will be required to restrict yourself to abilities of this language. It doesn't matter if you are talking between Rust and another high-level language; you have to speak C. There may not be tuples in C, but there are structs. A two-element tuple is just a struct with two members! Let's start with the C code that we would write: #include <stdio.h> #include <stdint.h> typedef struct { uint32_t a; uint32_t b; } tuple_t; typedef struct { void *data; size_t len; } array_t; extern array_t convert_vec(array_t lat, array_t lon); int main() { uint32_t lats[3] = {0, 1, 2}; uint32_t lons[3] = {9, 8, 7}; array_t lat = { .data = lats, .len = 3 }; array_t lon = { .data = lons, .len = 3 }; array_t fixed = convert_vec(lat, lon); tuple_t *real = fixed.data; for (int i = 0; i < fixed.len; i++) { printf("%d, %d\n", real[i].a, real[i].b); } return 0; } We've defined two structs — one to represent our tuple, and another to represent an array, as we will be passing those back and forth a bit. We will follow this up by defining the exact same structs in Rust and define them to have the exact same members (types, ordering, names). Importantly, we use #[repr(C)] to let the Rust compiler know to not do anything funky with reordering the data. extern crate libc; use std::slice; use std::mem; #[repr(C)] pub struct Tuple { a: libc::uint32_t, b: libc::uint32_t, } #[repr(C)] pub struct Array { data: *const libc::c_void, len: libc::size_t, } impl Array { unsafe fn as_u32_slice(&self) -> &[u32] { assert!(!self.data.is_null()); slice::from_raw_parts(self.data as *const u32, self.len as usize) } fn from_vec<T>(mut vec: Vec<T>) -> Array { // Important to make length and capacity match // A better solution is to track both length and capacity vec.shrink_to_fit(); let array = Array { data: vec.as_ptr() as *const libc::c_void, len: vec.len() as libc::size_t }; // Whee! Leak the memory, and now the raw pointer (and // eventually C) is the owner. mem::forget(vec); array } } #[no_mangle] pub extern fn convert_vec(lon: Array, lat: Array) -> Array { let lon = unsafe { lon.as_u32_slice() }; let lat = unsafe { lat.as_u32_slice() }; let vec = lat.iter().zip(lon.iter()) .map(|(&lat, &lon)| Tuple { a: lat, b: lon }) .collect(); Array::from_vec(vec) } We must never accept or return non-repr(C) types across the FFI boundary, so we pass across our Array. Note that there's a good amount of unsafe code, as we have to convert an unknown pointer to data (c_void) to a specific type. That's the price of being generic in C world. Let's turn our eye to Python now. Basically, we just have to mimic what the C code did: import ctypes class FFITuple(ctypes.Structure): _fields_ = [("a", ctypes.c_uint32), ("b", ctypes.c_uint32)] class FFIArray(ctypes.Structure): _fields_ = [("data", ctypes.c_void_p), ("len", ctypes.c_size_t)] # Allow implicit conversions from a sequence of 32-bit unsigned # integers. @classmethod def from_param(cls, seq): return cls(seq) # Wrap sequence of values. You can specify another type besides a # 32-bit unsigned integer. def __init__(self, seq, data_type = ctypes.c_uint32): array_type = data_type * len(seq) raw_seq = array_type(*seq) self.data = ctypes.cast(raw_seq, ctypes.c_void_p) self.len = len(seq) # A conversion function that cleans up the result value to make it # nicer to consume. 
def void_array_to_tuple_list(array, _func, _args): tuple_array = ctypes.cast(array.data, ctypes.POINTER(FFITuple)) return [tuple_array[i] for i in range(0, array.len)] lib = ctypes.cdll.LoadLibrary("./target/debug/libtupleffi.dylib") lib.convert_vec.argtypes = (FFIArray, FFIArray) lib.convert_vec.restype = FFIArray lib.convert_vec.errcheck = void_array_to_tuple_list for tupl in lib.convert_vec([1,2,3], [9,8,7]): print tupl.a, tupl.b Forgive my rudimentary Python. I'm sure an experienced Pythonista could make this look a lot prettier! Thanks to @eryksun for some nice advice on how to make the consumer side of calling the method much nicer. A word about ownership and memory leaks In this example code, we've leaked the memory allocated by the Vec. Theoretically, the FFI code now owns the memory, but realistically, it can't do anything useful with it. To have a fully correct example, you'd need to add another method that would accept the pointer back from the callee, transform it back into a Vec, then allow Rust to drop the value. This is the only safe way, as Rust is almost guaranteed to use a different memory allocator than the one your FFI language is using. Not sure whether I should be returning a reference and how I would have to annotate the function with the appropriate lifetime specifier if I did No, you don't want to (read: can't) return a reference. If you could, then the ownership of the item would end with the function call, and the reference would point to nothing. This is why we need to do the two-step dance with mem::forget and returning a raw pointer.
Weird numpy.sum behavior when adding zeros
I understand how mathematically-equivalent arithmetic operations can result in different results due to numerical errors (e.g. summing floats in different orders). However, it surprises me that adding zeros to a sum can change the result. I thought that this always holds for floats, no matter what: x + 0. == x. Here's an example. I expected all the lines to be exactly zero. Can anybody please explain why this happens? M = 4 # number of random values Z = 4 # number of additional zeros for i in range(20): a = np.random.rand(M) b = np.zeros(M+Z) b[:M] = a print a.sum() - b.sum() -4.4408920985e-16 0.0 0.0 0.0 4.4408920985e-16 0.0 -4.4408920985e-16 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.22044604925e-16 0.0 4.4408920985e-16 4.4408920985e-16 0.0 It seems not to happen for smaller values of M and Z. I also made sure a.dtype==b.dtype. Here is one more example, which also demonstrates that python's builtin sum behaves as expected: a = np.array([0.1, 1.0/3, 1.0/7, 1.0/13, 1.0/23]) b = np.array([0.1, 0.0, 1.0/3, 0.0, 1.0/7, 0.0, 1.0/13, 1.0/23]) print a.sum() - b.sum() => -1.11022302463e-16 print sum(a) - sum(b) => 0.0 I'm using numpy V1.9.2.
Short answer: You are seeing the difference between a + b + c + d and (a + b) + (c + d), which because of floating point inaccuracies is not the same. Long answer: Numpy implements pairwise summation as an optimization of both speed (it allows for easier vectorization) and rounding error. The numpy sum implementation can be found here (function pairwise_sum_@TYPE@). It essentially does the following: If the length of the array is less than 8, a regular for-loop summation is performed. This is why the strange result is not observed when b is also shorter than 8 elements (i.e. M + Z < 8 in your case) - the same for-loop summation is then used for both arrays. If the length is between 8 and 128, it accumulates the sums in 8 bins r[0]-r[7], then sums them by ((r[0] + r[1]) + (r[2] + r[3])) + ((r[4] + r[5]) + (r[6] + r[7])). Otherwise, it recursively sums two halves of the array. Therefore, in the first case you get a.sum() = a[0] + a[1] + a[2] + a[3] and in the second case b.sum() = (a[0] + a[1]) + (a[2] + a[3]), which leads to a.sum() - b.sum() != 0.
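You can reproduce the grouping difference by hand; a small sketch (the exact nonzero value, if any, depends on the random draw):

>>> a = np.random.rand(4)
>>> sequential = ((a[0] + a[1]) + a[2]) + a[3]  # how a.sum() adds 4 values
>>> pairwise = (a[0] + a[1]) + (a[2] + a[3])    # how the 8-element b.sum() groups them
>>> sequential - pairwise                       # often a few ULPs away from 0.0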
Check if a list is a rotation of another list that works with duplicates
I have this function for determining if a list is a rotation of another list: def isRotation(a,b): if len(a) != len(b): return False c=b*2 i=0 while a[0] != c[i]: i+=1 for x in a: if x!= c[i]: return False i+=1 return True e.g. >>> a = [1,2,3] >>> b = [2,3,1] >>> isRotation(a, b) True How do I make this work with duplicates? e.g. a = [3,1,2,3,4] b = [3,4,3,1,2] And can it be done in O(n)time?
The following meta-algorithm will solve it. Build a concatenation of a, e.g., a = [3,1,2,3,4] => aa = [3,1,2,3,4,3,1,2,3,4]. Then run any string adaptation of a string-matching algorithm, e.g., Boyer-Moore, to find b in aa. One particularly easy implementation, which I would try first, is to use Rabin-Karp as the underlying algorithm. In this, you would: calculate the Rabin fingerprint for b; calculate the Rabin fingerprint for aa[: len(b)], aa[1: len(b) + 1], ...; and compare the lists only when the fingerprints match. Note that the Rabin fingerprint for a sliding window can be calculated iteratively very efficiently (read about it in the Rabin-Karp link). If your list is of integers, you actually have a slightly easier time than for strings, as you don't need to think about what the numerical hash value of a letter should be: you can use the integer values themselves.
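A minimal sketch of that Rabin-Karp variant for integer lists (the base and modulus are illustrative choices, not prescribed above):

def is_rotation(a, b, base=257, mod=(1 << 61) - 1):
    # Roll a window of len(b) over a + a, comparing fingerprints and
    # confirming candidates with a direct slice comparison.
    if len(a) != len(b):
        return False
    if not a:
        return True
    n = len(b)
    hb = 0
    for x in b:
        hb = (hb * base + x) % mod
    power = pow(base, n, mod)  # base**n % mod, used to drop the old element
    aa = a + a
    h = 0
    for i, x in enumerate(aa):
        h = (h * base + x) % mod
        if i >= n:
            h = (h - aa[i - n] * power) % mod
        if i >= n - 1 and h == hb and aa[i - n + 1:i + 1] == b:
            return True
    return False

print is_rotation([3, 1, 2, 3, 4], [3, 4, 3, 1, 2])  # True
print is_rotation([3, 1, 2, 3, 4], [3, 4, 3, 2, 1])  # False

This runs in O(n) expected time, since slices are only compared when the fingerprints collide.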
Stopping list selection in Python 2.7
Imagine that I have an ordered list of tuples: s = [(0,-1), (1,0), (2,-1), (3,0), (4,0), (5,-1), (6,0), (7,-1)] Given a parameter X, I want to select all the tuples that have a first element equal to or greater than X, up to but not including the first tuple that has -1 as the second element. For example, if X = 3, I want to select the list [(3,0), (4,0)] One idea I had is: Get the cut-off key with E = min (x [0] for x in s if (x [0] >= X) and (x [1] == -1) ) Then select elements with keys between X and E: R = [x for x in s if X <= x [0] < E] That gives me what I want in R, but it seems really inefficient, involving two table scans. I could do it in a for loop, discarding tuples with keys too small and breaking when I hit the first blocking tuple. But a for loop runs like a dog compared to list selection. Is there a super-efficient, Python-esque (2.7) way of doing this?
You can simply filter the tuples from the list as a generator expression and then you can stop taking the values from the generator expression when you get the first tuple whose second element is -1, like this >>> s = [(0,-1), (1,0), (2,-1), (3,0), (4,0), (5,-1), (6,0), (7,-1)] >>> from itertools import takewhile >>> X = 3 >>> list(takewhile(lambda x: x[1] != -1, (item for item in s if item[0] >= X))) [(3, 0), (4, 0)] Here, the generator expression, (item for item in s if item[0] >= X) will give values one-by-one, on demand, (they are not generated all at once, so we save memory here) which are greater than or equal to X. Then, we take values from that generator expression, only till we find a tuple whose second element is not equal to -1, with itertools.takewhile.
Python: issue when using vars() dictionary
I have the following snippet: a, b = 1, 2 params = ['a', 'b'] res = {p: vars()[p] for p in params} Which gives me KeyError: 'a' whereas the following code works fine: a, b = 1, 2 params = ['a', 'b'] res = {} for p in params: res[p] = vars()[p] What's the difference here?
vars() without any argument acts like locals(), and since a dictionary comprehension has its own scope it has no variable named a or b. You can use eval() here; without explicit globals/locals arguments it resolves names in the usual LEGB manner, or you can pass the globals() dict explicitly to eval: >>> res = {p: eval(p) for p in params} >>> res {'a': 1, 'b': 2} But then again, the correct way would be to create a dictionary from the start if you want to access values using their names, as sketched below.
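To make that last point concrete, a sketch of keeping the values in a dictionary from the start instead of reaching for vars() or eval():

>>> values = {'a': 1, 'b': 2}   # instead of a, b = 1, 2
>>> params = ['a', 'b']
>>> {p: values[p] for p in params}
{'a': 1, 'b': 2}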
Python if not == vs if !=
What is the difference between these two lines of code: if not x == 'val': and if x != 'val': Is one more efficient than the other? Would it be better to use if x == 'val': pass else:
Using dis to look at the bytecode generated for the two versions: not == 4 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 2 (==) 9 UNARY_NOT 10 RETURN_VALUE != 4 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 3 (!=) 9 RETURN_VALUE The latter has fewer operations, and is therefore likely to be slightly more efficient. It was pointed out in the comments (thanks, @Quincunx) that where you have if foo != bar vs. if not foo == bar the number of operations is exactly the same, it's just that the COMPARE_OP changes and POP_JUMP_IF_TRUE switches to POP_JUMP_IF_FALSE: not ==: 2 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 2 (==) 9 POP_JUMP_IF_TRUE 16 != 2 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 3 (!=) 9 POP_JUMP_IF_FALSE 16 In this case, unless there was a difference in the amount of work required for each comparison, it's unlikely you'd see any performance difference at all. However, note that the two versions won't always be logically identical, as it will depend on the implementations of __eq__ and __ne__ for the objects in question. Per the data model documentation: There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false. For example: >>> class Dummy(object): def __eq__(self, other): return True def __ne__(self, other): return True >>> not Dummy() == Dummy() False >>> Dummy() != Dummy() True Finally, and perhaps most importantly: in general, where the two are logically identical, x != y is much more readable than not x == y.
Plotting categorical data with pandas and matplotlib
I have a data frame with categorical data:

  colour direction
1    red        up
2   blue        up
3  green      down
4    red      left
5    red     right
6 yellow      down
7   blue      down

and now I want to generate some graphs, like pie charts and histograms, based on the categories. Is it possible without creating dummy numeric variables? Something like df.plot(kind='hist')
You can simply use value_counts on the series: df.colour.value_counts().plot(kind='bar')
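A fuller sketch reproducing the question's frame (the plt.figure() call keeps the two charts on separate figures):

import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    'colour': ['red', 'blue', 'green', 'red', 'red', 'yellow', 'blue'],
    'direction': ['up', 'up', 'down', 'left', 'right', 'down', 'down'],
})

df.colour.value_counts().plot(kind='bar')     # bar chart of colour counts
plt.figure()
df.direction.value_counts().plot(kind='pie')  # pie chart works the same way
plt.show()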
get the index of the last negative value in a 2d array per column
I'm trying to get the index of the last negative value of an array per column (in order to slice it after). A simple working example on a 1D vector is: import numpy as np A = np.arange(10) - 5 A[2] = 2 print A # [-5 -4 2 -2 -1 0 1 2 3 4] idx = np.max(np.where(A <= 0)[0]) print idx # 5 A[:idx] = 0 print A # [0 0 0 0 0 0 1 2 3 4] Now I want to do the same thing on each column of a 2D array: A = np.arange(10) - 5 A[2] = 2 A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1)) print A2 # [[-5 -4 2 -2 -1 0 1 2 3 4] # [-7 -6 0 -4 -3 -2 -1 0 1 2] # [-4 -3 3 -1 0 1 2 3 4 5]] And I would like to obtain: print A2 # [[0 0 0 0 0 0 1 2 3 4] # [0 0 0 0 0 0 0 0 1 2] # [0 0 0 0 0 1 2 3 4 5]] but I can't manage to figure out how to translate the max/where statement to this 2D array...
You already have good answers, but I wanted to propose a potentially quicker variation using the function np.maximum.accumulate. Since your method for a 1D array uses max/where, you may also find this approach quite intuitive. (Edit: a quicker Cython implementation is added below.) The overall approach is very similar to the others; the mask is created with: np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1] This line of code does the following: (A2 < 0) creates a Boolean array, indicating whether a value is negative or not. The index [:, ::-1] flips this left-to-right. np.maximum.accumulate is used to return the cumulative maximum along each row (i.e. axis=1). For example [False, True, False] would become [False, True, True]. The final indexing operation [:, ::-1] flips this new Boolean array left-to-right. Then all that's left to do is to use the Boolean array as a mask to set the True values to zero. Borrowing the timing methodology and two functions from @Divakar's answer, here are the benchmarks for my proposed method: # method using np.maximum.accumulate def accumulate_based(A2): A2[np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]] = 0 return A2 # large sample array A2 = np.random.randint(-4, 10, size=(100000, 100)) A2c = A2.copy() A2c2 = A2.copy() The timings are: In [47]: %timeit broadcasting_based(A2) 10 loops, best of 3: 61.7 ms per loop In [48]: %timeit cumsum_based(A2c) 10 loops, best of 3: 127 ms per loop In [49]: %timeit accumulate_based(A2c2) # quickest 10 loops, best of 3: 43.2 ms per loop So using np.maximum.accumulate can be as much as 30% faster than the next fastest solution for arrays of this size and shape. As @tom10 points out, each NumPy operation processes arrays in their entirety, which can be inefficient when multiple operations are needed to get a result. An iterative approach which works through the array just once may fare better. Below is a naive function written in Cython which could be more than twice as fast as a pure NumPy approach. This function may be able to be sped up further using memory views. cimport cython import numpy as np cimport numpy as np @cython.boundscheck(False) @cython.wraparound(False) @cython.nonecheck(False) def cython_based(np.ndarray[long, ndim=2, mode="c"] array): cdef int rows, cols, i, j, seen_neg rows = array.shape[0] cols = array.shape[1] for i in range(rows): seen_neg = 0 for j in range(cols-1, -1, -1): if seen_neg or array[i, j] < 0: seen_neg = 1 array[i, j] = 0 return array This function works backwards through each row and starts setting values to zero once it has seen a negative value. Testing that it works: A2 = np.random.randint(-4, 10, size=(100000, 100)) A2c = A2.copy() np.array_equal(accumulate_based(A2), cython_based(A2c)) # True Comparing the performance of the function: In [52]: %timeit accumulate_based(A2) 10 loops, best of 3: 49.8 ms per loop In [53]: %timeit cython_based(A2c) 100 loops, best of 3: 18.6 ms per loop
Why is "1.real" a syntax error but "1 .real" valid in Python?
So I saw these two questions on twitter. How is 1.real a syntax error but 1 .real is not? >>> 1.real File "<stdin>", line 1 1.real ^ SyntaxError: invalid syntax >>> 1 .real 1 >>> 1. real File "<stdin>", line 1 1. real ^ SyntaxError: invalid syntax >>> 1 . real 1 >>> 1..real 1.0 >>> 1 ..real File "<stdin>", line 1 1 ..real ^ SyntaxError: invalid syntax >>> 1.. real 1.0 >>> 1 .. real File "<stdin>", line 1 1 .. real ^ SyntaxError: invalid syntax
I guess that the . is greedily parsed as part of a number, if possible, making it the float 1., instead of being part of the method call. Spaces are not allowed around the decimal point, but you can have spaces before and after the . in a method call. If the number is followed by a space, the parse of the number is terminated, so it's unambiguous. Let's look at the different cases and how they are parsed: >>> 1.real # parsed as (1.)real -> missing '.' >>> 1 .real # parsed as (1).real -> okay >>> 1. real # parsed as (1.)real -> missing '.' >>> 1 . real # parsed as (1).real -> okay >>> 1..real # parsed as (1.).real -> okay >>> 1 ..real # parsed as (1)..real -> one '.' too much >>> 1.. real # parsed as (1.).real -> okay >>> 1 .. real # parsed as (1)..real -> one '.' too much
Python multi-line with statement
What is a clean way to create a multi-line with in python? I want to open up several files inside a single with, but it's far enough to the right that I want it on multiple lines. Like this: class Dummy: def __enter__(self): pass def __exit__(self, type, value, traceback): pass with Dummy() as a, Dummy() as b, Dummy() as c: pass Unfortunately, that is a SyntaxError. So I tried this: with (Dummy() as a, Dummy() as b, Dummy() as c): pass Also a syntax error. However, this worked: with Dummy() as a, Dummy() as b,\ Dummy() as c: pass But what if I wanted to place a comment? This does not work: with Dummy() as a, Dummy() as b,\ # my comment explaining why I wanted Dummy() as c\ Dummy() as c: pass Nor does any obvious variation on the placement of the \s. Is there a clean way to create a multi-line with statement that allows comments inside it?
Given that you've tagged this Python 3, if you need to intersperse comments with your context managers, I would use a contextlib.ExitStack: with ExitStack() as stack: a = stack.enter_context(Dummy()) # Relevant comment b = stack.enter_context(Dummy()) # Comment about b c = stack.enter_context(Dummy()) # Further information This is equivalent to with Dummy() as a, Dummy() as b, Dummy() as c: This has the benefit that you can generate your context managers in a loop instead of needing to separately list each one. The documentation gives the example that if you want to open a bunch of files, and you have the filenames in a list, you can do with ExitStack() as stack: files = [stack.enter_context(open(fname)) for fname in filenames] If your context managers take so much screen space that you want to put comments between them, you probably have enough to want to use some sort of loop.
Insert element in Python list after every nth element
Say I have a Python list like this: letters = ['a','b','c','d','e','f','g','h','i','j'] I want to insert an 'x' after every nth element, let's say three characters in that list. The result should be: letters = ['a','b','c','x','d','e','f','x','g','h','i','x','j'] I understand that I can do that with looping and inserting. What I'm actually looking for is a Pythonish-way, a one-liner maybe?
I've got two one-liners. Given: >>> letters = ['a','b','c','d','e','f','g','h','i','j'] Use enumerate to get the index and add 'x' after every 3rd letter, i.e. when n % 3 == 2; then concatenate into a string and list() it. >>> list(''.join(l + 'x' * (n % 3 == 2) for n, l in enumerate(letters))) ['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j'] Use nested comprehensions to flatten a list of lists (a), sliced in groups of 3 with 'x' added if less than 3 from the end of the list. >>> [x for y in (letters[i:i+3] + ['x'] * (i < len(letters) - 2) for i in xrange(0, len(letters), 3)) for x in y] ['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j'] (a) [item for subgroup in groups for item in subgroup] flattens a jagged list of lists.
How do I get authentication in a telegram bot?
Telegram Bots are ready now. If we use the analogy of web browser and websites, the telegram client applications are like the browser clients. The Telegram Chatrooms are like websites. Suppose we have some information we only want to restrict to certain users, on the websites, we will have authentication. How do we achieve the same effect on the Telegram Bots? I was told that I can use deep linking. See description here I will reproduce it below: Create a bot with a suitable username, e.g. @ExampleComBot Set up a webhook for incoming messages Generate a random string of a sufficient length, e.g. $memcache_key = "vCH1vGWJxfSeofSAs0K5PA" Put the value 123 with the key $memcache_key into Memcache for 3600 seconds (one hour) Show our user the button https://telegram.me/ExampleComBot?start=vCH1vGWJxfSeofSAs0K5PA Configure the webhook processor to query Memcached with the parameter that is passed in incoming messages beginning with /start. If the key exists, record the chat_id passed to the webhook as telegram_chat_id for the user 123. Remove the key from Memcache. Now when we want to send a notification to the user 123, check if they have the field telegram_chat_id. If yes, use the sendMessage method in the Bot API to send them a message in Telegram. I know how to do step 1. I want to understand the rest. This is the image I have in mind when I try to decipher step 2. So the various telegram clients communicate with the Telegram Server when talking to ExampleBot on their applications. The communication is two-way. Step 2 suggests that the Telegram Server will update the ExampleBot Server via a webhook. A webhook is just a URL. So far, am I correct? What's the next step towards using this for authentication?
Forget about the webhook thingy. The deep linking explained: Let the user log in on an actual website with actual username-password authentication. Generate a unique hashcode (we will call it unique_code) Save unique_code->username to a database or key-value storage. Show the user the URL https://telegram.me/YOURBOTNAME?start=unique_code Now as soon as the user opens this URL in Telegram and presses 'Start', your bot will receive a text message containing '/start unique_code', where unique_code is of course replaced by the actual hashcode. Let the bot retrieve the username by querying the database or key-value storage for unique_code. Save chat_id->username to a database or key-value storage. Now when your bot receives another message, it can query message.chat.id in the database to check if the message is from this specific user. (And handle accordingly) Some code (using pyTelegramBotAPI): import telebot import time bot = telebot.TeleBot('TOKEN') def extract_unique_code(text): # Extracts the unique_code from the sent /start command. return text.split()[1] if len(text.split()) > 1 else None def in_storage(unique_code): # Should check if a unique code exists in storage return True def get_username_from_storage(unique_code): # Does a query to the storage, retrieving the associated username # Should be replaced by a real database-lookup. return "ABC" if in_storage(unique_code) else None def save_chat_id(chat_id, username): # Save the chat_id->username to storage # Should be replaced by a real database query. pass @bot.message_handler(commands=['start']) def send_welcome(message): unique_code = extract_unique_code(message.text) if unique_code: # if the '/start' command contains a unique_code username = get_username_from_storage(unique_code) if username: # if the username exists in our database save_chat_id(message.chat.id, username) reply = "Hello {0}, how are you?".format(username) else: reply = "I have no clue who you are..." else: reply = "Please visit me via a provided URL from the website." bot.reply_to(message, reply) bot.polling() while True: time.sleep(0) Note: the unique_code will not be shown as '/start unique_code', only '/start', in the Telegram client, but your bot will still receive '/start unique_code'. Another way I can think of off the top of my head is direct authentication within the bot via '/auth username password', but keep in mind that your username and password will be saved in chat history this way. Edit 1: Keep in mind that you do not have to use WebHooks to process messages. In fact, pyTelegramBotAPI uses the getUpdates method rather than a webhook.
Define True, if not defined, causes syntax error
I have found the following construct today in someone else's code: try: True, False except NameError: True = 1==1; False = 1==0 As I understand it, this defines True and False if they are not defined already. So if they are defined, it shouldn't throw the NameError exception, right? I have tried this for myself in a shell and it shows me SyntaxError: can't assign to keyword My question is: why does it even show the syntax error if True and False are defined? If True and False are available on my system, shouldn't it just go past the exception handling and not show a syntax error?
This code is written for Python 2.x and won't work on Python 3.x (in which True and False are true keywords). Since True and False are keywords in Python 3, you'll get a SyntaxError which you cannot catch. This code exists because of very old versions of Python. In Python 2.2 (released in 2001!), True and False did not exist as predefined names, so this code would provide compatible definitions so that later code could simply use True and False. When converting your Python 2.x code to Python 3.x, remove these lines; they are historical and have no use in Python 3.x. If you see these lines in someone else's code, they are a sure sign that the program was not written or ported for Python 3.
What is the most pythonic way to iterate over OrderedDict
I have an OrderedDict and in a loop I want to get the index, key and value. It surely can be done in multiple ways, i.e. a = collections.OrderedDict({…}) for i,b,c in zip(range(len(a)), a.iterkeys(), a.itervalues()): … But I would like to avoid range(len(a)) and shorten a.iterkeys(), a.itervalues() to something like a.iteritems(). With enumerate and iteritems it's possible to rephrase it as for i,d in enumerate(a.iteritems()): b,c = d But it requires unpacking inside the loop body. Is there a way to unpack in a for statement, or maybe a more elegant way to iterate?
You can use tuple unpacking in a for statement: for i, (key, value) in enumerate(a.iteritems()): # Do something with i, key, value >>> d = {'a': 'b'} >>> for i, (key, value) in enumerate(d.iteritems()): ... print i, key, value ... 0 a b Side Note: In Python 3.x, use dict.items(), which returns an iterable dictionary view. >>> for i, (key, value) in enumerate(d.items()): ... print(i, key, value)
Simple line plots using seaborn
I'm trying to plot a ROC curve using seaborn (Python). With matplotlib I simply use the function plot: plt.plot(one_minus_specificity, sensitivity, 'bs--') where one_minus_specificity and sensitivity are two lists of paired values. Is there a simple counterpart of the plot function in seaborn? I had a look at the gallery but I didn't find any straightforward method.
Since seaborn also uses matplotlib to do its plotting, you can easily combine the two. If you only want to adopt the styling of seaborn, the set_style function should get you started: import matplotlib.pyplot as plt import numpy as np import seaborn as sns sns.set_style("darkgrid") plt.plot(np.cumsum(np.random.randn(1000,1))) plt.show() Result:
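If you are on a newer seaborn release (0.9 or later), there is also a dedicated line-plotting function, so you could draw the curve through seaborn itself; a sketch with made-up stand-in data for the two lists from the question:

import matplotlib.pyplot as plt
import seaborn as sns  # lineplot requires seaborn >= 0.9

sns.set_style("darkgrid")
one_minus_specificity = [0.0, 0.1, 0.3, 1.0]  # dummy values for illustration
sensitivity = [0.0, 0.6, 0.9, 1.0]
sns.lineplot(x=one_minus_specificity, y=sensitivity)
plt.show()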
ImportError: No module named concurrent.futures.process
I have followed the procedure given in How to use valgrind with python? for checking memory leaks in my Python code. I have my Python source under the path /root/Test/ACD/atech and I have added the above path to PYTHONPATH. Everything works fine if I run the code with the default Python binary, located under /usr/bin/. I need to run the code with the Python binary I have built manually, which is located under /home/abcd/workspace/pyhon/bin/python Then I get the following error: from concurrent.futures.process import ProcessPoolExecutor ImportError: No module named concurrent.futures.process How can I solve this?
If you're using Python 2.7, you must install this module: pip install futures The futures feature has never been included in the Python 2.x core. However, it has been present in Python 3.x since Python 3.2.
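Once the backport is installed, the import from the question works unchanged on Python 2.7; a quick smoke test (assuming the pip install above succeeded):

# Python 2.7, after `pip install futures`
from concurrent.futures.process import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(square, [1, 2, 3])))  # [1, 4, 9]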
Efficiently build a graph of words with given Hamming distance
I want to build a graph from a list of words with Hamming distance of (say) 1, or to put it differently, two words are connected if they only differ by one letter (lol -> lot), so that given words = [ lol, lot, bot ] the graph would be { 'lol' : [ 'lot' ], 'lot' : [ 'lol', 'bot' ], 'bot' : [ 'lot' ] } The easy way is to compare every word in the list with every other and count the differing chars; sadly, this is an O(N^2) algorithm. Which algo/ds/strategy can I use to achieve better performance? Also, let's assume only latin chars, and all the words have the same length.
Assuming you store your dictionary in a set(), so that lookup is O(1) on average (worst case O(n)), you can generate all the valid words at Hamming distance 1 from a word: >>> import string >>> def neighbours(word): ... for j in range(len(word)): ... for d in string.ascii_lowercase: ... word1 = ''.join(d if i==j else c for i,c in enumerate(word)) ... if word1 != word and word1 in words: yield word1 ... >>> {word: list(neighbours(word)) for word in words} {'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']} If M is the length of a word and L the length of the alphabet (i.e. 26), the worst case time complexity of finding neighbouring words with this approach is O(L*M*N). The time complexity of the "easy way" approach is O(N^2). When is this approach better? When L*M < N, i.e. if considering only lowercase letters, when M < N/26. (I considered only the worst case here.) Note: the average length of an English word is 5.1 letters. Thus, you should consider this approach if your dictionary size is bigger than 132 words. It is probably possible to achieve better performance than this; however, this was really simple to implement. Experimental benchmark: The "easy way" algorithm (A1): from itertools import zip_longest def hammingdist(w1,w2): return sum(1 if c1!=c2 else 0 for c1,c2 in zip_longest(w1,w2)) def graph1(words): return {word: [n for n in words if hammingdist(word,n) == 1] for word in words} This algorithm (A2): def graph2(words): return {word: list(neighbours(word)) for word in words} Benchmarking code: import random import string from timeit import Timer for dict_size in range(100,6000,100): words = set([''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size)]) t1 = Timer(lambda: graph1(words)).timeit(10) t2 = Timer(lambda: graph2(words)).timeit(10) print('%d,%f,%f' % (dict_size,t1,t2)) Output: 100,0.119276,0.136940 200,0.459325,0.233766 300,0.958735,0.325848 400,1.706914,0.446965 500,2.744136,0.545569 600,3.748029,0.682245 700,5.443656,0.773449 800,6.773326,0.874296 900,8.535195,0.996929 1000,10.445875,1.126241 1100,12.510936,1.179570 ... I ran another benchmark with smaller steps of N to see it closer: 10,0.002243,0.026343 20,0.010982,0.070572 30,0.023949,0.073169 40,0.035697,0.090908 50,0.057658,0.114725 60,0.079863,0.135462 70,0.107428,0.159410 80,0.142211,0.176512 90,0.182526,0.210243 100,0.217721,0.218544 110,0.268710,0.256711 120,0.334201,0.268040 130,0.383052,0.291999 140,0.427078,0.312975 150,0.501833,0.338531 160,0.637434,0.355136 170,0.635296,0.369626 180,0.698631,0.400146 190,0.904568,0.444710 200,1.024610,0.486549 210,1.008412,0.459280 220,1.056356,0.501408 ... You can see the tradeoff point is very low (100 for dictionaries of words with length=3). For small dictionaries the O(N^2) algorithm performs slightly better, but it is easily beaten by the O(LMN) algorithm as N grows. For dictionaries with longer words, the O(LMN) algorithm remains linear in N, it just has a different slope, so the tradeoff moves slightly to the right (130 for length=5).
Swapping two sublists in a list
Given the following list: my_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] I want to be able to swap the sub-list my_list[2:4] with the sub-list my_list[7:10] as quickly and as efficiently as possible, to get the new list: new_list=[0, 1, 7, 8, 9, 4, 5, 6, 2, 3, 10, 11, 12] Here's my attempt: def swap(s1, s2, l): seg1=l[:s1.start]+l[s2] seg2=l[s1.stop : s2.start] seg3=l[s1]+l[s2.stop:] return seg1+seg2+seg3 print swap(slice(2,4), slice(7,10), [0,1,2,3,4,5,6,7,8,9,10,11,12]) This does print the desired output, although this way of doing it looks awful to me. Is there an easier and more elegant way of doing it that will not create four new lists on every function call? (I plan to call this function a lot.) I don't mind (actually I'd prefer) changing the original list rather than creating a new instance on every function call.
Slices can be assigned. Two variables can be swapped with a, b = b, a. Combine the two: >>> my_list[7:10], my_list[2:4] = my_list[2:4], my_list[7:10] >>> my_list [0, 1, 7, 8, 9, 4, 5, 6, 2, 3, 10, 11, 12] Beware that if the slices have different sizes, the order is important: if you swap in the opposite order, you end up with a different result, because the first assignment changes the lower-index items first and thereby shifts the higher-index items to different positions before the second assignment happens (see the demonstration below). Also, the slices must not overlap.
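To see why the order matters, here is the same swap done in the opposite order, using the same starting list as the question; the first assignment grows the list and shifts everything after index 4:

my_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
my_list[2:4], my_list[7:10] = my_list[7:10], my_list[2:4]
print(my_list)
# [0, 1, 7, 8, 9, 4, 5, 2, 3, 9, 10, 11, 12]  <- not the desired result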
Virtualenv Command Not Found
I couldn't get virtualenv to work despite various attempts. I installed virtualenv on Mac OS X using: pip install virtualenv and have also added the PATH into my .bash_profile. Every time I try to run the virtualenv command, it returns: -bash: virtualenv: command not found Every time I run pip install virtualenv, it returns: Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages I understand that on a Mac, virtualenv should be correctly installed in /usr/local/bin The virtualenv is indeed installed in /usr/local/bin, but whenever I try to run the virtualenv command, the command is not found. I've also tried to run the virtualenv command in the directory /usr/local/bin, and it gives me the same result: -bash: virtualenv: command not found These are the PATHs I added to my .bash_profile export PATH=$PATH:/usr/local/bin export PATH=$PATH:/usr/local/bin/python export PATH=$PATH:/Library/Framework/Python.framework/Version/2.7/lib/site-packages Any workarounds for this? Why is this the case?
I faced the same issue and this is how I solved it: The issue occurred for me because I installed virtualenv via pip as a regular user (not root). pip installed the packages into the directory ~/.local/lib/pythonX.X/site-packages When I ran pip as root or with admin privileges (sudo), it installed packages in /usr/lib/pythonX.X/dist-packages. This path might be different for you. The virtualenv command gets recognized only in the second scenario. So, to solve the issue, do pip uninstall virtualenv and then reinstall it with sudo pip install virtualenv (or install it as root).
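If you would rather keep the per-user install instead of reinstalling with sudo, two things are usually enough (the paths here assume a typical per-user pip layout; adjust to your system): first check where pip put the script, e.g. with which virtualenv or ls ~/.local/bin/virtualenv; then either add that directory to your PATH (export PATH="$HOME/.local/bin:$PATH" in your shell profile) or bypass PATH entirely by running it as a module with python -m virtualenv myenv.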
how do you create a linear regression forecast on time series data in python
I need to be able to create a python function for forecasting based on linear regression model with confidence bands on time series data: The function needs to take in an argument to how far out it forecasts. For example 1day, 7days, 30days, 90days etc. Depending on the argument, it will need to create holtwinters forcasting with confidence bands: My time series data looks like this: print series [{"target": "average", "datapoints": [[null, 1435688679], [34.870499801635745, 1435688694], [null, 1435688709], [null, 1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769], [null, 1435688784], [null, 1435688799], [null, 1435688814], [null, 1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874], [null, 1435688889], [null, 1435688904], [null, 1435688919], [null, 1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979], [38.180000209808348, 1435688994], [null, 1435689009], [null, 1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069], [null, 1435689084], [null, 1435689099], [null, 1435689114], [null, 1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174], [null, 1435689189], [null, 1435689204], [null, 1435689219], [null, 1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279], [30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324], [null, 1435689339], [null, 1435689354], [null, 1435689369], [null, 1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429], [null, 1435689444], [null, 1435689459], [null, 1435689474], [null, 1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534], [null, 1435689549], [null, 1435689564]]}] Once the function is done it needs to append the forecasted values to the above time series data called series and return series: [{"target": "average", "datapoints": [[null, 1435688679], [34.870499801635745, 1435688694], [null, 1435688709], [null, 1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769], [null, 1435688784], [null, 1435688799], [null, 1435688814], [null, 1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874], [null, 1435688889], [null, 1435688904], [null, 1435688919], [null, 1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979], [38.180000209808348, 1435688994], [null, 1435689009], [null, 1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069], [null, 1435689084], [null, 1435689099], [null, 1435689114], [null, 1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174], [null, 1435689189], [null, 1435689204], [null, 1435689219], [null, 1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279], [30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324], [null, 1435689339], [null, 1435689354], [null, 1435689369], [null, 1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429], [null, 1435689444], [null, 1435689459], [null, 1435689474], [null, 1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534], [null, 1435689549], [null, 1435689564]]},{"target": "Forecast", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]},{"target": "Upper", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]},{"target": "Lower", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]}] Has anyone done something like this in python? Any ideas how to start?
In the text of your question, you clearly state that you would like upper and lower bounds on your regression output, as well as the output prediction. You also mention using Holt-Winters algorithms for forecasting in particular. The packages suggested by other answerers are useful, but you might note that sklearn LinearRegression does not give you error bounds "out of the box", and statsmodels does not provide Holt-Winters right now. Therefore, I suggest trying this implementation of Holt-Winters. Unfortunately its license is unclear, so I can't reproduce it here in full. Now, I'm not sure whether you actually want Holt-Winters (seasonal) prediction, or Holt's linear exponential smoothing algorithm. I'm guessing the latter given the title of the post. Thus, you can use the linear() function of the linked library. The technique is described in detail here for interested readers. In the interests of not providing a link-only answer, I'll describe the main features here. A function is defined that takes the data, i.e. def linear(x, fc, alpha = None, beta = None): x is the data to be fit, fc is the number of timesteps that you want to forecast, and alpha and beta take their usual Holt-Winters meanings: roughly, parameters to control the amount of smoothing of the "level" and the "trend" respectively. If alpha or beta are not specified, they are estimated using scipy.optimize.fmin_l_bfgs_b to minimise the RMSE. The function simply applies the Holt algorithm by looping through the existing data points and then returns the forecast as follows: return Y[-fc:], alpha, beta, rmse where Y[-fc:] are the forecast points, alpha and beta are the values actually used and rmse is the root mean squared error. Unfortunately, as you can see, there are no upper or lower confidence intervals. By the way, we should probably refer to them as prediction intervals. Prediction intervals maths Holt's algorithm and the Holt-Winters algorithm are exponential smoothing techniques, and finding confidence intervals for predictions generated from them is a tricky subject. They have been referred to as "rule of thumb" methods and, in the case of the Holt-Winters multiplicative algorithm, as being without an "underlying statistical model". However, the final footnote to this page asserts that: It is possible to calculate confidence intervals around long-term forecasts produced by exponential smoothing models, by considering them as special cases of ARIMA models. (Beware: not all software calculates confidence intervals for these models correctly.) The width of the confidence intervals depends on (i) the RMS error of the model, (ii) the type of smoothing (simple or linear); (iii) the value(s) of the smoothing constant(s); and (iv) the number of periods ahead you are forecasting. In general, the intervals spread out faster as α gets larger in the SES model and they spread out much faster when linear rather than simple smoothing is used. We see here that an ARIMA(0,2,2) model is equivalent to a Holt linear model with additive errors. Prediction intervals code (i.e. how to proceed) You indicate in comments that you "can easily do this in R". I guess you may be used to the holt() function provided by the forecast package in R and are therefore expecting such intervals. In that case, you can adapt the Python library to give them to you on the same basis. Looking at the R holt code, we can see that it returns an object based on forecast(ets(...).
Under the hood - this eventually calls to this function class1, which returns a mean mu and variance var (as well as cj which I have to confess I do not understand). The variance is used to calculate the upper and lower bounds here. To do something similar in Python - we would need to produce something similar to the class1 R function that estimates the variance of each prediction. This function takes the residuals found in model fitting and multiplies them by a factor at each time step to get the variance at that timestep. In the particular case of the linear Holt's algorithm, the factor is the cumulative sum of alpha + k*beta where k is the number of timesteps' prediction. Once you have that variance at each prediction point, treat the errors as normally distributed and get the X% value from the normal distribution. Here's an idea how to do this in Python (using the code I linked as your linear function) #Copy, import or reimplement the RMSE and linear function from #https://gist.github.com/andrequeiroz/5888967 #factor in case there are not 1 timestep per day - in your case #assuming the timesteps are UTC epoch - I think they're 5 min # spaced i.e. 288 per day timesteps_per_day = 288 # Note - big assumption here - your known data will be regular in time # i.e. timesteps_per_day observations per day. From the timestamps this seems valid. # if you can't guarantee that - you'll need to interpolate the data def holt_predict(data, timestamps, forecast_days, pred_error_level = 0.95): forecast_timesteps = forecast_days*timesteps_per_day middle_predictions, alpha, beta, rmse = linear(data,int(forecast_timesteps)) cum_error = [beta+alpha] for k in range(1,forecast_timesteps): cum_error.append(cum_error[k-1] + k*beta + alpha) cum_error = np.array(cum_error) #Use some numpy multiplication to get the intervals var = cum_error * rmse**2 # find the correct ppf on the normal distribution (two-sided) p = abs(scipy.stats.norm.ppf((1-pred_error_level)/2)) interval = np.sqrt(var) * p upper = middle_predictions + interval lower = middle_predictions - interval fcast_timestamps = [timestamps[-1] + i * 86400 / timesteps_per_day for i in range(forecast_timesteps)] ret_value = [] ret_value.append({'target':'Forecast','datapoints': zip(middle_predictions, fcast_timestamps)}) ret_value.append({'target':'Upper','datapoints':zip(upper,fcast_timestamps)}) ret_value.append({'target':'Lower','datapoints':zip(lower,fcast_timestamps)}) return ret_value if __name__=='__main__': import numpy as np import scipy.stats from math import sqrt null = None data_in = [{"target": "average", "datapoints": [[null, 1435688679], [34.870499801635745, 1435688694], [null, 1435688709], [null, 1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769], [null, 1435688784], [null, 1435688799], [null, 1435688814], [null, 1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874], [null, 1435688889], [null, 1435688904], [null, 1435688919], [null, 1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979], [38.180000209808348, 1435688994], [null, 1435689009], [null, 1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069], [null, 1435689084], [null, 1435689099], [null, 1435689114], [null, 1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174], [null, 1435689189], [null, 1435689204], [null, 1435689219], [null, 1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279], [30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324], [null, 1435689339], 
[null, 1435689354], [null, 1435689369], [null, 1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429], [null, 1435689444], [null, 1435689459], [null, 1435689474], [null, 1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534], [null, 1435689549], [null, 1435689564]]}] #translate the data. There may be better ways if you're #prepared to use pandas / input data is proper json time_series = data_in[0]["datapoints"] epoch_in = [] Y_observed = [] for (y,x) in time_series: if y and x: epoch_in.append(x) Y_observed.append(y) #Pass in the number of days to forecast fcast_days = 30 res = holt_predict(Y_observed,epoch_in,fcast_days) data_out = data_in + res #data_out now holds the data as you wanted it. #Optional plot of results import matplotlib.pyplot as plt plt.plot(epoch_in,Y_observed) m,tstamps = zip(*res[0]['datapoints']) u,tstamps = zip(*res[1]['datapoints']) l,tstamps = zip(*res[2]['datapoints']) plt.plot(tstamps,u, label='upper') plt.plot(tstamps,l, label='lower') plt.plot(tstamps,m, label='mean') plt.show() N.B. The output I've given adds points as tuples into your object. If you really need lists, then replace zip(upper,fcast_timestamps) with map(list,zip(upper,fcast_timestamps)) where the code adds the upper, lower and Forecast dicts to the result. This code is for the particular case of Holt's linear algorithm - it is not a generic way to calculate correct prediction intervals. Important note Your sample input data seems to have a lot of null and only 3 genuine data points. This is highly unlikely to be a good basis for doing timeseries prediction - especially as they all seem to be within 15 minutes and you're trying to forecast up to 3 months! Indeed - if you feed that data into the R holt(), it will say: You've got to be joking. I need more data! I'm assuming you have a larger dataset to test on. I tried the code above on the stock market opening prices for 2015 and it seemed to give reasonable results (see below). You may think the prediction intervals look a little wide. This blog from the author of the R forecast module implies that is intentional, though: "confidence intervals for the mean are much narrower than prediction intervals"
Logarithmic plot of a cumulative distribution function in matplotlib
I have a file containing logged events. Each entry has a time and latency. I'm interested in plotting the cumulative distribution function of the latencies. I'm most interested in tail latencies so I want the plot to have a logarithmic y-axis. I'm interested in the latencies at the following percentiles: 90th, 99th, 99.9th, 99.99th, and 99.999th. Here is my code so far that generates a regular CDF plot: # retrieve event times and latencies from the file times, latencies = read_in_data_from_file('myfile.csv') # compute the CDF cdfx = numpy.sort(latencies) cdfy = numpy.linspace(1 / len(latencies), 1.0, len(latencies)) # plot the CDF plt.plot(cdfx, cdfy) plt.show() I know what I want the plot to look like, but I've struggled to get it. I want it to look like this (I did not generate this plot): Making the x-axis logarithmic is simple. The y-axis is the one giving me problems. Using set_yscale('log') doesn't work because it wants to use powers of 10. I really want the y-axis to have the same ticklabels as this plot. How can I get my data into a logarithmic plot like this one? EDIT: If I set the yscale to 'log', and ylim to [0.1, 1], I get the following plot: The problem is that a typical log scale plot on a data set ranging from 0 to 1 will focus on values close to zero. Instead, I want to focus on the values close to 1.
Essentially you need to apply the following transformation to your Y values: -log10(1-y). This imposes the only limitation that y < 1, so you should be able to have negative values on the transformed plot. Here's a modified example from matplotlib documentation that shows how to incorporate custom transformations into "scales": import numpy as np from numpy import ma from matplotlib import scale as mscale from matplotlib import transforms as mtransforms from matplotlib.ticker import FixedFormatter, FixedLocator class CloseToOne(mscale.ScaleBase): name = 'close_to_one' def __init__(self, axis, **kwargs): mscale.ScaleBase.__init__(self) self.nines = kwargs.get('nines', 5) def get_transform(self): return self.Transform(self.nines) def set_default_locators_and_formatters(self, axis): axis.set_major_locator(FixedLocator( np.array([1-10**(-k) for k in range(1+self.nines)]))) axis.set_major_formatter(FixedFormatter( [str(1-10**(-k)) for k in range(1+self.nines)])) def limit_range_for_scale(self, vmin, vmax, minpos): return vmin, min(1 - 10**(-self.nines), vmax) class Transform(mtransforms.Transform): input_dims = 1 output_dims = 1 is_separable = True def __init__(self, nines): mtransforms.Transform.__init__(self) self.nines = nines def transform_non_affine(self, a): masked = ma.masked_where(a > 1-10**(-1-self.nines), a) if masked.mask.any(): return -ma.log10(1-a) else: return -np.log10(1-a) def inverted(self): return CloseToOne.InvertedTransform(self.nines) class InvertedTransform(mtransforms.Transform): input_dims = 1 output_dims = 1 is_separable = True def __init__(self, nines): mtransforms.Transform.__init__(self) self.nines = nines def transform_non_affine(self, a): return 1. - 10**(-a) def inverted(self): return CloseToOne.Transform(self.nines) mscale.register_scale(CloseToOne) if __name__ == '__main__': import pylab pylab.figure(figsize=(20, 9)) t = np.arange(-0.5, 1, 0.00001) pylab.subplot(121) pylab.plot(t) pylab.subplot(122) pylab.plot(t) pylab.yscale('close_to_one') pylab.grid(True) pylab.show() Note that you can control the number of 9's via a keyword argument: pylab.figure() pylab.plot(t) pylab.yscale('close_to_one', nines=3) pylab.grid(True)
Is "__module__" guaranteed to be defined during class creation?
I was reading some code that looked basically like this: class Foo(object): class_name = __module__.replace('_', '-') To me, that looked really weird (__module__, what is that?) so I went and looked at the Python data model. A quick search shows that __module__ is a property of class objects and of function objects. However, there is no __module__ available in the global namespace (as can easily be verified by just trying to look at it and observing the NameError that results ...). I decided to chalk this up to implementation-specific behavior, but as a last check, I decided to test with other implementations I have handy. It turns out that this code executes with[1]: CPython 2.7.6, CPython 3.4.0, Jython 2.5.3, PyPy 2.2.1 (Python 2.7.3) My question is whether this behavior is actually defined anywhere in the language reference. I'm not sure why I'd want to, but could I safely rely on __module__ being in the class creation namespace, or did all the implementors just decide to do this the same way? [1] All Linux, but I doubt that matters ...
What the documentation does define is that classes will have a __module__ attribute. It seems the way CPython does this is that it defines a local variable __module__ at the beginning of the class block. This variable then becomes a class attribute like any other variable defined there. I can't find any documentation saying that __module__ has to be defined in this way. In particular, I can't find any documentation explicitly saying the attribute has to be defined as a local variable in the class body, instead of being assigned as a class attribute at some later stage in class creation. This answer to a different question mentions that it works this way, and shows how it appears in the bytecode. There was a Jython bug that they fixed by making it work the same as CPython. I'm guessing this is a CPython implementation detail that was carried over to other implementations. As far as I can tell the documentation doesn't actually say __module__ has to be available inside the class body, only on the class object afterwards. (A quick demonstration follows below.)
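A quick way to convince yourself of the behaviour described above (this demonstrates the CPython implementation detail; other implementations happen to match, as noted):

class Foo(object):
    # __module__ is already bound as a local name inside the class body,
    # before any statement here runs
    print(__module__)   # prints '__main__' when run as a script
    class_name = __module__.replace('_', '-')

print(Foo.__module__)   # '__main__'
print(Foo.class_name)   # '--main--'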
How to send an array using requests.post (Python)? "Value Error: Too many values to unpack"
I'm trying to send an array (list) of requests to the WheniWork API using requests.post, and I keep getting one of two errors. When I send the list as a list, I get an unpacking error, and when I send it as a string, I get an error asking me to submit an array. I think it has something to do with how requests handles lists. Here are the examples: url='https://api.wheniwork.com/2/batch' headers={"W-Token": "Ilovemyboss"} data=[{'url': '/rest/shifts', 'params': {'user_id': 0,'other_stuff':'value'}, 'method':'post'},{'url': '/rest/shifts', 'params': {'user_id': 1,'other_stuff':'value'}, 'method':'post'}] r = requests.post(url, headers=headers,data=data) print r.text # ValueError: too many values to unpack Simply wrapping the value for data in quotes: url='https://api.wheniwork.com/2/batch' headers={"W-Token": "Ilovemyboss"} data="[]" #removed the data here to emphasize that the only change is the quotes r = requests.post(url, headers=headers,data=data) print r.text #{"error":"Please include an array of requests to make.","code":5000}
You want to pass in JSON encoded data. See the API documentation: Remember — All post bodies must be JSON encoded data (no form data). The requests library makes this trivially easy: headers = {"W-Token": "Ilovemyboss"} data = [ { 'url': '/rest/shifts', 'params': {'user_id': 0, 'other_stuff': 'value'}, 'method': 'post', }, { 'url': '/rest/shifts', 'params': {'user_id': 1,'other_stuff': 'value'}, 'method':'post', }, ] requests.post(url, json=data, headers=headers) By using the json keyword argument the data is encoded to JSON for you, and the Content-Type header is set to application/json.
getattr and setattr on nested objects?
this is probably a simple problem, so hopefully it's easy for someone to point out my mistake, or to say whether this is even possible. I have an object that has multiple objects as properties. I want to be able to dynamically set the properties of these objects like so: class Person(object): def __init__(self): self.pet = Pet() self.residence = Residence() class Pet(object): def __init__(self,name='Fido',species='Dog'): self.name = name self.species = species class Residence(object): def __init__(self,type='House',sqft=None): self.type = type self.sqft=sqft if __name__=='__main__': p=Person() setattr(p,'pet.name','Sparky') setattr(p,'residence.type','Apartment') print p.__dict__ The output is: {'pet': <main.Pet object at 0x10c5ec050>, 'residence': <main.Residence object at 0x10c5ec0d0>, 'pet.name': 'Sparky', 'residence.type': 'Apartment'} As you can see, rather than having the name attribute set on the pet object of the person, a new attribute "pet.name" is created. I cannot specify person.pet to setattr because different child-objects will be set by the same method, which is parsing some text and filling in the object attributes if/when a relevant key is found. Is there an easy/built-in way to accomplish this? Or perhaps I need to write a recursive function to parse the string and call getattr multiple times until the necessary child-object is found, and then call setattr on that found object? Thank you!
You could use functools.reduce: import functools def rsetattr(obj, attr, val): pre, _, post = attr.rpartition('.') return setattr(rgetattr(obj, pre) if pre else obj, post, val) sentinel = object() def rgetattr(obj, attr, default=sentinel): if default is sentinel: _getattr = getattr else: def _getattr(obj, name): return getattr(obj, name, default) return functools.reduce(_getattr, [obj]+attr.split('.')) rgetattr and rsetattr are drop-in replacements for getattr and setattr, which can also handle dotted attr strings. import functools class Person(object): def __init__(self): self.pet = Pet() self.residence = Residence() class Pet(object): def __init__(self,name='Fido',species='Dog'): self.name = name self.species = species class Residence(object): def __init__(self,type='House',sqft=None): self.type = type self.sqft=sqft def rsetattr(obj, attr, val): pre, _, post = attr.rpartition('.') return setattr(rgetattr(obj, pre) if pre else obj, post, val) sentinel = object() def rgetattr(obj, attr, default=sentinel): if default is sentinel: _getattr = getattr else: def _getattr(obj, name): return getattr(obj, name, default) return functools.reduce(_getattr, [obj]+attr.split('.')) if __name__=='__main__': p = Person() print(rgetattr(p, 'pet.favorite.color', 'calico')) # 'calico' try: # Without a default argument, `rgetattr`, like `getattr`, raises # AttributeError when the dotted attribute is missing print(rgetattr(p, 'pet.favorite.color')) except AttributeError as err: print(err) # 'Pet' object has no attribute 'favorite' rsetattr(p, 'pet.name', 'Sparky') rsetattr(p, 'residence.type', 'Apartment') print(p.__dict__) print(p.pet.name) # Sparky print(p.residence.type) # Apartment
How to import all the environment variables in tox
I'm using the following in setenv to import environment variables from the invoking shell, but is there a way to import all the variables so that I don't need to import them one by one? E.g.: {env:TEMPEST_CONFIG:} and {env:TEMPEST_CONFIG_DIR:} are used to import these 2 variables. [testenv:nosetests] setenv = TEMPEST_CONFIG={env:TEMPEST_CONFIG:} TEMPEST_CONFIG_DIR={env:TEMPEST_CONFIG_DIR:} deps = {[testenv]deps} commands = find . -type f -name "*.pyc" -delete bash {toxinidir}/tools/setup.sh nosetests --with-xunit {posargs}
You can use passenv. If you pass the catch all wildcard * you have access to all environment variables from the parent environment: passenv=SPACE-SEPARATED-GLOBNAMES New in version 2.0. A list of wildcard environment variable names which shall be copied from the tox invocation environment to the test environment when executing test commands. If a specified environment variable doesn’t exist in the tox invocation environment it is ignored. You can use * and ? to match multiple environment variables with one name. minimal tox.ini to reproduce: [tox] envlist = py27 skipsdist = True [testenv] passenv = * whitelist_externals = echo commands = echo {env:MY_FANCY_ENV_VAR:} from my fancy env var :) invocation in linux/unix shell: MY_FANCY_ENV_VAR='hello' tox invocation on Windows cmd.exe: set MY_FANCY_ENV_VAR=hello & tox output: py27 create: /tmp/tt/.tox/py27 py27 installed: py27 runtests: PYTHONHASHSEED='2037875709' py27 runtests: commands[0] | echo from my fancy env var :) hello from my fancy env var :) _______________________ summary __________________________ py27: commands succeeded congratulations :)
Unexpected output from list(generator)
I have a list and a lambda function defined as In [1]: i = lambda x: a[x] In [2]: alist = [(1, 2), (3, 4)] Then I try two different methods to calculate a simple sum First method. In [3]: [i(0) + i(1) for a in alist] Out[3]: [3, 7] Second method. In [4]: list(i(0) + i(1) for a in alist) Out[4]: [7, 7] Both results are unexpectedly different. Why is that happening?
This behaviour has been fixed in Python 3. When you use a list comprehension [i(0) + i(1) for a in alist] you define a in the surrounding scope, which is accessible to i. In a new session, list(i(0) + i(1) for a in alist) will throw an error. >>> i = lambda x: a[x] >>> alist = [(1, 2), (3, 4)] >>> list(i(0) + i(1) for a in alist) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <genexpr> File "<stdin>", line 1, in <lambda> NameError: global name 'a' is not defined A list comprehension is not a generator: Generator expressions and list comprehensions. Generator expressions are surrounded by parentheses (“()”) and list comprehensions are surrounded by square brackets (“[]”). In your example, the generator expression has its own scope of variables with access, at most, to global variables. When you use it, i will look for a inside that scope and not find it. Try this in a new session: >>> i = lambda x: a[x] >>> alist = [(1, 2), (3, 4)] >>> [i(0) + i(1) for a in alist] [3, 7] >>> a (3, 4) Compare it to this in another session: >>> i = lambda x: a[x] >>> alist = [(1, 2), (3, 4)] >>> l = (i(0) + i(1) for a in alist) <generator object <genexpr> at 0x10e60db90> >>> a Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'a' is not defined >>> [x for x in l] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <genexpr> File "<stdin>", line 1, in <lambda> NameError: global name 'a' is not defined When you run list(i(0) + i(1) for a in alist) you pass a generator (i(0) + i(1) for a in alist) to the list class, which tries to convert it to a list in its own scope before returning the list. For this generator, the lambda function has no access to a, so a has no meaning inside it. The generator object <generator object <genexpr> at 0x10e60db90> has lost the variable name a. Then when list tries to drive the generator, the lambda function throws an error for the undefined a. The behaviour of list comprehensions in contrast with generators is also mentioned here: List comprehensions also "leak" their loop variable into the surrounding scope. This will also change in Python 3.0, so that the semantic definition of a list comprehension in Python 3.0 will be equivalent to list(). Python 2.4 and beyond should issue a deprecation warning if a list comprehension's loop variable has the same name as a variable used in the immediately surrounding scope. In Python 3: >>> i = lambda x: a[x] >>> alist = [(1, 2), (3, 4)] >>> [i(0) + i(1) for a in alist] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <listcomp> File "<stdin>", line 1, in <lambda> NameError: name 'a' is not defined
sympy: order of result from solving a quadratic equation
I solved a quadratic equation using sympy: import sympy as sp q,qm,k,c0,c,vt,vm = sp.symbols('q qm k c0 c vt vm') c = ( c0 * vt - q * vm) / vt eq1 = sp.Eq(qm * k * c / (1 + k * c) ,q) q_solve = sp.solve(eq1,q) Based on some testing I figured out that only q_solve[0] makes physical sense. Will sympy always put (b - sqrt(b**2 - 4*a*c))/2a in the first place? I guess it might change with an upgrade?
A simple test to answer your question is to symbolically solve the quadratic equation using sympy, as below: import sympy as sp a, b, c, x = sp.symbols('a b c x') sp.solve( a*x**2 + b*x + c, x) this gives you the result: [(-b + sqrt(-4*a*c + b**2))/(2*a), -(b + sqrt(-4*a*c + b**2))/(2*a)] which leads me to believe that in general the order is first the + sqrt() solution and then the - sqrt() solution. For your program, q_solve[0] gives you: (c0*k*vt + k*qm*vm + vt - sqrt(c0**2*k**2*vt**2 - 2*c0*k**2*qm*vm*vt + 2*c0*k*vt**2 + k**2*qm**2*vm**2 + 2*k*qm*vm*vt + vt**2))/(2*k*vm) This is still the x = (-b + sqrt(b**2 - 4*a*c))/(2*a) answer; the negative sign on the b term disappears because it is absorbed into the signs of the variables within the solution.
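If you would rather not depend on the ordering at all, you can select the physically meaningful root explicitly instead; a sketch where the sample numeric values and the physical criterion (0 <= q <= c0*vt/vm, i.e. a non-negative concentration c) are assumptions of mine, to be replaced by whatever constraints your system really has:

import sympy as sp

q, qm, k, c0, vt, vm = sp.symbols('q qm k c0 vt vm')
c = (c0*vt - q*vm)/vt
roots = sp.solve(sp.Eq(qm*k*c/(1 + k*c), q), q)

sample = {qm: 1.0, k: 2.0, c0: 3.0, vt: 4.0, vm: 5.0}  # made-up test values
candidates = [r.subs(sample) for r in roots]
# keep only roots satisfying the (assumed) physical constraint
physical = [v for v in candidates if 0 <= v <= sample[c0]*sample[vt]/sample[vm]]
print(candidates)
print(physical)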
What is the difference between "range(0,2)" and "list(range(0,2))"?
I need to understand the difference between range(0,2) and list(range(0,2)), using Python 2.7. Both return a list, so what exactly is the difference?
In Python 3.x, range(0,3) returns an immutable sequence object that you can iterate over; it does not produce a list, and it does not store all the elements of the range in memory. Instead, it produces the elements on the fly (as you iterate over it), whereas list(range(0,3)) produces a list (by iterating over all the elements and appending them to the list internally). Example: >>> range(0,3) range(0, 3) >>> list(range(0,3)) [0, 1, 2] If you only want to iterate over that range of values, range(0,3) would be faster than list(range(0,3)), because the latter has the overhead of producing a list before you start iterating over it. In Python 2.x, range(0,3) produces a list; Python 2.x instead also had an xrange() function with behavior similar to the range() function from Python 3.x (xrange was renamed to range in Python 3.x). For Python 3.5, from the documentation: Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices So you can do things like: >>> range(0,10)[5] 5 >>> range(0,10)[3:7] range(3, 7) >>> 5 in range(6,10) False >>> 7 in range(1,8) True And all of these are constant time operations, as can be seen from this test: In [11]: %timeit a = xrange(0,1000000)[1000] 1000000 loops, best of 3: 342 ns per loop In [12]: %timeit a = xrange(0,1000000)[10000] 1000000 loops, best of 3: 342 ns per loop In [13]: %timeit a = xrange(0,1000000)[100000] 1000000 loops, best of 3: 342 ns per loop In [14]: %timeit a = xrange(0,1000000)[999999] 1000000 loops, best of 3: 342 ns per loop In [15]: %timeit a = xrange(0,10000000)[9999999] 1000000 loops, best of 3: 339 ns per loop In [16]: %timeit a = xrange(0,1000000000000)[9999999999] 1000000 loops, best of 3: 341 ns per loop
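The memory difference is easy to see directly; a quick illustration in Python 3.x:

import sys

r = range(0, 1000000)
l = list(range(0, 1000000))
print(sys.getsizeof(r))  # a few dozen bytes, regardless of the range length
print(sys.getsizeof(l))  # several megabytes: every element is materialized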
What's the difference between select_related and prefetch_related in Django ORM?
In the Django docs: select_related() "follows" foreign-key relationships, selecting additional related-object data when it executes its query. prefetch_related() does a separate lookup for each relationship, and does the "joining" in Python. What does "doing the joining in Python" mean? Can someone illustrate with an example? My understanding is that for a foreign key relationship you use select_related, and for an M2M relationship you use prefetch_related. Is this correct?
Your understanding is mostly correct. You use select_related when the object that you're going to be selecting is a single object, so OneToOneField or a ForeignKey. You use prefetch_related when you're going to get a "set" of things, so ManyToManyFields as you stated or reverse ForeignKeys. Just to clarify what I mean by "reverse ForeignKeys" here's an example: class ModelA(models.Model): pass class ModelB(models.Model): a = ForeignKey(ModelA) ModelB.objects.select_related('a').all() # Forward ForeignKey relationship ModelA.objects.prefetch_related('modelb_set').all() # Reverse ForeignKey relationship The difference is that select_related does an SQL join and therefore gets the results back as part of the table from the SQL server. prefetch_related on the other hand executes another query and therefore reduces the redundant columns in the original object (ModelA in the above example). You may use prefetch_related for anything that you can use select_related for, it just generally doesn't make much sense since it's another query with no communications savings.
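To see the difference concretely, you can inspect what Django actually sends to the database; a rough sketch using the ModelA/ModelB example above, meant to be run in a Django shell with DEBUG=True (the exact SQL varies by backend):

from django.db import connection, reset_queries

# select_related: a single query containing a JOIN
print(ModelB.objects.select_related('a').query)

# prefetch_related: two separate queries, visible after evaluation
reset_queries()
list(ModelA.objects.prefetch_related('modelb_set'))
print(len(connection.queries))  # 2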
Different behavior in python script and python idle?
In the Python IDLE: >>> a=1.1 >>> b=1.1 >>> a is b False But when I put the code in a script and run it, I get a different result: $cat t.py a=1.1 b=1.1 print a is b $python t.py True Why does this happen? I know that is compares the ids of two objects, so why are the ids of the two objects the same in a script but different in IDLE? I also found that if I use a small int, for example 1, instead of 1.1, the result is the same in both the script and IDLE. Why do small ints and small floats behave differently? I am using CPython 2.7.5.
When Python executes a script file, the whole file is parsed first. You can notice this when you introduce a syntax error somewhere: regardless of where it is, it prevents any line from executing. Since Python parses the file first, literals can be loaded efficiently into memory. Because Python knows these are constants, all variables that represent the same constant value can point to the same object in memory, so the object is shared. This works for ints and floats, but even for strings; even when there is a constant expression that needs to be evaluated first: a = "foo" b = "foo" c = "fo" + "o" print(a is b) print(a is c) Now in IDLE, the behavior is very different: as an interactive interpreter, IDLE executes every line separately. So a = 1.1 and b = 1.1 are executed in separate contexts, which makes it impossible (or just very hard) to figure out that they both share the same constant literal value and could share the memory. So instead, the interpreter allocates two different objects, which causes the identity check using is to fail. For small integers, the situation is a bit different. Because they are often used, CPython stores a set of integers (in the range between -5 and 256) statically and ensures that every occurrence of these values points to the same int object. That’s why you get a different result for small integers than for any other object. See also the following questions: Python's "is" operator behaves unexpectedly with integers Weird Integer Cache inside Python 2.6
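The same effect is easy to observe interactively at the edge of the cached range (this is a CPython implementation detail, so the exact results are not guaranteed by the language):

>>> a = 256
>>> b = 256
>>> a is b    # 256 is inside the cached range (-5..256)
True
>>> a = 257
>>> b = 257
>>> a is b    # 257 is not cached, so two objects are created
False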
PEP 0492 - Python 3.5 async keyword
PEP 0492 adds the async keyword to Python 3.5. How does Python benefit from the use of this operator? The example that is given for a coroutine is async def read_data(db): data = await db.fetch('SELECT ...') According to the docs this achieves suspend[ing] execution of read_data coroutine until db.fetch awaitable completes and returns the result data. Does this async keyword actually involve the creation of new threads, or perhaps the use of an existing reserved async thread? In the event that async does use a reserved thread, is it a single shared thread, or does each coroutine get its own?
No, co-routines do not involve any kind of threads. Co-routines allow for cooperative multi-tasking in that each co-routine yields control voluntarily. Threads on the other hand switch between units at arbitrary points. Up to Python 3.4, it was possible to write co-routines using generators; by using yield or yield from expressions in a function body you create a generator object instead, where code is only executed when you iterate over the generator. Together with additional event loop libraries (such as asyncio) you could write co-routines that would signal to an event loop that they were going to be busy (waiting for I/O perhaps) and that another co-routine could be run in the meantime: import asyncio import datetime @asyncio.coroutine def display_date(loop): end_time = loop.time() + 5.0 while True: print(datetime.datetime.now()) if (loop.time() + 1.0) >= end_time: break yield from asyncio.sleep(1) Every time the above code advances to the yield from asyncio.sleep(1) line, the event loop is free to run a different co-routine, because this routine is not going to do anything for the next second anyway. Because generators can be used for all sorts of tasks, not just co-routines, and because writing a co-routine using generator syntax can be confusing to new-comers, the PEP introduces new syntax that makes it clearer that you are writing a co-routine. With the PEP implemented, the above sample could be written instead as: async def display_date(loop): end_time = loop.time() + 5.0 while True: print(datetime.datetime.now()) if (loop.time() + 1.0) >= end_time: break await asyncio.sleep(1) The resulting coroutine object still needs an event loop to drive the co-routines; an event loop would await on each co-routine in turn, which would execute those co-routines that are not currently awaiting for something to complete. The advantages are that with native support, you can also introduce additional syntax to support asynchronous context managers and iterators. Entering and exiting a context manager, or looping over an iterator then can become more points in your co-routine that signal that other code can run instead because something is waiting again.
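For completeness, here is how the new-style coroutine above would actually be driven by an event loop, using the asyncio API as it stood around Python 3.5:

import asyncio
import datetime

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(display_date(loop))
loop.close()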
Doc2vec : How to get document vectors
How to get document vectors of two text documents using Doc2vec? I am new to this, so it would be helpful if someone could point me in the right direction or help me with a tutorial. I am using the gensim Python library. doc1=["This is a sentence","This is another sentence"] documents1=[doc.strip().split(" ") for doc in doc1 ] model = doc2vec.Doc2Vec(documents1, size = 100, window = 300, min_count = 10, workers=4) I get AttributeError: 'list' object has no attribute 'words' whenever I run this
doc=["This is a sentence","This is another sentence"] documents=[doc.strip().split(" ") for doc in doc1 ] model = doc2vec.Doc2Vec(documents, size = 100, window = 300, min_count = 10, workers=4) I got AttributeError: 'list' object has no attribute 'words' because the input documents to the Doc2vec() was not in correct LabeledSentence format. I hope this below example will help you understand the format. documents = LabeledSentence(words=[u'some', u'words', u'here'], labels=[u'SENT_1']) More details are here : http://rare-technologies.com/doc2vec-tutorial/ However, I solved the problem by taking input data from file using TaggedLineDocument(). File format: one document = one line = one TaggedDocument object. Words are expected to be already preprocessed and separated by whitespace, tags are constructed automatically from the document line number. sentences=doc2vec.TaggedLineDocument(file_path) model = doc2vec.Doc2Vec(sentences,size = 100, window = 300, min_count = 10, workers=4) To get document vector : You can use docvecs. More details here : https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.TaggedDocument docvec = model.docvecs[99] where 99 is the document id whose vector we want. If labels are in integer format (by default, if you load using TaggedLineDocument() ), directly use integer id like I did. If labels are in string format,use "SENT_99" .This is similar to Word2vec
Finding gradient of a Caffe conv-filter with regards to input
I need to find the gradient with regards to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to visualize the filters. Given a trained network in the Python interface of Caffe, such as the one in this example, how can I then find the gradient of a conv-filter with respect to the data in the input layer? Edit: Based on the answer by cesans, I added the code below. The dimensions of my input layer are [8, 8, 7, 96]. My first conv-layer, conv1, has 11 filters with a size of 1x5, resulting in the dimensions [8, 11, 7, 92]. net = solver.net diffs = net.backward(diffs=['data', 'conv1']) print diffs.keys() # >> ['conv1', 'data'] print diffs['data'].shape # >> (8, 8, 7, 96) print diffs['conv1'].shape # >> (8, 11, 7, 92) As you can see from the output, the dimensions of the arrays returned by net.backward() are equal to the dimensions of my layers in Caffe. After some testing I've found that this output is the gradients of the loss with regards to respectively the data layer and the conv1 layer. However, my question was how to find the gradient of a single conv-filter with respect to the data in the input layer, which is something else. How can I achieve this?
Caffe net juggles two "streams" of numbers. The first is the data "stream": images and labels pushed through the net. As these inputs progress through the net they are converted into high-level representations and eventually into class probability vectors (in classification tasks). The second "stream" holds the parameters of the different layers: the weights of the convolutions, the biases etc. These numbers/weights are changed and learned during the train phase of the net. Despite the fundamentally different roles these two "streams" play, caffe nonetheless uses the same data structure, blob, to store and manage them. However, for each layer there are two different blob vectors, one for each stream. Here's an example that I hope will clarify: import caffe solver = caffe.SGDSolver( PATH_TO_SOLVER_PROTOTXT ) net = solver.net If you now look at net.blobs you will see a dictionary storing a "caffe blob" object for each layer in the net. Each blob has storage room for both data and gradient: net.blobs['data'].data.shape # >> (32, 3, 224, 224) net.blobs['data'].diff.shape # >> (32, 3, 224, 224) And for a convolutional layer: net.blobs['conv1/7x7_s2'].data.shape # >> (32, 64, 112, 112) net.blobs['conv1/7x7_s2'].diff.shape # >> (32, 64, 112, 112) net.blobs holds the first data stream; its shape matches that of the input images up to the resulting class probability vector. On the other hand, you can see another member of net net.layers This is a caffe vector storing the parameters of the different layers. Looking at the first layer ('data' layer): len(net.layers[0].blobs) # >> 0 There are no parameters to store for an input layer. On the other hand, for the first convolutional layer len(net.layers[1].blobs) # >> 2 The net stores one blob for the filter weights and another for the constant bias. Here they are net.layers[1].blobs[0].data.shape # >> (64, 3, 7, 7) net.layers[1].blobs[1].data.shape # >> (64,) As you can see, this layer performs 7x7 convolutions on a 3-channel input image and has 64 such filters. Now, how to get the gradients? Well, as you noted, diffs = net.backward(diffs=['data','conv1/7x7_s2']) returns the gradients of the data stream. We can verify this by np.all( diffs['data'] == net.blobs['data'].diff ) # >> True np.all( diffs['conv1/7x7_s2'] == net.blobs['conv1/7x7_s2'].diff ) # >> True (TL;DR) You want the gradients of the parameters; these are stored in net.layers with the parameters: net.layers[1].blobs[0].diff.shape # >> (64, 3, 7, 7) net.layers[1].blobs[1].diff.shape # >> (64,) To help you map between the names of the layers and their indices into the net.layers vector, you can use net._layer_names. Update regarding the use of gradients to visualize filter responses: A gradient is normally defined for a scalar function. The loss is a scalar, and therefore you can speak of a gradient of a pixel/filter weight with respect to the scalar loss. This gradient is a single number per pixel/filter weight. If you want to get the input that results in maximal activation of a specific internal hidden node, you need an "auxiliary" net whose loss is exactly a measure of the activation of the specific hidden node you want to visualize. Once you have this auxiliary net, you can start from an arbitrary input and change this input based on the gradients of the auxiliary loss with respect to the input layer: update = prev_in + lr * net.blobs['data'].diff
Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method
I am getting the error ".accepted_renderer not set on Response". I am following the Django REST framework tutorial. The Django version I am using is 1.8.3. I followed the tutorial through the first part and it worked properly. But when I continued with the 2nd part about sending responses, I got the error Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method. Then I tried other ways and got .accepted_renderer not set on Response Please help me out. I think it's a permission issue.
You probably have set DjangoModelPermissions as a default permission class in your settings. Something like: REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.DjangoModelPermissions', ) } DjangoModelPermissions can only be applied to views that have a .queryset property or a .get_queryset() method. Since Tutorial 2 uses FBVs, you either need to convert them to queryset-based CBVs (see the sketch below) or, as an easier fix, specify a different permission class for that view. You must be using the api_view decorator in your view. You can then define permissions like below: from rest_framework.decorators import api_view, permission_classes from rest_framework import permissions @api_view([..]) @permission_classes((permissions.AllowAny,)) def my_view(request): ... To resolve the renderer error, you need to add the corresponding renderer to your settings. REST_FRAMEWORK = { 'DEFAULT_RENDERER_CLASSES': ( 'rest_framework.renderers.<corresponding_renderer>', ... ) }
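For reference, a queryset-based CBV that satisfies DjangoModelPermissions looks roughly like this (the model and serializer names are taken from the DRF tutorial; substitute your own):

from rest_framework import generics
from snippets.models import Snippet
from snippets.serializers import SnippetSerializer

class SnippetList(generics.ListCreateAPIView):
    # DjangoModelPermissions introspects this .queryset attribute
    queryset = Snippet.objects.all()
    serializer_class = SnippetSerializer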
MySQL Improperly Configured Reason: unsafe use of relative path
I'm using Django, and when I run python manage.py runserver I receive the following error: ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Library/Python/2.7/site-packages/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib Referenced from: /Library/Python/2.7/site-packages/_mysql.so Reason: unsafe use of relative rpath libmysqlclient.18.dylib in /Library/Python/2.7/site-packages/_mysql.so with restricted binary I'm not entirely sure how to fix this. I have installed MySQL-python via pip. And I followed this step earlier. I want to also point out this is with El Capitan Beta 3.
In OS X El Capitan (10.11), Apple added System Integrity Protection. This prevents programs in protected locations like /usr from calling a shared library that uses a relative reference to another shared library. In the case of _mysql.so, it contains a relative reference to the shared library libmysqlclient.18.dylib. In the future, the shared library _mysql.so may be updated. Until then, you can force it to use an absolute reference via the install_name_tool utility. Assuming that libmysqlclient.18.dylib is in /usr/local/mysql/lib/, then run the command: sudo install_name_tool -change libmysqlclient.18.dylib \ /usr/local/mysql/lib/libmysqlclient.18.dylib \ /Library/Python/2.7/site-packages/_mysql.so
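You can verify the change with otool -L /Library/Python/2.7/site-packages/_mysql.so, which lists the shared libraries the binary references: after running install_name_tool, the libmysqlclient entry should show the absolute /usr/local/mysql/lib/... path instead of the bare relative name. (The paths here assume the standard install locations from the question.)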
Portable way of detecting number of *usable* CPUs in Python
Per this question and answer -- Python multiprocessing.cpu_count() returns '1' on 4-core Nvidia Jetson TK1 -- the output of Python's multiprocessing.cpu_count() function on certain systems reflects the number of CPUs actively in use, as opposed to the number of CPUs actually usable by the calling Python program. A common Python idiom is to use the return-value of cpu_count() to initialize the number of processes in a Pool. However, on systems that use such a "dynamic CPU activation" strategy, that idiom breaks rather badly (at least on a relatively quiescent system). Is there some straightforward (and portable) way to get at the number of usable processors (as opposed to the number currently in use) from Python? Notes: This question is not answered by the accepted answer to How to find out the number of CPUs using python, since as noted in the question linked at the top of this question, printing the contents of /proc/self/status shows all 4 cores as being available to the program. To my mind, "portable" excludes any approach that involves parsing the contents of /proc/self/status, whose format may vary from release to release of Linux, and which doesn't even exist on OS X. (The same goes for any other pseudo-file, as well.)
I don't think you will get any truly portable answers, so I will give a correct one. The correct* answer for Linux is len(os.sched_getaffinity(pid)), where pid may be 0 for the current process. This function is exposed in Python 3.3 and later; if you need it on earlier versions, you'll have to do some fancy cffi coding. Edit: you might try to see if you can use the function int omp_get_num_procs(); if it exists, it is the only meaningful answer I found on this question, but I haven't tried it from Python.
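If you want one helper that degrades gracefully across platforms, a minimal sketch (my own composition, not from any standard recipe) is to try the affinity mask first and fall back to the optimistic count:

import os
import multiprocessing

def usable_cpu_count():
    """CPUs the current process may actually run on, where knowable."""
    try:
        # Linux only, Python 3.3+: respects the scheduler affinity mask
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # e.g. OS X or Windows: fall back to the total CPU count
        return multiprocessing.cpu_count()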
Using Cloudfront with Django S3Boto
I have successfully set up my app to use S3 for storing all static and media files. However, I would like to upload to S3 (current operation), but serve from a cloudfront instance I have set up. I have tried adjusting settings to the cloudfront url but it does not work. How can I upload to S3 and serve from Cloudfront please? Settings AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME DEFAULT_FILE_STORAGE = 'app.custom_storages.MediaStorage' STATICFILES_STORAGE = 'app.custom_storages.StaticStorage' STATICFILES_LOCATION = 'static' MEDIAFILES_LOCATION = 'media' STATIC_URL = "https://s3-eu-west-1.amazonaws.com/app/%s/" % (STATICFILES_LOCATION) MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION) custom_storages.py from django.conf import settings from storages.backends.s3boto import S3BotoStorage class StaticStorage(S3BotoStorage): location = settings.STATICFILES_LOCATION class MediaStorage(S3BotoStorage): location = settings.MEDIAFILES_LOCATION
Your code is almost complete except you are not adding your cloudfront domain to STATIC_URL/MEDIA_URL and your custom storages. In detail, you must first install the dependencies pip install django-storages-redux boto Add the required settings to your django settings file INSTALLED_APPS = ( ... 'storages', ... ) AWS_STORAGE_BUCKET_NAME = 'mybucketname' AWS_CLOUDFRONT_DOMAIN = 'xxxxxxxx.cloudfront.net' AWS_ACCESS_KEY_ID = get_secret("AWS_ACCESS_KEY_ID") AWS_SECRET_ACCESS_KEY = get_secret("AWS_SECRET_ACCESS_KEY") MEDIAFILES_LOCATION = 'media' MEDIA_ROOT = '/%s/' % MEDIAFILES_LOCATION MEDIA_URL = '//%s/%s/' % (AWS_CLOUDFRONT_DOMAIN, MEDIAFILES_LOCATION) DEFAULT_FILE_STORAGE = 'app.custom_storages.MediaStorage' STATICFILES_LOCATION = 'static' STATIC_ROOT = '/%s/' % STATICFILES_LOCATION STATIC_URL = '//%s/%s/' % (AWS_CLOUDFRONT_DOMAIN, STATICFILES_LOCATION) STATICFILES_STORAGE = 'app.custom_storages.StaticStorage' Your custom storages need some modification to present the cloudfront domain for the resources, instead of the S3 domain: from django.conf import settings from storages.backends.s3boto import S3BotoStorage class StaticStorage(S3BotoStorage): """uploads to 'mybucket/static/', serves from 'cloudfront.net/static/'""" location = settings.STATICFILES_LOCATION def __init__(self, *args, **kwargs): kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN super(StaticStorage, self).__init__(*args, **kwargs) class MediaStorage(S3BotoStorage): """uploads to 'mybucket/media/', serves from 'cloudfront.net/media/'""" location = settings.MEDIAFILES_LOCATION def __init__(self, *args, **kwargs): kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN super(MediaStorage, self).__init__(*args, **kwargs) And that is all you need, assuming your bucket and cloudfront domain are correctly linked and the user's AWS_ACCESS_KEY has access permissions to your bucket. Additionally, based on your use case, you may wish to make your s3 bucket items read-only accessible by everyone.
Memory efficient sort of massive numpy array in Python
I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = (868940742, 3) which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13" MacBook Pro with 16GB of RAM, 500GB solid state HD and a 3.1 GHz Intel i7 processor. Just loading the array overflows to virtual memory but not to the point where my machine suffers or I have to stop everything else I am doing.

I build this VERY large array step by step from 22 smaller (N, 2) subarrays. Function FUN_1 generates 2 new (N, 1) arrays using each of the 22 subarrays which I call sub_arr. The first output of FUN_1 is generated by interpolating values from sub_arr[:,0] on array b = array([X, F(X)]) and the second output is generated by placing sub_arr[:, 0] into bins using array r = array([X, BIN(X)]). I call these outputs b_arr and rate_arr, respectively. The function returns a 3-tuple of (N, 1) arrays:

import numpy as np

def FUN_1(sub_arr):
    """interpolate b values and rates based on position in sub_arr"""
    b = np.load(bfile)
    r = np.load(rfile)
    b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1])
    rate_arr = np.searchsorted(r[:,0], sub_arr[:,0])  # HUGE efficiency gain over np.digitize...
    return r[rate_arr, 1], b_arr, sub_arr[:,1]

I call the function 22 times in a for-loop and fill a pre-allocated array of zeros full_arr = numpy.zeros([868940742, 3]) with the values:

full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1(sub_arr)

In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes.

Here is the sorting routine (there are two consecutive sorts):

for idx in range(2):
    sort_idx = numpy.argsort(full_arr[:,idx])
    full_arr = full_arr[sort_idx]
    # ...
    # <additional processing, return small (1000, 3) array of stats>

Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, finer-resolution table of [X, F(X)] values for the interpolation step above in FUN_1 that returns b_arr, and now the SORT really slows down, although everything else remains the same. Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...?

bigger, slower file:

17399307 99.4
17493652 98.8
17570460 98.2
17575180 97.6
17577127 97
17578255 96.4
17580576 95.8
17583028 95.2
17583699 94.6
17584172 94

smaller, more uniform regular file:

1 24
1001 24
2001 24
3001 24
4001 24
5001 24
6001 24
7001 24

I'm not sure what could be causing this issue and I would be interested in any suggestions or just general input about sorting in this type of memory-limited case!
At the moment each call to np.argsort is generating a (868940742, 1) array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of full_arr you are generating another (868940742, 1) array of floats, since fancy indexing always returns a copy rather than a view. One fairly obvious improvement would be to sort full_arr in place using its .sort() method. Unfortunately, .sort() does not allow you to directly specify a row or column to sort by. However, you can specify a field to sort by for a structured array. You can therefore force an inplace sort over one of the three columns by getting a view onto your array as a structured array with three float fields, then sorting by one of these fields: full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) In this case I'm sorting full_arr in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns ('f8') - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. full_arr.flags.C_CONTIGUOUS == True. Credit for this method should go to Joe Kington for his answer here. Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using np.argsort to generate an index array, as you mentioned in the comments below (see this previous question). If you use np.argsort to obtain a set of indices to sort by, you might see a modest performance gain by using np.take rather than direct indexing to get the sorted array: %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() x[idx] # 1 loops, best of 100: 148 µs per loop %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() np.take(x, idx, axis=0) # 1 loops, best of 100: 42.9 µs per loop However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy. Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10: 5 1 4 8 10 2 6 9 7 3 There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. Now suppose there are just 5 unique digits: 4 4 3 2 3 1 2 5 1 5 Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering. By default, np.ndarray.sort() uses Quicksort. The qsort variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array. For example: %%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000) x.sort() # 1 loops, best of 100: 2.3 ms per loop %%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000) x.sort() # 1 loops, best of 100: 4.62 ms per loop In this example the dtypes of the two arrays are the same. 
If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller.
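Before committing a 20 GB array to the structured-view sort above, it can be sanity-checked on a toy array; this sketch assumes a C-contiguous float64 array, as required:

import numpy as np

a = np.random.randn(10, 3)
a.view('f8, f8, f8').sort(order=['f0'], axis=0)  # in-place sort of rows by column 0
assert (np.diff(a[:, 0]) >= 0).all()             # rows now ascending in column 0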
Celery chain not working with batches
At first glance I liked very much the "Batches" feature in Celery because I need to group an amount of IDs before calling an API (otherwise I may be kicked out). Unfortunately, when testing a little bit, batch tasks don't seem to play well with the rest of the Canvas primitives, in this case, chains. For example: @a.task(base=Batches, flush_every=10, flush_interval=5) def get_price(requests): for request in requests: a.backend.mark_as_done(request.id, 42, request=request) print "filter_by_price " + str([r.args[0] for r in requests]) @a.task def completed(): print("complete") So, with this simple workflow: chain(get_price.s("ID_1"), completed.si()).delay() I see this output: [2015-07-11 16:16:20,348: INFO/MainProcess] Connected to redis://localhost:6379/0 [2015-07-11 16:16:20,376: INFO/MainProcess] mingle: searching for neighbors [2015-07-11 16:16:21,406: INFO/MainProcess] mingle: all alone [2015-07-11 16:16:21,449: WARNING/MainProcess] celery@ultra ready. [2015-07-11 16:16:34,093: WARNING/Worker-4] filter_by_price ['ID_1'] After 5 seconds, filter_by_price() gets triggered just like expected. The problem is that completed() never gets invoked. Any ideas of what could be going on here? If not using batches, what could be a decent approach to solve this problem? PS: I have set CELERYD_PREFETCH_MULTIPLIER=0 like the docs say.
Looks like the behaviour of batch tasks is significantly different from normal tasks. Batch tasks are not even emitting signals like task_success. Since you need to call the completed task after get_price, you can call it directly from get_price itself:

@a.task(base=Batches, flush_every=10, flush_interval=5)
def get_price(requests):
    for request in requests:
        # do something
        completed.delay()
python dask DataFrame, support for (trivially parallelizable) row apply?
I recently found dask module that aims to be an easy-to-use python parallel processing module. Big selling point for me is that it works with pandas. After reading a bit on its manual page, I can't find a way to do this trivially parallelizable task: ts.apply(func) # for pandas series df.apply(func, axis = 1) # for pandas DF row apply At the moment, to achieve this in dask, AFAIK, ddf.assign(A=lambda df: df.apply(func, axis=1)).compute() # dask DataFrame which is ugly syntax and is actually slower than outright df.apply(func, axis = 1) # for pandas DF row apply Any suggestion? Edit: Thanks @MRocklin for the map function. It seems to be slower than plain pandas apply. Is this related to pandas GIL releasing issue or am I doing it wrong? import dask.dataframe as dd s = pd.Series([10000]*120) ds = dd.from_pandas(s, npartitions = 3) def slow_func(k): A = np.random.normal(size = k) # k = 10000 s = 0 for a in A: if a > 0: s += 1 else: s -= 1 return s s.apply(slow_func) # 0.43 sec ds.map(slow_func).compute() # 2.04 sec
map_partitions You can apply your function to all of the partitions of your dataframe with the map_partitions function. df.map_partitions(func, columns=...) Note that func will be given only part of the dataset at a time, not the entire dataset like with pandas apply (which presumably you wouldn't want if you want to do parallelism.) map / apply You can map a function row-wise across a series with map df.mycolumn.map(func) You can map a function row-wise across a dataframe with apply df.apply(func, axis=1) Threads vs Processes As of version 0.6.0 dask.dataframes parallelizes with threads. Custom Python functions will not receive much benefit from thread-based parallelism. You could try processes instead df = dd.read_csv(...) from dask.multiprocessing import get df.map_partitions(func, columns=...).compute(get=get) But avoid apply However, you should really avoid apply with custom Python functions, both in Pandas and in Dask. This is often a source of poor performance. It could be that if you find a way to do your operation in a vectorized manner then it could be that your Pandas code will be 100x faster and you won't need dask.dataframe at all. Consider numba For your particular problem you might consider numba. This significantly improves your performance. In [1]: import numpy as np In [2]: import pandas as pd In [3]: s = pd.Series([10000]*120) In [4]: %paste def slow_func(k): A = np.random.normal(size = k) # k = 10000 s = 0 for a in A: if a > 0: s += 1 else: s -= 1 return s ## -- End pasted text -- In [5]: %time _ = s.apply(slow_func) CPU times: user 345 ms, sys: 3.28 ms, total: 348 ms Wall time: 347 ms In [6]: import numba In [7]: fast_func = numba.jit(slow_func) In [8]: %time _ = s.apply(fast_func) # First time incurs compilation overhead CPU times: user 179 ms, sys: 0 ns, total: 179 ms Wall time: 175 ms In [9]: %time _ = s.apply(fast_func) # Subsequent times are all gain CPU times: user 68.8 ms, sys: 27 µs, total: 68.8 ms Wall time: 68.7 ms Disclaimer, I work for the company that makes both numba and dask and employs many of the pandas developers.
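To make the "avoid apply" point concrete on the toy function from the question: the loop just counts positives minus negatives, so it collapses to a single vectorized expression (equivalent up to the measure-zero a == 0 case):

import numpy as np

def vectorized_func(k):
    A = np.random.normal(size=k)
    # (#positives) - (#negatives) == 2 * (#positives) - k
    return 2 * (A > 0).sum() - k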
Reading JSON from SimpleHTTPServer Post data
I am trying to build a simple REST server with python SimpleHTTPServer. I am having a problem reading data from the POST message. Please let me know if I am doing it right.

from SimpleHTTPServer import SimpleHTTPRequestHandler
import SocketServer
import simplejson

class S(SimpleHTTPRequestHandler):
    def _set_headers(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_GET(self):
        print "got get request %s" % (self.path)
        if self.path == '/':
            self.path = '/index.html'
        return SimpleHTTPRequestHandler.do_GET(self)

    def do_POST(self):
        print "got post!!"
        content_len = int(self.headers.getheader('content-length', 0))
        post_body = self.rfile.read(content_len)
        test_data = simplejson.loads(post_body)
        print "post_body(%s)" % (test_data)
        return SimpleHTTPRequestHandler.do_POST(self)

def run(handler_class=S, port=80):
    httpd = SocketServer.TCPServer(("", port), handler_class)
    print 'Starting httpd...'
    httpd.serve_forever()

The index.html file:

<html>
<title>JSON TEST PAGE</title>
<head>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script type="text/javascript">
JSONTest = function() {
    var resultDiv = $("#resultDivContainer");
    $.ajax({
        url: "http://128.107.138.51:8080",
        type: "POST",
        data: {txt1: $("#json_text").val()},
        dataType: "json",
        success: function (result) {
            switch (result) {
                case true:
                    processResponse(result);
                    break;
                default:
                    resultDiv.html(result);
            }
        },
        error: function (xhr, ajaxOptions, thrownError) {
            alert(xhr.status);
            alert(thrownError);
        }
    });
};
</script>
</head>
<body>
<h1>My Web Page</h1>
<div id="resultDivContainer"></div>
<form>
<textarea name="json_text" id="json_text" rows="50" cols="80">
[{"resources": {"dut": "any_ts", "endpoint1": "endpoint", "endpoint2": "endpoint"}}, {"action": "create_conference", "serverName": "dut", "confName": "GURU_TEST"}]
</textarea>
<button type="button" onclick="JSONTest()">Generate Test</button>
</form>
</body>
</html>

simplejson fails to load the JSON from the POST message. I am not familiar with web coding and I am not even sure if what I am doing is right for creating a simple REST API server. I appreciate your help.
Thanks matthewatabet for the klein idea. I figured out a way to implement it using BaseHTTPRequestHandler. The code is below.

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import SocketServer
import simplejson
import random

class S(BaseHTTPRequestHandler):
    def _set_headers(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_GET(self):
        self._set_headers()
        f = open("index.html", "r")
        self.wfile.write(f.read())

    def do_HEAD(self):
        self._set_headers()

    def do_POST(self):
        self._set_headers()
        print "in post method"
        # the response line and headers were already sent by _set_headers(),
        # so don't send them a second time here
        self.data_string = self.rfile.read(int(self.headers['Content-Length']))
        data = simplejson.loads(self.data_string)
        with open("test123456.json", "w") as outfile:
            simplejson.dump(data, outfile)
        print "{}".format(data)
        f = open("for_presen.py")
        self.wfile.write(f.read())
        return

def run(server_class=HTTPServer, handler_class=S, port=80):
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    print 'Starting httpd...'
    httpd.serve_forever()

if __name__ == "__main__":
    from sys import argv
    if len(argv) == 2:
        run(port=int(argv[1]))
    else:
        run()

And the corresponding html page:

<form action="/profile/index/sendmessage" method="post" enctype="application/x-www-form-urlencoded">
<div class="upload_form">
<dt id="message-label"><label class="optional" for="message">Enter Message</label></dt>
<dd id="message-element">
<textarea cols="80" rows="50" id="message" name="message">
[{"resources": {"dut": "any_ts", "endpoint1": "multistream_endpoint", "endpoint2": "multistream_endpoint"}}, {"action": "create_conference", "serverName": "dut", "conferenceName": "GURU_SLAVE_TS"}, {"action": "dial_out_ep", "serverName": "dut", "confName": "GURU_SLAVE_TS", "epName": "endpoint1"} ]
</textarea></dd>
<dt id="id-label">&nbsp;</dt>
<dd id="id-element">
<input type="hidden" id="id" value="145198" name="id"></dd>
<dt id="send_message-label">&nbsp;</dt>
<dd id="send_message-element">
<input type="submit" class="sendamessage" value="Send" id="send_message" name="send_message"></dd>
</div>
</form>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script type="text/javascript">
$("input.sendamessage").click(function(event) {
    event.preventDefault();
    var message = $('textarea#message').val();
    var id = $('input#id').val();
    url = "http://128.107.138.51:8080"
    var posting = $.post(url, message)
    posting.done(function( data ) {
        alert(message);
    });
});
</script>
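A quick way to exercise do_POST without the HTML page is a small Python 2 client; the port and payload here are just examples matching the setup above:

import urllib2
import simplejson

payload = simplejson.dumps([{"action": "create_conference", "serverName": "dut"}])
request = urllib2.Request("http://localhost:8080", payload)  # a data argument makes this a POST
print urllib2.urlopen(request).read()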
Why doesn't the MySQLdb Connection context manager close the cursor?
MySQLdb Connections have a rudimentary context manager that creates a cursor on enter, either rolls back or commits on exit, and implicitly doesn't suppress exceptions. From the Connection source: def __enter__(self): if self.get_autocommit(): self.query("BEGIN") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() So, does anyone know why the cursor isn't closed on exit? At first, I assumed it was because closing the cursor didn't do anything and that cursors only had a close method in deference to the Python DB API (see the comments to this answer). However, the fact is that closing the cursor burns through the remaining results sets, if any, and disables the cursor. From the cursor source: def close(self): """Close the cursor. No further queries will be possible.""" if not self.connection: return while self.nextset(): pass self.connection = None It would be so easy to close the cursor at exit, so I have to suppose that it hasn't been done on purpose. On the other hand, we can see that when a cursor is deleted, it is closed anyway, so I guess the garbage collector will eventually get around to it. I don't know much about garbage collection in Python. def __del__(self): self.close() self.errorhandler = None self._result = None Another guess is that there may be a situation where you want to re-use the cursor after the with block. But I can't think of any reason why you would need to do this. Can't you always finish using the cursor inside its context, and just use a separate context for the next transaction? To be very clear, this example obviously doesn't make sense: with conn as cursor: cursor.execute(select_stmt) rows = cursor.fetchall() It should be: with conn as cursor: cursor.execute(select_stmt) rows = cursor.fetchall() Nor does this example make sense: # first transaction with conn as cursor: cursor.execute(update_stmt_1) # second transaction, reusing cursor try: cursor.execute(update_stmt_2) except: conn.rollback() else: conn.commit() It should just be: # first transaction with conn as cursor: cursor.execute(update_stmt_1) # second transaction, new cursor with conn as cursor: cursor.execute(update_stmt_2) Again, what would be the harm in closing the cursor on exit, and what benefits are there to not closing it?
To answer your question directly: I cannot see any harm whatsoever in closing at the end of a with block. I cannot say why it is not done in this case. But, as there is a dearth of activity on this question, I had a search through the code history and will throw in a few thoughts (guesses) on why the close() may not be called: There is a small chance that spinning through calls to nextset() may throw an exception - possibly this had been observed and seen as undesirable. This may be why the newer version of cursors.py contains this structure in close(): def close(self): """Close the cursor. No further queries will be possible.""" if not self.connection: return self._flush() try: while self.nextset(): pass except: pass self.connection = None There is the (somewhat remote) potential that it might take some time to spin through all the remaining results doing nothing. Therefore close() may not be called to avoid doing some unnecessary iterations. Whether you think it's worth saving those clock cycles is subjective, I suppose, but you could argue along the lines of "if it's not necessary, don't do it". Browsing the sourceforge commits, the functionality was added to the trunk by this commit in 2007 and it appears that this section of connections.py has not changed since. That's a merge based on this commit, which has the message Add Python-2.5 support for with statement as described in http://docs.python.org/whatsnew/pep-343.html Please test And the code you quote has never changed since. This prompts my final thought - it's probably just a first attempt / prototype that just worked and therefore never got changed. More modern version You link to source for a legacy version of the connector. I note there is a more active fork of the same library here, which I link to in my comments about "newer version" in point 1. Note that the more recent version of this module has implemented __enter__() and __exit__() within cursor itself: see here. __exit__() here does call self.close() and perhaps this provides a more standard way to use the with syntax e.g. with conn.cursor() as c: #Do your thing with the cursor End notes N.B. I guess I should add, as far as I understand garbage collection (not an expert either) once there are no references to conn, it will be deallocated. At this point there will be no references to the cursor object and it will be deallocated too. However calling cursor.close() does not mean that it will be garbage collected. It simply burns through the results and set the connection to None. This means it can't be re-used, but it won't be garbage collected immediately. You can convince yourself of that by manually calling cursor.close() after your with block and then, say, printing some attribute of cursor N.B. 2 I think this is a somewhat unusual use of the with syntax as the conn object persists because it is already in the outer scope - unlike, say, the more common with open('filename') as f: where there are no objects hanging around with references after the end of the with block.
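If you do want close-on-exit behaviour today, one option is a small wrapper of your own rather than relying on the connection's context manager; this sketch reproduces the commit/rollback logic and additionally closes the cursor:

from contextlib import contextmanager

@contextmanager
def closing_cursor(conn):
    """Commit or roll back like Connection.__exit__, but also close the cursor."""
    cursor = conn.cursor()
    try:
        yield cursor
    except Exception:
        conn.rollback()
        raise
    else:
        conn.commit()
    finally:
        cursor.close()

# usage:
# with closing_cursor(conn) as cursor:
#     cursor.execute(select_stmt)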
How to cope with the performance of generating signed URLs for accessing private content via CloudFront?
A common use case of AWS S3 and CloudFront is serving private content. The common solution is using signed CloudFront URLs to access private files stored using S3. However, the generation of these URLs comes with a cost: computing the RSA signature of any given URL using a private key. For Python (or boto, AWS's Python SDK), the rsa (https://pypi.python.org/pypi/rsa) library is used for this task. On my late 2014 MBP, it takes about ~25ms per computation with a 2048-bit key. This cost potentially impacts the scalability of an application that uses this approach for authorizing access to private content via CloudFront. Imagine multiple clients request for access to multiple files frequently at 25~30ms/req. It seems to me that not much can be improve on the signature computation itself, though the rsa library mentioned above was last updated almost 1.5 years ago. I wonder if there are other techniques or designs that may optimize the performance of this process to achieve higher scalability. Or do we simply have to throw in more hardware and try to solve it in a brute force way? One optimization can be making the API endpoint accept multiple file signings per request and return the signed URLs in bulk rather than dealing with them individually in separate requests, but the total time necessary for computing all those signatures is still there.
Use Signed Cookies When I use CloudFront with many private URLs, I prefer to use Signed Cookies when all the restrictions are met. This does not speed up the generation of signed cookies but it reduces the number of signing requests to be one per user until they expire. Tuning RSA Signature Generation I can imagine you may have requirements which render signed cookies as an invalid option. In that case I tried to speed up the signing by comparing the RSA module used with boto and cryptography. Two additional alternative options are m2crypto and pycrypto but for this example I will use cryptography. In order to test performance of signing URLs with different modules I reduced the method _sign_string to remove any logic except the signing of a string then created a new Distribution class. Then I took the private key and example URL from boto tests to test with. The results show that cryptography is quicker but still requires close to 1ms per signing request. These results are skewed higher by iPython's use of scoped variables in timing. timeit -n10000 rsa_distribution.create_signed_url(url, message, expire_time) 10000 loops, best of 3: 6.01 ms per loop timeit -n10000 cryptography_distribution.create_signed_url(url, message, expire_time) 10000 loops, best of 3: 644 µs per loop The full script: from cryptography.hazmat.primitives.asymmetric import padding from cryptography.hazmat.primitives import serialization from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes import rsa from boto.cloudfront.distribution import Distribution from textwrap import dedent # The private key provided in the Boto tests pk_key = dedent(""" -----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQDA7ki9gI/lRygIoOjV1yymgx6FYFlzJ+z1ATMaLo57nL57AavW hb68HYY8EA0GJU9xQdMVaHBogF3eiCWYXSUZCWM/+M5+ZcdQraRRScucmn6g4EvY 2K4W2pxbqH8vmUikPxir41EeBPLjMOzKvbzzQy9e/zzIQVREKSp/7y1mywIDAQAB AoGABc7mp7XYHynuPZxChjWNJZIq+A73gm0ASDv6At7F8Vi9r0xUlQe/v0AQS3yc N8QlyR4XMbzMLYk3yjxFDXo4ZKQtOGzLGteCU2srANiLv26/imXA8FVidZftTAtL viWQZBVPTeYIA69ATUYPEq0a5u5wjGyUOij9OWyuy01mbPkCQQDluYoNpPOekQ0Z WrPgJ5rxc8f6zG37ZVoDBiexqtVShIF5W3xYuWhW5kYb0hliYfkq15cS7t9m95h3 1QJf/xI/AkEA1v9l/WN1a1N3rOK4VGoCokx7kR2SyTMSbZgF9IWJNOugR/WZw7HT njipO3c9dy1Ms9pUKwUF46d7049ck8HwdQJARgrSKuLWXMyBH+/l1Dx/I4tXuAJI rlPyo+VmiOc7b5NzHptkSHEPfR9s1OK0VqjknclqCJ3Ig86OMEtEFBzjZQJBAKYz 470hcPkaGk7tKYAgP48FvxRsnzeooptURW5E+M+PQ2W9iDPPOX9739+Xi02hGEWF B0IGbQoTRFdE4VVcPK0CQQCeS84lODlC0Y2BZv2JxW3Osv/WkUQ4dslfAQl1T303 7uwwr7XTroMv8dIFQIPreoPhRKmd/SbJzbiKfS/4QDhU -----END RSA PRIVATE KEY-----""") # Initializing keys in a global context cryptography_private_key = serialization.load_pem_private_key( pk_key, password=None, backend=default_backend()) # Instantiate a signer object using PKCS 1v 15, this is not recommended but required for Amazon def sign_with_cryptography(message): signer = cryptography_private_key.signer( padding.PKCS1v15(), hashes.SHA1()) signer.update(message) return signer.finalize() # Initializing the key in a global context rsa_private_key = rsa.PrivateKey.load_pkcs1(pk_key) def sign_with_rsa(message): signature = rsa.sign(str(message), rsa_private_key, 'SHA-1') return signature # All this information comes from the Boto tests. 
url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes" expected_url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes&Expires=1258237200&Signature=Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDNv0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6tdNx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5jt9w2EOwi6sIIqrg_&Key-Pair-Id=PK123456789754" message = "PK123456789754" expire_time = 1258237200 class CryptographyDistribution(Distribution): def _sign_string( self, message, private_key_file=None, private_key_string=None): return sign_with_cryptography(message) class RSADistribution(Distribution): def _sign_string( self, message, private_key_file=None, private_key_string=None): return sign_with_rsa(message) cryptography_distribution = CryptographyDistribution() rsa_distribution = RSADistribution() cryptography_url = cryptography_distribution.create_signed_url( url, message, expire_time) rsa_url = rsa_distribution.create_signed_url( url, message, expire_time) assert cryptography_url == rsa_url == expected_url, "URLs do not match" Conclusion Although the cryptography module performs better in this test, I recommend trying to find a way to utilize signed cookies but I hope this information is useful.
1 class inherits 2 different metaclasses (abcmeta and user defined meta)
I have a class1 that needs to inherit from 2 different metaclasses which is Meta1 and abc.ABCMeta Current implementation: Implementation of Meta1: class Meta1(type): def __new__(cls, classname, parent, attr): new_class = type.__new__(cls, classname, parent, attr) return super(Meta1, cls).__new__(cls, classname, parent, attr) implementation of class1Abstract class class1Abstract(object): __metaclass__ = Meta1 __metaclass__ = abc.ABCMeta implementation of mainclass class mainClass(class1Abstract): # do abstract method stuff I know this is wrong to implement 2 different meta twice. I change the way metclass is loaded (a few tries) and I get this TypeError: Error when calling the metaclass bases metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases I ran out of ideas... EDITED 1 I tried this solution it works but the mainClass is not an instance of class1Abstract print issubclass(mainClass, class1Abstract) # true print isinstance(mainClass, class1Abstract) # false Implementation of class1Abstract class TestMeta(Meta1): pass class AbcMeta(object): __metaclass__ = abc.ABCMeta pass class CombineMeta(AbcMeta, TestMeta): pass class class1Abstract(object): __metaclass__ = CombineMeta @abc.abstractmethod def do_shared_stuff(self): pass @abc.abstractmethod def test_method(self): ''' test method ''' Implementation of mainClass class mainClass(class1Abstract): def do_shared_stuff(self): print issubclass(mainClass, class1Abstract) # True print isinstance(mainClass, class1Abstract) # False Since mainClass inherits from an abstract class python should complain about test_method not being implemented in mainClass. But it doesn't complain anything because print isinstance(mainClass, class1Abstract) # False dir(mainClass) doesn't have ['__abstractmethods__', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version', '_abc_registry'] HELP! EDITED 2 Implementation of class1Abstract CombineMeta = type("CombineMeta", (abc.ABCMeta, Meta1), {}) class class1Abstract(object): __metaclass__ = abc.ABCMeta @abc.abstractmethod def do_shared_stuff(self): pass @abc.abstractmethod def test_method(self): ''' test method ''' Implementation of mainClass class mainClass(class1Abstract): __metaclass__ = CombineMeta def do_shared_stuff(self): print issubclass(mainClass, class1Abstract) # True print isinstance(mainClass, class1Abstract) # False dir(mainClass) now have abstractmethod's magic methods ['__abstractmethods__', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version', '_abc_registry', 'do_shared_stuff', 'test_method'] But python doesn't warn about test_method not being instantiated HELP!
In Python, every class can only have one metaclass, not many. However, it is possible to achieve similar behaviour (as if it had multiple metaclasses) by mixing what these metaclasses do. Let's start simple. Our own metaclass simply adds a new attribute to a class:

class SampleMetaClass(type):
    """Sample metaclass: adds `sample` attribute to the class"""
    def __new__(cls, clsname, bases, dct):
        dct['sample'] = 'this a sample class attribute'
        return super(SampleMetaClass, cls).__new__(cls, clsname, bases, dct)

class MyClass(object):
    __metaclass__ = SampleMetaClass

print("SampleMetaClass was mixed in!" if 'sample' in MyClass.__dict__ else "We've had a problem here")

This prints "SampleMetaClass was mixed in!", so we know our basic metaclass works fine. Now, on the other side, we want an abstract class; at its simplest it would be:

from abc import ABCMeta, abstractmethod

class AbstractClass(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def implement_me(self):
        pass

class IncompleteImplementor(AbstractClass):
    pass

class MainClass(AbstractClass):
    def implement_me(self):
        return "correct implementation in `MainClass`"

try:
    IncompleteImplementor()
except TypeError as terr:
    print("missing implementation in `IncompleteImplementor`")

MainClass().implement_me()

This prints "missing implementation in IncompleteImplementor" followed by "correct implementation in MainClass". So, the abstract class also works fine. Now, we have 2 simple implementations and we need to mix together the behaviour of the two metaclasses. There are multiple options here.

Option 1 - subclassing

One can implement a SampleMetaClass as a subclass of ABCMeta - metaclasses are also classes and one can inherit them!

class SampleMetaABC(ABCMeta):
    """Same as SampleMetaClass, but also inherits ABCMeta behaviour"""
    def __new__(cls, clsname, bases, dct):
        dct['sample'] = 'this a sample class attribute'
        return super(SampleMetaABC, cls).__new__(cls, clsname, bases, dct)

Now, we change the metaclass in the AbstractClass definition:

class AbstractClass(object):
    __metaclass__ = SampleMetaABC

    @abstractmethod
    def implement_me(self):
        pass

# IncompleteImplementor and MainClass implementation is the same, but make sure
# to redeclare them if you use the same interpreter from the previous test

And run both our tests again:

try:
    IncompleteImplementor()
except TypeError as terr:
    print("missing implementation in `IncompleteImplementor`")

MainClass().implement_me()

print("sample was added!" if 'sample' in IncompleteImplementor.__dict__ else "We've had a problem here")
print("sample was added!" if 'sample' in MainClass.__dict__ else "We've had a problem here")

This would still print that IncompleteImplementor is not correctly implemented, that MainClass is, and that both now have the sample class-level attribute added. A thing to note here is that the Sample part of the metaclass was successfully applied to the IncompleteImplementor as well (well, there is no reason it wouldn't). As would be expected, isinstance and issubclass still work as supposed to:

print(issubclass(MainClass, AbstractClass))   # True, inheriting from AbstractClass
print(isinstance(MainClass, AbstractClass))   # False as expected - AbstractClass is a base class, not a metaclass
print(isinstance(MainClass(), AbstractClass)) # True, now created an instance here

Option 2 - composing metaclasses

In fact, there is this option in the question itself, it only required a small fix. Declare the new metaclass as a composition of several simpler metaclasses to mix their behaviour:

SampleMetaWithAbcMixin = type('SampleMetaWithAbcMixin', (ABCMeta, SampleMetaClass), {})

As previously, change the metaclass for AbstractClass (and again, IncompleteImplementor and MainClass don't change, but redeclare them if in the same interpreter):

class AbstractClass(object):
    __metaclass__ = SampleMetaWithAbcMixin

    @abstractmethod
    def implement_me(self):
        pass

From here, running the same tests should yield the same results: ABCMeta still works and ensures that @abstractmethod-s are implemented, SampleMetaClass adds a sample attribute. I personally prefer this latter option, for the same reason as I generally prefer composition to inheritance: the more combinations one eventually needs between multiple (meta)classes - the simpler it would be with composition.

More on metaclasses

Finally, the best explanation of metaclasses I've ever read is this SO answer: What is a metaclass in Python?
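For readers on Python 3, where the __metaclass__ attribute is ignored, the same composition idea is spelled with the metaclass keyword argument; a quick sketch:

import abc

class SampleMetaClass(type):
    def __new__(cls, clsname, bases, dct):
        dct['sample'] = 'this a sample class attribute'
        return super().__new__(cls, clsname, bases, dct)

CombinedMeta = type('CombinedMeta', (abc.ABCMeta, SampleMetaClass), {})

class AbstractClass(metaclass=CombinedMeta):
    @abc.abstractmethod
    def implement_me(self):
        pass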
How to avoid building C library with my python package?
I'm building a python package using a C library with ctypes. I want to make my package portable (Windows, Mac and Linux). I found a strategy, using build_ext with pip to build the library during the installation of my package. It creates libfoo.dll or libfoo.dylib or libfoo.so depending on the target's platform. The problem with this is that my user needs CMake installed. Is there another strategy to avoid building during the installation? Do I have to bundle built libraries in my package? I want to keep my users doing pip install mylib. Edit: thanks to @Dawid's comment, I'm trying to make a python wheel with the command python setup.py bdist_wheel, without any success. How can I create my python wheel for different platforms with the embedded library? Edit 2: I'm using python 3.4 and working on Mac OS X, but I have access to a Windows computer and a Linux computer
You're certainly heading down the right path according to my research... As Daniel says, the only option you have is to build and distribute the binaries yourself. In general, the recommended way to install packages is covered well in the packaging user guide. I won't repeat advice there as you have clearly already found it. However the key point in there is that the Python community, specifically PyPA are trying to standardize on using platform wheels to package binary extensions. Sadly, there are a few issues at this point: As mentioned in the above link, you cannot upload Linux wheels to PyPI, which means you have to manage the distribution to your users yourself. The advice on building extensions is somewhat incomplete, reflecting the lack of a complete solution for binary distributions. People then try to build their own library and distribute it as a data file, which confuses setuptools. I think you are hitting this last issue. A workaround is to force the Distribution to build a platform wheel by overriding is_pure() to always return False. However you could just keep your original build instructions and bdist_wheel should handle it. Once you've built the wheel, though, you still need to distribute it and maybe other binary packages that it uses or use it. At this point, you probably need to use one of the recommended tools like conda or a PyPI proxy like devpi to serve up your wheels. EDIT: To answer the extra question about cross-compiling As covered here Python 2.6 and later allows cross-compilation for Windows 32/64-bit builds. There is no formal support for other packages on other platforms and people have had limited success trying to do it. You are really best off building natively on each of your Linux/Mac/Windows environments.
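The is_pure() override mentioned above looks roughly like this in setup.py; treat it as a sketch, since this hook is what bdist_wheel consulted at the time and later setuptools/wheel versions check has_ext_modules() instead:

from setuptools import setup
from setuptools.dist import Distribution

class BinaryDistribution(Distribution):
    """Tell the wheel machinery this package is not pure Python."""
    def is_pure(self):
        return False

setup(
    name='mylib',  # placeholder name
    packages=['mylib'],
    package_data={'mylib': ['libfoo.so', 'libfoo.dylib', 'libfoo.dll']},
    distclass=BinaryDistribution,
)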
Error setting up Vagrant with VirtualBox in PyCharm under OS X 10.10
When setting up the remote interpreter and selecting Vagrant, I get the following error in PyCharm: Can't Get Vagrant Settings: [0;31mThe provider 'virtualbox' that was requested to back the machine 'default' is reporting that it isn't usable on this system. The reason is shown bellow: Vagrant could not detect VirtualBox! Make sure VirtualBox is properly installed. Vagrant uses the `VBoxManage` binary that ships with VirtualBox, and requires this to be available on the PATH. If VirtualBox is installed, please find the `VBoxManage` binary and add it to the PATH environment variable.[0m Now, from a terminal, everything works. I can do 'up' and ssh into the VM without issues. Ports are forwarded as well as local files. So the issue is only in PyCharm. I have installed Java 1.8 PATH is: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin which VBoxManage: /usr/local/bin/VBoxManage and works in terminal. Note that this is a fresh install of OS X done this morning. Vagrant version is 1.7.3, VirtualBox is 4.3.30 and PyCharm is 4.5.3
Another workaround:

sudo ln -s /usr/local/bin/VBoxManage /usr/bin/VBoxManage

Edit: Since it all worked some time ago, one of the following has to be the cause of this problem:

either an update of VirtualBox changed the location of its executable
or an update of PyCharm changed the PATH settings / executable location expectation for the IDE

Whatever the cause is, the solution is to make sure VBoxManage is in the location expected by PyCharm. I didn't come up with this solution myself, just googled it, but because it is so nice and clean I decided to add it here.
ipython server can't launch: No module named notebook.notebookapp
I've been trying to setup an ipython server following several tutorials (since none was exactly my case). A couple days ago, I did manage to get it to the point where it was launching but then was not able to access it via url. Today it's not launching anymore and I can't find much about this specific error I get: Traceback (most recent call last): File "/usr/local/bin/ipython", line 9, in <module> load_entry_point('ipython==4.0.0-dev', 'console_scripts', 'ipython')() File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/__init__.py", line 118, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 591, in launch_instance app.initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 302, in initialize super(TerminalIPythonApp, self).initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/core/application.py", line 386, in initialize self.parse_command_line(argv) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 297, in parse_command_line return super(TerminalIPythonApp, self).parse_command_line(argv) File "<string>", line 2, in parse_command_line File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 487, in parse_command_line return self.initialize_subcommand(subc, subargv) File "<string>", line 2, in initialize_subcommand File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 418, in initialize_subcommand subapp = import_item(subapp) File "build/bdist.linux-x86_64/egg/ipython_genutils/importstring.py", line 31, in import_item ImportError: No module named notebook.notebookapp So about the setup, I have installed the anaconda distrib of ipython, pyzmq & tornado libraries. I have created a profile nbserver and the config file is as follows - ipython.config.py: c = get_config() c.IPKernalApp.pylab = 'inline' c.NotebookApp.certfile = u'/home/ludo/.ipython/profile_nbserver/mycert.pem' c.NotebookApp.ip = '*' c.NotebookApp.open_browser = False c.NotebookApp.password = u'sha1:e6cb2aa9a[...]' c.NotebookApp.port = 9999 c.NotebookManager.notebook_dir = u'/var/www/ipynb/' c.NotebookApp.base_project_url = '/ipynb/' c.NotebookApp.base_kernel_url = '/ipynb/' c.NotebookApp.webapp_settings = {'static_url_prefix':'/ipynb/static/'} I really don't know where to look for clues anymore - and I'm probably lacking a greater understanding of how all this works to figure it out. 
My ultimate goal is to then use the answer to this question on SO to complete a setup behind apache and eventually connect it to colaboratory - but seems like it should launch first. Many thanks for any help :)
This should fix the issue: pip install jupyter
Determining implementation of Python at runtime?
I'm writing a piece of code that returns profiling information and it would be helpful to be able to dynamically return the implementation of Python in use. Is there a Pythonic way to determine which implementation (e.g. Jython, PyPy) of Python my code is executing on at runtime? I know that I am able to get version information from sys.version: >>> import sys >>> sys.version '3.4.3 (default, May 1 2015, 19:14:18) \n[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.49)]' but I'm not sure where to look in the sys module to get the implementation that the code is running.
You can use python_implementation from the platform module in Python 3 or Python 2. This returns a string that identifies the Python implementation. e.g. return_implementation.py import platform print(platform.python_implementation()) and iterating through some responses on the command line: $ for i in python python3 pypy pypy3; do echo -n "implementation $i: "; $i return_implementation.py; done implementation python: CPython implementation python3: CPython implementation pypy: PyPy implementation pypy3: PyPy Note as of this answer's date, the possible responses are 'CPython', 'IronPython', 'Jython', 'PyPy', meaning that it's possible that your implementation will not be returned by this python_implementation function if it does not identity to the sys module as one of these types. python_implementation is calling sys.version under the hood and attempting to match the response to a regex pattern -- if there's no conditional match, there's no matching string response.
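On Python 3.3 and later there is also sys.implementation, which exposes the same information as an attribute rather than via string matching (a supplement, not a replacement, since platform.python_implementation works on both 2.x and 3.x):

import sys

print(sys.implementation.name)  # e.g. 'cpython', 'pypy' (lowercase)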
How to prepend a path to sys.path in Python?
Problem description: Using pip, I upgraded to the latest version of requests (version 2.7.0, with pip show requests giving the location /usr/local/lib/python2.7/dist-packages). When I import requests and print requests.__version__ in the interactive command line, though, I am seeing version 2.2.1. It turns out that Python is using the pre-installed Ubuntu version of requests (requests.__file__ is /usr/lib/python2.7/dist-packages/requests/__init__.pyc -- not /usr/local/lib/...). From my investigation, this fact is caused by Ubuntu's changes to the Python search path (I run Ubuntu 14.04) by prepending the path to Ubuntu's Python package (for my machine, this happens in /usr/local/lib/python2.7/dist-packages/easy-install.pth). In my case, this causes the apt-get version of requests, which is pre-packaged with Ubuntu, to be used, rather than the pip version I want to use.

What I'm looking for: I want to globally prepend pip's installation directory path to Python's search path (sys.path), before the path to Ubuntu's Python installation directory. Since requests (and many other packages) are used in many Python scripts of mine, I don't want to manually change the search path for every single file on my machine.

Unsatisfactory Solution 1: Using virtualenv
Using virtualenv would cause an unnecessary amount of change to my machine, since I would have to reinstall every package that exists globally. I only want to upgrade from Ubuntu's packages to pip's packages.

Unsatisfactory Solution 2: Changing easy-install.pth
Since easy-install.pth is overwritten every time easy-install is used, my changes to easy-install.pth would be removed if a new package is installed. This problem makes it difficult to maintain the packages on my machine.

Unsatisfactory (but best one I have so far) Solution 3: Adding a separate .pth file
In the same directory as easy-install.pth I added a zzz.pth with contents:

import sys; sys.__plen = len(sys.path)
/usr/lib/python2.7/dist-packages/test_dir
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)

This file is read by site.py when Python is starting. Since its file name comes after easy-install.pth alphanumerically, it is consumed by site.py afterwards. Taken together, the first and last lines of the file prepend the path to sys.path (these lines were taken from easy-install.pth). I don't like how this solution depends on the alphanumeric ordering of the file name to correctly place the new path.

PYTHONPATHs come after Ubuntu's paths
Another answer on Stack Overflow didn't work for me. My PYTHONPATH paths come after the paths in easy-install.pth, which uses the same code I mention in "Unsatisfactory solution 3" to prepend its paths.

Thank you in advance!
You shouldn't need to mess with pip's path; python actually handles its pathing automatically in my experience. It appears you have two pythons installed. If you type:

which pip
which python

what paths do you see? If they're not in the same /bin folder, then that's your problem. I'm guessing that the python you're running (probably the original system one) doesn't have its own pip installed. You probably just need to make sure the path for the python you want to run comes before /usr/bin in your .bashrc or .zshrc. If this is correct, then you should see that:

which easy_install

shares the same path as the python installation you're using, maybe under /usr/local/bin. Then just run:

easy_install pip

And start installing the right packages for the python that you're using.
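And if, once the interpreters are sorted out, you still need to put a directory in front programmatically, the direct form is an insert at position 0; the path below is just the dist-packages directory from the question:

import sys

sys.path.insert(0, '/usr/local/lib/python2.7/dist-packages')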
ImportError: cannot import name wraps
I'm using python 2.7.6 on Ubuntu 14.04.2 LTS. I'm using mock to mock some unittests and noticing when I import mock it fails importing wraps. Not sure if there's a different version of mock or six I should be using for it's import to work? Couldn't find any relevant answers and I'm not using virtual environments. mock module says it's compatible with python 2.7.x: https://pypi.python.org/pypi/mock mock==1.1.3 six==1.9.0 Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from mock import Mock Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/mock/__init__.py", line 2, in <module> import mock.mock as _mock File "/usr/local/lib/python2.7/dist-packages/mock/mock.py", line 68, in <module> from six import wraps ImportError: cannot import name wraps also tried with sudo with no luck. $ sudo python -c 'from six import wraps' Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name wraps
Installed mock==1.0.1 and that worked for some reason. (shrugs) edit: The real fix for me was to update setuptools to the latest, which allowed me to upgrade mock and six to the latest. I was on setuptools 3.3. In my case I also had to remove said modules by hand because they were owned by the OS in '/usr/local/lib/python2.7/dist-packages/'.

Check versions of everything:

pip freeze | grep -e six -e mock
easy_install --version

Update everything:

wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python
pip install mock --upgrade
pip install six --upgrade

Thanks @lifeless
Why does "not(True) in [False, True]" return False?
If I do this: >>> False in [False, True] True That returns True. Simply because False is in the list. But if I do: >>> not(True) in [False, True] False That returns False. Whereas not(True) is equal to False: >>> not(True) False Why?
Operator precedence 2.x, 3.x. The precedence of not is lower than that of in. So it is equivalent to: >>> not (True in [False, True]) False This is what you want: >>> (not True) in [False, True] True As @Ben points out: It's recommended to never write not(True), prefer not True. The former makes it look like a function call, while not is an operator, not a function.
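You can see the precedence play out in the bytecode with the dis module: the membership test runs first and the negation is applied to its result. (On recent CPythons the membership test shows up as CONTAINS_OP rather than COMPARE_OP; the ordering is the same.)

import dis

dis.dis(lambda: not True in [False, True])
# the 'in' comparison appears before UNARY_NOT in the disassembly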
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?
I'm working on a sentiment analysis problem; the data looks like this:

label  instances
5      1190
4      838
3      239
1      204
2      127

So my data is unbalanced since 1190 instances are labeled with 5. For the classification I'm using scikit's SVC. The problem is I do not know how to balance my data in the right way in order to compute accurately the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches:

First:

wclf = SVC(kernel='linear', C= 1, class_weight={1: 10})
wclf.fit(X, y)
weighted_prediction = wclf.predict(X_test)

print 'Accuracy:', accuracy_score(y_test, weighted_prediction)
print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted')
print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted')
print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted')
print '\n clasification report:\n', classification_report(y_test, weighted_prediction)
print '\n confussion matrix:\n',confusion_matrix(y_test, weighted_prediction)

Second:

auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto')
auto_wclf.fit(X, y)
auto_weighted_prediction = auto_wclf.predict(X_test)

print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction)
print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted')
print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted')
print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted')
print '\n clasification report:\n', classification_report(y_test,auto_weighted_prediction)
print '\n confussion matrix:\n',confusion_matrix(y_test, auto_weighted_prediction)

Third:

clf = SVC(kernel='linear', C= 1)
clf.fit(X, y)
prediction = clf.predict(X_test)

from sklearn.metrics import precision_score, \
    recall_score, confusion_matrix, classification_report, \
    accuracy_score, f1_score

print 'Accuracy:', accuracy_score(y_test, prediction)
print 'F1 score:', f1_score(y_test, prediction)
print 'Recall:', recall_score(y_test, prediction)
print 'Precision:', precision_score(y_test, prediction)
print '\n clasification report:\n', classification_report(y_test,prediction)
print '\n confussion matrix:\n',confusion_matrix(y_test, prediction)

F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight)
/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight)
/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight)
0.930416613529

However, I'm getting warnings like this:

/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1".

How can I deal correctly with my unbalanced data in order to compute the classifier's metrics in the right way?
I think there is a lot of confusion about which weights are used for what. I am not sure I know precisely what bothers you, so I am going to cover different topics, bear with me ;).

Class weights

The weights from the class_weight parameter are used to train the classifier. They are not used in the calculation of any of the metrics you are using: with different class weights, the numbers will be different simply because the classifier is different.

Basically in every scikit-learn classifier, the class weights are used to tell your model how important a class is. That means that during training, the classifier will make extra efforts to properly classify the classes with high weights. How they do that is algorithm-specific. If you want details about how it works for SVC and the doc does not make sense to you, feel free to mention it.

The metrics

Once you have a classifier, you want to know how well it is performing. Here you can use the metrics you mentioned: accuracy, recall_score, f1_score...

Usually when the class distribution is unbalanced, accuracy is considered a poor choice as it gives high scores to models which just predict the most frequent class.

I will not detail all these metrics, but note that, with the exception of accuracy, they are naturally applied at the class level: as you can see in this print of a classification report, they are defined for each class. They rely on concepts such as true positives or false negatives that require defining which class is the positive one.

             precision    recall  f1-score   support

          0       0.65      1.00      0.79        17
          1       0.57      0.75      0.65        16
          2       0.33      0.06      0.10        17
avg / total       0.52      0.60      0.51        50

The warning

F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1".

You get this warning because you are using the f1-score, recall and precision without defining how they should be computed! The question could be rephrased: from the above classification report, how do you output one global number for the f1-score? You could:

Take the unweighted average of the f1-score for each class. This is called macro averaging.
Compute the f1-score using the global counts of true positives / false negatives, etc. (you sum the number of true positives / false negatives for each class). This is called micro averaging.
Compute a weighted average of the f1-score. Using 'weighted' in scikit-learn will weigh the f1-score by the support of the class: the more elements a class has, the more important the f1-score for this class in the computation. This is what the avg / total row of the classification report above shows.

These are 3 of the options in scikit-learn; the warning is there to say you have to pick one. So you have to specify an average argument for the score method.

Which one you choose depends on how you want to measure the performance of the classifier: for instance macro-averaging does not take class imbalance into account, and the f1-score of class 1 will be just as important as the f1-score of class 5. If you use weighted averaging, however, you'll get more importance for the class 5.
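To make the three options concrete, here is a minimal sketch (the labels are made up purely for illustration) showing how the average argument changes the aggregated number:

from sklearn.metrics import f1_score

y_true = [5, 5, 5, 5, 4, 4, 3, 1, 1, 2]
y_pred = [5, 5, 5, 4, 4, 3, 3, 1, 2, 2]

print(f1_score(y_true, y_pred, average='macro'))     # unweighted mean of per-class f1
print(f1_score(y_true, y_pred, average='micro'))     # from global TP/FP/FN counts
print(f1_score(y_true, y_pred, average='weighted'))  # per-class f1 weighted by support
print(f1_score(y_true, y_pred, average=None))        # one f1 per class, no aggregation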
The whole argument specification in these metrics is not super-clear in scikit-learn right now; it will get better in version 0.18 according to the docs. They are removing some non-obvious default behavior and they are issuing warnings so that developers notice it.

Computing scores

The last thing I want to mention (feel free to skip it if you're aware of it) is that scores are only meaningful if they are computed on data that the classifier has never seen. This is extremely important, as any score you get on data that was used in fitting the classifier is completely irrelevant.

Here's a way to do it using StratifiedShuffleSplit, which gives you random splits of your data (after shuffling) that preserve the label distribution.

from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix

# We use a utility to generate artificial classification data.
X, y = make_classification(n_samples=100, n_informative=10, n_classes=3)
sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0)
svc = SVC(kernel='linear', C=1)  # the classifier to evaluate (parameters as in the question)

for train_idx, test_idx in sss:
    X_train, X_test, y_train, y_test = X[train_idx], X[test_idx], y[train_idx], y[test_idx]
    svc.fit(X_train, y_train)
    y_pred = svc.predict(X_test)
    print(f1_score(y_test, y_pred, average="macro"))
    print(precision_score(y_test, y_pred, average="macro"))
    print(recall_score(y_test, y_pred, average="macro"))

Hope this helps.
What does the -> (dash-greater-than arrow symbol) mean in a Python method signature?
There is a ->, or dash-greater-than symbol, at the end of a Python method signature, and I'm not sure what it means. One might call it an arrow as well. Here is the example:

@property
def get_foo(self) -> Foo:
    return self._foo

where self._foo is an instance of Foo.

My guess is that it is some kind of static type declaration, to tell the interpreter that self._foo is of type Foo. But when I tested this, if self._foo is not an instance of Foo, nothing unusual happens. Also, if self._foo is of a type other than Foo, let's say it was an int, then type(SomeClass.get_foo()) returns int. So, what's the point of -> Foo?

This concept is hard to look up because it is a symbol without a common name, and the term "arrow" is misleading.
These are function annotations. They can be used to attach additional information to the arguments or return value of a function, and are a useful way to document how a function should be used. Function annotations are stored in a function's __annotations__ attribute.

Use Cases (from the documentation)

Providing typing information
  Type checking
  Let IDEs show what types a function expects and returns
  Function overloading / generic functions
  Foreign-language bridges
  Adaptation
  Predicate logic functions
  Database query mapping
  RPC parameter marshaling
Other information
  Documentation for parameters and return values

From Python 3.5 onwards, they are also used for type hints.
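A quick sketch (the function is made up for illustration) showing that annotations are stored but not enforced by themselves:

def scale(value: float, factor: int = 2) -> float:
    return value * factor

print(scale.__annotations__)
# {'value': <class 'float'>, 'factor': <class 'int'>, 'return': <class 'float'>}

print(scale('ab', 3))  # no error: annotations are not checked at runtime
# prints 'ababab'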
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1?
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? Is there a good reason? This inconsistency baffles me. (And we're talking about Python 3, which purposely broke backward compatibility in order to achieve goals like consistency.) For example: >>> from datetime import time >>> time(16, 00) datetime.time(16, 0) >>> time(16, 01) File "<stdin>", line 1 time(16, 01) ^ SyntaxError: invalid token >>>
Per https://docs.python.org/3/reference/lexical_analysis.html#integer-literals:

Integer literals are described by the following lexical definitions:

integer        ::=  decimalinteger | octinteger | hexinteger | bininteger
decimalinteger ::=  nonzerodigit digit* | "0"+
nonzerodigit   ::=  "1"..."9"
digit          ::=  "0"..."9"
octinteger     ::=  "0" ("o" | "O") octdigit+
hexinteger     ::=  "0" ("x" | "X") hexdigit+
bininteger     ::=  "0" ("b" | "B") bindigit+
octdigit       ::=  "0"..."7"
hexdigit       ::=  digit | "a"..."f" | "A"..."F"
bindigit       ::=  "0" | "1"

There is no limit for the length of integer literals apart from what can be stored in available memory.

Note that leading zeros in a non-zero decimal number are not allowed. This is for disambiguation with C-style octal literals, which Python used before version 3.0.

As noted here, leading zeros in a non-zero decimal number are not allowed. "0"+ is legal as a very special case, which wasn't present in Python 2:

integer        ::=  decimalinteger | octinteger | hexinteger | bininteger
decimalinteger ::=  nonzerodigit digit* | "0"
octinteger     ::=  "0" ("o" | "O") octdigit+ | "0" octdigit+

SVN commit r55866 implemented PEP 3127 in the tokenizer, which forbids the old 0<octal> numbers. However, curiously, it also adds this note:

/* in any case, allow '0' as a literal */

with a special nonzero flag that only throws a SyntaxError if the following sequence of digits contains a nonzero digit. This is odd because PEP 3127 does not allow this case:

This PEP proposes that the ability to specify an octal number by using a leading zero will be removed from the language in Python 3.0 (and the Python 3.0 preview mode of 2.6), and that a SyntaxError will be raised whenever a leading "0" is immediately followed by another digit. (emphasis mine)

So, the fact that multiple zeros are allowed is technically violating the PEP, and was basically implemented as a special case by Georg Brandl. He made the corresponding documentation change to note that "0"+ was a valid case for decimalinteger (previously that had been covered under octinteger).

We'll probably never know exactly why Georg chose to make "0"+ valid - it may forever remain an odd corner case in Python.

UPDATE [28 Jul 2015]: This question led to a lively discussion thread on python-ideas in which Georg chimed in:

Steven D'Aprano wrote: Why was it defined that way? [...] Why would we write 0000 to get zero?

I could tell you, but then I'd have to kill you.

Georg

Later on, the thread spawned this bug report aiming to get rid of this special case. Here, Georg says:

I don't recall the reason for this deliberate change (as seen from the docs change). I'm unable to come up with a good reason for this change now [...]

and thus we have it: the precise reason behind this inconsistency is lost to time.

Finally, note that the bug report was rejected: leading zeros will continue to be accepted only on zero integers for the rest of Python 3.x.
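A quick interactive check of the special case (output from a Python 3.4-era interpreter; newer versions word the error message differently):

>>> 0
0
>>> 0000
0
>>> 00 == 0
True
>>> 01
  File "<stdin>", line 1
    01
     ^
SyntaxError: invalid token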
Why does my Sieve of Eratosthenes work faster with integers than with booleans?
I wrote a simple Sieve of Eratosthenes, which uses a list of ones and turns them into zeros if not prime, like so:

def eSieve(n):  # where m is a fixed-length list of all integers up to n
    '''Creates a list of primes less than or equal to n'''
    m = [1]*(n+1)
    for i in xrange(2, int((n)**0.5)+1):
        if m[i]:
            for j in xrange(i*i, n+1, i):
                m[j] = 0
    return [i for i in xrange(2, n) if m[i]]

I tested the speed it ran with %timeit and got:

#n:      t
#10**1:  7 μs
#10**2:  26.6 μs
#10**3:  234 μs
#10**4:  2.46 ms
#10**5:  26.4 ms
#10**6:  292 ms
#10**7:  3.27 s

I assumed that if I changed [1] and 0 to booleans, it would run faster... but it does the opposite:

#n:      t
#10**1:  7.31 μs
#10**2:  29.5 μs
#10**3:  297 μs
#10**4:  2.99 ms
#10**5:  29.9 ms
#10**6:  331 ms
#10**7:  3.7 s

Why are the booleans slower?
This happens because True and False are looked up as globals in Python 2. The 0 and 1 literals are just constants, looked up by a quick array reference, while globals are dictionary lookups in the global namespace (falling through to the built-ins namespace):

>>> import dis
>>> def foo():
...     a = True
...     b = 1
...
>>> dis.dis(foo)
  2           0 LOAD_GLOBAL              0 (True)
              3 STORE_FAST               0 (a)

  3           6 LOAD_CONST               1 (1)
              9 STORE_FAST               1 (b)
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE

The True value is looked up with the LOAD_GLOBAL bytecode, while the 1 literal value is copied to the stack with LOAD_CONST.

If you make True and False locals you can make them just as fast again:

def eSieve(n, True=True, False=False):
    m = [True]*(n+1)
    for i in xrange(2, int((n)**0.5)+1):
        if m[i]:
            for j in xrange(i*i, n+1, i):
                m[j] = False
    return [i for i in xrange(2, n) if m[i]]

Assigning True and False as default values for arguments gives the function those names as locals, with the exact same values; again using a simplified version:

>>> def bar(True=True, False=False):
...     True == False
...
>>> dis.dis(bar)
  2           0 LOAD_FAST                0 (True)
              3 LOAD_FAST                1 (False)
              6 COMPARE_OP               2 (==)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE

Note the LOAD_FAST opcodes, now with indices just like the LOAD_CONST bytecodes; locals in a CPython function are stored in an array, just like bytecode constants.

With that change, using booleans wins out, albeit by a small margin; my timings:

# n       integers   globals    locals
# 10**1   4.31 µs    4.2 µs     4.2 µs
# 10**2   17.1 µs    17.3 µs    16.5 µs
# 10**3   147 µs     158 µs     144 µs
# 10**4   1.5 ms     1.66 ms    1.48 ms
# 10**5   16.4 ms    18.2 ms    15.9 ms
# 10**6   190 ms     215 ms     189 ms
# 10**7   2.21 s     2.47 s     2.18 s

The difference isn't really that much because Python booleans are just an int subclass. Note that in Python 3, True and False have become keywords and can no longer be assigned to, making it possible to treat them just like integer literals.
Make a number more probable to result from random
I'm using x = numpy.random.rand(1) to generate a random number between 0 and 1. How do I make it so that x > .5 is 2 times more probable than x < .5?
That's a fitting name! Just do a little manipulation of the inputs. First set x to be in the range from 0 to 1.5:

x = numpy.random.uniform(0, 1.5)

x has a 2/3 chance of being greater than 0.5 and a 1/3 chance of being smaller. Then if x is greater than or equal to 1.0, subtract 0.5 from it:

if x >= 1.0:
    x = x - 0.5
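A self-contained version of the same idea, plus a vectorized variant for many draws (the sample size and the empirical check are just for illustration):

import numpy

# single draw
x = numpy.random.uniform(0, 1.5)
if x >= 1.0:
    x -= 0.5

# vectorized: fold [1.0, 1.5) back onto [0.5, 1.0)
xs = numpy.random.uniform(0, 1.5, size=100000)
xs = numpy.where(xs >= 1.0, xs - 0.5, xs)
print((xs > 0.5).mean())  # ~0.667, i.e. x > .5 is twice as probable as x < .5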
Fitting a closed curve to a set of points
I have a set of points pts which form a loop and it looks like this: This is somewhat similar to 31243002, but instead of putting points in between pairs of points, I would like to fit a smooth curve through the points (coordinates are given at the end of the question), so I tried something similar to scipy documentation on Interpolation: values = pts tck = interpolate.splrep(values[:,0], values[:,1], s=1) xnew = np.arange(2,7,0.01) ynew = interpolate.splev(xnew, tck, der=0) but I get this error: ValueError: Error on input data Is there any way to find such a fit? Coordinates of the points: pts = array([[ 6.55525 , 3.05472 ], [ 6.17284 , 2.802609], [ 5.53946 , 2.649209], [ 4.93053 , 2.444444], [ 4.32544 , 2.318749], [ 3.90982 , 2.2875 ], [ 3.51294 , 2.221875], [ 3.09107 , 2.29375 ], [ 2.64013 , 2.4375 ], [ 2.275444, 2.653124], [ 2.137945, 3.26562 ], [ 2.15982 , 3.84375 ], [ 2.20982 , 4.31562 ], [ 2.334704, 4.87873 ], [ 2.314264, 5.5047 ], [ 2.311709, 5.9135 ], [ 2.29638 , 6.42961 ], [ 2.619374, 6.75021 ], [ 3.32448 , 6.66353 ], [ 3.31582 , 5.68866 ], [ 3.35159 , 5.17255 ], [ 3.48482 , 4.73125 ], [ 3.70669 , 4.51875 ], [ 4.23639 , 4.58968 ], [ 4.39592 , 4.94615 ], [ 4.33527 , 5.33862 ], [ 3.95968 , 5.61967 ], [ 3.56366 , 5.73976 ], [ 3.78818 , 6.55292 ], [ 4.27712 , 6.8283 ], [ 4.89532 , 6.78615 ], [ 5.35334 , 6.72433 ], [ 5.71583 , 6.54449 ], [ 6.13452 , 6.46019 ], [ 6.54478 , 6.26068 ], [ 6.7873 , 5.74615 ], [ 6.64086 , 5.25269 ], [ 6.45649 , 4.86206 ], [ 6.41586 , 4.46519 ], [ 5.44711 , 4.26519 ], [ 5.04087 , 4.10581 ], [ 4.70013 , 3.67405 ], [ 4.83482 , 3.4375 ], [ 5.34086 , 3.43394 ], [ 5.76392 , 3.55156 ], [ 6.37056 , 3.8778 ], [ 6.53116 , 3.47228 ]])
Your problem is because you're trying to work with x and y directly. The interpolation function you're calling assumes that the x-values are in sorted order and that each x value will have a unique y-value.

Instead, you'll need to make a parameterized coordinate system (e.g. the index of your vertices) and interpolate x and y separately using it. To start with, consider the following:

import numpy as np
from scipy.interpolate import interp1d  # Different interface to the same function
import matplotlib.pyplot as plt

#pts = np.array([...]) # Your points

x, y = pts.T
i = np.arange(len(pts))

# 5x the original number of points
interp_i = np.linspace(0, i.max(), 5 * i.max())

xi = interp1d(i, x, kind='cubic')(interp_i)
yi = interp1d(i, y, kind='cubic')(interp_i)

fig, ax = plt.subplots()
ax.plot(xi, yi)
ax.plot(x, y, 'ko')
plt.show()

I didn't close the polygon. If you'd like, you can add the first point to the end of the array (e.g. pts = np.vstack([pts, pts[0]])).

If you do that, you'll notice that there's a discontinuity where the polygon closes. This is because our parameterization doesn't take into account the closing of the polygon. A quick fix is to pad the array with the "reflected" points:

import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt

#pts = np.array([...]) # Your points

pad = 3
pts = np.pad(pts, [(pad,pad), (0,0)], mode='wrap')
x, y = pts.T
i = np.arange(0, len(pts))

interp_i = np.linspace(pad, i.max() - pad + 1, 5 * (i.size - 2*pad))

xi = interp1d(i, x, kind='cubic')(interp_i)
yi = interp1d(i, y, kind='cubic')(interp_i)

fig, ax = plt.subplots()
ax.plot(xi, yi)
ax.plot(x, y, 'ko')
plt.show()

Alternately, you can use a specialized curve-smoothing algorithm such as PEAK or a corner-cutting algorithm.
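If you prefer a one-step approach, scipy's parametric spline routines can fit a closed curve directly; here is a minimal sketch using splprep with the periodic flag, which treats the points as a closed loop:

import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt

#pts = np.array([...]) # Your points

x, y = pts.T
tck, u = interpolate.splprep([x, y], s=0, per=1)  # per=1 makes the spline periodic (closed)
xi, yi = interpolate.splev(np.linspace(0, 1, 1000), tck)

fig, ax = plt.subplots()
ax.plot(xi, yi)
ax.plot(x, y, 'ko')
plt.show()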
Python 3 - Can pickle handle byte objects larger than 4GB?
Based on this comment and the referenced documentation, Pickle 4.0+ from Python 3.4+ should be able to pickle byte objects larger than 4 GB. However, using python 3.4.3 or python 3.5.0b2 on Mac OS X 10.10.4, I get an error when I try to pickle a large byte array: >>> import pickle >>> x = bytearray(8 * 1000 * 1000 * 1000) >>> fp = open("x.dat", "wb") >>> pickle.dump(x, fp, protocol = 4) Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: [Errno 22] Invalid argument Is there a bug in my code or am I misunderstanding the documentation?
To sum up what was answered in the comments: Yes, Python can pickle byte objects bigger than 4GB. The observed error is caused by a bug in the implementation (see Issue24658).
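The underlying bug (Issue24658) is that, on OS X, writing more than 2 GiB to a file object in a single call fails. Until your Python build has the fix, a common workaround is to pickle to bytes in memory and do the file I/O in chunks below that limit; a sketch:

import os
import pickle

MAX_BYTES = 2**31 - 1  # stay under the 2 GiB single-write limit

def pickle_big(obj, path):
    data = pickle.dumps(obj, protocol=4)
    with open(path, 'wb') as f:
        for i in range(0, len(data), MAX_BYTES):
            f.write(data[i:i + MAX_BYTES])

def unpickle_big(path):
    size = os.path.getsize(path)
    data = bytearray()
    with open(path, 'rb') as f:
        for _ in range(0, size, MAX_BYTES):
            data += f.read(MAX_BYTES)
    return pickle.loads(bytes(data))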
Python PIL Image in Label auto resize
I'm trying to make a widget to hold an image that will automatically resize to fit its container, e.g. if packed directly into a window, then expanding that window will expand the image. I have some code that is semi functional but I've had to add a couple of constants into one of the routines to prevent the auto resize from re triggering itself (causing it to keep growing in size) I'm sure that the reason for this is due to the widgets internal padding/border, but even trying to take that into account I get this issue. I'm using python 3.3.2, and PIL 1.1.7 on 64 bit Windows 7 my code is the following: from tkinter import tix from PIL import Image, ImageTk def Resize_Image(image, maxsize): r1 = image.size[0]/maxsize[0] # width ratio r2 = image.size[1]/maxsize[1] # height ratio ratio = max(r1, r2) newsize = (int(image.size[0]/ratio), int(image.size[1]/ratio)) # keep image aspect ratio image = image.resize(newsize, Image.ANTIALIAS) return image class Pict_Frame(tix.Label): def __init__(self, parent=None, picture=None, maxupdate=None, **kwargs): tix.Label.__init__(self, parent, **kwargs) self.bind("<Configure>", self._resize_binding) self.maxupdate = maxupdate self.update_after_id = None self.photo = None self.image = None if picture: self.set_picture(picture) def _resize_binding(self, event): if self.photo: if not self.maxupdate: self.load_picture() else: if not self.update_after_id: self.update_after_id = self.after(int(1000/self.maxupdate), self.load_picture) def load_picture(self): if self.photo: if self.update_after_id: self.update_after_id = None if (self.winfo_width() > 1) and (self.winfo_height() > 1): # prevent updates before widget gets sized self.image = ImageTk.PhotoImage(Resize_Image(self.photo, ( self.winfo_width()-int(self.cget("bd"))-1, self.winfo_height()-int(self.cget("bd"))-1))) # here is where I added the constants ^^^ # but even using cget to get the border size I have had to add to this # to prevent the resize loop, and when using other widget styles #(raised etc) this problem persists self.configure(image=self.image) def set_picture(self, filename): with open(filename, mode="rb") as file: self.photo = Image.open(file) self.photo.load() # load image into memory to allow resizing later without file access self.load_picture() if __name__ == "__main__": test = Pict_Frame(bg="grey", bd=2, relief="raised", maxupdate=2, # allows problem to be easily seen picture="image.jpg") test.pack(fill="both", expand=True) test.master.mainloop() when I apply other styles, such as a thicker border (10px) this resizing problem occurs showing that the constants don't really solve the problem. so is there any method to get only the space inside the widget, instead of its requested size?
I believe I have now solved this, but it really needs a lot more testing with different parameters to ensure accurate results. The code I have used to test this is as follows:

from tkinter import tix
from PIL import Image, ImageTk

def Resize_Image(image, maxsize):
    r1 = image.size[0]/maxsize[0] # width ratio
    r2 = image.size[1]/maxsize[1] # height ratio
    ratio = max(r1, r2)
    newsize = (int(image.size[0]/ratio), int(image.size[1]/ratio)) # keep image aspect ratio
    image = image.resize(newsize, Image.ANTIALIAS)
    return image

class Pict_Frame(tix.Label):
    def __init__(self, parent=None, picture=None, maxupdate=None, imagesize=None, **kwargs):
        tix.Label.__init__(self, parent, **kwargs)
        self.bind("<Configure>", self._resize_binding)
        self.maxupdate = maxupdate
        self.imagesize = imagesize
        self.update_after_id = None # used for update rate limiting
        self.photo = None # used to store raw image from file for later use
        self.image = None # used for reference to the resized image
        if imagesize:
            self.photo = Image.new("RGB", (1,1)) # create empty image to insert
            self.image = ImageTk.PhotoImage(self.photo) # create instance of image for PIL
            self.configure(image=self.image)
            self.configure(width=imagesize[0], height=imagesize[1]) # note: Label uses pixels for size; set the size passed in
        if picture:
            self.set_picture(picture) # we have a picture so load it now

    def _resize_binding(self, event):
        if self.photo: # we have a picture
            if not self.maxupdate: # no rate limiting
                self.load_picture()
            else:
                if not self.update_after_id: # if we're not waiting then queue resize
                    self.update_after_id = self.after(int(1000/self.maxupdate), self.load_picture)

    def load_picture(self):
        if self.photo:
            if self.update_after_id:
                self.update_after_id = None
            if (self.winfo_width() > 1) and (self.winfo_height() > 1): # prevent updates before widget gets sized
                bd = self.cget("bd") # get the border width
                if type(bd) != int: # if there was no border set we get an object back
                    pad = 4 # set this explicitly to avoid problems
                else:
                    pad = int(bd*2) # we have a border both sides, so double the retrieved value
                newsize = (self.winfo_width()-pad, self.winfo_height()-pad)
            elif self.imagesize: # only use the passed in image size if the widget has not rendered
                newsize = self.imagesize
            else:
                return # widget not rendered yet and no size explicitly set, so break until rendered
            self.image = ImageTk.PhotoImage(Resize_Image(self.photo, newsize))
            self.configure(image=self.image)

    def set_picture(self, filename):
        with open(filename, mode="rb") as file:
            self.photo = Image.open(file)
            self.photo.load() # load image into memory to allow resizing later without file access
        self.load_picture()

and my test cases were:

import os

path = "E:\imagefolder"
images = []
ind = 0
for item in os.listdir(path): # get a fully qualified list of images
    if os.path.isdir(os.path.join(path, item)):
        if os.path.isfile(os.path.join(path, item, "thumb.jpg")):
            images.append(os.path.join(path, item, "thumb.jpg"))

def callback():
    global ind
    ind += 1
    if ind >= len(images):
        ind = 0
    pict.set_picture(images[ind])

ignore_test_cases = []

if 1 not in ignore_test_cases:
    print("test case 1: - no border no set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", maxupdate=2, # allows problem to be easily seen
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 2 not in ignore_test_cases:
    print("test case 2: - small border no set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", bd=2, relief="raised", maxupdate=2,
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 3 not in ignore_test_cases:
    print("test case 3: - large border no set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", bd=10, relief="raised", maxupdate=2,
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 4 not in ignore_test_cases:
    print("test case 4: - no border with set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", maxupdate=2, imagesize=(256,384),
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 5 not in ignore_test_cases:
    print("test case 5: - small border with set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", bd=2, relief="raised", maxupdate=2, imagesize=(256,384),
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 6 not in ignore_test_cases:
    print("test case 6: - large border with set size")
    root = tix.Tk()
    tix.Button(root, text="Next Image", command=callback).pack()
    pict = Pict_Frame(parent=root, bg="grey", bd=10, relief="raised", maxupdate=2, imagesize=(256,384),
                      picture=images[ind])
    pict.pack(fill="both", expand=True)
    tix.Button(root, text="Next Image", command=callback).pack()
    root.mainloop()

if 10 not in ignore_test_cases:
    print("test case fullscreen: - small border no set size, in fullscreen window with expansion set up")
    root = tix.Tk()
    root.state("zoomed")
    root.grid_columnconfigure(1, weight=2)
    root.grid_columnconfigure(2, weight=1)
    root.grid_rowconfigure(2, weight=1)
    tix.Button(root, text="Next Image", command=callback).grid(column=2, row=1, sticky="nesw")
    pict = Pict_Frame(parent=root, bg="grey",# bd=10, relief="raised",
                      maxupdate=2, picture=images[ind])
    pict.grid(column=2, row=2, sticky="nesw")
    tix.Button(root, text="Next Image", command=callback).grid(column=2, row=3, sticky="nesw")
    root.mainloop()

if 11 not in ignore_test_cases:
    print("test case fullscreen: - small border no set size, in fullscreen window with expansion set up")
    root = tix.Tk()
    root.state("zoomed")
    root.grid_columnconfigure(1, weight=2)
    root.grid_columnconfigure(2, weight=1)
    root.grid_rowconfigure(1, weight=1)
    frame = tix.Frame(root)
    frame.grid(column=2, row=1, sticky="nesw")
    frame.grid_columnconfigure(1, weight=1)
    frame.grid_rowconfigure(2, weight=1)
    tix.Button(frame, text="Next Image", command=callback).grid(column=1, row=1, sticky="nesw")
    pict = Pict_Frame(parent=frame, bg="grey",# bd=10, relief="raised",
                      maxupdate=2, picture=images[ind])
    pict.grid(column=1, row=2, sticky="nesw")
    tix.Button(frame, text="Next Image", command=callback).grid(column=1, row=3, sticky="nesw")
    root.mainloop()

The only issue I have had with this code is that the resizing doesn't work as intended when the widget is used in a full-screen application: using the grid method and setting the weight of the right column (with the pict widget) to 1 and the left (empty) column to 1, the right column ends up taking approximately 2/3rds the width of the screen.
I suspect this is due to explicitly setting the size of the image, which then makes it wider, meaning the geometry manager wants to make it wider still (ad infinitum) until it reaches some equilibrium. But if anyone can shed any light on this (or even a solution) it would be appreciated.
Test if all values are in an iterable in a pythonic way
I am currently doing this: if x in a and y in a and z in a and q in a and r in a and s in a: print b Is there a more pythonic way to express this if statement?
Using the all function allows you to write this in a nice and compact way:

if all(i in a for i in (x, y, z, q, r, s)):
    print b

This code should do almost exactly the same as your example, even if the objects are not hashable or if the a object has some funny __contains__ method. The all function also has similar short-circuit behavior to the chain of and in the original problem. Collecting all objects to be tested in a tuple (or a list) will guarantee the same order of execution of the tests as in the original problem. If you use a set, the order might be random.
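If the elements are hashable and a is large, a set-based variant only scans a once instead of once per element (a sketch; it gives up the ordering and short-circuit guarantees above):

if {x, y, z, q, r, s}.issubset(a):
    print b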
Flask CORS - no Access-control-allow-origin header present on a redirect()
I am implementing OAuth Twitter User-sign in (Flask API and Angular) I keep getting the following error when I click the sign in with twitter button and a pop up window opens: XMLHttpRequest cannot load https://api.twitter.com/oauth/authenticate?oauth_token=r-euFwAAAAAAgJsmAAABTp8VCiE. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. I am using the python-Cors packages to handle CORS, and I already have instagram sign in working correctly. I believe it has something to do with the response being a redirect but have not been able to correct the problem. My flask code looks like this: app = Flask(__name__, static_url_path='', static_folder=client_path) cors = CORS(app, allow_headers='Content-Type', CORS_SEND_WILDCARD=True) app.config.from_object('config') @app.route('/auth/twitter', methods=['POST','OPTIONS']) @cross_origin(origins='*', send_wildcard=True) #@crossdomain(origin='') def twitter(): request_token_url = 'https://api.twitter.com/oauth/request_token' access_token_url = 'https://api.twitter.com/oauth/access_token' authenticate_url = 'https://api.twitter.com/oauth/authenticate' # print request.headers if request.args.get('oauth_token') and request.args.get('oauth_verifier'): -- omitted for brevity -- else: oauth = OAuth1(app.config['TWITTER_CONSUMER_KEY'], client_secret=app.config['TWITTER_CONSUMER_SECRET'], callback_uri=app.config['TWITTER_CALLBACK_URL']) r = requests.post(request_token_url, auth=oauth) oauth_token = dict(parse_qsl(r.text)) qs = urlencode(dict(oauth_token=oauth_token['oauth_token'])) return redirect(authenticate_url + '?' + qs)
The problem is not yours. Your client-side application is sending requests to Twitter, so it isn't you that need to support CORS, it is Twitter. But the Twitter API does not currently support CORS, which effectively means that you cannot talk to it directly from the browser. A common practice to avoid this problem is to have your client-side app send the authentication requests to a server of your own (such as this same Flask application that you have), and in turn the server connects to the Twitter API. Since the server side isn't bound to the CORS requirements there is no problem. In case you want some ideas, I have written a blog article on doing this type of authentication flow for Facebook and Twitter: http://blog.miguelgrinberg.com/post/oauth-authentication-with-flask
Python file open function modes
I have noticed that, in addition to the documented mode characters, Python 2.7.5.1 in Windows XP and 8.1 also accepts modes U and D at least when reading files. Mode U is used in numpy's genfromtxt. Mode D has the effect that the file is deleted, as per the following code fragment: f = open('text.txt','rD') print(f.next()) f.close() # file text.txt is deleted when closed Does anybody know more about these modes, especially whether they are a permanent feature of the language applicable also on Linux systems?
The D flag seems to be Windows specific. Windows seems to add several flags to the fopen function in its CRT, as described here. While Python does filter the mode string to make sure no errors arise from it, it does allow some of the special flags, as can be seen in the Python sources here. Specifically, it seems that the N flag is filtered out, while the T and D flags are allowed:

while (*++mode) {
    if (*mode == ' ' || *mode == 'N') /* ignore spaces and N */
        continue;
    s = "+TD"; /* each of this can appear only once */
    ...

I would suggest sticking to the documented options to keep the code cross-platform.
PIP install unable to find ffi.h even though it recognizes libffi
I have installed libffi on my Linux server and correctly set the PKG_CONFIG_PATH environment variable to the right directory, and pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find the file 'ffi.h'. I know both that ffi.h exists and which directory it is in, so how do I go about closing this gap between ffi.h and pip?
You need to install the development package as well. libffi-dev on Debian/Ubuntu, libffi-devel on Redhat/Centos/Fedora.
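For example (package names as above; the exact command depends on your distribution):

# Debian/Ubuntu
sudo apt-get install libffi-dev

# Redhat/Centos/Fedora
sudo yum install libffi-devel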
How can I pass arguments to a docker container with a python entry-point script using command?
So I've got a docker image with a python script as the entry-point and I would like to pass arguments to the python script when the container is run. I've tried to get the arguments using sys.argv and sys.stdin, but neither has worked. I'm trying to run the container using: docker run image argument
It depends how the entrypoint was set up. If it was set up in "exec form" then you simply pass the arguments after the docker run command, like this: docker run image -a -b -c If it was set up in "shell form" then you have to override the entrypoint, unfortunately. $ docker run --entrypoint echo image hi hi You can check the form using docker inspect. If the entrypoint appears to begin with /bin/sh -c, then it is shell form. References: http://docs.docker.com/reference/run/#entrypoint-default-command-to-execute-at-runtime http://docs.docker.com/reference/builder/#entrypoint
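For concreteness, here is a minimal sketch of the exec-form setup (the file names are made up for illustration). The entrypoint is declared as a JSON array in the Dockerfile, and the script reads its arguments from sys.argv:

Dockerfile (exec form):

FROM python:2.7
COPY entry.py /entry.py
ENTRYPOINT ["python", "/entry.py"]

entry.py:

import sys

print(sys.argv[1:])  # docker run image -a -b -c  ->  ['-a', '-b', '-c']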
AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name'
In Python, on an Ubuntu server, I am trying to get the requests library to make https requests, like so:

import requests
requests.post("https://example.com")

At first, I got the following:

/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

After following the advice in this question: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately, I have now upgraded my warning to the following error:

AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name'

How do I fix this?
The fix for me was the following: sudo apt-get purge python-openssl sudo pip install pyopenssl
Monkey patching a @property
Is it at all possible to monkey patch the value of a @property of an instance of a class that I do not control?

class Foo:
    @property
    def bar(self):
        return here().be['dragons']

f = Foo()
print(f.bar)  # baz
f.bar = 42    # MAGIC!
print(f.bar)  # 42

Obviously the above would produce an error when trying to assign to f.bar. Is # MAGIC! possible in any way? The implementation details of the @property are a black box and not indirectly monkey-patchable. The entire method call needs to be replaced. It needs to affect a single instance only (class-level patching is okay if inevitable, but the changed behaviour must only selectively affect a given instance, not all instances of that class).
Subclass the base class (Foo) and change single instance's class to match the new subclass using __class__ attribute: >>> class Foo: ... @property ... def bar(self): ... return 'Foo.bar' ... >>> f = Foo() >>> f.bar 'Foo.bar' >>> class _SubFoo(Foo): ... bar = 0 ... >>> f.__class__ = _SubFoo >>> f.bar 0 >>> f.bar = 42 >>> f.bar 42
pandas iloc vs ix vs loc explanation?
Can someone explain how these three methods of slicing are different? I've seen the docs, and I've seen these answers, but I still find myself unable to explain how the three are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing. For example, say we want to get the first five rows of a DataFrame. How is it that all three of these work? df.loc[:5] df.ix[:5] df.iloc[:5] Can someone present three cases where the distinction in uses are clearer?
First, a recap:

loc works on labels in the index.
iloc works on the positions in the index (so it only takes integers).
ix usually tries to behave like loc but falls back to behaving like iloc if the label is not in the index.

It's important to note some subtleties that can make ix slightly tricky to use:

if the index is of integer type, ix will only use label-based indexing and not fall back to position-based indexing. If the label is not in the index, an error is raised.
if the index does not contain only integers, then given an integer, ix will immediately use position-based indexing rather than label-based indexing. If however ix is given another type (e.g. a string), it can use label-based indexing.

To illustrate the differences between the three methods, consider the following Series:

>>> s = pd.Series(np.nan, index=[49,48,47,46,45, 1, 2, 3, 4, 5])
>>> s
49   NaN
48   NaN
47   NaN
46   NaN
45   NaN
1    NaN
2    NaN
3    NaN
4    NaN
5    NaN

Then s.iloc[:3] returns the first 3 rows (since it looks at the position) and s.loc[:3] returns the first 8 rows (since it looks at the labels):

>>> s.iloc[:3]
49   NaN
48   NaN
47   NaN

>>> s.loc[:3]
49   NaN
48   NaN
47   NaN
46   NaN
45   NaN
1    NaN
2    NaN
3    NaN

>>> s.ix[:3]  # the integer is in the index so s.ix[:3] works like loc
49   NaN
48   NaN
47   NaN
46   NaN
45   NaN
1    NaN
2    NaN
3    NaN

Notice s.ix[:3] returns the same Series as s.loc[:3] since it looks for the label first rather than going by position (and the index is of integer type).

What if we try with an integer label that isn't in the index (say 6)? Here s.iloc[:6] returns the first 6 rows of the Series as expected. However, s.loc[:6] raises a KeyError since 6 is not in the index.

>>> s.iloc[:6]
49   NaN
48   NaN
47   NaN
46   NaN
45   NaN
1    NaN

>>> s.loc[:6]
KeyError: 6

>>> s.ix[:6]
KeyError: 6

As per the subtleties noted above, s.ix[:6] now raises a KeyError because it tries to work like loc but can't find a 6 in the index. Because our index is of integer type, it doesn't fall back to behaving like iloc.

If, however, our index was of mixed type, then given an integer, ix would behave like iloc immediately instead of raising a KeyError:

>>> s2 = pd.Series(np.nan, index=['a','b','c','d','e', 1, 2, 3, 4, 5])
>>> s2.index.is_mixed()  # index is mix of types
True
>>> s2.ix[:6]  # behaves like iloc given integer
a    NaN
b    NaN
c    NaN
d    NaN
e    NaN
1    NaN

Keep in mind that ix can still accept non-integers and behave like loc:

>>> s2.ix[:'c']  # behaves like loc given non-integer
a    NaN
b    NaN
c    NaN

General advice: if you're only indexing using labels, or only indexing using integer positions, stick with loc or iloc to avoid unexpected results.

If however you have a DataFrame and you want to mix label and positional index types, ix lets you do this:

>>> df = pd.DataFrame(np.arange(25).reshape(5,5),
                      index=list('abcde'),
                      columns=['x','y','z', 8, 9])
>>> df
    x   y   z   8   9
a   0   1   2   3   4
b   5   6   7   8   9
c  10  11  12  13  14
d  15  16  17  18  19
e  20  21  22  23  24

Using ix, we can slice the rows by label and the columns by position (note that for the columns, ix defaults to position-based slicing since the label 4 is not a column name):

>>> df.ix[:'c', :4]
    x   y   z   8
a   0   1   2   3
b   5   6   7   8
c  10  11  12  13
Scrapy spider memory leak
My spider has a serious memory leak. After 15 min of running, its memory is 5 GB and scrapy tells me (using prefs()) that there are 900k request objects, and that's all. What can be the reason for this high number of living request objects? The request count only goes up and never goes down. All other objects are close to zero.

My spider looks like this:

class ExternalLinkSpider(CrawlSpider):
    name = 'external_link_spider'
    allowed_domains = ['']
    start_urls = ['']
    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        if not isinstance(response, HtmlResponse):
            return
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            if not link.nofollow:
                yield LinkCrawlItem(domain=link.url)

Here is the output of prefs():

HtmlResponse                        2   oldest: 0s ago
ExternalLinkSpider                  1   oldest: 3285s ago
LinkCrawlItem                       2   oldest: 0s ago
Request                       1663405   oldest: 3284s ago

Memory for 100k scraped pages can hit the 40 GB mark on some sites (for example, at victorinox.com it reaches 35 GB of memory at the 100k scraped pages mark). On others it's much less.
There are a few possible issues I see right away. Before starting, though, I wanted to mention that prefs() doesn't show the number of requests queued; it shows the number of Request() objects that are alive. It's possible to reference a request object and keep it alive, even if it's no longer queued to be downloaded. I don't really see anything in the code you've provided that would cause this, but you should keep it in mind.

Right off the bat, I'd ask: are you using cookies? If not, sites which pass around a session ID as a GET variable will generate a new session ID for each page visit. You'll essentially continue queuing up the same pages over and over again. For instance, victorinox.com will have something like "jsessionid=18537CBA2F198E3C1A5C9EE17B6C63AD" in its URL string, with the ID changing for every new page load.

Second, you may find that you're hitting a spider trap. That is, a page which just reloads itself, with an infinite number of new links. Think of a calendar with a link to "next month" and "previous month". I'm not directly seeing any on victorinox.com, though.

Third, from the provided code your spider is not constrained to any specific domain. It will extract every link it finds on every page, running parse_obj on each one. The main page of victorinox.com for instance has a link to http://www.youtube.com/victorinoxswissarmy. This will in turn fill up your requests with tons of YouTube links.

You'll need to troubleshoot more to find out exactly what's going on, though. Some strategies you may want to use (see the sketch after this list):

Create a new Downloader Middleware and log all of your requests (to a file, or database). Review the requests for odd behaviour.
Limit the depth to prevent it from continuing down the rabbit hole infinitely.
Limit the domain to test if it's still a problem.

If you find you're legitimately just generating too many requests, and memory is an issue, enable the persistent job queue and save the requests to disk instead. I'd recommend against this as a first step, though, as it's more likely your crawler isn't working as you wanted it to.
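A sketch of how the containment knobs above could look on your spider (the domain and values are illustrative, and custom_settings requires Scrapy 1.0+):

from scrapy.spiders import CrawlSpider  # Scrapy 1.0+ import path

class ExternalLinkSpider(CrawlSpider):
    name = 'external_link_spider'
    allowed_domains = ['victorinox.com']  # constrain crawling to one domain while testing
    custom_settings = {
        'DEPTH_LIMIT': 3,          # stop following links beyond this depth
        'JOBDIR': 'crawl_state',   # persist the request queue to disk instead of memory
    }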
How to copy/paste DataFrame from StackOverflow into Python
In questions and answers, users very often post an example DataFrame which their question/answer works with: In []: x Out[]: bar foo 0 4 1 1 5 2 2 6 3 It'd be really useful to be able to get this DataFrame into my Python interpreter so I can start debugging the question, or testing the answer. How can I do this?
Pandas is written by people that really know what people want to do. Since version 0.13 there's a function pd.read_clipboard which is absurdly effective at making this "just work". Copy and paste the part of the code in the question that starts bar foo, (i.e. the DataFrame) and do this in a Python interpreter: In [53]: import pandas as pd In [54]: df = pd.read_clipboard() In [55]: df Out[55]: bar foo 0 4 1 1 5 2 2 6 3 Caveats Don't include the iPython In or Out stuff or it won't work If you have a named index, you currently need to add engine='python' (see this issue on GitHub). The 'c' engine is currently broken when the index is named. It's not brilliant at MultiIndexes: Try this: 0 1 2 level1 level2 foo a 0.518444 0.239354 0.364764 b 0.377863 0.912586 0.760612 bar a 0.086825 0.118280 0.592211 which doesn't work at all, or this: 0 1 2 foo a 0.859630 0.399901 0.052504 b 0.231838 0.863228 0.017451 bar a 0.422231 0.307960 0.801993 Which works, but returns something totally incorrect!
How to place xaxis grid over spectrogram in Python?
I have the following plot, which provides the spectrogram of a pressure signal along with the signal placed on it for comparison. I was able to draw the y-axis grids on the spectrogram, but could not place the x-axis grid on it. The data used to generate the spectrogram is available here. Reproducible code from __future__ import division from matplotlib import ticker as mtick from matplotlib.backends.backend_pdf import PdfPages import matplotlib.pyplot as plt import numpy as np data = np.genfromtxt('pressure.dat', skiprows = 1, delimiter = '\t') pressure = data[:, 1] theta = data[:, 0] with PdfPages('Spectorgram of cylinder pressure.pdf') as spectorgram_pressure: _spectorgram_pressure_vs_frequency_ = plt.figure(figsize=(5.15, 5.15)) _spectorgram_pressure_vs_frequency_.clf() spectorgram_pressure_vs_frequency = plt.subplot(111) cax = plt.specgram(pressure * 100000, NFFT = 256, Fs = 90000, cmap=plt.cm.gist_heat, zorder = 1) spectorgram_pressure_vs_frequency.grid(False, which="major") spectorgram_pressure_vs_frequency.set_xlabel('Time (s)', labelpad=6) spectorgram_pressure_vs_frequency.set_ylabel('Frequency (Hz)', labelpad=6) y_min, y_max = spectorgram_pressure_vs_frequency.get_ylim() # plt.gca cbar = plt.colorbar(orientation='vertical', ax = spectorgram_pressure_vs_frequency, fraction = 0.046, pad = 0.2) cbar.set_label('Power spectral density (dB)', rotation=90) primary_ticks = len(spectorgram_pressure_vs_frequency.yaxis.get_major_ticks()) pressure_vs_time = spectorgram_pressure_vs_frequency.twinx() pressure_vs_time.plot(((theta + 360) / (6 * 6600)), pressure, linewidth = 0.75, linestyle = '-', color = '#FFFFFF', zorder = 2) pressure_vs_time.grid(b = True, which='major', color='#FFFFFF', linestyle=':', linewidth = 0.3) spectorgram_pressure_vs_frequency.xaxis.grid(True, which='major', color='#FFFFFF', linestyle=':', linewidth = 0.3) pressure_vs_time.set_ylabel('Cylinder pressure (bar)', labelpad=6) pressure_vs_time.yaxis.set_major_locator(mtick.LinearLocator(primary_ticks)) spectorgram_pressure_vs_frequency.set_xlim([0, max(cax[2])]) spectorgram_pressure.savefig(bbox_inches='tight') plt.close() How to place x-axis grids on top of the spectrogram like the y-axis grids in Python? I am using matplotlib version 1.3.1. Is this a version-specific problem? Update I updated matplotlib from version 1.3.1 to 1.4.3, even then I cannot set the x-axis grids on.
As others have noted - it is very difficult to replicate your issue with the code you have provided. In particular - I have tried on Windows 8.1, Ubuntu 14.04 (on Virtualbox VM), matplotlib versions 1.3.1 and 1.4.3, with and without text.usetex set and with Python 2.7.6 and Python 3. None of them reproduce your problem with the code you provide. However I can reproduce what you see if I replace the line spectorgram_pressure_vs_frequency.xaxis.grid(True, which='major', color='#FFFFFF', linestyle=':', linewidth = 0.3) with pressure_vs_time.xaxis.grid(True, which='major', color='#FFFFFF', linestyle=':', linewidth = 0.3) i.e. I try to set the xaxis.grid on the twinned axis rather than the original axis. I have to conclude that somehow in your real code you are setting the xaxis.grid on your twinned axis rather than your main axis. This means I can answer your direct questions as follows: How to place x-axis grids on top of the spectrogram like the y-axis grids in Python? Your code does this - you call xaxis.grid() on the original (not twinned) axis. I am using matplotlib version 1.3.1. Is this a version-specific problem? No, the behaviour is the same in 1.3.1 and 1.4.3
Pythonic and efficient way to do an elementwise "in" using numpy
I'm looking for a way to efficiently get an array of booleans, where given two arrays with equal size a and b, each element is true if the corresponding element of a appears in the corresponding element of b. For example, the following program:

a = numpy.array([1, 2, 3, 4])
b = numpy.array([[1, 2, 13], [2, 8, 9], [5, 6], [7]])
print(numpy.magic_function(a, b))

should print [True, True, False, False]. Keep in mind this function should be the equivalent of [x in y for x, y in zip(a, b)], only NumPy-optimized for cases when a and b are big, and each element of b is reasonably small.
To take advantage of NumPy's broadcasting rules you should first make the ragged array b rectangular (padding the shorter rows), which can be achieved using itertools.izip_longest:

from itertools import izip_longest

c = np.array(list(izip_longest(*b))).astype(float)

resulting in:

array([[  1.,   2.,   5.,   7.],
       [  2.,   8.,   6.,  nan],
       [ 13.,   9.,  nan,  nan]])

Then, by doing np.isclose(c, a) you get a 2D array of Booleans showing the comparison between each c[:, i] and a[i], according to the broadcasting rules, giving:

array([[ True,  True, False, False],
       [False, False, False, False],
       [False, False, False, False]], dtype=bool)

Which can be used to obtain your answer:

np.any(np.isclose(c, a), axis=0)
#array([ True,  True, False, False], dtype=bool)
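Put together as a single helper (a sketch; it assumes the elements are numeric, since the padding relies on NaN comparisons):

import numpy as np
from itertools import izip_longest

def elementwise_in(a, b):
    # pad ragged rows with None, transpose, and let astype(float) turn None into nan
    c = np.array(list(izip_longest(*b))).astype(float)
    return np.any(np.isclose(c, a), axis=0)

a = np.array([1, 2, 3, 4])
b = [[1, 2, 13], [2, 8, 9], [5, 6], [7]]
print(elementwise_in(a, b))  # [ True  True False False]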
python elasticsearch client set mappings during create index
I can set the mappings of an index being created with a curl command like this:

{
  "mappings":{
    "logs_june":{
      "_timestamp":{
        "enabled":"true"
      },
      "properties":{
        "logdate":{
          "type":"date",
          "format":"dd/MM/yyy HH:mm:ss"
        }
      }
    }
  }
}

But I need to create that index with the elasticsearch client in Python and set the mappings... what is the way to do it? I tried something like the below, but it does not work:

self.elastic_con = Elasticsearch([host], verify_certs=True)
self.elastic_con.indices.create(index="accesslog", ignore=400)
params = "{\"mappings\":{\"logs_june\":{\"_timestamp\": {\"enabled\": \"true\"},\"properties\":{\"logdate\":{\"type\":\"date\",\"format\":\"dd/MM/yyy HH:mm:ss\"}}}}}"
self.elastic_con.indices.put_mapping(index="accesslog",body=params)
You can simply add the mapping in the create call like this: from elasticsearch import Elasticsearch self.elastic_con = Elasticsearch([host], verify_certs=True) mapping = ''' { "mappings":{ "logs_june":{ "_timestamp":{ "enabled":"true" }, "properties":{ "logdate":{ "type":"date", "format":"dd/MM/yyy HH:mm:ss" } } } } }''' self.elastic_con.indices.create(index='test-index', ignore=400, body=mapping)
How to package a linked DLL and a pyd file into one self contained pyd file?
I am building a Python module with Cython that links against a DLL file. In order to successfully import my module I need to have the DLL in the Windows search path. Otherwise, the typical error message is:

ImportError: DLL load failed: The specified module could not be found.

Is there a way to package the DLL directly into the produced pyd file to make distribution easier? One example of this is the OpenCV distribution, where a (huge) pyd file is distributed and is the only file needed for the Python bindings to work.
Python's packaging & deployment is still a pain point for many of us. There is just no silver bullet. Here are several methods:

1. OpenCV build method

The method is described here: https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.html#bindings-basics

OpenCV generates these wrapper functions automatically from the C++ headers using some Python scripts which are located in modules/python/src2. Basically it parses the header files and generates the static PyObject keywords wherever they're needed. Once the headers are created appropriately, it just calls python setup. Honestly, it might work, but I would not advise this method.

2. Makefiles

If you already use a Makefile, just create a rule to place your lib accordingly. Example, from my own code:

setup.py

from distutils.core import setup, Extension

setup(name='sha1_hmac', version='1.0', \
      ext_modules=[Extension('sha1_hmac', library_dirs=['C:\MinGW\lib'],
                   sources= ['../tools/sha1.c','sha1_hmac.c'])])

Makefile

# The hmac generation used by the webserver is done
# using the sha1.c implementation. There is a binding needed to
# glue the C code with the python script
libsha1_hmac:
ifeq ($(OS), Windows_NT)
	$(PYTHON) setup.py build --compiler=mingw32
else
	$(PYTHON) setup.py install --home=$(CURDIR)
endif

.PHONY: webserver
webserver: libsha1_hmac
ifeq ($(OS), Windows_NT)
	mv $(shell find build -type f -name "sha1*.pyd") $(LIB)
else
	mv -f $(shell find $(LIB)/python -type f -name "sha1*.so") $(LIB)
endif
	$(PYTHON) hmac_server.py

3. Modern deployment tools

There are several new tools to deploy Python applications, namely wheels, which seem to be gaining traction. I don't use them, but it looks like they can ease your bundling problem: How can I make a Python Wheel from an existing native library?

Once it is wheeled, you can install it like this:

pip install some-package.whl
Python - list comprehension in this case is efficient?
This is the input "dirty" list in Python:

input_list = [' \n ',' data1\n ',' data2\n',' \n','data3\n'.....]

Each list element contains either empty spaces with newline chars or data with newline chars. I cleaned it up using the code below:

cleaned_up_list = [data.strip() for data in input_list if data.strip()]

which gives:

cleaned_up_list = ['data1','data2','data3','data4'..]

Does Python internally call strip() twice during the above list comprehension? Or would I have to use a for loop iteration and strip() just once if I cared about efficiency?

for data in input_list:
    clean_data = data.strip()
    if(clean_data):
        cleaned_up_list.append(clean_data)
Using your list comp strip is called twice, use a gen exp if you want to only call strip once and keep the comprehension: input_list[:] = [x for x in (s.strip() for s in input_list) if x] Input: input_list = [' \n ',' data1\n ',' data2\n',' \n','data3\n'] Output: ['data1', 'data2', 'data3'] input_list[:] will change the original list which may or may not be what you want, if you actually want to create a new list just use cleaned_up_list = .... I always found using itertools.imap in python 2 and map in python 3 instead of the generator to be the most efficient for larger inputs: from itertools import imap input_list[:] = [x for x in imap(str.strip, input_list) if x] Some timings with different approaches: In [17]: input_list = [choice(input_list) for _ in range(1000000)] In [19]: timeit filter(None, imap(str.strip, input_list)) 10 loops, best of 3: 115 ms per loop In [20]: timeit list(ifilter(None,imap(str.strip,input_list))) 10 loops, best of 3: 110 ms per loop In [21]: timeit [x for x in imap(str.strip,input_list) if x] 10 loops, best of 3: 125 ms per loop In [22]: timeit [x for x in (s.strip() for s in input_list) if x] 10 loops, best of 3: 145 ms per loop In [23]: timeit [data.strip() for data in input_list if data.strip()] 10 loops, best of 3: 160 ms per loop In [24]: %%timeit ....: cleaned_up_list = [] ....: for data in input_list: ....: clean_data = data.strip() ....: if clean_data: ....: cleaned_up_list.append(clean_data) ....: 10 loops, best of 3: 150 ms per loop In [25]: In [25]: %%timeit ....: cleaned_up_list = [] ....: append = cleaned_up_list.append ....: for data in input_list: ....: clean_data = data.strip() ....: if clean_data: ....: append(clean_data) ....: 10 loops, best of 3: 123 ms per loop The fastest approach is actually itertools.ifilter combined with itertools.imap closely followed by filterwith imap. Removing the need to reevaluate the function reference list.append each iteration is more efficient, if you were stuck with a loop and wanted the most efficient approach then it is a viable alternative.
Vagrant Not Starting Up. User that created VM doesn't match current user
I was trying to start up my vagrant machine, so I navigated to the folder where my vagrantfile is, and used: vagrant up && vagrant ssh but I got the following error message: The VirtualBox VM was created with a user that doesn't match the current user running Vagrant. VirtualBox requires that the same user be used to manage the VM that was created. Please re-run Vagrant with that user. This is not a Vagrant issue. The UID used to create the VM was: 0 Your UID is: 501 I also tried with sudo, but that didn't work either. Do I need to switch UID's? And how would I do this?
I ran into the same problem today. I edited my UID by opening the file .vagrant/machines/default/virtualbox/creator_uid and changing the 501 to a 0. After I saved the file, the command vagrant up worked like a champ.
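The same fix as a one-liner from the project directory (the path may differ for other providers or machine names):

echo 0 > .vagrant/machines/default/virtualbox/creator_uid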
python requests ssl handshake failure
Every time I try to do:
requests.get('https://url')
I get this message:
>>> import requests
>>> requests.get('https://reviews.gethuman.com/companies')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 55, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send
    raise SSLError(e)
requests.exceptions.SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
I tried everything: updating my requests, updating my ssl, but nothing changes. I am using Python 2.7.6 and can't change this.
I resolved the problem: in the end I updated my Ubuntu from 14.04 to 14.10 and the problem was solved. But on the older version of Ubuntu and Python, I installed these libs and it seemed to fix all my problems:
sudo apt-get install python-dev libssl-dev libffi-dev
sudo pip2.7 install -U pyopenssl==0.13.1 pyasn1 ndg-httpsclient
If you don't have pip2.7 installed you can use pip instead.
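Once the libraries are in place, a quick sanity check could look like this sketch (reusing the URL from the question; a printed 200 means the handshake now succeeds):
import requests
print(requests.get('https://reviews.gethuman.com/companies').status_code)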
Python: Feed and parse stream of data to and from external program with additional input and output files
The problem: I have a poorly designed Fortran program (I cannot change it, I'm stuck with it) which takes text input from stdin and other input files, and writes text output results to stdout and other output files. The size of input and output is quite large, and I would like to avoid writing to the hard drive (slow operation). I have written a function that iterates over the lines of the several input files, and I also have parsers for the multiple outputs. I don't really know if the program first reads all the input and then starts to output, or starts outputting while reading the input.
The goal: To have a function that feeds the external program with what it wants, and parses the output as it comes from the program, without writing data to text files on the hard drive.
Research: The naive way using files is:
from subprocess import PIPE, Popen

def execute_simple(cmd, stdin_iter, stdout_parser, input_files, output_files):
    for filename, file_iter in input_files.iteritems():
        with open(filename ,'w') as f:
            for line in file_iter:
                f.write(line + '\n')
    p_sub = Popen(
        shlex.split(cmd),
        stdin = PIPE,
        stdout = open('stdout.txt', 'w'),
        stderr = open('stderr.txt', 'w'),
        bufsize=1
    )
    for line in stdin_iter:
        p_sub.stdin.write(line + '\n')
    p_sub.stdin.close()
    p_sub.wait()

    data = {}
    for filename, parse_func in output_files.iteritems():
        # The stdout.txt and stderr.txt is included here
        with open(filename,'r') as f:
            data[filename] = parse_func( iter(f.readline, b'') )
    return data
I have tried to use the subprocess module to execute the external program. The additional input/output files are handled with named pipes and multiprocessing. I want to feed stdin with an iterator (which returns the lines for input), save the stderr in a list, and parse the stdout as it comes from the external program. The input and output can be quite large, so using communicate is not feasible.
I have a parser of the format:
def parser(iterator):
    for line in iterator:
        # Do something
        if condition:
            break
    some_other_function(iterator)
    return data
I looked at this solution using select to choose the appropriate stream, however I don't know how to make it work with my stdout parser and how to feed the stdin. I also looked at the asyncio module, but as far as I can see I will have the same problem with the parsing of stdout.
You should use named pipes for all input and output to the Fortran program to avoid writing to disk. Then, in your consumer, you can use threads to read from each of the program's output sources and add the information to a Queue for in-order processing.
To model this, I created a Python app daemon.py that reads from standard input and returns the square root until EOF. It logs all input to a log file specified as a command-line argument and prints the square root to stdout and all errors to stderr. I think it simulates your program (of course the number of output files is only one, but it can be scaled). You can view the source code for this test application here. Note the explicit call to stdout.flush(). By default, standard output is buffered, which means it would be output at the end and messages would not arrive in order. I hope your Fortran application does not buffer its output. I believe that my sample application will probably not run on Windows, due to a Unix-only use of select, which shouldn't matter in your case.
I have my consumer application which starts the daemon application as a subprocess, with stdin, stdout and stderr redirected to subprocess.PIPEs. Each of these pipes is given to a different thread, one to give input, and three to handle the log file, errors and standard output respectively. They all add their messages to a shared Queue which your main thread reads from and sends to your parser.
This is my consumer's code:
import os, random, time
import subprocess
import threading
import Queue
import atexit

def setup():
    # make a named pipe for every file the program should write
    logfilepipe='logpipe'
    os.mkfifo(logfilepipe)

def cleanup():
    # put your named pipes here to get cleaned up
    logfilepipe='logpipe'
    os.remove(logfilepipe)

# run our cleanup code no matter what - avoid leaving pipes laying around
# even if we terminate early with Ctrl-C
atexit.register(cleanup)

# My example iterator that supplies input for the program. You already have an iterator
# so don't worry about this. It just returns a random input from the sample_data list
# until the maximum number of iterations is reached.
class MyIter():
    sample_data=[0,1,2,4,9,-100,16,25,100,-8,'seven',10000,144,8,47,91,2.4,'^',56,18,77,94]
    def __init__(self, numiterations=1000):
        self.numiterations=numiterations
        self.current = 0
    def __iter__(self):
        return self
    def next(self):
        self.current += 1
        if self.current > self.numiterations:
            raise StopIteration
        else:
            return random.choice(self.__class__.sample_data)

# Your parse_func function - I just print it out with a [tag] showing its source.
def parse_func(source,line):
    print "[%s] %s" % (source,line)

# Generic function for sending standard input to the program.
# p - a process handle returned by subprocess
def input_func(p, queue):
    # run the command with output redirected
    for line in MyIter(30): # Limit for testing purposes
        time.sleep(0.1) # sleep a tiny bit
        p.stdin.write(str(line)+'\n')
        queue.put(('INPUT', line))
    p.stdin.close()
    p.wait()
    # Once our process has ended, tell the main thread to quit
    queue.put(('QUIT', True))

# Generic function for reading output from the program. source can either be a
# named pipe identified by a string, or subprocess.PIPE for stdout and stderr.
def read_output(source, queue, tag=None):
    print "Starting to read output for %r" % source
    if isinstance(source,str):
        # Is a file or named pipe, so open it
        source=open(source, 'r') # open file with string name
    line = source.readline()
    # enqueue and read lines until EOF
    while line != '':
        queue.put((tag, line.rstrip()))
        line = source.readline()

if __name__=='__main__':
    cmd='daemon.py'

    # set up our FIFOs instead of using files - put file names into setup() and cleanup()
    setup()
    logfilepipe='logpipe'

    # Message queue for handling all output, whether it's stdout, stderr, or a file output by our command
    lq = Queue.Queue()

    # open the subprocess for command
    print "Running command."
    p = subprocess.Popen(['/path/to/'+cmd,logfilepipe],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)

    # Start threads to handle the input and output
    threading.Thread(target=input_func, args=(p, lq)).start()
    threading.Thread(target=read_output, args=(p.stdout, lq, 'OUTPUT')).start()
    threading.Thread(target=read_output, args=(p.stderr, lq, 'ERRORS')).start()

    # open a thread to read any other output files (e.g. log file) as named pipes
    threading.Thread(target=read_output, args=(logfilepipe, lq, 'LOG')).start()

    # Now combine the results from our threads to do what you want
    run=True
    while(run):
        (tag, line) = lq.get()
        if tag == 'QUIT':
            run=False
        else:
            parse_func(tag, line)
My iterator returns a random input value (some of which are junk to cause errors). Yours should be a drop-in replacement. The program will run until the end of its input and then wait for the subprocess to complete before enqueueing a QUIT message to your main thread. My parse_func is obviously super simple, simply printing out the message and its source, but you should be able to work with something. The function to read from an output source is designed to work with both PIPEs and strings - don't open the pipes on your main thread because they will block until input is available. So for file readers (e.g. reading log files), it's better to have the child thread open the file and block. However, we spawn the subprocess on the main thread so we can pass the handles for stdin, stdout and stderr to their respective child threads.
Based partially on this Python implementation of multitail.
How do I properly use connection pools in redis?
It's not clear to me how connection pools work, and how to properly use them. I was hoping someone could elaborate. I've sketched out my use case below:
settings.py:
import redis

def get_redis_connection():
    return redis.StrictRedis(host='localhost', port=6379, db=0)
task1.py:
import settings

connection = settings.get_redis_connection()

def do_something1():
    return connection.hgetall(...)
task2.py:
import settings

connection = settings.get_redis_connection()

def do_something1():
    return connection.hgetall(...)
etc.
Basically I have a settings.py file that returns redis connections, and several different task files that get the redis connections and then run operations. So each task file has its own redis instance (which presumably is very expensive). What's the best way of optimizing this process? Is it possible to use connection pools for this example? Is there a more efficient way of setting up this pattern?
For our system, we have over a dozen task files following this same pattern, and I've noticed our requests slowing down.
Thanks
Redis-py provides a connection pool for you from which you can retrieve a connection. Connection pools create a set of connections which you can use as needed (and when done, the connection is returned to the connection pool for further reuse). Trying to create connections on the fly without discarding them (i.e. not using a pool or not using the pool correctly) will leave you with way too many connections to redis (until you hit the connection limit).
You could choose to set up the connection pool in an init function and make the pool global (you can look at other options if uncomfortable with globals).
redis_pool = None

def init():
    global redis_pool
    print("PID %d: initializing redis pool..." % os.getpid())
    redis_pool = redis.ConnectionPool(host='10.0.0.1', port=6379, db=0)
You can then retrieve a connection from the pool like this:
redis_conn = redis.Redis(connection_pool=redis_pool)
Also, I am assuming you are using hiredis along with redis-py as it should improve performance in certain cases. Have you also checked the number of connections open to the redis server with your existing setup? It most likely is quite high. You can use the INFO command to get that information:
redis-cli info
Check for the Clients section, in which you will see the "connected_clients" field that tells you how many connections you have open to the redis server at that instant.
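Applied to the layout from the question, a minimal sketch could look like this - the pool lives at module level in settings.py, so every task file that imports it shares one pool instead of opening its own connections:
# settings.py
import redis

# created once per process; connections are borrowed and returned automatically
redis_pool = redis.ConnectionPool(host='localhost', port=6379, db=0)

def get_redis_connection():
    # cheap to call from each task file: the client just wraps the shared pool
    return redis.Redis(connection_pool=redis_pool)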
What is a Pythonic way for Dependency Injection?
Introduction
For Java, Dependency Injection works as pure OOP, i.e. you provide an interface to be implemented and in your framework code accept an instance of a class that implements the defined interface.
Now for Python, you are able to do it the same way, but I think that method is too much overhead in the case of Python. So then how would you implement it in the Pythonic way?
Use Case
Say this is the framework code:
class FrameworkClass():
    def __init__(self, ...):
        ...
    def do_the_job(self, ...):
        # some stuff
        # depending on some external function
The Basic Approach
The most naive (and maybe the best?) way is to require the external function to be supplied into the FrameworkClass constructor, and then be invoked from the do_the_job method.
Framework Code:
class FrameworkClass():
    def __init__(self, func):
        self.func = func
    def do_the_job(self, ...):
        # some stuff
        self.func(...)
Client Code:
def my_func():
    # my implementation

framework_instance = FrameworkClass(my_func)
framework_instance.do_the_job(...)
Question
The question is short. Is there any better commonly used Pythonic way to do this? Or maybe any libraries supporting such functionality?
UPDATE: Concrete Situation
Imagine I develop a micro web framework, which handles authentication using tokens. This framework needs a function that takes some ID obtained from the token and gets the user corresponding to that ID.
Obviously, the framework does not know anything about users or any other application specific logic, so the client code must inject the user getter functionality into the framework to make the authentication work.
See Raymond Hettinger - Super considered super! - PyCon 2015 for an argument about how to use super and multiple inheritance instead of DI. If you don't have time to watch the whole video, jump to minute 15 (but I'd recommend watching all of it).
Here is an example of how to apply what's described in this video to your example:
Framework Code:
class TokenInterface():
    def getUserFromToken(self, token):
        raise NotImplementedError

class FrameworkClass(TokenInterface):
    def do_the_job(self, ...):
        # some stuff
        self.user = super().getUserFromToken(...)
Client Code:
class SQLUserFromToken(TokenInterface):
    def getUserFromToken(self, token):
        # load the user from the database
        return user

class ClientFrameworkClass(FrameworkClass, SQLUserFromToken):
    pass

framework_instance = ClientFrameworkClass()
framework_instance.do_the_job(...)
This will work because the Python MRO will guarantee that the getUserFromToken client method is called (if super() is used). The code will have to change if you're on Python 2.x.
One added benefit here is that this will raise an exception if the client does not provide an implementation.
Of course, this is not really dependency injection, it's multiple inheritance and mixins, but it is a Pythonic way to solve your problem.
Making SVM run faster in python
Using the code below for svm in python:
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto'))
clf.fit(X, y)
proba = clf.predict_proba(X)
But it is taking a huge amount of time.
Actual Data Dimensions:
train-set (1422392,29)
test-set (233081,29)
How can I speed it up (parallel or some other way)? Please help. I have already tried PCA and downsampling. I have 6 classes.
Edit: Found http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html but I wish for probability estimates and it seems not to do so for SVM.
Edit:
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC,LinearSVC
from sklearn.linear_model import SGDClassifier
import joblib
import numpy as np
from sklearn import grid_search
import multiprocessing
import numpy as np
import math

def new_func(a): #converts array(x) elements to (1/(1 + e(-x)))
    a=1/(1 + math.exp(-a))
    return a

if __name__ == '__main__':
    iris = datasets.load_iris()
    cores=multiprocessing.cpu_count()-2
    X, y = iris.data, iris.target #loading dataset

    C_range = 10.0 ** np.arange(-4, 4); #c value range
    param_grid = dict(estimator__C=C_range.tolist())

    svr = OneVsRestClassifier(LinearSVC(class_weight='auto'),n_jobs=cores) ################LinearSVC Code faster
    #svr = OneVsRestClassifier(SVC(kernel='linear', probability=True, ##################SVC code slow
    #   class_weight='auto'),n_jobs=cores)

    clf = grid_search.GridSearchCV(svr, param_grid,n_jobs=cores,verbose=2) #grid search
    clf.fit(X, y) #training svm model

    decisions=clf.decision_function(X) #outputs decision functions
    #prob=clf.predict_proba(X) #only for SVC outputs probabilities
    print decisions[:5,:]
    vecfunc = np.vectorize(new_func)
    prob=vecfunc(decisions) #converts decision to (1/(1 + e(-x)))
    print prob[:5,:]
Edit 2: The answer by user3914041 yields very poor probability estimates.
If you want to stick with SVC as much as possible and train on the full dataset, you can use ensembles of SVCs that are trained on subsets of the data to reduce the number of records per classifier (which apparently has quadratic influence on complexity). Scikit supports that with the BaggingClassifier wrapper. That should give you similar (if not better) accuracy compared to a single classifier, with much less training time. The training of the individual classifiers can also be set to run in parallel using the n_jobs parameter.
Alternatively, I would also consider using a Random Forest classifier - it supports multi-class classification natively, it is fast and gives pretty good probability estimates when min_samples_leaf is set appropriately.
I did a quick test on the iris dataset blown up 100 times with an ensemble of 10 SVCs, each one trained on 10% of the data. It is more than 10 times faster than a single classifier. These are the numbers I got on my laptop:
Single SVC: 45s
Ensemble SVC: 3s
Random Forest Classifier: 0.5s
See below the code that I used to produce the numbers:
import time
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

iris = datasets.load_iris()
X, y = iris.data, iris.target

X = np.repeat(X, 100, axis=0)
y = np.repeat(y, 100, axis=0)

start = time.time()
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto'))
clf.fit(X, y)
end = time.time()
print "Single SVC", end - start, clf.score(X,y)
proba = clf.predict_proba(X)

n_estimators = 10
start = time.time()
clf = OneVsRestClassifier(BaggingClassifier(SVC(kernel='linear', probability=True, class_weight='auto'),
                                            max_samples=1.0 / n_estimators, n_estimators=n_estimators))
clf.fit(X, y)
end = time.time()
print "Bagging SVC", end - start, clf.score(X,y)
proba = clf.predict_proba(X)

start = time.time()
clf = RandomForestClassifier(min_samples_leaf=20)
clf.fit(X, y)
end = time.time()
print "Random Forest", end - start, clf.score(X,y)
proba = clf.predict_proba(X)
If you want to make sure that each record is used only once for training in the BaggingClassifier, you can set the bootstrap parameter to False.
How to use Java/Scala function from an action or a transformation?
Background
My original question here was Why using DecisionTreeModel.predict inside map function raises an exception? and is related to How to generate tuples of (original lable, predicted label) on Spark with MLlib?
When we use the Scala API, a recommended way of getting predictions for RDD[LabeledPoint] using DecisionTreeModel is to simply map over the RDD:
val labelAndPreds = testData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
Unfortunately a similar approach in PySpark doesn't work so well:
labelsAndPredictions = testData.map(
    lambda lp: (lp.label, model.predict(lp.features)))
labelsAndPredictions.first()
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforamtion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Instead of that, the official documentation recommends something like this:
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
So what is going on here? There is no broadcast variable here, and the Scala API defines predict as follows:
/**
 * Predict values for a single data point using the model trained.
 *
 * @param features array representing a single data point
 * @return Double prediction from the trained model
 */
def predict(features: Vector): Double = {
  topNode.predict(features)
}

/**
 * Predict values for the given data set using the model trained.
 *
 * @param features RDD representing data points to be predicted
 * @return RDD of predictions for each of the given data points
 */
def predict(features: RDD[Vector]): RDD[Double] = {
  features.map(x => predict(x))
}
so at least at first glance calling it from an action or transformation is not a problem, since prediction seems to be a local operation.
Explanation
After some digging I figured out that the source of the problem is the JavaModelWrapper.call method invoked from DecisionTreeModel.predict. It accesses the SparkContext, which is required to call the Java function:
callJavaFunc(self._sc, getattr(self._java_model, name), *a)
Question
In the case of DecisionTreeModel.predict there is a recommended workaround and all the required code is already a part of the Scala API, but is there any elegant way to handle a problem like this in general?
The only solutions I can think of right now are rather heavyweight:
pushing everything down to the JVM either by extending Spark classes through Implicit Conversions or adding some kind of wrappers
using the Py4j gateway directly
Communication using the default Py4J gateway is simply not possible. To understand why we have to take a look at the following diagram from the PySpark Internals document [1]:
Since the Py4J gateway runs on the driver it is not accessible to the Python interpreters, which communicate with JVM workers through sockets (see for example PythonRDD / rdd.py).
Theoretically it could be possible to create a separate Py4J gateway for each worker, but in practice it is unlikely to be useful. Ignoring issues like reliability, Py4J is simply not designed to perform data intensive tasks.
Are there any workarounds?
Using the Spark SQL Data Sources API to wrap JVM code.
Pros: Supported, high level, doesn't require access to the internal PySpark API
Cons: Relatively verbose and not very well documented, limited mostly to the input data
Operating on DataFrames using Scala UDFs.
Pros: Easy to implement (see Spark: How to map Python with Scala or Java User Defined Functions?), no data conversion between Python and Scala if data is already stored in a DataFrame, minimal access to Py4J
Cons: Requires access to the Py4J gateway and internal methods, limited to Spark SQL, hard to debug, not supported
Creating a high level Scala interface in a similar way to how it is done in MLlib.
Pros: Flexible, ability to execute arbitrary complex code. It can be done either directly on RDDs (see for example MLlib model wrappers) or with DataFrames (see How to use a Scala class inside Pyspark). The latter solution seems to be much more friendly since all ser-de details are already handled by the existing API.
Cons: Low level, requires data conversion, same as UDFs requires access to Py4J and internal API, not supported
Some basic examples can be found in Strings not converted when calling Scala code from a PySpark app
Using an external workflow management tool to switch between Python and Scala / Java jobs and passing data to a DFS.
Pros: Easy to implement, minimal changes to the code itself
Cons: Cost of reading / writing data (Tachyon?)
Using a shared SQLContext (see for example Apache Zeppelin or Livy) to pass data between guest languages using registered temporary tables.
Pros: Well suited for interactive analysis
Cons: Not so much for batch jobs (Zeppelin), or may require additional orchestration (Livy)
[1] Joshua Rosen. (2014, August 04) PySpark Internals. Retrieved from https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals
Trade off between code duplication and performance
Python, being the dynamic language that it is, offers multiple ways to implement the same feature. These options may vary in readability, maintainability and performance.
Even though the usual scripts that I write in Python are of a disposable nature, I now have a certain project that I am working on (academic) that must be readable, maintainable and perform reasonably well. Since I haven't done any serious coding in Python before, including any sort of profiling, I need help in deciding the balance between the three factors I mentioned above.
Here's a code snippet from one of the modules in a scientific package that I am working on. It is an n-ary Tree class with a very basic skeleton structure. This was written with inheritance and subclassing in mind.
Note: in the code below a tree is the same thing as a node. Every tree is an instance of the same class Tree.
class Tree(object):
    def __init__(self, parent=None, value=None):
        self.parent = parent
        self.value = value
        self.children = set()
The two functions below belong to this class (along with many others):
def isexternal(self):
    """Return True if this is an external tree."""
    return not bool(self.children)

def isleaf(self):
    """Return True if this is a leaf tree."""
    return not bool(self.children)
Both these functions do exactly the same thing - they are just two different names. So, why not change it to something like:
def isleaf(self):
    """Return True if this is a leaf tree."""
    return self.isexternal()
My doubts are these:
I've read that function calls in Python are rather expensive (creating new stacks for each call), but I don't know if it is a good or bad thing if one function depends on another. How will it affect maintainability? This happens many times in my code, where I call one method from another method to avoid code duplication. Is it bad practice to do this?
Here's another example of this code duplication scenario in the same class:
def isancestor(self, tree):
    """Return True if this tree is an ancestor of the specified tree."""
    return tree.parent is self or (not tree.isroot() and self.isancestor(tree.parent))

def isdescendant(self, tree):
    """Return True if this tree is a descendant of the specified tree."""
    return self.parent is tree or (not self.isroot() and self.parent.isdescendant(tree))
I could instead go for:
def isdescendant(self, tree):
    """Return True if this tree is a descendant of the specified tree."""
    return tree.isancestor(self)
Very broadly speaking, there are two types of optimization: macro optimizations and micro optimizations.
Macro optimizations include things like your choice of algorithms, deciding between different data structures, and the like - things that can have a big impact on performance and often have large ripple effects on your code base if you change your mind. Switching from a data structure with linear O(n) inserts to one with constant O(1) inserts could be a huge win and well worth the cost of doing it. Adding caching may change a dog slow algorithm into a lightning fast one.
Micro optimizations are things like eliding or inlining function calls, eliminating or adding variables, caching calculation results for a very short window, unrolling loops, etc. As a rule, you should forget about these types of optimizations and focus on the readability and maintainability of your code. The effects of micro optimizations are simply too small to be worth it.
You should only consider these types of changes after profiling your code. If you can identify a critical loop that would benefit from such an optimization, and your profiling confirms it would, and you make the change and verify the improvement worked with another round of profiling - then you should micro optimize. But until then, don't sweat the small stuff.
def isdescendant(self, tree):
    """Return True if this tree is a descendant of the specified tree."""
    return tree.isancestor(self)
I would absolutely recommend this type of code reuse. It makes it crystal clear that isdescendant is the inverse of isancestor. It ensures that both functions work the same way, so you can't inadvertently introduce a bug in one but not the other.
def isleaf(self):
    """Return True if this is a leaf tree."""
    return self.isexternal()
Here I would ask myself if isleaf and isexternal are conceptually the same. Ignoring that they're implemented the same, are they logically identical? If so, I would have one call the other. If it's just happenstance that they have the same implementation, I might duplicate the code. Can you imagine a scenario where you would want to change one function but not the other? That would point towards duplication.
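To make the "profile first" advice concrete, here is a minimal sketch of how you could measure the cost of the extra call with timeit before deciding it matters (absolute numbers are machine-dependent):
import timeit

setup = """
class Tree(object):
    def __init__(self):
        self.children = set()
    def isexternal(self):
        return not bool(self.children)
    def isleaf(self):
        return self.isexternal()
t = Tree()
"""
# if the two timings differ by nanoseconds per call, the reuse is free in practice
print(timeit.timeit('t.isexternal()', setup=setup, number=1000000))
print(timeit.timeit('t.isleaf()', setup=setup, number=1000000))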
Lost important .py file (overwritten as 0byte file), but the old version still LOADED IN IPYTHON as module -- can it be retrieved?
Due to my stupidity, while managing several different screen sessions with vim open in many of them, in the process of trying to "organize" my sessions I somehow managed to overwrite a very important .py script with a 0Byte file. HOWEVER, I have an ipython instance open that, when running that same .py file as a module, still remembers the code that used to be there! So did I just learn a hard lesson about backups (my last one was done by vim about a week ago, which would leave me with a lot of work to do), or is there any possible, conceivable way to retrieve the .py file from an already loaded module? I probably deserve this for being so cavalier, but I'm seriously desperate here.
As noted in comments, inspect.getsource will not work because it depends on the original file (ie, module.__file__).
Best option: check to see if there's a .pyc file (ex, foo.pyc should be beside foo.py). If there is, you can use Decompile Python 2.7 .pyc to decompile it.
The inspect module also caches the source. You may be able to get lucky and use inspect.getsource(module), or inspect.getsourcelines(module.function) if it has been called in the past.
Otherwise you'll need to rebuild the module "manually" by inspecting the exports (ie, module.__globals__). Constants and whatnot are obvious, and for functions you can use func.func_name to get the name, func.__doc__ to get the docstring, inspect.getargspec(func) to get the arguments, and func.func_code to get details about the code: co_firstlineno will get the line number, then co_code will get the code.
There's more on decompiling that here: Exploring and decompiling python bytecode
For example, to use uncompyle2:
>>> def foo():
...     print "Hello, world!"
...
>>> from StringIO import StringIO
>>> import uncompyle2
>>> out = StringIO()
>>> uncompyle2.uncompyle("2.7", foo.func_code, out=out)
>>> print out.getvalue()
print 'Hello, world!'
But, no - I'm not aware of any more straightforward method to take a module and get the source code back out.
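Before reaching for a decompiler, the "get lucky" route above is worth one try: if the source lines are still in Python's line cache, inspect can dump them straight back out. A minimal sketch (mymodule is a placeholder for your actual loaded module):
import inspect
import mymodule  # placeholder: the module still loaded in your IPython session

try:
    with open('recovered.py', 'w') as f:
        f.write(inspect.getsource(mymodule))
except (IOError, TypeError):
    print('source not cached; fall back to the .pyc / decompile route')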
How to test Pl/Python PostgreSQL procedures with Travis CI?
I'm trying to set up CI for some PL/Python PostgreSQL procedures in Travis CI. I've tried several ways:
1) With the legacy infrastructure I tried to just assume that PL/Python is already installed, but it did not succeed:
The command "psql -U postgres -c 'CREATE EXTENSION plpythonu;'" exited with 1.
0.01s$ psql -U postgres -d test -c 'CREATE LANGUAGE plpythonu;'
ERROR: could not access file "$libdir/plpython2": No such file or directory
2) I tried adding the sudo apt-get update && sudo apt-get -y install postgresql-plpython-9.4 commands at the beginning. That failed too, because this command initiated replacement of PostgreSQL 9.4, which comes already installed in the Travis environment. Travis build.
3) I also tried the container-based infrastructure with these lines in the config:
addons:
  postgresql: "9.4"
  apt:
    packages:
      - postgresql-plpython-9.4
No success either. What is a good way to test PL/Python procedures in Travis CI?
I was able to get the python-tempo build working with the following .travis.yml:
sudo: required
language: python
before_install:
  - sudo apt-get -qq update
  - sudo /etc/init.d/postgresql stop
  - sudo apt-get install -y postgresql-9.4
  - sudo apt-get install -y postgresql-contrib-9.4 postgresql-plpython-9.4
  - sudo -u postgres createdb test
  - sudo -u postgres createlang plpython2u test
  - sudo pip install jinja2
script:
  - >
    sudo -u postgres psql -d test -c
    'CREATE OR REPLACE FUNCTION py_test() RETURNS void LANGUAGE plpython2u AS $$ import jinja2 $$;'
  - sudo -u postgres psql -d test -c 'SELECT py_test();'
Your legacy configuration attempts had a variety of issues, including not stopping the existing PostgreSQL 9.1 instance before installing 9.4 and not specifying the plpython language properly. I believe some commands were also not being run as the correct user. All of the issues are resolved by the above configuration. There may be ways in which this configuration can be improved, but I stopped once I got it working.
The container-based configuration won't work because postgresql-plpython-9.4 is not currently in the whitelist of pre-approved packages. However, postgresql-plpython-9.5 is, so if you want to migrate to a container-based configuration, you can either try following the package approval process for postgresql-plpython-9.4 or wait for the GA release of PostgreSQL 9.5 and try migrating then.
How to handle an exhausted iterator?
While searching the Python documentation I found the equivalent Python implementation of Python's built-in zip() function. Instead of catching a StopIteration exception, which signals that there are no further items produced by the iterator, the author(s) use an if statement to check whether the returned default value from next() equals object() ("sentinel") and stop the generator:
def zip(*iterables):
    # zip('ABCD', 'xy') --> Ax By
    sentinel = object()
    iterators = [iter(it) for it in iterables]
    while iterators:
        result = []
        for it in iterators:
            elem = next(it, sentinel)
            if elem is sentinel:
                return
            result.append(elem)
        yield tuple(result)
I now wonder if there is any difference between catching the exception and using an if statement as in the Python docs? Or better, as @hiro protagonist pointed out: what's wrong with using a try statement, considering EAFP (easier to ask for forgiveness than permission) in Python?
def zip(*iterables):
    # zip('ABCD', 'xy') --> Ax By
    iterators = [iter(it) for it in iterables]
    while iterators:
        result = []
        for it in iterators:
            try:
                elem = next(it)
            except StopIteration:
                return
            result.append(elem)
        yield tuple(result)
Also, as Stoyan Dekov mentioned, "A try/except block is extremely efficient if no exceptions are raised. Actually catching an exception is expensive." (see the docs for more information). But an exception would only occur once, namely as soon as the iterator is exhausted. So would exception handling be the better solution in this case?
You mean as opposed to this?
def zip2(*iterables):
    # zip('ABCD', 'xy') --> Ax By
    iterators = [iter(it) for it in iterables]
    while iterators:
        result = []
        for it in iterators:
            try:
                elem = next(it)
            except StopIteration:
                return
            result.append(elem)
        yield tuple(result)
Interesting question... I'd have preferred this alternative version - especially considering EAFP (easier to ask for forgiveness than permission), even if try/except is slower than the if statement; this happens once only - as soon as the first iterator is exhausted.
It may be worth noting that this is not the actual implementation in Python; just an implementation that is equivalent to the real one.
UPDATE according to comments: note that PEP 479 suggests to return from the generator and not raise StopIteration.
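If you want to convince yourself that the one-off exception is negligible, a rough micro-benchmark could look like this sketch - it assumes both versions from the post are defined side by side as zip_sentinel and zip_except (hypothetical names), and the absolute numbers will vary by machine:
import timeit

setup = 'from __main__ import zip_sentinel, zip_except; data = list(range(10000))'
# StopIteration fires once per zip() call, not once per element,
# so the two timings should come out nearly identical
print(timeit.timeit('list(zip_sentinel(data, data))', setup=setup, number=100))
print(timeit.timeit('list(zip_except(data, data))', setup=setup, number=100))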
How do you install mysql-connector-python (development version) through pip?
I have a virtualenv in which I am running Django 1.8 with Python 3.4.
I am trying to get support for MySQL, however I am having trouble getting the different connectors to work. I have always used mysql-connector-python with Django 1.7 and would like to continue using it.
The development version of mysql-connector-python (2.1.2) seems to have support for Django 1.8 now.
How do I install the development version using pip install (within my virtualenv)? I have tried running the following command:
pip install mysql-connector-python==2.1.2 --allow-external mysql-connector-python
but the 2.1.2 developer version is not available there:
could not find a version that satisfies the requirement mysql-connector-python==2.1.2 (from versions: 1.1.4, 1.1.5, 1.1.6, 1.2.2, 1.2.3, 2.0.1, 2.0.2, 2.0.3, 2.0.4)
Some insecure and unverifiable files were ignored (use --allow-unverified mysql-connector-python to allow).
No matching distribution found for mysql-connector-python==2.1.2
I have tried downloading the file directly from: http://dev.mysql.com/downloads/connector/python/
...but for some reason after installation the connector is not available within my virtualenv.
Help please :-) Thank you.
EDIT: I have tried pip install mysql-python but it is not supported by Python 3.4.
I agree, the Debian packages appear to be broken for at least Ubuntu 14.04. The apt-get version and the pip version do not include the 2.1.x releases.
To get it done, I had to grab the source:
$ git clone https://github.com/mysql/mysql-connector-python.git
$ cd mysql-connector-python
$ python ./setup.py build
$ sudo python ./setup.py install
...
>>> import mysql.connector as msc
>>> msc.__version__
'2.1.3'
>>>
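If you want the install to stay inside the virtualenv instead of a system-wide sudo python ./setup.py install, pip can usually build straight from the same repository (untested here, but it runs the same setup.py under the hood):
pip install git+https://github.com/mysql/mysql-connector-python.git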