Install wxPython on Mac OS X Mavericks
|
I am on a Macbook Air, running Mavericks. Today I downloaded wxPython via their website, however, when I click on the install package I get:
"wxPython3.0-osx-cocoa-py2.7.pkg" is damaged and can't be opened. You should eject the disk image.
Anyone have any ideas to get around this? Thanks.
|
According to #14523, the problem is that the package is not signed (and it's an old-style package, which isn't fully supported anymore, which is why you get the wrong error message, and the usual workaround of context-clicking Open doesn't work).
There are two workarounds:
First, you can turn off Gatekeeper (in System Preferences, Security & Privacy, General, select the "Allow apps downloaded from: … Anywhere" radio button), install wxPython, then turn Gatekeeper back on.
Alternatively, you can install from the command line:
sudo installer -pkg /path/to/wxPython.pkg -target /
|
Creating a pandas DataFrame from columns of other DataFrames with similar indexes
|
I have 2 DataFrames df1 and df2 with the same column names ['a','b','c'] and indexed by dates.
The date index can have similar values.
I would like to create a DataFrame df3 with only the data from columns ['c'] renamed respectively 'df1' and 'df2' and with the correct date index. My problem is that I cannot get how to merge the index properly.
df1 = pd.DataFrame(np.random.randn(5,3), index=pd.date_range('01/02/2014',periods=5,freq='D'), columns=['a','b','c'] )
df2 = pd.DataFrame(np.random.randn(8,3), index=pd.date_range('01/01/2014',periods=8,freq='D'), columns=['a','b','c'] )
df1
a b c
2014-01-02 0.580550 0.480814 1.135899
2014-01-03 -1.961033 0.546013 1.093204
2014-01-04 2.063441 -0.627297 2.035373
2014-01-05 0.319570 0.058588 0.350060
2014-01-06 1.318068 -0.802209 -0.939962
df2
a b c
2014-01-01 0.772482 0.899337 0.808630
2014-01-02 0.518431 -1.582113 0.323425
2014-01-03 0.112109 1.056705 -1.355067
2014-01-04 0.767257 -2.311014 0.340701
2014-01-05 0.794281 -1.954858 0.200922
2014-01-06 0.156088 0.718658 -1.030077
2014-01-07 1.621059 0.106656 -0.472080
2014-01-08 -2.061138 -2.023157 0.257151
The df3 DataFrame should have the following form :
df3
df1 df2
2014-01-01 NaN 0.808630
2014-01-02 1.135899 0.323425
2014-01-03 1.093204 -1.355067
2014-01-04 2.035373 0.340701
2014-01-05 0.350060 0.200922
2014-01-06 -0.939962 -1.030077
2014-01-07 NaN -0.472080
2014-01-08 NaN 0.257151
But with NaN in the df1 column, as the date index of df2 is wider. (In this example, I would get NaN for the following dates: 2014-01-01, 2014-01-07 and 2014-01-08.)
Thanks for your help.
|
You can use concat:
In [11]: pd.concat([df1['c'], df2['c']], axis=1, keys=['df1', 'df2'])
Out[11]:
df1 df2
2014-01-01 NaN -0.978535
2014-01-02 -0.106510 -0.519239
2014-01-03 -0.846100 -0.313153
2014-01-04 -0.014253 -1.040702
2014-01-05 0.315156 -0.329967
2014-01-06 -0.510577 -0.940901
2014-01-07 NaN -0.024608
2014-01-08 NaN -1.791899
[8 rows x 2 columns]
|
Why is float() faster than int()?
|
Experimenting with some code and doing some microbenchmarks I just found out that using the float function on a string containing an integer number is a factor 2 faster than using int on the same string.
$ python -m timeit "int('1')"
1000000 loops, best of 3: 0.548 usec per loop
$ python -m timeit "float('1')"
1000000 loops, best of 3: 0.273 usec per loop
It gets even stranger when testing int(float('1')), whose runtime is shorter than the bare int('1').
$ python -m timeit "int(float('1'))"
1000000 loops, best of 3: 0.457 usec per loop
I tested the code under Windows 7 running cPython 2.7.6 and Linux Mint 16 with cPython 2.7.6.
I should add that only Python 2 is affected; Python 3 shows a much smaller (negligible) difference between the runtimes.
I know that the information gained from such microbenchmarks is easy to misuse, but I'm curious why there is such a difference in the functions' runtime.
I tried to find the implementations of int and float, but I cannot find them in the sources.
|
int accepts many bases and forms of input: plain digits, 0x… (hex), 0b… (binary), 0o… (octal) prefixes, and the result may have to overflow into a long; it takes time to determine the base and check these cases.
If the base is set explicitly, it saves a lot of time:
python -m timeit "int('1',10)"
1000000 loops, best of 3: 0.252 usec per loop
python -m timeit "int('1')"
1000000 loops, best of 3: 0.594 usec per loop
As @Martijn Pieters mentions, the relevant code is in Objects/intobject.c (int_new) and Objects/floatobject.c (float_new).
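As a rough illustration of that prefix handling, base 0 tells int() to infer the base from the string (a sketch; the 0b/0o forms work on Python 2.6+):
>>> int('10', 0)    # decimal
10
>>> int('0x10', 0)  # hexadecimal
16
>>> int('0b10', 0)  # binary
2
>>> int('0o10', 0)  # octal
8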
|
How to use the @shared_task decorator for class based tasks
|
As seen in the documentation, the @shared_task decorator lets you create tasks without having any concrete app instance. The given examples show how to decorate a function-based task.
How to decorate a class based task?
|
The documentation you linked to says:
The @shared_task decorator lets you create tasks without having any concrete app instance:
As far as I can tell, the documentation is misleading, and should say:
The @shared_task decorator lets you create tasks that can be used by any app(s).
In fact, any Task must be attached to an app instance. My evidence comes from the celery source file celery/app/builtins.py:
def shared_task(constructor):
    """Decorator that specifies a function that generates a built-in task.

    The function will then be called for every new app instance created
    (lazily, so more exactly when the task registry for that app is needed).

    The function must take a single ``app`` argument.
    """
    _shared_tasks.add(constructor)
    return constructor
So it looks like you can use this decorator to create a task without having a concrete app instance, but in fact the decorated function MUST take an app argument - as the source comment says.
The next function follows:
def load_shared_tasks(app):
    """Create built-in tasks for an app instance."""
    constructors = set(_shared_tasks)
    for constructor in constructors:
        constructor(app)
You can confirm here that each function decorated by @shared_task will be invoked with an app argument.
|
What's the purpose of the "__package__" attribute in Python?
|
All I want to know is what exactly does __package__ mean? Didn't find any explanation in the official doc, even on SO.
If you could provide some examples I would be very happy.
|
See the PEP 366 and import system reference documentation:
The major proposed change is the introduction of a new module level attribute, __package__. When it is present, relative imports will be based on this attribute rather than the module __name__ attribute.
and
The module's __package__ attribute should be set. Its value must be a string, but it can be the same value as its __name__. If the attribute is set to None or is missing, the import system will fill it in with a more appropriate value. When the module is a package, its __package__ value should be set to its __name__. When the module is not a package, __package__ should be set to the empty string for top-level modules, or for submodules, to the parent package's name. See PEP 366 for further details.
So, for a module located in foo/bar/baz.py, __name__ is set to foo.bar.baz, and __package__ is set to foo.bar, while foo/bar/__init__.py will have foo.bar for both the __name__ and __package__ attributes.
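A small sketch to make this concrete (using the foo/bar/baz.py layout above):
# foo/bar/baz.py
print(__name__)     # 'foo.bar.baz' when imported via 'import foo.bar.baz'
print(__package__)  # 'foo.bar' -- explicit relative imports resolve against this
By contrast, running a module directly as a script sets __name__ to '__main__' and leaves __package__ empty (or None), which is why explicit relative imports fail in scripts unless __package__ is set, as PEP 366 describes.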
|
How to read user input until EOF?
|
My current code reads user input until line-break.
But I am trying to change that to a format where the user can write input until Ctrl+D (EOF) is pressed to end the input.
I currently do it like this:
input = raw_input ("Input: ")
But how can I change that to an EOF-Ready version?
|
Use file.read:
input_str = sys.stdin.read()
According to the documentation:
file.read([size])
Read at most size bytes from the file (less if the read hits EOF
before obtaining size bytes). If the size argument is negative or
omitted, read all data until EOF is reached.
>>> import sys
>>> isinstance(sys.stdin, file)
True
BTW, don't use input as a variable name. It shadows the built-in function input.
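A minimal sketch putting this together (the prompt text is illustrative):
import sys

sys.stdout.write("Input (end with Ctrl+D): ")
sys.stdout.flush()
data = sys.stdin.read()  # blocks until EOF: Ctrl+D on Unix, Ctrl+Z then Enter on Windows
print "Read %d characters" % len(data)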
|
Unresolved reference issue in PyCharm
|
I have a directory structure
├── simulate.py
├── src
│   ├── networkAlgorithm.py
│   ├── ...
And I can access the network module with sys.path.insert().
import sys
import os.path
sys.path.insert(0, "./src")
from networkAlgorithm import *
However, PyCharm complains that it cannot access the module. How can I teach PyCharm to resolve the reference?
|
Manually adding it as you have done is indeed one way of doing this, but there is a simpler method: simply tell PyCharm that you want to add the src folder as a source root, and then add the sources root to your Python path.
This way, you don't have to hard code things into your interpreter's settings:
Add src as a source content root:
Then make sure to add sources to your PYTHONPATH:
Now imports will be resolved:
This way, you can add whatever you want as a source root, and things will simply work. If you unmarked it as a source root however, you will get an error:
|
Django NoReverseMatch
|
I'm making a simple login app in Django 1.6 (and Python 2.7) and I get an error at the beginning that is not letting me continue.
This is the site's urls.py:
from django.conf.urls import patterns, include, url
from django.contrib import admin
import login
admin.autodiscover()
urlpatterns = patterns('',
    url(r'^$', include('login.urls', namespace='login')),
    url(r'^admin/', include(admin.site.urls)),
)
And this is login/urls.py:
from django.conf.urls import patterns, url
from login import views
urlpatterns = patterns('',
    url(r'^$', views.index, name='index'),
    url(r'^auth/', views.auth, name='auth'),
)
This is login/views.py:
from django.shortcuts import render
from django.contrib.auth import authenticate
def auth(request):
    user = authenticate(username=request.POST['username'], password=request.POST['password'])
    if user is not None:
        # the password verified for the user
        if user.is_active:
            msg = "User is valid, active and authenticated"
        else:
            msg = "The password is valid, but the account has been disabled!"
    else:
        # the authentication system was unable to verify the username and password
        msg = "The username and password were incorrect."
    return render(request, 'login/authenticate.html', {'MESSAGE': msg})

def index(request):
    return render(request, 'login/login_form.html')
I have a form that has this as action:
{% url 'login:auth' %}
And that's where the problem is, when I try to load the page, I get:
Reverse for 'auth' with arguments '()' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'$auth/']
But if I set the url pattern to
url(r'', views.auth, name='auth')
it works fine, only it sets the action as '/'.
I've been looking all around for an answer and I don't understand why it doesn't work.
I tried changing the login url pattern to url(r'^login/$', include('login.urls', namespace='login')), and it didn't change anything.
|
The problem is in the way you include the auth URLs in the main one.
Because you use both ^ and $, only the empty string matches. Drop the $.
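That is, the include line in the site's urls.py becomes (a sketch of the one-character fix):
url(r'^', include('login.urls', namespace='login')),
Now /auth/ matches the ^auth/ pattern in login/urls.py, and {% url 'login:auth' %} reverses correctly.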
|
Reading output from child process using python
|
The Context
I am using the subprocess module to start a process from python. I want to be able to access the output (stdout, stderr) as soon as it is written/buffered.
The solution must support Windows 7. I require a solution for Unix systems too but I suspect the Windows case is more difficult to solve.
The solution should support Python 2.6. I am currently restricted to Python 2.6 but solutions using later versions of Python are still appreciated.
The solution should not use third party libraries. Ideally I would love a solution using the standard library but I am open to suggestions.
The solution must work for just about any process. Assume there is no control over the process being executed.
The Child Process
For example, imagine I want to run a python file called counter.py via a subprocess. The contents of counter.py is as follows:
import sys
for index in range(10):
    # Write data to standard out.
    sys.stdout.write(str(index))
    # Push buffered data to disk.
    sys.stdout.flush()
The Parent Process
The parent process responsible for executing the counter.py example is as follows:
import subprocess
cmd = ['python', 'counter.py']
process = subprocess.Popen(
    cmd,
    bufsize=1,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
The Issue
Using the counter.py example I can access the data before the process has completed. This is great! This is exactly what I want. However, removing the sys.stdout.flush() call prevents the data from being accessed at the time I want it. This is bad! This is exactly what I don't want. My understanding is that the flush() call forces the data to be written to disk and before the data is written to disk it exists only in a buffer. Remember I want to be able to run just about any process. I do not expect the process to perform this kind of flushing but I still expect the data to be available in real time (or close to it). Is there a way to achieve this?
A quick note about the parent process. You may notice I am using bufsize=1 for line buffering. I was hoping this would cause a flush to disk for every line, but it doesn't seem to work that way. How does this argument work?
You will also notice I am using subprocess.PIPE. This is because it appears to be the only value which produces IO objects between the parent and child processes. I have come to this conclusion by looking at the Popen._get_handles method in the subprocess module (I'm referring to the Windows definition here). There are two important variables, c2pread and c2pwrite which are set based on the stdout value passed to the Popen constructor. For instance, if stdout is not set, the c2pread variable is not set. This is also the case when using file descriptors and file-like objects. I don't really know whether this is significant or not but my gut instinct tells me I would want both read and write IO objects for what I am trying to achieve - this is why I chose subprocess.PIPE. I would be very grateful if someone could explain this in more detail. Likewise, if there is a compelling reason to use something other than subprocess.PIPE I am all ears.
Method For Retrieving Data From The Child Process
import time
import subprocess
import threading
import Queue
class StreamReader(threading.Thread):
    """
    Threaded object used for reading process output stream (stdout, stderr).
    """

    def __init__(self, stream, queue, *args, **kwargs):
        super(StreamReader, self).__init__(*args, **kwargs)
        self._stream = stream
        self._queue = queue
        # Event used to terminate thread. This way we will have a chance to
        # tie up loose ends.
        self._stop = threading.Event()

    def stop(self):
        """
        Stop thread. Call this function to terminate the thread.
        """
        self._stop.set()

    def stopped(self):
        """
        Check whether the thread has been terminated.
        """
        return self._stop.isSet()

    def run(self):
        while True:
            # Flush buffered data (not sure this actually works?)
            self._stream.flush()
            # Read available data.
            for line in iter(self._stream.readline, b''):
                self._queue.put(line)
            # Breather.
            time.sleep(0.25)
            # Check whether thread has been terminated.
            if self.stopped():
                break
cmd = ['python', 'counter.py']
process = subprocess.Popen(
    cmd,
    bufsize=1,
    stdout=subprocess.PIPE,
)
stdout_queue = Queue.Queue()
stdout_reader = StreamReader(process.stdout, stdout_queue)
stdout_reader.daemon = True
stdout_reader.start()

# Read standard out of the child process whilst it is active.
while True:
    # Attempt to read available data.
    try:
        line = stdout_queue.get(timeout=0.1)
        print '%s' % line
    # If data was not read within time out period. Continue.
    except Queue.Empty:
        # No data currently available.
        pass
    # Check whether child process is still active.
    if process.poll() != None:
        # Process is no longer active.
        break

# Process is no longer active. Nothing more to read. Stop reader thread.
stdout_reader.stop()
Here I am performing the logic which reads standard out from the child process in a thread. This allows for the scenario in which the read is blocking until data is available. Instead of waiting for some potentially long period of time, we check whether there is available data, to be read within a time out period, and continue looping if there is not.
I have also tried another approach using a kind of non-blocking read. This approach uses the ctypes module to access Windows system calls. Please note that I don't fully understand what I am doing here - I have simply tried to make sense of some example code I have seen in other posts. In any case, the following snippet doesn't solve the buffering issue. My understanding is that it's just another way to combat a potentially long read time.
import os
import subprocess
import ctypes
import ctypes.wintypes
import msvcrt
cmd = ['python', 'counter.py']
process = subprocess.Popen(
    cmd,
    bufsize=1,
    stdout=subprocess.PIPE,
)

def read_output_non_blocking(stream):
    data = ''
    available_bytes = 0
    c_read = ctypes.c_ulong()
    c_available = ctypes.c_ulong()
    c_message = ctypes.c_ulong()
    fileno = stream.fileno()
    handle = msvcrt.get_osfhandle(fileno)
    # Read available data.
    buffer_ = None
    bytes_ = 0
    status = ctypes.windll.kernel32.PeekNamedPipe(
        handle,
        buffer_,
        bytes_,
        ctypes.byref(c_read),
        ctypes.byref(c_available),
        ctypes.byref(c_message),
    )
    if status:
        available_bytes = int(c_available.value)
    if available_bytes > 0:
        data = os.read(fileno, available_bytes)
        print data
    return data

while True:
    # Read standard out for child process.
    stdout = read_output_non_blocking(process.stdout)
    print stdout
    # Check whether child process is still active.
    if process.poll() != None:
        # Process is no longer active.
        break
Comments are much appreciated.
Cheers
|
At issue here is buffering by the child process. Your subprocess code already works as well as it could, but if you have a child process that buffers its output then there is nothing that subprocess pipes can do about this.
I cannot stress this enough: the buffering delays you see are the responsibility of the child process, and how it handles buffering has nothing to do with the subprocess module.
You already discovered this; this is why adding sys.stdout.flush() in the child process makes the data show up sooner; the child process uses buffered I/O (a memory cache to collect written data) before sending it down the sys.stdout pipe. [1]
Python automatically uses line-buffering when sys.stdout is connected to a terminal; the buffer flushes whenever a newline is written. When using pipes, sys.stdout is not connected to a terminal and a fixed-size buffer is used instead.
Now, the Python child process can be told to handle buffering differently; you can set an environment variable or use a command-line switch to alter how it uses buffering for sys.stdout (and sys.stderr and sys.stdin). From the Python command line documentation:
-u
Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode.
[...]
PYTHONUNBUFFERED
If this is set to a non-empty string it is equivalent to specifying the -u option.
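A minimal sketch of applying this to the counter.py example (sys.executable simply reuses the current interpreter; both variants below are equivalent):
import os
import subprocess
import sys

# Option 1: the -u switch makes the child's stdout/stderr unbuffered.
process = subprocess.Popen([sys.executable, '-u', 'counter.py'],
                           stdout=subprocess.PIPE)

# Option 2: the environment variable, useful when you can't alter the command line.
env = dict(os.environ, PYTHONUNBUFFERED='1')
process = subprocess.Popen([sys.executable, 'counter.py'],
                           stdout=subprocess.PIPE, env=env)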
If you are dealing with child processes that are not Python processes and you experience buffering issues with those, you'll need to look at the documentation of those processes to see if they can be switched to use unbuffered I/O, or be switched to more desirable buffering strategies.
One thing you could try is to use the script -c command to provide a pseudo-terminal to a child process. This is a POSIX tool, however, and is probably not available on Windows.
[1] It should be noted that when flushing a pipe, no data is 'written to disk'; all data remains entirely in memory here. I/O buffers are just memory caches to get the best performance out of I/O by handling data in larger chunks. Only if you have a disk-based file object would fileobj.flush() cause it to push any buffers to the OS, which usually means that data is indeed written to disk.
|
How to configure display output in IPython pandas
|
I'm trying to configure my IPython output in my OS X terminal, but it would seem that none of the changes I'm trying to set are taking effect. I'm trying to configure the display settings such that wider outputs like a big DataFrame will output without any truncation or as the summary info.
After importing pandas into my script, I have a few options set where I tried a whole bunch, but any one (or all, for that matter) does not seem to take effect. I'm running the script from IPython using %run. Am I doing something wrong here?
import pandas as pd
pd.set_option('display.expand_max_repr', False)
pd.set_option('display.max_columns', 30)
pd.set_option('display.width', None)
pd.set_option('display.line_width', 200)
I've looked at some threads on Stack and the pandas FAQ to no avail, even when using these under the display namespace (or without), as I've attempted here.
I understand that there are some ways around this, such as calling the to_string() or describe() methods on your output, but these are very manual and don't always work as intended; in one case, calling to_string() on a groupby object yields:
id type
106125 puzzle gameplay_id sitting_id user_id ...
106253 frames gameplay_id sitting_id user_id ...
106260 trivia gameplay_id sitting_id user_id ...
My terminal window size is more than sufficient to accommodate the width, and calling pd.util.terminal.get_terminal_size() is correctly finding the window size tuple, so it would seem that auto detecting the size isn't working either. Any insight would be appreciated!
|
Just for completeness (I'll add my comment as an answer), you missed out:
pd.options.display.max_colwidth # default is 50
this restricts the maximum width of a single column.
There are quite a few options to configure here, if you're using ipython then tab complete to find the full set of display options:
pd.options.display.<tab>
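For example, two options that commonly matter for wide output (a sketch; both options exist in pandas 0.13):
import pandas as pd

pd.options.display.max_colwidth = 100         # allow wider individual columns
pd.options.display.expand_frame_repr = False  # don't wrap wide frames across lines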
|
multiple plot in one figure in Python
|
I am new to Python and am trying to plot multiple lines in the same figure using matplotlib.
The values of my Y axis are stored in a dictionary, and I generate the corresponding X-axis values in the code below.
My code is like this:
for i in range(len(ID)):
    AxisY = PlotPoints[ID[i]]
    if len(AxisY) > 5:
        AxisX = [len(AxisY)]
        for i in range(1, len(AxisY)):
            AxisX.append(AxisX[i-1] - 1)
        plt.plot(AxisX, AxisY)
        plt.xlabel('Lead Time (in days)')
        plt.ylabel('Proportion of Events Scheduled')
        ax = plt.gca()
        ax.invert_xaxis()
        ax.yaxis.tick_right()
        ax.yaxis.set_label_position("right")
        plt.show()
But I am getting separate figures with a single plot each, one by one. Can anybody help me figure out what is wrong with my code? Why can't I produce a plot with multiple lines? Thanks a lot!
|
This is very simple to do:
import matplotlib.pyplot as plt
plt.plot(<X AXIS VALUES HERE>, <Y AXIS VALUES HERE>, 'line type', label='label here')
plt.plot(<X AXIS VALUES HERE>, <Y AXIS VALUES HERE>, 'line type', label='label here')
plt.show()
You can keep adding plt.plot as many times as you like. As for line type, you need to first specify the color. So for blue, it's b. And for a normal line it's -. An example would be:
plt.plot(total_lengths, sort_times_heap, 'b-', label="Heap")
|
What happens if you write a variable name alone in python?
|
Recently I became curious about what happens in line 2 of the following bogus Python code:
def my_fun(foo,bar):
foo
return foo + bar
The reason I became interested is that I'm trying Light Table and tried to put a watch on "foo." It appeared to cause the python interpreter to hang.
Am I correct in thinking that this line has absolutely no effect and does not cause any sort of error? Can someone explain what the interpreter does exactly here?
|
One can look at what is happening with a little help from the built-in dis module:
import dis

def my_fun(foo, bar):
    foo
    return foo + bar

dis.dis(my_fun)
The dis.dis function disassembles functions (yep, it can disassemble itself), methods, and classes.
The output of dis.dis(my_fun) is:
  4           0 LOAD_FAST                0 (foo)
              3 POP_TOP

  5           4 LOAD_FAST                0 (foo)
              7 LOAD_FAST                1 (bar)
             10 BINARY_ADD
             11 RETURN_VALUE
The first two bytecodes are exactly what we need: the foo line.
Here's what these bytecodes do:
The first one (LOAD_FAST) pushes a reference to the local variable foo onto the stack.
The second one (POP_TOP) removes the top of the stack.
Basically, the foo line has no effect (well, if the foo variable is not defined, then LOAD_FAST will raise a NameError).
|
Change file type in PyCharm
|
I created a text file and renamed it to testTreeGen.py.
The problem is that PyCharm does not detect it as a Python source file, so I can't execute it.
How can I teach PyCharm that this is a Python script?
I tried removing the file and recreating it, and creating another Python file and renaming it; it always ends up as a text file, not a Python script.
The version is 3.0.2.
EDIT
Refactoring does not work.
The file property comes back to text (the original file property that I created before the first refactoring) even after I rename it as a Python script.
The weird thing is that no matter where I create testTreeGen.py, PyCharm creates it as a text file, as if it remembers that a file with this name should be a text file. I can create any Python script except testTreeGen.py.
|
Settings (Preferences on Mac) | Editor | File Types | Text
Check patterns there (bottom list) -- you must have had testTreeGen or similar pattern. Just remove it.
This usually happens when creating new file and instead of using specific file template you use New | File and forgetting to enter file extension (which is required here as opposed to specific file template which adds file extension automatically) -- this is to allow to create ANY file with any extension (e.g. .htaccess or just README).
Since such a file (testTreeGen in your case) does not have an extension, the IDE does not know how to treat it and treats it as plain text (not automatically though -- it asks you to choose the correct file type, with "text file" preselected, but the vast majority of users just hit Enter instead of reading what that screen says -- "relax, I'm a professional and I know what I'm doing").
Now, since the testTreeGen pattern is there and it is more specific than the generic *.rb or *.py, the file is treated as a text file regardless of its extension -- renaming will not help.
Conclusion / advice -- pay a bit more attention to confirmation dialogs.
|
Sorting in pandas for large datasets
|
I would like to sort my data by a given column, specifically p-values. However, the issue is that I am not able to load my entire data into memory. Thus, the following doesn't work or rather works for only small datasets.
data = data.sort(columns=["P_VALUE"], ascending=True, axis=0)
Is there a quick way to sort my data by a given column that only takes chunks into account and doesn't require loading entire datasets in memory?
|
In the past, I've used Linux's pair of venerable sort and split utilities, to sort massive files that choked pandas.
I don't want to disparage the other answer on this page. However, since your data is text format (as you indicated in the comments), I think it's a tremendous complication to start transferring it into other formats (HDF, SQL, etc.), for something that GNU/Linux utilities have been solving very efficiently for the last 30-40 years.
Say your file is called stuff.csv, and looks like this:
4.9,3.0,1.4,0.6
4.8,2.8,1.3,1.2
Then the following command will sort it by the 3rd column:
sort --parallel=8 -t, -nrk3 stuff.csv
Note that the number of threads here is set to 8.
The above will work with files that fit into the main memory. When your file is too large, you would first split it into a number of parts. So
split -l 100000 stuff.csv stuff
would split the file into files of length at most 100000 lines.
Now you would sort each file individually, as above. Finally, you would use mergesort, again through (wait for it...) sort:
sort -m sorted_stuff_* > final_sorted_stuff.csv
Finally, if your file is not in CSV (say it is a tgz file), then you should find a way to pipe a CSV version of it into split.
|
Pandas: dataframe column type conversion
|
I read data from a .csv file to a Pandas dataframe as below. For one of the columns, namely id, I want to specify the column type as int. The problem is the id series has missing/empty values.
When I try to cast the id column to integer while reading the .csv, I get:
df= pd.read_csv("data.csv", dtype={'id': int})
error: Integer column has NA values
Alternatively, I tried to convert the column type after reading as below, but this time I get:
df= pd.read_csv("data.csv")
df[['id']] = df[['id']].astype(int)
error: Cannot convert NA to integer
How can I tackle this?
|
The lack of a NaN representation in integer columns is a pandas "gotcha".
The usual workaround is to simply use floats (if you don't specify the dtype, this is what will be used).
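A minimal sketch of that workaround (using the data.csv from the question):
import pandas as pd

df = pd.read_csv("data.csv")               # 'id' is parsed as float64 because of the NAs
mask = df['id'].notnull()                  # rows that actually have an id
int_ids = df.loc[mask, 'id'].astype(int)   # safe to convert: no NaN left in this subset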
|
Second y-axis label getting cut off
|
I'm trying to plot two sets of data in a bar graph with matplotlib, so I'm using two axes with the twinx() method. However, the second y-axis label gets cut off. I've tried a few different methods with no success (tight_layout(), setting the major_pads in rcParams, etc...). I feel like the solution is simple, but I haven't come across it yet.
Here's a MWE:
#!/usr/bin/env python
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

matplotlib.rcParams.update({'font.size': 21})
ax = plt.gca()
plt.ylabel('Data1')  # Left side
ax2 = ax.twinx()
for i in range(10):
    if (i % 2 == 0):
        ax.bar(i, np.random.randint(10))
    else:
        ax2.bar(i, np.random.randint(1000), color='k')
plt.ylabel('Data2')  # Right side
plt.savefig("test.png")
|
I just figured it out: the trick is to use bbox_inches='tight' in savefig.
E.G. plt.savefig("test.png",bbox_inches='tight')
|
Link to class method in python docstring
|
I want to add a link to a method in my class from within the docstring of another method of the same class. I want the link to work in Sphinx and preferably also in Spyder and other Python IDEs.
I tried several options and found only one that works, but it's cumbersome.
Suppose the following structure in mymodule.py
class MyClass():
    def foo(self):
        print 'foo'

    def bar(self):
        """This method does the same as <link to foo>"""
        print 'foo'
I tried the following options for <link to foo>:
:func:`foo`
:func:`self.foo`
:func:`MyClass.foo`
:func:`mymodule.MyClass.foo`
The only one that effectively produces a link is :func:`mymodule.MyClass.foo`, but the link is shown as mymodule.MyClass.foo() and I want a link that is shown as foo() or foo.
None of the options above produces a link in Spyder.
Thanks for your help.
|
The solution that works for Sphinx is to prefix the reference with ~.
So the answer is:
class MyClass():
    def foo(self):
        print 'foo'

    def bar(self):
        """This method does the same as :func:`~mymodule.MyClass.foo`"""
        print 'foo'
This results in HTML looking like this: This method does the same as foo(), where foo() is a link.
However, in Spyder it still does not work as a link.
Thanks to @devin_s for pointing me to the right section in the Sphinx documentation: http://sphinx-doc.org/markup/inline.html#xref-syntax
|
Convert floats to ints in Pandas?
|
I've been working with data imported from a CSV. Pandas changed some columns to float, so now the numbers in these columns get displayed as floating points! However, I need them to be displayed as integers or without a comma. Is there a way to convert them to integers or not display the comma?
|
Use the .astype() function to manipulate column dtypes.
>>> df = pd.DataFrame(np.random.rand(3,4), columns=list("ABCD"))
>>> df
A B C D
0 0.542447 0.949988 0.669239 0.879887
1 0.068542 0.757775 0.891903 0.384542
2 0.021274 0.587504 0.180426 0.574300
>>> df[list("ABCD")] = df[list("ABCD")].astype(int)
>>> df
A B C D
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
EDIT:
To handle missing values:
>>> df
A B C D
0 0.475103 0.355453 0.66 0.869336
1 0.260395 0.200287 NaN 0.617024
2 0.517692 0.735613 0.18 0.657106
>>> df[list("ABCD")] = df[list("ABCD")].fillna(0.0).astype(int)
>>> df
A B C D
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
>>>
|
how to get access to error message from abort command when using custom error handler
|
Using a python flask server, I want to be able to throw an http error response with the abort command and use a custom response string and a custom message in the body
@app.errorhandler(400)
def custom400(error):
    response = jsonify({'message': error.message})
    response.status_code = 404
    response.status = 'error.Bad Request'
    return response
abort(400,'{"message":"custom error message to appear in body"}')
But the error.message variable comes up as an empty string. I can't seem to find documentation on how to get access to the second variable of the abort function with a custom error handler
|
If you look at flask/__init__.py you will see that abort is actually imported from werkzeug.exceptions. Looking at the Aborter class, we can see that when called with a numeric code, the particular HTTPException subclass is looked up and called with all of the arguments provided to the Aborter instance. Looking at HTTPException, paying particular attention to lines 85-89 we can see that the second argument passed to HTTPException.__init__ is stored in the description property, as @dirn pointed out.
You can either access the message from the description property:
@app.errorhandler(400)
def custom400(error):
    response = jsonify({'message': error.description['message']})
    # etc.
abort(400, {'message': 'custom error message to appear in body'})
or just pass the description in by itself:
@app.errorhandler(400)
def custom400(error):
    response = jsonify({'message': error.description})
    # etc.
abort(400, 'custom error message to appear in body')
|
pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/
|
I run sudo pip install git-review, and get the following messages:
Downloading/unpacking git-review
Cannot fetch index base URL http://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement git-review
No distributions at all found for git-review
Storing complete log in /home/sai/.pip/pip.log
Does anyone have any idea about this?
|
EDIT:
The current version of PIP no longer has this issue. As of right now, version: 7.1.2 is the current version. Here is the PIP link:
https://pypi.python.org/pypi/pip
ORIGINAL FIX:
I got this issue when trying to use pip==1.5.4
This is an issue related to PIP and Python's PYPI trusting SSL certificates. If you look in the PIP log in Mac OS X at: /Users/username/.pip/pip.log it will give you more detail.
My workaround to get PIP back up and running after hours of trying different stuff was to go into my site-packages in Python whether it is in a virtualenv or in your normal site-packages, and get rid of the current PIP version. For me I had pip==1.5.4
I deleted the PIP directory and the PIP egg file. Then I ran
easy_install pip==1.2.1
This version of PIP doesn't have the SSL issue, and then I was able to go and run my normal pip install -r requirements.txt within my virtualenv to set up all packages that I wanted that were listed in my requirements.txt file.
This is also the hack recommended to get past the issue by several people on this Google Group that I found:
https://groups.google.com/forum/#!topic/beagleboard/aSlPCNYcVjw
|
Iterate over all pairs of consecutive items from a given list
|
Given a list
l = [1,7,3,5]
I want to iterate over all pairs of consecutive list items (1,7),(7,3),(3,5), i.e.
for i in xrange(len(l) - 1):
    x = l[i]
    y = l[i+1]
    # do something
I would like to do this in a more compact way, like for (x,y) in someiterator(l): ...
Is there a way to do this using some builtin Python iterators? I'm sure the itertools module should have a solution, but I just can't figure it out...
|
Just use zip
>>> l = [1, 7, 3, 5]
>>> for first, second in zip(l, l[1:]):
... print first, second
...
1 7
7 3
3 5
As suggested you might consider using the izip function in itertools for very long lists where you don't want to create a new list.
import itertools
for first, second in itertools.izip(l, l[1:]):
...
|
Avoid twitter api limitation with Tweepy
|
I saw in some thread on Stack Exchange that the limitation can be a function of the number of requests per 15 minutes and also depends on the complexity of the algorithm, except that this one is not complex.
So I use this code:
import tweepy
import sqlite3
import time
db = sqlite3.connect('data/MyDB.db')
# Get a cursor object
cursor = db.cursor()
cursor.execute('''CREATE TABLE IF NOT EXISTS MyTable(id INTEGER PRIMARY KEY, name TEXT, geo TEXT, image TEXT, source TEXT, timestamp TEXT, text TEXT, rt INTEGER)''')
db.commit()
consumer_key = ""
consumer_secret = ""
key = ""
secret = ""
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(key, secret)
api = tweepy.API(auth)
search = "#MyHashtag"
for tweet in tweepy.Cursor(api.search,
                           q=search,
                           include_entities=True).items():
    while True:
        try:
            cursor.execute('''INSERT INTO MyTable(name, geo, image, source, timestamp, text, rt) VALUES(?,?,?,?,?,?,?)''', (tweet.user.screen_name, str(tweet.geo), tweet.user.profile_image_url, tweet.source, tweet.created_at, tweet.text, tweet.retweet_count))
        except tweepy.TweepError:
            time.sleep(60 * 15)
            continue
        break

db.commit()
db.close()
I always get the Twitter limitation error:
Traceback (most recent call last):
  File "stream.py", line 25, in <module>
    include_entities=True).items():
  File "/usr/local/lib/python2.7/dist-packages/tweepy/cursor.py", line 153, in next
    self.current_page = self.page_iterator.next()
  File "/usr/local/lib/python2.7/dist-packages/tweepy/cursor.py", line 98, in next
    data = self.method(max_id = max_id, *self.args, **self.kargs)
  File "/usr/local/lib/python2.7/dist-packages/tweepy/binder.py", line 200, in _call
    return method.execute()
  File "/usr/local/lib/python2.7/dist-packages/tweepy/binder.py", line 176, in execute
    raise TweepError(error_msg, resp)
tweepy.error.TweepError: [{'message': 'Rate limit exceeded', 'code': 88}]
|
For anyone who stumbles upon this on Google, tweepy 3.2+ has additional parameters for the tweepy.api class, in particular:
wait_on_rate_limit – Whether or not to automatically wait for rate limits to replenish
wait_on_rate_limit_notify – Whether or not to print a notification when Tweepy is waiting for rate limits to replenish
Setting these flags to True will delegate the waiting to the API instance, which is good enough for most simple use cases.
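A minimal sketch (flag names per tweepy 3.2+; the credentials are placeholders):
import tweepy

auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
auth.set_access_token("key", "secret")
api = tweepy.API(auth,
                 wait_on_rate_limit=True,         # sleep until the window resets
                 wait_on_rate_limit_notify=True)  # print a notice while waiting

for tweet in tweepy.Cursor(api.search, q="#MyHashtag").items():
    pass  # process the tweet; Tweepy now absorbs the code 88 errors itself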
|
List database tables with SQLAlchemy
|
I want to implement a function that gives information about all the tables (and their column names) that are present in a database (not only those created with SQLAlchemy). While reading the documentation it seems to me that this is done via reflection but I didn't manage to get something working. Any suggestions or examples on how to do this?
|
start with an engine:
from sqlalchemy import create_engine
engine = create_engine("postgresql://u:p@host/database")
quick path to all table /column names, use an inspector:
from sqlalchemy import inspect
inspector = inspect(engine)
for table_name in inspector.get_table_names():
    for column in inspector.get_columns(table_name):
        print("Column: %s" % column['name'])
docs: http://docs.sqlalchemy.org/en/rel_0_9/core/reflection.html?highlight=inspector#fine-grained-reflection-with-inspector
alternatively, use MetaData / Tables:
from sqlalchemy import MetaData
m = MetaData()
m.reflect(engine)
for table in m.tables.values():
    print(table.name)
    for column in table.c:
        print(column.name)
docs: http://docs.sqlalchemy.org/en/rel_0_9/core/reflection.html#reflecting-all-tables-at-once
|
deep-copying a generator in python
|
I'm using a generator function, say:
def foo():
    i = 0
    while i < 10:
        i += 1
        yield i
Now, I would like the option to copy the generator after any number of iterations, so that the new copy will retain the internal state (will have the same 'i' in the example) but will now be independent from the original (i.e. iterating over the copy should not change the original).
I've tried using copy.deepcopy but I get the error:
"TypeError: object.__new__(generator) is not safe, use generator.__new__()"
Obviously, I could solve this using regular functions with counters for example.
But I'm really looking for a solution using generators.
|
There are three cases I can think of:
Generator has no side effects, and you just want to be able to walk back through results you've already captured. You could consider a cached generator instead of a true generator. You can share the cached generator around as well, and if any client walks to an item you haven't been to yet, it will advance. This is similar to the tee() method, but does the tee functionality in the generator/cache itself instead of requiring the client to do it (see the sketch after this list).
Generator has side effects, but no history, and you want to be able to restart anywhere. Consider writing it as a coroutine, where you can pass in the value to start at any time.
Generator has side effects AND history, meaning that the state of the generator at G(x) depends on the results of G(x-1), and so you can't just pass x back into it to start anywhere. In this case, I think you'd need to be more specific about what you are trying to do, as the result depends not just on the generator, but on the state of other data. Probably, in this case, there is a better way to do it.
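For case 1, a minimal sketch using itertools.tee, which implements exactly this kind of caching (after the tee, the original iterator should not be advanced directly):
import itertools

def foo():
    i = 0
    while i < 10:
        i += 1
        yield i

gen = foo()
next(gen)                       # advance to some internal state (i == 1)
gen, copy = itertools.tee(gen)  # two independent iterators from this point on
print(next(copy))               # 2 -- advancing the copy...
print(next(gen))                # 2 -- ...does not disturb the other branch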
|
Make a Pandas MultiIndex from a product of iterables?
|
I have a utility function for creating a Pandas MultiIndex when I have two or more iterables and I want an index key for each unique pairing of the values in those iterables. It looks like this
import pandas as pd
import itertools
def product_index(values, names=None):
    """Make a MultiIndex from the combinatorial product of the values."""
    iterable = itertools.product(*values)
    idx = pd.MultiIndex.from_tuples(list(iterable), names=names)
    return idx
And could be used like:
a = range(3)
b = list("ab")
product_index([a, b])
To create
MultiIndex(levels=[[0, 1, 2], [u'a', u'b']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]])
This works perfectly fine, but it seems like a common usecase and I am surprised I had to implement it myself. So, my question is, what have I missed/misunderstood in the Pandas library itself that offers this functionality?
Edit to add: This function has been added to Pandas as MultiIndex.from_product for the 0.13.1 release.
|
This is a very similar construction (but using cartesian_product which for larger arrays is faster than itertools.product)
In [2]: from pandas.tools.util import cartesian_product
In [3]: MultiIndex.from_arrays(cartesian_product([range(3),list('ab')]))
Out[3]:
MultiIndex(levels=[[0, 1, 2], [u'a', u'b']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]])
could be added as a convenience method, maybe MultiIndex.from_iterables(...)
pls open an issue (and PR if you'd like)
FYI I very rarely actually construct a multi-index 'manually', almost always easier to actually construct a frame and just set_index.
In [10]: df = DataFrame(dict(A=np.arange(6),
                             B=['foo'] * 3 + ['bar'] * 3,
                             C=np.ones(6) + np.arange(6) % 2)
                        ).set_index(['C', 'B']).sortlevel()
In [11]: df
Out[11]:
       A
C B
1 bar  4
  foo  0
  foo  2
2 bar  3
  bar  5
  foo  1

[6 rows x 1 columns]
|
How do I create a login API using Django Rest Framework?
|
I want to create a login api (or use an existing one if it is already pre-bundled) using django rest framework. However, I'm completely at a loss. Whenever I send a post request to the django rest framework "login" url, it just sends back the browsable api template page...
MY CONFIGURATION
urls.py
url(r'^api/v1/', include('rest_framework.urls', namespace='rest_framework'))
settings.py
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    )
}
WHAT I WANT
Request:
POST /api/v1/login username='name' pass='pass'
Response:
200 OK "{username: 'name', 'userId': '54321'}" set-cookie: sessionid="blahblah"
|
Take a look at the api view from django-rest-framework-jwt. It's an implementation for creating auth tokens rather than cookie sessions, but your implementation will be similar. See views.py and serializers.py. You can probably use the serializers.py unchanged, and just adjust your views to return the right parameters and possibly set the session cookie (can't recall if that's already performed in authentication).
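A hedged sketch of what such a view could look like (DRF 2.x era; all names here are illustrative, not taken from that project):
from django.contrib.auth import authenticate, login
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

class LoginView(APIView):
    def post(self, request):
        user = authenticate(username=request.DATA.get('username'),
                            password=request.DATA.get('pass'))
        if user is None or not user.is_active:
            return Response({'detail': 'invalid credentials'},
                            status=status.HTTP_400_BAD_REQUEST)
        login(request, user)  # with SessionAuthentication this sets the session cookie
        return Response({'username': user.username, 'userId': user.pk})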
|
How to determine whether a Pandas Column contains a particular value
|
I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with if x in df['id']. I thought this was working, except that when I fed it a value I knew was not in the column, 43 in df['id'], it still returned True. When I subset to a data frame containing only entries matching the missing id, df[df['id'] == 43], there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this answer to a similar question.)
|
in of a Series checks whether the value is in the index:
In [11]: s = pd.Series(list('abc'))
In [12]: s
Out[12]:
0 a
1 b
2 c
dtype: object
In [13]: 1 in s
Out[13]: True
In [14]: 'a' in s
Out[14]: False
One option is to see if it's in unique values:
In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)
In [22]: 'a' in s.unique()
Out[22]: True
or a python set:
In [23]: set(s)
Out[23]: {'a', 'b', 'c'}
In [24]: 'a' in set(s)
Out[24]: True
As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to just use in directly on the values:
In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)
In [32]: 'a' in s.values
Out[32]: True
|
What is the Difference between PySphere and PyVmomi?
|
I need to write python scripts to automate time configuration of Virtual Machines running on a ESX/ESXi host.
I don't know which api to use...
I am able to find two Python bindings for the VMware APIs, viz. PySphere and PyVmomi.
Could anyone please explain what is the difference between them, which one should be used?
Thanks!
|
I'm the (now former) VMware employee who helped get this out the door.
pyVmomi represents the official bindings of the vSphere API released by VMware. The functions and object names map directly to what's documented in the vSphere Web Services SDK. It takes a while to get used to it and we should add some docs helping people map what's in the official documentation to what you can actually use in pyVmomi, but it's really all there and you'll probably get more functionality than you would out of pysphere which wraps official API calls in API-specific function names.
One of the most complete projects that uses the vSphere API via pyVmomi is another project I helped open source, ThinApp Factory. I recommend looking at its source (specifically linked.py) to see what is possible.
If things are hard to use or unclear about pyVmomi, please feel free to file a bug on our Github. Have fun!
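For a first taste of the API, a minimal connection sketch (host and credentials are placeholders):
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="esx-host", user="root", pwd="secret")
try:
    content = si.RetrieveContent()
    for child in content.rootFolder.childEntity:  # typically datacenters
        print(child.name)
finally:
    Disconnect(si)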
|
show reverse dependencies with pip?
|
Is it possible to show the reverse dependencies with pip?
I want to know which package needs package foo. And which version of foo is needed by this package.
|
I found Alexander's answer perfect, except it's hard to copy/paste. Here is the same, ready to paste:
import pip

def rdeps(package_name):
    return [pkg.project_name
            for pkg in pip.get_installed_distributions()
            if package_name in [requirement.project_name
                                for requirement in pkg.requires()]]

rdeps('some-package-name')
rdeps('some-package-name')
|
ImportError: no module named win32api
|
I am using Python 2.7 and I want to use pywin32-214 on Windows 7. I installed pywin32-214 by using the MSI installer. But when I import win32api in my Python script, it throws the error "no module named win32api". What should I do? Can I use the pywin32 API on Windows 7?
|
This resolved my case, as found in
Where to find the win32api module for Python?
pip install pypiwin32
|
python selenium click on button
|
I am quite new to python selenium and I am trying to click on a button which has the following html structure:
<div class="b_div">
    <div class="button c_button s_button" onclick="submitForm('mTF')">
        <input class="very_small" type="button"></input>
        <div class="s_image"></div>
        <span>
            Search
        </span>
    </div>
    <div class="button c_button s_button" onclick="submitForm('rMTF')" style="margin-bottom: 30px;">
        <input class="v_small" type="button"></input>
        <span>
            Reset
        </span>
    </div>
</div>
I would like to be able to click both the Search and Reset buttons above (obviously individually).
I have tried a couple of things, for example:
driver.find_element_by_css_selector('.button .c_button .s_button').click()
or,
driver.find_element_by_name('s_image').click()
or,
driver.find_element_by_class_name('s_image').click()
but, I seem to always end up with NoSuchElementException, for example:
selenium.common.exceptions.NoSuchElementException: Message: u'Unable to locate element: {"method":"name","selector":"s_image"}' ;
I am wondering if I can somehow use the onclick attributes of the HTML to make selenium click?
Any thoughts which can point me in the right direction would be great.
Thanks.
|
Remove space between classes in css selector:
driver.find_element_by_css_selector('.button .c_button .s_button').click()
# ^ ^
=>
driver.find_element_by_css_selector('.button.c_button.s_button').click()
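Since the two buttons differ only in their onclick handlers, that attribute can also be used to target each one individually (a sketch):
# Search button
driver.find_element_by_xpath('//div[@onclick="submitForm(\'mTF\')"]').click()
# Reset button
driver.find_element_by_xpath('//div[@onclick="submitForm(\'rMTF\')"]').click()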
|
Matplotlib: Plotting numerous disconnected line segments with different colors
|
I have a set of data records like this:
(s1, t1), (u1, v1), color1
(s2, t2), (u2, v2), color2
.
.
.
(sN, tN), (uN, vN), colorN
In any record, the first two values are the end-points of a line segment, the third value is the color of that line segment. More specifically, (sn, tn) are the x-y coordinates of the first end-point, (un, vn) are the x-y coordinates of the second-endpoint. Also, color is an rgb with alpha value.
In general, any two line segments are disconnected (meaning that their end-points do not necessarily coincide).
How to plot this data using matplotlib with a single plot call (or as few as possible) as there could be potentially thousands of records.
UPDATE:
Preparing the data in one big list and calling plot against it is way too slow. For example the following code couldn't finish in a reasonable amount of time:
import numpy as np
import matplotlib.pyplot as plt

data = []
for _ in xrange(60000):
    data.append((np.random.rand(), np.random.rand()))
    data.append((np.random.rand(), np.random.rand()))
    data.append('r')

print 'now plotting...'  # from now on, takes too long
plt.plot(*data)
print 'done'
#plt.show()
We need to have another way of plotting the data as quickly as possible as I will be using this in a near-real time system.
Update 2:
Amazingly enough, I was able to speed up the plot rendering by using the None insertion trick as follows:
import numpy as np
import matplotlib.pyplot as plt
from timeit import timeit

N = 60000
_s = np.random.rand(N)
_t = np.random.rand(N)
_u = np.random.rand(N)
_v = np.random.rand(N)

x = []
y = []
for s, t, u, v in zip(_s, _t, _u, _v):
    x.append(s)
    x.append(u)
    x.append(None)
    y.append(t)
    y.append(v)
    y.append(None)

print timeit(lambda: plt.plot(x, y), number=1)
This executes in under a second on my machine. I still have to figure out how to embed the color values (RGB with alpha channel).
|
use LineCollection:
import numpy as np
import pylab as pl
from matplotlib import collections as mc
lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])
lc = mc.LineCollection(lines, colors=c, linewidths=2)
fig, ax = pl.subplots()
ax.add_collection(lc)
ax.autoscale()
ax.margins(0.1)
here is the output: (figure omitted: the three segments drawn in red, green, and blue)
|
Should 3.4 enums use UPPER_CASE_WITH_UNDERSCORES?
|
As the documentation says, an enumeration is a set of symbolic names (members) bound to unique, constant values. PEP 8 says that constants are usually named in UPPER_CASE; should I use this notation in Python 3.4 enums? If yes, why are the examples in the docs using lower_case?
|
I think they're not UPPER_CASE because, well, it just looks weird when it is. Since you can only access the enumerations through the class (e.g. my_enum.VALUE) it looks weird if the members are capitalized. In C the members of the enumeration go into the module namespace, so it doesn't look weird (to me) when the members are capitalized, in usage:
typedef enum {OFF, ON} lightswitch;
lightswitch bathroomLight = ON;
But in Python you access them through the enumeration class that you create, and it looks weird to go from ClassStyle names to ALL_CAPS.
class Lightswitch(Enum):
    OFF = 0
    ON = 1

# isn't that weird?
my_light = Lightswitch.OFF
Bottom line, I think it's just aesthetic. I've been wrong before, though, and I realize that this is just my opinion.
|
how to dynamically update a plot in a loop in ipython notebook (within one cell)
|
Environment: Python 2.7, matplotlib 1.3, IPython notebook 1.1, linux, chrome. The code is in one single input cell, using --pylab=inline
I want to use IPython notebook and pandas to consume a stream and dynamically update a plot every 5 seconds.
When I just use print statement to print the data in text format, it works perfectly fine: the output cell just keeps printing data and adding new rows. But when I try to plot the data (and then update it in a loop), the plot never show up in the output cell. But if I remove the loop, just plot it once. It works fine.
Then I did some simple test:
i = pd.date_range('2013-1-1', periods=100, freq='s')
while True:
    plot(pd.Series(data=np.random.randn(100), index=i))
    #pd.Series(data=np.random.randn(100), index=i).plot() # also tried this one
    time.sleep(5)
The output will not show anything until I manually interrupt the process (ctrl+m+i). And after I interrupt it, the plot shows correctly as multiple overlapped lines. But what I really want is a plot that shows up and gets updated every 5 seconds (or whenever the plot() function gets called, just like what print statement outputs I mentioned above, which works well). Only showing the final chart after the cell is completely done is NOT what i want.
I even tried to explicitly add draw() function after each plot(), etc. None of them works. Wonder how to dynamically update a plot by a for/while loop within one cell in IPython notebook.
|
use IPython.display module:
%matplotlib inline
import time
import pylab as pl
from IPython import display

for i in range(10):
    pl.plot(pl.randn(100))
    display.clear_output(wait=True)
    display.display(pl.gcf())
    time.sleep(1.0)
|
Searching for equivalent of FileNotFoundError in Python 2
|
I created a class named Options. It works fine, but not with Python 2.
And I want it to work on both Python 2 and 3.
The problem is identified: FileNotFoundError doesn't exist in Python 2.
But if I use IOError, it doesn't work in Python 3:
Changed in version 3.3: EnvironmentError, IOError, WindowsError, VMSError, socket.error, select.error and mmap.error have been merged into OSError.
What should I do? (Please do not discuss my choice of portability, I have reasons.)
Here's the code:
#!/usr/bin/python
#-*-coding:utf-8*
#option_controller.py
#Walle Cyril
#25/01/2014
import json
import os
class Options():
"""Options is a class designed to read, add and change informations in a JSON file with a dictionnary in it.
The entire object works even if the file is missing since it re-creates it.
If present it must respect the JSON format: e.g. keys must be strings and so on.
If something corrupted the file, just destroy the file or call read_file method to remake it."""
def __init__(self,directory_name="Cache",file_name="options.json",imported_default_values=None):
#json file
self.option_file_path=os.path.join(directory_name,file_name)
self.directory_name=directory_name
self.file_name=file_name
#self.parameters_json_file={'sort_keys':True, 'indent':4, 'separators':(',',':')}
#the default data
if imported_default_values is None:
DEFAULT_INDENT = 2
self.default_values={\
"translate_html_level": 1,\
"indent_size":DEFAULT_INDENT,\
"document_title":"Titre"}
else:
self.default_values=imported_default_values
def read_file(self,read_this_key_only=False):
"""returns the value for the given key or a dictionary if the key is not given.
returns None if it s impossible"""
try:
text_in_file=open(self.option_file_path,'r').read()
except FileNotFoundError:#not 2.X compatible
text_in_file=""#if the file is not there we re-make one with default values
if text_in_file=="":#same if the file is empty
self.__insert_all_default_values()
text_in_file=open(self.option_file_path,'r').read()
try:
option_dict=json.loads(text_in_file)
except ValueError:
#if the json file is broken we re-make one with default values
self.__insert_all_default_values()
text_in_file=open(self.option_file_path,'r').read()
option_dict=json.loads(text_in_file)
if read_this_key_only:
if read_this_key_only in option_dict:
return option_dict[read_this_key_only]#
else:
#if the value is not there it should be written for the next time
if read_this_key_only in self.default_values:
self.add_option_to_file(read_this_key_only,self.default_values[read_this_key_only])
return self.default_values[read_this_key_only]
else:
#impossible because there is not default value so the value isn t meant to be here
return None
else:
return option_dict
def add_option_to_file(self,key,value):#or update
"""Adds or updates an option(key and value) to the json file if the option exists in the default_values of the object."""
option_dict=self.read_file()
if key in self.default_values:
option_dict[key]=value
open(self.option_file_path,'w').write(\
json.dumps(option_dict,sort_keys=True, indent=4, separators=(',',':')))
def __insert_all_default_values(self):
"""Recreate json file with default values.
called if the document is empty or non-existing or corrupted."""
try:
open(self.option_file_path,'w').write(\
json.dumps(self.default_values,sort_keys=True, indent=4, separators=(',',':')))
except FileNotFoundError:
os.mkdir(self.directory_name)#Create the directory
if os.path.isdir(self.directory_name):#success
self.__insert_all_default_values()
else:
print("Impossible to write in %s and file %s not found" % (os.getcwd(),self.option_file_path))
#demo
if __name__ == '__main__':
option_file_object=Options()
print(option_file_object.__doc__)
print(option_file_object.read_file())
option_file_object.add_option_to_file("","test")#this should have no effect
option_file_object.add_option_to_file("translate_html_level","0")#this should have an effect
print("value of translate_html_level:",option_file_object.read_file("translate_html_level"))
print(option_file_object.read_file())
|
If FileNotFoundError isn't there, define it:
try:
FileNotFoundError
except NameError:
FileNotFoundError = IOError
Now you can catch FileNotFoundError in Python 2 since it's really IOError.
Be careful though, IOError has other meanings. In particular, any message should probably say "file could not be read" rather than "file not found."
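A minimal sketch of the shim in use, reusing the open() pattern and the Cache/options.json path from the question:
try:
    FileNotFoundError
except NameError:  # Python 2
    FileNotFoundError = IOError

try:
    text_in_file = open("Cache/options.json", 'r').read()
except FileNotFoundError:
    text_in_file = ""  # file is missing; fall back to default values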
|
How to Access Hive via Python?
|
https://cwiki.apache.org/confluence/display/Hive/HiveClient#HiveClient-Python appears to be outdated.
When I add this to /etc/profile:
export PYTHONPATH=$PYTHONPATH:/usr/lib/hive/lib/py
I can then do the imports as listed in the link, with the exception of from hive import ThriftHive, which actually needs to be:
from hive_service import ThriftHive
Next, the port in the example was 10000, which when I tried caused the program to hang. The default Hive Thrift port is 9083, which stopped the hanging.
So I set it up like so:
from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
try:
    transport = TSocket.TSocket('<node-with-metastore>', 9083)
    transport = TTransport.TBufferedTransport(transport)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = ThriftHive.Client(protocol)
    transport.open()
    client.execute("CREATE TABLE test(c1 int)")
    transport.close()
except Thrift.TException, tx:
    print '%s' % (tx.message)
I received the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 68, in execute
self.recv_execute()
File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 84, in recv_execute
raise x
thrift.Thrift.TApplicationException: Invalid method name: 'execute'
But inspecting the ThriftHive.py file reveals the method execute within the Client class.
How may I use Python to access Hive?
|
I assume that you are using HiveServer2, which is the reason the code doesn't work.
You may use pyhs2 to access your Hive correctly; the example code looks like this:
import pyhs2
with pyhs2.connect(host='localhost',
port=10000,
authMechanism="PLAIN",
user='root',
password='test',
database='default') as conn:
with conn.cursor() as cur:
#Show databases
print cur.getDatabases()
#Execute query
cur.execute("select * from table")
#Return column info from query
print cur.getSchema()
#Fetch table results
for i in cur.fetch():
print i
Note that you may need to install python-devel.x86_64 and cyrus-sasl-devel.x86_64 before installing pyhs2 with pip.
Hope this helps.
Reference: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-PythonClientDriver
|
django rest framework filter
|
I'm working with an API made with Django REST framework, and I am trying to add a filter to the JSON it returns.
This is my serializers.py file
from rest_framework import serializers
from .models import Establecimiento,Categoria,Ciudad,Zona
import django_filters
class EstablecimientoSerializer(serializers.ModelSerializer):
class Meta:
model = Establecimiento
depth = 1
fields = ('nombre',
'ciudad',
'categoria',
'direccion',
'telefono',
'precioMinimo',
'precioMaximo',)
and this my views.py file
from rest_framework import viewsets
from .serializers import EstablecimientoSerializer, CategoriaSerializer
from models import *
from rest_framework import filters
from rest_framework import generics
class EstablecimientoViewSet(viewsets.ModelViewSet):
queryset = Establecimiento.objects.all()
serializer_class = EstablecimientoSerializer
filter_fields = ('categoria',)
Then in the EstablecimientoViewSet class, I put filter_fields = ('categoria',)
to filter the API URL by the category field.
If I add the filter to the query parameters, the result set does not change; it shows all the data, unfiltered:
...establecimiento?establecimiento=bar
How can I make this filter work for this model?
|
You need to define the filter backend and all the related fields you're planning to filter on:
class EstablecimientoViewSet(viewsets.ModelViewSet):
filter_backends = (filters.DjangoFilterBackend,)
filter_fields = ('categoria', 'categoria__titulo',)
example:
URL?categoria__titulo=Categoria 1
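Note that DjangoFilterBackend requires the django-filter package (pip install django-filter). As a sketch, you can also enable the backend globally in settings.py instead of per view:
REST_FRAMEWORK = {
    'DEFAULT_FILTER_BACKENDS': ('rest_framework.filters.DjangoFilterBackend',)
}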
|
How to check the version of scipy
|
How can I check the version of scipy installed on my system?
|
In [95]: import scipy
In [96]: scipy.__version__
Out[96]: '0.12.0'
In [104]: scipy.version.*version?
scipy.version.full_version
scipy.version.short_version
scipy.version.version
In [105]: scipy.version.full_version
Out[105]: '0.12.0'
In [106]: scipy.version.git_revision
Out[106]: 'cdd6b32233bbecc3e8cbc82531905b74f3ea66eb'
In [107]: scipy.version.release
Out[107]: True
In [108]: scipy.version.short_version
Out[108]: '0.12.0'
In [109]: scipy.version.version
Out[109]: '0.12.0'
|
How do you check if a string contains ONLY numbers - python
|
How do you check if a string contains only numbers?
I've given it a go here; I need it in the simplest way possible. Thanks.
import string
def main():
isbn = input("Enter you're 10 digit ISBN number: ")
if len(isbn) == 10 and string.digits == True:
print ("Works")
else:
print("Error, 10 digit number was not inputted and/or letters were inputted.")
main()
if __name__ == "__main__":
main()
input("Press enter to exit: ")
|
You'll want to use the isdigit method on your str object:
if len(isbn) == 10 and isbn.isdigit():
From the isdigit documentation:
str.isdigit()
Return true if all characters in the string are digits
and there is at least one character, false otherwise.
For 8-bit strings, this method is locale-dependent.
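For example:
>>> "1234567890".isdigit()
True
>>> "123abc".isdigit()
False
>>> "".isdigit()
False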
|
python subprocess changing directory
|
I want to execute a script inside a subdirectory/superdirectory (I need to be inside this sub/super-directory first). I can't get subprocess to enter my subdirectory:
tducin@localhost:~/Projekty/tests/ve$ python
Python 2.7.4 (default, Sep 26 2013, 03:20:26)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> import os
>>> os.getcwd()
'/home/tducin/Projekty/tests/ve'
>>> subprocess.call(['cd ..'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Python throws OSError and I don't know why. It doesn't matter whether I try to go into an existing subdir or go one directory up (as above) - I always end up with the same error.
|
What your code tries to do is call a program named cd ... What you want is to call a command named cd.
But cd is a shell internal. So you can only call it as
subprocess.call('cd ..', shell=True) # pointless code! See text below.
But it is pointless to do so. As no process can change another process's working directory (at least on UNIX-like OSes, and on Windows as well), this call will have the subshell change its directory and exit immediately.
What you want can be achieved with os.chdir() or with the subprocess parameter cwd, which changes the working directory immediately before executing the subprocess.
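A short sketch of both alternatives:
import os
import subprocess

# run a command in a different working directory, without changing our own
subprocess.call(['ls', '-l'], cwd='..')

# or change this process's own working directory
os.chdir('..')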
|
TemplateDoesNotExist - Django Error
|
I'm using Django REST Framework and I keep getting an error:
Exception Type: TemplateDoesNotExist
Exception Value: rest_framework/api.html
I don't know where I'm going wrong. This is the first time I'm trying out the REST Framework hands-on.
This is code.
views.py
import socket, json
from modules.data.models import *
from modules.utils import *
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from modules.actions.serializers import ActionSerializer
@api_view(['POST'])
@check_field_exists_wrapper("installation")
def api_actions(request, format = None):
action_type = request.POST['action_type']
if action_type == "Shutdown" :
send_message = '1'
print "Shutting Down the system..."
elif action_type == "Enable" :
send_message = '1'
print "Enabling the system..."
elif action_type == "Disable" :
send_message = '1'
print "Disabling the system..."
elif action_type == "Restart" :
send_message = '1'
print "Restarting the system..."
if action_type in ["Shutdown", "Enable", "Disable"] : PORT = 6000
else : PORT = 6100
controllers_list = Controller.objects.filter(installation_id = kwargs['installation_id'])
for controller_obj in controllers_list:
ip = controller_obj.ip
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((ip, PORT))
s.send(send_message)
s.close()
except Exception as e:
print("Exception when sending " + action_type +" command: "+str(e))
return Response(status = status.HTTP_200_OK)
models.py
class Controller(models.Model):
id = models.IntegerField(primary_key = True)
name = models.CharField(max_length = 255, unique = True)
ip = models.CharField(max_length = 255, unique = True)
installation_id = models.ForeignKey('Installation')
serializers.py
from django.forms import widgets
from rest_framework import serializers
from modules.data.models import *
class ActionSerializer(serializers.ModelSerializer):
class Meta:
model = Controller
fields = ('id', 'name', 'ip', 'installation_id')
urls.py
from django.conf.urls import patterns, url
from rest_framework.urlpatterns import format_suffix_patterns
urlpatterns = patterns('modules.actions.views',
url(r'^$','api_actions',name='api_actions'),
)
|
Make sure you have rest_framework listed in your settings.py INSTALLED_APPS.
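For example:
# settings.py
INSTALLED_APPS = (
    # ... your other apps ...
    'rest_framework',
)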
|
Flask jsonify a list of objects
|
I have a list of objects that I need to jsonify. I've looked at the flask jsonify docs, but I'm just not getting it.
My class has several inst-vars, each of which is a string: gene_id, gene_symbol, p_value. What do I need to do to make this serializable as JSON?
My naive code:
jsonify(eqtls = my_list_of_eqtls)
Results in:
TypeError: <__main__.EqtlByGene object at 0x1073ff790> is not JSON serializable
Presumably I have to tell jsonify how to serialize an EqtlByGene, but I can't find an example that shows how to serialize an instance of a class.
This code now works (with many thanks to Martijn Pieters!):
class EqtlByGene(Resource):
def __init__(self, gene_id, gene_symbol, p_value):
self.gene_id = gene_id
self.gene_symbol = gene_symbol
self.p_value = p_value
class EqtlJSONEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, EqtlByGene):
return {
'gene_id' : obj.gene_id,
'gene_symbol' : obj.gene_symbol,
'p_value' : obj.p_value
}
return super(EqtlJSONEncoder, self).default(obj)
class EqtlByGeneList(Resource):
def get(self):
eqtl1 = EqtlByGene(1, 'EGFR', 0.1)
eqtl2 = EqtlByGene(2, 'PTEN', 0.2)
eqtls = [eqtl1, eqtl2]
return jsonify(eqtls_by_gene = eqtls)
api.add_resource(EqtlByGeneList, '/eqtl/eqtlsbygene')
app.json_encoder = EqtlJSONEncoder
if __name__ == '__main__':
app.run(debug=True)
When I call it via curl, I get:
{
"eqtls_by_gene": [
{
"gene_id": 1,
"gene_symbol": "EGFR",
"p_value": 0.1
},
{
"gene_id": 2,
"gene_symbol": "PTEN",
"p_value": 0.2
}
]
}
|
Give your EqtlByGene an extra method that returns a dictionary:
class EqtlByGene(object):
#
def serialize(self):
return {
'gene_id': self.gene_id,
'gene_symbol': self.gene_symbol,
'p_value': self.p_value,
}
then use a list comprehension to turn your list of objects into a list of serializable values:
jsonify(eqtls=[e.serialize() for e in my_list_of_eqtls])
The alternative would be to write a hook function for the json.dumps() function, but since your structure is rather simple, the list comprehension and custom method approach is simpler.
You can also be really adventurous and subclass flask.json.JSONEncoder; give it a default() method that turns your EqtlByGene() instances into a serializable value:
from flask.json import JSONEncoder
class MyJSONEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, EqtlByGene):
return {
'gene_id': obj.gene_id,
'gene_symbol': obj.gene_symbol,
'p_value': obj.p_value,
}
return super(MyJSONEncoder, self).default(obj)
and assign this to the app.json_encoder attribute:
app = Flask(__name__)
app.json_encoder = MyJSONEncoder
and just pass in your list directly to jsonify():
return jsonify(my_list_of_eqtls)
|
How to run python macros in LibreOffice?
|
When I go to Tools -> Macros -> Organize Macros -> Python I get this dialog:
It is not possible to create new Python macros.
Apparently LibreOffice has no Python editor so I have to write the macros elsewhere and then just execute them.
But I do not know where to put the Python scripts.
I tried a system-wide search for files with "HeloWorld" in their name and I got no results.
I tried to put a test.py file into:
/home/martin/.config/libreoffice/4/user/Scripts
and reload the application, but the test macro is not shown.
I tried to find the appropriate setting in Tools -> Options -> Path but there is no "path to macros":
How can I run Python macros from LibreOffice?
This question is a part of my quest to learn Python and to learn LibreOffice macros, so any link to a general tutorial is welcomed. The tutorials on LibreOffice Python macros seem to be especially hard to find.
I am using LibreOffice Version: 4.1.3.2
|
Try to manually make a subdirectory python (all lowercase) inside /home/martin/.config/libreoffice/4/user/Scripts and put your script there.
This is based on https://wiki.openoffice.org/wiki/Python_as_a_macro_language
|
Convert timedelta to floating-point
|
I got a timedelta object from the subtraction of two datetimes. I need this value as floating point for further calculations.
Everything I've found allows calculation with floating-point numbers, but the result
is still a timedelta object.
time_d = datetime_1 - datetime_2
time_d_float = float(time_d)
does not work.
|
You could use the total_seconds method:
time_d_float = time_d.total_seconds()
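For example:
>>> from datetime import datetime
>>> time_d = datetime(2014, 1, 2) - datetime(2014, 1, 1)
>>> time_d.total_seconds()
86400.0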
|
Logic operator for boolean indexing in Pandas
|
I'm working with boolean indexing in Pandas.
The question is why the statement:
a[(a['some_column']==some_number) & (a['some_other_column']==some_other_number)]
works fine whereas
a[(a['some_column']==some_number) and (a['some_other_column']==some_other_number)]
exits with an error?
Example:
a=pd.DataFrame({'x':[1,1],'y':[10,20]})
In: a[(a['x']==1)&(a['y']==10)]
Out: x y
0 1 10
In: a[(a['x']==1) and (a['y']==10)]
Out: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
|
When you say
(a['x']==1) and (a['y']==10)
You are implicitly asking Python to convert (a['x']==1) and (a['y']==10) to boolean values.
NumPy arrays (of length greater than 1) and Pandas objects such as Series do not have a boolean value -- in other words, they raise
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
when used as a boolean value. That's because it's unclear when it should be True or False. Some users might assume it is True if it has non-zero length, like a Python list. Others might want it to be True only if all its elements are True. Others might want it to be True if any of its elements are True.
Because there are so many conflicting expectations, the designers of NumPy and Pandas refuse to guess, and instead raise a ValueError.
Instead, you must be explicit, by checking the empty attribute or calling the all() or any() method to indicate which behavior you desire.
In this case, however, it looks like you do not want boolean evaluation, you want element-wise logical-and. That is what the & binary operator performs:
(a['x']==1) & (a['y']==10)
returns a boolean array.
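The same applies to the other logical operators: | is the element-wise or and ~ is the element-wise not, for example:
(a['x']==1) | (a['y']==10)  # element-wise logical-or
~(a['x']==1)                # element-wise logical-not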
|
Exponential curve fitting in SciPy
|
I have two NumPy arrays x and y. When I try to fit my data with an exponential function and curve_fit (SciPy), using this simple code,
#!/usr/bin/env python
from pylab import *
from scipy.optimize import curve_fit
x = np.array([399.75, 989.25, 1578.75, 2168.25, 2757.75, 3347.25, 3936.75, 4526.25, 5115.75, 5705.25])
y = np.array([109,62,39,13,10,4,2,0,1,2])
def func(x, a, b, c, d):
return a*np.exp(b-c*x)+d
popt, pcov = curve_fit(func, x, y)
I get the wrong coefficients popt:
[a,b,c,d] = [1., 1., 1., 24.19999988]
What is the problem?
|
First comment: since a*exp(b - c*x) = (a*exp(b))*exp(-c*x) = A*exp(-c*x), a or b is redundant. I'll drop b and use:
def func(x, a, c, d):
return a*np.exp(-c*x)+d
That isn't the main issue. The problem is simply that curve_fit fails to converge to a solution to this problem when you use the default initial guess (which is all 1s). Check pcov; you'll see that it is inf. This is not surprising, because if c is 1, most of the values of exp(-c*x) underflow to 0:
In [32]: np.exp(-x)
Out[32]:
array([ 2.45912644e-174, 0.00000000e+000, 0.00000000e+000,
0.00000000e+000, 0.00000000e+000, 0.00000000e+000,
0.00000000e+000, 0.00000000e+000, 0.00000000e+000,
0.00000000e+000])
This suggests that c should be small. A better initial guess is, say, p0 = (1, 1e-6, 1). Then I get:
In [36]: popt, pcov = curve_fit(func, x, y, p0=(1, 1e-6, 1))
In [37]: popt
Out[37]: array([ 1.63561656e+02, 9.71142196e-04, -1.16854450e+00])
This looks reasonable:
In [42]: xx = np.linspace(300, 6000, 1000)
In [43]: yy = func(xx, *popt)
In [44]: plot(x, y, 'ko')
Out[44]: [<matplotlib.lines.Line2D at 0x41c5ad0>]
In [45]: plot(xx, yy)
Out[45]: [<matplotlib.lines.Line2D at 0x41c5c10>]
|
py.test skips test class if constructor is defined
|
I have the following unittest code running via py.test.
The mere presence of the constructor makes the entire class skip when running
py.test -v -s
collected 0 items / 1 skipped
Can anyone please explain to me this behaviour of py.test?
I am interested in understanding py.test behaviour, I know the constructor is not needed.
Thanks,
Zdenek
class TestClassName(object):
def __init__(self):
pass
def setup_method(self, method):
print "setup_method called"
def teardown_method(self, method):
print "teardown_method called"
def test_a(self):
print "test_a called"
assert 1 == 1
def test_b(self):
print "test_b called"
assert 1 == 1
|
As already mentioned in the answer by Matti Lyra, py.test purposely skips classes which have a constructor. The reason for this is that classes are only used for structural reasons in py.test and do not have any inherent behaviour, whereas in actual code it is the opposite: a class without an .__init__() method is much rarer. So in practice skipping a class with a constructor will likely be what is desired; usually it is just a class which happens to have a conflicting name.
Lastly py.test needs to instantiate the class in order to execute the tests. If the constructor takes any arguments it can't instantiate it, so again skipping is the right thing to do.
|
TypeError: can't use a string pattern on a bytes-like object
|
import json
import requests
url = 'http://developer.usa.gov/1usagov.json'
r = requests.get(url, stream=True)
for line in r.iter_lines():
if line:
print (json.loads(line))
Gives this error:
TypeError: can't use a string pattern on a bytes-like object
When viewing it in the browser I do see that the response is JSON, but the requests library says it's a bytes-like object. Why is that?
|
If you use Python 3.x, you should pass a str object to json.loads.
Replace following line:
print(json.loads(line))
with:
print(json.loads(line.decode()))
|
Install mysql-python (Windows)
|
I've spent hours trying to make Django work on my computer. The problem is that I can't install the mysql-python package. I'm running Windows 7 64bit. This is what I've tried:
I have downloaded easy_install
I have downloaded Cygwin64 to be able to run Linux commands (Win cmd was driving me crazy)
I have typed in: easy_install mysql-python (gave me an error message saying it can't find vcvarsall.bat)
I have downloaded Visual Studio 2010. However, I uninstalled it since I found out that I had some other version of it already (it didn't solve the problem)
I have googled this problem like a thousand times, so I would be very grateful if someone could help me. Thanks in advance!
EDIT: I discovered this: https://pypi.python.org/pypi/MySQL-python/1.2.5. Does this mean I can't run Django with Python 3.3? And why bother going through all this work if there is an .exe file out there?
|
Try running the following command:
pip install mysqlclient
|
Pandas Groupby Range of Values
|
Is there an easy method in pandas to group by a range of value increments? For instance, given the example below, can I bin and group column B in 0.155 increments so that, for example, the first couple of groups in column B are divided into the ranges 0 - 0.155, 0.155 - 0.31, ...?
import numpy as np
import pandas as pd
df=pd.DataFrame({'A':np.random.random(20),'B':np.random.random(20)})
A B
0 0.383493 0.250785
1 0.572949 0.139555
2 0.652391 0.401983
3 0.214145 0.696935
4 0.848551 0.516692
Alternatively I could first categorize the data by those increments into a new column and subsequently use groupby to determine any relevant statistics that may be applicable in column A?
|
You might be interested in pd.cut:
>>> df.groupby(pd.cut(df["B"], np.arange(0, 1.0+0.155, 0.155))).sum()
A B
B
(0, 0.155] 2.775458 0.246394
(0.155, 0.31] 1.123989 0.471618
(0.31, 0.465] 2.051814 1.882763
(0.465, 0.62] 2.277960 1.528492
(0.62, 0.775] 1.577419 2.810723
(0.775, 0.93] 0.535100 1.694955
(0.93, 1.085] NaN NaN
[7 rows x 2 columns]
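For the second part of the question, you can also store the bins in a new column first and then group on it to compute whatever statistics you need on column A:
df['B_bins'] = pd.cut(df['B'], np.arange(0, 1.0+0.155, 0.155))
df.groupby('B_bins')['A'].mean()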
|
Fabric - Is there any way to capture run stdout?
|
I'm trying to do the following:
output = run("ls -l backups")
for line in output.split("\n"):
do_stuff(line)
Any way of having the stdout of ls sent to output?
To be more specific I'm using a CLI app called s3cmd which does something similar to ls, but with remote Amazon S3 buckets.
So a replacement for ls won't help unfortunately.
|
Exactly what you are asking for should be happening. From the docs:
run will return the result of the remote program's stdout as a single (likely multiline) string.
run(), and related commands like local() and sudo(), return an _AttributeString object that is just a wrapper around stdout with attribute access to additional information like failure/success booleans, stderr, the command run, etc. The result object also has a stdout attribute, which is just more explicit.
To troubleshoot, print type(output), output to be sure the response is what you expect. Examine output.failed and output.stderr. It could be the command isn't doing what you expect, there is no "backups" directory, etc.
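A short sketch of the loop from the question using those attributes:
output = run("ls -l backups")
if output.failed:
    print output.stderr
for line in output.splitlines():
    do_stuff(line)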
|
scikit-learn cross validation, negative values with mean squared error
|
When I use the following code with a data matrix X of size (952,144) and an output vector y of size (952), the mean_squared_error metric returns negative values, which is unexpected. Do you have any idea why?
from sklearn.svm import SVR
from sklearn import cross_validation as CV
reg = SVR(C=1., epsilon=0.1, kernel='rbf')
scores = CV.cross_val_score(reg, X, y, cv=10, scoring='mean_squared_error')
all values in scores are then negative.
|
Trying to close this out, so am providing the answer that David and larsmans have eloquently described in the comments section:
Yes, this is supposed to happen. The actual MSE is simply the positive version of the number you're getting.
The unified scoring API always maximizes the score, so scores which need to be minimized are negated in order for the unified scoring API to work correctly. The score that is returned is therefore negated when it is a score that should be minimized and left positive if it is a score that should be maximized.
This is also described in sklearn GridSearchCV with Pipeline.
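So to recover the conventional (positive) MSE values, simply negate the returned scores:
mse_scores = -scores  # the actual mean squared errors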
|
Getting indices of True values in a boolean list
|
I have a piece of my code where I'm supposed to create a switchboard. I want to return a list of all the switches that are on. Here "on" will equal True and "off" equal False. So now I just want to return a list of all the True values and their positions. This is all I have, but it only returns the position of the first occurrence of True (this is just a portion of my code):
self.states = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
def which_switch(self):
x = [self.states.index(i) for i in self.states if i == True]
This only returns "4"
|
Use enumerate; list.index returns only the index of the first match found.
>>> t = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
>>> [i for i, x in enumerate(t) if x]
[4, 5, 7]
For huge lists, it'd be better to use itertools.compress:
>>> from itertools import compress
>>> list(compress(xrange(len(t)), t))
[4, 5, 7]
>>> t = t*1000
>>> %timeit [i for i, x in enumerate(t) if x]
100 loops, best of 3: 2.55 ms per loop
>>> %timeit list(compress(xrange(len(t)), t))
1000 loops, best of 3: 696 µs per loop
|
Plotting confidence intervals for Maximum Likelihood Estimate
|
I am trying to write code to produce confidence intervals for the number of different books in a library (as well as produce an informative plot).
My cousin is at elementary school and every week is given a book by his teacher. He then reads it and returns it in time to get another one the next week. After a while we started noticing that he was getting books he had read before and this became gradually more common over time.
Say the true number of books in the library is N and the teacher picks one uniformly at random (with replacement) to give to you each week. If at week t the number of occasions on which you have received a book you have read is x, then I can produce a maximum likelihood estimate for the number of books in the library following http://math.stackexchange.com/questions/615464/how-many-books-are-in-a-library .
Example: Consider a library with five books A, B, C, D, and E. If you receive books [A, B, A, C, B, B, D] in seven successive weeks, then the value for x (the number of duplicates) will be [0, 0, 1, 1, 2, 3, 3] after each of those weeks, meaning after seven weeks, you have received a book you have already read on three occasions.
To visualise the likelihood function (assuming I have understood what one is correctly) I have written the following code which I believe plots the likelihood function. The maximum is around 135 which is indeed the maximum likelihood estimate according to the MSE link above.
from __future__ import division
import random
import matplotlib.pyplot as plt
import numpy as np
# N is the true number of books. t is the number of weeks. unk is the true number of repeats found
t = 30
unk = 3
def numberrepeats(N, t):
return t - len(set([random.randint(0,N) for i in xrange(t)]))
iters = 1000
ydata = []
for N in xrange(10,500):
sampledunk = [numberrepeats(N,t) for i in xrange(iters)].count(unk)
ydata.append(sampledunk/iters)
print "MLE is", np.argmax(ydata)
xdata = range(10, 500)
print len(xdata), len(ydata)
plt.plot(xdata,ydata)
plt.show()
The output is a noisy curve peaking around N = 135 (plot not shown).
My questions are these:
Is there an easy way to get a 95% confidence interval and plot it on the diagram?
How can you superimpose a smoothed curve over the plot?
Is there a better way my code should have been written? It isn't very elegant and is also quite slow.
Finding the 95% confidence interval means finding the range of the x axis so that 95% of the time the empirical maximum likelihood estimate we get by sampling (which should theoretically be 135 in this example) will fall within it. The answer @mbatchkarov has given does not currently do this correctly.
There is now a mathematical answer at http://math.stackexchange.com/questions/656101/how-to-find-a-confidence-interval-for-a-maximum-likelihood-estimate .
|
Looks like you're ok on the first part, so I'll tackle your second and third points.
There are plenty of ways to fit smooth curves, with scipy.interpolate and splines, or with scipy.optimize.curve_fit. Personally, I prefer curve_fit, because you can supply your own function and let it fit the parameters for you.
Alternatively, if you don't want to learn a parametric function, you could do simple rolling-window smoothing with numpy.convolve.
As for code quality: you're not taking advantage of numpy's speed, because you're doing things in pure python. I would write your (existing) code like this:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
# N is the true number of books.
# t is the number of weeks.
# unk is the true number of repeats found
t = 30
unk = 3
def numberrepeats(N, t, iters):
rand = np.random.randint(0, N, size=(t, iters))
return t - np.array([len(set(r)) for r in rand])
iters = 1000
ydata = np.empty(500-10)
for N in xrange(10,500):
sampledunk = np.count_nonzero(numberrepeats(N,t,iters) == unk)
ydata[N-10] = sampledunk/iters
print "MLE is", np.argmax(ydata)
xdata = range(10, 500)
print len(xdata), len(ydata)
plt.plot(xdata,ydata)
plt.show()
It's probably possible to optimize this even more, but this change brings your code's runtime from ~30 seconds to ~2 seconds on my machine.
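As a sketch of the rolling-window smoothing mentioned above, a simple moving average over ydata could look like:
window = 11
kernel = np.ones(window) / window  # uniform weights
smooth = np.convolve(ydata, kernel, mode='same')
plt.plot(xdata, smooth)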
|
TransactionManagementError "You can't execute queries until the end of the 'atomic' block" while using signals, but only during Unit Testing
|
I am getting TransactionManagementError when trying to save a Django User model instance and in its post_save signal, I'm saving some models that have the user as the foreign key.
The context and error is pretty similar to this question
django TransactionManagementError when using signals
However, in this case, the error occurs only while unit testing.
It works well in manual testing, but the unit tests fail.
Is there anything that I'm missing?
Here are the code snippets:
views.py
@csrf_exempt
def mobileRegister(request):
if request.method == 'GET':
response = {"error": "GET request not accepted!!"}
return HttpResponse(json.dumps(response), content_type="application/json",status=500)
elif request.method == 'POST':
postdata = json.loads(request.body)
try:
# Get POST data which is to be used to save the user
username = postdata.get('phone')
password = postdata.get('password')
email = postdata.get('email',"")
first_name = postdata.get('first_name',"")
last_name = postdata.get('last_name',"")
user = User(username=username, email=email,
first_name=first_name, last_name=last_name)
user._company = postdata.get('company',None)
user._country_code = postdata.get('country_code',"+91")
user.is_verified=True
user._gcm_reg_id = postdata.get('reg_id',None)
user._gcm_device_id = postdata.get('device_id',None)
# Set Password for the user
user.set_password(password)
# Save the user
user.save()
signal.py
def create_user_profile(sender, instance, created, **kwargs):
if created:
company = None
companycontact = None
try: # Try to make userprofile with company and country code provided
user = User.objects.get(id=instance.id)
rand_pass = random.randint(1000, 9999)
company = Company.objects.get_or_create(name=instance._company,user=user)
companycontact = CompanyContact.objects.get_or_create(contact_type="Owner",company=company,contact_number=instance.username)
profile = UserProfile.objects.get_or_create(user=instance,phone=instance.username,verification_code=rand_pass,company=company,country_code=instance._country_code)
gcmDevice = GCMDevice.objects.create(registration_id=instance._gcm_reg_id,device_id=instance._gcm_reg_id,user=instance)
except Exception, e:
pass
tests.py
class AuthTestCase(TestCase):
fixtures = ['nextgencatalogs/fixtures.json']
def setUp(self):
self.user_data={
"phone":"0000000000",
"password":"123",
"first_name":"Gaurav",
"last_name":"Toshniwal"
}
def test_registration_api_get(self):
response = self.client.get("/mobileRegister/")
self.assertEqual(response.status_code,500)
def test_registration_api_post(self):
response = self.client.post(path="/mobileRegister/",
data=json.dumps(self.user_data),
content_type="application/json")
self.assertEqual(response.status_code,201)
self.user_data['username']=self.user_data['phone']
user = User.objects.get(username=self.user_data['username'])
# Check if the company was created
company = Company.objects.get(user__username=self.user_data['phone'])
self.assertIsInstance(company,Company)
# Check if the owner's contact is the same as the user's phone number
company_contact = CompanyContact.objects.get(company=company,contact_type="owner")
self.assertEqual(user.username,company_contact[0].contact_number)
Traceback:
======================================================================
ERROR: test_registration_api_post (nextgencatalogs.apps.catalogsapp.tests.AuthTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/nextgencatalogs/apps/catalogsapp/tests.py", line 29, in test_registration_api_post
user = User.objects.get(username=self.user_data['username'])
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/manager.py", line 151, in get
return self.get_queryset().get(*args, **kwargs)
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py", line 301, in get
num = len(clone)
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py", line 77, in __len__
self._fetch_all()
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py", line 854, in _fetch_all
self._result_cache = list(self.iterator())
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py", line 220, in iterator
for row in compiler.results_iter():
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 710, in results_iter
for rows in self.execute_sql(MULTI):
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 781, in execute_sql
cursor.execute(sql, params)
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/backends/util.py", line 47, in execute
self.db.validate_no_broken_transaction()
File "/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/backends/__init__.py", line 365, in validate_no_broken_transaction
"An error occurred in the current transaction. You can't "
TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
----------------------------------------------------------------------
|
I ran into this same problem myself. This is caused by a quirk in how transactions are handled in the newer versions of Django coupled with a unittest that intentionally triggers an exception.
I had a unittest that checked to make sure a unique column constraint was enforced by purposefully triggering an IntegrityError exception:
def test_constraint(self):
try:
# Duplicates should be prevented.
models.Question.objects.create(domain=self.domain, slug='barks')
self.assertTrue(0, 'Duplicate question allowed.')
except IntegrityError:
pass
do_more_model_stuff()
In Django 1.4, this works fine. However, in Django 1.5/1.6, each test is wrapped in a transaction, so if an exception occurs, it breaks the transaction until you explicitly roll it back. Therefore, any further ORM operations in that transaction, such as my do_more_model_stuff(), will fail with that django.db.transaction.TransactionManagementError exception.
Like caio mentioned in the comments, the solution is to capture your exception with transaction.atomic like:
from django.db import transaction
def test_constraint(self):
try:
# Duplicates should be prevented.
with transaction.atomic():
models.Question.objects.create(domain=self.domain, slug='barks')
self.assertTrue(0, 'Duplicate question allowed.')
except IntegrityError:
pass
That will prevent the purposefully-thrown exception from breaking the entire unittest's transaction.
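An alternative, if you don't want a transaction wrapper at all, is to base the test class on TransactionTestCase, which does not wrap each test in atomic() (at the cost of slower tests):
from django.test import TransactionTestCase

class AuthTestCase(TransactionTestCase):
    # tests here run without an enclosing atomic() block, so a broken
    # transaction will not poison later queries in the same test
    pass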
|
change matplotlib's default font
|
I'm trying to change matplotlib's default font to Helvetica Neue. On my Mac with EPD/Canopy everything worked fine some time ago.
I'm now trying to do the same on Ubuntu and it's not working.
This is what I did:
Installed Helvetica Neue
$ fc-match 'Helvetica Neue':Light
HelveticaNeue-Light.otf: "Helvetica Neue" "細體"
Converted the otf/dfont into ttf:
fondu -show HelveticaNeue.dfont
changed matplotlibrc to
$ cat ~/.config/matplotlib/matplotlibrc
...
font.family: Helvetica Neue
I also tried with:
font.family: sans-serif
font.sans-serif: Helvetica Neue
I removed the font cache
rm ~/.config/matplotlib/fontList.cache
But none of these steps are working for me.
$ python -c 'from matplotlib import pyplot as plt; plt.plot(1); plt.savefig("/tmp/test.png")'
/usr/local/lib/python2.7/dist-packages/matplotlib-1.3.0-py2.7-linux-x86_64.egg/matplotlib/font_manager.py:1236:
UserWarning: findfont: Font family ['Helvetica Neue'] not found. Falling back to Bitstream Vera Sans
(prop.get_family(), self.defaultFamily[fontext]))
Version is 1.3.0
$ python -c 'import matplotlib; print matplotlib.__version__'
1.3.0
I also tried moving the fonts to ~/.config/matplotlib/fonts/ttf but it didn't work.
EDIT:
As suggested I tried selecting a specific font for a specific text.
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
path = '/home/<myusername>/.fonts/HelveticaNeue.ttf'
prop = font_manager.FontProperties(fname=path)
prop.set_weight = 'light'
mpl.rcParams['font.family'] = prop.get_name()
mpl.rcParams['font.weight'] = 'light'
fig, ax = plt.subplots()
ax.set_title('Text in a cool font', fontproperties=prop, size=40)
plt.savefig('/tmp/test2.png')
But it makes no difference.
/usr/local/lib/python2.7/dist-packages/matplotlib-1.3.0-py2.7-linux-x86_64.egg/matplotlib/font_manager.py:1236:
UserWarning: findfont: Font family ['Helvetica Neue'] not found. Falling back to Bitstream Vera Sans
(prop.get_family(), self.defaultFamily[fontext]))
However, I seem to experience this problem only with this Helvetica/Helvetica Neue font.
|
This won't change your font permanently, but it's worth a try:
matplotlib.rc('font', family='sans-serif')
matplotlib.rc('font', serif='Helvetica Neue')
matplotlib.rc('text', usetex='false')
matplotlib.rcParams.update({'font.size': 22})
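Equivalently via rcParams; note that with the family set to sans-serif, the font name belongs in font.sans-serif rather than serif:
import matplotlib
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.sans-serif'] = ['Helvetica Neue']
matplotlib.rcParams['text.usetex'] = False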
|
Pandas: Chained assignments
|
I have been reading this link on "Returning a view versus a copy". I do not really get how the chained assignment concept in Pandas works and how the usage of .ix(), .iloc(), or .loc() affects it.
I get the SettingWithCopyWarning warnings for the following lines of codes, where data is a Panda dataframe and amount is a column (Series) name in that dataframe:
data['amount'] = data['amount'].astype(float)
data["amount"].fillna(data.groupby("num")["amount"].transform("mean"), inplace=True)
data["amount"].fillna(mean_avg, inplace=True)
Looking at this code, is it obvious that I am doing something suboptimal? If so, can you let me know the replacement code lines?
I am aware of the below warning and like to think that the warnings in my case are false positives:
The chained assignment warnings / exceptions are aiming to inform the
user of a possibly invalid assignment. There may be false positives;
situations where a chained assignment is inadvertantly reported.
EDIT : the code leading to the first copy warning error.
data['amount'] = data.apply(lambda row: function1(row,date,qty), axis=1)
data['amount'] = data['amount'].astype(float)
def function1(row,date,qty):
try:
if(row['currency'] == 'A'):
result = row[qty]
else:
rate = lookup[lookup['Date']==row[date]][row['currency'] ]
result = float(rate) * float(row[qty])
return result
except ValueError: # generic exception clause
print "The current row causes an exception:"
|
The point of the SettingWithCopy is to warn the user that you may be doing something that will not update the original data frame as one might expect.
Here, data is a dataframe, possibly of a single dtype (or not). You are then taking a reference to this data['amount'] which is a Series, and updating it. This probably works in your case because you are returning the same dtype of data as existed.
However it could create a copy which updates a copy of data['amount'] which you would not see; Then you would be wondering why it is not updating.
Pandas returns a copy of an object in almost all method calls. The inplace operations are a convenience which works, but in general it is not clear that the data is being modified, and they could potentially work on copies.
Much more clear to do this:
data['amount'] = data["amount"].fillna(data.groupby("num")["amount"].transform("mean"))
data["amount"] = data['amount'].fillna(mean_avg)
One further plus to working on copies: you can chain operations; this is not possible with inplace ones.
e.g.
data['amount'] = data['amount'].fillna(mean_avg)*2
And just an FYI: inplace operations are neither faster nor more memory efficient. My 2c: they should be banned. But it's too late on that API.
You can of course turn this off:
pd.set_option('chained_assignment',None)
Pandas runs its entire test suite with this set to raise (so we know if chaining is happening), FYI.
|
Python equivalent to 'hold on' in Matlab
|
Is there an explicit equivalent command in Python's matplotlib for Matlab's hold on? I'm trying to plot all my graphs on the same axes. Some graphs are generated inside a for loop, and these are plotted separately from su and sl:
import numpy as np
import matplotlib.pyplot as plt
for i in np.arange(1,5):
z = 68 + 4 * np.random.randn(50)
zm = np.cumsum(z) / range(1,len(z)+1)
plt.plot(zm)
plt.axis([0,50,60,80])
plt.show()
n = np.arange(1,51)
su = 68 + 4 / np.sqrt(n)
sl = 68 - 4 / np.sqrt(n)
plt.plot(n,su,n,sl)
plt.axis([0,50,60,80])
plt.show()
|
Just call plt.show() at the end:
import numpy as np
import matplotlib.pyplot as plt
plt.axis([0,50,60,80])
for i in np.arange(1,5):
z = 68 + 4 * np.random.randn(50)
zm = np.cumsum(z) / range(1,len(z)+1)
plt.plot(zm)
n = np.arange(1,51)
su = 68 + 4 / np.sqrt(n)
sl = 68 - 4 / np.sqrt(n)
plt.plot(n,su,n,sl)
plt.show()
|
Numba code slower than pure python
|
I've been working on speeding up a resampling calculation for a particle filter. As Python has many ways to speed things up, I thought I'd try them all. Unfortunately, the numba version is incredibly slow. As Numba should result in a speed-up, I assume this is an error on my part.
I tried 4 different versions:
Numba
Python
Numpy
Cython
The code for each is below:
import numpy as np
import scipy as sp
import numba as nb
from cython_resample import cython_resample
@nb.autojit
def numba_resample(qs, xs, rands):
n = qs.shape[0]
lookup = np.cumsum(qs)
results = np.empty(n)
for j in range(n):
for i in range(n):
if rands[j] < lookup[i]:
results[j] = xs[i]
break
return results
def python_resample(qs, xs, rands):
n = qs.shape[0]
lookup = np.cumsum(qs)
results = np.empty(n)
for j in range(n):
for i in range(n):
if rands[j] < lookup[i]:
results[j] = xs[i]
break
return results
def numpy_resample(qs, xs, rands):
results = np.empty_like(qs)
lookup = sp.cumsum(qs)
for j, key in enumerate(rands):
i = sp.argmax(lookup>key)
results[j] = xs[i]
return results
#The following is the code for the cython module. It was compiled in a
#separate file, but is included here to aid in the question.
"""
import numpy as np
cimport numpy as np
cimport cython
DTYPE = np.float64
ctypedef np.float64_t DTYPE_t
@cython.boundscheck(False)
def cython_resample(np.ndarray[DTYPE_t, ndim=1] qs,
np.ndarray[DTYPE_t, ndim=1] xs,
np.ndarray[DTYPE_t, ndim=1] rands):
if qs.shape[0] != xs.shape[0] or qs.shape[0] != rands.shape[0]:
raise ValueError("Arrays must have same shape")
assert qs.dtype == xs.dtype == rands.dtype == DTYPE
cdef unsigned int n = qs.shape[0]
cdef unsigned int i, j
cdef np.ndarray[DTYPE_t, ndim=1] lookup = np.cumsum(qs)
cdef np.ndarray[DTYPE_t, ndim=1] results = np.zeros(n, dtype=DTYPE)
for j in range(n):
for i in range(n):
if rands[j] < lookup[i]:
results[j] = xs[i]
break
return results
"""
if __name__ == '__main__':
n = 100
xs = np.arange(n, dtype=np.float64)
qs = np.array([1.0/n,]*n)
rands = np.random.rand(n)
print "Timing Numba Function:"
%timeit numba_resample(qs, xs, rands)
print "Timing Python Function:"
%timeit python_resample(qs, xs, rands)
print "Timing Numpy Function:"
%timeit numpy_resample(qs, xs, rands)
print "Timing Cython Function:"
%timeit cython_resample(qs, xs, rands)
This results in the following output:
Timing Numba Function:
1 loops, best of 3: 8.23 ms per loop
Timing Python Function:
100 loops, best of 3: 2.48 ms per loop
Timing Numpy Function:
1000 loops, best of 3: 793 µs per loop
Timing Cython Function:
10000 loops, best of 3: 25 µs per loop
Any idea why the numba code is so slow? I assumed it would be at least comparable to Numpy.
Note: if anyone has any ideas on how to speed up either the Numpy or Cython code samples, that would be nice too:) My main question is about Numba though.
|
The problem is that numba can't intuit the type of lookup. If you put a print nb.typeof(lookup) in your method, you'll see that numba is treating it as an object, which is slow. Normally I would just define the type of lookup in a locals dict, but I was getting a strange error. Instead I just created a little wrapper, so that I could explicitly define the input and output types.
@nb.jit(nb.f8[:](nb.f8[:]))
def numba_cumsum(x):
return np.cumsum(x)
@nb.autojit
def numba_resample2(qs, xs, rands):
n = qs.shape[0]
#lookup = np.cumsum(qs)
lookup = numba_cumsum(qs)
results = np.empty(n)
for j in range(n):
for i in range(n):
if rands[j] < lookup[i]:
results[j] = xs[i]
break
return results
Then my timings are:
print "Timing Numba Function:"
%timeit numba_resample(qs, xs, rands)
print "Timing Revised Numba Function:"
%timeit numba_resample2(qs, xs, rands)
Timing Numba Function:
100 loops, best of 3: 8.1 ms per loop
Timing Revised Numba Function:
100000 loops, best of 3: 15.3 µs per loop
You can go even a little faster still if you use jit instead of autojit:
@nb.jit(nb.f8[:](nb.f8[:], nb.f8[:], nb.f8[:]))
For me that lowers it from 15.3 microseconds to 12.5 microseconds, but it's still impressive how well autojit does.
|
Add x and y labels to a pandas plot
|
Suppose I have the following code that plots something very simple using pandas:
import pandas as pd
values = [[1,2], [2,5]]
df2 = pd.DataFrame(values, columns=['Type A', 'Type B'], index=['Index 1','Index 2'])
df2.plot(lw=2,colormap='jet',marker='.',markersize=10,title='Video streaming dropout by category')
How do I easily set x and y-labels while preserving my ability to use specific colormaps? I noticed that the plot() wrapper for pandas dataframes doesn't take any parameters specific for that.
|
The df.plot() function returns a matplotlib.axes.AxesSubplot object. You can set the labels on that object.
In [4]: ax = df2.plot(lw=2,colormap='jet',marker='.',markersize=10,title='Video streaming dropout by category')
In [6]: ax.set_xlabel("x label")
Out[6]: <matplotlib.text.Text at 0x10e0af2d0>
In [7]: ax.set_ylabel("y label")
Out[7]: <matplotlib.text.Text at 0x10e0ba1d0>
Or, more succinctly: ax.set(xlabel="x label", ylabel="y label").
Alternatively, the index x-axis label is automatically set to the Index name, if it has one. so df2.index.name = 'x label' would work too.
|
How to run Python script on terminal
|
I want to run a Python script in the terminal, but I don't know how. I have already saved a file called gameover.py in /User/luca/Documents/python.
|
You need Python installed on your system. Then you can run this in the terminal in the correct directory:
python gameover.py
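For example, first change into the directory where the script was saved, then run it:
cd /User/luca/Documents/python
python gameover.py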
|
Create a Pandas DataFrame from deeply nested JSON
|
I'm trying to create a single Pandas DataFrame object from a deeply nested JSON string.
The JSON schema is:
{"intervals": [
{
pivots: "Jane Smith",
"series": [
{
"interval_id": 0,
"p_value": 1
},
{
"interval_id": 1,
"p_value": 1.1162791357932633e-8
},
{
"interval_id": 2,
"p_value": 0.0000028675012051504467
}
],
},
{
"pivots": "Bob Smith",
"series": [
{
"interval_id": 0,
"p_value": 1
},
{
"interval_id": 1,
"p_value": 1.1162791357932633e-8
},
{
"interval_id": 2,
"p_value": 0.0000028675012051504467
}
]
}
]
}
Desired outcome: I need to flatten this to produce a table:
Actor Interval_id Interval_id Interval_id ...
Jane Smith 1 1.1162 0.00000 ...
Bob Smith 1 1.1162 0.00000 ...
The first column is the Pivots values, and the remaining columns are the values of the keys interval_id and p_value stored in the list series.
So far i've got
import requests as r
import pandas as pd
actor_data = r.get("url/to/data").json['data']['intervals']
df = pd.DataFrame(actor_data)
actor_data is a list whose length is equal to the number of individuals, i.e. pivots.values(). The df object simply returns
<bound method DataFrame.describe of pivots Series
0 Jane Smith [{u'p_value': 1.0, u'interval_id': 0}, {u'p_va...
1 Bob Smith [{u'p_value': 1.0, u'interval_id': 0}, {u'p_va...
.
.
.
How can I iterate through that series list to get to the dict values and create N distinct columns? Should I try to create a DataFrame for the series list, reshape it, and then do a column bind with the actor names?
UPDATE:
pvalue_list = [i['p_value'] for i in json_data['series']]
this gives me a list of lists. Now I need to figure out how to add each list as a row in a DataFrame.
value_list = []
for i in pvalue_list:
pvs = [j['p_value'] for j in i]
value_list = value_list.append(pvs)
return value_list
This returns a NoneType
Solution
def get_hypthesis_data():
raw_data = r.get("/url/to/data").json()['data']
actor_dict = {}
for actor_series in raw_data['intervals']:
actor = actor_series['pivots']
p_values = []
for interval in actor_series['series']:
p_values.append(interval['p_value'])
actor_dict[actor] = p_values
return pd.DataFrame(actor_dict).T
This returns the correct DataFrame. I transposed it so the individuals were rows and not columns.
|
I think organizing your data in a way that yields repeating column names is only going to create headaches for you later on down the road. A better approach IMHO is to create a column for each of pivots, interval_id, and p_value. This will make it extremely easy to query your data after loading it into pandas.
Also, your JSON has some errors in it. I ran it through this to find the errors.
jq helps here
import sh
jq = sh.jq.bake('-M') # disable colorizing
json_data = "from above"
rule = """[{pivots: .intervals[].pivots,
interval_id: .intervals[].series[].interval_id,
p_value: .intervals[].series[].p_value}]"""
out = jq(rule, _in=json_data).stdout
res = pd.DataFrame(json.loads(out))
This will yield output similar to
interval_id p_value pivots
32 2 2.867501e-06 Jane Smith
33 2 1.000000e+00 Jane Smith
34 2 1.116279e-08 Jane Smith
35 2 2.867501e-06 Jane Smith
36 0 1.000000e+00 Bob Smith
37 0 1.116279e-08 Bob Smith
38 0 2.867501e-06 Bob Smith
39 0 1.000000e+00 Bob Smith
40 0 1.116279e-08 Bob Smith
41 0 2.867501e-06 Bob Smith
42 1 1.000000e+00 Bob Smith
43 1 1.116279e-08 Bob Smith
Adapted from this comment
Of course, you can always call res.drop_duplicates() to remove the duplicate rows. This gives
In [175]: res.drop_duplicates()
Out[175]:
interval_id p_value pivots
0 0 1.000000e+00 Jane Smith
1 0 1.116279e-08 Jane Smith
2 0 2.867501e-06 Jane Smith
6 1 1.000000e+00 Jane Smith
7 1 1.116279e-08 Jane Smith
8 1 2.867501e-06 Jane Smith
12 2 1.000000e+00 Jane Smith
13 2 1.116279e-08 Jane Smith
14 2 2.867501e-06 Jane Smith
36 0 1.000000e+00 Bob Smith
37 0 1.116279e-08 Bob Smith
38 0 2.867501e-06 Bob Smith
42 1 1.000000e+00 Bob Smith
43 1 1.116279e-08 Bob Smith
44 1 2.867501e-06 Bob Smith
48 2 1.000000e+00 Bob Smith
49 2 1.116279e-08 Bob Smith
50 2 2.867501e-06 Bob Smith
[18 rows x 3 columns]
|
How do I do a F-test in python
|
How do I do an F-test to check whether the variances of two vectors are equal in Python?
For example if I have
a = [1,2,1,2,1,2,1,2,1,2]
b = [1,3,-1,2,1,5,-1,6,-1,2]
is there something similar to
scipy.stats.ttest_ind(a, b)
I found
sp.stats.f(a, b)
But it appears to be something different from an F-test.
|
The test statistic of the F-test for equal variances is simply:
F = Var(X) / Var(Y)
where F follows an F-distribution with df1 = len(X) - 1 and df2 = len(Y) - 1 degrees of freedom.
scipy.stats.f, which you mentioned in your question, has a CDF method. This means you can compute a p-value for the given statistic and test whether that p-value is smaller than your chosen alpha level. Note that cdf(F, df1, df2) gives P(F' <= F), so for the one-sided test of Var(X) > Var(Y) the p-value is one minus that.
Thus:
alpha = 0.05  # or whatever you want your alpha to be
p_value = 1 - scipy.stats.f.cdf(F, df1, df2)
if p_value < alpha:
    # Reject the null hypothesis that Var(X) == Var(Y)
Note that the F-test is extremely sensitive to non-normality of X and Y, so you're probably better off doing a more robust test such as Levene's test or Bartlett's test unless you're reasonably sure that X and Y are distributed normally. These tests can be found in the scipy api:
Bartlett's test
Levene's test
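Both are available in scipy.stats; they take the samples directly and return the test statistic together with a p-value:
from scipy import stats
W, p_levene = stats.levene(a, b)
T, p_bartlett = stats.bartlett(a, b)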
|
Flask: get current route
|
In Flask, when I have several routes for the same function,
how can I know which route is used at the moment?
For example:
@app.route("/antitop/")
@app.route("/top/")
@requires_auth
def show_top():
....
How can I know whether I was called via /top/ or /antitop/?
UPDATE
I know about request.path; I don't want to use it, because the request can be rather complex and I don't want to repeat the routing logic in the function. I think the solution with url_rule is the best one.
|
The most 'flasky' way to check which route triggered your view is by request.url_rule:
rule = request.url_rule
if 'antitop' in rule.rule:
# request by '/antitop'
elif 'top' in rule.rule:
# request by '/top'
|
How to circumvent the fallacy of Python's os.path.commonprefix?
|
My problem is to find the common path prefix of a given set of files.
Literally I was expecting that "os.path.commonprefix" would do just that. Unfortunately, the fact that commonprefix is located in path is rather misleading, since it actually will search for string prefixes.
The question to me is, how can this actually be solved for paths? The issue was briefly mentioned in this (fairly high rated) answer but only as a side-note and the proposed solution (appending slashes to the input of commonprefix) imho has issues, since it will fail for instance for:
os.path.commonprefix(['/usr/var1/log/', '/usr/var2/log/'])
# returns /usr/var but it should be /usr
To prevent others from falling into the same trap, it might be worthwhile to discuss this issue in a separate question: is there a simple / portable solution for this problem that does not rely on nasty checks on the file system (i.e., accessing the result of commonprefix, checking whether it is a directory, and if not returning os.path.dirname of the result)?
|
A while ago I ran into this, where os.path.commonprefix returns a string prefix and not a path prefix as would be expected. So I wrote the following:
def commonprefix(l):
# this unlike the os.path.commonprefix version
# always returns path prefixes as it compares
# path component wise
cp = []
ls = [p.split('/') for p in l]
ml = min( len(p) for p in ls )
for i in range(ml):
s = set( p[i] for p in ls )
if len(s) != 1:
break
cp.append(s.pop())
return '/'.join(cp)
it could be made more portable by replacing '/' with os.path.sep.
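For example, on the paths from the question it behaves as expected:
>>> commonprefix(['/usr/var1/log/', '/usr/var2/log/'])
'/usr'
>>> commonprefix(['/usr/var/log/', '/usr/var/data/'])
'/usr/var'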
|
How to see which tests were run during Django's manage.py test command
|
After test execution is finished using Django's manage.py test command, only the number of passed tests is printed to the console.
(virtualenv) G:\Project\>python manage.py test
Creating test database for alias 'default'...
True
..
----------------------------------------------------------------------
Ran 2 tests in 0.017s
OK
Destroying test database for alias 'default'...
Is there any way to see:
which tests were actually executed
from what module
in what order
I haven't found any solution in the doc.
|
You can pass -v 2 to the test command:
python manage.py test -v 2
After running this command you'll get something like this (I'm using django 1.9, feel free to ignore migrations/database stuff):
Creating test database for alias 'default' (':memory:')...
Operations to perform:
Synchronize unmigrated apps: staticfiles, messages
Apply all migrations: admin, contenttypes, auth, sessions
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying sessions.0001_initial... OK
test_equal_hard (polls.tests.TestHard) ... ok <--------+
test_equal_simple (polls.tests.TestSimple) ... ok <--------+
|
|
That's your tests! >----------------------------+
By the way, v stands for verbosity (You can also use --verbosity=2):
python manage.py test --verbosity=2
Here's the excerpt from the python manage.py --help:
Options:
  -v VERBOSITY, --verbosity=VERBOSITY
      Verbosity level; 0=minimal output, 1=normal output, 2=verbose output, 3=very verbose output
|
Best way to combine probabilistic classifiers in scikit-learn
|
I have a logistic regression and a random forest and I'd like to combine them (ensemble) for the final classification probability calculation by taking an average.
Is there a built-in way to do this in sci-kit learn? Some way where I can use the ensemble of the two as a classifier itself? Or would I need to roll my own classifier?
|
For what it's worth I ended up doing this as follows:
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class EnsembleClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, classifiers=None):
        self.classifiers = classifiers

    def fit(self, X, y):
        for classifier in self.classifiers:
            classifier.fit(X, y)
        return self  # scikit-learn convention: fit returns self

    def predict_proba(self, X):
        # Average the predicted class probabilities over all classifiers
        self.predictions_ = list()
        for classifier in self.classifiers:
            self.predictions_.append(classifier.predict_proba(X))
        return np.mean(self.predictions_, axis=0)
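A quick usage sketch (the constituent models and dataset are just examples):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
ensemble = EnsembleClassifier([LogisticRegression(), RandomForestClassifier()])
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:5]))  # averaged class probabilities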
|
Difference between two numpy arrays in python
|
I have two arrays, for example:
array1=numpy.array([1.1, 2.2, 3.3])
array2=numpy.array([1, 2, 3])
How can I find the difference between these two arrays in Python, to give:
[0.1, 0.2, 0.3]
As an array as well?
Sorry if this is an amateur question - but any help would be greatly appreciated!
|
This is pretty simple with numpy, just subtract the arrays:
diffs = array1 - array2
I get:
diffs == array([ 0.1, 0.2, 0.3])
|
Command python setup.py egg_info failed with error code 1
|
I am trying to do make install, but I keep getting an error. I already tried following this answer: Can't install via pip because of egg_info error
Command python setup.py egg_info failed with error code 1 in /abc/abc_env/build/MySQL-python
Storing debug log for failure in /Users/Pat/.pip/pip.log
the full debug log: http://pastebin.com/cnGgWU4G
Here's the Makefile:
virtualenv-2.7 my_env && \
source my_env/bin/activate && \
pip install -r requirements.txt
It looks like the problem only happens in the virtual environment. I am able to do pip install mysql-python without a problem, but pip install -r requirements.txt has errors when trying to install mysql-python
requirements.txt below:
Flask==0.10.1
Jinja2==2.7.1
MarkupSafe==0.18
MySQL-python==1.2.4
PyYAML==3.10
SQLAlchemy==0.8.3
Tempita==0.5.1
Werkzeug==0.9.4
argparse==1.2.1
dataset==0.3.13
decorator==3.4.0
docopt==0.4.0
itsdangerous==0.23
mandrill==1.0.53
mysql-connector-python==1.0.12
requests==2.0.1
sqlalchemy-migrate==0.7.2
wsgiref==0.1.2
|
There were several problems with the original code.
First, MySQL-python version 1.2.4 for some reason fails to install. Changing this to 1.2.5 fixes that error.
Second, argparse cannot be installed as is. It needs --allow-all-external. The new Makefile is below:
virtualenv-2.7 my_env && \
source my_env/bin/activate && \
pip install -r requirements.txt --allow-all-external
Third, mysql-connector-python version 1.0.12 doesn't exist. Changing it to 1.1.4 worked.
|
Plotting a list of (x, y) coordinates in python matplotlib
|
I have a list of pairs (a, b) that I would like to plot with matplotlib in python as actual x-y coordinates. Currently, it is making two plots, where the index of the list gives the x-coordinate; the first plot's y values are the a's in the pairs and the second plot's y values are the b's in the pairs.
To clarify, my data looks like this: li = [(a,b), (c,d), ... , (t, u)]
I want to do this in a one-liner; just calling plt.plot(li) plots the data incorrectly, as described above.
If I didn't require a one-liner I could trivially do:
xs = [x[0] for x in li]
ys = [x[1] for x in li]
plt.plot(xs, ys)
How can I get matplotlib to plot these pairs as x-y coordinates?
Thanks for all the help!
|
As per this example:
import numpy as np
import matplotlib.pyplot as plt
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
plt.scatter(x, y)
plt.show()
will produce a scatter plot of the 50 random points.
To unpack your data from pairs into lists use zip:
x, y = zip(*li)
So, the one-liner:
plt.scatter(*zip(*li))
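If you want connected lines rather than scatter points, the same unpacking trick works with plt.plot:
plt.plot(*zip(*li))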
|
Python: converting a list of dictionaries to json
|
I have a list of dictionaries, looking some thing like this:
list = [{'id': 123, 'data': 'qwerty', 'indices': [1,10]}, {'id': 345, 'data': 'mnbvc', 'indices': [2,11]}]
and so on. There may be more documents in the list. I need to convert these to one JSON document, that can be returned via bottle, and I cannot understand how to do this. Please help. I saw similar questions on this website, but I couldn't understand the solutions there.
|
Use the json library:
import json
json.dumps(list)
By the way, you might consider renaming the variable list: list is the name of the built-in list type, and shadowing it can lead to unexpected behaviour or buggy code.
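To return it from a Bottle route, you can dump the list yourself and set the content type; a minimal sketch (records_list stands in for your list of dicts):
import json
from bottle import route, response

@route('/records')
def records():
    response.content_type = 'application/json'
    return json.dumps(records_list)  # records_list: your list of dicts from above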
|
fatal error: Python.h: No such file or directory
|
I am trying to build a shared library using a C extension file but first I have to generate the output file using the command below:
gcc -Wall utilsmodule.c -o Utilc
After executing the command, I get this error message:
utilsmodule.c:1:20: fatal error: Python.h: No such file or directory
compilation terminated.
in fact I have tried all the suggested solutions over the internet but the problem still exists. Also, I have no problem with Python.h itself: I managed to locate the file on my machine. Has anybody faced the same problem before?
|
Looks like you haven't properly installed the header files and static libraries for python dev. Use your package manager to install them system-wide.
For apt (ubuntu, debian...):
sudo apt-get install python-dev # for python2.x installs
sudo apt-get install python3-dev # for python3.x installs
For yum (centos, redhat, fedora...):
sudo yum install python-devel
|
Pycharm: set environment variable for run manage.py Task
|
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell.
When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there.
As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys.
When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'.
Where do I put this key?
|
Because PyCharm is not launched from a terminal, your shell environment will not be loaded. In short, GUI programs do not inherit the SHELL variables. See this for reasons (assuming a Mac).
However there are several basic solutions to this problem. As @user3228589 posted you can set this up as a variable within PyCharm. This has several pros and cons. I personally don't like this approach because it's not a single source. To fix this I use a small function at the top of my settings.py file which looks up the variable inside a local .env file. I put all of my "private" stuff in there. I also can reference this in my virtualenv.
Here is what it looks like.
-- settings.py
def get_env_variable(var_name, default=False):
"""
Get the environment variable or return exception
:param var_name: Environment Variable to lookup
"""
try:
return os.environ[var_name]
except KeyError:
import StringIO
import ConfigParser
env_file = os.environ.get('PROJECT_ENV_FILE', SITE_ROOT + "/.env")
try:
config = StringIO.StringIO()
config.write("[DATA]\n")
config.write(open(env_file).read())
config.seek(0, os.SEEK_SET)
cp = ConfigParser.ConfigParser()
cp.readfp(config)
value = dict(cp.items('DATA'))[var_name.lower()]
if value.startswith('"') and value.endswith('"'):
value = value[1:-1]
elif value.startswith("'") and value.endswith("'"):
value = value[1:-1]
os.environ.setdefault(var_name, value)
return value
except (KeyError, IOError):
if default is not False:
return default
from django.core.exceptions import ImproperlyConfigured
error_msg = "Either set the env variable '{var}' or place it in your " \
"{env_file} file as '{var} = VALUE'"
raise ImproperlyConfigured(error_msg.format(var=var_name, env_file=env_file))
# Make this unique, and don't share it with anybody.
SECRET_KEY = get_env_variable('SECRET_KEY')
Then the env file looks like this..
#!/bin/sh
#
# This should normally be placed in the ${SITE_ROOT}/.env
#
# DEPLOYMENT DO NOT MODIFY THESE..
SECRET_KEY='XXXSECRETKEY'
And finally your virtualenv/bin/postactivate can source this file. You could go further and export the variables as described here if you'd like, but since the settings file reads the .env file directly there isn't really a need.
|
Difference between using commas, concatenation, and string formatters in Python
|
I am learning python(2.7) on my own.
I have learned that we can use the following ways to put strings and variables together in printing:
x = "Hello"
y = "World"
By using commas:
print "I am printing" , x, y # I know that using comma gives automatic space
By using concatenation :
print "I am printing" + " " + x + " " + y
By using string formatters
print "I am printing %s %s" % (x, y)
In this case all three print the same:
I am printing Hello World
What is the difference between the three and are there any particular instances where one is preferred over the other?
|
To answer the general question first, you would use a print statement (or function in Python 3) in general to output information in your scripts to the screen when you're writing code to ensure that you're getting what you expect.
As your coding becomes more sophisticated, you may find that logging would be better than printing, but that's information for another response.
There is a big difference between printing and the echoed representations of return values that you see in an interactive session with the Python interpreter. Printing writes to your standard output. The echoed representation of an expression's return value (shown in your Python shell when it is not None) is silent when the equivalent code runs in a script.
1. Print Statements with Commas
The print statement with commas separating items uses a space to separate them. A trailing comma suppresses the newline, so the next print statement continues on the same line (separated by a space). Without a trailing comma, a newline character is appended to your printed items.
You could put each item on a separate print statement and use a comma after each and they would print the same, on the same line.
For example (this would only work in a script, in an interactive shell, you'd get a new prompt after every line):
x = "Hello"
y = "World"
print "I am printing",
print x,
print y
Would output:
I am printing Hello World
Print Function
With the print function from Python 3, also available in Python 2.6 and 2.7 with this import:
from __future__ import print_function
you can declare a separator and an end, which gives a lot more flexibility:
>>> print('hello', 'world', sep='-', end='\n****\n')
hello-world
****
2. String Concatenation
Concatenation creates each string in memory, and then combines them together at their ends in a new string (so this may not be very memory friendly), and then prints them to your output at the same time. This is good when you need to join strings, likely constructed elsewhere, together.
print('hello' + '-' + 'world')
will print
hello-world
Be careful, before you attempt to join literals of other types to strings in this manner, to convert the literals to strings first.
print('here is a number: ' + str(2))
prints
here is a number: 2
If you attempt to concatenate the integer without coercing it to a string first:
>>> print('here is a number: ' + 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot concatenate 'str' and 'int' objects
This should demonstrate that you should only ever attempt to concatenate variables that are known to be strings. The new way of formatting demonstrated next handles this issue for you.
3. String Interpolation
The formatting you're demonstrating is the old style of string interpolation, borrowed from C. It takes the template string and creates a new string with the values filled in, in one pass. You should use this when you're building up a fairly large template (at 3+ lines and 3+ variables, you definitely should be doing it this way).
The new way of doing that would be to do this:
print('I am printing {0} and {1}'.format(x, y))
or in python 2.7 or 3:
print('I am printing {} and {}'.format(x, y))
or with named arguments
print('I am printing {x} and {y}'.format(x=x, y=y))
The biggest benefit of this is that it lets you combine positional and keyword arguments
print('I am printing {0} and {y}'.format(x, y=y))
New in Python 3.6, format literals
Python 3.6 will have format literals, with a more elegant syntax (less redundancy). The simple syntax is something like:
print(f'I am printing {x} and {y}')
|
Why is "except: pass" a bad programming practice?
|
I often see comments on other Stack Overflow questions about how the use of except: pass is discouraged. Why is this bad? Sometimes I just don't care what the errors, are and I want to just continue with the code.
try:
something
except:
pass
Why is using an except: pass block bad? What makes it bad? Is it the fact that I pass on an error or that I except any error?
|
The main problem here is that it ignores all and any error: Out of memory, CPU is burning, user wants to stop, program wants to exit, Jabberwocky is killing users.
This is way too much. In your head, you're thinking "I want to ignore this network error". If something unexpected goes wrong, then your code silently continues and breaks in completely unpredictable ways that no one can debug.
That's why you should limit yourself to ignoring only specific, expected errors, and let everything else propagate.
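For example, a minimal sketch that catches only the error you expect (IOError here is just a stand-in for whatever specific exception your code can actually raise and recover from):
try:
    something()
except IOError:
    # We know how to recover from this one specific failure.
    pass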
|
Django allauth example [Errno 61] Connection refused
|
I have the following error when I run django allauth example and it tries to send an email:
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 61] Connection refused
Under OS X 10.9.1
|
From allauth documentation:
When I sign up I run into connectivity errors (connection refused et al)
You probably have not got an e-mail (SMTP) server running on the
machine you are developing on. Therefore, allauth is unable to send
verification mails.
You can work around this by adding the following line to settings.py:
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
This will avoid the need for an SMTP server as e-mails will be printed to
the console. For more information, please refer to:
https://docs.djangoproject.com/en/dev/ref/settings/#email-host
|
Is there a difference in computation for Numpy vs Pandas?
|
I've written a bunch of code on the assumption that I was going to use Numpy arrays. Turns out the data I am getting is loaded through Pandas. I remember now that I loaded it in Pandas because I was having some problems loading it in Numpy. I believe the data was just too large.
Therefore I was wondering, is there a difference in computational ability when using Numpy vs Pandas?
If Pandas is more efficient then I would rather rewrite all my code for Pandas but if there is no more efficiency then I'll just use a numpy array...
|
Pandas data structures are backed by numpy. For example, pandas dataframes have a values attribute which gives you the underlying numpy array:
In [3]: df = DataFrame({'one' : [1., 2., 3., 4.],'two' : [4., 3., 2., 1.]}, index=['a', 'b', 'c', 'd'])
In [4]: df.values
Out[4]:
array([[ 1., 4.],
[ 2., 3.],
[ 3., 2.],
[ 4., 1.]])
So to answer your question, there is no performance difference.
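Since df.values hands you a plain ndarray, you can pass it straight to any numpy routine; a quick sketch using the frame above:
arr = df.values          # plain numpy ndarray
print(arr.sum(axis=0))   # any numpy operation works on it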
|
Python complexity reference?
|
Is there any Python complexity reference? In cppreference, for example, for many functions (such as std::array::size or std::array::fill) there's a complexity section which describes their running complexity, in terms of linear in the size of the container or constant.
I would expect the same information to appear in the python website, perhaps, at least for the CPython implementation. For example, in the list reference, in list.insert I would expect to see complexity: linear; I know this case (and many other container-related operations) is covered here, but many other cases are not. Here are a few examples:
What is the complexity of tuple.__le__? It seems like when comparing two tuples of size n, k, the complexity is about O(min(n,k)) (however, for small n's it looks different).
What is the complexity of random.shuffle? It appears to be O(n). It also appears that the complexity of random.randint is O(1).
What is the complexity of the __format__ method of strings? It appears to be linear in the size of the input string; however, it also grows when the number of relevant arguments grow (compare ("{0}"*100000).format(*(("abc",)*100000)) with ("{}"*100000).format(*(("abc",)*100000))).
I'm aware that (a) each of these questions may be answered by itself, (b) one may look at the code of these modules (even though some are written in C), and (c) StackExchange is not a python mailing list for user requests. So: this is not a doc-feature request, just a question of two parts:
Do you know if such a resource exists?
If not, do you know what is the place to ask for such, or can you suggest why I don't need such?
|
CPython is pretty good about its algorithms, and the time complexity of an operation is usually just the best you would expect of a good standard library.
For example:
Tuple ordering has to be O(min(n,m)), because it works by comparing element-wise.
random.shuffle is O(n), because that's the complexity of the modern Fisher-Yates shuffle.
.format I imagine is linear, since it only requires one scan through the template string. As for the difference you see, CPython might just be clever enough to cache the same format code used twice.
The docs do mention time complexity, but generally only when it's not what you would expect â for example, because a deque is implemented with a doubly-linked list, it's explicitly mentioned as having O(n) for indexing in the middle.
Would the docs benefit from having time complexity called out everywhere it's appropriate? I'm not sure. The docs generally present builtins by what they should be used for and have implementations optimized for those use cases. Emphasizing time complexity seems like it would either be useless noise or encourage developers to second-guess the Python implementation itself.
|
How to check if a float value is a whole number
|
I am trying to find the largest cube root that is a whole number, that is less than 12,000.
processing = True
n = 12000
while processing:
n -= 1
if n ** (1/3) == #checks to see if this has decimals or not
I am not sure how to check if it is a whole number or not though! I could convert it to a string then use indexing to check the end values and see whether they are zero or not, that seems rather cumbersome though. Is there a simpler way?
|
To check if a float value is a whole number, use the float.is_integer() method:
>>> (1.0).is_integer()
True
>>> (1.555).is_integer()
False
The method was added to the float type in Python 2.6.
Take into account that in Python 2, 1/3 is 0 (floor division for integer operands!), and that floating point arithmetic can be imprecise (a float is an approximation using binary fractions, not a precise real number). But adjusting your loop a little this gives:
>>> for n in range(12000, -1, -1):
... if (n ** (1.0/3)).is_integer():
... print n
...
27
8
1
0
which means that anything over 3 cubed (including 10648) was missed due to the aforementioned imprecision:
>>> (4**3) ** (1.0/3)
3.9999999999999996
>>> 10648 ** (1.0/3)
21.999999999999996
You'd have to check for numbers close to the whole number instead, or not use float() to find your number. Like rounding down the cube root of 12000:
>>> int(12000 ** (1.0/3))
22
>>> 22 ** 3
10648
If you are using Python 3.5 or newer, you can use the math.isclose() function to see if a floating point value is within a configurable margin:
>>> from math import isclose
>>> isclose((4**3) ** (1.0/3), 4)
True
>>> isclose(10648 ** (1.0/3), 22)
True
For older versions, the naive implementation of that function (skipping error checking and ignoring infinity and NaN) as mentioned in PEP485:
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
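With that in place, a sketch of the original search that is robust to the floating point error (it finds 10648, i.e. 22 cubed):
for n in range(12000, -1, -1):
    root = n ** (1.0 / 3)
    if isclose(root, round(root)):
        print n  # 10648 == 22 ** 3
        break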
|
URL query parameters to dict python
|
Is there a way to parse a URL (with some python library) and return a python dictionary with the keys and values of a query parameters part of the URL?
For example:
url = "http://www.example.org/default.html?ct=32&op=92&item=98"
expected return:
{'ct':32, 'op':92, 'item':98}
|
Use the urlparse library:
>>> import urlparse
>>> url = "http://www.example.org/default.html?ct=32&op=92&item=98"
>>> urlparse.urlsplit(url)
SplitResult(scheme='http', netloc='www.example.org', path='/default.html', query='ct=32&op=92&item=98', fragment='')
>>> urlparse.parse_qs(urlparse.urlsplit(url).query)
{'item': ['98'], 'op': ['92'], 'ct': ['32']}
>>> dict(urlparse.parse_qsl(urlparse.urlsplit(url).query))
{'item': '98', 'op': '92', 'ct': '32'}
The urlparse.parse_qs() and urlparse.parse_qsl() methods parse out query strings, taking into account that keys can occur more than once and that order may matter.
In Python 3, urlparse has been moved to urllib.parse.
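For example, a quick sketch of the Python 3 equivalent of the last line above:
from urllib.parse import urlsplit, parse_qsl
url = "http://www.example.org/default.html?ct=32&op=92&item=98"
print(dict(parse_qsl(urlsplit(url).query)))
# {'ct': '32', 'op': '92', 'item': '98'}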
|
fibonacci works in python but fails in Java
|
I have this code for calculating fibonacci number in python. It works and gives the expected result. but when I translated the same to Java, it fails. Any idea of what is going wrong here?
In python:
def fib3(n):
a,b=0,1
while n>0:
a,b=b,a+b
n-=1
return a
fib3(12) --> 144
In Java:
public static int fib2(int n){
int a = 0;
int b =1;
while(n-- >0){
a=b;
b=a+b;
}
return a;
}
fib2(12) --> 2048
|
In this section:
a=b;
b=a+b;
you're assigning b to a+b, but a is already b. So really you're doubling b
Easiest solution is a temp variable:
public static int fib2(int n){
int a = 0;
int b =1;
while(n-- >0){
int old_a;
old_a = a;
a=b;
b=old_a+b;
}
return a;
}
In python, a, b = b, a + b stores an intermediate tuple automatically before assigning the new values to the variables, while in Java you need to be explicit about it
Breaking down Python's instructions, a, b = b, a + b is executing this disassembly:
5 17 LOAD_FAST 1 (b)
20 LOAD_FAST 0 (a)
23 LOAD_FAST 1 (b)
26 BINARY_ADD
27 ROT_TWO
28 STORE_FAST 0 (a)
31 STORE_FAST 1 (b)
In a simpler sense, staying python, here's the process:
temp_tuple = (b, a + b)
a, b = temp_tuple
|
How do I get logger to delete existing log file before writing to it again?
|
Using the configuration below, my logfile will be called 'test-debug.log' and it will grow indefinitely, every time I run the script. I just want this logfile to contain the log records from the most recent run of the script. The log should be deleted before starting again.
How do I do that?
logger = logging.getLogger('test') #Create a log with the same name as the script that created it
logger.setLevel('DEBUG')
#Create handlers and set their logging level
filehandler_dbg = logging.FileHandler(logger.name + '-debug.log')
filehandler_dbg.setLevel('DEBUG')
#Create custom formats of the logrecord fit for both the logfile and the console
streamformatter = logging.Formatter(fmt='%(levelname)s:\t%(threadName)s:\t%(funcName)s:\t\t%(message)s', datefmt='%H:%M:%S') #We only want to see certain parts of the message
#Apply formatters to handlers
filehandler_dbg.setFormatter(streamformatter)
#Add handlers to logger
logger.addHandler(filehandler_dbg)
|
Try this:
filehandler_dbg = logging.FileHandler(logger.name + '-debug.log', mode='w')
to open the log file in write mode instead of append mode, clobbering the previous contents of the file on each run.
More information: logging.FileHandler docs
|
python XlsxWriter set border around multiple cells
|
I need an easy way to set border around multiple cells, like so:
All I found was border of 1 cell, and merge cells, which is not what I need.
I was expecting something like:
worksheet.range_border(first_row, first_col, last_row, last_col)
Is there a way that this can be done (that is not involving setting top_border, bottom_border,
left_border, right_border for each cell individually)?
|
XlsxWriter is an awesome module that made my old job 1,000x easier (thanks John!), but formatting cells with it can be time-consuming. I've got a couple helper functions I use to do stuff like this.
First, you need to be able to create a new format by adding properties to an existing format:
def add_to_format(existing_format, dict_of_properties, workbook):
"""Give a format you want to extend and a dict of the properties you want to
extend it with, and you get them returned in a single format"""
new_dict={}
for key, value in existing_format.__dict__.iteritems():
if (value != 0) and (value != {}) and (value != None):
new_dict[key]=value
del new_dict['escapes']
return(workbook.add_format(dict(new_dict.items() + dict_of_properties.items())))
Now build off of that function with:
def box(workbook, sheet_name, row_start, col_start, row_stop, col_stop):
"""Makes an RxC box. Use integers, not the 'A1' format"""
rows = row_stop - row_start + 1
cols = col_stop - col_start + 1
for x in xrange((rows) * (cols)): # Total number of cells in the rectangle
box_form = workbook.add_format() # The format resets each loop
row = row_start + (x // cols)
column = col_start + (x % cols)
if x < (cols): # If it's on the top row
box_form = add_to_format(box_form, {'top':1}, workbook)
if x >= ((rows * cols) - cols): # If it's on the bottom row
box_form = add_to_format(box_form, {'bottom':1}, workbook)
if x % cols == 0: # If it's on the left column
box_form = add_to_format(box_form, {'left':1}, workbook)
if x % cols == (cols - 1): # If it's on the right column
box_form = add_to_format(box_form, {'right':1}, workbook)
sheet_name.write(row, column, "", box_form)
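A minimal usage sketch (the file name is an example; note that despite its name, the sheet_name argument is the worksheet object itself, since the function calls sheet_name.write):
import xlsxwriter

workbook = xlsxwriter.Workbook('boxed.xlsx')
worksheet = workbook.add_worksheet()

# Draw a border around the rectangle B2:E5 (row/col indices are zero-based)
box(workbook, worksheet, 1, 1, 4, 4)

workbook.close()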
|
python requests on Google App Engine not working for HTTPS
|
I'm using python-request on Google App Engine and it's not working as expected for HTTPS. Let's see an example:
import requests
requests.get('https://www.digitalocean.com')
That line works perfectly if I execute it in a terminal. Response is 200 OK (without redirects).
However, if I execute it on GAE a TooManyRedirects error is raised.
Trying to figure out what's the problem, I executed with allow_redirects=False and I can see that the response is a redirect (301) which points to the same url!!! ('location' header value is 'https://www.digitalocean.com'). This obviously (when allow_redirects=True) happens over and over again until the TooManyRedirects error is raised.
So it seems that python-requests is not working on GAE for HTTPS (I've tested with several URL). However HTTP works perfectly.
Any idea about what's happening?
Thanks in advance.
|
Downgrading to requests==2.1.0 worked for me.
Having an up-to-date urllib3 is important for resolving an unrelated bug (import pwd, as I recall).
Hopefully App Engine fixes this soon, as requests won't.
EDIT:
I think you can also patch this in the latest requests by commenting lines 161-175 in sessions.py. Untested.
|
Python-3.2 coroutine: AttributeError: 'generator' object has no attribute 'next'
|
#!/usr/bin/python3.2
import sys
def match_text(pattern):
line = (yield)
if pattern in line:
print(line)
x = match_text('apple')
x.next()
for line in input('>>>> '):
if x.send(line):
print(line)
x.close()
This is a coroutine, but Python 3.2 sees it as a generator - why? What is going on here? I'm referring to Python Essential Reference by David Beazley, pg. 20.
To quote the relevant section:
Normally, functions operate on a single set of input arguments. However, a function can
also be written to operate as a task that processes a sequence of inputs sent to
it.This type of function is known as a coroutine and is created by using the yield
statement as an expression (yield) as shown in this example:
def print_matches(matchtext):
print "Looking for", matchtext
while True:
line = (yield) # Get a line of text
if matchtext in line:
print line
To use this function, you first call it, advance it to the first (yield), and then
start sending data to it using send(). For example:
>>> matcher = print_matches("python")
>>> matcher.next() # Advance to the first (yield)
Looking for python
>>> matcher.send("Hello World")
>>> matcher.send("python is cool")
python is cool
>>> matcher.send("yow!")
>>> matcher.close() # Done with the matcher function call
Why doesn't my code work? Note that DB's (David Beazley's) example does.
deathstar> python3.2 xxx
Traceback (most recent call last):
File "xxx", line 9, in <module>
matcher.next() # Advance to the first (yield)
AttributeError: 'generator' object has no attribute 'next'
|
You're getting thrown off by the error message; type-wise, Python doesn't make a distinction - you can .send to anything that uses yield, even if it doesn't do anything with the sent value internally.
In 3.x, there is no longer a .next method attached to these; instead, use the built-in free function next.
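A minimal fix of the original snippet for Python 3 (the check of send()'s return value is dropped, since match_text prints the matching line itself; note the coroutine only contains one yield, so the send that resumes it also ends it, raising StopIteration):
def match_text(pattern):
    line = (yield)
    if pattern in line:
        print(line)

x = match_text('apple')
next(x)  # advance to the first (yield)
try:
    x.send('I like apples')  # prints the line, then the generator ends
except StopIteration:
    pass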
|
Unable to pass jinja2 variables into javascript snippet
|
How do I pass Jinja2 data into JavaScript?
I have a Flask REST url as /logs/<test_case_name>
I am trying use .getJSON() to query the above URL and hence would want to pass the jinja2 data which has the testcasename to .getJSON function.
sample code:
<script type="text/javascript">
alert({{name}});
</script>
It doesn't work.
Any suggestions please?
|
Try with quotes:
alert("{{name}}");
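If name can contain quotes or other characters that would break the script, Flask's tojson filter renders it as a properly quoted JavaScript literal (depending on your Flask/Jinja version you may also need to append |safe):
alert({{ name|tojson }});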
|
Changing hostname in a url
|
I am trying to use python to change the hostname in a url, and have been playing around with the urlparse module for a while now without finding a satisfactory solution. As an example, consider the url:
https://www.google.dk:80/barbaz
I would like to replace "www.google.dk" with e.g. "www.foo.dk", so I get the following url:
https://www.foo.dk:80/barbaz.
So the part I want to replace is what urlparse.urlsplit refers to as hostname. I had hoped that the result of urlsplit would let me make changes, but the resulting type ParseResult doesn't allow me to. If nothing else I can of course reconstruct the new url by appending all the parts together with +, but this would leave me with some quite ugly code with a lot of conditionals to get "://" and ":" in the correct places.
|
You can use urlparse.urlparse function and ParseResult._replace method:
>>> import urlparse
>>> parsed = urlparse.urlparse("https://www.google.dk:80/barbaz")
>>> replaced = parsed._replace(netloc="www.foo.dk:80")
>>> print replaced
ParseResult(scheme='https', netloc='www.foo.dk:80', path='/barbaz', params='', query='', fragment='')
ParseResult is a subclass of namedtuple and _replace is a namedtuple method that:
returns a new instance of the named tuple replacing specified fields
with new values
UPDATE:
As @2rs2ts said in the comment netloc attribute includes a port number.
Good news: ParseResult has hostname and port attributes.
Bad news: hostname and port are not the members of namedtuple, they're dynamic properties and you can't do parsed._replace(hostname="www.foo.dk"). It'll throw an exception.
If you don't want to split on : and your url always has a port number and doesn't have username and password (that's urls like "https://username:password@www.google.dk:80/barbaz") you can do:
parsed._replace(netloc="{}:{}".format(parsed.hostname, parsed.port))
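Wrapping it up, a small helper sketch using the urlparse module imported above (replace_hostname is a name introduced here; note it ignores any username/password in the netloc):
def replace_hostname(url, new_host):
    parsed = urlparse.urlparse(url)
    if parsed.port is None:
        netloc = new_host
    else:
        netloc = "{}:{}".format(new_host, parsed.port)
    return parsed._replace(netloc=netloc).geturl()

>>> replace_hostname("https://www.google.dk:80/barbaz", "www.foo.dk")
'https://www.foo.dk:80/barbaz'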
|
Celery: is there a way to write custom JSON Encoder/Decoder?
|
I have some objects I want to send to celery tasks on my application. Those objects are obviously not json serializable using the default json library. Is there a way to make celery serialize/de-serialize those objects with custom JSON Encoder/Decoder?
|
A bit late here, but you should be able to define a custom encoder and decoder by registering them in the kombu serializer registry, as in the docs: http://docs.celeryproject.org/en/latest/userguide/calling.html#serializers.
For example, the following is a custom datetime serializer/deserializer (subclassing python's builtin json module) for Django:
myjson.py (put it in the same folder of your settings.py file)
import json
from datetime import datetime
from time import mktime
class MyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime):
return {
'__type__': '__datetime__',
'epoch': int(mktime(obj.timetuple()))
}
else:
return json.JSONEncoder.default(self, obj)
def my_decoder(obj):
if '__type__' in obj:
if obj['__type__'] == '__datetime__':
return datetime.fromtimestamp(obj['epoch'])
return obj
# Encoder function
def my_dumps(obj):
return json.dumps(obj, cls=MyEncoder)
# Decoder function
def my_loads(obj):
return json.loads(obj, object_hook=my_decoder)
settings.py
# Register your new serializer methods into kombu
from kombu.serialization import register
from .myjson import my_dumps, my_loads
register('myjson', my_dumps, my_loads,
content_type='application/x-myjson',
content_encoding='utf-8')
# Tell celery to use your new serializer:
CELERY_ACCEPT_CONTENT = ['myjson']
CELERY_TASK_SERIALIZER = 'myjson'
CELERY_RESULT_SERIALIZER = 'myjson'
|
Proper way for user authentication with angularjs and flask
|
I'm currently working my way through web development with Flask. I want to build a webapp with Flask as the backend and angular.js on the frontend. The JSON part is straightforward, and my first steps worked out well. But now I am stuck on user authentication. I have read a lot and found that WTForms doesn't work well with Angular (without the CSRF token), so I cannot use something like Flask-Security.
What is a proper way, or maybe the best practice, for implementing user authentication with Angular and Flask? (Maybe someone knows a good example app I can learn from.)
|
I have written several tutorials on RESTful APIs with Flask, all with examples that are ready to use:
http://blog.miguelgrinberg.com/category/REST
The tutorials are:
Designing a RESTful API with Python and Flask
This is a simple RESTful API written in Flask, with authentication.
Writing a Javascript REST client
In this one I implement a Javascript client for the API built in the first tutorial. Here you can see how the client authenticates. I used Knockout instead of Angular for the client, but the principles are the same.
Designing a RESTful API using Flask-RESTful
This one is another way to write the server, using the Flask-RESTful extension to simplify some tasks.
RESTful Authentication with Flask
This is a tutorial specifically dedicated to authentication.
I hope they are useful.
|
Tutorial for scipy.cluster.hierarchy
|
I'm trying to understand how to manipulate a hierarchy cluster but the documentation is too ... technical?... and I can't understand how it works.
Is there any tutorial that can help me to start with, explaining step by step some simple tasks?
Let's say I have the following data set:
a = np.array([[0, 0 ],
[1, 0 ],
[0, 1 ],
[1, 1 ],
[0.5, 0 ],
[0, 0.5],
[0.5, 0.5],
[2, 2 ],
[2, 3 ],
[3, 2 ],
[3, 3 ]])
I can easily do the hierarchy cluster and plot the dendrogram:
z = linkage(a)
d = dendrogram(z)
Now, how can I recover a specific cluster? Let's say the one with elements [0,1,2,4,5,6] in the dendrogram?
And how can I get back the values of those elements?
|
There are three steps in hierarchical agglomerative clustering (HAC):
Quantify Data (metric argument)
Cluster Data (method argument)
Choose the number of clusters
Doing
z = linkage(a)
will accomplish the first two steps. Since you did not specify any parameters it uses the standard values
metric = 'euclidean'
method = 'single'
So z = linkage(a) will give you a single linked hierachical agglomerative clustering of a. This clustering is kind of a hierarchy of solutions. From this hierarchy you get some information about the structure of your data. What you might do now is:
Check which metric is appropriate, e. g. cityblock or chebyshev will quantify your data differently (cityblock, euclidean and chebyshev correspond to the L1, L2, and L_inf norm)
Check the different properties / behaviours of the methods (e. g. single, complete and average)
Check how to determine the number of clusters, e. g. by reading the wiki about it
Compute indices on the found solutions (clusterings) such as the silhouette coefficient (with this coefficient you get a feedback on the quality of how good a point/observation fits to the cluster it is assigned to by the clustering). Different indices use different criteria to qualify a clustering.
Here is something to start with
import numpy as np
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
a = np.array([[0.1, 2.5],
[1.5, .4 ],
[0.3, 1 ],
[1 , .8 ],
[0.5, 0 ],
[0 , 0.5],
[0.5, 0.5],
[2.7, 2 ],
[2.2, 3.1],
[3 , 2 ],
[3.2, 1.3]])
fig, axes23 = plt.subplots(2, 3)
for method, axes in zip(['single', 'complete'], axes23):
z = hac.linkage(a, method=method)
# Plotting
axes[0].plot(range(1, len(z)+1), z[::-1, 2])
knee = np.diff(z[::-1, 2], 2)
axes[0].plot(range(2, len(z)), knee)
num_clust1 = knee.argmax() + 2
knee[knee.argmax()] = 0
num_clust2 = knee.argmax() + 2
axes[0].text(num_clust1, z[::-1, 2][num_clust1-1], 'possible\n<- knee point')
part1 = hac.fcluster(z, num_clust1, 'maxclust')
part2 = hac.fcluster(z, num_clust2, 'maxclust')
clr = ['#2200CC' ,'#D9007E' ,'#FF6600' ,'#FFCC00' ,'#ACE600' ,'#0099CC' ,
'#8900CC' ,'#FF0000' ,'#FF9900' ,'#FFFF00' ,'#00CC01' ,'#0055CC']
for part, ax in zip([part1, part2], axes[1:]):
for cluster in set(part):
ax.scatter(a[part == cluster, 0], a[part == cluster, 1],
color=clr[cluster])
m = '\n(method: {})'.format(method)
plt.setp(axes[0], title='Screeplot{}'.format(m), xlabel='partition',
ylabel='{}\ncluster distance'.format(m))
plt.setp(axes[1], title='{} Clusters'.format(num_clust1))
plt.setp(axes[2], title='{} Clusters'.format(num_clust2))
plt.tight_layout()
plt.show()
This produces a figure with the screeplots and the resulting cluster assignments for both methods (image omitted here).
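To answer the concrete question of recovering the members of one cluster: fcluster gives you a label per observation, so boolean indexing pulls out the values (a sketch using the array a and the hac alias from above):
labels = hac.fcluster(hac.linkage(a), 2, 'maxclust')  # cut the tree into 2 clusters
print(a[labels == 1])  # the coordinates of the points assigned to cluster 1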
|
Python "SyntaxError: Non-ASCII character '\xe2' in file"
|
I am writing some python code and I am receiving the error message as in the title, from searching this has to do with the character set.
Here is the line that causes the error
hc = HealthCheck("instance_health", interval=15, target808="HTTP:8080/index.html")
I cannot figure out which character is not in the ASCII set. Furthermore, searching "\xe2" does not give any more information as to what character it represents. Which character in that line is causing the issue?
I have also seen a few fixes for this issue but I am not sure which to use. Could someone clarify what the issue is (python doesn't interpret unicode unless told to do so?), and how I would clear it up properly?
EDIT:
Here are all the lines near the one that errors
def createLoadBalancer():
conn = ELBConnection(creds.awsAccessKey, creds.awsSecretKey)
hc = HealthCheck("instance_health", interval=15, target808="HTTP:8080/index.html")
lb = conn.create_load_balancer('my_lb', ['us-east-1a', 'us-east-1b'],[(80, 8080, 'http'), (443, 8443, 'tcp')])
lb.configure_health_check(hc)
return lb
|
If you are just trying to use UTF-8 characters or don't care if they are in your code, add this line to the top of your .py file
# -*- coding: utf-8 -*-
|
Scatter plots in Pandas/Pyplot: How to plot by category
|
I am trying to make a simple scatter plot in pyplot using a Pandas DataFrame object, but want an efficient way of plotting two variables while having the symbols dictated by a third column (key). I have tried various ways using df.groupby, but not successfully. A sample df script is below. This colours the markers according to 'key1', but I'd like to see a legend with 'key1' categories. Am I close? Thanks.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.normal(10,1,30).reshape(10,3), index = pd.date_range('2010-01-01', freq = 'M', periods = 10), columns = ('one', 'two', 'three'))
df['key1'] = (4,4,4,6,6,6,8,8,8,8)
fig1 = plt.figure(1)
ax1 = fig1.add_subplot(111)
ax1.scatter(df['one'], df['two'], marker = 'o', c = df['key1'], alpha = 0.8)
plt.show()
|
You can use scatter for this, but that requires having numerical values for your key1, and you won't have a legend, as you noticed.
It's better to just use plot for discrete categories like this. For example:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
np.random.seed(1974)
# Generate Data
num = 20
x, y = np.random.random((2, num))
labels = np.random.choice(['a', 'b', 'c'], num)
df = pd.DataFrame(dict(x=x, y=y, label=labels))
groups = df.groupby('label')
# Plot
fig, ax = plt.subplots()
ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
for name, group in groups:
ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name)
ax.legend()
plt.show()
If you'd like things to look like the default pandas style, then just update the rcParams with the pandas stylesheet and use its color generator. (I'm also tweaking the legend slightly):
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
np.random.seed(1974)
# Generate Data
num = 20
x, y = np.random.random((2, num))
labels = np.random.choice(['a', 'b', 'c'], num)
df = pd.DataFrame(dict(x=x, y=y, label=labels))
groups = df.groupby('label')
# Plot
plt.rcParams.update(pd.tools.plotting.mpl_stylesheet)
colors = pd.tools.plotting._get_standard_colors(len(groups), color_type='random')
fig, ax = plt.subplots()
ax.set_color_cycle(colors)
ax.margins(0.05)
for name, group in groups:
ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name)
ax.legend(numpoints=1, loc='upper left')
plt.show()
|
Simple explanation of Google App Engine NDB Datastore
|
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble.
My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key.
One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore?
Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key?
I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
|
I think you're overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity).
You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round.
(Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.)
Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group.
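A small sketch of how this looks with the ndb API (the kind names and property types are just examples):
from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()

class Shoe(ndb.Model):
    size = ndb.IntegerProperty()

person_key = ndb.Key(Person, 'bob')      # a named key you choose yourself
shoe = Shoe(parent=person_key, size=42)  # the child stores the ancestor path
shoe_key = shoe.put()                    # -> Key(Person, 'bob', Shoe, <auto id>)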
|
How to sharex when using subplot2grid
|
I'm a Matlab user who recently converted to Python. Most of the Python skills I have managed to pick up on my own, but with plotting I have hit a wall and need some help.
This is what I'm trying to do...
I need to make a figure that consists of 3 subplots with following properties:
subplot layout is 311, 312, 313
the height of 312 and 313 is approximately half of the 311
all subplots share common X axis
the space between the subplots is 0 (they touch each other at X axis)
By the way, I know how to do each of these things, just not in a single figure. That is the problem I'm facing now.
For example, this is my ideal subplot layout:
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0, 2.0, 0.01)
s1 = np.sin(2*np.pi*t)
s2 = np.exp(-t)
s3 = s1*s2
fig = plt.figure()
ax1 = plt.subplot2grid((4,3), (0,0), colspan=3, rowspan=2)
ax2 = plt.subplot2grid((4,3), (2,0), colspan=3)
ax3 = plt.subplot2grid((4,3), (3,0), colspan=3)
ax1.plot(t,s1)
ax2.plot(t[:150],s2[:150])
ax3.plot(t[30:],s3[30:])
plt.tight_layout()
plt.show()
Notice how the x axis of different subplots is misaligned. I do not know how to align the x axis in this figure, but if I do something like this:
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0, 2.0, 0.01)
s1 = np.sin(2*np.pi*t)
s2 = np.exp(-t)
s3 = s1*s2
fig2, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True)
ax1.plot(t,s1)
ax2.plot(t[:150],s2[:150])
ax3.plot(t[30:],s3[30:])
plt.tight_layout()
plt.show()
Now the x axis is aligned between the subplots, but all subplots are the same size (which is not what I want)
Furthermore, I would like that the subplots are touching at x axis like this:
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0, 2.0, 0.01)
s1 = np.sin(2*np.pi*t)
s2 = np.exp(-t)
s3 = s1*s2
fig1 = plt.figure()
plt.subplots_adjust(hspace=0)
ax1 = plt.subplot(311)
ax2 = plt.subplot(312, sharex=ax1)
ax3 = plt.subplot(313, sharex=ax1)
ax1.plot(t,s1)
ax2.plot(t[:150],s2[:150])
ax3.plot(t[30:],s3[30:])
xticklabels = ax1.get_xticklabels()+ax2.get_xticklabels()
plt.setp(xticklabels, visible=False)
plt.show()
So to rephrase my question:
I would like to use
plt.subplot2grid(..., colspan=3, rowspan=2)
plt.subplots(..., sharex=True)
plt.subplots_adjust(hspace=0)
and
plt.tight_layout()
together in the same figure. How to do that?
|
Just specify sharex=ax1 when creating your second and third subplots.
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0, 2.0, 0.01)
s1 = np.sin(2*np.pi*t)
s2 = np.exp(-t)
s3 = s1*s2
fig = plt.figure()
ax1 = plt.subplot2grid((4,3), (0,0), colspan=3, rowspan=2)
ax2 = plt.subplot2grid((4,3), (2,0), colspan=3, sharex=ax1)
ax3 = plt.subplot2grid((4,3), (3,0), colspan=3, sharex=ax1)
ax1.plot(t,s1)
ax2.plot(t[:150],s2[:150])
ax3.plot(t[30:],s3[30:])
fig.subplots_adjust(hspace=0)
for ax in [ax1, ax2]:
plt.setp(ax.get_xticklabels(), visible=False)
# The y-ticks will overlap with "hspace=0", so we'll hide the bottom tick
ax.set_yticks(ax.get_yticks()[1:])
plt.show()
If you still what to use fig.tight_layout(), you'll need to call it before fig.subplots_adjust(hspace=0). The reason for this is that tight_layout works by automatically calculating parameters for subplots_adjust and then calling it, so if subplots_adjust is manually called first, anything in the first call to it will be overridden by tight_layout.
E.g.
fig.tight_layout()
fig.subplots_adjust(hspace=0)
|
Python 'list indices must be integers, not tuple"
|
I have been banging my head against this for two days now. I am new to Python and programming, so the other examples of this type of error have not helped me much. I am reading through the documentation for lists and tuples, but haven't found anything that helps. Any pointers would be much appreciated. Not looking for the answer necessarily, just more resources on where to look. I am using Python 2.7.6. Thanks
measure = raw_input("How would you like to measure the coins? Enter 1 for grams 2 for pounds. ")
coin_args = [
["pennies", '2.5', '50.0', '.01']
["nickles", '5.0', '40.0', '.05']
["dimes", '2.268', '50.0', '.1']
["quarters", '5.67', '40.0', '.25']
]
if measure == 2:
for coin, coin_weight, rolls, worth in coin_args:
print "Enter the weight of your %s" % (coin)
weight = float(raw_input())
convert2grams = weight * 453.592
num_coin = convert2grams / (float(coin_weight))
num_roll = round(num_coin / (float(rolls)))
amount = round(num_coin * (float(worth)), 2)
print "You have %d %s, worth $ %d, and will need %d rolls." % (num_coin, coin, amount, num_roll)
else:
for coin, coin_weight, rolls, worth in coin_args:
print "Enter the weight of your %s" % (coin)
weight = float(raw_input())
num_coin = weight / (float(coin_weight))
num_roll = round(num_coin / (float(rolls)))
amount = round(num_coin * (float(worth)), 2)
print "You have %d %s, worth $ %d, and will need %d rolls." % (num_coin, coin, amount, num_roll)
This is the stack trace:
File ".\coin_estimator_by_weight.py", line 5, in <module>
["nickles", '5.0', '40.0', '.05']
TypeError: list indices must be integers, not tuple
|
The problem is that [...] in python has two distinct meanings
expr [ index ] means accessing an element of a list
[ expr1, expr2, expr3 ] means building a list of three elements from three expressions
In your code you forgot the comma between the expressions for the items in the outer list:
[ [a, b, c] [d, e, f] [g, h, i] ]
therefore Python interpreted the start of the second element as an index to be applied to the first, and this is what the error message is saying.
The correct syntax for what you're looking for is
[ [a, b, c], [d, e, f], [g, h, i] ]
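Applied to the code in the question, adding the missing commas fixes the error:
coin_args = [
    ["pennies", '2.5', '50.0', '.01'],
    ["nickles", '5.0', '40.0', '.05'],
    ["dimes", '2.268', '50.0', '.1'],
    ["quarters", '5.67', '40.0', '.25'],
]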
|
linux tee is not working with python?
|
I made a python script which communicates with a web server using an infinite loop.
I want to log every communication data to a file and also monitor them from terminal at same time. so I used tee command like this.
python client.py | tee logfile
however, I got nothing from terminal nor logfile.
the python script is working fine.
What is happening here? Am I missing something?
Some advice would be appreciated.
Thank you in advance.
|
From man python:
-u Force stdin, stdout and stderr to be totally unbuffered. On systems
where it matters, also put stdin, stdout and stderr in binary mode. Note
that there is internal buffering in xreadlines(), readlines() and file-
object iterators ("for line in sys.stdin") which is not influenced by
this option. To work around this, you will want to use "sys.stdin.readline()"
inside a "while 1:" loop.
So what you can do is:
/usr/bin/python -u client.py >> logfile 2>&1
Or using tee:
python -u client.py | tee logfile
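Setting the PYTHONUNBUFFERED environment variable has the same effect as -u, which is handy when you can't change the command line:
PYTHONUNBUFFERED=1 python client.py | tee logfile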
|
Getting "cannot write mode P as JPEG" while operating on JPG image
|
I am trying to resize some images, most of which are JPG. But in a few images, I am getting the error:
Traceback (most recent call last):
File "image_operation_new.py", line 168, in modifyImage
tempImage.save(finalName);
File "/Users/kshitiz/.virtualenvs/django_project/lib/python2.7/site- packages/PIL/Image.py", line 1465, in save
save_handler(self, fp, filename)
File "/Users/kshitiz/.virtualenvs/django_project/lib/python2.7/site- packages/PIL/JpegImagePlugin.py", line 455, in _save
raise IOError("cannot write mode %s as JPEG" % im.mode)
IOError: cannot write mode P as JPEG
I am not changing the image type and I'm using the pillow library. My OS is Mac OS X. How can I resolve the issue?
|
JPEG does not support palettised images ('P' mode means the image uses a colour palette), so you need to convert the image to RGB mode first:
Image.open('old.jpeg').convert('RGB').save('new.jpeg')
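If you are resizing in the same pipeline, the conversion slots in before the save; a sketch (the size and file names are examples):
from PIL import Image

im = Image.open('old.png').convert('RGB')
im = im.resize((800, 600))
im.save('new.jpeg')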
|
What user will Ansible run my commands as?
|
Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more livable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose with this is it run my node.js webapp from another user, who we'll call bill (that can only use sudo to run a script that I made that starts the node server, as opposed to root or my user that can use sudo for all commands). To do this, I need the ability to have Ansible's git module clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set what user Ansible will connect to the server you're maintaining by defining 'user' and the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer keeping it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly if I could understand exactly how Ansible runs, which users it runs as, and how/where I can specify what user to use at different times.
Thank you tons in advance
|
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using ansible >= 1.4; the variable was called user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
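For example, a task that clones a repo as bill might look roughly like this. This is only a sketch — the repo URL and destination path are placeholders, and it uses the sudo/sudo_user syntax of the Ansible versions this answer assumes (newer releases renamed these to become/become_user):
- hosts: webservers
  remote_user: joe          # the user Ansible connects over SSH as
  tasks:
    - name: clone the webapp repo as bill
      git: repo=https://example.com/webapp.git dest=/home/bill/webapp
      sudo: true
      sudo_user: bill       # escalate from joe to bill for this task only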
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
|
Link ATLAS/MKL to an installed Numpy
|
TL;DR: how to link ATLAS/MKL to an existing Numpy installation without rebuilding it.
I have been using Numpy for calculations on large matrices and found it very slow, because Numpy was using only 1 core. After a lot of searching I figured out that my Numpy is not linked against an optimized library like ATLAS/MKL. Here is my Numpy config:
>>>import numpy as np
>>>np.__config__.show()
blas_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
lapack_info:
libraries = ['lapack']
library_dirs = ['/usr/lib']
language = f77
atlas_threads_info:
NOT AVAILABLE
blas_opt_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_blas_threads_info:
NOT AVAILABLE
openblas_info:
NOT AVAILABLE
lapack_opt_info:
libraries = ['lapack', 'blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
blas_mkl_info:
NOT AVAILABLE
atlas_blas_info:
NOT AVAILABLE
mkl_info:
NOT AVAILABLE
For this reason, I want to link ATLAS/MKL to Numpy. However, my Numpy was installed from pip, and I don't want to install it manually because I want to use the latest version. The guides I have found so far only cover building from scratch. My questions are:
Is there any way to link ATLAS/MKL to Numpy without rebuilding it?
I have found that the config info is saved in __config__.py in Numpy's install folder. Will modifying it solve my problem? If yes, would you please show me how?
|
Assuming you're running some flavour of linux, here's one way you could do it:
Find out what BLAS library numpy is currently linked against using ldd.
For versions of numpy older than v1.10:
$ ldd /<path_to_site-packages>/numpy/core/_dotblas.so
For example, if I install numpy via apt-get, it links to
...
libblas.so.3 => /usr/lib/libblas.so.3 (0x00007fed81de8000)
...
If _dotblas.so doesn't exist, this probably means that numpy failed to detect any BLAS libraries when it was originally installed, in which case it simply doesn't build any of the BLAS-dependent components. This often happens if you install numpy using pip without manually specifying a BLAS library (see below). I'm afraid you'll have no option but to rebuild numpy if you want to link against an external BLAS library.
For numpy v1.10 and newer:
_dotblas.so has been removed from recent versions of numpy, but you should be able to check the dependencies of multiarray.so instead:
$ ldd /<path_to_site-packages>/numpy/core/multiarray.so
Install ATLAS/MKL/OpenBLAS if you haven't already. By the way, I would definitely recommend OpenBLAS over ATLAS - take a look at this answer (although the benchmarking data is now probably a bit out of date).
Use update-alternatives to create a symlink to the new BLAS library of your choice. For example, if you installed libopenblas.so into /opt/OpenBLAS/lib, you would do:
$ sudo update-alternatives --install /usr/lib/libblas.so.3 \
libblas.so.3 \
/opt/OpenBLAS/lib/libopenblas.so \
50
You can have multiple symlinks configured for a single target library, allowing you to manually switch between multiple installed BLAS libraries.
For example, when I call $ sudo update-alternatives --config libblas.so.3, I can choose between one of 3 libraries:
Selection Path Priority Status
------------------------------------------------------------
0 /opt/OpenBLAS/lib/libopenblas.so 40 auto mode
1 /opt/OpenBLAS/lib/libopenblas.so 40 manual mode
2 /usr/lib/atlas-base/atlas/libblas.so.3 35 manual mode
* 3 /usr/lib/libblas/libblas.so.3 10 manual mode
If you really want the "newest" version of numpy, you could also take a look at my answer on compiling numpy from source with OpenBLAS integration.
Installing numpy with BLAS support using pip
As @tndoan mentioned in the comments, it's possible to make pip respect a particular configuration for numpy by placing a config file in ~/.numpy-site.cfg - see this answer for more details.
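For reference, such a config file might look roughly like this for an OpenBLAS build installed under /opt/OpenBLAS (the section name follows numpy's site.cfg conventions; the paths are an assumption based on the install location used above):
[openblas]
libraries = openblas
library_dirs = /opt/OpenBLAS/lib
include_dirs = /opt/OpenBLAS/include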
My personal preference is to configure and build numpy by hand. It's not particularly difficult, and it gives you better control over numpy's configuration.
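Whichever route you take, a quick way to check that the relink actually worked is to time a large matrix product — with a multithreaded BLAS you should see several cores busy and a much shorter runtime. A rough sketch (the matrix size is arbitrary):
import time
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

start = time.time()
np.dot(a, b)  # dispatched to whichever BLAS numpy is linked against
print('took %.2f seconds' % (time.time() - start))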
|
TypeError: only length-1 arrays can be converted to Python scalars while trying to exponentially fit data
|
import math
import numpy as np
import matplotlib.pyplot as plt

f=np.loadtxt('Single Small Angle 1.txt',unpack=True,skiprows=2)
g=np.loadtxt('Single Small Angle 5.txt',unpack=True,skiprows=2)
x = f-g[:,:11944]
t=range(len(x))
m=math.log10(abs(x))
np.polyfit(t,m)
plt.plot(t,abs(x))
plt.show()
I'm just not sure how to fix my issue. It keeps saying:
m=math.log10(abs(x))
TypeError: only length-1 arrays can be converted to Python scalars
|
Non-numpy functions like math.log10() expect a single scalar, so they don't play nicely with numpy arrays (note also that math.abs() doesn't actually exist — the scalar equivalent is the built-in abs()). Just replace the line raising the error with the element-wise numpy versions:
m = np.log10(np.abs(x))
Apart from that, the np.polyfit() call will not work because it is missing the required degree parameter (and you are not assigning the result for further use anyway).
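Putting it together, the corrected lines might look like this (the polynomial degree of 1 is an assumption — choose whatever degree suits your data):
m = np.log10(np.abs(x))        # element-wise, works on whole arrays
coeffs = np.polyfit(t, m, 1)   # degree is required; keep the result for reuse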
|