Python 3 TypeError: must be str, not bytes with sys.stdout.write()
|
I was looking for a way to run an external process from a Python script and print its stdout messages during execution.
The code below works, but prints no stdout output during runtime. When it exits I am getting the following error:
sys.stdout.write(nextline) TypeError: must be str, not bytes
p = subprocess.Popen(["demo.exe"],stdout = subprocess.PIPE, stderr= subprocess.PIPE)
# Poll process for new output until finished
while True:
nextline = p.stdout.readline()
if nextline == '' and p.poll() != None:
break
sys.stdout.write(nextline)
sys.stdout.flush()
output = p.communicate()[0]
exitCode = p.returncode
I am using python 3.3.2
|
Python 3 handles strings a bit differently. Originally there was just one type for
strings: str. When Unicode gained traction in the '90s the new unicode type
was added to handle Unicode without breaking pre-existing code [1]. This is
effectively the same as str but with multibyte support.
In Python 3 there are two different types:
The bytes type. This is just a sequence of bytes, Python doesn't know
anything about how to interpret this as characters.
The str type. This is also a sequence of bytes, but Python knows how to
interpret those bytes as characters.
The separate unicode type was dropped. str now supports unicode.
In Python 2 implicitly assuming an encoding could cause a lot of problems; you
could end up using the wrong encoding, or the data may not have an encoding at
all (e.g. it's a PNG image).
Explicitly telling Python which encoding to use (or explicitly telling it to
guess) is often a lot better and much more in line with the "Python philosophy"
of "explicit is better than implicit".
This change is incompatible with Python 2 as many return values have changed,
leading to subtle problems like this one; it's probably the main reason why
Python 3 adoption has been so slow. Since Python doesn't have static typing [2]
it's impossible to change this automatically with a script (such as the bundled
2to3).
You can convert str to bytes with bytes('H€llo', 'utf-8'); this should
produce b'H\xe2\x82\xacllo'. Note how one character was converted to three
bytes.
You can convert bytes to str with b'H\xe2\x82\xacllo'.decode('utf-8').
Of course, UTF-8 may not be the correct character set in your case, so be sure
to use the correct one.
In your specific piece of code, nextline is of type bytes, not str:
reading stdout and stderr from a subprocess changed in Python 3 from str to
bytes. This is because Python can't be sure which encoding it uses. It
probably uses the same as sys.stdin.encoding (the encoding of your system),
but it can't be sure.
You need to replace:
sys.stdout.write(nextline)
with:
sys.stdout.write(nextline.decode('utf-8'))
or maybe:
sys.stdout.write(nextline.decode(sys.stdout.encoding))
You will also need to modify if nextline == '' to if nextline == b'' since:
>>> '' == b''
False
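Putting both fixes together, the polling loop from the question might look like this under Python 3 (a sketch; demo.exe is the questioner's program):
import subprocess
import sys

p = subprocess.Popen(["demo.exe"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    nextline = p.stdout.readline()
    if nextline == b'' and p.poll() is not None:  # compare against bytes, not str
        break
    sys.stdout.write(nextline.decode(sys.stdout.encoding))
    sys.stdout.flush()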
Also see the Python 3 ChangeLog, PEP 358, and PEP 3112.
[1] There are some neat tricks you can do with ASCII that you can't do with multibyte character sets; the most famous example is the "xor with space to switch case" (e.g. chr(ord('a') ^ ord(' ')) == 'A') and "set 6th bit to make a control character" (e.g. ord('\t') + ord('@') == ord('I')). ASCII was designed in a time when manipulating individual bits was an operation with a non-negligible performance impact.
[2] Yes, you can use function annotations, but it's a comparatively new feature and little used.
|
pandas attribute error : no attribute 'Factor' found
|
I'm trying to run code provided by yhat in their article about random forests in Python, but I keep getting the following error message:
File "test_iris_with_rf.py", line 11, in <module>
df['species'] = pd.Factor(iris.target, iris.target_names)
AttributeError: 'module' object has no attribute 'Factor'
Code:
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
print df
print iris.target_names
df['is_train'] = np.random.uniform(0, 1, len(df)) <= .75
df['species'] = pd.Factor(iris.target, iris.target_names)
df.head()
|
In newer versions of pandas, Factor is called Categorical instead. Change your line to:
df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
|
Why is this iterative Collatz method 30% slower than its recursive version in Python?
|
Prelude
I have two implementations for a particular problem, one recursive and one iterative, and I want to know what causes the iterative solution to be ~30% slower than the recursive one.
Given the recursive solution, I write an iterative solution making the stack explicit.
Clearly, I simply mimic what the recursion is doing, so of course the Python engine is better optimized to handle the bookkeeping. But can we write an iterative method with similar performance?
My case study is Problem #14 on Project Euler.
Find the longest Collatz chain with a starting number below one million.
Code
Here is a parsimonious recursive solution (credit due to veritas in the problem thread plus an optimization from jJjjJ):
def solve_PE14_recursive(ub=10**6):
def collatz_r(n):
if not n in table:
if n % 2 == 0:
table[n] = collatz_r(n // 2) + 1
elif n % 4 == 3:
table[n] = collatz_r((3 * n + 1) // 2) + 2
else:
table[n] = collatz_r((3 * n + 1) // 4) + 3
return table[n]
table = {1: 1}
return max(xrange(ub // 2 + 1, ub, 2), key=collatz_r)
Here's my iterative version:
def solve_PE14_iterative(ub=10**6):
def collatz_i(n):
stack = []
while not n in table:
if n % 2 == 0:
x, y = n // 2, 1
elif n % 4 == 3:
x, y = (3 * n + 1) // 2, 2
else:
x, y = (3 * n + 1) // 4, 3
stack.append((n, y))
n = x
ysum = table[n]
for x, y in reversed(stack):
ysum += y
table[x] = ysum
return ysum
table = {1: 1}
return max(xrange(ub // 2 + 1, ub, 2), key=collatz_i)
And the timings on my machine (i7 machine with lots of memory) using IPython:
In [3]: %timeit solve_PE14_recursive()
1 loops, best of 3: 942 ms per loop
In [4]: %timeit solve_PE14_iterative()
1 loops, best of 3: 1.35 s per loop
Comments
The recursive solution is awesome:
Optimized to skip a step or two depending on the two least significant bits.
My original solution didn't skip any Collatz steps and took ~1.86 s
It is difficult to hit Python's default recursion limit of 1000.
collatz_r(9780657630) returns 1133 but requires less than 1000 recursive calls.
Memoization avoids retracing
collatz_r length calculated on-demand for max
Playing around with it, timings seem to be precise to +/- 5 ms.
Languages with static typing like C and Haskell can get timings below 100 ms.
I put the initialization of the memoization table in the method by design for this question, so that timings would reflect the "re-discovery" of the table values on each invocation.
collatz_r(2**1002) raises RuntimeError: maximum recursion depth exceeded.
collatz_i(2**1002) happily returns with 1003.
I am familiar with generators, coroutines, and decorators.
I am using Python 2.7. I am also happy to use Numpy (1.8 on my machine).
What I am looking for
an iterative solution that closes the performance gap
discussion on how Python handles recursion
the finer details of the performance penalties associated with an explicit stack
I'm looking mostly for the first, though the second and third are very important to this problem and would increase my understanding of Python.
|
Here's my shot at a (partial) explanation after running some benchmarks, which confirm your figures.
While recursive function calls are expensive in CPython, they aren't nearly as expensive as emulating a call stack using lists. The stack for a recursive call is a compact structure implemented in C (see Eli Bendersky's explanation and the file Python/ceval.c in the source code).
By contrast, your emulated stack is a Python list object, i.e. a heap-allocated, dynamically growing array of pointers to tuple objects, which in turn point to the actual values; goodbye, locality of reference, hello cache misses. You then use Python's notoriously slow iteration on these objects. A line-by-line profiling with kernprof confirms that iteration and list handling are taking a lot of time:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
16 @profile
17 def collatz_i(n):
18 750000 339195 0.5 2.4 stack = []
19 3702825 1996913 0.5 14.2 while not n in table:
20 2952825 1329819 0.5 9.5 if n % 2 == 0:
21 864633 416307 0.5 3.0 x, y = n // 2, 1
22 2088192 906202 0.4 6.4 elif n % 4 == 3:
23 1043583 617536 0.6 4.4 x, y = (3 * n + 1) // 2, 2
24 else:
25 1044609 601008 0.6 4.3 x, y = (3 * n + 1) // 4, 3
26 2952825 1543300 0.5 11.0 stack.append((n, y))
27 2952825 1150867 0.4 8.2 n = x
28 750000 352395 0.5 2.5 ysum = table[n]
29 3702825 1693252 0.5 12.0 for x, y in reversed(stack):
30 2952825 1254553 0.4 8.9 ysum += y
31 2952825 1560177 0.5 11.1 table[x] = ysum
32 750000 305911 0.4 2.2 return ysum
Interestingly, even n = x takes around 8% of the total running time.
(Unfortunately, I couldn't get kernprof to produce something similar for the recursive version.)
|
Simulating the Knight Sequence Tour
|
I am currently trying to write a simple multi-threading program using Python. However, I have run into a bug I think I am missing. I am trying to write a program that uses a brute force approach to the problem below:
As can be seen from the image, there is a chess board where the knight travels all respective squares.
My approach is to simply try each possible way, where each possible way is a new thread. If at the end of a thread there are no possible moves, count how many squares have been visited; if it is equal to 63, write the solution to a simple text file...
The code is as below:
from thread import start_new_thread
import sys
i=1
coor_x = raw_input("Please enter x[0-7]: ")
coor_y = raw_input("Please enter y[0-7]: ")
coordinate = int(coor_x), int(coor_y)
def checker(coordinates, previous_moves):
possible_moves = [(coordinates[0]+1, coordinates[1]+2), (coordinates[0]+1, coordinates[1]-2),
(coordinates[0]-1, coordinates[1]+2), (coordinates[0]-1, coordinates[1]-2),
(coordinates[0]+2, coordinates[1]+1), (coordinates[0]+2, coordinates[1]-1),
(coordinates[0]-2, coordinates[1]+1), (coordinates[0]-2, coordinates[1]-1)]
to_be_removed = []
for index in possible_moves:
(index_x, index_y) = index
if index_x < 0 or index_x > 7 or index_y < 0 or index_y > 7:
to_be_removed.append(index)
for index in previous_moves:
if index in possible_moves:
to_be_removed.append(index)
if not to_be_removed:
for index in to_be_removed:
possible_moves.remove(index)
if len(possible_moves) == 0:
if not end_checker(previous_moves):
print "This solution is not correct"
else:
return possible_moves
def end_checker(previous_moves):
if len(previous_moves) == 63:
writer = open("knightstour.txt", "w")
writer.write(previous_moves)
writer.close()
return True
else:
return False
def runner(previous_moves, coordinates, i):
if not end_checker(previous_moves):
process_que = checker(coordinates, previous_moves)
for processing in process_que:
previous_moves.append(processing)
i = i+1
print "Thread number:"+str(i)
start_new_thread(runner, (previous_moves, processing, i))
else:
sys.exit()
previous_move = []
previous_move.append(coordinate)
runner(previous_move, coordinate, i)
c = raw_input("Type something to exit !")
I am open to all suggestions...
My sample output is as below:
Please enter x[0-7]: 4
Please enter y[0-7]: 0
Thread number:2
Thread number:3
Thread number:4
Thread number:5Thread number:4
Thread number:5
Thread number:6Thread number:3Thread number:6Thread number:5Thread number:6
Thread number:7
Thread number:6Thread number:8
Thread number:7
Thread number:8Thread number:7
Thread number:8
Thread number:4
Thread number:5
Thread number:6Thread number:9Thread number:7Thread number:9
Thread number:10
Thread number:11
Thread number:7
Thread number:8
Thread number:9
Thread number:10
Thread number:11
Thread number:12
Thread number:5Thread number:5
Thread number:6
Thread number:7
Thread number:8
Thread number:9
Thread number:6
Thread number:7
Thread number:8
Thread number:9
It seems that for some reason the number of threads is stuck at 12...
Any help would be most welcomed...
Thank you
|
Your so-called Quest of the Knights Who Say Ni problem, while a clever rephrasing for asking a Python question, is more widely known as the Knight's Tour mathematical problem. Given that, and the fact that you're a math teacher, I suspect your question is likely a fool's errand (aka snipe hunt) and that you're fully aware of the following fact:
According to a section of Wikipedia's article on the Knight's Tour problem:
5.1 Brute force algorithms
A brute-force search for a knight's tour is impractical on all but the
smallest boards; for example, on an 8×8 board there are approximately
4×10^51 possible move sequences∗, and it is well beyond the capacity
of modern computers (or networks of computers) to perform operations
on such a large set.
∗ Exactly 3,926,356,053,343,005,839,641,342,729,308,535,057,127,083,875,101,072 of them
according to a footnote link.
|
How do I use the 'json' module to read in one JSON object at a time?
|
I have a multi-gigabyte JSON file. The file is made up of JSON objects that are no more than a few thousand characters each, but there are no line breaks between the records.
Using Python 3 and the json module, how can I read one JSON object at a time from the file into memory?
The data is in a plain text file. Here is an example of a similar record. The actual records contain many nested dictionaries and lists.
Record in readable format:
{
"results": {
"__metadata": {
"type": "DataServiceProviderDemo.Address"
},
"Street": "NE 228th",
"City": "Sammamish",
"State": "WA",
"ZipCode": "98074",
"Country": "USA"
}
}
Actual format. New records start one after the other without any breaks.
{"results": { "__metadata": {"type": "DataServiceProviderDemo.Address"},"Street": "NE 228th","City": "Sammamish","State": "WA","ZipCode": "98074","Country": "USA" } } }{"results": { "__metadata": {"type": "DataServiceProviderDemo.Address"},"Street": "NE 228th","City": "Sammamish","State": "WA","ZipCode": "98074","Country": "USA" } } }{"results": { "__metadata": {"type": "DataServiceProviderDemo.Address"},"Street": "NE 228th","City": "Sammamish","State": "WA","ZipCode": "98074","Country": "USA" } } }
|
You can parse data in chunks using the JSONDecoder.raw_decode() method.
The following will yield complete objects as the parser finds them:
from json import JSONDecoder
from functools import partial
def json_parse(fileobj, decoder=JSONDecoder(), buffersize=2048):
buffer = ''
for chunk in iter(partial(fileobj.read, buffersize), ''):
buffer += chunk
while buffer:
try:
result, index = decoder.raw_decode(buffer)
yield result
buffer = buffer[index:]
except ValueError:
# Not enough data to decode, read more
break
This function reads from the given file object in buffersize chunks and has the decoder parse whole JSON objects out of the buffer. Each parsed object is yielded to the caller.
Use it like this:
with open('yourfilename', 'r') as infh:
for data in json_parse(infh):
# process object
Use this only if your JSON objects are written to a file back-to-back, with no newlines in between. If you do have newlines, use Loading & Parsing JSON file in python instead.
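For reference, the newline-delimited case is simpler still; a minimal sketch:
import json

with open('yourfilename', 'r') as infh:
    for line in infh:
        data = json.loads(line)
        # process object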
|
Need a fast way to count and sum an iterable in a single pass
|
Can anyone help me? I'm trying to come up with a way to compute
>>> sum_widths = sum(col.width for col in cols if not col.hide)
and also count the number of items in this sum, without having to make two passes over cols.
It seems unbelievable, but after scanning the std-lib (built-in functions, itertools, functools, etc.), I couldn't even find a function that would count the number of members in an iterable. I found the function itertools.count, which sounds like what I want, but it's really just a deceptively named range function.
After a little thought I came up with the following (which is so simple that the lack of a library function may be excusable, except for its obtuseness):
>>> visable_col_count = sum(col is col for col in cols if not col.hide)
However, using these two functions requires two passes of the iterable, which just rubs me the wrong way.
As an alternative, the following function does what I want:
>>> def count_and_sum(iter):
>>> count = sum = 0
>>> for item in iter:
>>> count += 1
>>> sum += item
>>> return count, sum
The problem with this is that it takes 100 times as long (according to timeit) as the sum of a generator expression form.
If anybody can come up with a simple one-liner which does what I want, please let me know (using Python 3.3).
Edit 1
Lots of great ideas here, guys. Thanks to all who replied. It will take me a while to digest all these answers, but I will and I will try to pick one to check.
Edit 2
I repeated the timings on my two humble suggestions (count_and_sum function and 2 separate sum functions) and discovered that my original timing was way off, probably due to an auto-scheduled backup process running in the background.
I also timed most of the excellent suggestions given as answers here, all with the same model. Analysing these answers has been quite an education for me: new uses for deque, enumerate and reduce and first time for count and accumulate. Thanks to all!
Here are the results (from my slow netbook) using the software I'm developing for display:
Count and Sum Timing

Method                       Time (usec)   Time (% of base)
count_and_sum (base)             7.2             100%
Two sums                         7.5             104%
deque enumerate accumulate       7.3             101%
max enumerate accumulate         7.3             101%
reduce                           7.4             103%
count sum                        7.3             101%
(I didn't time the complex and fold methods as being just too obscure, but thanks anyway.)
Since there's very little difference in timing among all these methods I decided to use the count_and_sum function (with an explicit for loop) as being the most readable, explicit and simple (Python Zen) and it also happens to be the fastest!
I wish I could accept one of these amazing answers as correct but they are all equally good though more or less obscure, so I'm just up-voting everybody and accepting my own answer as correct (count_and_sum function) since that's what I'm using.
What was that about "There should be one-- and preferably only one --obvious way to do it."?
|
Using complex numbers
z = [1, 2, 4, 5, 6]
y = sum(x + 1j for x in z)
sum_z, count_z = y.real, int(y.imag)
print sum_z, count_z
18.0 5
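Applied to the original cols example, the same trick might look like this (a sketch; col.width and col.hide are the attributes from the question):
y = sum(col.width + 1j for col in cols if not col.hide)
sum_widths, visible_col_count = y.real, int(y.imag)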
|
Find all columns of dataframe in Pandas whose type is float, or a particular type?
|
I have a dataframe, df, that has some columns of type float64, while the others are of type object. Due to the mixed nature, I cannot use
df.fillna('unknown')  # getting error "ValueError: could not convert string to float:"
as the error happened with the columns whose type is float64 (what a misleading error message!),
so I wish I could do something like
for col in df.columns[<dtype == object>]:
df[col] = df[col].fillna("unknown")
So my question is if there is any such filter expression that I can use with df.columns?
I guess alternatively, less elegantly, I could do:
for col in df.columns:
if (df[col].dtype == dtype('O')): # for object type
df[col] = df[col].fillna('')
# still puzzled, only empty string works as replacement, 'unknown' would not work for certain value leading to error of "ValueError: Error parsing datetime string "unknown" at position 0"
I also would like to know why in the above code replacing '' with 'unknown' the code would work for certain cells but failed with a cell with the error of "ValueError: Error parsing datetime string "unknown" at position 0"
Thanks a lot!
Yu
|
This is more concise:
# select the float columns
df_num = df.select_dtypes(include=[np.float])
# select non-numeric columns
df_obj = df.select_dtypes(exclude=[np.number])
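To apply the fillna from the question to just the object columns, a sketch:
# fill NaNs only in the object (string) columns
for col in df.select_dtypes(include=['object']).columns:
    df[col] = df[col].fillna('unknown')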
|
Use of input/raw_input in python 2 and 3
|
I would like to set a user prompt with the following question:
save_flag is not set to 1; data will not be saved. Press enter to continue.
input() works in Python 3 but not Python 2. raw_input() works in Python 2 but not Python 3. Is there a way to do this so that the code is compatible with both Python 2 and Python 3?
|
Bind raw_input to input in Python 2:
try:
input = raw_input
except NameError:
pass
Now input will return a string in Python 2 as well.
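With the shim in place, the prompt from the question works unchanged on both versions:
response = input("save_flag is not set to 1; data will not be saved. Press enter to continue.")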
|
Convert True/False value read from file to boolean
|
I'm reading a True - False value from a file and I need to convert it to boolean. Currently it always converts it to True even if the value is set to False.
Here's a MWE of what I'm trying to do:
with open('file.dat', mode="r") as f:
for line in f:
reader = line.split()
# Convert to boolean <-- Not working?
flag = bool(reader[0])
if flag:
print 'flag == True'
else:
print 'flag == False'
The file.dat file basically consists of a single string with the value True or False written inside. The arrangement looks very convoluted because this is a minimal example from a much larger code and this is how I read parameters into it.
Why is flag always converting to True?
|
bool('True') and bool('False') always return True because the strings 'True' and 'False' are not empty.
To quote a great man (and the Python documentation):
5.1. Truth Value Testing
Any object can be tested for truth value, for use in an if or while
condition or as operand of the Boolean operations below. The
following values are considered false:
…
zero of any numeric type, for example, 0, 0L, 0.0, 0j.
any empty sequence, for example, '', (), [].
…
All other values are considered true — so objects of many types
are always true.
The built-in bool function uses the standard truth testing procedure. That's why you're always getting True.
To convert a string to boolean you need to do something like this:
def str_to_bool(s):
if s == 'True':
return True
elif s == 'False':
return False
else:
raise ValueError # evil ValueError that doesn't tell you what the wrong value was
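A slightly friendlier variant might report the offending value; for example, replace the last line with:
raise ValueError("Cannot convert %r to a bool" % s)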
|
How to close a SQLAlchemy session?
|
Following what we commented in How to close sqlalchemy connection in MySQL, I am checking the connections that SQLAlchemy creates into my database and I cannot manage to close them without exiting from Python.
If I run this code in a python console, it keeps the session opened until I exit from python:
from sqlalchemy.orm import sessionmaker
from models import OneTable, get_engine
engine = get_engine(database="mydb")
session = sessionmaker(bind=engine)()
results = session.query(OneTable.company_name).all()
# some work with the data #
session.close()
and the only workaround I found to close it is to call engine.dispose() at the end.
As per the comments in the link I gave above, my question are now:
Why is engine.dispose() necessary to close sessions?
Doesn't session.close() suffice?
|
There's a central confusion here over the word "session". I'm not sure here, but it appears like you may be confusing the SQLAlchemy Session with a MySQL @@session, which refers to the scope of when you first make a connection to MySQL and when you disconnect.
These two concepts are not the same. A SQLAlchemy Session generally represents the scope of one or more transactions, upon a particular database connection.
Therefore, the answer to your question as literally asked, is to call session.close(), that is, "how to properly close a SQLAlchemy session".
However, the rest of your question indicates you'd like some functionality whereby when a particular Session is closed, you'd like the actual DBAPI connection to be closed as well.
What this basically means is that you wish to disable connection pooling, which, as other answers mention, is easy enough: use NullPool.
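A sketch of what that looks like (the connection URL is an assumption):
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# with NullPool, closing the session releases the actual DBAPI connection
engine = create_engine('mysql://user:password@localhost/mydb', poolclass=NullPool)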
|
Python MySQLdb TypeError: not all arguments converted during string formatting
|
So I think I'm going crazy but upon running this script:
#! /usr/bin/env python
import MySQLdb as mdb
import sys
class Test:
def check(self, search):
try:
con = mdb.connect('localhost', 'root', 'password', 'recordsdb');
cur = con.cursor()
cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search )
ver = cur.fetchone()
print "Output : %s " % ver
except mdb.Error, e:
print "Error %d: %s" % (e.args[0],e.args[1])
sys.exit(1)
finally:
if con:
con.close()
test = Test()
test.check("test")
I get an error of:
./lookup
Traceback (most recent call last):
File "./lookup", line 27, in <module>
test.check("test")
File "./lookup", line 11, in creep
cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search )
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 187, in execute
query = query % tuple([db.literal(item) for item in args])
TypeError: not all arguments converted during string formatting
I have zero idea why. I'm trying to do parameterized queries, but it's been nothing but a pain. I'm somewhat new to Python, so it's probably an obvious problem.
|
Instead of this:
cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search )
Try this:
cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", [search] )
See the MySQLdb documentation. The reasoning is that execute's second parameter represents a list of the objects to be converted, because you could have an arbitrary number of objects in a parameterized query. In this case, you have only one, but it still needs to be an iterable (a tuple instead of a list would also be fine).
|
Check and wait until a file exists to read it
|
I need to wait until a file is created, then read it in. I have the code below, but I am sure it does not work:
import os.path
if os.path.isfile(file_path):
read file in
else:
wait
Any ideas please?
|
A simple implementation could be:
import os.path
import time
while not os.path.exists(file_path):
time.sleep(1)
if os.path.isfile(file_path):
# read file
else:
raise ValueError("%s isn't a file!" % file_path)
You wait a certain amount of time after each check, and then read the file when the path exists. The script can be stopped with a KeyboardInterrupt if the file is never created. You should also check that the path is actually a file afterwards, to avoid unwanted exceptions.
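If you'd rather not rely on Ctrl-C, a variant with a timeout might look like this (60 seconds is an arbitrary choice):
import os.path
import time

deadline = time.time() + 60  # give up after a minute
while not os.path.exists(file_path):
    if time.time() > deadline:
        raise RuntimeError("%s was never created" % file_path)
    time.sleep(1)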
|
Python: why pickle?
|
I have been using pickle and was very happy, then I saw this article: Don't Pickle Your Data
Reading further it seems like:
Pickle is slow
Pickle is unsafe
Pickle isn't human readable
Pickle isn't language-agnostic
I've switched to saving my data as JSON, but I wanted to know about best practice:
Given all these issues, when would you ever use pickle? What specific situations call for using it?
|
Pickle is unsafe because it constructs arbitrary Python objects by invoking arbitrary functions. However, this is also what gives it the power to serialize almost any Python object, without any boilerplate or even white-/black-listing (in the common case). That's very desirable for some use cases:
Quick & easy serialization, for example for pausing and resuming a long-running but simple script. None of the concerns matter here, you just want to dump the program's state as-is and load it later.
Sending arbitrary Python data to other processes or computers, as in multiprocessing. The security concerns may apply (but mostly don't), the generality is absolutely necessary, and humans won't have to read it.
In other cases, none of the drawbacks is quite enough to justify the work of mapping your stuff to JSON or another restrictive data model. Maybe you don't expect to need human readability/safety/cross-language compatibility, or maybe you can do without. Remember, You Ain't Gonna Need It. Using JSON would be the right thing™, but right doesn't always equal good.
You'll notice that I completely ignored the "slow" downside. That's because it's partially misleading: Pickle is indeed slower for data that fits the JSON model (strings, numbers, arrays, maps) perfectly, but if your data's like that you should use JSON for other reasons anyway. If your data isn't like that (very likely), you also need to take into account the custom code you'll need to turn your objects into JSON data, and the custom code you'll need to turn JSON data back into your objects. It adds both engineering effort and run-time overhead, which must be quantified on a case-by-case basis.
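For the pause-and-resume use case mentioned above, a minimal sketch (state stands for whatever object holds your script's progress):
import pickle

# pause: dump the program state as-is
with open('state.pkl', 'wb') as f:
    pickle.dump(state, f)

# resume later
with open('state.pkl', 'rb') as f:
    state = pickle.load(f)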
|
Pandas dataframe total row
|
I have a dataframe, something like:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
and I would like to add a 'total' row to the end of the dataframe:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
5 tot 15 9.47
I've tried to use the sum command but I end up with a Series, which although I can convert back to a Dataframe, doesn't maintain the data types:
tot_row = pd.DataFrame(df.sum()).T
tot_row['foo'] = 'tot'
tot_row.dtypes:
foo object
bar object
qux object
I would like to maintain the data types from the original data frame as I need to apply other operations to the total row, something like:
baz = 2*tot_row['qux'] + 3*tot_row['bar']
|
Append a totals row with
df.append(df.sum(numeric_only=True), ignore_index=True)
The conversion is necessary only if you have a column of strings or objects.
It's a bit of a fragile solution so I'd recommend sticking to operations on the dataframe, though. eg.
baz = 2*df['qux'].sum() + 3*df['bar'].sum()
|
Factorial in numpy and scipy
|
How can I import the factorial function from numpy and scipy separately, in order to see which one is faster?
I already imported factorial from Python itself by import math, but that does not work for numpy and scipy.
|
You can import them like this:
In [7]: import scipy, numpy, math
In [8]: scipy.math.factorial, numpy.math.factorial, math.factorial
Out[8]:
(<function math.factorial>,
<function math.factorial>,
<function math.factorial>)
|
if or elif either true then do something
|
This is just for academic interest. I encounter the following situation a lot.
either_true = False
if x:
...do something1
either_true = True
elif y:
...do something2
either_true = True
if either_true:
..do something3
Is there any Pythonic way of doing it, or in general a better programming way of doing it?
Basically, do something3 executes only if the if or the elif branch is true.
|
You could also omit the either_true flag completely if doSomething3 is a single line of code (e.g. a function call):
if x:
..do something 1
..do something 3
elif y:
..do something 2
..do something 3
It maintains the nice property of evaluating x and y at most once (and y won't be evaluated if x is true).
|
TypeError: got multiple values for argument
|
I read the other threads that had to do with this error, and it seems that my problem has an interesting difference from all the posts I read so far: namely, all the other posts have the error in regard to either a user-created class or a built-in system resource. I am experiencing this problem when calling a function, and I can't figure out what the reason could be. Any ideas?
BOX_LENGTH = 100
turtle.speed(0)
fill = 0
for i in range(8):
fill += 1
if fill % 2 == 0:
Horizontol_drawbox(BOX_LENGTH, fillBox = False)
else:
Horizontol_drawbox(BOX_LENGTH, fillBox = True)
for i in range(8):
fill += 1
if fill % 2 == 0:
Vertical_drawbox(BOX_LENGTH,fillBox = False)
else:
Vertical_drawbox(BOX_LENGTH,fillBox = True)
Error message:
Horizontol_drawbox(BOX_LENGTH, fillBox = True)
TypeError: Horizontol_drawbox() got multiple values for argument 'fillBox'
|
This happens when a keyword argument is specified that overwrites a positional argument. For example, let's imagine a function that draws a colored box. The function selects the color to be used and delegates the drawing of the box to another function, relaying all extra arguments.
def color_box(color, *args, **kwargs):
painter.select_color(color)
painter.draw_box(*args, **kwargs)
Then the call
color_box("blellow", color="green", height=20, width=30)
will fail because two values are assigned to color: "blellow" as positional and "green" as keyword. (painter.draw_box is supposed to accept the height and width arguments).
This is easy to see in the example, but of course if one mixes up the arguments at call, it may not be easy to debug:
# misplaced height and width
color_box(20, 30, color="green")
Here, color is assigned 20, then args=[30] and color is again assigned "green".
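The fix, in both cases, is to pass color only once:
color_box("green", height=20, width=30)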
|
Python Set Comprehension
|
So I have these two problems for a homework assignment and I'm stuck on the second one.
Use a Python Set Comprehension (Python's equivalent of Set Builder notation) to generate a set of all of the prime numbers that are less than 100. Recall that a prime number is an integer that is greater than 1 and not divisible by any integer other than itself and 1. Store your set of primes in a variable (you will need it for additional parts). Output your set of primes (e.g., with the print function).
Use a Python Set Comprehension to generate a set of ordered pairs (tuples of length 2) consisting of all of the prime pairs consisting of primes less than 100. A Prime Pair is a pair of consecutive odd numbers that are both prime. Store your set of Prime Pairs in a variable. Your set of number 1 will be very helpful. Output your Set of Prime Pairs.
For the first one, this works perfectly:
r= {x for x in range(2, 101)
if not any(x % y == 0 for y in range(2, x))}
However, I'm pretty stumped on the second one. I think I may have to take the Cartesian product of the set r with something but I'm just not sure.
This gets me somewhat close but I just want the consecutive pairs.
cart = { (x, y) for x in r for y in r
if x < y }
Any help is greatly appreciated, thanks.
|
primes = {x for x in range(2, 101) if all(x%y for y in range(2, min(x, 11)))}
I simplified the test a bit - if all(x%y instead of if not any(not x%y
I also limited y's range; there is no point in testing for divisors > sqrt(x). So max(x) == 100 implies max(y) == 10. For x <= 10, y must also be < x.
pairs = {(x, x+2) for x in primes if x+2 in primes}
Instead of generating pairs of primes and testing them, get one and see if the corresponding higher prime exists.
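For example, the result should be the twin primes below 100:
>>> sorted(pairs)
[(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]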
|
finding non-numeric rows in dataframe in pandas?
|
I have a large dataframe in pandas that apart from the column used as index is supposed to have only numeric values:
df = pandas.DataFrame({"item": ["a", "b", "c", "d", "e"], "a": [1,2,3,"bad",5], "b":[0.1,0.2,0.3,0.4,0.5]})
df = df.set_index("item")
How can I find the row of the dataframe df that has a non-numeric value in it? In this example it's the fourth row in the dataframe, which has the string "bad" in the a column. How can this row be found programmatically? thanks.
|
You could use np.isreal to check the type of each element (applymap applies a function to each element in the DataFrame):
In [11]: df.applymap(np.isreal)
Out[11]:
a b
item
a True True
b True True
c True True
d False True
e True True
If all in the row are True then they are all numeric:
In [12]: df.applymap(np.isreal).all(1)
Out[12]:
item
a True
b True
c True
d False
e True
dtype: bool
So to get the sub-DataFrame of rogues (note: the negation, ~, of the above finds the rows which have at least one rogue non-numeric value):
In [13]: df[~df.applymap(np.isreal).all(1)]
Out[13]:
a b
item
d bad 0.4
To find the location of the first offender you could use argmin:
In [14]: np.argmin(df.applymap(np.isreal).all(1))
Out[14]: 'd'
As @CTZhu points out, it may be slightly faster to check whether it's an instance of either int or float (there is some additional overhead with np.isreal):
df.applymap(lambda x: isinstance(x, (int, float)))
|
Multi-tenant SAAS in Django
|
I have developed multi-tenant SAAS apps in PHP/Laravel, but recently I was challenged to develop one in Django/Python. I am still learning Django, though, and I really like the Django REST framework (DRF). But I have difficulties figuring out the highlighted areas below. If someone sheds some light, I will be good to go:
1. How to handle subdomains/domains and selecting the right tenant db
2. How to manage and dynamically handle different databases in Django
3. Can a multi-tenant app's backend still be managed from the Django admin interface
4. I will be using queues and other app scaling techniques; need tips and tricks if any
5. Any example out there
6. Any challenge experienced when developing SAAS through Django
|
Well...
1. django-subdomains
2. There are people who asked in SO questions about dynamic databases in django (including, ahem... me). I'm not entirely sure I understood what you mean by "dynamically handle different database" so I guess just go to the links I just mentioned and pick out the best one for your project. (also - perhaps this is more relevant?)
3. Check out django-multitenant-schemas and this answer too.
4. There was a video (link rot) of the guys behind Disqus (one of the largest django apps in the world) explaining how they handle scaling. Here is another interesting article on the subject. Also, the legendary SO question Does Django Scale.
5. (and 6.) Check out this answer
I hope that's detailed enough. I know this might be a disappointing only-links answer, but this is the reality of it - Django is a newer framework, and web development with python is still less common than php. With that in mind, understand that for all the awesomness of django (and it is awesome), with more complex needs there's more you'll have to do yourself.
In this case, you'll have to figure out how to do each part of the way seperatly and then combine it all. You can easily find a way to create a REST django app for example, but then you'll need to figure out how to combine it with another package (such as the above subdomains).
You can find a million examples out there of people doing freaky things with django. It's really powerful (when I learned about dynamic models I was blown away). But the more complex your app, the more you'll need to do yourself.
Pick it up, one step at a time, and come back to SO with specific issues you're having (or the django users google group). Good luck!
|
UnicodeDecodeError when performing os.walk
|
I am getting the error:
'ascii' codec can't decode byte 0x8b in position 14: ordinal not in range(128)
when trying to do os.walk. The error occurs because some of the files in a directory have the 0x8b (non-utf8) character in them. The files come from a Windows system (hence the utf-16 filenames), but I have copied the files over to a Linux system and am using python 2.7 (running in Linux) to traverse the directories.
I have tried passing a unicode start path to os.walk, and all the files & dirs it generates are unicode names until it comes to a non-utf8 name; then for some reason it doesn't convert those names to unicode, and the code chokes on the utf-16 names. Is there any way to solve the problem short of manually finding and changing all the offensive names?
If there is not a solution in python2.7, can a script be written in python3 to traverse the file tree and fix the bad filenames by converting them to utf-8 (by removing the non-utf8 chars)? N.B. there are many non-utf8 chars in the names besides 0x8b, so it would need to work in a general fashion.
UPDATE: The fact that 0x8b is still only a byte char (just not valid ascii) makes it even more puzzling. I have verified that there is a problem converting such a string to unicode, but that a unicode version can be created directly. To wit:
>>> test = 'a string \x8b with non-ascii'
>>> test
'a string \x8b with non-ascii'
>>> unicode(test)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8b in position 9: ordinal not in range(128)
>>>
>>> test2 = u'a string \x8b with non-ascii'
>>> test2
u'a string \x8b with non-ascii'
Here's a traceback of the error I am getting:
80. for root, dirs, files in os.walk(unicode(startpath)):
File "/usr/lib/python2.7/os.py" in walk
294. for x in walk(new_path, topdown, onerror, followlinks):
File "/usr/lib/python2.7/os.py" in walk
294. for x in walk(new_path, topdown, onerror, followlinks):
File "/usr/lib/python2.7/os.py" in walk
284. if isdir(join(top, name)):
File "/usr/lib/python2.7/posixpath.py" in join
71. path += '/' + b
Exception Type: UnicodeDecodeError at /admin/casebuilder/company/883/
Exception Value: 'ascii' codec can't decode byte 0x8b in position 14: ordinal not in range(128)
The root of the problem occurs in the list of files returned from listdir (on line 276 of os.walk):
names = listdir(top)
The names with chars > 128 are returned as non-unicode strings.
|
I can reproduce the os.listdir() behavior: os.listdir(unicode_name) returns undecodable entries as bytes on Python 2.7:
>>> import os
>>> os.listdir(u'.')
[u'abc', '<--\x8b-->']
Notice: the second name is a bytestring despite listdir()'s argument being a Unicode string.
A big question remains however - how can this be solved without resorting to this hack?
Python 3 solves the problem of bytes that are undecodable in the filesystem's character encoding in filenames via the surrogateescape error handler (os.fsencode/os.fsdecode). See PEP 383: Non-decodable Bytes in System Character Interfaces:
>>> os.listdir(u'.')
['abc', '<--\udc8b-->']
Notice: both strings are Unicode (Python 3), and the surrogateescape error handler was used for the second name. To get the original bytes back:
>>> os.fsencode('<--\udc8b-->')
b'<--\x8b-->'
In Python 2, use Unicode strings for filenames on Windows (Unicode API), OS X (utf-8 is enforced) and use bytestrings on Linux and other systems.
|
Counting the number of non-NaN elements in a numpy ndarray matrix in Python
|
I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:
import numpy as np
def numberOfNonNans(data):
count = 0
for i in data:
if not np.isnan(i):
count += 1
return count
Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis.
Thanks for any help!
|
np.count_nonzero(~np.isnan(data))
~ inverts the boolean matrix returned from np.isnan.
np.count_nonzero counts values that are not 0/False. (~np.isnan(data)).sum() gives the same result, but count_nonzero may be clearer.
Testing speed:
In [23]: data = np.random.random((10000,10000))
In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan
In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop
In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop
In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop
data.size - np.count_nonzero(np.isnan(data)) seems to be barely the fastest here; other data might give different relative speeds.
|
sqlalchemy IS NOT NULL select
|
How can I add a filter, as in SQL, to select values that are NOT NULL from a certain column?
SELECT *
FROM table
WHERE YourColumn IS NOT NULL;
How can I do the same with SQLAlchemy filters?
select = select(table).select_from(table).where(all_filters)
|
column_obj != None will produce an IS NOT NULL constraint:
In a column context, produces the clause a != b. If the target is None, produces a IS NOT NULL.
or use isnot() (new in 0.7.9):
Implement the IS NOT operator.
Normally, IS NOT is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS NOT may be desirable if comparing to boolean values on certain platforms.
Demo:
>>> from sqlalchemy.sql import column
>>> column('YourColumn') != None
<sqlalchemy.sql.elements.BinaryExpression object at 0x10c8d8b90>
>>> str(column('YourColumn') != None)
'"YourColumn" IS NOT NULL'
>>> column('YourColumn').isnot(None)
<sqlalchemy.sql.elements.BinaryExpression object at 0x104603850>
>>> str(column('YourColumn').isnot(None))
'"YourColumn" IS NOT NULL'
|
Pandas left outer join multiple dataframes on multiple columns
|
I am new to using DataFrames, and I would like to know how to perform the SQL equivalent of a left outer join on multiple columns across a series of tables.
Example:
df1:
Year Week Colour Val1
2014 A Red 50
2014 B Red 60
2014 B Black 70
2014 C Red 10
2014 D Green 20
df2:
Year Week Colour Val2
2014 A Black 30
2014 B Black 100
2014 C Green 50
2014 C Red 20
2014 D Red 40
df3:
Year Week Colour Val3
2013 B Red 60
2013 C Black 80
2013 B Black 10
2013 D Green 20
2013 D Red 50
Essentially I want to do something like this SQL code (Notice that df3 is not joined on Year):
SELECT df1.*, df2.Val2, df3.Val3
FROM df1
LEFT OUTER JOIN df2
ON df1.Year = df2.Year
AND df1.Week = df2.Week
AND df1.Colour = df2.Colour
LEFT OUTER JOIN df3
ON df1.Week = df3.Week
AND df1.Colour = df3.Colour
The result should look like:
Year Week Colour Val1 Val2 Val3
2014 A Red 50 Null Null
2014 B Red 60 Null 60
2014 B Black 70 100 Null
2014 C Red 10 20 Null
2014 D Green 20 Null Null
I have tried using merge and join, but can't figure out how to do it on multiple tables when multiple joins are involved. Could someone help me with this please?
Thanks
|
Merge them in two steps, df1 and df2 first, and then the result of that to df3.
In [33]: s1 = pd.merge(df1, df2, how='left', on=['Year', 'Week', 'Colour'])
I dropped year from df3 since you don't need it for the last join.
In [39]: df = pd.merge(s1, df3[['Week', 'Colour', 'Val3']],
how='left', on=['Week', 'Colour'])
In [40]: df
Out[40]:
Year Week Colour Val1 Val2 Val3
0 2014 A Red 50 NaN NaN
1 2014 B Red 60 NaN 60
2 2014 B Black 70 100 10
3 2014 C Red 10 20 NaN
4 2014 D Green 20 NaN 20
[5 rows x 6 columns]
|
converting epoch time with milliseconds to datetime
|
I have used a Ruby script to convert ISO timestamps to epoch; the files that I am parsing have the following timestamp structure:
2009-03-08T00:27:31.807
Since I want to keep milliseconds I used following ruby code to convert it to epoch time:
irb(main):010:0> DateTime.parse('2009-03-08T00:27:31.807').strftime("%Q")
=> "1236472051807"
But In python I tried following:
import time
time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(1236472051807))
But I don't get the original datetime back:
>>> time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(1236472051807))
'41152-03-29 02:50:07'
>>>
I wonder, is it related to how I am formatting?
|
Use datetime.datetime.fromtimestamp:
>>> import datetime
>>> s = 1236472051807 / 1000.0
>>> datetime.datetime.fromtimestamp(s).strftime('%Y-%m-%d %H:%M:%S.%f')
'2009-03-08 09:27:31.807000'
%f directive is only supported by datetime.datetime.strftime, not by time.strftime.
UPDATE: Alternative using % and str.format:
>>> import time
>>> s, ms = divmod(1236472051807, 1000) # (1236472051, 807)
>>> '%s.%03d' % (time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(s)), ms)
'2009-03-08 00:27:31.807'
>>> '{}.{:03d}'.format(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(s)), ms)
'2009-03-08 00:27:31.807'
|
How to use PyCharm to debug Scrapy projects
|
I am working on Scrapy 0.20 with Python 2.7. I found that PyCharm has a good Python debugger, and I want to test my Scrapy spiders with it. Does anyone know how to do that?
What I have tried
Actually I tried to run the spider as a script. As a result, I built that script. Then, I tried to add my Scrapy project to PyCharm as a model like this:
File->Setting->Project structure->Add content root.
But I don't know what else I have to do.
|
The scrapy command is a python script which means you can start it from inside PyCharm.
When you examine the scrapy binary (which scrapy) you will notice that this is actually a python script:
#!/usr/bin/python
from scrapy.cmdline import execute
execute()
This means that a command like
scrapy crawl IcecatCrawler can also be executed like this: python /Library/Python/2.7/site-packages/scrapy/cmdline.py crawl IcecatCrawler
Try to find the scrapy.cmdline package.
In my case the location was here: /Library/Python/2.7/site-packages/scrapy/cmdline.py
Create a run/debug configuration inside PyCharm with that script as the script. Fill the script parameters with the scrapy command and spider; in this case, crawl IcecatCrawler.
Put your breakpoints anywhere in your crawling code and it should work™.
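Alternatively, a tiny wrapper script in the project root can be debugged directly in PyCharm (a sketch; the spider name is the one from above):
# runner.py - set breakpoints and debug this file
from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'IcecatCrawler'])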
|
Python Pandas: Get index of rows which column matches certain value
|
Given a DataFrame with a column "BoolCol", we want to find the indexes of the DataFrame in which the values for "BoolCol" == True
I currently have the iterating way to do it, which works perfectly:
for i in range(100,3000):
if df.iloc[i]['BoolCol']== True:
print i,df.iloc[i]['BoolCol']
But this is not the correct pandas way to do it.
After some research, I am currently using this code:
df[df['BoolCol'] == True].index.tolist()
This one gives me a list of indexes, but they don't match when I check them by doing:
df.iloc[i]['BoolCol']
The result is actually False!!
Which would be the correct Pandas way to do this?
|
df.iloc[i] returns the ith row of df. i does not refer to the index value, i is a 0-based index.
In contrast, the attribute index is returning index values:
df[df['BoolCol'] == True].index.tolist()
or equivalently,
df[df['BoolCol']].index.tolist()
You can see the difference quite clearly by playing with a DataFrame with
an "unusual" index:
df = pd.DataFrame({'BoolCol': [True, False, False, True, True]}, index=[10,20,30,40,50])
In [53]: df
Out[53]:
BoolCol
10 True
20 False
30 False
40 True
50 True
[5 rows x 1 columns]
In [54]: df[df['BoolCol']].index.tolist()
Out[54]: [10, 40, 50]
If you want to use the index values,
In [56]: idx = df[df['BoolCol']].index.tolist()
In [57]: idx
Out[57]: [10, 40, 50]
then you can select the rows using loc instead of iloc:
In [58]: df.loc[idx]
Out[58]:
BoolCol
10 True
40 True
50 True
[3 rows x 1 columns]
Note that loc can also accept boolean arrays:
In [55]: df.loc[df['BoolCol']]
Out[55]:
BoolCol
10 True
40 True
50 True
[3 rows x 1 columns]
If you have a boolean array, mask and need ordinal index values, you can compute them using np.flatnonzero:
In [110]: np.flatnonzero(df['BoolCol'])
Out[112]: array([0, 3, 4])
Use df.iloc to select rows by ordinal index:
In [113]: df.iloc[np.flatnonzero(df['BoolCol'])]
Out[113]:
BoolCol
10 True
40 True
50 True
|
Comparing two large lists in python
|
I have one list which contains about 400 words, and another list of lists in which each inner list contains about 150,000 words. There are 20 such lists.
Now I want to see how many of these 400 words appear in each of these 150,000-word lists. I also want to know, for a word from these 400, how many times it appears in a 150k-word list, which of these words occur most often, how many times, and so on.
The only solution I can think of is a polynomial-time one. It is a very bad solution and will be hellishly slow:
for one_list in list_of_150kwords:
for key in 400_words:
for word in one_list:
if key == word:
# count this word
# do other stuff
This is a very ugly and bad solution, but I can't think of anything better. I tried the same with NumPy by converting these lists to NumPy arrays:
list_of_150kwords = numpy.array(list_of_150kwords)
...
But I still find it very slow. Any other solution? Or any library?
|
This sounds like a good opportunity for using a set:
set_of_150kwords = set(list_of_150kwords)
one_set = set(one_list)
len(one_set & set_of_150kwords) # set intersection is &
=> number of elements common to both sets
As per set theory, the intersection of two sets gives the elements that appear in both; then it's a simple matter of taking its length. For the second part (which of these words occur most often, how many times, etc.), create a collections.Counter from list_of_150kwords. That will tell you how many times each word appears in the list, and the intersection set will tell you which are the common words, solving both of your requirements.
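A minimal sketch, assuming words_400 holds the 400-word list:
from collections import Counter

word_counts = Counter(list_of_150kwords)          # one pass, counts every word
common = set(words_400) & set_of_150kwords        # which of the 400 appear at all
most_frequent = max(common, key=word_counts.get)  # and which of them occurs most often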
|
Using ipdb with emacs' gud without explicit breakpoints in code
|
I'm using python.el
If I choose 'debugger' from the menu, and enter 'python -m pdb myfile.py', gud starts, and in a split frame I see the (Pdb) prompt in one, and my python code in the other with a caret on the first line, indicating that it's ready to go. For example 'n' steps to the next line and the caret moves accordingly.
If instead I enter 'python -m ipdb myfile.py', the frame splits, and one split is labeled gud, but there's no ipdb console evident. In other words, this way of starting ipdb doesn't seem to work. Ipdb works just fine if I manually insert a breakpoint into my python code using ipdb.set_trace(), except that it does not use the gud interface. Is this intentional so that ipdb's stack trace will work nicely?
If so, that's fine, but is there a way to start ipdb from emacs without manually adding a set_trace() command?
|
The basic problem here is that gud is looking for a (Pdb) prompt and ipdb doesn't prompt this way. There are three ways to fix this: fix ipdb to give a (Pdb) prompt, fix gud not to look for (Pdb), or (my favorite) use something else on either the gud side or the ipdb side.
The problem with fixing up gud is that it is rather old and, to my mind, a bit creaky: it uses global variables and makes no use of Emacs Lisp data structures other than lists and cons cells. A total rewrite of gud is called realgud; it is currently in MELPA and at some point will be in ELPA as well. However, right now it doesn't support ipdb. There are instructions for how one would add ipdb support, though.
The second option is to use something else, so let me suggest the Python trepan debugger which is already integrated into realgud (but not gud since I consider that a dead end). Although the backtraces it gives are not exactly like ipdb's it does colorize them and the source code.
|
cv2.imshow command doesn't work properly in opencv-python
|
I'm using opencv 2.4.2, python 2.7
The following simple code creates a window with the correct name, but its content is just blank and doesn't show the image:
import cv2
img=cv2.imread('C:/Python27/03323_HD.jpg')
cv2.imshow('ImageWindow',img)
Does anyone know about this issue?
|
imshow() only works with waitKey()
import cv2
img=cv2.imread('C:/Python27/03323_HD.jpg')
cv2.imshow('ImageWindow',img)
cv2.waitKey()
(the whole message loop necessary for updating the window is hidden in there)
|
In Python, how do I iterate over one iterator and then another?
|
I'd like to iterate two different iterators, something like this:
file1 = open('file1', 'r')
file2 = open('file2', 'r')
for item in one_then_another(file1, file2):
print item
Which I'd expect to print all the lines of file1, then all the lines of file2.
I'd like something generic, as the iterators might not be files, this is just an example. I know I could do this with:
for item in list(file1) + list(file2):
but this reads both files into memory, which I'd prefer to avoid.
|
Use itertools.chain:
from itertools import chain
for line in chain(file1, file2):
pass
fileinput module also provides a similar feature:
import fileinput
for line in fileinput.input(['file1', 'file2']):
pass
|
Why does pylint object to single character variable names?
|
I'm still getting used to python conventions and using pylint to make my code more pythonic, but I'm puzzled by the fact that pylint doesn't like single character variable names. I have a few loops like this:
for x in x_values:
my_list.append(x)
and when I run pylint, I'm getting Invalid name "x" for type variable (should match [a-z_][a-z0-9_]{2,30} -- that suggests that a valid variable name must be between 3 and 31 characters long, but I've looked through the PEP8 naming conventions and I don't see anything explicit regarding single lower case letters, and I do see a lot of examples that use them.
Is there something I'm missing in PEP8 or is this a standard that is unique to pylint?
|
PyLint checks not only PEP8 recommendations. It also has its own recommendations, one of which is that a variable name should be descriptive and not too short.
You can use this to avoid such short names:
my_list.extend(x_values)
Or use _ as a placeholder for throwaway variables (pylint accepts it by default).
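If you do want to allow short names, the pattern itself is configurable; for example, in a .pylintrc (a sketch):
[BASIC]
variable-rgx=[a-z_][a-z0-9_]{0,30}$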
|
How to append new data onto a new line
|
My code looks like this:
def storescores():
hs = open("hst.txt","a")
hs.write(name)
hs.close()
so if I run it and enter "Ryan"
then run it again and enter "Bob"
the file hst.txt looks like
RyanBob
instead of
Ryan
Bob
How do I fix this?
|
If you want a newline, you have to write one explicitly. The usual way is like this:
hs.write(name + "\n")
This uses a backslash escape, \n, which Python converts to a newline character in string literals. It just concatenates your string, name, and that newline character into a bigger string, which gets written to the file.
It's also possible to embed the newline in a multi-line string literal instead; a triple-quoted literal that spans a line break contains a newline character:
hs.write(name + """
""")
Or, you may want to use string formatting instead of concatenation:
hs.write("{}\n".format(name))
All of this is explained in the Input and Output chapter in the tutorial.
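Putting it together, a minimal fixed version of the original function might look like this (note that I've made name a parameter here; the original read it from an enclosing scope):
def storescores(name):
    hs = open("hst.txt", "a")
    hs.write(name + "\n")  # one entry per line
    hs.close()

storescores("Ryan")
storescores("Bob")  # hst.txt now contains Ryan and Bob on separate lines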
|
Python: Find a substring in a string and returning the index of the substring
|
For example, I have a function def find_str(s, char) and a string "Happy Birthday". I essentially want to input "py" and have it return 3, but I keep getting 2 instead.
def find_str(s, char):
    index = 0
    if char in s:
        char = char[0]
        for ch in s:
            if ch in s:
                index += 1
            if ch == char:
                return index
    else:
        return -1
print(find_str("Happy birthday", "py"))
Not sure what's wrong!
|
There's a built-in method on string objects to do this in Python, you know:
s = "Happy Birthday"
s2 = "py"
print s.find(s2)
Python is a "batteries included language" there's code written to do most of what you want already (whatever you want).. unless this is homework :)
|
Difference between np.int, np.int_, int, and np.int_t in cython?
|
I'm a bit overwhelmed by the many int data types in cython:
np.int, np.int_, np.int_t, int
I guess int in pure python is equivalent to np.int_, so where does np.int come from? I cannot find it in the numpy documentation. Also, why does np.int_ exist, given that we already have int?
In cython, I guess int becomes a C type when used as cdef int or ndarray[int], and when used as int() it stays as the python caster?
Is np.int_ equivalent to long in C? so cdef long is the identical to cdef np.int_?
Under what circumstances should I use np.int_t instead of np.int? e.g. cdef np.int_t, ndarray[np.int_t] ...
Can someone briefly explain how the wrong use of those types would affect the performance of compiled cython code?
|
np.int_ is the default integer type (as defined in the NumPy docs); on a 64-bit system this is a C long. np.intc is the platform's C int, either int32 or int64. np.int is just an alias for the built-in int function:
>>> np.int(2.4)
2
>>> np.int is int # object id equality
True
The cython datatypes should reflect C datatypes, so cdef int a is a C int and so on.
As for np.int_t, that is the Cython compile-time equivalent of the NumPy np.int_ datatype, and np.int64_t is the Cython compile-time equivalent of np.int64.
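A quick, platform-dependent way to check what these names map to on your own machine (the int64/int32 results shown are what a typical 64-bit Linux system reports, not a guarantee):
import numpy as np

print(np.dtype(np.int_))   # e.g. int64
print(np.dtype(np.intc))   # e.g. int32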
|
How to make sure that my AJAX requests are originating from the same server in Python
|
I have already asked a question about IP Authentication here: TastyPie Authentication from the same server
However, I need something more! An IP address could be very easily spoofed.
Scenario: My API (TastyPie) and Client App (in javascript) are on the same server/site/domain. My users don't login. I want to consume my API in my javascript client side.
Question: How can I make sure (authentication) that my AJAX requests are originating from the same server?
I'm using Tatypie. I need to authentication that the requests from the client are being made on the same server/domain etc. I cannot use 'logged in sessions' as my users don't login.
I have looked at private keys and generating a signature but they can viewed in the javascript making that method insecure. If I do it in a way to request a signature form the server (hiding the private key in some python code) anyone can make the same http request to get_signature that my javascript makes, thus defeating the point.
I also tried to have the Django view put the signature in the view eliminating the need to make the get_signature call. This is safe, but means that I have to now refresh the page every time to get a new signature. From a users point of view only the first call to the API would work, after which they need to refresh, again pointless.
I cannot believe I'm the only person with this requirement. This is a common scenario I'm sure. Please help :) An example using custom authentication in Tastypie would be welcome too.
Thanks
|
Depending on your infrastructure @dragonx's answer might interest you most.
my 2c
You want to make sure that the API can only be used by someone who visits your website? But then, do bots and crawlers fall into the same category as such clients? If you really need this to be secure, it can be exploited quite easily.
I cannot believe I'm the only person with this requirement.
Maybe not, but as you can see you are prone to several attacks to your API and that can be a reason for someone not sharing your design and making security stricter with auth.
EDIT
Since we are talking about AJAX requests, what does the IP part have to do with this? The IP will always be the client's IP! So probably you want a public API...
I would go with the tokens/session/cookie approach.
I'd go with a generated token that lasts a little while, using the flow described below.
I'd also add a rate limiter per some time window, like GitHub does, e.g. 60 requests per hour per IP, or more for registered users.
To overcome the problem with the refreshing token I would just do this:
Client visits the site
-> server generates API TOKEN INIT
-> Client gets API TOKEN INIT which is valid only for starting 1 request.
Client makes AJAX Request to API
-> Client uses API TOKEN INIT
-> Server checks against API TOKEN INIT and limits
-> Server accepts request
-> Server passes back API TOKEN
-> Client consumes response data and stores API TOKEN for further usage (Will be stored in browser memory via JS)
Client starts communicating with the API for a limited amount of time or number of requests. Note that you also know the init token's date, so you can check it against the first visit to the page.
The 1st token is generated via the server when the client visits.
Then the client uses that token in order to obtain a real one, that lasts for some time or something else as of limitation.
This makes someone actually visit the webpage before they can access the API, and then only for a limited amount of time, number of requests, etc.
This way you don't need refreshing.
Of course the above scenario could be simplified with only one token and a time limit as mentioned above.
Of course the above scenario is still open to advanced crawlers etc., since you have no authentication.
Of course a clever attacker can grab tokens from the server and repeat the steps, but then you already had that problem from the start.
Some extra points
As the comments suggested, close off writes to the API. You don't want to be the victim of a DoS attack via writes if you have doubts about your implementation (if you do, use auth), or just for extra security.
The token scenario as described above can also become more complicated eg by constantly exchanging tokens
Just for reference, GAE Cloud Storage uses signed_urls for much the same purpose.
Hope it helps.
PS. Regarding IP spoofing and defense against spoofing attacks, Wikipedia says the following (note that reply packets won't be returned to the attacker):
Some upper layer protocols provide their own defense against IP
spoofing attacks. For example, Transmission Control Protocol (TCP)
uses sequence numbers negotiated with the remote machine to ensure
that arriving packets are part of an established connection. Since the
attacker normally can't see any reply packets, the sequence number
must be guessed in order to hijack the connection. The poor
implementation in many older operating systems and network devices,
however, means that TCP sequence numbers can be predicted.
|
Project Euler - How is this haskell code so fast?
|
I'm working on problem 401 in Project Euler. I coded up my solution in Python, but it's going to take a few days to run, so obviously I'll need to speed it up or use a different approach. I came across a solution in Haskell that looks almost identical to my Python solution but completes almost instantaneously.
Can someone explain how it is so fast? (I AM NOT ASKING FOR HELP OR SOLUTIONS TO PROBLEM 401)
divisors n = filter (\x -> n `mod` x == 0) [1..(n`div`2)] ++ [n]
sigma2 n = sum $ map (\x -> x * x) (divisors n)
sigma2big n = sum $ map (sigma2)[1..n]
let s2b = sigma2big 10^15
putStrLn ("SIGMA2(10^15) mod 10^9 is " ++ (show (mod s2b 10^9)))
From my understanding it is just using trial division to generate a list of divisors, squaring and summing them, and then summing the results from 1 to n.
EDIT: forgot my python code
from time import clock
from math import sqrt  # needed by get_divisors below

def timer(function):
    def wrapper(*args, **kwargs):
        start = clock()
        print(function(*args, **kwargs))
        runtime = clock() - start
        print("Runtime: %f seconds." % runtime)
    return wrapper

@timer
def find_answer():
    return big_sigma2(10**15) % 10**9

def get_divisors(n):
    divs = set()
    for i in range(1, int(sqrt(n)) + 1):
        if n % i == 0:
            divs.add(i)
            divs.add(n // i)
    return divs

def sigma2(n):
    return sum(map(lambda x: x**2, get_divisors(n)))

def big_sigma2(n):
    total = 0
    for i in range(1, n + 1):
        total += sigma2(i)
    return total

if __name__ == "__main__":
    find_answer()
|
Prelude> sigma2big 1000
401382971
(0.48 secs, 28491864 bytes)
Prelude> sigma2big 10^3
103161709
(0.02 secs, 1035252 bytes)
Prelude> (sigma2big 10)^3
103161709
Function precedence (shh...): in Haskell, function application binds more tightly than the ^ operator, so sigma2big 10^15 is parsed as (sigma2big 10)^15, as the GHCi session above demonstrates. The Haskell code never computes sigma2big of 10^15 at all, which is why it finishes almost instantaneously.
|
Pip doesn't install latest available version from pypi (argparse in this case)
|
The problem
I worked on some Python projects lately and had lots of problems with pip not installing the latest versions of some requirements. I am on OS X and used brew to install Python 2.7.6. In the project I'm working on, we simply pip install -r requirements.txt. In the current case, I needed to install argparse==1.2.1. This is the actual latest version shown on the pypi website.
Here's my output
Downloading/unpacking argparse==1.2.1 (from -r requirements.txt (line 4))
Could not find a version that satisfies the requirement argparse==1.2.1 (from -r requirements.txt (line 4)) (from versions: 0.1.0, 0.2.0, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.9.1, 1.0.1, 1.0, 1.1)
Some externally hosted files were ignored (use --allow-external to allow).
Cleaning up...
No distributions matching the version for argparse==1.2.1 (from -r requirements.txt (line 4))
I had similar problems with different kinds of requirements such as matplotlib which I installed manually as seen here.
As you can see, pip on my mac only has those argparse versions: 0.1.0, 0.2.0, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.9.1, 1.0.1, 1.0, 1.1
Attempts to fix
I tried reinstalling python with brew reinstall python, then also tried to reinstall all of my installed python packages with some xargs magic: pip freeze | xargs -I {} sudo pip install {} --upgrade --force-reinstall.
While trying to reinstall everything, I had trouble with most of the packages: error: invalid command 'egg_info'. I figured out I had an old setuptools, so I ran pip install --upgrade setuptools and could then reinstall everything, but I still had the same problem with argparse.
I asked a friend with a freshly installed osx to pip install argparse and he got 1.1. So I set up a precise32 vagrant box for a clean ubuntu install with python-dev + libevent-dev and had no trouble at all installing argparse==1.2.1.
Temp fix
To continue working on the project, I installed argparse 1.1 on osx and it seems to work fine atm for what I'm working on.
Questions
I'm not very good with pypi (yet), but is there any reason why I'm not getting the latest versions shown on pypi? It sounds like not all the libs on pypi are available for osx. Is there a way to know version availability for different OSes?
Edit: solution inside
argparse 1.1 seems to be the same as 1.2.1, as shown in this output:
vagrant@precise32:~$ python
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import argparse
>>> argparse.__version__
'1.1'
>>> quit()
vagrant@precise32:~$ pip freeze | grep argparse
argparse==1.2.1
I tried to use --allow-external on osx, but did not realize it needed the name of the package again.
sudo pip install argparse --allow-external argparse --upgrade
and voilà :)
argparse an externally hosted file and may be unreliable
Downloading/unpacking argparse from http://argparse.googlecode.com/files/argparse-1.2.1.tar.gz#md5=2fbef8cb61e506c706957ab6e135840c
Downloading argparse-1.2.1.tar.gz (69kB): 69kB downloaded
As pointed out by Peter, the latest versions of pip default to disallowing externally hosted files.
|
I think this line is the key:
Some externally hosted files were ignored (use --allow-external to allow).
When I install argparse here I get
You are installing an externally hosted file. Future versions of pip will default to disallowing externally hosted files.
Downloading argparse-1.2.1.tar.gz (69kB): 69kB downloaded
So you have a newer version of pip that is disallowing externally hosted files by default.
|
Floating point math in different programming languages
|
I know that floating point math can be ugly at best, but I am wondering if somebody can explain the following quirk. In most of the programming languages I tested, adding 0.4 to 0.2 gave a slight error, whereas 0.4 + 0.1 + 0.1 gave none.
What is the reason for the inequality of the two calculations, and what measures can one take in the respective programming languages to obtain correct results?
In python2/3
.4 + .2
0.6000000000000001
.4 + .1 + .1
0.6
The same happens in Julia 0.3
julia> .4 + .2
0.6000000000000001
julia> .4 + .1 + .1
0.6
and Scala:
scala> 0.4 + 0.2
res0: Double = 0.6000000000000001
scala> 0.4 + 0.1 + 0.1
res1: Double = 0.6
and Haskell:
Prelude> 0.4 + 0.2
0.6000000000000001
Prelude> 0.4 + 0.1 + 0.1
0.6
but R v3 gets it right:
> .4 + .2
[1] 0.6
> .4 + .1 + .1
[1] 0.6
|
All these languages are using the system-provided floating-point format, which represents values in binary rather than in decimal. Values like 0.2 and 0.4 can't be represented exactly in that format, so instead the closest representable value is stored, resulting in a small error. For example, the numeric literal 0.2 results in a floating-point number whose exact value is 0.200000000000000011102230246251565404236316680908203125. Similarly, any given arithmetic operation on floating-point numbers may result in a value that's not exactly representable, so the true mathematical result is replaced with the closest representable value. These are the fundamental reasons for the errors you're seeing.
However, this doesn't explain the differences between languages: in all of your examples, the exact same computations are being made and the exact same results are being arrived at. The difference then lies in the way that the various languages choose to display the results.
Strictly speaking, none of the answers you show is correct. Making the (fairly safe) assumption of IEEE 754 binary 64 arithmetic with a round-to-nearest rounding mode, the exact value of the first sum is:
0.600000000000000088817841970012523233890533447265625
while the exact value of the second sum is:
0.59999999999999997779553950749686919152736663818359375
However, neither of those outputs is particularly user-friendly, and clearly all of the languages you tested made the sensible decision to abbreviate the output when printing. However, they don't all adopt the same strategy for formatting the output, which is why you're seeing differences.
There are many possible strategies for formatting, but three particularly common ones are:
Compute and display 17 correctly-rounded significant digits, possibly stripping trailing zeros where they appear. The output of 17 digits guarantees that distinct binary64 floats will have distinct representations, so that a floating-point value can be unambiguously recovered from its representation; 17 is the smallest integer with this property. This is the strategy that Python 2.6 uses, for example.
Compute and display the shortest decimal string that rounds back to the given binary64 value under the usual round-ties-to-even rounding mode. This is rather more complicated to implement than strategy 1, but preserves the property that distinct floats have distinct representations, and tends to make for pleasanter output. This appears to be the strategy that all of the languages you tested (besides R) are using.
Compute and display 15 (or fewer) correctly-rounded significant digits. This has the effect of hiding the errors involved in the decimal-to-binary conversions, giving the illusion of exact decimal arithmetic. It has the drawback that distinct floats can have the same representation. This appears to be what R is doing. (Thanks to @hadley for pointing out in the comments that there's an R setting which controls the number of digits used for display; the default is to use 7 significant digits.)
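All three strategies can be imitated from within Python itself; a small sketch (Decimal(float) converts exactly, so it reveals the stored binary64 value):
from decimal import Decimal

x = 0.4 + 0.2
print(Decimal(x))         # 0.600000000000000088817841970012523233890533447265625
print(format(x, '.17g'))  # strategy 1: 0.60000000000000009
print(repr(x))            # strategy 2 (Python >= 3.1): 0.6000000000000001
print(format(x, '.15g'))  # strategy 3: 0.6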
|
Saving a figure after invoking pyplot.show() results in an empty file
|
The following example code generates a simple plot, then saves it to 'fig1.pdf', then displays it, then saves it again to 'fig2.pdf'. The first image looks as expected, but the second one is blank (contains a white square). What's actually going on here? The line plt.show() apparently messes something up, but I can't figure out what/how!
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 100)
y = x**2
plt.plot(x,y)
plt.savefig('fig1.pdf')
plt.show()
plt.savefig('fig2.pdf')
|
If you want to save the figure after displaying it, you'll need to hold on to the figure instance. The reason that plt.savefig doesn't work after calling show is that the current figure has been reset.
pyplot keeps track of which figures, axes, etc are "current" (i.e. have not yet been displayed with show) behind-the-scenes. gcf and gca get the current figure and current axes instances, respectively. plt.savefig (and essentially any other pyplot method) just does plt.gcf().savefig(...). In other words, get the current figure instance and call its savefig method. Similarly plt.plot basically does plt.gca().plot(...).
After show is called, the list of "current" figures and axes is empty.
In general, you're better off directly using the figure and axes instances to plot/save/show/etc, rather than using plt.plot, etc, to implicitly get the current figure/axes and plot on it. There's nothing wrong with using pyplot for everything (especially interactively), but it makes it easier to shoot yourself in the foot.
Use pyplot for plt.show() and to generate a figure and an axes object(s), but then use the figure or axes methods directly. (e.g. ax.plot(x, y) instead of plt.plot(x, y), etc) The main advantage of this is that it's explicit. You know what objects you're plotting on, and don't have to reason about what the pyplot state-machine does (though it's not that hard to understand the state-machine interface, either).
As an example of the "recommended" way of doing things, do something like:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 100)
y = x**2
fig, ax = plt.subplots()
ax.plot(x, y)
fig.savefig('fig1.pdf')
plt.show()
fig.savefig('fig2.pdf')
If you'd rather use the pyplot interface for everything, then just grab the figure instance before you call show. For example:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 100)
y = x**2
plt.plot(x, y)
fig = plt.gcf()
fig.savefig('fig1.pdf')
plt.show()
fig.savefig('fig2.pdf')
|
Why does Python handle '1 is 1**2' differently from '1000 is 10**3'?
|
Inspired by this question about caching small integers and strings I discovered the following behavior which I don't understand.
>>> 1000 is 10**3
False
I thought I understood this behavior: 1000 is too big to be cached. 1000 and 10**3 point to 2 different objects. But I had it wrong:
>>> 1000 is 1000
True
So, maybe Python treats calculations differently from 'normal' integers. But that assumption is also not correct:
>>> 1 is 1**2
True
How can this behavior be explained?
|
There are two separate things going on here: Python stores int literals (and other literals) as constants with compiled bytecode and small integer objects are cached as singletons.
When you run 1000 is 1000 only one such constant is stored and reused. You are really looking at the same object:
>>> import dis
>>> compile('1000 is 1000', '<stdin>', 'eval').co_consts
(1000,)
>>> dis.dis(compile('1000 is 1000', '<stdin>', 'eval'))
1 0 LOAD_CONST 0 (1000)
3 LOAD_CONST 0 (1000)
6 COMPARE_OP 8 (is)
9 RETURN_VALUE
Here LOAD_CONST refers to the constant at index 0; you can see the stored constants in the .co_consts attribute of the bytecode object.
Compare this to the 1000 is 10 ** 3 case:
>>> compile('1000 is 10**3', '<stdin>', 'eval').co_consts
(1000, 10, 3, 1000)
>>> dis.dis(compile('1000 is 10**3', '<stdin>', 'eval'))
1 0 LOAD_CONST 0 (1000)
3 LOAD_CONST 3 (1000)
6 COMPARE_OP 8 (is)
9 RETURN_VALUE
There is a peephole optimization that pre-computes expressions on constants at compile time, and this optimization has replaced 10 ** 3 with 1000, but the optimization doesn't re-use pre-existing constants. As a result, the LOAD_CONST opcodes are loading two different integer objects, at index 0 and 3, and these are two different int objects.
Then there are optimisations in place where small integers are interned; only one copy of the 1 object is ever created during the lifetime of a Python program; this applies to all integers between -5 and 256.
Thus, for the 1 is 1**2 case, the Python internals use a singleton int() object from the internal cache. This is a CPython implementation detail.
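A quick interactive illustration of the small-integer cache (this is a CPython implementation detail, so results may vary; in the interactive interpreter each line is compiled separately, so no constant sharing occurs):
>>> a = 256
>>> b = 256
>>> a is b        # 256 is within the cached range
True
>>> a = 257
>>> b = 257
>>> a is b        # 257 is outside it
False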
The moral of this story is that you should never use is when you really wanted to compare by value. Use == for integers, always.
|
Why does ~True result in -2?
|
In Python console:
~True
Gives me:
-2
Why? Can someone explain this particular case to me in binary?
|
What is int(True)? It is 1.
1 is:
00000001
and ~1 is:
11111110
Which is -2 in Two's complement1
1 Flip all the bits, add 1 to the resulting number and interpret the result as a binary representation of the magnitude and add a negative sign (since the number begins with 1):
11111110 → 00000001 → 00000010
     flip all bits      add 1
Which is 2, but the sign is negative since the MSB is 1.
Worth mentioning:
Think about bool and you'll find that it's numeric in nature: it has two values, True and False, which are just "customized" versions of the integers 1 and 0 that print themselves differently. They are subclasses of the integer type int.
So they behave exactly like 1 and 0, except that bool redefines __str__ and __repr__ to display them differently.
>>> type(True)
<class 'bool'>
>>> isinstance(True, int)
True
>>> True == 1
True
>>> True is 1 # they're still different objects
False
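More generally, for any Python int, ~x equals -x - 1, which is easy to confirm interactively:
>>> ~True   # ~1
-2
>>> ~False  # ~0
-1
>>> ~5
-6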
|
numpy concatenate two arrays vertically
|
I tried the following:
>>> a = np.array([1,2,3])
>>> b = np.array([4,5,6])
>>> np.concatenate((a,b), axis=0)
array([1, 2, 3, 4, 5, 6])
>>> np.concatenate((a,b), axis=1)
array([1, 2, 3, 4, 5, 6])
However, I'd expect at least that one result looks like this
array([[1, 2, 3],
[4, 5, 6]])
Why is it not concatenated vertically?
|
Because both a and b have only one axis (their shape is (3,)), and the axis parameter specifically refers to the axis of the elements to concatenate.
This example should clarify what concatenate is doing with axis. Take two arrays with two axes each, with shape (2,3):
a = np.array([[1,5,9],[2,6,10]])
b = np.array([[3,7,11],[4,8,12]])
concatenates along the 1st axis (rows of the 1st, then rows of the 2nd):
print np.concatenate((a,b),axis=0)
array([[ 1, 5, 9],
[ 2, 6, 10],
[ 3, 7, 11],
[ 4, 8, 12]])
concatenates along the 2nd axis (columns of the 1st, then columns of the 2nd):
print np.concatenate((a,b),axis=1)
array([[ 1, 5, 9, 3, 7, 11],
[ 2, 6, 10, 4, 8, 12]])
to obtain the output you presented, you can use
a = np.array([1,2,3])
b = np.array([4,5,6])
np.vstack((a,b))
You can still do it with concatenate, but it takes a little more work, since you must first reshape the 1-D arrays:
a = a.reshape(1,3)
b = b.reshape(1,3)
print np.concatenate((a,b))
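An equivalent trick, for what it's worth, is to insert the missing length-1 axis by indexing with None (a sketch in the same Python 2 style as above):
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print np.vstack((a, b))
# [[1 2 3]
#  [4 5 6]]
print np.concatenate((a[None, :], b[None, :]), axis=0)  # same result: None adds a new axis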
|
What is the good python3 equivalent for auto tuple unpacking in lambda?
|
Consider the following python2 code
In [5]: points = [ (1,2), (2,3)]
In [6]: min(points, key=lambda (x, y): (x*x + y*y))
Out[6]: (1, 2)
This is not supported in python3 and I have to do the following:
>>> min(points, key=lambda p: p[0]*p[0] + p[1]*p[1])
(1, 2)
This is very ugly. If the lambda was a function, I could do
def some_name_to_think_of(p):
x, y = p
return x*x + y*y
Removing this feature in python3 forces the code to either do the ugly way(with magic indexes) or create unnecessary functions(The most bothering part is to think of good names for these unnecessary functions)
I think the feature should be added back at least to lambdas alone. Is there a good alternative?
Update: I am using the following helper extending the idea in the answer
def star(f):
return lambda args: f(*args)
min(points, key=star(lambda x, y: (x*x + y*y)))
|
According to http://www.python.org/dev/peps/pep-3113/, tuple parameter unpacking is gone, and 2to3 will translate it like so:
As tuple parameters are used by lambdas because of the single
expression limitation, they must also be supported. This is done by
having the expected sequence argument bound to a single parameter and
then indexing on that parameter:
lambda (x, y): x + y
will be translated into:
lambda x_y: x_y[0] + x_y[1]
Which is quite similar to your implementation.
|
isinstance(foo,bar) vs type(foo) is bar
|
A question of semantics, really.
Up until recently, if I had to do any typechecking on a structure, I would use type(obj) is list et al. However, since joining SO I've noticed that everyone (and I mean EVERYONE) uses isinstance(obj, list) instead. They seem to be synonymous, and timeit reveals almost IDENTICAL speed between them.
def a(): return type(list()) is list
def b(): return isinstance(list(),list)
from timeit import timeit
timeit(a)
# 0.5239454597495582
timeit(b)
# 0.5021292075273176
Indeed even dis agrees they're synonymous, with the exception of type is's COMPARE_OP
from dis import dis
dis(a)
# 2 0 LOAD_GLOBAL 0 (type)
# 3 LOAD_GLOBAL 1 (list)
# 6 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
# 9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
# 12 LOAD_GLOBAL 1 (list)
# 15 COMPARE_OP 8 (is)
# 18 RETURN_VALUE
dis(b)
# 2 0 LOAD_GLOBAL 0 (isinstance)
# 3 LOAD_GLOBAL 1 (list)
# 6 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
# 9 LOAD_GLOBAL 1 (list)
# 12 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
# 15 RETURN_VALUE
I frankly find it more readable to say if type(foo) is list: than if isinstance(foo,list):, the first is basically just pseudo-code and the second calls some function (which I have to look up every time to be isinstance or instanceof) with some arguments. It doesn't look like a type cast, and there's no explicit way of knowing whether isinstance(a,b) is checking if b is an instance of a or vice-versa.
I understand from this question that we use isinstance because it's nicer about inheritance. type(ClassDerivedFromList) is list will fail while isinstance(ClassDerivedFromList,list) will succeed. But if I'm checking what should ALWAYS BE A BASE OBJECT, what do I really lose from doing type is?
|
if I'm checking what should ALWAYS BE A BASE OBJECT, what do I really lose from doing type is?
Well, it's nice that you give a fully documented answer in your question, because it means the answer is: you lose nothing! The only time isinstance() is necessary is when checking the inheritance of a given class against another, as you said and referenced yourself. type() should only be used to check whether an instance is exactly of a given base type.
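One concrete corner case worth knowing, since bool is a subclass of int:
>>> isinstance(True, int)  # passes: bool inherits from int
True
>>> type(True) is int      # fails: the exact type is bool
False
So type(x) is int rejects booleans, while isinstance(x, int) accepts them; pick whichever matches your intent.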
|
python pandas not reading first column from csv file
|
I have a simple 2 column csv file called st1.csv:
GRID St1
1457 614
1458 657
1459 679
1460 732
1461 754
1462 811
1463 748
However, when I try to read the csv file, the first column is not loaded:
a = pandas.DataFrame.from_csv('st1.csv')
a.columns
outputs:
Index([u'ST1'], dtype=object)
Why is the first column not being read?
|
Judging by your data, it looks like the delimiter you're using is a space.
Try the following:
a = pandas.DataFrame.from_csv('st1.csv', sep=' ')
The other issue is that it's assuming your first column is an index, which we can also disable:
a = pandas.DataFrame.from_csv('st1.csv', index_col=False)
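For what it's worth, a sketch using read_csv instead (assuming the columns really are separated by runs of whitespace):
import pandas as pd

a = pd.read_csv('st1.csv', sep=r'\s+')  # treat any run of whitespace as the delimiter
print(a.columns)  # both GRID and St1 should now be present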
|
How do I compare two strings in python?
|
I have two strings like
string1="abc def ghi"
and
string2="def ghi abc"
How can I determine that these two strings are the same without breaking up the words?
|
It seems the question is not about string equality but about set equality. You can compare the strings this way only by splitting them and converting them to sets:
s1 = 'abc def ghi'
s2 = 'def ghi abc'
set1 = set(s1.split(' '))
set2 = set(s2.split(' '))
print set1 == set2
Result will be
True
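Note that converting to sets discards duplicates, so 'a a b' and 'a b' would compare equal. If repeated words matter, a multiset comparison is a small variation on the same idea:
from collections import Counter
print Counter(s1.split()) == Counter(s2.split())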
|
Why doesn't set_xticks set the labels of the ticks?
|
import pylab as plt
x = range(1, 7)
y = (220, 300, 300, 290, 320, 315)
def test(axes):
    axes.bar(x,y)
    axes.set_xticks(x, [i+100 for i in x])
a = plt.subplot(1,2,1)
test(a)
b = plt.subplot(1,2,2)
test(b)
I am expecting the x labels to be 101, 102, ...
However, if I switch to plt.xticks(x, [i+100 for i in x]) and rewrite the function explicitly, it works.
|
.set_xticks() on the axes will set the locations and set_xticklabels() will set the displayed text.
def test(axes):
    axes.bar(x,y)
    axes.set_xticks(x)
    axes.set_xticklabels([i+100 for i in x])
|
Setting Different error bar colors in bar plot in matplotlib
|
Following Settting Different Bar color in matplotlib Python
I would like to change the error bar colors. I have figured out a way after a number of attempts:
a = plt.gca()
b = a.bar(range(4), [2]*4, yerr=range(4))
c = a.get_children()[8]
c.set_color(['r','r','b','r'])
Is there any better way? Certainly a.get_children()[8] is not a general solution at all.
|
If you just want to set them to a single color, use the error_kw kwarg (expected to be a dict of keyword arguments that's passed on to ax.errorbar).
Also, just so you know, you can pass a sequence of facecolors directly to bar, though this won't change the errorbar color.
As a quick example:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.bar(range(4), [2] * 4, yerr=range(1, 5), alpha=0.5,
color=['red', 'green', 'blue', 'cyan', 'magenta'],
error_kw=dict(ecolor='gray', lw=2, capsize=5, capthick=2))
ax.margins(0.05)
plt.show()
However, if you want the errorbars to be different colors, you'll either need to plot them individually or modify them afterwards.
If you use the latter option, the capline colors actually can't be changed individually (note that they're not changed in @falsetru's example either). For example:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
colors = ['red', 'green', 'blue', 'cyan', 'magenta']
container = ax.bar(range(4), [2] * 4, yerr=range(1, 5), alpha=0.5, color=colors,
error_kw=dict(lw=2, capsize=5, capthick=2))
ax.margins(0.05)
connector, caplines, (vertical_lines,) = container.errorbar.lines
vertical_lines.set_color(colors)
plt.show()
The caplines object above is a tuple of two Line2Ds: one line for all of the top caps, and one line for all of the bottom caps. There's no way to change the colors of the caps individually (though it's easy to set them all to the same color) without removing that artist and creating a LineCollection in its place.
Therefore, you're better off just plotting the errorbars individually in this case.
E.g.
import matplotlib.pyplot as plt
x, height, error = range(4), [2] * 4, range(1,5)
colors = ['red', 'green', 'blue', 'cyan', 'magenta']
fig, ax = plt.subplots()
ax.bar(x, height, alpha=0.5, color=colors)
ax.margins(0.05)
for pos, y, err, color in zip(x, height, error, colors):
    ax.errorbar(pos + 0.4, y, err, lw=2, capsize=5, capthick=2, color=color)
plt.show()
|
numpy negative indexing a[:-0]
|
I want to use array slicing to trim my array
i.e.
a_trimmed = a[trim_left:-trim_right]
this is great, except if trim_right is 0, I get a[trim_left:0], which is an empty array.
I suppose I could change it to
a[trim_left:a.shape[0]-trim_right]
but it's uglier. What's the cleanest way to express this?
|
None is a valid slice endpoint:
a[trim_left:-trim_right or None]
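A small self-contained sketch of the idiom (the trim helper name is mine, just for illustration):
def trim(a, trim_left, trim_right):
    return a[trim_left:-trim_right or None]  # 0 or None -> None, which slices to the end

a = [0, 1, 2, 3, 4, 5]
print(trim(a, 1, 0))  # [1, 2, 3, 4, 5]
print(trim(a, 1, 2))  # [1, 2, 3]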
|
python setuptools install_requires is ignored when overriding cmdclass
|
I have a setup.py that looks like this:
from setuptools import setup
from subprocess import call
from setuptools.command.install import install
class MyInstall(install):
    def run(self):
        call(["pip install -r requirements.txt --no-clean"], shell=True)
        install.run(self)
setup(
author='Attila Zseder',
version='0.1',
name='entity_extractor',
packages=['...'],
install_requires=['DAWG', 'mrjob', 'cchardet'],
package_dir={'': 'modules'},
scripts=['...'],
cmdclass={'install': MyInstall},
)
I need MyInstall because I want to install some libraries from github and I didn't want to use dependency_links option, because it's discouraged (for example here), so I can do this with requirements.txt.
When I install this package with pip, everything is working fine, but for some reasons I have to solve this in a way that it also works with pure python setup.py install. And it doesn't.
When overriding cmdclass in setup() with my own class, install_requires seems to be ignored. As soon as I comment out that line, those packages are being installed.
I know that install_requires is not supported in, for example, distutils (if I remember correctly), but it is in setuptools. But then cmdclass shouldn't have any effect on install_requires.
I googled this problem for hours, found a lot of kind of related answers on stackoverflow, but not for this particular problem.
With putting every needed package to requirements.txt, everything's working fine, but I would like to understand why this is happening. Thanks!
|
The same problem just happened to me. It somehow seems like something triggers setuptools to do an 'old-style install' with distutils, which indeed does not support install_requires.
You call install.run(self) which calls run(self) in setuptools/setuptools/command/install.py, line 51-74
https://bitbucket.org/pypa/setuptools/src/8e8c50925f18eafb7e66fe020aa91a85b9a4b122/setuptools/command/install.py?at=default
def run(self):
    # Explicit request for old-style install? Just do it
    if self.old_and_unmanageable or self.single_version_externally_managed:
        return _install.run(self)

    # Attempt to detect whether we were called from setup() or by another
    # command. If we were called by setup(), our caller will be the
    # 'run_command' method in 'distutils.dist', and *its* caller will be
    # the 'run_commands' method. If we were called any other way, our
    # immediate caller *might* be 'run_command', but it won't have been
    # called by 'run_commands'. This is slightly kludgy, but seems to
    # work.
    #
    caller = sys._getframe(2)
    caller_module = caller.f_globals.get('__name__','')
    caller_name = caller.f_code.co_name

    if caller_module != 'distutils.dist' or caller_name!='run_commands':
        # We weren't called from the command line or setup(), so we
        # should run in backward-compatibility mode to support bdist_*
        # commands.
        _install.run(self)
    else:
        self.do_egg_install()
I'm not sure whether this behaviour is intended, but replacing
install.run(self)
with
self.do_egg_install()
should solve your problem. At least it works for me, but I would also appreciate a more detailed answer. Thanks!
|
Rotating axes label text in 3D matplotlib
|
How do I rotate the z-label so the text reads (bottom => top) rather than (top => bottom)?
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_zlabel('label text flipped', rotation=90)
ax.azim = 225
plt.show()
I want this to hold no matter what my ax.azim setting is. This seems to be an old feature request on github, but there hasn't been any work on it. Is there a workaround?
|
As a workaround, you could set the direction of the z-label manually by:
ax.zaxis.set_rotate_label(False) # disable automatic rotation
ax.set_zlabel('label text', rotation=90)
Please note that the direction of your z-label also depends on your viewpoint, e.g:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fg = plt.figure(1); fg.clf()
axx = [fg.add_subplot(4,1,1+i, projection='3d') for i in range(4)]
for ax,azel in zip(axx, [(115,10), (115,-10), (-115,10), (-115,-10)]):
    ax.set_title(u"Azim, elev = {}°, {}°".format(*azel))
    ax.set_zlabel('label text')
    ax.azim, ax.elev = azel
fg.canvas.draw()
plt.show()
gives a figure with four subplots whose z-label direction depends on the viewpoint (image not reproduced here).
Update: It is also possible, to adjust the z-label direction of a plot, which is already drawn (but not beforehand). This is the adjusted version to modify the labels:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fg = plt.figure(1); fg.clf()
axx = [fg.add_subplot(4,1,1+i, projection='3d') for i in range(4)]
for ax,azel in zip(axx, [(115,10), (115,-10), (-115,10), (-115,-10)]):
    ax.set_title(u"Azim, elev = {}°, {}°".format(*azel))
    ax.set_zlabel('label text')
    ax.azim, ax.elev = azel
fg.canvas.draw()  # the angles of the text are calculated here

# Read drawn z-label rotations and switch them if needed
for ax in axx:
    ax.zaxis.set_rotate_label(False)
    a = ax.zaxis.label.get_rotation()
    if a < 180:
        a += 180
    ax.zaxis.label.set_rotation(a)
    a = ax.zaxis.label.get_rotation()  # put the actual angle in the z-label
    ax.set_zlabel(u'z-rot = {:.1f}°'.format(a))
fg.canvas.draw()
plt.show()
|
Matplotlib log scale tick label number formatting
|
With matplotlib, when a log scale is specified for an axis, the default method of labeling that axis is with numbers that are 10 to a power, e.g. 10^6. Is there an easy way to change all of these labels to their full numerical representation, e.g. 1, 10, 100, etc.?
Note that I do not know what the range of powers will be and want to support an arbitrary range (negatives included).
|
Sure, just change the formatter.
For example, if we have this plot:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.axis([1, 10000, 1, 100000])
ax.loglog()
plt.show()
You could set the tick labels manually, but then the tick locations and labels would be fixed when you zoom/pan/etc. Therefore, it's best to change the formatter:
from matplotlib.ticker import ScalarFormatter
for axis in [ax.xaxis, ax.yaxis]:
    axis.set_major_formatter(ScalarFormatter())
|
Convert Django Model object to dict with all of the fields intact
|
How does one convert a django Model object to a dict with all of its fields? All ideally includes foreign keys and fields with editable=False.
Let me elaborate. Let's say I have a django model like the following:
from django.db import models
class OtherModel(models.Model): pass
class SomeModel(models.Model):
    value = models.IntegerField()
    value2 = models.IntegerField(editable=False)
    created = models.DateTimeField(auto_now_add=True)
    reference1 = models.ForeignKey(OtherModel, related_name="ref1")
    reference2 = models.ManyToManyField(OtherModel, related_name="ref2")
In the terminal, I have done the following:
other_model = OtherModel()
other_model.save()
instance = SomeModel()
instance.value = 1
instance.value2 = 2
instance.reference1 = other_model
instance.save()
instance.reference2.add(other_model)
instance.save()
I want to convert this to the following dictionary:
{'created': datetime.datetime(2015, 3, 16, 21, 34, 14, 926738, tzinfo=<UTC>),
u'id': 1,
'reference1': 1,
'reference2': [1],
'value': 1,
'value2': 2}
Questions with unsatisfactory answers:
Django: Converting an entire set of a Model's objects into a single dictionary
How can I turn Django Model objects into a dictionary and still have their foreign keys?
|
There are many ways to convert instance to a dictionary, with varying degrees of corner case handling and closeness to the desired result.
1. instance.__dict__
instance.__dict__
which returns
{'_reference1_cache': <OtherModel: OtherModel object>,
'_state': <django.db.models.base.ModelState at 0x1f63310>,
'created': datetime.datetime(2014, 2, 21, 4, 38, 51, 844795, tzinfo=<UTC>),
'id': 1L,
'reference1_id': 1L,
'value': 1,
'value2': 2}
This is by far the simplest, but is missing reference2, reference1 is misnamed, and it has two extra things in it.
2. model_to_dict
from django.forms.models import model_to_dict
model_to_dict(instance)
which returns
{u'id': 1L, 'reference1': 1L, 'reference2': [1L], 'value': 1}
This is the only one with reference2, but is missing the uneditable fields.
3. model_to_dict with fields
from django.forms.models import model_to_dict
model_to_dict(instance, fields=[field.name for field in instance._meta.fields])
which returns
{u'id': 1L, 'reference1': 1L, 'value': 1}
This is strictly worse than the standard model_to_dict invocation.
4. query_set.values()
SomeModel.objects.filter(id=instance.id).values()[0]
which returns
{'created': datetime.datetime(2014, 2, 21, 4, 38, 51, tzinfo=<UTC>),
u'id': 1L,
'reference1_id': 1L,
'value': 1L,
'value2': 2L}
This is the same output as instance.__dict__ but without the extra fields.
5. Custom Function
The code for django's model_to_dict had most of the answer. It explicitly removed non-editable fields, so removing that check results in the following code which behaves as desired:
from django.db.models.fields.related import ManyToManyField
def to_dict(instance):
    opts = instance._meta
    data = {}
    for f in opts.concrete_fields + opts.many_to_many:
        if isinstance(f, ManyToManyField):
            if instance.pk is None:
                data[f.name] = []
            else:
                data[f.name] = list(f.value_from_object(instance).values_list('pk', flat=True))
        else:
            data[f.name] = f.value_from_object(instance)
    return data
While this is the most complicated option, calling to_dict(instance) gives us exactly the desired result:
{'created': datetime.datetime(2015, 3, 16, 21, 34, 14, 926738, tzinfo=<UTC>),
u'id': 1,
'reference1': 1,
'reference2': [1],
'value': 1,
'value2': 2}
Bonus Round
If you want a django model that has a better python command-line display, have your models child class the following:
from django.db import models
from django.db.models.fields.related import ManyToManyField
class PrintableModel(models.Model):
    def __repr__(self):
        return str(self.to_dict())

    def to_dict(self):
        opts = self._meta
        data = {}
        for f in opts.concrete_fields + opts.many_to_many:
            if isinstance(f, ManyToManyField):
                if self.pk is None:
                    data[f.name] = []
                else:
                    data[f.name] = list(f.value_from_object(self).values_list('pk', flat=True))
            else:
                data[f.name] = f.value_from_object(self)
        return data

    class Meta:
        abstract = True
So, for example, if we define our models as such:
class OtherModel(PrintableModel): pass
class SomeModel(PrintableModel):
    value = models.IntegerField()
    value2 = models.IntegerField(editable=False)
    created = models.DateTimeField(auto_now_add=True)
    reference1 = models.ForeignKey(OtherModel, related_name="ref1")
    reference2 = models.ManyToManyField(OtherModel, related_name="ref2")
Calling SomeModel.objects.first() now gives output like this:
{'created': datetime.datetime(2015, 3, 16, 21, 34, 14, 926738, tzinfo=<UTC>),
'value': 1, 'value2': 2, 'reference1': 1, u'id': 1, 'reference2': [1]}
|
Login to Facebook using python requests
|
I'm trying to find a way to automatically login to Facebook without browser using Python. I experimented with "requests" lib. Tried several ways:
URL = 'http://m.facebook.com'
requests.get(URL, auth = ('email@domain.com', 'mypassword'))
...
form_data = {'email': 'email@domain.com',
'pass' : 'mypassword'
}
requests.post(URL, data = form_data)
...
requests.post(URL + '?email=email@domain.com&pass=mypassword')
The last method fills the "email" box on the page, but the "pass" box remains empty...
Could someone help me with this please? Is it possible to emulate FB login using requests?
Thanks!
|
You need to send a complete form. The easiest way to find out what Facebook expects is to use something like Google Chrome's developer tools to monitor your web requests.
To make your life easier I've monitored my own login on Facebook, and reproduced it below (with private information redacted, obviously) with the unimportant information stripped:
Request URL:https://m.facebook.com/login.php?refsrc=https%3A%2F%2Fm.facebook.com%2F&refid=8
Request Method:POST
Form Data:
lsd:AVqAE5Wf
charset_test:€,´,€,´,水,Д,Є
version:1
ajax:0
width:0
pxr:0
gps:0
m_ts:1392974963
li:cxwHUxatQiaLv1nZEYPp0aTB
email:...
pass:...
login:Log In
As you can see, the form contains a lot of fields. All of these need to be provided to allow you to log in. Email and password will be provided by your code. The rest of the fields actually have their values set by the HTML that Facebook serves you. This means, to emulate a browser login you need to perform the following steps:
Do a GET to the login page (https://m.facebook.com/)
Use a HTML parsing library (e.g. BeautifulSoup) to parse the HTML and find the default values of the form fields.
The default values are all in <input> HTML elements below the #login_form element. You'll want to find them by name (e.g. charset_test) and then pull out their value attribute.
Working out how to do this is outside the scope of this answer, so I'm not going to go into it.
Combine the default values of the form fields with your email and password, like so:
data = {
'lsd': lsd,
'charset_test': csettest,
'version': version,
'ajax': ajax,
'width': width,
'pxr': pxr,
'gps': gps,
'm_ts': mts,
'li': li,
}
data['email'] = email
data['pass'] = password  # 'pass' is a reserved word in Python, so use a different variable name
data['login'] = 'Log In'
Send your login using a Requests Session:
s = requests.Session()
r = s.post(url, data=data)
r.raise_for_status()
Send all your future HTTP traffic through that Session.
As you can see, this is a non-trivial way of doing things. That's because it's not expected that programs will use the website to log in: instead, you're expected to use their SDK or their web API instead.
|
Blocking and Non Blocking subprocess calls
|
I'm completely confused between subprocess.call() , subprocess.Popen(), subprocess.check_call().
Which is blocking and which is not ?
What I mean to say is: if I use subprocess.Popen(), does the parent process wait for the child process to return/exit before it carries on with its execution?
How does shell=True affect these calls?
|
Popen is nonblocking. call and check_call are blocking.
You can make the Popen instance block by calling its wait or communicate method.
If you look in the source code, you'll see call calls Popen(...).wait(), which is why it is blocking.
check_call calls call, which is why it blocks as well.
Strictly speaking, shell=True is orthogonal to the issue of blocking. However, shell=True causes Python to exec a shell and then run the command in the shell. If you use a blocking call, the call will return when the shell finishes. Since the shell may spawn a subprocess to run the command, the shell may finish before the spawned subprocess. For example,
import subprocess
import time
proc = subprocess.Popen('ls -lRa /', shell=True)
time.sleep(3)
proc.terminate()
proc.wait()
Here two processes are spawned: Popen spawns one subprocess running the shell. The shell in turn spawns a subprocess running ls. proc.terminate() kills the shell, but the subprocess running ls remains. (That is manifested by copious output, even after the python script has ended. Be prepared to kill the ls with pkill ls.)
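A minimal sketch of the blocking/non-blocking distinction (assumes a POSIX sleep command is available):
import subprocess

proc = subprocess.Popen(['sleep', '2'])  # returns immediately; the child runs in the background
print(proc.poll())       # None while the child is still running
proc.wait()              # blocks until the child exits
print(proc.returncode)   # 0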
|
PEP8: continuation line over-indented for visual indent
|
I have this line of code which goes over the maximum line length, and when testing for pep8 errors I get:
line too long. So to try to fix this I used a backslash ('\') to continue the line, but then I get continuation line over-indented for visual indent. What can I do to fix this?
Things I've tried:
if first_index < 0 or second_index > \
self._number_of_plates - 1:
raise ValueError
continuation line over-indented for visual indent
if first_index < 0 \
or second_index > \
self._number_of_plates - 1:
raise ValueError
continuation line over-indented for visual indent
if first_index < 0 or \
second_index > self._number_of_plates - 1:
raise ValueError
continuation line over-indented for visual indent
if first_index \
< 0 or second_index \
> self._number_of_plates - 1:
raise ValueError
continuation line over-indented for visual indent
|
The line-extending backslash is fragile: any trailing whitespace after the backslash will break your code. Wrapping the condition in parentheses instead is a popular fix and is PEP8-compliant:
if (first_index < 0 or
        second_index > self._number_of_plates - 1):
|
Why does python's built in binary search function run so much faster?
|
(Already answered by sharth's comment.)
I've written a binary search algorithm in Python that more or less follows the same structure as the bisect_left function found in the bisect module. In fact it has a couple fewer conditionals, as I know that the high point will be the length of the list and the low will be 0. Yet for some reason the built-in function runs 5 times as fast as mine.
My code is as follows:
def bisection_search(word, t):
    high = len(t)
    low = 0
    while low < high:
        half = (high+low)/2
        if t[half] < word:
            low = half + 1
        else:
            high = half
    return low
The source code for the built in function is:
def bisect_left(a, x, lo=0, hi=None):
    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo+hi)//2
        if a[mid] < x: lo = mid+1
        else: hi = mid
    return lo
As you can see, they're virtually identical. However, the timed output for my function (searching for the last term in an ordered list of 100,000 words) is -3.60012054443e-05, whereas the built-in achieves -6.91413879395e-06. What explains this difference?
In the source code there is a comment at the end that says "Overwrite above definitions with a fast C implementation" - is this what explains the difference? If so, how would I go about creating such a precompiled module?
Any advice is greatly appreciated.
|
To summarise the remarks above so the question can be closed: the reason the built-in module is faster is that the modules are precompiled in C. There are basically two options to attempt to replicate such performance. One is to use a JIT compiler like PyPy, where the compilation is done at run time; the other is to compile your own modules in C, using Cython or some other variant to integrate the C code with Python. The link from sharth above to the C code for bisect is particularly helpful and can be found here. Thanks again for all the help.
|
When does Python create new list objects for empty lists?
|
The following makes sense to me:
>>> [] is []
False
Given that lists are mutable, I would expect [] to be a new empty list object every time it appears in an expression. Using this explanation however, the following surprises me:
>>> id([]) == id([])
True
Why? What is the explanation?
|
In the first example, [] is not [] precisely because the lists are mutable. If they weren't, they could safely map to the same one without issue.
In the second example, id([]) creates a list, gets the id, and deallocates the list. The second time around it creates a list again, but "puts it in the same place" because nothing much else has happened. id is only valid during an object's lifetime, and in this case its lifetime is virtually nil.
From the docs on id:
This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.
Commented disassembly:
0 LOAD_GLOBAL 0 (id) # load the id function
3 BUILD_LIST 0 # create the first list
6 CALL_FUNCTION 1 # get the id
9 LOAD_GLOBAL 0 (id) # load the id function
12 BUILD_LIST 0 # create the second list
15 CALL_FUNCTION 1 # get the id
18 COMPARE_OP 2 (==) # compare the two ids
21 RETURN_VALUE # return the comparison
Note there is no STORE_FAST to retain the list. Therefore it was discarded immediately after getting passed to the id function.
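You can see the difference as soon as both lists are kept alive at the same time:
a = []
b = []
print(id(a) == id(b))  # False: the lifetimes overlap, so the ids must differ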
|
Use Jinja2 template engine in external javascript file
|
I'm working on a web project using Python and Flask. I was just wondering if I can access parameters sent by Python in my external JavaScript files? It works well with HTML files or with JS embedded in HTML files, but not when the JavaScript is external.
See below.
The python code
@app.route('/index')
def index():
    return render_template('index.html', firstArg = 2, secondArg = 3)
The index.html code
...
<body>
<p>The first arg is {{firstArg}}.</p>
<script src="index.js"></script>
</body>
...
And the index.js file
window.onload=function(){
    console.log('{{secondArg}}');
};
So the first arg is correct within the html file but the second doesn't work in the js file. The browser is showing Unexpected token {.
Maybe it's not possible to use it in external js?
Otherwise I would need to insert the secondArg as an input data in html and get it within the js file but it's not very clean.
If someone can help, thanks.
|
The index.js is probably not even served by your Flask instance, and it is most definitely not processed by your templating engine; even if it were, it would not have the same context as the HTML page that requests it.
I think the cleanest solution would be to have an initiation function in your index.js and call it from the html file:
<body>
<p>The first arg is {{firstArg}}.</p>
<script type="text/javascript" src="index.js"></script>
<script type="text/javascript">
yourInitFunction({{secondArg}});
</script>
</body>
You could also tell Flask to route the index.js, too (@yourapp.route('index.js'), just like you did with route('/index')); however, this is probably not a very good idea.
|
Python - Enable access control on simple http server
|
I have the following shell script for very simple http server
#!/bin/sh
echo "Serving at http://localhost:3000"
python -m SimpleHTTPServer 3000
and I was wondering how I can enable/add [ Access-Control-Allow-Origin: * ] on this server?
|
Unfortunately, SimpleHTTPServer really is that simple: it does not allow any customization, especially not of the headers it sends. You can however create a simple HTTP server yourself that reuses most of SimpleHTTPRequestHandler and just adds the desired header.
Simply create a file simple-cors-http-server.py (or whatever) and put the following inside:
#!/usr/bin/env python2
from SimpleHTTPServer import SimpleHTTPRequestHandler
import BaseHTTPServer
class CORSRequestHandler (SimpleHTTPRequestHandler):
    def end_headers (self):
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

if __name__ == '__main__':
    BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
Then you can do python simple-cors-http-server.py and it will launch your modified server which will set the CORS header for every response.
With the shebang at the top, make the file executable and put it into your PATH, and you can just run it using simple-cors-http-server.py too.
For a solution that works with Python 3, you could simply change the imports above to import from the http.server module (all the types are there).
Alternatively, you could use this which works on both Python 3 and Python 2. The script first tries to import from the Python 3 locations and falls back to Python 2:
#!/usr/bin/env python
try:
    # Python 3
    from http.server import HTTPServer, SimpleHTTPRequestHandler, test as test_orig
    import sys
    def test (*args):
        test_orig(*args, port=int(sys.argv[1]) if len(sys.argv) > 1 else 8000)
except ImportError:  # Python 2
    from BaseHTTPServer import HTTPServer, test
    from SimpleHTTPServer import SimpleHTTPRequestHandler

class CORSRequestHandler (SimpleHTTPRequestHandler):
    def end_headers (self):
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

if __name__ == '__main__':
    test(CORSRequestHandler, HTTPServer)
|
Setting the window to a fixed size with Tkinter
|
This program will create a window where a message is displayed according to a check box.
from Tkinter import *
class App:
    def __init__(self,master):
        self.var = IntVar()
        frame = Frame(master)
        frame.grid()
        f2 = Frame(master,width=200,height=100)
        f2.grid(row=0,column=1)
        button = Checkbutton(frame,text='show',variable=self.var,command=self.fx)
        button.grid(row=0,column=0)
        msg2="""I feel bound to give them full satisfaction on this point"""
        self.v= Message(f2,text=msg2)
    def fx(self):
        if self.var.get():
            self.v.grid(column=1,row=0,sticky=N)
        else:
            self.v.grid_remove()
top = Tk()
app = App(top)
top.mainloop()
How can I make the window size constant, both when the message is displayed and when it is not?
|
This code makes a window with the conditions that the user cannot change the dimensions of the Tk() window, and also disables the maximise button.
import tkinter as tk
root = tk.Tk()
root.resizable(width=False, height=False)
root.mainloop()
Within the program you can change the window dimensions with @Carpetsmoker's answer, or by doing this:
root.geometry('{}x{}'.format(<widthpixels>, <heightpixels>))
It should be fairly easy for you to implement that into your code. :)
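Another option, for what it's worth, is to pin the size by making the minimum and maximum window sizes equal (a sketch; the 200x100 dimensions are arbitrary):
import tkinter as tk

root = tk.Tk()
root.geometry('200x100')
root.minsize(200, 100)  # min == max pins the window to one size
root.maxsize(200, 100)
root.mainloop()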
|
How to print all variables values when debugging Python with pdb, without specifying each variable?
|
I'm debugging my Python scripts using pdb and the manual says I can use p variables command to print the values of the specified variables at a certain point. But what if I had lots of variables, like 20 variables, and I would like to track the value of all of them? How do I print all of them without specifying each one manually? Take for example this script:
a = 1
b = 2
c = 3
I can debug it with pdb and print all of them using p a, b, c like this:
$ python -m pdb test.py
> /media/test.py(1)<module>()
-> a = 1
(Pdb) n
> /media/test.py(2)<module>()
-> b = 2
(Pdb) n
> /media/test.py(3)<module>()
(Pdb) n
--Return--
> /media/test.py(3)<module>()->None
-> c = 3
(Pdb) p a, b, c
(1, 2, 3)
(Pdb)
But I have to manually specify each variable. Is there a way to print all variables at once, without specifying each one of them?
|
pdb is a fully featured python shell, so you can execute arbitrary commands.
locals() and globals() will display all the variables in scope with their values.
You can use dir() if you're not interested in the values.
When you declare a variable in Python, it's put into locals or globals as appropriate, and there's no way to distinguish a variable you defined and something that's in your scope for another reason.
When you use dir(), it's likely that the variables you're interested in are at the beginning or end of that list. If you want the key, value pairs, filtering locals() might look something like this:
>>> x = 10
>>> y = 20
>>> {k: v for k,v in locals().iteritems() if '__' not in k and 'pdb' not in k}
{'y': 20, 'x': 10}
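(If you are on Python 3, note that dictionaries no longer have iteritems(); the equivalent sketch would use items() instead:)

>>> {k: v for k, v in locals().items() if '__' not in k and 'pdb' not in k}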
If your locals() is a real mess, you'll need something a little more heavy handed. You can put the following function in a module on your pythonpath and import it during your debugging session.
def debug_nice(locals_dict, keys=[]):
    globals()['types'] = __import__('types')
    exclude_keys = ['copyright', 'credits', 'False',
                    'True', 'None', 'Ellipsis', 'quit']
    exclude_valuetypes = [types.BuiltinFunctionType,
                          types.BuiltinMethodType,
                          types.ModuleType,
                          types.TypeType,
                          types.FunctionType]
    return {k: v for k, v in locals_dict.iteritems() if not
            (k in keys or
             k in exclude_keys or
             type(v) in exclude_valuetypes) and
            k[0] != '_'}
I've added an example session on pastebin
There are a couple of cases this misses. And you might want to extend it to allow you to pass in types too. But it should let you filter most everything but the variables you defined.
dir()
If you just want the last 20 values, so you get the output that >>> p var1 var2 ... varn would give you, then you're better off slicing dir() like dir()[-20:], but you won't easily see the relationship between the variables and values, e.g.: "Did I declare foo before or after bar?"
If you want to see that relationship, you can try something like this, which assumes that your variables are at the end of dir(). You can slice differently if they're at the beginning. This won't work well if your variables aren't contiguous.
>>> zip(dir(), [eval(var) for var in dir()])[-4:]
[('a', 10), ('var', 'var'), ('x', 30), ('y', 50)]
|
Timeout for python requests.get entire response
|
I'm gathering statistics on a list of websites and I'm using requests for it for simplicity. Here is my code:
data=[]
websites=['http://google.com', 'http://bbc.co.uk']
for w in websites:
r= requests.get(w, verify=False)
data.append( (r.url, len(r.content), r.elapsed.total_seconds(), str([(l.status_code, l.url) for l in r.history]), str(r.headers.items()), str(r.cookies.items())) )
Now, I want requests.get to timeout after 10 seconds so the loop doesn't get stuck.
This question has been of interest before too but none of the answers are clean. I will be putting some bounty on this to get a nice answer.
I hear that maybe not using requests is a good idea but then how should I get the nice things requests offer. (the ones in the tuple)
|
What about using eventlet? If you want to timeout the request after 10 seconds, even if data is being received, this snippet will work for you:
import requests
import eventlet
eventlet.monkey_patch()
with eventlet.Timeout(10):
requests.get("http://ipv4.download.thinkbroadband.com/1GB.zip", verify=False)
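Applied to the loop from the question, a sketch might look like this (the except clause is my addition; eventlet.Timeout raises itself when the limit is hit):

import requests
import eventlet
eventlet.monkey_patch()

data = []
websites = ['http://google.com', 'http://bbc.co.uk']
for w in websites:
    try:
        with eventlet.Timeout(10):
            r = requests.get(w, verify=False)
        data.append((r.url, len(r.content), r.elapsed.total_seconds()))
    except eventlet.Timeout:
        pass  # give up on any site that takes longer than 10 seconds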
|
Python based asynchronous workflow modules : What is difference between celery workflow and luigi workflow?
|
I am using django as a web framework. I need a workflow engine that can do synchronous as well as asynchronous (batch task) chains of tasks. I found celery and luigi as batch processing workflows. My first question is: what is the difference between these two modules?
Luigi allows us to rerun a failed chain of tasks, and only the failed sub-tasks get re-executed. What about celery: if we rerun the chain (after fixing the failed sub-task code), will it rerun the already-succeeded sub-tasks?
Suppose I have two sub-tasks. The first one creates some files and the second one reads those files. When I put these into chain in celery, the whole chain fails due to buggy code in second task. What happens when I rerun the chain after fixing the code in second task? Will the first task try to recreate those files?
|
(I'm the author of Luigi)
Luigi is not meant to be a synchronous, low-latency framework. It's meant for large batch processes that run for hours or days. So I think for your use case, Celery might actually be slightly better.
|
how to get user email with python social auth with facebook and save it
|
I'm trying to implement python-social-auth in django.
I want users to authenticate through facebook and save their email.
I'm able to authenticate users, but the extended permission for email is not showing up in the facebook authentication box, and the email is not stored in the database.
In settings.py I have the following:
SOCIAL_AUTH_FACEBOOK_KEY='xxx'
SOCIAL_AUTH_FACEBOOK_SECRET='xxx'
FACEBOOK_EXTENDED_PERMISSIONS = ['email']
AUTHENTICATION_BACKENDS = (
'social.backends.facebook.FacebookOAuth2',
'social.backends.email.EmailAuth',
'django.contrib.auth.backends.ModelBackend',
)
LOGIN_URL = '/login/'
LOGIN_REDIRECT_URL = '/done/'
LOGOUT_REDIRECT_URL = '/'
URL_PATH = ''
SOCIAL_AUTH_STRATEGY = 'social.strategies.django_strategy.DjangoStrategy'
SOCIAL_AUTH_STORAGE = 'social.apps.django_app.default.models.DjangoStorage'
SOCIAL_AUTH_PIPELINE = (
'social.pipeline.social_auth.social_details',
'social.pipeline.social_auth.social_uid',
'social.pipeline.social_auth.auth_allowed',
'social.pipeline.social_auth.social_user',
'social.pipeline.user.get_username',
'social.pipeline.social_auth.associate_by_email',
# 'users.pipeline.require_email',
'social.pipeline.mail.mail_validation',
'social.pipeline.user.create_user',
'social.pipeline.social_auth.associate_user',
'social.pipeline.social_auth.load_extra_data',
'social.pipeline.user.user_details'
)
The facebook dialog box...
How can I solve this?
|
After some changes in Facebook Login API - Facebook's Graph API v2.4
You will have to add these lines to fetch email
SOCIAL_AUTH_FACEBOOK_SCOPE = ['email']
SOCIAL_AUTH_FACEBOOK_PROFILE_EXTRA_PARAMS = {
'fields': 'id,name,email',
}
|
What is a "scalar" in numpy?
|
The documentation states the purpose of scalars, such as the fact that conventional Python numbers like float and integer are too primitive, and therefore more complex data types are necessary. It also states certain kinds of scalars (the data type hierarchy), as well as a couple of attributes of scalars. But it never gives a concrete definition of exactly what a scalar is in the context of Python.
I want to get to the heart of the issue on this. So my question is, in the simplest terms possible, explain to me what a pythonic scalar is.
|
A NumPy scalar is any object which is an instance of np.generic or whose type is in np.ScalarType:
In [12]: np.ScalarType
Out[12]:
(int,
float,
complex,
long,
bool,
str,
unicode,
buffer,
numpy.int16,
numpy.float16,
numpy.int8,
numpy.uint64,
numpy.complex192,
numpy.void,
numpy.uint32,
numpy.complex128,
numpy.unicode_,
numpy.uint32,
numpy.complex64,
numpy.string_,
numpy.uint16,
numpy.timedelta64,
numpy.bool_,
numpy.uint8,
numpy.datetime64,
numpy.object_,
numpy.int64,
numpy.float96,
numpy.int32,
numpy.float64,
numpy.int32,
numpy.float32)
This definition comes from looking at the source code for np.isscalar:
def isscalar(num):
    if isinstance(num, generic):
        return True
    else:
        return type(num) in ScalarType
Note that you can test if something is a scalar by using np.isscalar:
>>> np.isscalar(3.1)
True
>>> np.isscalar([3.1])
False
>>> np.isscalar(False)
True
How do we know what we know?
I like learning how people know what they know -- more than the answers themselves. So let me try to explain where the above answer comes from.
Having the right tools can help you figure out things like this for yourself.
I found this out by using IPython. Using its TAB-completion feature, typing
In [19]: import numpy as np
In [20]: np.[TAB]
causes IPython to display all variables in the np module namespace. A search for the string "scalar" will lead you to np.ScalarType and np.isscalar. Typing
In [20]: np.isscalar?
(note the question mark at the end) prompts IPython to show you where np.isscalar is defined:
File: /data1/unutbu/.virtualenvs/dev/lib/python2.7/site-packages/numpy/core/numeric.py
which is how I got to the definition of isscalar. Alternatively, the numpy documentation for isscalar has a link to the source code as well.
|
PyCharm include and modify External library in project
|
I have an issue where I am developing a Django project which includes other libraries we are also developing.
My current structure is as follows:
Main Project
App1
App2
Libraries
Library 1
Library 2
All libraries have their own setup scripts and are in separate git repositories, and we are adding them in PyCharm to the PYTHONPATH and referencing them simply by their name. This works well, but they are not in my current project, which means no refactoring (renaming, moving, etc.) and I have to use external search to find my classes from the libraries.
How do I set some libraries as project-related, to make them viewable and refactorable like the currently open project?
|
Well, you can add other directories as content roots:
Then simply mark the directory as a source root:
This should allow you to refactor, rename and do all the things you've wanted to do.
|
How do I increase the cell width of the ipython notebook in my browser?
|
I would like to increase the width of the ipython notebook in my browser. I have a high-resolution screen, and I would like to expand the cell width/size to make use of this extra space.
Thanks!
|
That div.cell solution didn't actually work on my IPython; however, luckily someone suggested a working solution for newer IPythons:
Create a file ~/.ipython/profile_default/static/custom/custom.css (iPython) or ~/.jupyter/custom/custom.css (Jupyter) with content
.container { width:100% !important; }
Then restart iPython/Jupyter notebooks. Note that this will affect all notebooks.
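If you prefer to create the file programmatically, here is a small sketch (assuming the Jupyter path applies to your setup):

import os

css_dir = os.path.expanduser('~/.jupyter/custom')
if not os.path.exists(css_dir):
    os.makedirs(css_dir)
with open(os.path.join(css_dir, 'custom.css'), 'w') as f:
    f.write('.container { width:100% !important; }\n')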
|
Using grequests to make several thousand get requests to sourceforge, get "Max retries exceeded with url"
|
I am very new to all of this; I need to obtain data on several thousand sourceforge projects for a paper I am writing. The data is all freely available in json format at the url http://sourceforge.net/api/project/name/[project name]/json. I have a list of several thousand of these URL's and I am using the following code.
import grequests
rs = (grequests.get(u) for u in ulist)
answers = grequests.map(rs)
Using this code I am able to obtain the data for any 200 or so projects I like, i.e. rs = (grequests.get(u) for u in ulist[0:199]) works, but as soon as I go over that, all attempts are met with
ConnectionError: HTTPConnectionPool(host='sourceforge.net', port=80): Max retries exceeded with url: /api/project/name/p2p-fs/json (Caused by <class 'socket.gaierror'>: [Errno 8] nodename nor servname provided, or not known)
<Greenlet at 0x109b790f0: <bound method AsyncRequest.send of <grequests.AsyncRequest object at 0x10999ef50>>(stream=False)> failed with ConnectionError
I am then unable to make any more requests until I quit python, but as soon as I restart python I can make another 200 requests.
I've tried using grequests.map(rs,size=200) but this seems to do nothing.
|
So, I'm answering here, maybe it will help others.
In my case, it was not rate limiting by the destination server, but something much simpler: I didn't explicitly close the responses, so they were keeping the socket open, and the python process ran out of file handles.
My solution (don't know for sure which one fixed the issue - theoretically either of them should) was to:
Set stream=False in grequests.get:
rs = (grequests.get(u, stream=False) for u in urls)
Explicitly call response.close() after reading response.content:
responses = grequests.map(rs)
for response in responses:
    make_use_of(response.content)
    response.close()
Note: simply destroying the response object (assigning None to it, calling gc.collect()) was not enough; this did not close the file handles.
|
Improving Python NetworkX graph layout
|
I am having some problems visualizing the graphs created with python-networkx. I want to be able to reduce clutter and regulate the distance between the nodes (I have also tried spring_layout; it just lays out the nodes in an elliptical fashion). Please advise.
Parts of code:
nx.draw_networkx_edges(G, pos, edgelist=predges, edge_color='red', arrows=True)
nx.draw_networkx_edges(G, pos, edgelist=black_edges, arrows=False, style='dashed')
# label fonts
nx.draw_networkx_labels(G,pos,font_size=7,font_family='sans-serif')
nx.draw_networkx_edge_labels(G,pos,q_list,label_pos=0.3)
|
In networkx, it's worth checking out the graph drawing algorithms provided by graphviz via nx.graphviz_layout.
I've had good success with neato (see the sketch after this list), but the other possible inputs are
dot - "hierarchical" or layered drawings of directed graphs. This is the default tool to use if edges have directionality.
neato - "spring model'' layouts. This is the default tool to use if the graph is not too large (about 100 nodes) and you don't know anything else about it. Neato attempts to minimize a global energy function, which is equivalent to statistical multi-dimensional scaling.
fdp - "spring model'' layouts similar to those of neato, but does this by reducing forces rather than working with energy.
sfdp - multiscale version of fdp for the layout of large graphs.
twopi - radial layouts, after Graham Wills 97. Nodes are placed on concentric circles depending their distance from a given root node.
circo - circular layout, after Six and Tollis 99, Kauffman and Wiese 02. This is suitable for certain diagrams of multiple cyclic structures, such as certain telecommunications networks.
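For instance, a minimal sketch using neato; this assumes graphviz plus pygraphviz or pydot are installed, and the random graph is just a made-up example:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.erdos_renyi_graph(30, 0.1)          # hypothetical example graph
pos = nx.graphviz_layout(G, prog='neato')  # let graphviz's neato compute positions
nx.draw(G, pos, node_size=50)
plt.show()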
In general, graph drawing is a hard problem. If these algorithms are not sufficient, you'll have to write your own or have networkx draw parts individually.
|
How to change size of bokeh figure
|
I have read most of the documentation on bokeh and many of the examples. All of them contain the default square window. The only example I have seen that is the slightly different is here which has subplots and sets height and width in the creation of a Plot object.
|
If you've already created the plot, then you can use the bokeh.plotting.curplot() function to return the "current" plot, and then set its height and width attributes. If you are building up a Plot object using the lower-level interfaces (e.g. the examples in bokeh/examples/glyph/), then you can just set those attributes directly on the plot object or in the Plot() constructor.
Alternatively, if you are using any of the glyph generation functions in bokeh.plotting, you can pass the plot_width and plot_height keyword arguments, e.g.:
line(x,y, color="#0000FF", tools="pan,wheel_zoom,box_zoom,reset",
name="line_example", plot_width=800, plot_height=300)
|
How to pass dictionary items as function arguments in python?
|
My code
1st file:
data = {'school':'DAV', 'standard': '7', 'name': 'abc', 'city': 'delhi'}
my_function(*data)
2nd file:
def my_function(*data):
    schoolname = school
    cityname = city
    standard = standard
    studentname = name
In the above code, only the keys of the "data" dictionary get passed to my_function(), but I want the key-value pairs to be passed. How can I correct this?
I want the my_function() to get modified like this
my_function(school='DAV', standard='7', name='abc', city='delhi')
and this is my requirement, give answers according to this
EDIT: dictionary key class is changed to standard
|
If you want to use them like that, define the function with the variable names as normal (but use klass for class, you can't use reserved words):
def my_function(school, city, klass, name):
    schoolname = school
    cityname = city
    standard = klass
    studentname = name
Now (as long as you rename class to klass in your dictionary) you can use ** when you call the function:
my_function(**data)
and it will work as you want.
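For example, with the edited dictionary from the question (where the key class was already renamed to standard, so no reserved word is involved), a quick sketch:

data = {'school': 'DAV', 'standard': '7', 'name': 'abc', 'city': 'delhi'}

def my_function(school, city, standard, name):
    print(school, city, standard, name)

my_function(**data)  # equivalent to my_function(school='DAV', standard='7', name='abc', city='delhi')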
|
Legend only shows one label when plotting with pandas
|
I have two Pandas DataFrames that I'm hoping to plot in a single figure. I'm using IPython notebook.
I would like the legend to show the label for both of the DataFrames, but so far I've been able to get only the latter one to show. Also any suggestions as to how to go about writing the code in a more sensible way would be appreciated. I'm new to all this and don't really understand object oriented plotting.
%pylab inline
import pandas as pd
#creating data
prng = pd.period_range('1/1/2011', '1/1/2012', freq='M')
var=pd.DataFrame(randn(len(prng)),index=prng,columns=['total'])
shares=pd.DataFrame(randn(len(prng)),index=prng,columns=['average'])
#plotting
ax=var.total.plot(label='Variance')
ax=shares.average.plot(secondary_y=True,label='Average Age')
ax.left_ax.set_ylabel('Variance of log wages')
ax.right_ax.set_ylabel('Average age')
plt.legend(loc='upper center')
plt.title('Wage Variance and Mean Age')
plt.show()
|
This is indeed a bit confusing. I think it boils down to how Matplotlib handles the secondary axes. Pandas probably calls ax.twinx() somewhere which superimposes a secondary axes on the first one, but this is actually a separate axes. Therefore also with separate lines & labels and a separate legend. Calling plt.legend() only applies to one of the axes (the active one) which in your example is the second axes.
Pandas fortunately does store both axes, so you can grab all line objects from both of them and pass them to the .legend() command yourself. Given your example data:
You can plot exactly as you did:
ax = var.total.plot(label='Variance')
ax = shares.average.plot(secondary_y=True, label='Average Age')
ax.set_ylabel('Variance of log wages')
ax.right_ax.set_ylabel('Average age')
Both axes objects are available as ax (the left axes) and ax.right_ax, so you can grab the line objects from them. Matplotlib's .get_lines() returns a list, so you can merge them by simple addition.
lines = ax.get_lines() + ax.right_ax.get_lines()
The line objects have a label property which can be used to read and pass the label to the .legend() command.
ax.legend(lines, [l.get_label() for l in lines], loc='upper center')
And the rest of the plotting:
ax.set_title('Wage Variance and Mean Age')
plt.show()
edit:
It might be less confusing if you separate the Pandas (data) and the Matplotlib (plotting) parts more strictly, so avoid using Pandas' built-in plotting (which only wraps Matplotlib anyway):
fig, ax = plt.subplots()
ax.plot(var.index.to_datetime(), var.total, 'b', label='Variance')
ax.set_ylabel('Variance of log wages')
ax2 = ax.twinx()
ax2.plot(shares.index.to_datetime(), shares.average, 'g' , label='Average Age')
ax2.set_ylabel('Average age')
lines = ax.get_lines() + ax2.get_lines()
ax.legend(lines, [line.get_label() for line in lines], loc='upper center')
ax.set_title('Wage Variance and Mean Age')
plt.show()
|
Finding index of maximum value in array with NumPy
|
I would like to find a maximum in a float64 array, excluding nan values.
I saw np.nanmax function but it doesn't give the index corresponding to the found value.
It seems strange to have to scan for the value afterwards, especially since the function must use the index internally anyway. Isn't there a way to recover the index directly?
|
Numpy has an argmax function that returns just that, although you will have to deal with the nans manually. nans always get sorted to the end of an array, so with that in mind you can do:
a = np.random.rand(10000)
a[np.random.randint(10000, size=(10,))] = np.nan
a = a.reshape(100, 100)
def nanargmax(a):
    idx = np.argmax(a, axis=None)
    multi_idx = np.unravel_index(idx, a.shape)
    if np.isnan(a[multi_idx]):
        nan_count = np.sum(np.isnan(a))
        # In numpy < 1.8 use idx = np.argsort(a, axis=None)[-nan_count-1]
        idx = np.argpartition(a, -nan_count-1, axis=None)[-nan_count-1]
        multi_idx = np.unravel_index(idx, a.shape)
    return multi_idx
>>> nanargmax(a)
(20, 93)
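As a side note, depending on your numpy version you may also have np.nanargmax available, which ignores NaNs directly; a sketch of the equivalent call:

>>> np.unravel_index(np.nanargmax(a), a.shape)  # should agree with nanargmax(a) above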
|
abs() vs fabs() speed difference and advantage of fabs()
|
I ran some simple tests on the abs() and fabs() functions and I don't understand what the advantages of using fabs() are, if it is:
1) slower
2) works only on floats
3) will throw an exception if used on a different type
In [1]: %timeit abs(5)
10000000 loops, best of 3: 86.5 ns per loop
In [3]: %timeit fabs(5)
10000000 loops, best of 3: 115 ns per loop
In [4]: %timeit abs(-5)
10000000 loops, best of 3: 88.3 ns per loop
In [5]: %timeit fabs(-5)
10000000 loops, best of 3: 114 ns per loop
In [6]: %timeit abs(5.0)
10000000 loops, best of 3: 92.5 ns per loop
In [7]: %timeit fabs(5.0)
10000000 loops, best of 3: 93.2 ns per loop
it's even slower on floats!
From where I am standing, the only advantage of using fabs() is to make your code more readable, because by using it you are clearly stating your intention of working with float/double values.
Is there any other use of fabs()?
|
From an email response from Tim Peters:
Why does math have an fabs function? Both it and the abs builtin function
wind up calling fabs() for floats. abs() is faster to boot.
Nothing deep -- the math module supplies everything in C89's standard
libm (+ a few extensions), fabs() is a std C89 libm function.
There isn't a clear (to me) reason why one would be faster than the
other; sounds accidental; math.fabs() could certainly be made faster
(as currently implemented (via math_1), it endures a pile of
general-purpose "try to guess whether libm should have set errno"
boilerplate that's wasted (there are no domain or range errors
possible for fabs())).
It seems there is no advantageous reason to use fabs. Just use abs for virtually all purposes.
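One small observable difference worth knowing (a quick sketch): math.fabs always converts its argument and returns a float, while the abs builtin preserves the input type:

>>> import math
>>> abs(-5), math.fabs(-5)
(5, 5.0)
>>> abs(-5.5), math.fabs(-5.5)
(5.5, 5.5)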
|
Python Social Auth Django template example
|
Does someone has an open example using Python Social Auth with Django in templates?
I took a look in their Github repo, and in the django example there is nothing about how to deal with it in templates (e.g. doing login, logout, etc.).
|
Let's say you followed the Python Social Auth configuration guidelines at http://psa.matiasaguirre.net/docs/configuration/django.html and you want to use facebook login.
Your backend settings in settings.py should look:
AUTHENTICATION_BACKENDS = (
'social.backends.facebook.FacebookOAuth2',
'django.contrib.auth.backends.ModelBackend',
)
You should register as facebook developer and create an app and then fill in additional data in your settings.py file:
SOCIAL_AUTH_FACEBOOK_KEY = 'xxxxxxxxxxxxxx'
SOCIAL_AUTH_FACEBOOK_SECRET = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
SOCIAL_AUTH_FACEBOOK_SCOPE = ['email']
Let us assume after login you want users to be redirected to members page, so you add this setting to your settings.py:
LOGIN_REDIRECT_URL = '/members'
Let's say you created login_app in your django project, as well as your home view with a home.html template, and also a members view with a members.html template (you should have your template directory working).
According to configuration guidelines our urls.py should look:
from django.conf.urls import patterns, include, url
from django.contrib import admin
urlpatterns = patterns('',
url('', include('social.apps.django_app.urls', namespace='social')),
url(r'^admin/', include(admin.site.urls)),
)
If we would try bla-bla-bla url with DEBUG=True settings, we would get an error:
Using the URLconf defined in your_project.urls, Django tried these URL patterns, in this order:
^login/(?P<backend>[^/]+)/$ [name='begin']
^complete/(?P<backend>[^/]+)/$ [name='complete']
^disconnect/(?P<backend>[^/]+)/$ [name='disconnect']
^disconnect/(?P<backend>[^/]+)/(?P<association_id>[^/]+)/$ [name='disconnect_individual']
^admin/
The current URL, bla-bla-bla/, didn't match any of these.
For a very simple test we need to add home view, members view and logout (login is already handled), so our updated urls.py should look:
from django.conf.urls import patterns, include, url
from django.contrib import admin
urlpatterns = patterns('',
url('', include('social.apps.django_app.urls', namespace='social')),
url(r'^admin/', include(admin.site.urls)),
url(r'^$', 'login_app.views.home', name='home'),
url(r'^members/', 'login_app.views.members', name='members'),
url(r'^logout/$', 'login_app.views.logout', name='logout'),
)
Under our login_app directory we should have these files (do not pay attention to the *.pyc files; the migrations folder is present because I use the django 1.7b4 version):
login_app/
├── admin.py
├── __init__.py
├── __init__.pyc
├── migrations
│   └── __init__.py
├── models.py
├── tests.py
├── views.py
└── views.pyc
Our views.py should look like:
from django.shortcuts import render, redirect
from django.contrib.auth import logout as auth_logout
def home(request):
context = {}
template = 'home.html'
return render(request, template, context)
def members(request):
context = {}
template = 'members.html'
return render(request, template, context)
def logout(request):
auth_logout(request)
return redirect('/')
Other files (including models.py) we may leave without adding anything.
In order to log in with facebook we should redirect users to "login/facebook". So you can just add this link or button where appropriate somewhere in your home.html template:
<a href="login/facebook">Login with facebook</a>
After this link is pressed (in case settings.py, urls.py, views.py are ok and your facebook app is configured well) users will be logged in with facebook and redirected to the members page. If you log in to django admin, you should be able to see a new entry under [ Home › Default › User social auths ] and a new user in [ Home › Authentication and Authorization › Users ].
When the user is authenticated and redirected to the members page, you can have the user's information such as username, first name, last name and e-mail. You can display that information by adding to your members.html template:
<p>User's name and surname: {{ user.first_name }} {{ user.last_name}}</p>
<p>Username: {{ user.username }}</p>
<p>E-mail: {{ user.email }}</p>
As you already noticed, for logout we made a view in our views.py:
def logout(request):
auth_logout(request)
return redirect('/')
So we can add a link in our members.html template:
<a href="/logout">Logout</a>
And that would be enough to log out the user and redirect to initial home page.
This would be very simple example which may give a better understanding how to login and logout with Python Social Auth.
|
Join multiple tables in SQLAlchemy/Flask
|
I am trying to figure out the correct join query setup within SQLAlchemy, but I can't seem to get my head around it.
I have the following table setup (simplified, I left out the non-essential fields):
class Group(db.Model):
id = db.Column(db.Integer, primary_key = True)
number = db.Column(db.SmallInteger, index = True, unique = True)
member = db.relationship('Member', backref = 'groups', lazy = 'dynamic')
class Member(db.Model):
id = db.Column(db.Integer, primary_key = True)
number = db.Column(db.SmallInteger, index = True)
groupid = db.Column(db.Integer, db.ForeignKey('group.id'))
item = db.relationship('Item', backref = 'members', lazy = 'dynamic')
class Version(db.Model):
id = db.Column(db.Integer, primary_key = True)
name = db.Column(db.String(80), index = True)
items = db.relationship('Item', backref='versions', lazy='dynamic')
class Item(db.Model):
id = db.Column(db.Integer, primary_key = True)
member = db.Column(db.Integer, db.ForeignKey('member.id'))
version = db.Column(db.Integer, db.ForeignKey('version.id'))
So the relationships are the following:
1:n Group Member
1:n Member Item
1:n Version Item
I would like to construct a query by selecting all Item-Rows from the database, that have a certain version. Then I would like to order them by Group and then by Member. The output using Flask/WTForm should look something like this:
* GroupA
* MemberA
* ItemA (version = selected by user)
* ItemB ( dito )
* Member B
* ItemC ( dito )
....
I have come up with something like the following query, but I am pretty sure that it is not correct (and inefficient)
session.query(Item,Member,Group,Version)
.join(Member).filter(version.id==1)
.order_by(Group).order_by(Member).all()
My first intuitive approach would have been to create something like
Item.query.join(Member, Item.member==Member.id)
.filter(Member.versions.name=='MySelection')
.order_by(Member.number).order_by(Group.number)
but obviously, this doesn't work at all. The join operation on the Version table does not seem to produce the type of join between the two tables that I expected. Maybe I am totally misunderstanding the concept, but after reading the tutorials this would have made sense to me.
|
Following will give you the objects you need in one query:
q = (session.query(Group, Member, Item, Version)
     .join(Member)
     .join(Item)
     .join(Version)
     .filter(Version.name == my_version)
     .order_by(Group.number)
     .order_by(Member.number)
     ).all()
print_tree(q)
However, the result you get will be a list of tuples (Group, Member, Item, Version). Now it is up to you to display it in a tree form. Code below might prove useful though:
def print_tree(rows):
    def get_level_diff(row1, row2):
        """ Returns tuple: (from, to) of different item positions. """
        if row1 is None:  # first row handling
            return (0, len(row2))
        assert len(row1) == len(row2)
        for col in range(len(row1)):
            if row1[col] != row2[col]:
                return (col, len(row2))
        assert False, "should not have duplicates"

    prev_row = None
    for row in rows:
        level = get_level_diff(prev_row, row)
        for l in range(*level):
            print 2 * l * " ", row[l]
        prev_row = row
Update-1: If you are willing to forgo lazy = 'dynamic' for the first two relationships, you can use a query to load a whole object network (as opposed to the tuples above) with the code:
q = (session.query(Group)
     .join(Member)
     .join(Item)
     .join(Version)
     # @note: here we are tricking sqlalchemy to think that we loaded all these relationships,
     # even though we filter them out by version. Please use this only to get data and display,
     # but not to continue working with it as if it were a regular UnitOfWork
     .options(
         contains_eager(Group.member).
         contains_eager(Member.items).
         contains_eager(Item.version)
     )
     .filter(Version.name == my_version)
     .order_by(Group.number)
     .order_by(Member.number)
     ).all()
# print tree: easy navigation of relationships
for g in q:
    print "", g
    for m in g.member:
        print 2 * " ", m
        for i in m.items:
            print 4 * " ", i
|
How to make an optional value for argument using argparse?
|
I am creating a python script where I want to have an argument that manipulates how many search results you get as output. I've currently named the argument --head. This is the functionality I'd like it to have:
When --head is not passed at the command line I'd like it to default to one value. In this case, a rather big one, like 80
When --head is passed without any value, I'd like it to default to another value. In this case, something limited, like 10
When --head is passed with a value, I'd like it to store the value it was passed.
Here is some code describing the problem:
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-h',
...                     '--head',
...                     dest='size',
...                     const=80,
...                     default=10,
...                     action="I don't know",
...                     help='Only print the head of the output')
>>> # OFC, that last line will fail because the action is unknown,
... # but here is how I'd like it to work
... parser.parse_args(''.split())
Namespace(size=80)
>>> parser.parse_args('--head'.split())
Namespace(size=10)
>>> parser.parse_args('--head 15'.split())
Namespace(size=15)
I know I probably can write a custom action for this, but I first want to see if there is any default behaviour that does this.
|
After a little more reading in the documentation I found what I needed: nargs='?'.
This is used with the store action, and does exactly what I want.
Here is an example:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--head',
                    dest='size',
                    const=10,
                    default=80,
                    action='store',
                    nargs='?',
                    type=int,
                    help='Only print the head of the output')

print(parser.parse_args(''.split()))
# output: Namespace(size=80)
print(parser.parse_args('--head'.split()))
# output: Namespace(size=10)
print(parser.parse_args('--head 15'.split()))
# output: Namespace(size=15)
source: http://docs.python.org/3.3/library/argparse.html#nargs
|
Get virtualenv's bin folder path from script
|
I'm using virtualenvwrapper with a django project that has a management task that automatically writes some config files, so the user just has to
./manage.py generate_configuration > much_nice.conf
And then move the file elsewhere. One of the generated config files is a task for supervisord that launches a celery worker. The problem I'm getting is that I don't know how to output the path of the celery executable that is within the bin folder of the virtualenv. Essentially, I'd like to have the output of the command
which celery
One option is to use sys.executable and take its folder (which seems to be the bin folder of the virtualenv), and that's it... but I'm not sure.
Doesn't virtualenv have any kind of method to get the path itself?
|
The path to the virtual env is in the environment variable VIRTUAL_ENV
echo $VIRTUAL_ENV
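So from the management command you could build the celery path yourself; a sketch (celery_path is my own name for illustration):

import os

venv = os.environ.get('VIRTUAL_ENV')
if venv is not None:
    celery_path = os.path.join(venv, 'bin', 'celery')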
|
Convert Columns to String in Pandas
|
I have the following DataFrame from a SQL query:
(Pdb) pp total_rows
ColumnID RespondentCount
0 -1 2
1 3030096843 1
2 3030096845 1
and I want to pivot it like this:
total_data = total_rows.pivot_table(cols=['ColumnID'])
(Pdb) pp total_data
ColumnID -1 3030096843 3030096845
RespondentCount 2 1 1
[1 rows x 3 columns]
total_rows.pivot_table(cols=['ColumnID']).to_dict('records')[0]
{3030096843: 1, 3030096845: 1, -1: 2}
but I want to make sure the 303 columns are cast as strings instead of integers so that I get this:
{'3030096843': 1, '3030096845': 1, -1: 2}
|
One way to convert to string is to use astype:
total_rows['ColumnID'] = total_rows['ColumnID'].astype(str)
However, perhaps you are looking for the to_json function, which will convert keys to valid json (and therefore your keys to strings):
In [11]: df = pd.DataFrame([['A', 2], ['A', 4], ['B', 6]])
In [12]: df.to_json()
Out[12]: '{"0":{"0":"A","1":"A","2":"B"},"1":{"0":2,"1":4,"2":6}}'
In [13]: df[0].to_json()
Out[13]: '{"0":"A","1":"A","2":"B"}'
Note: you can pass in a buffer/file to save this to, along with some other options...
|
Detect if a cube and a cone intersect each other?
|
Consider two geometrical objects in 3D:
a cube aligned with the axes and defined by the position of its center and its extent (edge length)
a cone not aligned with the axes and defined by the position of its vertex, the position of the center of its base, and the half-angle at the vertex
Here is a small code to define these objects in C++:
// Preprocessor
#include <iostream>
#include <cmath>
#include <array>

// 3D cube from the position of its center and the side extent
class cube
{
    public:
        cube(const std::array<double, 3>& pos, const double ext)
        : _position(pos), _extent(ext)
        {;}
        double center(const unsigned int idim)
        {return _position[idim];}
        double min(const unsigned int idim)
        {return _position[idim]-_extent/2;}
        double max(const unsigned int idim)
        {return _position[idim]+_extent/2;}
        double extent()
        {return _extent;}
        double volume()
        {return std::pow(_extent, 3);}
    protected:
        std::array<double, 3> _position;
        double _extent;
};

// 3d cone from the position of its vertex, the base center, and the angle
class cone
{
    public:
        cone(const std::array<double, 3>& vert,
             const std::array<double, 3>& bas,
             const double ang)
        : _vertex(vert), _base(bas), _angle(ang)
        {;}
        double vertex(const unsigned int idim)
        {return _vertex[idim];}
        double base(const unsigned int idim)
        {return _base[idim];}
        double angle()
        {return _angle;}
        double height()
        {return std::sqrt(std::pow(_vertex[0]-_base[0], 2)+std::pow(
            _vertex[1]-_base[1], 2)+std::pow(_vertex[2]-_base[2], 2));}
        double radius()
        {return std::tan(_angle)*height();}
        double circle()
        {return 4*std::atan(1)*std::pow(radius(), 2);}
        double volume()
        {return circle()*height()/3;}
    protected:
        std::array<double, 3> _vertex;
        std::array<double, 3> _base;
        double _angle;
};
I would like to write a function to detect whether the intersection of a cube and a cone is empty or not:
// Detect whether the intersection between a 3d cube and a 3d cone is not null
bool intersection(const cube& x, const cone& y)
{
    // Function that returns false if the intersection of x and y is empty
    // and true otherwise
}
Here is an illustration of the problem (the illustration is in 2D, but my problem is in 3D):
How can this be done efficiently? (I am searching for an algorithm, so the answer can be in C, C++ or Python.)
Note: Here intersection is defined as: it exists a non-null 3D volume that is in the cube and in the cone (if the cube is inside the cone, or if the cone is inside the cube, they intersect).
|
imagine 2 infinite lines
axis of a cone
a line going through a point P (the cube center, for starters) which is perpendicular to the cone axis.
The cone axis is known to you, so that is easy; the second line is defined as
P+t*(perpendicular vector to cone axis)
This vector can be obtained by the cross product of the cone axis vector and a vector perpendicular to your image (assuming the Z axis). The t is a scalar parameter ...
compute the intersection of these 2 lines/axes
If you do not know the equations, derive them or google them. Let the intersection point be Q.
if the intersection point Q does not lie inside the cone
(between vertex and base) then point P is not intersecting the cone. From the intersection equations you will obtain parameters t1 and t2;
let t1 be for the P-axis line
and t2 for the cone-axis line
if your axis line direction vector also has the cone's length, then the intersection is inside the cone if t2 is in <0,1>
if P is not inside the triangle (the cone cut by the plane generated by those 2 axes)
This is also easy: you know the position of Q inside the cone (t2), so you know that the cone reaches along the P-axis from Q to a distance of R*t2, where R is the base radius of the cone. So you can compute |P-Q| and check if it is <= R*t2, or use t1 directly (if the P-axis direction vector is a unit vector).
If the distance is bigger than R*t2, point P does not intersect the cone.
if #3 and #4 are positive then P intersects the cone
Hope you don't mind: here is your image with a few things added for clarity.
[notes]
Now the hard part: there are edge cases where no vertex of the cube intersects the cone but the cube intersects the cone anyway. This can occur when ||P-Q|-R*t2| = <0, half cube size>. In this case you should check more points than just the cube vertices, along the closest cube face.
Another approach is:
create a transformation matrix for cone
Where:
its vertex as origin
its axis as +Z axis
and XY plane is parallel to its base
so any point is inside the cone if
Z is in <0,h>
X*X + Y*Y <= (R*Z/h)^2, or equivalently X*X + Y*Y <= (Z*tan(angle))^2
convert the cube vertices into cone space
and check if any vertex is inside the cone. You can also check all cube edge lines against the conditions from #1 (algebraically), or use more points along the cube faces as in the previous method; see the sketch below.
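To make #1 concrete, here is a rough Python sketch of the point-inside-cone test (my own illustration, not code from the original answer; point_in_cone is a hypothetical helper):

import math

def point_in_cone(p, vertex, base, half_angle):
    # axis vector from vertex to base center, and its length h
    axis = [b - v for v, b in zip(vertex, base)]
    h = math.sqrt(sum(c * c for c in axis))
    # project (p - vertex) onto the axis to get the height z of p
    d = [pi - vi for pi, vi in zip(p, vertex)]
    z = sum(di * ai for di, ai in zip(d, axis)) / h
    if z < 0 or z > h:
        return False  # outside the slab between vertex and base
    # radius of the cone at height z, compared with the distance from the axis
    r_at_z = math.tan(half_angle) * z
    dist_sq = sum(di * di for di in d) - z * z
    return dist_sq <= r_at_z * r_at_z

Checking the eight cube vertices with this test catches the easy cases; the edge cases discussed above still need the extra point or edge checks.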
Chat discussion: http://chat.stackoverflow.com/rooms/48756/discussion-between-spektre-and-joojaa
|
PyPy and PyInstaller
|
Is it possible to build a single-binary or single-directory package with PyInstaller that uses pypy as interpreter?
Any special tricks to do that?
Some alternative to PyInstaller?
If not, what are the fundamental technical reasons?
Notes on why/how pyinstaller doesn't work with pypy out of the box:
distutils.sysconfig.get_config_h_filename missing, fixed in pyinstaller trunk
(distutils.|)sysconfig.(_|)get_makefile_filename missing, actually optional
tries to link against libpython2.7.so.1; pypy is a single executable, not a shared object
|
I have tried this out and it failed, on many occasions, because PyPy is only able to work with a small subset of what CPython uses. PyInstaller is a full-blown CPython application, so the two are not able to communicate.
If you need improved speed and want to hide your code away from people, you can try out Cython. I use both Cython and PyInstaller a lot, and I love their cross-platform nature.
When you are through with both, you can then use PyInstaller & CPython to package your app.
|
openssl, python requests error: "certificate verify failed"
|
If I run the following command from my development box:
$ openssl s_client -connect github.com:443
I get the following last line of output:
Verify return code: 20 (unable to get local issuer certificate)
If I try to do this with requests I get another failed request:
>>> import requests
>>> r = requests.get('https://github.com/', verify=True)
With an exception raised:
SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
I can also run the first command with the verify flag and get similar output:
$ openssl s_client -connect github.com:443 -verify 9
...
Verify return code: 27 (certificate not trusted)
Basically this is telling me that there is a problem with the certificates. I can specify a specific certificate with both methods and it will work:
$ openssl s_client -connect github.com:443 -CAfile /etc/ssl/certs/DigiCert_High_Assurance_EV_Root_CA.pem -verify 9
...
Verify return code: 0 (ok)
and:
>>> r = requests.get('https://github.com/', verify='/etc/ssl/certs/DigiCert...pem')
<Response [200]>
So, to my question, what exactly is wrong here? Shouldn't requests/openssl already know where to find valid certs?
Other Info:
Python==2.7.6
requests==2.2.1
openssl 0.9.8h
Also, I know passing verify=False to the requests.get method will work too, but I do want to verify.
EDIT
I've confirmed that, as @Heikki Toivonen indicated in an answer, specifying the -CAfile flag for the version of openssl that I'm running works.
$ openssl s_client -connect github.com:443 -CAfile `python -c 'import requests; print(requests.certs.where())'`
...
Verify return code: 0 (ok)
So there is nothing wrong with the version of openssl that I'm running, and there is nothing wrong with the default cacert.pem file that requests provides.
Now that I know openssl is meant to work that way, that the CAfile or the place to find certs has to be specified, I'm more concerned about getting requests to work.
If I run:
>>> r = requests.get('https://github.com/', verify='path to cacert.pem file')
I'm still getting the same error as before. I even tried downloading the cacert.pem file from http://curl.haxx.se/ca and it still didn't work. requests only seems to work (on this specific machine) if I specify a specific vendor cert file.
A side note: On my local machine everything is working as expected. There are several difference between the two machines though. I so far haven't been able to determine what the specific difference is that causes this issue.
|
If I run the following command from my development box:
$ openssl s_client -connect github.com:443
I get the following last line of output:
Verify return code: 20 (unable to get local issuer certificate)
You are missing DigiCert High Assurance EV CA-1 as a root of trust:
$ openssl s_client -connect github.com:443
CONNECTED(00000003)
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV CA-1
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
0 s:/businessCategory=Private Organization/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/serialNumber=5157550/street=548 4th Street/postalCode=94107/C=US/ST=California/L=San Francisco/O=GitHub, Inc./CN=github.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV CA-1
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV CA-1
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
---
Server certificate
...
Start Time: 1393392088
Timeout : 300 (sec)
Verify return code: 20 (unable to get local issuer certificate)
Download DigiCert High Assurance EV CA-1 from DigiCert Trusted Root Authority Certificates:
$ wget https://www.digicert.com/CACerts/DigiCertHighAssuranceEVCA-1.crt
--2014-02-26 00:27:50-- https://www.digicert.com/CACerts/DigiCertHighAssuranceEVCA-1.crt
Resolving www.digicert.com (www.digicert.com)... 64.78.193.234
...
Convert the DER encoded certifcate to PEM:
$ openssl x509 -in DigiCertHighAssuranceEVCA-1.crt -inform DER -out DigiCertHighAssuranceEVCA-1.pem -outform PEM
Then, use it with OpenSSL via the -CAfile:
$ openssl s_client -CAfile DigiCertHighAssuranceEVCA-1.pem -connect github.com:443
CONNECTED(00000003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV CA-1
verify return:1
depth=0 businessCategory = Private Organization, 1.3.6.1.4.1.311.60.2.1.3 = US, 1.3.6.1.4.1.311.60.2.1.2 = Delaware, serialNumber = 5157550, street = 548 4th Street, postalCode = 94107, C = US, ST = California, L = San Francisco, O = "GitHub, Inc.", CN = github.com
verify return:1
---
Certificate chain
0 s:/businessCategory=Private Organization/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/serialNumber=5157550/street=548 4th Street/postalCode=94107/C=US/ST=California/L=San Francisco/O=GitHub, Inc./CN=github.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV CA-1
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV CA-1
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIHOjCCBiKgAwIBAgIQBH++LkveAITSyvjj7P5wWDANBgkqhkiG9w0BAQUFADBp
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSgwJgYDVQQDEx9EaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBDQS0xMB4XDTEzMDYxMDAwMDAwMFoXDTE1MDkwMjEyMDAwMFowgfAxHTAb
BgNVBA8MFFByaXZhdGUgT3JnYW5pemF0aW9uMRMwEQYLKwYBBAGCNzwCAQMTAlVT
MRkwFwYLKwYBBAGCNzwCAQITCERlbGF3YXJlMRAwDgYDVQQFEwc1MTU3NTUwMRcw
FQYDVQQJEw41NDggNHRoIFN0cmVldDEOMAwGA1UEERMFOTQxMDcxCzAJBgNVBAYT
AlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4gRnJhbmNpc2Nv
MRUwEwYDVQQKEwxHaXRIdWIsIEluYy4xEzARBgNVBAMTCmdpdGh1Yi5jb20wggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDt04nDXXByCfMzTxpydNm2WpVQ
u2hhn/f7Hxnh2gQxrxV8Gn/5c68d5UMrVgkARWlK6MRb38J3UlEZW9Er2TllNqAy
GRxBc/sysj2fmOyCWws3ZDkstxCDcs3w6iRL+tmULsOFFTmpOvaI2vQniaaVT4Si
N058JXg6yYNtAheVeH1HqFWD7hPIGRqzPPFf/jsC4YX7EWarCV2fTEPwxyReKXIo
ztR1aE8kcimuOSj8341PTYNzdAxvEZun3WLe/+LrF+b/DL/ALTE71lmi8t2HSkh7
bTMRFE00nzI49sgZnfG2PcVG71ELisYz7UhhxB0XG718tmfpOc+lUoAK9OrNAgMB
AAGjggNUMIIDUDAfBgNVHSMEGDAWgBRMWMsl8EFPUvQoyIFDm6aooOaS5TAdBgNV
HQ4EFgQUh9GPGW7kh29TjHeRB1Dfo79VRyAwJQYDVR0RBB4wHIIKZ2l0aHViLmNv
bYIOd3d3LmdpdGh1Yi5jb20wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsG
AQUFBwMBBggrBgEFBQcDAjBjBgNVHR8EXDBaMCugKaAnhiVodHRwOi8vY3JsMy5k
aWdpY2VydC5jb20vZXZjYTEtZzIuY3JsMCugKaAnhiVodHRwOi8vY3JsNC5kaWdp
Y2VydC5jb20vZXZjYTEtZzIuY3JsMIIBxAYDVR0gBIIBuzCCAbcwggGzBglghkgB
hv1sAgEwggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3LmRpZ2ljZXJ0LmNvbS9z
c2wtY3BzLXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUHAgIwggFWHoIBUgBBAG4A
eQAgAHUAcwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQByAHQAaQBmAGkAYwBhAHQA
ZQAgAGMAbwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBjAGUAcAB0AGEAbgBjAGUA
IABvAGYAIAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAgAEMAUAAvAEMAUABTACAA
YQBuAGQAIAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQAGEAcgB0AHkAIABBAGcA
cgBlAGUAbQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBtAGkAdAAgAGwAaQBhAGIA
aQBsAGkAdAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBjAG8AcgBwAG8AcgBhAHQA
ZQBkACAAaABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBlAHIAZQBuAGMAZQAuMH0G
CCsGAQUFBwEBBHEwbzAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu
Y29tMEcGCCsGAQUFBzAChjtodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln
aUNlcnRIaWdoQXNzdXJhbmNlRVZDQS0xLmNydDAMBgNVHRMBAf8EAjAAMA0GCSqG
SIb3DQEBBQUAA4IBAQBfFW1nwzrVo94WnEUzJtU9yRZ0NMqHSBsUkG31q0eGufW4
4wFFZWjuqRJ1n3Ym7xF8fTjP3fdKGQnxIHKSsE0nuuh/XbQX5DpBJknHdGFoLwY8
xZ9JPI57vgvzLo8+fwHyZp3Vm/o5IYLEQViSo+nlOSUQ8YAVqu6KcsP/e612UiqS
+UMBmgdx9KPDDzZy4MJZC2hbfUoXj9A54mJN8cuEOPyw3c3yKOcq/h48KzVguQXi
SdJbwfqNIbQ9oJM+YzDjzS62+TCtNSNWzWbwABZCmuQxK0oEOSbTmbhxUF7rND3/
+mx9u8cY//7uAxLWYS5gIZlCbxcf0lkiKSHJB319
-----END CERTIFICATE-----
subject=/businessCategory=Private Organization/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/serialNumber=5157550/street=548 4th Street/postalCode=94107/C=US/ST=California/L=San Francisco/O=GitHub, Inc./CN=github.com
issuer=/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV CA-1
---
No client certificate CA names sent
---
SSL handshake has read 4139 bytes and written 446 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 59D2883BBCE8E81E63E5551FAE7D1ACC00C49A9473C1618237BBBB0DD9016B8D
Session-ID-ctx:
Master-Key: B6D2763FF29E77C67AD83296946A4D44CDBA4F37ED6F20BC27602F1B1A2D137FACDEAC862C11279C01095594F9776F79
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1393392673
Timeout : 300 (sec)
Verify return code: 0 (ok)
Shouldn't requests/openssl already know where to find valid certs?
No. OpenSSL trusts nothing by default. It's the polar opposite of a browser's model, where nearly everything is trusted by default.
$ openssl s_client -connect github.com:443 -CAfile `python -c 'import requests; print(requests.certs.where())'`
...
>>> r = requests.get('https://github.com/', verify='path to cacert.pem file')
Why would you trust hundreds of CAs and subordinate CAs (re: cacert.pem) when you know the one CA that is certifying the public key for the site? Trust the one required root and nothing more: DigiCert High Assurance EV CA-1.
Trusting everything - as in the browser's model - is what allowed Comodo Hacker to spoof certificates for Gmail, Hotmail, Yahoo, etc when the Diginotar root was compromised.
|
Flask-restful: marshal complex object to json
|
I have a question regarding the flask restful extension. I've just started to use it and faced a problem. I have flask-sqlalchemy entities that are connected by a many-to-one relation, and I want the restful endpoint to return the parent entity with all its children in json, using a marshaller. In my case a Set contains many Parameters. I looked at the flask-restful docs but there wasn't any explanation of how to solve this case.
Seems like I'm missing something obvious but cannot figure out any solution.
Here is my code:
# entities
class Set(db.Model):
    id = db.Column("id", db.Integer, db.Sequence("set_id_seq"), primary_key=True)
    title = db.Column("title", db.String(256))
    parameters = db.relationship("Parameters", backref="set", cascade="all")

class Parameters(db.Model):
    id = db.Column("id", db.Integer, db.Sequence("parameter_id_seq"), primary_key=True)
    flag = db.Column("flag", db.String(256))
    value = db.Column("value", db.String(256))
    set_id = db.Column("set_id", db.Integer, db.ForeignKey("set.id"))
# marshallers
from flask.ext.restful import fields
parameter_marshaller = {
    "flag": fields.String,
    "value": fields.String
}

set_marshaller = {
    'id': fields.String,
    'title': fields.String,
    'parameters': fields.List(fields.Nested(parameter_marshaller))
}
# endpoint
class SetApi(Resource):
    @marshal_with(marshallers.set_marshaller)
    def get(self, set_id):
        entity = Set.query.get(set_id)
        return entity

restful_api = Api(app)
restful_api.add_resource(SetApi, "/api/set/<int:set_id>")
Now when I call /api/set/1 I get a server error:
TypeError: 'Set' object is unsubscriptable
So I need a way to correctly define set_marshaller that endpoint return this json:
{
    "id": "1",
    "title": "any-title",
    "parameters": [
        {"flag": "any-flag", "value": "any-value"},
        {"flag": "any-flag", "value": "any-value"},
        .....
    ]
}
I appreciate any help.
|
I found solution to that problem myself.
After playing around with flask-restful i find out that i made few mistakes:
Firstly set_marshaller should look like this:
set_marshaller = {
    'id': fields.String,
    'title': fields.String,
    'parameters': fields.Nested(parameter_marshaller)
}
The flask-restful marshaller can handle the case where a parameter is a list, and marshals it to a json list.
Another problem was that the Set parameters relationship uses lazy loading, so when I tried to marshal Set I got KeyError: 'parameters'. I need to explicitly load the parameters, like this:
class SetApi(Resource):
    @marshal_with(marshallers.set_marshaller)
    def get(self, set_id):
        entity = Set.query.get(set_id)
        entity.parameters  # loads parameters from db
        return entity
Or another option is to change model relationship:
parameters = db.relationship("Parameters", backref="set", cascade="all", lazy="joined")
|
How to use Bigquery streaming insertall on app engine & python
|
I would like to develop an app engine application that directly stream data into a BigQuery table.
According to Google's documentation there is a simple way to stream data into bigquery:
http://googlecloudplatform.blogspot.co.il/2013/09/google-bigquery-goes-real-time-with-streaming-inserts-time-based-queries-and-more.html
https://developers.google.com/bigquery/streaming-data-into-bigquery#streaminginsertexamples
(note: in the above link you should select the python tab and not Java)
Here is the sample code snippet on how streaming insert should be coded:
body = {"rows":[
{"json": {"column_name":7.7,}}
]}
response = bigquery.tabledata().insertAll(
projectId=PROJECT_ID,
datasetId=DATASET_ID,
tableId=TABLE_ID,
body=body).execute()
Although I've downloaded the client api, I didn't find any reference to the "bigquery" module/object used in Google's example above.
Where is the the bigquery object (from snippet) should be located?
Can anyone show a more complete way to use this snippet (with the right imports)?
I've been searching for this a lot and found the documentation confusing and partial.
|
Minimal working (as long as you fill in the right ids for your project) example:
import httplib2
from apiclient import discovery
from oauth2client import appengine
_SCOPE = 'https://www.googleapis.com/auth/bigquery'
# Change the following 3 values:
PROJECT_ID = 'your_project'
DATASET_ID = 'your_dataset'
TABLE_ID = 'TestTable'
body = {"rows":[
{"json": {"Col1":7,}}
]}
credentials = appengine.AppAssertionCredentials(scope=_SCOPE)
http = credentials.authorize(httplib2.Http())
bigquery = discovery.build('bigquery', 'v2', http=http)
response = bigquery.tabledata().insertAll(
projectId=PROJECT_ID,
datasetId=DATASET_ID,
tableId=TABLE_ID,
body=body).execute()
print response
As Jordan says: "Note that this uses the appengine robot to authenticate with BigQuery, so you'll need to add the robot account to the ACL of the dataset. Note that if you also want to use the robot to run queries, not just stream, you need the robot to be a member of the project 'team' so that it is authorized to run jobs."
|
Difference between numpy.array shape (R, 1) and (R,)
|
In numpy, some of the operations return results in shape (R, 1) but some return (R,). This makes matrix multiplication more tedious since an explicit reshape is required. For example, given a matrix M, suppose we want to do numpy.dot(M[:,0], numpy.ones((1, R))) where R is the number of rows (of course, the same issue also occurs column-wise). We will get a matrices are not aligned error since M[:,0] has shape (R,) but numpy.ones((1, R)) has shape (1, R).
So my questions are:
What's the difference between shape (R, 1) and (R,). I know literally it's list of numbers and list of lists where all list contains only a number. Just wondering why not design numpy so that it favors shape (R, 1) instead of (R,) for easier matrix multiplication.
Are there better ways for the above example? Without explicitly reshape like this: numpy.dot(M[:,0].reshape(R, 1), numpy.ones((1, R)))
|
1. The meaning of shapes in NumPy
You write, "I know literally it's list of numbers and list of lists where all list contains only a number" but that's a bit of an unhelpful way to think about it.
The best way to think about NumPy arrays is that they consist of two parts, a data buffer which is just a block of raw elements, and a view which describes how to interpret the data buffer.
For example, if we create an array of 12 integers:
>>> a = numpy.arange(12)
>>> a
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
Then a consists of a data buffer, arranged something like this:
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
and a view which describes how to interpret the data:
>>> a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
>>> a.dtype
dtype('int64')
>>> a.itemsize
8
>>> a.shape
(12,)
Here the shape (12,) means the array is indexed by a single index which runs from 0 to 11. Conceptually, if we label this single index i, the array a looks like this:
i= 0    1    2    3    4    5    6    7    8    9   10   11
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
If we reshape an array, this doesn't change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after:
>>> b = a.reshape((3, 4))
the array b has the same data buffer as a, but now it is indexed by two indices which run from 0 to 2 and 0 to 3 respectively. If we label the two indices i and j, the array b looks like this:
i= 0    0    0    0    1    1    1    1    2    2    2    2
j= 0    1    2    3    0    1    2    3    0    1    2    3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
which means that:
>>> b[2,1]
9
You can see that the second index changes quickly and the first index changes slowly. If you prefer this to be the other way round, you can specify the order parameter:
>>> c = a.reshape((3, 4), order='F')
which results in an array indexed like this:
i= 0    1    2    0    1    2    0    1    2    0    1    2
j= 0    0    0    1    1    1    2    2    2    3    3    3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
which means that:
>>> c[2,1]
5
It should now be clear what it means for an array to have a shape with one or more dimensions of size 1. After:
>>> d = a.reshape((12, 1))
the array d is indexed by two indices, the first of which runs from 0 to 11, and the second index is always 0:
i= 0    1    2    3    4    5    6    7    8    9   10   11
j= 0    0    0    0    0    0    0    0    0    0    0    0
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
and so:
>>> d[10,0]
10
A dimension of length 1 is "free" (in some sense), so there's nothing stopping you from going to town:
>>> e = a.reshape((1, 2, 1, 6, 1))
giving an array indexed like this:
i= 0    0    0    0    0    0    0    0    0    0    0    0
j= 0    0    0    0    0    0    1    1    1    1    1    1
k= 0    0    0    0    0    0    0    0    0    0    0    0
l= 0    1    2    3    4    5    0    1    2    3    4    5
m= 0    0    0    0    0    0    0    0    0    0    0    0
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
and so:
>>> e[0,1,0,0,0]
6
See the NumPy internals documentation for more details about how arrays are implemented.
2. What to do?
Since numpy.reshape just creates a new view, you shouldn't be scared about using it whenever necessary. It's the right tool to use when you want to index an array in a different way.
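A quick way to convince yourself that reshape returns a view onto the same data buffer, rather than a copy (a minimal sketch):
>>> a = numpy.arange(12)
>>> b = a.reshape((3, 4))
>>> b[0, 0] = 99          # write through the reshaped view...
>>> a[0]                  # ...and the original array sees the change
99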
However, in a long computation it's usually possible to arrange to construct arrays with the "right" shape in the first place, and so minimize the number of reshapes and transposes. But without seeing the actual context that led to the need for a reshape, it's hard to say what should be changed.
The example in your question is:
numpy.dot(M[:,0], numpy.ones((1, R)))
but this is not realistic. First, this expression:
M[:,0].sum()
computes the result more simply. Second, is there really something special about column 0? Perhaps what you actually need is:
M.sum(axis=0)
|
Failed to install Python Cryptography package with PIP and setup.py
|
When I try to install the Cryptography package for Python through either pip install cryptography or by downloading the package from their site and running python setup.py, I get the following error:
D:\Anaconda\Scripts\pip-script.py run on 02/27/14 16:13:17
Downloading/unpacking cryptography
Getting page https://pypi.python.org/simple/cryptography/
URLs to search for versions for cryptography:
* https://pypi.python.org/simple/cryptography/
Analyzing links from page https://pypi.python.org/simple/cryptography/
Skipping https://pypi.python.org/packages/cp26/c/cryptography/cryptography-0.2-cp26-none-win32.whl#md5=13e5c4b19520e7dc6f07c6502b3f74e2 (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp26/c/cryptography/cryptography-0.2.1-cp26-none-win32.whl#md5=00e733648ee5cdb9e58876238b1328f8 (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp27/c/cryptography/cryptography-0.2-cp27-none-win32.whl#md5=013ccafa6a5a3ea92c73f2c1c4879406 (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp27/c/cryptography/cryptography-0.2.1-cp27-none-win32.whl#md5=127d6a5dc687250721f892d55720a06c (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp32/c/cryptography/cryptography-0.2-cp32-none-win32.whl#md5=051424a36e91039807b72f112333ded3 (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp32/c/cryptography/cryptography-0.2.1-cp32-none-win32.whl#md5=53f6f57db8e952d64283baaa14cbde3d (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp33/c/cryptography/cryptography-0.2-cp33-none-win32.whl#md5=302812c1c1a035cf9ba3292f8dbf3f9e (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Skipping https://pypi.python.org/packages/cp33/c/cryptography/cryptography-0.2.1-cp33-none-win32.whl#md5=81acca90caf8a45f2ca73f3f9859fae4 (from https://pypi.python.org/simple/cryptography/) because it is not compatible with this Python
Found link https://pypi.python.org/packages/source/c/cryptography/cryptography-0.1.tar.gz#md5=bdc1c5fe069deca7467b71a0cc538f17 (from https://pypi.python.org/simple/cryptography/), version: 0.1
Found link https://pypi.python.org/packages/source/c/cryptography/cryptography-0.2.1.tar.gz#md5=872fc04268dadc66a0305ae5ab1c123b (from https://pypi.python.org/simple/cryptography/), version: 0.2.1
Found link https://pypi.python.org/packages/source/c/cryptography/cryptography-0.2.tar.gz#md5=8a3d21e837a21e1b7634ee1f22b06bb6 (from https://pypi.python.org/simple/cryptography/), version: 0.2
Using version 0.2.1 (newest of versions: 0.2.1, 0.2, 0.1)
Downloading from URL https://pypi.python.org/packages/source/c/cryptography/cryptography-0.2.1.tar.gz#md5=872fc04268dadc66a0305ae5ab1c123b (from https://pypi.python.org/simple/cryptography/)
Running setup.py (path:c:\users\paco\appdata\local\temp\pip_build_Paco\cryptography\setup.py) egg_info for package cryptography
In file included from c/_cffi_backend.c:7:0:
c/misc_win32.h:225:23: error: two or more data types in declaration specifiers
c/misc_win32.h:225:1: warning: useless type name in empty declaration [enabled by default]
c/_cffi_backend.c: In function 'convert_array_from_object':
c/_cffi_backend.c:1105:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1105:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1130:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1130:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1150:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1150:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'convert_struct_from_object':
c/_cffi_backend.c:1183:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1183:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1196:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1196:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_repr':
c/_cffi_backend.c:1583:13: warning: unknown conversion type character 'L' in format [-Wformat]
c/_cffi_backend.c:1583:13: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1595:9: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1595:9: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdataowning_repr':
c/_cffi_backend.c:1647:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1647:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function '_cdata_get_indexed_ptr':
c/_cffi_backend.c:1820:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1820:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1820:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function '_cdata_getslicearg':
c/_cffi_backend.c:1872:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1872:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1872:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_ass_slice':
c/_cffi_backend.c:1951:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1951:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1951:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1969:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1969:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1969:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1983:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1983:22: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_call':
c/_cffi_backend.c:2367:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2367:30: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2367:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cast_to_integer_or_char':
c/_cffi_backend.c:2916:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2916:26: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2916:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:2928:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2928:26: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2928:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'new_array_type':
c/_cffi_backend.c:3480:9: warning: unknown conversion type character 'l' in format [-Wformat]
c/_cffi_backend.c:3480:9: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'b_complete_struct_or_union':
c/_cffi_backend.c:3878:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:3878:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:3878:22: warning: too many arguments for format [-Wformat-extra-args]
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "c:\users\paco\appdata\local\temp\pip_build_Paco\cryptography\setup.py", line 113, in <module>
"build": cffi_build,
File "D:\Anaconda\lib\distutils\core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 239, in __init__
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 264, in fetch_build_eggs
File "build\bdist.win-amd64\egg\pkg_resources.py", line 580, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "build\bdist.win-amd64\egg\pkg_resources.py", line 818, in best_match
return self.obtain(req, installer) # try and download/install
File "build\bdist.win-amd64\egg\pkg_resources.py", line 830, in obtain
return installer(requirement)
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 314, in fetch_build_egg
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 593, in easy_install
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 623, in install_item
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 809, in install_eggs
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 1015, in build_and_install
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 1003, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'gcc' failed with exit status 1
Complete output from command python setup.py egg_info:
In file included from c/_cffi_backend.c:7:0:
c/misc_win32.h:225:23: error: two or more data types in declaration specifiers
c/misc_win32.h:225:1: warning: useless type name in empty declaration [enabled by default]
c/_cffi_backend.c: In function 'convert_array_from_object':
c/_cffi_backend.c:1105:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1105:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1130:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1130:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1150:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1150:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'convert_struct_from_object':
c/_cffi_backend.c:1183:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1183:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1196:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1196:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_repr':
c/_cffi_backend.c:1583:13: warning: unknown conversion type character 'L' in format [-Wformat]
c/_cffi_backend.c:1583:13: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1595:9: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1595:9: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdataowning_repr':
c/_cffi_backend.c:1647:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1647:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function '_cdata_get_indexed_ptr':
c/_cffi_backend.c:1820:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1820:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1820:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function '_cdata_getslicearg':
c/_cffi_backend.c:1872:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1872:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1872:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_ass_slice':
c/_cffi_backend.c:1951:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1951:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1951:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1969:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1969:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1969:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:1983:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:1983:22: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cdata_call':
c/_cffi_backend.c:2367:30: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2367:30: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2367:30: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'cast_to_integer_or_char':
c/_cffi_backend.c:2916:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2916:26: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2916:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c:2928:26: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:2928:26: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
c/_cffi_backend.c:2928:26: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'new_array_type':
c/_cffi_backend.c:3480:9: warning: unknown conversion type character 'l' in format [-Wformat]
c/_cffi_backend.c:3480:9: warning: too many arguments for format [-Wformat-extra-args]
c/_cffi_backend.c: In function 'b_complete_struct_or_union':
c/_cffi_backend.c:3878:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:3878:22: warning: unknown conversion type character 'z' in format [-Wformat]
c/_cffi_backend.c:3878:22: warning: too many arguments for format [-Wformat-extra-args]
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "c:\users\paco\appdata\local\temp\pip_build_Paco\cryptography\setup.py", line 113, in <module>
"build": cffi_build,
File "D:\Anaconda\lib\distutils\core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 239, in __init__
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 264, in fetch_build_eggs
File "build\bdist.win-amd64\egg\pkg_resources.py", line 580, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "build\bdist.win-amd64\egg\pkg_resources.py", line 818, in best_match
return self.obtain(req, installer) # try and download/install
File "build\bdist.win-amd64\egg\pkg_resources.py", line 830, in obtain
return installer(requirement)
File "build\bdist.win-amd64\egg\setuptools\dist.py", line 314, in fetch_build_egg
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 593, in easy_install
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 623, in install_item
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 809, in install_eggs
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 1015, in build_and_install
File "build\bdist.win-amd64\egg\setuptools\command\easy_install.py", line 1003, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'gcc' failed with exit status 1
----------------------------------------
Cleaning up...
Removing temporary dir c:\users\paco\appdata\local\temp\pip_build_Paco...
Command python setup.py egg_info failed with error code 1 in c:\users\paco\appdata\local\temp\pip_build_Paco\cryptography
Exception information:
Traceback (most recent call last):
File "D:\Anaconda\lib\site-packages\pip-1.5.4-py2.7.egg\pip\basecommand.py", line 122, in main
status = self.run(options, args)
File "D:\Anaconda\lib\site-packages\pip-1.5.4-py2.7.egg\pip\commands\install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "D:\Anaconda\lib\site-packages\pip-1.5.4-py2.7.egg\pip\req.py", line 1229, in prepare_files
req_to_install.run_egg_info()
File "D:\Anaconda\lib\site-packages\pip-1.5.4-py2.7.egg\pip\req.py", line 325, in run_egg_info
command_desc='python setup.py egg_info')
File "D:\Anaconda\lib\site-packages\pip-1.5.4-py2.7.egg\pip\util.py", line 697, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command python setup.py egg_info failed with error code 1 in c:\users\paco\appdata\local\temp\pip_build_Paco\cryptography
I found other egg_info error posts (here and here) but the solutions provided there didn't solve my problem. Also, I am able to install other packages through pip.
pip version 1.5.4, setuptools version 2.2.
|
I had a similar issue and found I was simply missing a dependency (libssl-dev, for me). As described in https://cryptography.io/en/latest/installation/, ensure that all dependencies are met:
On Windows
If you're on Windows you'll need to make sure you have OpenSSL installed. There are pre-compiled binaries available. If your installation is in an unusual location, set the LIB and INCLUDE environment variables to include the corresponding locations. For example:
C:\> \path\to\vcvarsall.bat x86_amd64
C:\> set LIB=C:\OpenSSL-1.0.1f-64bit\lib;%LIB%
C:\> set INCLUDE=C:\OpenSSL-1.0.1f-64bit\include;%INCLUDE%
C:\> pip install cryptography
Building cryptography on Linux
cryptography should build very easily on Linux provided you have a C compiler, headers for Python (if you're not using PyPy), and headers for the OpenSSL and libffi libraries available on your system.
For Debian and Ubuntu, the following command will ensure that the required dependencies are installed:
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
For Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed:
sudo yum install gcc libffi-devel python-devel openssl-devel
You should now be able to build and install cryptography with the usual:
pip install cryptography
|
Installing py-ldap on Mac OS X Mavericks (missing sasl.h)
|
I can't seem to get the python-ldap module installed on my OS X Mavericks 10.9.1 machine.
Kernel details:
uname -a
Darwin 13.0.0 Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE_X86_64 x86_64
I tried what was suggested here:
http://projects.skurfer.com/posts/2011/python_ldap_lion/
But when I try to use pip I get a different error
Modules/LDAPObject.c:18:10: fatal error: 'sasl.h' file not found
#include <sasl.h>
I also tried what was suggested here:
python-ldap OS X 10.6 and Python 2.6
But with the same error.
I am hoping someone could help me out here.
|
Using pieces from both @hharnisc's and @mick-t's answers:
pip install python-ldap \
--global-option=build_ext \
--global-option="-I$(xcrun --show-sdk-path)/usr/include/sasl"
|
Create a PyCharm configuration that runs a module a la "python -m foo"
|
My python entrypoint needs to be run as a module (not a script), as in:
python -m foo.bar
The following does not work (and is not supposed to):
python foo/bar.py
How can I create a run configuration in PyCharm that runs my code using the first invocation above?
|
According to man python, the -m option
-m module-name
Searches sys.path for the named module and runs the corresponding .py file as a script.
So most of the time you can just right-click on bar.py in the Project tool window and select Run bar.
If you really need to use the -m option, then specify it as an Interpreter option, with the module name as the Script, in the Edit Configurations dialog.
|
How do Scrapy rules work with crawl spider
|
I have a hard time understanding Scrapy crawl spider rules. I have an example that doesn't work as I would like it to, so it can be one of two things:
I don't understand how rules work.
I formed an incorrect regex that prevents me from getting the results that I need.
OK, here is what I want to do:
I want to write a crawl spider that will get all available statistics information from the http://www.euroleague.net website.
The website page that hosts all the information I need for the start is here.
Step 1
The first step, I am thinking, is to extract the "Seasons" link(s) and follow them.
Here is the HTML/href that I intend to match (I want to match all links in the "Seasons" section one by one, but I think it will be easier to have one link as an example):
href="/main/results/by-date?seasoncode=E2001"
And here is a rule/regex that I created for it:
Rule(SgmlLinkExtractor(allow=('by-date\?seasoncode\=E\d+',)),follow=True),
Step 2
When the spider brings me to the web page http://www.euroleague.net/main/results/by-date?seasoncode=E2001, for the second step I want the spider to extract link(s) from the "Regular season" section. In this case, let's say it should be "Round 1". The HTML/href that I am looking for is:
<a href="/main/results/by-date?seasoncode=E2001&gamenumber=1&phasetypecode=RS"
And the rule/regex that I constructed for it would be:
Rule(SgmlLinkExtractor(allow=('seasoncode\=E\d+\&gamenumber\=\d+\&phasetypecode\=\w+',)),follow=True),
Step 3
Now that I have reached the page (http://www.euroleague.net/main/results/by-date?seasoncode=E2001&gamenumber=1&phasetypecode=RS), I am ready to extract the links that lead to the pages that have all the information I need.
I am looking for this HTML/href:
href="/main/results/showgame?gamenumber=1&phasetypecode=RS&gamecode=4&seasoncode=E2001#!boxscore"
And the regex that has to match it would be:
Rule(SgmlLinkExtractor(allow=('gamenumber\=\d+\&phasetypecode\=\w+\&gamecode\=\d+\&seasoncode\=E\d+',)),callback='parse_item'),
The problem
I think the crawler should work something like this:
The rules crawler is something like a loop. When the first link is matched, the crawler will follow it to the "Step 2" page, then to "Step 3", and after that it will extract the data. After doing that, it will return to "Step 1" to match the second link, and loop again until there are no links left in the first step.
From what I see in the terminal, it seems that the crawler loops in "Step 1". It loops through all the "Step 1" links, but never involves the "Step 2"/"Step 3" rules.
2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2000> (referer: http:// www.euroleague.net/main/results/by-date)
2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2001> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2002> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2003> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:33+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2004> (referer: http://www.euroleague.net/main/results/by-date)
After it loops through all the "Seasons" links, it starts with links that I don't see in any of the three steps that I mentioned:
http://www.euroleague.net/main/results/by-date?gamenumber=23&phasetypecode=TS++++++++&seasoncode=E2013
And such a link structure can only be found by looping through all the links in "Step 2" without returning to the "Step 1" starting point.
The question would be:
How do rules work? Do they work step by step, the way I intend them to in this example, or does every rule have its own loop, moving from rule to rule only after it has finished looping through the first rule?
That is how I see it. Of course, it is very possible that something is wrong with my rules/regex.
And here is everything I am getting from the terminal:
scrapy crawl basketsp_test -o item6.xml -t xml
2014-02-28 01:09:20+0200 [scrapy] INFO: Scrapy 0.20.0 started (bot: basketbase)
2014-02-28 01:09:20+0200 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2014-02-28 01:09:20+0200 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'basketbase.spiders', 'FEED_FORMAT': 'xml', 'SPIDER_MODULES': ['basketbase.spiders'], 'FEED_URI': 'item6.xml', 'BOT_NAME': 'basketbase'}
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled item pipelines: Basketpipeline3, Basketpipeline1db
2014-02-28 01:09:21+0200 [basketsp_test] INFO: Spider opened
2014-02-28 01:09:21+0200 [basketsp_test] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-02-28 01:09:21+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date> (referer: None)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Filtered duplicate request: <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2013> - no more duplicates will be shown (see DUPEFILTER_CLASS)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2000> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:23+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2001> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:23+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2002> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:24+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2003> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:24+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2004> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:25+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2005> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:26+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2006> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:26+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2007> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:27+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2008> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:27+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2009> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:28+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2010> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:29+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2011> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:29+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2012> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:30+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=24&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:30+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=23&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=22&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=21&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=20&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:33+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=19&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:34+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=18&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:34+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=17&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:35+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=16&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:35+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=15&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:36+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=14&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:37+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=13&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:37+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=12&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:38+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=11&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:39+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=10&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:39+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=9&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:40+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=8&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:40+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=7&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:41+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=6&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:42+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=5&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:42+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=4&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:43+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=3&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=2&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=1&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Closing spider (finished)
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 13663,
'downloader/request_count': 39,
'downloader/request_method_count/GET': 39,
'downloader/response_bytes': 527838,
'downloader/response_count': 39,
'downloader/response_status_count/200': 39,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 2, 27, 23, 9, 44, 569579),
'log_count/DEBUG': 46,
'log_count/INFO': 3,
'request_depth_max': 2,
'response_received_count': 39,
'scheduler/dequeued': 39,
'scheduler/dequeued/memory': 39,
'scheduler/enqueued': 39,
'scheduler/enqueued/memory': 39,
'start_time': datetime.datetime(2014, 2, 27, 23, 9, 21, 111255)}
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Spider closed (finished)
And here is the rules part of the crawler:
class Basketspider(CrawlSpider):
    name = "basketsp_test"
    download_delay = 0.5
    allowed_domains = ["www.euroleague.net"]
    start_urls = ["http://www.euroleague.net/main/results/by-date"]
    rules = (
        Rule(SgmlLinkExtractor(allow=('by-date\?seasoncode\=E\d+',)), follow=True),
        Rule(SgmlLinkExtractor(allow=('seasoncode\=E\d+\&gamenumber\=\d+\&phasetypecode\=\w+',)), follow=True),
        Rule(SgmlLinkExtractor(allow=('gamenumber\=\d+\&phasetypecode\=\w+\&gamecode\=\d+\&seasoncode\=E\d+',)), callback='parse_item'),
    )
|
You are right. According to the source code, before returning each response to the callback function, the crawler loops over the rules, starting from the first. You should keep this in mind when you write your rules. For example, consider the following rules:
rules = (
    Rule(SgmlLinkExtractor(allow=(r'/items',)), callback='parse_item', follow=True),
    Rule(SgmlLinkExtractor(allow=(r'/items/electronics',)), callback='parse_electronic_item', follow=True),
)
The second rule will never be applied, since all the links will be extracted by the first rule with the parse_item callback. The matches for the second rule will be filtered out as duplicates by scrapy.dupefilter.RFPDupeFilter. You should use deny to match the links correctly:
rules = (
    Rule(SgmlLinkExtractor(allow=(r'/items',)), deny=(r'/items/electronics',), callback='parse_item', follow=True),
    Rule(SgmlLinkExtractor(allow=(r'/items/electronics',)), callback='parse_electronic_item', follow=True),
)
|
Django: How to pre-populate FormView with dynamic (non-model) data?
|
I have a FormView view, with some additional GET context supplied using get_context_data():
class SignUpView(FormView):
    template_name = 'pages_fixed/accounts/signup.html'
    form_class = SignUpForm

    def get_context_data(self, **kwargs):
        context = super(SignUpView, self).get_context_data(**kwargs)
        context.update({
            'plans': common.plans,
            'pricing': common.pricing,
        })
        return context
This works fine. However, I also have some values in the session (not from any bound model) which I would like to pre-populate into the form. These vary depending on the user's actions on previous page(s). I know (from my other post) that I can pass the form into the context (with initial=), but is that possible in a FormView situation like the above?
|
You can override the FormView class's 'get_initial' method. See here for more info,
e.g.
def get_initial(self):
    """
    Returns the initial data to use for forms on this view.
    """
    initial = super(SignUpView, self).get_initial()
    initial['my_form_field1'] = self.request.something
    return initial
'get_initial' should return a dictionary where the keys are the names of the fields on the form and the values are the initial values to use when showing the form to the user.
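For the session-backed values described in the question, a sketch might look like the following (the form field name and session key are hypothetical):
def get_initial(self):
    initial = super(SignUpView, self).get_initial()
    # 'selected_plan' is a made-up session key set by a previous page
    initial['plan'] = self.request.session.get('selected_plan')
    return initial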
|
Pandas DataFrame performance
|
Pandas is really great, but I am really surprised by how inefficient it is to retrieve values from a Pandas.DataFrame. In the following toy example, even the DataFrame.iloc method is more than 100 times slower than a dictionary.
The question: Is the lesson here just that dictionaries are the better way to look up values? Yes, I get that that is precisely what they were made for. But I just wonder if there is something I am missing about DataFrame lookup performance.
I realize this question is more "musing" than "asking" but I will accept an answer that provides insight or perspective on this. Thanks.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 10]))
dictionary = df.to_dict()
'''
f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
for func in f:
    print func
    print min(timeit.Timer(func, setup).repeat(3, 100000))
value = dictionary[5][5]
0.130625009537
value = df.loc[5, 5]
19.4681699276
value = df.iloc[5, 5]
17.2575249672
|
A dict is to a bicycle as a DataFrame is to a car.
You can pedal 10 feet on a bicycle faster than you can start a car, get it in gear, etc, etc. But if you need to go a mile, the car wins.
For certain small, targeted purposes, a dict may be faster.
And if that is all you need, then use a dict, for sure! But if you need/want the power and luxury of a DataFrame, then a dict is no substitute. It is meaningless to compare speed if the data structure does not first satisfy your needs.
Now for example -- to be more concrete -- a dict is good for accessing columns, but it is not so convenient for accessing rows.
import timeit
setup = '''
import numpy, pandas
df = pandas.DataFrame(numpy.zeros(shape=[10, 1000]))
dictionary = df.to_dict()
'''
# f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]']
f = ['value = [val[5] for col,val in dictionary.items()]', 'value = df.loc[5]', 'value = df.iloc[5]']
for func in f:
    print(func)
    print(min(timeit.Timer(func, setup).repeat(3, 100000)))
yields
value = [val[5] for col,val in dictionary.items()]
25.5416321754
value = df.loc[5]
5.68071913719
value = df.iloc[5]
4.56006002426
So the dict of lists is 5 times slower at retrieving rows than df.iloc. The speed deficit becomes greater as the number of columns grows. (The number of columns is like the number of feet in the bicycle analogy. The longer the distance, the more convenient the car becomes...)
This is just one example of when a dict of lists would be less convenient/slower than a DataFrame.
Another example would be when you have a DatetimeIndex for the rows and wish to select all rows between certain dates. With a DataFrame you can use
df.loc['2000-1-1':'2000-3-31']
There is no easy analogue for that if you were to use a dict of lists. And the Python loops you would need to use to select the right rows would again be terribly slow compared to the DataFrame.
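To make that date-slicing example concrete, here is a small sketch (the frame and its dates are made up for illustration):
import numpy, pandas

dates = pandas.date_range('2000-01-01', periods=100)    # daily DatetimeIndex
df = pandas.DataFrame(numpy.random.randn(100, 3), index=dates)
print df.loc['2000-1-1':'2000-3-31']                    # label-based date slice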
|
how do you filter pandas dataframes by multiple columns
|
To filter a dataframe (df) by a single column, if we consider data with males and females we might:
males = df[df['Gender']=='Male']
Question 1 - But what if the data spanned multiple years and I wanted to only see males for 2014?
In other languages I might do something like:
if A = "Male" and if B = "2014" then
(except I want to do this and get a subset of the original dataframe in a new dataframe object)
Question 2. How do I do this in a loop, and create a dataframe object for each unique set of year and gender (i.e. a df for 2013-Male, 2013-Female, 2014-Male, and 2014-Female)?
for y in year:
for g in gender:
df = .....
|
Use the & operator, and don't forget to wrap the sub-statements in ():
males = df[(df['Gender']=='Male') & (df['Year']==2014)]
To store your dataframes in a dict using a for loop:
from collections import defaultdict

dic = {}
for g in ['male', 'female']:
    dic[g] = defaultdict(dict)
    for y in [2013, 2014]:
        dic[g][y] = df[(df['Gender']==g) & (df['Year']==y)]  # store the DataFrames in a dict of dicts
EDIT:
A demo for your getDF:
def getDF(dic, gender, year):
    return dic[gender][year]

print getDF(dic, 'male', 2014)
|
Unpacking more than one list as argument for a function
|
If I have a function like:
def f(a,b,c,d):
print a,b,c,d
Then why does this work:
f(1,2,3,4)
f(*[1,2,3,4])
But not this:
f(*[1,2] , *[3,4])
f(*[1,2] , *[3,4])
^
SyntaxError: invalid syntax
?
EDIT:
For context, the original problem was to replace one of the arguments in a function wrapper. I wanted to replace a given member of the incoming *args and tried something like:
def vectorize_pos(f, n=0):
    '''
    Decorator: vectorize the processing of the nth argument
    :param f: function that doesn't accept a list as its nth argument
    '''
    def vectorizedFunction(*args, **kwargs):
        if isinstance(args[n], list):
            return map(lambda x: f(*(args[:n]), x, *(args[n+1:]), **kwargs), args[n])
        else:
            return f(*args, **kwargs)
    return vectorizedFunction
That's where the question arose. I know there are other ways to do the same thing; I only wanted to understand why unpacking one sequence works but unpacking more than one does not.
|
Because, as per the Function call syntax, this is how the argument list is defined:
argument_list ::= positional_arguments ["," keyword_arguments]
["," "*" expression] ["," keyword_arguments]
["," "**" expression]
| keyword_arguments ["," "*" expression]
["," keyword_arguments] ["," "**" expression]
| "*" expression ["," keyword_arguments] ["," "**" expression]
| "**" expression
So, you can pass only one * expression per function call.
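(Python 3.5 later relaxed this grammar via PEP 448, which allows several unpackings in one call.) In Python 2, a common workaround is to combine the sequences into a single iterable before unpacking, for example by concatenation or with itertools.chain (a minimal sketch):
from itertools import chain

def f(a, b, c, d):
    print a, b, c, d

f(*([1, 2] + [3, 4]))        # concatenate the lists, then unpack once
f(*chain([1, 2], [3, 4]))    # chain works for arbitrary iterables too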
|
Consuming a kinesis stream in python
|
I can't seem to find a decent example that shows how I can consume an AWS Kinesis stream via Python. Can someone please provide me with some examples I could look into?
Best
|
You should use boto.kinesis:
from boto import kinesis
After you have created a stream:
Step 1: connect to AWS Kinesis:
auth = {"aws_access_key_id":"id", "aws_secret_access_key":"key"}
connection = kinesis.connect_to_region('us-east-1',**auth)
Step 2: get the stream info (like how many shards it has and whether it is active):
import time

# 'logger' is assumed to be a logging.Logger configured elsewhere
tries = 0
while tries < 10:
    tries += 1
    time.sleep(1)
    try:
        response = connection.describe_stream('stream_name')
        if response['StreamDescription']['StreamStatus'] == 'ACTIVE':
            break
    except Exception as e:
        logger.error('error while trying to describe kinesis stream : %s', e)
else:
    raise Exception('Stream is still not active, aborting...')
Step 3: get all shard ids, and for each shard id get the shard iterator:
shard_ids = []
stream_name = None
# e.g. 'TRIM_HORIZON' to start from the oldest record, or 'LATEST'
shard_iterator_type = 'TRIM_HORIZON'
if response and 'StreamDescription' in response:
    stream_name = response['StreamDescription']['StreamName']
    for shard_id in response['StreamDescription']['Shards']:
        shard_id = shard_id['ShardId']
        shard_iterator = connection.get_shard_iterator(stream_name, shard_id, shard_iterator_type)
        shard_ids.append({'shard_id': shard_id, 'shard_iterator': shard_iterator['ShardIterator']})
Step 4: read the data from each shard.
limit is the maximum number of records that you want to receive (you can receive up to 10 MB).
shard_iterator is the shard iterator from the previous step.
tries = 0
result = []
while tries < 100:
    tries += 1
    response = connection.get_records(shard_iterator=shard_iterator, limit=limit)
    shard_iterator = response['NextShardIterator']
    if len(response['Records']) > 0:
        for res in response['Records']:
            result.append(res['Data'])
return result, shard_iterator
In your next call to get_records, you should use the shard_iterator that you received with the result of the previous get_records.
Note: in one call to get_records (with limit=None) you can receive empty records.
If you call get_records with a limit, you will get the records that share the same partition key (when you put data into the stream, you have to use a partition key):
connection.put_record(stream_name, data, partition_key)
|
Can not get mysql-connector-python to install in virtualenv
|
I'm using Amazon Linux AMI release 2013.09. I've installed virtualenv, and after activating it I run pip install mysql-connector-python, but when I run my app I get the error: ImportError: No module named mysql.connector. Has anyone else had trouble doing this? I can install it outside of virtualenv and my script runs without issues. Thanks in advance for any help!
|
Several things. There is an inconsistency in package naming so you may want to do:
pip search mysql-connector
to find out what it is called on your platform. I got two results mysql-connector-python and mysql-connector-repackaged.
so try this first:
pip install mysql-connector-python
this may additionally give an error like this:
Some externally hosted files were ignored (use
--allow-external mysql-connector-python to allow).
so finally this should do the job:
pip install mysql-connector-python --allow-external mysql-connector-python
|
Get total physical memory in Python
|
How can I get the total physical memory within Python in a distribution agnostic fashion? I don't need used memory, just the total physical memory.
|
Your best bet for a cross-platform solution is to use the psutil package (available on PyPI):
from psutil import virtual_memory
mem = virtual_memory()
mem.total # total physical memory available
Documentation for virtual_memory is here.
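mem.total is reported in bytes, so converting to a friendlier unit is a one-liner, e.g.:
print(mem.total / (1024 ** 3))  # total physical memory in GiB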
|
Are list-comprehensions and functional functions faster than "for loops"?
|
In terms of performance in Python, are list comprehensions, or functions like map(), filter() and reduce(), faster than a for loop? Why, technically, do they "run at C speed", while the for loop "runs at the speed of the Python virtual machine"?
Suppose that in a game that I'm developing I need to draw complex and huge maps using for loops. This question would definitely be relevant, for if a list comprehension, for example, is indeed faster, it would be a much better option in order to avoid lag (despite the visual complexity of the code).
|
The following are rough guidelines and educated guesses based on experience. You should timeit or profile your concrete use case to get hard numbers, and those numbers may occasionally disagree with the below.
A list comprehension is usually a tiny bit faster than the precisely equivalent for loop (that actually builds a list), most likely because it doesn't have to look up the list and its append method on every iteration. However, a list comprehension still does a bytecode-level loop:
>>> dis.dis(<the code object for `[x for x in range(10)]`>)
1 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 12 (to 21)
9 STORE_FAST 1 (x)
12 LOAD_FAST 1 (x)
15 LIST_APPEND 2
18 JUMP_ABSOLUTE 6
>> 21 RETURN_VALUE
Using a list comprehension in place of a loop that doesn't build a list, nonsensically accumulating a list of meaningless values and then throwing the list away, is often slower because of the overhead of creating and extending the list. List comprehensions aren't magic that is inherently faster than a good old loop.
As for functional list processing functions: While these are written in C and probably outperform equivalent functions written in Python, they are not necessarily the fastest option. Some speed-up is expected if the function is written in C too. But in most cases using a lambda (or another Python function), the overhead of repeatedly setting up Python stack frames etc. eats up any savings. Simply doing the same work in-line, without function calls (e.g. a list comprehension instead of map or filter), is often slightly faster.
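As a rough illustration of that last point, you can time the two forms yourself; exact numbers will vary by machine and Python version (a minimal sketch):
import timeit

# map with a lambda pays a Python-level function call per element
print timeit.timeit('map(lambda x: x + 1, range(1000))', number=10000)
# the comprehension does the same work in-line, with no call overhead
print timeit.timeit('[x + 1 for x in range(1000)]', number=10000)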
Suppose that in a game that I'm developing I need to draw complex and huge maps using for loops. This question would definitely be relevant, for if a list comprehension, for example, is indeed faster, it would be a much better option in order to avoid lag (despite the visual complexity of the code).
Chances are, if code like this isn't already fast enough when written in good non-"optimized" Python, no amount of Python level micro optimization is going to make it fast enough and you should start thinking about dropping to C. While extensive micro optimizations can often speed up Python code considerably, there is a low (in absolute terms) limit to this. Moreover, even before you hit that ceiling, it becomes simply more cost efficient (15% speedup vs. 300% speed up with the same effort) to bite the bullet and write some C.
|
OverflowError Python int too large to convert to C long
|
#!/usr/bin/python
import sys, math

n = input("enter a number to find the factors : ")
j, flag, b = 0L, False, 0L
for b in xrange(1, n+1):
    a = n + (b*b)
    j = long(math.sqrt(a))
    if a == j*j:
        flag = True
        break
if flag:
    c = j+b
    d = j-b
    print "the first factor is : ", c, " and the second factor is : ", d
When I run this code, it throws different types of errors for different inputs.
The following is for one kind of input:
linux@terminal:~$ ./fermat.py
enter a number to find the factors : 544564564545456
Traceback (most recent call last):
File "./fermat.py", line 8, in <module>
for b in range(1,n+1):
MemoryError
This is for a second input:
linux@terminal:~$ ./fermat.py
enter a number to find the factors : 28888888888888888888888888888888888444444444444444444444444
Traceback (most recent call last):
File "./fermat.py", line 8, in <module>
for b in range(1,n+1):
OverflowError: range() result has too many items
And this is the output for a third run:
linux@terminal:~$ ./fermat.py
enter a number to find the factors : 28888888888888888888888888888888888444444444444444444444444
Traceback (most recent call last):
File "./fermat.py", line 8, in <module>
for b in xrange(1,n+1):
OverflowError: Python int too large to convert to C long
Actually, I was writing code for Fermat factorization to find the factors of a given number. My requirement is that even if I give a hundred-digit number as input, it should produce the output for that number.
Is there any way to get rid of this kind of problem?
I am using Ubuntu with Python 2.7.5+.
|
Annoyingly, in Python 2, xrange requires its arguments to fit into a C long. There isn't quite a drop-in replacement in the standard library. However, you don't quite need a drop-in replacement. You just need to keep going until the loop breaks. That means you want itertools.count, which is like an xrange that just keeps going:
import itertools
for b in itertools.count(1):
...
Also, note that your code has other bugs. It attempts to apply Fermat factorization to even numbers, but Fermat factorization doesn't work on even numbers. Additionally, it fails to consider the case where n is a square, so it won't work for n=9.
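Putting itertools.count together with the loop from the question, a sketch might look like this (note that math.sqrt loses precision once numbers no longer fit in a float, so truly huge inputs would need an exact integer square root instead):
import itertools
import math

n = long(raw_input("enter a number to find the factors : "))
for b in itertools.count(1):
    a = n + b * b
    j = long(math.sqrt(a))
    if j * j == a:   # a is a perfect square, so n = (j+b)*(j-b)
        break
print "the first factor is : ", j + b, " and the second factor is : ", j - b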
|