title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Find the second highest value out of various lists of numbers | 38,665,990 | <p>I made a simple code to find the highest value out of various lists of numbers</p>
<pre><code>lists = [[1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0],[1,2,3,4,5,6,7,6,5,4,4],[-435,-64,-4,-6,-45,-8,-98,-7,-8],[32,45,56,554,12,33]]
for w in lists:
    lst = w
    a = float ("-inf")
    for x in range (0, len (lst)):
        b = lst [x]
        if (b > a):
            a = b
            c = x
    z = lst
print ("The list is:",z)
print ("The highest value is: " , a)
print ("The position is:", c+1)
Out:
The list is: [32, 45, 56, 554, 12, 33]
The highest value is: 554
The position is: 4
</code></pre>
<p>But how can I know the second, third and so on??</p>
<p>I'm looking for something like this:</p>
<pre><code>Out:
The list is: [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
The second highest value is: 98
The position is: 12
</code></pre>
| 0 | 2016-07-29T19:11:07Z | 38,666,039 | <pre><code>>>> lst = [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
>>> sorted(lst)
[-345, -5, -3, -3, 0, 1, 2, 5, 5, 6, 6, 11, 78, 98]
>>>
>>> second_most = sorted(lst)[-2]
>>> second_most
78
>>>
>>> lst.index(78)
6
>>>
</code></pre>
<p>You can sort the list, then take the second-to-last value to get the second-highest value in the list.</p>
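If you also need the value's original position (as in the question's expected output), a small sketch — my addition, not part of the answer above — pairs each value with its index before sorting; `kth_highest` is a hypothetical helper name:

```python
def kth_highest(lst, k):
    # pair each value with its index, then sort by value descending
    ranked = sorted(enumerate(lst), key=lambda p: p[1], reverse=True)
    pos, val = ranked[k - 1]  # k=1 -> highest, k=2 -> second highest, ...
    return val, pos + 1       # 1-based position, as in the question's output

lst = [1, -3, 5, 2, 6, 11, 78, 5, -345, -3, 6, 98, -5, 0]
print(kth_highest(lst, 2))  # (78, 7)
```

Note this avoids the `lst.index(...)` lookup, which would return the first occurrence of a duplicated value.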
| 1 | 2016-07-29T19:14:50Z | [
"python",
"find",
"value"
] |
Find the second highest value out of various lists of numbers | 38,665,990 | <p>I made a simple code to find the highest value out of various lists of numbers</p>
<pre><code>lists = [[1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0],[1,2,3,4,5,6,7,6,5,4,4],[-435,-64,-4,-6,-45,-8,-98,-7,-8],[32,45,56,554,12,33]]
for w in lists:
    lst = w
    a = float ("-inf")
    for x in range (0, len (lst)):
        b = lst [x]
        if (b > a):
            a = b
            c = x
    z = lst
print ("The list is:",z)
print ("The highest value is: " , a)
print ("The position is:", c+1)
Out:
The list is: [32, 45, 56, 554, 12, 33]
The highest value is: 554
The position is: 4
</code></pre>
<p>But how can I know the second, third and so on??</p>
<p>I'm looking for something like this:</p>
<pre><code>Out:
The list is: [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
The second highest value is: 98
The position is: 12
</code></pre>
| 0 | 2016-07-29T19:11:07Z | 38,666,370 | <p>You can use numpy to do this. The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>np.argsort</code></a> method returns a numpy array of the indices which would sort the list.</p>
<pre><code>>>> import numpy as np
>>> list = [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
>>> inds = np.argsort(list)
>>> print('The highest value is: {0}'.format(list[inds[-1]]))
The highest value is: 98
>>> print('Second highest value is: {0}'.format(list[inds[-2]]))
Second highest value is: 78
>>> print('Third highest value is: {0}'.format(list[inds[-3]]))
Third highest value is: 11
</code></pre>
<p>If what you actually want is the second highest <strong>absolute value</strong>, then you can simply take the absolute value of the list using <code>np.abs</code> ahead of time:</p>
<pre><code>>>> import numpy as np
>>> list = [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
>>> inds = np.argsort(np.abs(list))
>>> print('The highest absolute value is: {0}'.format(list[inds[-1]]))
The highest absolute value is: -345
>>> print('Second highest absolute value is: {0}'.format(list[inds[-2]]))
Second highest absolute value is: 98
>>> print('Third highest absolute value is: {0}'.format(list[inds[-3]]))
Third highest absolute value is: 78
</code></pre>
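<p>For long lists, a full <code>argsort</code> costs O(n log n); <code>np.argpartition</code> can pull out just the top-k indices in roughly linear time. A hedged sketch of that alternative (variable names are illustrative):</p>

```python
import numpy as np

lst = [1, -3, 5, 2, 6, 11, 78, 5, -345, -3, 6, 98, -5, 0]
arr = np.asarray(lst)

k = 3  # how many of the largest values we want
# argpartition puts the indices of the k largest values in the last k slots (unordered)
top_idx = np.argpartition(arr, -k)[-k:]
# sort just those k indices by value, descending
top_idx = top_idx[np.argsort(arr[top_idx])[::-1]]
print(arr[top_idx])   # the k largest values, largest first
print(top_idx)        # their positions in the original list
```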
| 0 | 2016-07-29T19:40:24Z | [
"python",
"find",
"value"
] |
Find the second highest value out of various lists of numbers | 38,665,990 | <p>I made a simple code to find the highest value out of various lists of numbers</p>
<pre><code>lists = [[1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0],[1,2,3,4,5,6,7,6,5,4,4],[-435,-64,-4,-6,-45,-8,-98,-7,-8],[32,45,56,554,12,33]]
for w in lists:
    lst = w
    a = float ("-inf")
    for x in range (0, len (lst)):
        b = lst [x]
        if (b > a):
            a = b
            c = x
    z = lst
print ("The list is:",z)
print ("The highest value is: " , a)
print ("The position is:", c+1)
Out:
The list is: [32, 45, 56, 554, 12, 33]
The highest value is: 554
The position is: 4
</code></pre>
<p>But how can I know the second, third and so on??</p>
<p>I'm looking for something like this:</p>
<pre><code>Out:
The list is: [1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0]
The second highest value is: 98
The position is: 12
</code></pre>
| 0 | 2016-07-29T19:11:07Z | 38,668,309 | <p>Try this approach, which maps all the positions and rankings into a dictionary:</p>
<pre><code>from operator import itemgetter

lists = [[1,-3,5,2,6,11,78,5,-345,-3,6,98,-5,0],
         [1,2,3,4,5,6,7,6,5,4,4],
         [-435,-64,-4,-6,-45,-8,-98,-7,-8],
         [32,45,56,554,12,33]]

rank = 0
mapping = {(rank, lst_no, pos): val
           for lst_no, lst in enumerate(lists)
           for pos, val in enumerate(lst)}
value = float('nan')
rank_incr = 0
for (_, lst_no, pos), val in sorted(
        mapping.items(), reverse=True, key=itemgetter(1)):
    # The following section is to assign the same rank
    # to repeated values, and continue counting thereafter.
    if val != value:
        value = val
        rank += rank_incr
        rank_incr = 1
    else:
        rank_incr += 1
    # -----------------
    del mapping[(0, lst_no, pos)]
    mapping[(rank, lst_no, pos)] = val
</code></pre>
<p>You can access any value from this dictionary, named <code>mapping</code>. It has all the information you need:
the keys are tuples of (rank, list number, position),
and the values are the individual values.</p>
<pre><code>for (rank, lst_no, pos), val in sorted(mapping.items()):
    print("Ranking No. {}".format(rank))
    print("  The value: {}".format(val))
    print("  The list No. {}, is: {}".format(lst_no, lists[lst_no]))
    print("  The position is: {}".format(pos))
    print()
</code></pre>
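<p>For comparison, a much smaller sketch (my own, not part of the answer above) that prints the top two values and their 1-based positions for each list, using <code>sorted</code> over <code>enumerate</code> pairs:</p>

```python
lists = [[1, -3, 5, 2, 6, 11, 78, 5, -345, -3, 6, 98, -5, 0],
         [32, 45, 56, 554, 12, 33]]

for lst in lists:
    # pair (position, value), then sort by value descending
    ranked = sorted(enumerate(lst), key=lambda p: p[1], reverse=True)
    print("The list is:", lst)
    for rank, (pos, val) in enumerate(ranked[:2], start=1):
        print("  No. {} highest value is: {} (position {})".format(rank, val, pos + 1))
```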
| 0 | 2016-07-29T22:24:19Z | [
"python",
"find",
"value"
] |
TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape' | 38,666,040 | <p>I have the following code which uses TensorFlow. After I reshape a list, it says </p>
<blockquote>
<p>AttributeError: 'Tensor' object has no attribute 'shape'</p>
</blockquote>
<p>when I try to print its shape.</p>
<pre><code># Get the shape of the training data.
print "train_data.shape: " + str(train_data.shape)
train_data = tf.reshape(train_data, [400, 1])
print "train_data.shape: " + str(train_data.shape)
train_size,num_features = train_data.shape
</code></pre>
<p>Output:</p>
<blockquote>
<p>train_data.shape: (400,)
Traceback (most recent call last): File
"", line 1, in File
"/home/shehab/Downloads/tools/python/pycharm-edu-2.0.4/helpers/pydev/pydev_import_hook.py",
line 21, in do_import
module = self._system_import(name, *args, **kwargs) File "/home/shehab/Dropbox/py-projects/try-tf/logistic_regression.py", line
77, in
print "train_data.shape: " + str(train_data.shape) AttributeError: 'Tensor' object has no attribute 'shape'</p>
</blockquote>
<p>Could anyone please tell me what I am missing?</p>
| 0 | 2016-07-29T19:14:55Z | 38,666,116 | <p>Indeed, <code>tf.Tensor</code> doesn't have a <code>.shape</code> property. You should use the <code>Tensor.get_shape()</code> method instead:</p>
<pre><code>train_data = tf.reshape(train_data, [400, 1])
print "train_data.shape: " + str(train_data.get_shape())
</code></pre>
<p>Note that in general you might not be able to get the actual shape of the result of a TensorFlow operation. In some cases, the shape will be a computed value that depends on running the computation to find its value; and it may even vary from one run to the next (e.g. the shape of <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html#unique" rel="nofollow"><code>tf.unique()</code></a>). In that case, the result of <code>get_shape()</code> for some dimensions may be <code>None</code> (or <code>"?"</code>).</p>
| 2 | 2016-07-29T19:21:18Z | [
"python",
"neural-network",
"tensorflow"
] |
Fast Queue of read only numpy arrays | 38,666,078 | <p>I have a multiprocessing job where I'm queuing read only numpy arrays, as part of a producer consumer pipeline.</p>
<p>Currently they're being pickled, because this is the default behaviour of <code>multiprocessing.Queue</code> which slows down performance.</p>
<p>Is there any pythonic way to pass references to shared memory instead of pickling the arrays? </p>
<p>Unfortunately the arrays are being generated after the consumer is started, and there is no easy way around that. (So the global variable approach would be ugly...).</p>
<p>[Note that in the following code we are not expecting h(x0) and h(x1) to be computed in parallel. Instead we see h(x0) and g(h(x1)) computed in parallel (like a pipelining in a CPU).]</p>
<pre><code>from multiprocessing import Process, Queue
import numpy as np

class __EndToken(object):
    pass

def parrallel_pipeline(buffer_size=50):
    def parrallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parrallel_pipeline_with_args

@parrallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parrallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parrallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

if __name__ == "__main__":
    rs = f(g(h(xs())))
    for r in rs:
        print r
</code></pre>
| 10 | 2016-07-29T19:18:18Z | 38,774,630 | <p>Your example does not seem to run on my computer, although that may have to do with the fact that I'm running Windows (there are issues pickling anything not in the <code>__main__</code> namespace, including anything decorated)... would something like this help? (You would have to put <code>package()</code> and <code>unpack()</code> inside each of f(), g(), and h().)</p>
<p>Note: I'm not sure this would actually be any faster... just a stab at what others have suggested.</p>
<pre><code>from multiprocessing import Process, freeze_support
from multiprocessing.sharedctypes import Value, Array
import numpy as np

def package(arr):
    shape = Array('i', arr.shape, lock=False)
    if arr.dtype == float:
        ctype = Value('c', b'd')  # d for double, f for single
    if arr.dtype == int:
        ctype = Value('c', b'i')  # if statements could be avoided if data is always the same
    data = Array(ctype.value, arr.reshape(-1), lock=False)
    return data, shape

def unpack(data, shape):
    return np.array(data[:]).reshape(shape[:])

# test
def f(args):
    print(unpack(*args))

if __name__ == '__main__':
    freeze_support()
    a = np.array([1,2,3,4,5])
    a_packed = package(a)
    print('array has been packaged')
    p = Process(target=f, args=(a_packed,))
    print('passing to parallel process')
    p.start()
    print('joining to parent process')
    p.join()
    print('finished')
</code></pre>
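<p>As a possible refinement (my suggestion, not part of the answer above): instead of copying out of the shared <code>Array</code> with <code>np.array(data[:])</code>, NumPy can view the shared buffer directly via <code>np.frombuffer</code>, which avoids a copy on the consumer side:</p>

```python
import numpy as np
from multiprocessing.sharedctypes import Array

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# 'd' is the ctypes type code for C double (matching float64)
shared = Array('d', a.size, lock=False)
shared[:] = a  # copy the data into shared memory once

# a writable view on the shared buffer -- no copy is made
b = np.frombuffer(shared, dtype=np.float64).reshape(a.shape)
```

Because <code>b</code> is a view, writes through it are visible to any process holding the same shared buffer (with <code>lock=False</code> you must synchronize access yourself).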
| 0 | 2016-08-04T18:16:59Z | [
"python",
"numpy",
"parallel-processing",
"multiprocessing"
] |
Fast Queue of read only numpy arrays | 38,666,078 | <p>I have a multiprocessing job where I'm queuing read only numpy arrays, as part of a producer consumer pipeline.</p>
<p>Currently they're being pickled, because this is the default behaviour of <code>multiprocessing.Queue</code> which slows down performance.</p>
<p>Is there any pythonic way to pass references to shared memory instead of pickling the arrays? </p>
<p>Unfortunately the arrays are being generated after the consumer is started, and there is no easy way around that. (So the global variable approach would be ugly...).</p>
<p>[Note that in the following code we are not expecting h(x0) and h(x1) to be computed in parallel. Instead we see h(x0) and g(h(x1)) computed in parallel (like a pipelining in a CPU).]</p>
<pre><code>from multiprocessing import Process, Queue
import numpy as np

class __EndToken(object):
    pass

def parrallel_pipeline(buffer_size=50):
    def parrallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parrallel_pipeline_with_args

@parrallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parrallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parrallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

if __name__ == "__main__":
    rs = f(g(h(xs())))
    for r in rs:
        print r
</code></pre>
| 10 | 2016-07-29T19:18:18Z | 38,775,513 | <h2>Sharing memory between threads or processes</h2>
<h3>Use threading instead of multiprocessing</h3>
<p>Since you're using numpy, you can take advantage of the fact that <a href="http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html" rel="nofollow">the global interpreter lock is released during numpy computations</a>. This means you can do parallel processing with standard threads and shared memory, instead of multiprocessing and inter-process communication. Here's a version of your code, tweaked to use threading.Thread and Queue.Queue instead of multiprocessing.Process and multiprocessing.Queue. This passes a numpy ndarray via a queue without pickling it. On my computer, this runs about 3 times faster than your code. (However, it's only about 20% faster than the serial version of your code. I have suggested some other approaches further down.)</p>
<pre><code>from threading import Thread
from Queue import Queue
import numpy as np

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Thread(target=consumer, args=(f_xs, q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

rs = f(g(h(xs())))
%time print sum(r.sum() for r in rs)  # 12.2s
</code></pre>
<h3>Store numpy arrays in shared memory</h3>
<p>Another option, close to what you requested, would be to continue using the multiprocessing package, but pass data between processes using arrays stored in shared memory. The code below creates a new ArrayQueue class to do that. The ArrayQueue object should be created before spawning subprocesses. It creates and manages a pool of numpy arrays backed by shared memory. When a result array is pushed onto the queue, ArrayQueue copies the data from that array into an existing shared-memory array, then passes the id of the shared-memory array through the queue. This is much faster than sending the whole array through the queue, since it avoids pickling the arrays. This has similar performance to the threaded version above (about 10% slower), and may scale better if the global interpreter lock is an issue (i.e., you run a lot of python code in the functions). </p>
<pre><code>from multiprocessing import Process, Queue, Array
import numpy as np

class ArrayQueue(object):
    def __init__(self, template, maxsize=0):
        if type(template) is not np.ndarray:
            raise ValueError('ArrayQueue(template, maxsize) must use a numpy.ndarray as the template.')
        if maxsize == 0:
            # this queue cannot be infinite, because it will be backed by real objects
            raise ValueError('ArrayQueue(template, maxsize) must use a finite value for maxsize.')
        # find the size and data type for the arrays
        # note: every ndarray put on the queue must be this size
        self.dtype = template.dtype
        self.shape = template.shape
        self.byte_count = len(template.data)
        # make a pool of numpy arrays, each backed by shared memory,
        # and create a queue to keep track of which ones are free
        self.array_pool = [None] * maxsize
        self.free_arrays = Queue(maxsize)
        for i in range(maxsize):
            buf = Array('c', self.byte_count, lock=False)
            self.array_pool[i] = np.frombuffer(buf, dtype=self.dtype).reshape(self.shape)
            self.free_arrays.put(i)
        self.q = Queue(maxsize)

    def put(self, item, *args, **kwargs):
        if type(item) is np.ndarray:
            if item.dtype == self.dtype and item.shape == self.shape and len(item.data) == self.byte_count:
                # get the ID of an available shared-memory array
                id = self.free_arrays.get()
                # copy item to the shared-memory array
                self.array_pool[id][:] = item
                # put the array's id (not the whole array) onto the queue
                new_item = id
            else:
                raise ValueError(
                    'ndarray does not match type or shape of template used to initialize ArrayQueue'
                )
        else:
            # not an ndarray
            # put the original item on the queue (as a tuple, so we know it's not an ID)
            new_item = (item,)
        self.q.put(new_item, *args, **kwargs)

    def get(self, *args, **kwargs):
        item = self.q.get(*args, **kwargs)
        if type(item) is tuple:
            # unpack the original item
            return item[0]
        else:
            # item is the id of a shared-memory array
            # copy the array
            arr = self.array_pool[item].copy()
            # put the shared-memory array back into the pool
            self.free_arrays.put(item)
            return arr

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = ArrayQueue(template=np.zeros((500, 2000)), maxsize=buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

print "multiprocessing with shared-memory arrays:"
%time print sum(r.sum() for r in f(g(h(xs()))))  # 13.5s
</code></pre>
<h2>Parallel processing of samples instead of functions</h2>
<p>The code above is only about 20% faster than a single-threaded version (12.2s vs. 14.8s for the serial version shown below). That is because each function is run in a single thread or process, and most of the work is done by xs(). The execution time for the example above is nearly the same as if you just ran <code>%time print sum(1 for x in xs())</code>. </p>
<p>If your real project has many more intermediate functions and/or they are more complex than the ones you showed, then the workload may be distributed better among processors, and this may not be a problem. However, if your workload really does resemble the code you provided, then you may want to refactor your code to allocate one sample to each thread instead of one function to each thread. That would look like the code below (both threading and multiprocessing versions are shown):</p>
<pre><code>import multiprocessing
import threading, Queue
import numpy as np

def f(x):
    return x + 1.0

def g(x):
    return x * 3

def h(x):
    return x * x

def final(i):
    return f(g(h(x(i))))

def final_sum(i):
    return f(g(h(x(i)))).sum()

def x(i):
    # produce sample number i
    return np.random.uniform(0, 1, (500, 2000))

def rs_serial(func, n):
    for i in range(n):
        yield func(i)

def rs_parallel_threaded(func, n):
    todo = range(n)
    q = Queue.Queue(2*n_workers)
    def worker():
        while True:
            try:
                # the global interpreter lock ensures only one thread does this at a time
                i = todo.pop()
                q.put(func(i))
            except IndexError:
                # none left to do
                q.put(None)
                break
    threads = []
    for j in range(n_workers):
        t = threading.Thread(target=worker)
        t.daemon = False
        threads.append(t)  # in case it's needed later
        t.start()
    while True:
        x = q.get()
        if x is None:
            break
        else:
            yield x

def rs_parallel_mp(func, n):
    pool = multiprocessing.Pool(n_workers)
    return pool.imap_unordered(func, range(n))

n_workers = 4
n_samples = 1000

print "serial:"  # 14.8s
%time print sum(r.sum() for r in rs_serial(final, n_samples))
print "threaded:"  # 10.1s
%time print sum(r.sum() for r in rs_parallel_threaded(final, n_samples))
print "mp return arrays:"  # 19.6s
%time print sum(r.sum() for r in rs_parallel_mp(final, n_samples))
print "mp return results:"  # 8.4s
%time print sum(r_sum for r_sum in rs_parallel_mp(final_sum, n_samples))
</code></pre>
<p>The threaded version of this code is only slightly faster than the first example I gave, and only about 30% faster than the serial version. That's not as much of a speedup as I would have expected; maybe Python is still getting partly bogged down by the GIL? </p>
<p>The multiprocessing version performs significantly faster than your original multiprocessing code, primarily because all the functions get chained together in a single process, rather than queueing (and pickling) intermediate results. However, it is still slower than the serial version because all the result arrays have to get pickled (in the worker process) and unpickled (in the main process) before being returned by imap_unordered. However, if you can arrange it so that your pipeline returns aggregate results instead of the complete arrays, then you can avoid the pickling overhead, and the multiprocessing version is fastest: about 43% faster than the serial version.</p>
<p>OK, now for the sake of completeness, here's a version of the second example that uses multiprocessing with your original generator functions instead of the finer-scale functions shown above. This uses some tricks to spread the samples among multiple processes, which may make it unsuitable for many workflows. But using generators does seem to be slightly faster than using the finer-scale functions, and this method can get you up to a 54% speedup vs. the serial version shown above. However, that is only available if you don't need to return the full arrays from the worker functions.</p>
<pre><code>import multiprocessing, itertools, math
import numpy as np

def f(xs):
    for x in xs:
        yield x + 1.0

def g(xs):
    for x in xs:
        yield x * 3

def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

def final():
    return f(g(h(xs())))

def final_sum():
    for x in f(g(h(xs()))):
        yield x.sum()

def get_chunk(args):
    """Retrieve n values (n=args[1]) from a generator function (f=args[0]) and return them as a list.
    This runs in a worker process and does all the computation."""
    return list(itertools.islice(args[0](), args[1]))

def parallelize(gen_func, max_items, n_workers=4, chunk_size=50):
    """Pull up to max_items items from several copies of gen_func, in small groups in parallel processes.
    chunk_size should be big enough to improve efficiency (one copy of gen_func will be run for each chunk)
    but small enough to avoid exhausting memory (each worker will keep chunk_size items in memory)."""
    pool = multiprocessing.Pool(n_workers)
    # how many chunks will be needed to yield at least max_items items?
    n_chunks = int(math.ceil(float(max_items)/float(chunk_size)))
    # generate a suitable series of arguments for get_chunk()
    args_list = itertools.repeat((gen_func, chunk_size), n_chunks)
    # chunk_gen will yield a series of chunks (lists of results) from the generator function,
    # totaling n_chunks * chunk_size items (which is >= max_items)
    chunk_gen = pool.imap_unordered(get_chunk, args_list)
    # parallel_gen flattens the chunks, and yields individual items
    parallel_gen = itertools.chain.from_iterable(chunk_gen)
    # limit the output to max_items items
    return itertools.islice(parallel_gen, max_items)

# in this case, the parallel version is slower than a single process, probably
# due to overhead of gathering numpy arrays in imap_unordered (via pickle?)
print "serial, return arrays:"  # 15.3s
%time print sum(r.sum() for r in final())
print "parallel, return arrays:"  # 24.2s
%time print sum(r.sum() for r in parallelize(final, max_items=1000))

# in this case, the parallel version is more than twice as fast as the single-thread version
print "serial, return result:"  # 15.1s
%time print sum(r for r in final_sum())
print "parallel, return result:"  # 6.8s
%time print sum(r for r in parallelize(final_sum, max_items=1000))
</code></pre>
| 6 | 2016-08-04T19:10:25Z | [
"python",
"numpy",
"parallel-processing",
"multiprocessing"
] |
Fast Queue of read only numpy arrays | 38,666,078 | <p>I have a multiprocessing job where I'm queuing read only numpy arrays, as part of a producer consumer pipeline.</p>
<p>Currently they're being pickled, because this is the default behaviour of <code>multiprocessing.Queue</code> which slows down performance.</p>
<p>Is there any pythonic way to pass references to shared memory instead of pickling the arrays? </p>
<p>Unfortunately the arrays are being generated after the consumer is started, and there is no easy way around that. (So the global variable approach would be ugly...).</p>
<p>[Note that in the following code we are not expecting h(x0) and h(x1) to be computed in parallel. Instead we see h(x0) and g(h(x1)) computed in parallel (like a pipelining in a CPU).]</p>
<pre><code>from multiprocessing import Process, Queue
import numpy as np

class __EndToken(object):
    pass

def parrallel_pipeline(buffer_size=50):
    def parrallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parrallel_pipeline_with_args

@parrallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parrallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parrallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

if __name__ == "__main__":
    rs = f(g(h(xs())))
    for r in rs:
        print r
</code></pre>
| 10 | 2016-07-29T19:18:18Z | 38,799,008 | <p>Check out the <a href="https://github.com/uqfoundation/pathos/blob/master/pathos/multiprocessing.py" rel="nofollow">Pathos-multiprocessing project</a>, which avoids the standard <code>multiprocessing</code> reliance on pickling. This should allow you to get around both the inefficiencies of pickling, and give you access to common memory for read-only shared resources. Note that while Pathos is nearing deployment in a full pip package, in the interim I'd recommend installing with <code>pip install git+https://github.com/uqfoundation/pathos</code> </p>
| 0 | 2016-08-05T23:22:44Z | [
"python",
"numpy",
"parallel-processing",
"multiprocessing"
] |
Why does a column from pandas DataFrame not work in this loop? | 38,666,111 | <p>I have a DataFrame that I took from basketball-reference with player names. The code below is how I built the DataFrame. It has 5 columns of player names, but each name also has the player's position.</p>
<pre><code>url = "http://www.basketball-reference.com/awards/all_league.html"
dframe_list = pd.io.html.read_html(url)
df = dframe_list[0]
df.drop(df.columns[[0,1,2]], inplace=True, axis=1)
column_names = ['name1', 'name2', 'name3', 'name4', 'name5']
df.columns = column_names
df = df[df.name1.notnull()]
</code></pre>
<p>I am trying to split off the position. To do so I had planned to make a DataFrame for each name column:</p>
<pre><code>name1 = pd.DataFrame(df.name1.str.split().tolist()).ix[:,0:1]
name1[0] = name1[0] + " " + name1[1]
name1.drop(name1.columns[[1]], inplace=True, axis=1)
</code></pre>
<p>Since I have five columns I thought I would do this with a loop</p>
<pre><code>column_names = ['name1', 'name2', 'name3', 'name4', 'name5']
for column in column_names:
    column = pd.DataFrame(df.column.str.split().tolist()).ix[:,0:1]
    column[0] = column[0] + " " + column[1]
    column.drop(column.columns[[1]], inplace=True, axis=1)
    column.columns = column
</code></pre>
<p>And then I'd join all these DataFrames back together. </p>
<pre><code>df_NBA = [name1, name2, name3, name4, name5]
df_NBA = pd.concat(df_NBA, axis=1)
</code></pre>
<p>I'm new to python, so I'm sure I'm doing this in a pretty cumbersome fashion and would love suggestions as to how I might do this faster. But my main question is, when I run the code on individual columns it works fine, but if when I run the loop I get the error:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'column'
</code></pre>
<p>It seems that the part of the loop <code>df.column.str</code> is causing some problem? I've fiddled around with the list, with bracketing column (I still don't understand why sometimes I bracket a DataFrame column and sometimes it's .column, but that's a bigger issue) and other random things.</p>
<p>When I try @BrenBarn's suggestion </p>
<pre><code>df.apply(lambda c: c.str[:-2])
</code></pre>
<p>The following pops up in the Jupyter notebook:</p>
<pre><code>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if __name__ == '__main__':
</code></pre>
<p>Looking at the DataFrame, nothing has actually changed and if I understand the documentation correctly this method creates a copy of the DataFrame with the edits, but that this is a temporary copy that get's thrown out afterward so the actual DataFrame doesn't change.</p>
| 1 | 2016-07-29T19:20:52Z | 38,666,726 | <p>If the position labels are always only one character, the simple solution is this:</p>
<pre><code>>>> df.apply(lambda c: c.str[:-2])
           name1         name2
0     Marc Gasol  Lebron James
1      Pau Gasol  Kevin Durant
2  Dwight Howard  Kyrie Irving
</code></pre>
<p>The <code>str</code> attribute of a Series lets you do string operations, including indexing, so this just trims the last two characters off each value.</p>
<p>As for your question about <code>df.column</code>, this issue is more general than pandas. These two things are not the same:</p>
<pre><code># works
obj.attr
# doesn't work
attrName = 'attr'
obj.attrName
</code></pre>
<p>You can't use the dot notation when you want to access an attribute whose name is stored in a variable. In general, you can use the <code>getattr</code> function instead. However, pandas provides the bracket notation for accessing a column by specifying the name as a <em>string</em> (rather than a source-code identifier). So these two are equivalent:</p>
<pre><code>df.some_column
columnName = "some_column"
df[columnName]
</code></pre>
<p>In your example, changing your reference to <code>df.column</code> to <code>df[column]</code> should resolve that issue. However, as I mentioned in a comment, your code has other problems too. As far as solving the task at hand, the string-indexing approach I showed at the beginning of my answer is much simpler.</p>
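<p>A small illustration (with made-up sample data) of the <code>getattr</code> / bracket equivalence and the <code>.str</code> slicing from the start of this answer:</p>

```python
import pandas as pd

df = pd.DataFrame({"name1": ["Marc Gasol C", "Pau Gasol F"]})

col = "name1"
# bracket notation and getattr() both take the column name as a string
assert df[col].equals(getattr(df, col))

# the .str accessor trims the trailing two characters (space + position label)
trimmed = df[col].str[:-2]
print(trimmed.tolist())  # ['Marc Gasol', 'Pau Gasol']
```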
| 2 | 2016-07-29T20:08:35Z | [
"python",
"pandas",
"for-loop",
"dataframe"
] |
Getting an instance's actual used (allocated) disk space in vmware with pyvmomi | 38,666,195 | <p>I've recently started using pyvmomi to get a detailed inventory of vmware servers prior to migrating instances into AWS.</p>
<p>In the vcenter web interface, or vsphere client, I can inspect an instance and see its disks, and it'll tell me the disk size (provisioned), and how much of it is in use (used storage).</p>
<p>From the samples github repo (<a href="https://github.com/vmware/pyvmomi-community-samples" rel="nofollow">https://github.com/vmware/pyvmomi-community-samples</a>) I could quickly learn how to get information on the instances, so getting the disk size is trivial (there's even a question in SO that shows an easy way to get the drives - <a href="http://stackoverflow.com/questions/36026470/how-to-get-sizes-of-vmware-vm-disks-using-pyvmomi">How to get sizes of VMWare VM disks using PyVMomi</a>), but I can't figure out how to get the actual used storage the web/client can show.</p>
<p>So, how do I get the used space for a given instance's disks?</p>
| 3 | 2016-07-29T19:27:35Z | 38,868,247 | <p>For getting the <strong><em>freespace</em></strong> from the VM via <strong>PyVMomi</strong> first you have to check if the <em>VMware tools</em> for VM's is installed on your system or not. For checking if its installed, check from your <em>VM's guest information from its summary page (via MOB)</em> if it shows:</p>
<ol>
<li><p><strong>toolsStatus - VirtualMachineToolsStatus - "toolsNotInstalled"</strong>:
This means you have to install VMware Tools on the VM; you can refer to the following links to install it: a)<a href="https://my.vmware.com/web/vmware/details?productId=491&downloadGroup=VMTOOLS1000" rel="nofollow">https://my.vmware.com/web/vmware/details?productId=491&downloadGroup=VMTOOLS1000</a> or b)<a href="https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018377" rel="nofollow">https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018377</a></p></li>
<li><p><strong>toolsStatus - VirtualMachineToolsStatus - "toolsOk"</strong>: This means that your VM already has VMware Tools installed, and you can get the <em>diskPath</em>, <em>capacity</em> and <em>freeSpace</em> property values from <strong>vim.vm.GuestInfo.DiskInfo</strong>. (If you install VMware Tools manually as mentioned above, the following should hold.)</p></li>
</ol>
<p>Once, the above environment is set, you can get the respective information from your VM via following code:</p>
<pre><code>import atexit
import ssl

# pyVmomi imports needed for the calls below
from pyVim import connect
from pyVmomi import vim

service_instance = None
vcenter_host = "HOSTNAME"
vcenter_port = NUMERIC_PORT
vcenter_username = "USERNAME"
vcenter_password = "PASSWORD"
vmName = "VM_NAME"

# an unverified SSL context is commonly used with self-signed vCenter certs
context = ssl._create_unverified_context()

try:
    # try to connect to the vCenter
    service_instance = connect.SmartConnect(host=vcenter_host,
                                            user=vcenter_username,
                                            pwd=vcenter_password,
                                            port=vcenter_port,
                                            sslContext=context)
    atexit.register(connect.Disconnect, service_instance)

    content = service_instance.RetrieveContent()
    container = content.rootFolder    # starting point to look into
    viewType = [vim.VirtualMachine]   # object types to look for
    recursive = True                  # whether we should look into it recursively
    containerView = content.viewManager.CreateContainerView(
        container, viewType, recursive)
    # getting all the VMs from the connection
    children = containerView.view
    # going one by one through every VM
    for child in children:
        vm = child.summary.config.name
        # check for the VM we want
        if vm == vmName:
            vmSummary = child.summary
            # get the diskInfo of the selected VM
            info = vmSummary.vm.guest.disk
            # check the freeSpace property of each disk
            for each in info:
                # the free space in GBs
                diskFreeSpace = each.freeSpace / 1024 / 1024 / 1024
except Exception as error:
    print("Caught exception: " + str(error))
</code></pre>
<p>Hope it resolves your issue.</p>
| 1 | 2016-08-10T08:39:39Z | [
"python",
"vmware",
"pyvmomi"
] |
How to install python if it doesn't exist from setuptools msi | 38,666,274 | <p>I wanted to package up my python installer so it would be easier to integrate into our WIX installer or other forms of product distribution. I was able to successfully build an exe (<code>python setup.py bdist_wininst</code>) and the msi (<code>python setup.py bdist_msi</code>) using setuptools, but what about the case where a user doesn't have python installed? Is there a way to add python itself as a dependency or otherwise have the msi/exe from setuptools install python if it is missing?</p>
| 0 | 2016-07-29T19:33:24Z | 38,666,614 | <p>The user of your <code>.exe</code> does not need to have python installed; that's beauty of creating binaries. All of the instructions that the client's computer needs to run the program are already in the <code>.exe</code></p>
| 0 | 2016-07-29T20:00:31Z | [
"python",
"setuptools"
] |
Is it possible to assign a default value when unpacking? | 38,666,283 | <p>I have the following:</p>
<pre><code>>>> myString = "has spaces"
>>> first, second = myString.split()
>>> myString = "doesNotHaveSpaces"
>>> first, second = myString.split()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
</code></pre>
<p>I would like to have <code>second</code> default to <code>None</code> if the string does not have any white space. I currently have the following, but am wondering if it can be done in one line:</p>
<pre><code>splitted = myString.split(maxsplit=1)
first = splitted[0]
second = splitted[1] if len(splitted) > 1 else None
</code></pre>
| 2 | 2016-07-29T19:33:54Z | 38,666,361 | <p>May I suggest you to consider using a different method, i.e. <code>partition</code> instead of <code>split</code>: </p>
<pre><code>>>> myString = "has spaces"
>>> left, separator, right = myString.partition(' ')
>>> left
'has'
>>> myString = "doesNotHaveSpaces"
>>> left, separator, right = myString.partition(' ')
>>> left
'doesNotHaveSpaces'
</code></pre>
<p>If you are on python3, you have this option available:</p>
<pre><code>>>> myString = "doesNotHaveSpaces"
>>> first, *rest = myString.split()
>>> first
'doesNotHaveSpaces'
>>> rest
[]
</code></pre>
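<p>If the goal is specifically a one-liner where <code>second</code> falls back to <code>None</code>, one common padding idiom (a sketch, using Python 3's <code>maxsplit</code> keyword) is:</p>

```python
myString = "has spaces"
first, second = (myString.split(maxsplit=1) + [None])[:2]
print(first, second)  # has spaces

myString = "doesNotHaveSpaces"
first, second = (myString.split(maxsplit=1) + [None])[:2]
print(first, second)  # doesNotHaveSpaces None
```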
| 5 | 2016-07-29T19:39:37Z | [
"python",
"iterable-unpacking"
] |
Is it possible to assign a default value when unpacking? | 38,666,283 | <p>I have the following:</p>
<pre><code>>>> myString = "has spaces"
>>> first, second = myString.split()
>>> myString = "doesNotHaveSpaces"
>>> first, second = myString.split()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
</code></pre>
<p>I would like to have <code>second</code> default to <code>None</code> if the string does not have any white space. I currently have the following, but am wondering if it can be done in one line:</p>
<pre><code>splitted = myString.split(maxsplit=1)
first = splitted[0]
second = splitted[1:] or None
</code></pre>
| 2 | 2016-07-29T19:33:54Z | 38,666,737 | <p>A general solution would be to <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain" rel="nofollow"><code>chain</code></a> your iterable with a <a href="https://docs.python.org/3.5/library/itertools.html#itertools.repeat" rel="nofollow"><code>repeat</code></a> of <code>None</code> values and then use an <a href="https://docs.python.org/3.5/library/itertools.html#itertools.islice" rel="nofollow"><code>islice</code></a> of the result:</p>
<pre><code>from itertools import chain, islice, repeat
none_repeat = repeat(None)
example_iter = iter(range(1)) #or range(2) or range(0)
first, second = islice(chain(example_iter, none_repeat), 2)
</code></pre>
<p>this would fill in missing values with <code>None</code>, if you need this kind of functionality a lot you can put it into a function like this:</p>
<pre><code>def fill_iter(it, size, fill_value=None):
return islice(chain(it, repeat(fill_value)), size)
</code></pre>
<p>Although the most common use is by far for strings, which is why <a href="https://docs.python.org/3.5/library/stdtypes.html?highlight=str.partition#str.partition" rel="nofollow"><code>str.partition</code></a> exists.</p>
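<p>For example, the helper can solve the original unpacking problem directly (restated here so the snippet is self-contained):</p>

```python
from itertools import chain, islice, repeat

def fill_iter(it, size, fill_value=None):
    # pad the iterable with fill_value, then cut it to exactly `size` items
    return islice(chain(it, repeat(fill_value)), size)

first, second = fill_iter(iter("doesNotHaveSpaces".split()), 2)
print(first, second)  # doesNotHaveSpaces None
```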
| 1 | 2016-07-29T20:09:16Z | [
"python",
"iterable-unpacking"
] |
Treeview Tkinter widget - clickable links | 38,666,326 | <p>I have ten links in my treeview widget (<a href="http://www.example.com" rel="nofollow">http://www.example.com</a>, <a href="http://www.example1.com" rel="nofollow">http://www.example1.com</a> and so on). It is plain text inserted into treeview. Is it possible to make it clickable? How can I convert text into links? Is it possible inside treeview widget?</p>
<p>This is a part of my treeview:
<a href="http://i.stack.imgur.com/fYumY.png" rel="nofollow"><img src="http://i.stack.imgur.com/fYumY.png" alt="treeview"></a></p>
<p>I would like to make those lines clickable links (like in a normal browser): simply click, the default browser opens and goes to the page (<a href="http://dieta.pl/" rel="nofollow">http://dieta.pl/</a> for example).</p>
<p>Here is example of my code (a part):</p>
<pre><code># !/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from google import search
from urlparse import urlparse
from SiteCrawler import SiteCrawler
import Tkinter as tk
from Tkinter import *
import ttk
# from Tkinter.font import Font
class Main(Frame):
def __init__(self):
self.fraza = None
self.master = tk.Tk()
if os.name == 'nt':
self.master.state('zoomed')
else:
self.master.wm_attributes('-zoomed', 1)
self.master.title('Site crawler')
self.master.geometry("800x600+600+200")
        # Main frame
self.f = Frame(self.master)
self.f.place(relx=.5, rely=.35, anchor="c")
        # Label for entering the search phrase
        L1 = Label(self.master, text=u"Wpisz frazę", font="Verdana 15 bold")
L1.grid(in_=self.f, row=1, column=2)
        # Entry box for entering the phrase
self.phrase = Entry(self.master, font="Verdana 15 bold",
justify=CENTER)
self.phrase.grid(in_=self.f, row=1, column=3, columnspan=3)
        # Button to submit the phrase
        Bt1 = Button(self.master, text=u'Wczytaj frazę',
command=lambda: self.results(self.phrase.get()), width=20)
Bt1.grid(in_=self.f, row=2, column=3, columnspan=3)
# ttk.tree widget
tree_cols = ('Lp', 'Url', 'Fraza w Title',
'Fraza w description', 'Fraza w Keywords',
                     'Fraza w H1', 'Fraza w H2', 'Fraza w całej stronie')
self.tree = ttk.Treeview(columns=tree_cols,
show='headings', height=10)
for i in tree_cols:
self.tree.heading(i, text=i)
self.tree.column('Lp', width=50, anchor=CENTER)
# self.tree.heading("two", text="Fraza w Title")
# self.tree.heading("three", text="Fraza w Description")
# self.tree.heading("four", text="Fraza w Description")
self.tree.grid(in_=self.f, row=4, column=1, columnspan=4, sticky=NSEW)
self.master.mainloop()
def results(self, phrase):
Crawler = SiteCrawler()
self.fraza = phrase
domains = []
for i in search(phrase, stop=10):
print i
parsed_url = urlparse(i)
domain = '{uri.scheme}://{uri.netloc}/'.format(uri=parsed_url)
if domain not in domains:
domains.append(domain)
for index, url in enumerate(domains[:10]):
h = ['h1', 'h2']
Crawler.load_url(url, self.fraza)
Crawler.title()
Crawler.get_description()
Crawler.get_keywords()
for i in h:
Crawler.count_H(i)
Crawler.all_keywords()
self.tree.insert('', 'end', values=(
index + 1, url, Crawler.title(), Crawler.get_description(),
Crawler.get_keywords(), Crawler.count_H('h1'),
Crawler.count_H('h2'), Crawler.all_keywords()))
if __name__ == "__main__":
main = Main()
main.results()
</code></pre>
| 0 | 2016-07-29T19:36:40Z | 38,676,455 | <p>For each widget, you can <a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">bind</a> Python functions and methods to events.<code>Bind</code> a function to your <code>treeview</code>.You need to bind your tree,add this in <code>__init__</code> function:</p>
<pre><code>self.tree.bind("<Double-1>", self.link_tree)
</code></pre>
<p>Create a function:</p>
<pre><code>def link_tree(self, event):
    input_id = self.tree.selection()
    self.input_item = self.tree.item(input_id, "text")
    # for opening the link in the browser
    import webbrowser
    webbrowser.open('{}'.format(self.input_item))
    # do whatever you want
</code></pre>
| 1 | 2016-07-30T17:06:57Z | [
"python",
"tkinter",
"treeview"
] |
Jupyter notebook wrong path | 38,666,329 | <p>I need to test a few functions from code I am building, which I import into a Jupyter notebook.</p>
<p>The issue is that <em>simTools_path</em> is different in the functions and in the Jupyter notebook. Moreover, when I call these functions from my main Python script, it works fine.</p>
<p><strong>MWE</strong></p>
<p><em>simTools_path/objects/classes.py</em></p>
<pre><code>simTools_path = os.path.abspath(os.getenv('SIMTOOLS_PATH'))
sys.path.append(simTools_path)
def testPath():
print 'testPath', simTools_path
</code></pre>
<p><em>jupyter notebook</em></p>
<pre><code>import os,sys
# paths
simTools_path = os.path.abspath('../')
os.environ["SIMTOOLS_PATH"] = "simTools_path"
os.environ["PYTHONPATH"] = "simTools_path"
sys.path.append(simTools_path)
from objects.classes import testPath
print simTools_path
testPath()
</code></pre>
<p>results:</p>
<pre><code>simTools_path= /home/jhumberto/WORK/Projects/code/simulations_2016-07-14/simTools
testPath= /home/jhumberto/WORK/Projects/code/simulations_2016-07-14/simTools/jupyterNotebooks/simTools_path
</code></pre>
<p><em>Notes:</em></p>
<p>1) I use this path variable in different functions inside different modules to load file data relative to the <em>simTools_path</em> path.</p>
<p>2) my jupyter notebook is located in <em>/home/jhumberto/WORK/Projects/code/simulations_2016-07-14/simTools/jupyterNotebooks</em></p>
<p>Any ideas?</p>
| 0 | 2016-07-29T19:36:55Z | 38,748,048 | <p>You have confused the variable <code>simTools_path</code> and the literal string <code>"simTools_path"</code>. To correct the problem, simply change the line as follows:</p>
<pre><code>os.environ["SIMTOOLS_PATH"] = simTools_path
</code></pre>
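<p>The difference is easy to demonstrate in isolation (a sketch; <code>/tmp/simTools</code> is just a placeholder path):</p>

```python
import os

simTools_path = "/tmp/simTools"

os.environ["SIMTOOLS_PATH"] = "simTools_path"  # stores the 13-character literal
print(os.environ["SIMTOOLS_PATH"])             # simTools_path

os.environ["SIMTOOLS_PATH"] = simTools_path    # stores the actual path
print(os.environ["SIMTOOLS_PATH"])             # /tmp/simTools
```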
| 1 | 2016-08-03T15:37:42Z | [
"python",
"path",
"jupyter-notebook",
"sys.path"
] |
Calling a python script from another script and passing variables to it | 38,666,348 | <p>I have a python script that contains a large function that filters a list and exports the contents to excel. I want to be able to make different python scripts where I can specify different filters for different cases and pass them to my python scripts with the function that applies the filter.</p>
<p>For example</p>
<pre><code># Script called function_script.py, with the function that applies filters
def my_funct(filter1, filter2, filter3, *more_filters):
    # apply filters to list
    # export results to excel
    ...
</code></pre>
<p>I will have different filters for some common use cases and instead of changing my filters I would rather create a python script for each use case that calls the script containing my functions and passes the filters. Something like.</p>
<pre><code>#One script where filters for category 1 are defined
filter1 = 'Foo'
filter2 = 'Bar'
filter3 = 'Test'
function_script(filter1, filter2, filter3)
</code></pre>
| 0 | 2016-07-29T19:38:28Z | 38,666,456 | <p>You can use <code>execfile(filename[, globals[, locals]])</code> (Python 2 only) to execute another script from the current one.</p>
| -1 | 2016-07-29T19:47:05Z | [
"python"
] |
Calling a python script from another script and passing variables to it | 38,666,348 | <p>I have a python script that contains a large function that filters a list and exports the contents to excel. I want to be able to make different python scripts where I can specify different filters for different cases and pass them to my python scripts with the function that applies the filter.</p>
<p>For example</p>
<pre><code># Script called function_script.py, with the function that applies filters
def my_funct(filter1, filter2, filter3, *more_filters):
    # apply filters to list
    # export results to excel
    ...
</code></pre>
<p>I will have different filters for some common use cases and instead of changing my filters I would rather create a python script for each use case that calls the script containing my functions and passes the filters. Something like.</p>
<pre><code>#One script where filters for category 1 are defined
filter1 = 'Foo'
filter2 = 'Bar'
filter3 = 'Test'
function_script(filter1, filter2, filter3)
</code></pre>
| 0 | 2016-07-29T19:38:28Z | 38,666,539 | <p>Let's assume that file with large function is in one directory with all others :</p>
<pre><code>some folder
|-functions_file.py
|-main.py
</code></pre>
<p>Your functions file will contain your functions, for example:</p>
<pre><code>def add(num1, num2):
    return num1 + num2

def subtract(num1, num2):
    return num1 - num2

# example function showing how to pass more values to a function
def add_all(numbers):
    # if you need the individual values
    for number in numbers:
        print number
    return sum(numbers)
</code></pre>
<p>And to call those functions from your <code>main.py</code> you can do:</p>
<pre><code>import functions_file
print(functions_file.add(1, 20))  # prints 21; on py2 you don't need () for print
print(functions_file.add_all([1,2,3,4,5,6]))
#you pass parameters in list -> 21 is output
</code></pre>
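<p>Applied to the question's filter use case, the same import pattern looks like this (a sketch: the module is written to a temporary folder here purely so the example is self-contained, and <code>my_funct</code> is a hypothetical stand-in for the real filter function):</p>

```python
import importlib
import os
import sys
import tempfile
import textwrap

folder = tempfile.mkdtemp()
with open(os.path.join(folder, "functions_file.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def my_funct(filter1, filter2, filter3):
            # applying the filters / exporting to Excel would go here
            return [filter1, filter2, filter3]
    """))

sys.path.insert(0, folder)  # make the folder importable
functions_file = importlib.import_module("functions_file")
print(functions_file.my_funct("Foo", "Bar", "Test"))  # ['Foo', 'Bar', 'Test']
```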
| 0 | 2016-07-29T19:53:23Z | [
"python"
] |
Matplotlib: RGB colors appear black with Python 2 | 38,666,357 | <p>I have a code snippet that works fine with Python 3 but doesn't with Python 2.
I'm trying to use RGB codes to define a color palette: I get the right colors with Python 3, but Python 2 shows them all black... </p>
<p>Below is a very simple code snippet that shows this weird behavior:</p>
<pre><code>%matplotlib inline
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
colors = {
'A': (234, 142, 142),
'B': (255, 224, 137),
'C': (189, 235, 165)}
df = pd.DataFrame(np.random.randn(20, 3), columns=list('ABC')).cumsum()
fig, ax = plt.subplots()
for col in df.columns:
ax.plot(df.index.tolist(), df[col].values, color=(tuple(i/255 for i in colors[col])))
plt.show()
</code></pre>
<p><strong>Python 2</strong> </p>
<p><a href="http://i.stack.imgur.com/I5J5v.png" rel="nofollow"><img src="http://i.stack.imgur.com/I5J5v.png" alt="Using Python2"></a> </p>
<p><strong>Python 3 (OK)</strong> </p>
<p><a href="http://i.stack.imgur.com/UuwCV.png" rel="nofollow"><img src="http://i.stack.imgur.com/UuwCV.png" alt="Using Python3"></a></p>
<p>Is that a bug or matplotlib handling RGB colors a different way on purpose? How should I adapt my code?</p>
<p>Software | Version<br>
Python | 2.7.11 64bit<br>
IPython | 4.0.3<br>
OS | Windows 7 6.1.7601 SP1<br>
matplotlib | 1.5.1 </p>
| 0 | 2016-07-29T19:39:18Z | 38,666,419 | <p>Problem occurs in this line:</p>
<pre><code>i/255 for i in colors[col]
</code></pre>
<p>It's because integer division is different in python 2 and python 3. </p>
<p>Python 2</p>
<pre><code>>>> 2/3
>>> 0
</code></pre>
<p>Python 3</p>
<pre><code>>>> 2/3
>>> 0.66...
</code></pre>
<p>To get the same behaviour in Python 2, you can use:</p>
<pre><code>from __future__ import division
</code></pre>
<p>You should check <a class='doc-link' href="http://stackoverflow.com/documentation/python/809/compatibility-between-python-3-and-python-2/2797/integer-division#t=201607291949090037884">documentation about integer division</a> for more information.</p>
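<p>A quick demonstration of what the import changes (safe to run on either version; it is a no-op on Python 3):</p>

```python
from __future__ import division

print(2 / 3)      # 0.666... (true division) on both Python 2 and 3
print(2 // 3)     # 0 (floor division) on both
print(234 / 255)  # an RGB channel scaled into [0, 1] -- no longer 0
```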
| 1 | 2016-07-29T19:44:47Z | [
"python",
"python-2.7",
"matplotlib"
] |
Matplotlib: RGB colors appear black with Python 2 | 38,666,357 | <p>I have a code snippet that works fine with Python 3 but doesn't with Python 2.
I'm trying to use RGB codes to define a color palette: I get the right colors with Python 3, but Python 2 shows them all black... </p>
<p>Below is a very simple code snippet that shows this weird behavior:</p>
<pre><code>%matplotlib inline
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
colors = {
'A': (234, 142, 142),
'B': (255, 224, 137),
'C': (189, 235, 165)}
df = pd.DataFrame(np.random.randn(20, 3), columns=list('ABC')).cumsum()
fig, ax = plt.subplots()
for col in df.columns:
ax.plot(df.index.tolist(), df[col].values, color=(tuple(i/255 for i in colors[col])))
plt.show()
</code></pre>
<p><strong>Python 2</strong> </p>
<p><a href="http://i.stack.imgur.com/I5J5v.png" rel="nofollow"><img src="http://i.stack.imgur.com/I5J5v.png" alt="Using Python2"></a> </p>
<p><strong>Python 3 (OK)</strong> </p>
<p><a href="http://i.stack.imgur.com/UuwCV.png" rel="nofollow"><img src="http://i.stack.imgur.com/UuwCV.png" alt="Using Python3"></a></p>
<p>Is that a bug or matplotlib handling RGB colors a different way on purpose? How should I adapt my code?</p>
<p>Software | Version<br>
Python | 2.7.11 64bit<br>
IPython | 4.0.3<br>
OS | Windows 7 6.1.7601 SP1<br>
matplotlib | 1.5.1 </p>
| 0 | 2016-07-29T19:39:18Z | 38,666,448 | <p>It looks like you never heard of different behavior of division in python 2 and python 3. Shortly - add this to the top of your python code - <code>from __future__ import division</code>. Python 2 will correct it's unobvious behavior, and python 3 will just ignore this statement - it's already fixed.</p>
| 1 | 2016-07-29T19:46:46Z | [
"python",
"python-2.7",
"matplotlib"
] |
Calculating time between text field interactions | 38,666,464 | <p>I have a dataset of text field interactions across several dozen users of my application across the span of several months. I'm trying to calculate the average time between keystrokes in pandas. The data look something like this:</p>
<pre><code>timestamp before_text after_text
1453481138188 NULL a
1453481138600 a ab
1453481138900 ab abc
1453481139400 abc abcd
1453484000000 Enter some numbers 1
1453484000100 1 12
1453484000600 12 123
</code></pre>
<p><code>timestamp</code> contains the unix time that the user pressed the key, <code>before_text</code> is the what the text field contained before the user hit the key, and <code>after_text</code> is what the field looked like after the keystroke.</p>
<p>What's the best way to go about doing this? I know that's not as simple as doing something like:</p>
<pre><code>(df["timestamp"] - df["timestamp"].shift()).mean()
</code></pre>
<p>because this will calculate a very large time difference on the boundary between two interactions. It seems like the best way to do this would be to pass some function of each row to <code>df.groupby</code> so that I can apply the above snippet to each row. If I had this <code>magic_function</code> I could do something like:</p>
<pre><code>df.groupby(magic_function).apply(lambda x: x["timestamp"] - x["timestamp"].shift()).mean()
</code></pre>
<p>What's a good way to implement <code>magic_function</code>, or am I thinking about this all wrong?</p>
| 4 | 2016-07-29T19:47:26Z | 38,667,035 | <p>Your problem essentially is to identify when a given interaction stops and when another begins. Perhaps compute the difference between <code>timestamp</code>s, and if greater than a threshold, set a flag on which you can group.</p>
<pre><code>thresh = 1e5
ts = (df['timestamp'] - df['timestamp'].shift()) > thresh
grp = [0]  # the first row always belongs to group 0
for i in range(1, len(ts)):
    if ts.iloc[i]:
        grp.append(grp[-1] + 1)
    else:
        grp.append(grp[-1])
df['grouper'] = grp
</code></pre>
<p>Now you can simply group like so: <code>grouped = df.groupby('grouper')</code>, then subtract the <code>timestamp</code>s within the group, and compute the average difference.</p>
<p>I am trying to think of a way to avoid the loop, but till then try this and let me know how it goes.</p>
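<p>One loop-free sketch of the same idea (assuming pandas; the sample timestamps are made up): a boolean gap mask's cumulative sum numbers the groups directly.</p>

```python
import pandas as pd

df = pd.DataFrame({"timestamp": [0, 400, 900, 200000, 200100]})
thresh = 1e5

gaps = df["timestamp"].diff() > thresh  # first row is NaN -> False
df["grouper"] = gaps.cumsum()           # group id increases at each big gap
print(df["grouper"].tolist())           # [0, 0, 0, 1, 1]
```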
| 0 | 2016-07-29T20:31:23Z | [
"python",
"pandas",
"time-series"
] |
Calculating time between text field interactions | 38,666,464 | <p>I have a dataset of text field interactions across several dozen users of my application across the span of several months. I'm trying to calculate the average time between keystrokes in pandas. The data look something like this:</p>
<pre><code>timestamp before_text after_text
1453481138188 NULL a
1453481138600 a ab
1453481138900 ab abc
1453481139400 abc abcd
1453484000000 Enter some numbers 1
1453484000100 1 12
1453484000600 12 123
</code></pre>
<p><code>timestamp</code> contains the unix time that the user pressed the key, <code>before_text</code> is the what the text field contained before the user hit the key, and <code>after_text</code> is what the field looked like after the keystroke.</p>
<p>What's the best way to go about doing this? I know that's not as simple as doing something like:</p>
<pre><code>(df["timestamp"] - df["timestamp"].shift()).mean()
</code></pre>
<p>because this will calculate a very large time difference on the boundary between two interactions. It seems like the best way to do this would be to pass some function of each row to <code>df.groupby</code> so that I can apply the above snippet to each row. If I had this <code>magic_function</code> I could do something like:</p>
<pre><code>df.groupby(magic_function).apply(lambda x: x["timestamp"] - x["timestamp"].shift()).mean()
</code></pre>
<p>What's a good way to implement <code>magic_function</code>, or am I thinking about this all wrong?</p>
| 4 | 2016-07-29T19:47:26Z | 38,667,058 | <p>I'd do it by calculating the text difference between 'before' and 'after'. If the difference is greater than some threshold, then that is a new session.</p>
<p>It requires <code>from Levenshtein import distance as ld</code>. I installed it via <code>pip</code> like so:</p>
<pre><code>pip install python-levenshtein
</code></pre>
<p>Then:</p>
<pre><code>from Levenshtein import distance as ld
import pandas as pd
# taking just these two columns and transposing and back filling.
# I back fill for one reason, to fill that pesky NA with after text.
before_after = df[['before_text', 'after_text']].T.bfill()
distances = before_after.apply(lambda x: ld(*x))
# threshold should be how much distance constitutes an obvious break in sessions.
threshold = 2
magic_function = (distances > threshold).cumsum()
df.groupby(magic_function) \
.apply(lambda x: x["timestamp"] - x["timestamp"].shift()) \
.mean()
362.4
</code></pre>
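<p>If installing python-levenshtein is not an option, a small pure-Python edit-distance function (a standard dynamic-programming sketch, O(len(a)*len(b))) can stand in for <code>ld</code>:</p>

```python
def ld(a, b):
    # classic Levenshtein distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(ld("abc", "abcd"))               # 1 -> small distance, same typing session
print(ld("12", "Enter some numbers"))  # large distance -> session boundary
```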
| 3 | 2016-07-29T20:33:21Z | [
"python",
"pandas",
"time-series"
] |
What is the necessity of plt.figure() in matplotlib for scatter plots? | 38,666,527 | <pre><code>plt.figure(figsize=(10,8))
plt.scatter(df['attacker_size'][df['year'] == 298],
# attacker size in year 298 as the y axis
df['defender_size'][df['year'] == 298],
# the marker as
marker='x',
# the color
color='b',
# the alpha
alpha=0.7,
# with size
s = 124,
# labelled this
label='Year 298')
</code></pre>
<p>In the above snippet of code collected from <a href="http://chrisalbon.com/python/matplotlib_simple_scatterplot.html" rel="nofollow">Scatterplot in Matplotlib</a>, what is the necessity of <code>plt.figure()</code>? </p>
| 2 | 2016-07-29T19:52:34Z | 38,666,557 | <p>It is not always necessary because a <code>figure</code> is implicitly created when you create a <code>scatter</code> plot; however, in the case you have shown, the figure is being created explicitly using <code>plt.figure</code> so that the figure will be a specific size rather than the default size.</p>
<p>The other option would be to use <code>gcf</code> to get the current figure after creating the <code>scatter</code> plot and set the figure size retrospectively:</p>
<pre><code># Create scatter plot here
plt.gcf().set_size_inches(10, 8)
</code></pre>
| 1 | 2016-07-29T19:54:35Z | [
"python",
"matplotlib"
] |
List of duplicate values in a dictionary python | 38,666,639 | <p>I have a dictionary that has filenames like 1.xml and then the DeviceIDs like 3 and 12.</p>
<pre><code>{'1.xml': ['3', '12'], '2.xml': ['23', '17'], '3.xml': ['1', '12']}
</code></pre>
<p>And I have code that compares the DeviceIDs and reports duplicates. Right now it only works when all of the files include the DeviceID.
When running this code: </p>
<pre><code>it = iter(dict.values())
intersection = set(next(it))
print(intersection)
for vals in it:
intersection &= set(vals)
</code></pre>
<p>it returns</p>
<pre><code>set()
</code></pre>
<p>because the DeviceID is only in the first and third files, but not in the second. Can someone help me modify this code so that it displays a DeviceID when it is a duplicate in only some of the files?</p>
| 2 | 2016-07-29T20:02:53Z | 38,666,804 | <p>The <code>set</code> intersection drops all the previous duplicates when a new value in the dictionary does not contain them. So instead of the <code>set</code>, you can use a <em>multiset</em> - <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a> - to get a count of the number of times each <em>DeviceID</em> appears in the <em>filename-deviceid</em> dictionary:</p>
<pre><code>from collections import Counter
d = {'1.xml': ['3', '12'], '2.xml': ['23', '17'], '3.xml': ['1', '12']}
c = Counter(i for val in d.values() for i in val)
print(c)
# Counter({'12': 2, '1': 1, '17': 1, '23': 1, '3': 1})
print(c.most_common(1))
# [('12', 2)]
</code></pre>
<p>If you have a large number of items and you're not sure of which number to pass to <code>most_common</code> in order to get the duplicated IDs, then you could use:</p>
<pre><code>dupe_ids = [id for id, count in c.items() if count > 1]
</code></pre>
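<p>Putting the pieces together on the example dictionary:</p>

```python
from collections import Counter

d = {'1.xml': ['3', '12'], '2.xml': ['23', '17'], '3.xml': ['1', '12']}
c = Counter(i for val in d.values() for i in val)
dupe_ids = [dev_id for dev_id, count in c.items() if count > 1]
print(dupe_ids)  # ['12'] -- the only DeviceID that appears more than once
```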
| 3 | 2016-07-29T20:14:21Z | [
"python",
"dictionary"
] |
List of duplicate values in a dictionary python | 38,666,639 | <p>I have a dictionary that has filenames like 1.xml and then the DeviceIDs like 3 and 12.</p>
<pre><code>{'1.xml': ['3', '12'], '2.xml': ['23', '17'], '3.xml': ['1', '12']}
</code></pre>
<p>And I have code that compares the DeviceIDs and reports duplicates. Right now it only works when all of the files include the DeviceID.
When running this code: </p>
<pre><code>it = iter(dict.values())
intersection = set(next(it))
print(intersection)
for vals in it:
intersection &= set(vals)
</code></pre>
<p>it returns</p>
<pre><code>set()
</code></pre>
<p>because the DeviceID is only in the first and third files, but not in the second. Can someone help me modify this code so that it displays a DeviceID when it is a duplicate in only some of the files?</p>
| 2 | 2016-07-29T20:02:53Z | 38,667,156 | <p>The answer posted by Moses is fewer lines of code, but this addresses your question more directly and might perform better, depending on the dataset:</p>
<p>The reason your code doesn't work is because rather than <code>&</code>-ing the intersections together, you actually want to take the union of all intersections. The following updates to your code illustrate how to do this:</p>
<pre><code>dev_ids = {'1.xml': ['3', '12'], '2.xml': ['23', '17'], '3.xml': ['1', '12']}
it = iter(dev_ids.values())
all_ids = set(next(it))
dups = set()
for vals in it:
vals_set = set(vals)
dups.update(all_ids.intersection(vals_set))
all_ids.update(vals_set)
print(dups)
</code></pre>
<p>As you can see, we accumulate all the IDs into a set - <code>.update()</code> is essentially an in-place union operation - and perform intersections on it as we go. Each intersection can be thought of as the "duplicates" contained in that file. We accumulate the duplicates into the variable <code>dups</code>, and this becomes our answer.</p>
| 1 | 2016-07-29T20:42:03Z | [
"python",
"dictionary"
] |
Running Fabric command in Seperate Process Group Hanging | 38,666,727 | <p>I'm having trouble understanding exactly why this is hanging. I have stripped down this example to the core components. I have a file, let's call it <code>do_ls.py</code></p>
<pre><code>import fabric.api
import time
host = "myhost.mydomain"
username = "username"
password = "password"
def main():
    with fabric.api.settings(host_string=host, user=username, password=password):
        result = fabric.api.run("ls")

if __name__ == "__main__":
    main()
</code></pre>
<p>If I run this command: <code>python do_ls.py</code> it will execute correctly. Now for the problem. I would like to run this in it's own process. So I have this file, let's call it <code>main.py</code></p>
<pre><code>import sys
import os
import logging
import subprocess as sp
import time
def main():
    logging.basicConfig(level=logging.DEBUG)
    cmd = [sys.executable, "/path/to/do_ls.py"]
    p = sp.Popen(cmd, preexec_fn=os.setpgrp)
    while p.poll() is None:
        print "Sleeping..."
        time.sleep(0.5)
    print "All Done."

if __name__ == "__main__":
    main()
</code></pre>
<p>Now if I run <code>python main.py</code> this will hang forever. The problem as far as I know is that I'm running the process in a subgroup (i.e. if I take out <code>preexec_fn=os.setpgrp</code> then it will work correctly). What I don't understand is, why this is the case. Especially given that the following works:</p>
<pre><code> cmd = ["ssh", "-q", "username@hostname", "ls"]
p = sp.Popen(cmd, preexec_fn=os.setpgrp)
</code></pre>
<p>Any insight would be greatly appreciated.</p>
| 2 | 2016-07-29T20:08:37Z | 38,790,749 | <p>Since the following lines work:</p>
<pre><code>cmd = ["ssh", "-q", "username@hostname", "ls"]
p = sp.Popen(cmd, preexec_fn=os.setpgrp)
</code></pre>
<p>but <code>main.py</code> continuously hangs, I assume that </p>
<pre><code>while p.poll() is None:
</code></pre>
<p>never evaluates to <code>False</code>. So <code>p.poll()</code> must always be returning <code>None</code>, possibly even after the process completes. A quick search returned <a href="http://bugs.python.org/issue2475" rel="nofollow">this conversation</a> on Python's bug reporting site. As per that conversation, try calling <code>sp.Popen()</code> with the (undocumented) <code>_deadstate='dead'</code> option:</p>
<blockquote>
<p>The problem is that <code>os.wait()</code> does not play nicely with <code>subprocess.py</code>.
<code>Popen.poll()</code> and <code>Popen.wait()</code> use <code>os.waitpid(pid, ...)</code> which will
raise OSError if pid has already been reported by <code>os.wait()</code>.
<code>Popen.poll()</code> swallows OSError and by default returns None.</p>
<p>You can (sort of) fix your program by using
<code>p.popen(_deadstate='dead')</code> in place of <code>p.popen()</code>. This will make
<code>poll()</code> return <code>'dead'</code> instead of <code>None</code> if OSError gets caught, but this
is undocumented.</p>
</blockquote>
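<p>The failure mode described above can be reproduced directly. This is a POSIX-only sketch with an arbitrary trivial child; the point is that once <code>os.wait()</code> has reaped the child, the <code>os.waitpid()</code> call that <code>poll()</code> performs internally fails with <code>ECHILD</code>:</p>

```python
import errno
import os
import subprocess as sp
import sys

# Spawn a trivial child and let os.wait() reap it before Popen can.
p = sp.Popen([sys.executable, "-c", "pass"])
pid, status = os.wait()  # blocks until the child exits, then reaps it

# This is essentially what Popen.poll() does internally. Because the
# child was already reaped above, the OS no longer knows about it.
try:
    os.waitpid(p.pid, os.WNOHANG)
    reaped_elsewhere = False
except OSError as e:
    reaped_elsewhere = (e.errno == errno.ECHILD)

print(reaped_elsewhere)
```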
| 1 | 2016-08-05T13:40:18Z | [
"python",
"subprocess",
"multiprocessing",
"fabric"
] |
Where is a Python built-in object's __enter__() and __exit__() defined? | 38,666,733 | <p>I've read that the object's __enter__() and __exit__() methods are called every time 'with' is used. I understand that for user-defined objects, you can define those methods yourself, but I don't understand how this works for built-in objects/functions like 'open' or even the testcases.</p>
<p>This code works as expected and I assume it closes the file with __exit__():</p>
<pre><code>with open('output.txt', 'w') as f:
    f.write('Hi there!')
</code></pre>
<p>or</p>
<pre><code>with self.assertRaises(ValueError):
    remove_driver(self.driver)  # self refers to a class that inherits from the default unittest.TestCase
</code></pre>
<p>Yet, there's no such __enter__() or __exit__() method on either object when I inspect it:</p>
<p><a href="http://i.stack.imgur.com/wRRsB.png" rel="nofollow"><img src="http://i.stack.imgur.com/wRRsB.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/swxCF.png" rel="nofollow"><img src="http://i.stack.imgur.com/swxCF.png" alt="enter image description here"></a></p>
<p>So how is 'open' working with 'with'? Shouldn't objects that support the context management protocol have __enter__() and __exit__() methods defined and inspectable?</p>
| 2 | 2016-07-29T20:09:03Z | 38,666,817 | <p>You're checking whether the <code>open</code> function itself or the <code>assertRaises</code> method itself has <code>__enter__</code> and <code>__exit__</code> methods, when you should be looking at what methods the return value has.</p>
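<p>For example, the <code>open</code> function itself has no <code>__exit__</code>, but the file object it returns does:</p>

```python
f = open('output.txt', 'w')

print(hasattr(open, '__exit__'))  # False: the open *function* itself
print(hasattr(f, '__exit__'))     # True: the file object it returned

f.close()
```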
| 2 | 2016-07-29T20:15:14Z | [
"python",
"with-statement",
"contextmanager",
"code-inspection"
] |
Where is a Python built-in object's __enter__() and __exit__() defined? | 38,666,733 | <p>I've read that the object's __enter__() and __exit__() methods are called every time 'with' is used. I understand that for user-defined objects, you can define those methods yourself, but I don't understand how this works for built-in objects/functions like 'open' or even the testcases.</p>
<p>This code works as expected and I assume it closes the file with __exit__():</p>
<pre><code>with open('output.txt', 'w') as f:
    f.write('Hi there!')
</code></pre>
<p>or</p>
<pre><code>with self.assertRaises(ValueError):
    remove_driver(self.driver)  # self refers to a class that inherits from the default unittest.TestCase
</code></pre>
<p>Yet, there's no such __enter__() or __exit__() method on either object when I inspect it:</p>
<p><a href="http://i.stack.imgur.com/wRRsB.png" rel="nofollow"><img src="http://i.stack.imgur.com/wRRsB.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/swxCF.png" rel="nofollow"><img src="http://i.stack.imgur.com/swxCF.png" alt="enter image description here"></a></p>
<p>So how is 'open' working with 'with'? Shouldn't objects that support the context management protocol have __enter__() and __exit__() methods defined and inspectable?</p>
| 2 | 2016-07-29T20:09:03Z | 38,666,820 | <p><code>open</code> is a function that returns a file object with the context methods, and <code>self.assertRaises</code> is a method that returns an object with the context methods, try checking the <code>dir</code> of their return value:</p>
<pre><code>>>> x = open(__file__, "r")
>>> x
<_io.TextIOWrapper name='test.py' mode='r' encoding='US-ASCII'>
>>> type(x)
<class '_io.TextIOWrapper'>
>>> "__exit__" in dir(x)
True
</code></pre>
| 5 | 2016-07-29T20:15:30Z | [
"python",
"with-statement",
"contextmanager",
"code-inspection"
] |
Where is a Python built-in object's __enter__() and __exit__() defined? | 38,666,733 | <p>I've read that the object's __enter__() and __exit__() methods are called every time 'with' is used. I understand that for user-defined objects, you can define those methods yourself, but I don't understand how this works for built-in objects/functions like 'open' or even the testcases.</p>
<p>This code works as expected and I assume it closes the file with __exit__():</p>
<pre><code>with open('output.txt', 'w') as f:
    f.write('Hi there!')
</code></pre>
<p>or</p>
<pre><code>with self.assertRaises(ValueError):
    remove_driver(self.driver)  # self refers to a class that inherits from the default unittest.TestCase
</code></pre>
<p>Yet, there's no such __enter__() or __exit__() method on either object when I inspect it:</p>
<p><a href="http://i.stack.imgur.com/wRRsB.png" rel="nofollow"><img src="http://i.stack.imgur.com/wRRsB.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/swxCF.png" rel="nofollow"><img src="http://i.stack.imgur.com/swxCF.png" alt="enter image description here"></a></p>
<p>So how is 'open' working with 'with'? Shouldn't objects that support the context management protocol have __enter__() and __exit__() methods defined and inspectable?</p>
| 2 | 2016-07-29T20:09:03Z | 38,666,836 | <p><code>open()</code> is a function. It <em>returns</em> something that has an <code>__enter__</code> and <code>__exit__</code> method. Look at something like this:</p>
<pre><code>>>> class f:
...     def __init__(self):
...         print 'init'
...     def __enter__(self):
...         print 'enter'
...     def __exit__(self, *a):
...         print 'exit'
...
>>> with f():
...     pass
...
init
enter
exit
>>> def return_f():
...     return f()
...
>>> with return_f():
...     pass
...
init
enter
exit
</code></pre>
<p>Of course, <code>return_f</code> itself does not have those methods, but what it returns does.</p>
| 7 | 2016-07-29T20:16:49Z | [
"python",
"with-statement",
"contextmanager",
"code-inspection"
] |
The most regex-y way to understand commutative operations? | 38,666,744 | <p>I want to parse both 1.05*f and f*1.05 to be equivalent things where f is a fixed letter, the number is any positive float and the * is always between the 'f' and the float (i.e. multiplication). If there is no multiplication, then that is ok too and 'f' as the entire string is ok - so the '1.05*' is optional. Note that 1.05*f*1.05 should not work. gf*1.05 should not work and f*1.05f should break.</p>
<p>I am using python. I am actually having a hard time getting the f*1.05 to work by itself because f*1.05f also works - when I put a dollar sign at the end of the option multiplication and float then nothing works.</p>
<pre><code>^f(\\*(\\d*[.])?\\d+)? # f*1.05 matches, but unfortunately so does f*1.05f
^f((\\*(\\d*[.])?\\d+)?)$ # the $ makes f*1.05f not match, but f*1.05 doesn't match either!
</code></pre>
<p>Really my question is about whether there is a clever way to make 1.05*f, f, and f*1.05 work all in one go without using a '|' operator to choose between the float being on the left or right.</p>
| 1 | 2016-07-29T20:10:02Z | 38,667,299 | <p>Negative look(ahead|behind)s to the rescue (with a final <code>$</code> anchor, so that trailing junk such as <code>f*1.05f</code> is rejected):</p>
<pre><code>import re

pattern = r'((?!.*f\*)[\d.]+\*)?f((?<!\*f)\*[\d.]+)?$'

s = """1.05*f*1.05
1.05*f
f*1.05
f
f*1.05f"""

for line in s.split("\n"):
    if re.match(pattern, line):
        print("yay")
    else:
        print("nay")
</code></pre>
<p>prints:</p>
<pre><code>nay
yay
yay
yay
nay
</code></pre>
<p><strong>Explanation:</strong> The pattern consists of two optional groups (the number groups) and an f in the middle. The left group has a negative lookahead in front of it, matching any sequence of characters, followed by an f and an asterisk. This lookahead will therefore not match if the f somewhere further down the string is followed by an asterisk. The whole group is optional (<code>?</code>). The <code>f</code> is then followed by the same thing again, but this time checking for a <code>*f</code> directly before it, using a negative lookbehind. If it finds that, the group won't match, which won't break the whole regex since it's again optional. Finally, the <code>$</code> anchors the match at the end of the string (<code>re.match</code> already anchors the start), so a string like <code>f*1.05f</code> cannot match partially.</p>
<p>I still don't understand why you would want that, a <code>|</code> is vastly superior.</p>
| -1 | 2016-07-29T20:52:33Z | [
"python",
"regex"
] |
The most regex-y way to understand commutative operations? | 38,666,744 | <p>I want to parse both 1.05*f and f*1.05 to be equivalent things where f is a fixed letter, the number is any positive float and the * is always between the 'f' and the float (i.e. multiplication). If there is no multiplication, then that is ok too and 'f' as the entire string is ok - so the '1.05*' is optional. Note that 1.05*f*1.05 should not work. gf*1.05 should not work and f*1.05f should break.</p>
<p>I am using python. I am actually having a hard time getting the f*1.05 to work by itself because f*1.05f also works - when I put a dollar sign at the end of the option multiplication and float then nothing works.</p>
<pre><code>^f(\\*(\\d*[.])?\\d+)? # f*1.05 matches, but unfortunately so does f*1.05f
^f((\\*(\\d*[.])?\\d+)?)$ # the $ makes f*1.05f not match, but f*1.05 doesn't match either!
</code></pre>
<p>Really my question is about whether there is a clever way to make 1.05*f, f, and f*1.05 work all in one go without using a '|' operator to choose between the float being on the left or right.</p>
| 1 | 2016-07-29T20:10:02Z | 38,667,353 | <p>With multi-line modifier on:</p>
<pre><code>^(?:f\*\d*\.\d+|\d*\.\d+\*f|f)$
</code></pre>
<p><a href="https://regex101.com/r/uU8uM0/2" rel="nofollow">Live demo</a></p>
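<p>The same alternation can also be checked string-by-string in Python; <code>re.MULTILINE</code> is only needed when matching inside one multi-line blob, as in the demo:</p>

```python
import re

# same pattern as above, applied per string
pattern = re.compile(r'^(?:f\*\d*\.\d+|\d*\.\d+\*f|f)$')

tests = ["1.05*f", "f*1.05", "f", "1.05*f*1.05", "f*1.05f"]
for s in tests:
    print(s, bool(pattern.match(s)))
# 1.05*f True
# f*1.05 True
# f True
# 1.05*f*1.05 False
# f*1.05f False
```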
| 1 | 2016-07-29T20:56:29Z | [
"python",
"regex"
] |
Using a boolean in returned tuple for "if" statement? | 38,666,852 | <p>Current situation:</p>
<pre><code>def isTooLarge(intValue):
    if intValue > 100: print "too large"; return True
    return False

if isTooLarge(101): break
</code></pre>
<p>Now I like to make the function more "library-friendly" by returning the errormessagetext instead of printing it:</p>
<pre><code>def isTooLarge(intValue):
    if intValue > 100: return True, "too large"
    return False

bln, str = isTooLarge(101)
if bln: specialprint(str); break
</code></pre>
<p>Any idea how I can evaluate it as an one-liner again? (something like "if ,str isTooLarge(101): specialprint(str); break" or what is the Python way here?</p>
<p>No problem to put the errormessage into a global variable like "lasterrormessage" and keep the rest as is.</p>
| 0 | 2016-07-29T20:17:52Z | 38,667,074 | <pre><code>def isTooLarge(intValue):
return "too large" if intValue > 100 else False
x = isTooLarge(101); x and print(x)
</code></pre>
<p>However, please don't do this. Exceptions exist for a reason. Putting things in one line simply for the sake of it makes your code hard to read.</p>
| 0 | 2016-07-29T20:34:28Z | [
"python",
"python-2.7"
] |
Using a boolean in returned tuple for "if" statement? | 38,666,852 | <p>Current situation:</p>
<pre><code>def isTooLarge(intValue):
    if intValue > 100: print "too large"; return True
    return False

if isTooLarge(101): break
</code></pre>
<p>Now I like to make the function more "library-friendly" by returning the errormessagetext instead of printing it:</p>
<pre><code>def isTooLarge(intValue):
    if intValue > 100: return True, "too large"
    return False

bln, str = isTooLarge(101)
if bln: specialprint(str); break
</code></pre>
<p>Any idea how I can evaluate it as an one-liner again? (something like "if ,str isTooLarge(101): specialprint(str); break" or what is the Python way here?</p>
<p>No problem to put the errormessage into a global variable like "lasterrormessage" and keep the rest as is.</p>
| 0 | 2016-07-29T20:17:52Z | 38,667,134 | <p>You can be much more "library friendly" by using errors the way they are meant to be used, as errors:</p>
<pre><code>def CheckNotTooLarge(intValue):
    if intValue > 100:
        raise ValueError("too large")  # or AssertionError
    return  # or maybe do something else?
</code></pre>
<p>then a user could use the error message completely separately using <code>try: except:</code></p>
<pre><code>try:
    CheckNotTooLarge(101)
except ValueError:
    traceback.print_exc()  # print error message
    # handle too large
else:
    pass  # handle not too large
</code></pre>
<p>I can see how this would quickly get annoying if you just want to check without handling errors, so I'd recommend having two functions: one that just returns a boolean with no extra work, and another that raises/returns the error text:</p>
<pre><code>def isTooLarge(intValue):
    return intValue > 100  # now this is a one liner!

def checkIsTooLarge(intValue):
    "uses isTooLarge to return an error text if the number is too large"
    if isTooLarge(intValue):
        return "too large"  # or raise ...
    else:
        return False
</code></pre>
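<p>Usage then looks like this (both helpers are redefined inline here so the snippet runs on its own):</p>

```python
def isTooLarge(intValue):
    return intValue > 100

def checkIsTooLarge(intValue):
    """uses isTooLarge to return an error text if the number is too large"""
    if isTooLarge(intValue):
        return "too large"
    else:
        return False

msg = checkIsTooLarge(101)
if msg:
    print(msg)             # too large

print(checkIsTooLarge(5))  # False
```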
| 2 | 2016-07-29T20:40:06Z | [
"python",
"python-2.7"
] |
python: how to run a program with a command line call (that takes a user's keystroke as input) from within another program? | 38,666,864 | <p>I can run one program by typing: <code>python enable_robot.py -e</code> in the command line, but I want to run it from within another program.</p>
<p>In the other program, I imported subprocess and had <code>subprocess.Popen(['enable_robot', 'baxter_tools/scripts/enable_robot.py','-e'])</code>, but I get an error message saying something about a callback. </p>
<p>If I comment out this line, the rest of my program works perfectly fine. </p>
<p>Any suggestions on how I could change this line to get my code to work or if I shouldn't be using subprocess at all? </p>
| 1 | 2016-07-29T20:18:39Z | 38,666,985 | <p>If <code>enable_robot.py</code> requires user input, probably it wasn't meant to run from another python script. you might want to import it as a module: <code>import enable_robot</code> and run the functions you want to use from there.</p>
<p>If you want to stick to the subprocess, you can pass input with <code>communicate</code>:</p>
<pre><code>p = subprocess.Popen(['python', 'baxter_tools/scripts/enable_robot.py', '-e'],
                     stdin=subprocess.PIPE)
p.communicate(input=b'whatever string\nnext line')
</code></pre>
<p><code>communicate</code> <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow">documentation</a>, <a href="http://stackoverflow.com/questions/163542/python-how-do-i-pass-a-string-into-subprocess-popen-using-the-stdin-argument">example</a>.</p>
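<p>Here is a self-contained sketch of the same pattern, using a throwaway child process that reads its stdin (the child code is purely illustrative):</p>

```python
import subprocess as sp
import sys

# Child script: read one line from stdin and echo it back.
child = "import sys; print('got: ' + sys.stdin.readline().strip())"

p = sp.Popen([sys.executable, "-c", child],
             stdin=sp.PIPE, stdout=sp.PIPE)
out, _ = p.communicate(input=b"hello\n")
print(out.decode().strip())  # got: hello
```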
| 0 | 2016-07-29T20:27:43Z | [
"python",
"ros"
] |
python: how to run a program with a command line call (that takes a user's keystroke as input) from within another program? | 38,666,864 | <p>I can run one program by typing: <code>python enable_robot.py -e</code> in the command line, but I want to run it from within another program.</p>
<p>In the other program, I imported subprocess and had <code>subprocess.Popen(['enable_robot', 'baxter_tools/scripts/enable_robot.py','-e'])</code>, but I get an error message saying something about a callback. </p>
<p>If I comment out this line, the rest of my program works perfectly fine. </p>
<p>Any suggestions on how I could change this line to get my code to work or if I shouldn't be using subprocess at all? </p>
| 1 | 2016-07-29T20:18:39Z | 38,667,617 | <p>Your program <code>enable_robot.py</code> should meet the following requirements:</p>
<ul>
<li>The first line must be a shebang line (starting with <code>#!</code>) indicating which program is used to interpret
the script. In this case, it is the path to the Python interpreter.</li>
<li>Your script should be executable</li>
</ul>
<h2>A very simple example. We have two python scripts: called.py and caller.py</h2>
<h3>Usage: caller.py will execute called.py using <code>subprocess.Popen()</code></h3>
<h3>File /tmp/called.py</h3>
<pre><code>#!/usr/bin/python
print("OK")
</code></pre>
<h3>File /tmp/caller.py</h3>
<pre><code>#!/usr/bin/python
import subprocess
proc = subprocess.Popen(['/tmp/called.py'])
</code></pre>
<h3>Make both executable:</h3>
<pre><code>chmod +x /tmp/caller.py
chmod +x /tmp/called.py
</code></pre>
<h3>caller.py output:</h3>
<blockquote>
<p>$ /tmp/caller.py </p>
<p>$ OK</p>
</blockquote>
| 0 | 2016-07-29T21:16:35Z | [
"python",
"ros"
] |
What is the inverse of the numpy cumsum function? | 38,666,924 | <p>If I have <code>z = cumsum( [ 0, 1, 2, 6, 9 ] )</code>, which gives me <code>z = [ 0, 1, 3, 9, 18 ]</code>, how can I get back to the original array <code>[ 0, 1, 2, 6, 9 ]</code> ?</p>
| 10 | 2016-07-29T20:23:38Z | 38,666,925 | <p>Here is code that does that. Order will apply the function that many times. If the order is negative, it returns cumsum that many times.</p>
<pre><code>import numpy as np

def inverse_cumsum(z, order=1):
    # main part
    temp = []
    for ind in xrange(len(z)):
        try:
            temp.append(z[::-1][ind] - z[::-1][ind+1])
        except IndexError:  # last element has no right neighbour
            temp.append(z[::-1][ind])
    # handles orders recursively
    if order > 1:
        z = inverse_cumsum(z)
        order -= 1
        return inverse_cumsum(z, order=order)
    elif order == 1:
        return np.array(temp)[::-1]
    elif order < 0:
        return forward_cumsum(z, order=-order)
    else:
        return z

def forward_cumsum(z, order=1):
    for i in range(order):
        z = np.cumsum(z)
    return z
</code></pre>
<p>If we use the function like so, inverse_cumsum( [ 0, 1, 3, 9, 18 ] ), it gives me [ 0, 1, 2, 6, 9 ], the original array before the cumsum above. </p>
<p>What happens if you apply the function again? Try it with order = 2. What if you apply the function multiple times with a much larger order, use np.linspace, and plot it? Try it!</p>
| 2 | 2016-07-29T20:23:38Z | [
"python",
"numpy",
"cumsum"
] |
What is the inverse of the numpy cumsum function? | 38,666,924 | <p>If I have <code>z = cumsum( [ 0, 1, 2, 6, 9 ] )</code>, which gives me <code>z = [ 0, 1, 3, 9, 18 ]</code>, how can I get back to the original array <code>[ 0, 1, 2, 6, 9 ]</code> ?</p>
| 10 | 2016-07-29T20:23:38Z | 38,666,977 | <pre><code>z[1:] -= z[:-1].copy()
</code></pre>
<p>Short and sweet, with no slow Python loops. We take views of all but the first element (<code>z[1:]</code>) and all but the last (<code>z[:-1]</code>), and subtract elementwise. The copy makes sure we subtract the original element values instead of the values we're computing.</p>
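<p>Round-tripping the example from the question:</p>

```python
import numpy as np

z = np.cumsum([0, 1, 2, 6, 9])
print(z.tolist())        # [0, 1, 3, 9, 18]

z[1:] -= z[:-1].copy()   # in-place inverse of cumsum
print(z.tolist())        # [0, 1, 2, 6, 9]
```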
| 10 | 2016-07-29T20:27:10Z | [
"python",
"numpy",
"cumsum"
] |
What is the inverse of the numpy cumsum function? | 38,666,924 | <p>If I have <code>z = cumsum( [ 0, 1, 2, 6, 9 ] )</code>, which gives me <code>z = [ 0, 1, 3, 9, 18 ]</code>, how can I get back to the original array <code>[ 0, 1, 2, 6, 9 ]</code> ?</p>
| 10 | 2016-07-29T20:23:38Z | 38,666,999 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html" rel="nofollow"><code>np.diff</code></a> to compute elements <code>1...N</code> which will take the difference between any two elements. This is the opposite of <code>cumsum</code>. The only difference is that <code>diff</code> will not return the first element, but the first element is the same in the original and <code>cumsum</code> output so we just re-use that value. </p>
<pre><code>orig = np.insert(np.diff(z), 0, z[0])
</code></pre>
<p>Rather than <code>insert</code>, you could also use <code>np.concatenate</code></p>
<pre><code>orig = np.concatenate((np.array(z[0]).reshape(1,), np.diff(z)))
</code></pre>
<p>We could also just copy and replace elements <code>1...N</code></p>
<pre><code>orig = z.copy()
orig[1:] = np.diff(z)
</code></pre>
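<p>For instance, round-tripping the question's example with the first variant:</p>

```python
import numpy as np

a = np.array([0, 1, 2, 6, 9])
z = np.cumsum(a)                        # array([ 0,  1,  3,  9, 18])

orig = np.insert(np.diff(z), 0, z[0])   # prepend the first element
print(orig.tolist())                    # [0, 1, 2, 6, 9]
print(np.array_equal(orig, a))          # True
```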
| 5 | 2016-07-29T20:28:31Z | [
"python",
"numpy",
"cumsum"
] |
What is the inverse of the numpy cumsum function? | 38,666,924 | <p>If I have <code>z = cumsum( [ 0, 1, 2, 6, 9 ] )</code>, which gives me <code>z = [ 0, 1, 3, 9, 18 ]</code>, how can I get back to the original array <code>[ 0, 1, 2, 6, 9 ]</code> ?</p>
| 10 | 2016-07-29T20:23:38Z | 40,024,880 | <p>My favorite:</p>
<pre><code>orig = np.r_[z[0], np.diff(z)]
</code></pre>
| 0 | 2016-10-13T15:16:05Z | [
"python",
"numpy",
"cumsum"
] |
Pandas NLTK tokenizing "unhashable type: 'list'" | 38,666,973 | <p>Following this example: <a href="https://www.linkedin.com/pulse/twitter-data-mining-python-gephi-case-synthetic-biology-mikko-dufva" rel="nofollow">Twitter data mining with Python and Gephi: Case synthetic biology</a></p>
<p><code>CSV to: df['Country', 'Responses']</code></p>
<pre><code>'Country'
Italy
Italy
France
Germany
'Responses'
"Loren ipsum..."
"Loren ipsum..."
"Loren ipsum..."
"Loren ipsum..."
</code></pre>
<ol>
<li>tokenize the text in 'Responses'</li>
<li>remove the 100 most common words (based on brown.corpus)</li>
<li>identify the remaining 100 most frequent words</li>
</ol>
<p>I can get through step 1 and 2, but get an error on step 3:</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
<p>I believe it's because I'm working in a dataframe and have made this (likely erronous) modification:</p>
<p>Original example:</p>
<pre><code>#divide to words
tokenizer = RegexpTokenizer(r'\w+')
words = tokenizer.tokenize(tweets)
</code></pre>
<p>My code:</p>
<pre><code>#divide to words
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
</code></pre>
<p>My full code:</p>
<pre><code>df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
words = df['tokenized_sents']
#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
Out: ['the',
',',
'.',
'of',
'and',
...]
#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
TypeError: unhashable type: 'list'
</code></pre>
<p>There are many questions on unhashable lists, but none that I understand to be quite the same.
Any suggestions? Thanks.</p>
<hr>
<p>TRACEBACK</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-164-a0d17b850b10> in <module>()
1 #keep only most common words
----> 2 fdist = FreqDist(words)
3 mostcommon = fdist.most_common(100)
4 mclist = []
5 for i in range(len(mostcommon)):
/home/*******/anaconda3/envs/*******/lib/python3.5/site-packages/nltk/probability.py in __init__(self, samples)
104 :type samples: Sequence
105 """
--> 106 Counter.__init__(self, samples)
107
108 def N(self):
/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in __init__(*args, **kwds)
521 raise TypeError('expected at most 1 arguments, got %d' % len(args))
522 super(Counter, self).__init__()
--> 523 self.update(*args, **kwds)
524
525 def __missing__(self, key):
/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in update(*args, **kwds)
608 super(Counter, self).update(iterable) # fast path when counter is empty
609 else:
--> 610 _count_elements(self, iterable)
611 if kwds:
612 self.update(kwds)
TypeError: unhashable type: 'list'
</code></pre>
| 1 | 2016-07-29T20:26:57Z | 38,667,176 | <p>The <code>FreqDist</code> function takes in an iterable of hashable objects (made to be strings, but it probably works with whatever). The error you're getting is because you pass in an iterable of lists. As you suggested, this is because of the change you made:</p>
<pre><code>df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
</code></pre>
<p>If I understand the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">Pandas apply function documentation</a> correctly, that line is applying the <code>nltk.word_tokenize</code> function to some series. <code>word-tokenize</code> returns a list of words.</p>
<p>As a solution, simply add the lists together before trying to apply <code>FreqDist</code>, like so:</p>
<pre><code>allWords = []
for wordList in words:
    allWords += wordList
FreqDist(allWords)
</code></pre>
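<p>As an aside, the flattening loop can equivalently be written with <code>itertools.chain</code>; a toy example with a made-up list of token lists, so it runs without nltk:</p>

```python
import itertools

# stand-in for df['tokenized_sents']: a list of token lists
words = [["the", "cat"], ["the", "dog"]]

allWords = list(itertools.chain.from_iterable(words))
print(allWords)  # ['the', 'cat', 'the', 'dog']
```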
<p>A more complete revision to do what you would like. If all you need is to identify the second set of 100, note that <code>mclist</code> will have that the second time.</p>
<pre><code>df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
lists = df['tokenized_sents']
words = []
for wordList in lists:
    words += wordList
#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
Out: ['the',
',',
'.',
'of',
'and',
...]
#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
mclist.append(mostcommon[i][0])
# mclist contains second-most common set of 100 words
words = [w for w in words if w in mclist]
# this will keep ALL occurrences of the words in mclist
</code></pre>
| 1 | 2016-07-29T20:43:31Z | [
"python",
"pandas",
"nltk"
] |
restarted computer and got: ImportError: No module named django.core.management | 38,667,260 | <p>I have been having some issues with gulp serving my files so I restarted my computer, upon going back to my project and starting the server I suddenly got the error: <code>ImportError: No module named django.core.management</code>. </p>
<p>I am working locally and in my files I can see the django folder - its path is: <code>MAMP/Library/lib/python2.7/site-packages/mysql/connector/django</code> </p>
<p>The full error looks like this: </p>
<pre><code> Message:
Command failed: /bin/sh -c ./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Details:
killed: false
code: 1
signal: null
cmd: /bin/sh -c ./manage.py runserver
stdout:
stderr: Traceback (most recent call last):
File "./manage.py", line 11, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
</code></pre>
<p>My manage.py looks like this:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import os
import sys
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "tckt.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
</code></pre>
<p>running which python gives me this:
<code>/usr/bin/python</code></p>
<p>I am not sure if I am running in a virtual enviornment or not. I am doing the front-end of this project, the enviornment was set up and installed by someone else for me - but running <code>python -c 'import sys; print sys.real_prefix' 2>/dev/null && INVENV=1 || INVENV=0</code> (as another post suggested to check if I was in a virtual enviornment) returned nothing.</p>
<p>I have looked through some of the other posts and see that some people have reinstalled, others have modified paths, others say NOT to edit the manage.py file - but since I am not really sure if the problem is the path or the install I am not sure how to proceed.If you need more info please let me know.</p>
| 1 | 2016-07-29T20:49:56Z | 38,667,411 | <p>You're missing python packages, which means your <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">VirtualEnv</a> isn't activated.</p>
<p><a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">VirtualEnv</a> creates a folder named <code>env</code> by default (though the name can be changed) which is where it stores the specific python installation and all its packages. Search for the <code>activate</code> bash script in your project folder. Once you locate it, you can source it.</p>
<pre><code>source ./env/bin/activate
</code></pre>
<p>In the interest of completeness, on Windows it would be a batch file.</p>
<pre><code>env\Scripts\activate.bat
</code></pre>
<p>You'll know you're in a virtualenv when your command prompt is prefixed by the env name, for example <code>(env) Macbook user$</code>.</p>
<p>You can now start your django test server.</p>
<pre><code>python manage.py runserver
</code></pre>
<p>To deactivate, simply type <code>deactivate</code> at any time in your command prompt. The <code>(env)</code> prefix on the prompt should disappear.</p>
| 2 | 2016-07-29T21:00:11Z | [
"python",
"mysql",
"django"
] |
Understanding Python Tuples and Reassignments | 38,667,266 | <p>Ok, I do not think this question has been answered here before.</p>
<p>I am wondering exactly how Python is executing this for loop. FYI this is a part of lesson 2 from 6.00SC MIT OCW:</p>
<pre><code>def evaluate_poly(poly, x):
    """ Computes the polynomial function for a given value x. Returns that value.

    Example:
    >>> poly = (0.0, 0.0, 5.0, 9.3, 7.0)  # f(x) = 7x^4 + 9.3x^3 + 5x^2
    >>> x = -13
    >>> print evaluate_poly(poly, x)  # f(-13) = 7(-13)^4 + 9.3(-13)^3 + 5(-13)^2
    180339.9

    poly: tuple of numbers, length > 0
    x: number
    returns: float """
    ans = 0.0
    for i in xrange(len(poly)):
        ans += poly[i] * (x ** i)
    return ans
</code></pre>
<p>Can anybody explain to me how this for loop is executing line by line? I understand the i variable is created to run 5 times (the length of the poly tuple), in which ans is being updated each iteration. Where I get confused is the reassignment of i each time through.</p>
<p>The third time through ans = 0.0 + (5) * x**(2)</p>
<p>It seems to me that poly[i] is grabbing the indexed number (5), but then x is multiplied to the power of i, which is now the index position itself (2). Which is exactly what it's supposed to do, however I cannot understand how i can seemingly be both the indexed number and the indexed position.</p>
<p>I am new to programming so any info at all will be a tremendous help. </p>
<p>Thanks so much!</p>
| -1 | 2016-07-29T20:50:20Z | 38,667,626 | <p>i is assigned to those numbers in the loop: 0,1,2,3,4 because xrange creates a range from 0 till the parameter minus 1. Parameter is len(poly) that returns 5 (the size of the array. Therefore i is assigned from 0 till 4(=5-1)</p>
<p>First iteration i equals 0:</p>
<p>poly[0] equals the first element of poly (0.0).</p>
<p>The formula then becomes:</p>
<pre><code>ans += poly[i] * (x ** i)
ans = ans + poly[i] * (x ** i)
ans = 0.0 + poly[0] * (-13 to the power of 0)
ans = 0.0 + 0.0 * (-13 to the power of 0)
ans = 0.0
</code></pre>
<p>Next iteration i equals 1:</p>
<pre><code>ans = ans + poly[i] * (x ** i)
ans = 0.0 + poly[1] * (-13 to the power of 1)
ans = 0.0 + 0.0 * (-13 to the power of 1)
ans = 0.0
</code></pre>
<p>Next iteration i equals 2:</p>
<pre><code>ans = ans + poly[i] * (x ** i)
ans = 0.0 + poly[2] * (-13 to the power of 2)
ans = 0.0 + 5.0 * (-13 to the power of 2)
</code></pre>
<p>Next iteration i equals 3:</p>
<pre><code>ans = ans + poly[i] * (x ** i)
ans = 5.0 * (-13 to the power of 2) + poly[3] * (-13 to the power of 3)
ans = 5.0 * (-13 to the power of 2) + 9.3 * (-13 to the power of 3)
</code></pre>
<p>Last iteration i equals 4:</p>
<pre><code>ans = ans + poly[i] * (x ** i)
ans = 5.0 * (-13 to the power of 2) + 9.3 * (-13 to the power of 3) + poly[4] * (-13 to the power of 4)
ans = 5.0 * (-13 to the power of 2) + 9.3 * (-13 to the power of 3) + 7.0 * (-13 to the power of 4)
</code></pre>
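<p>Putting it together, the whole loop collapses to a one-line sum that reproduces the value from the question's docstring:</p>

```python
poly = (0.0, 0.0, 5.0, 9.3, 7.0)
x = -13

# same computation as the loop, written as a generator expression
ans = sum(poly[i] * (x ** i) for i in range(len(poly)))
print(round(ans, 1))  # 180339.9
```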
| 1 | 2016-07-29T21:17:19Z | [
"python",
"tuples",
"variable-assignment"
] |
Getting List object is not callable error on python 2.7, matplotlib | 38,667,276 | <p>Here is my code:</p>
<pre><code>import numpy, math, time
from matplotlib import pyplot

def parse_FFT_hex(string):
    invert_arrays = True
    num_bits = 12
    numpy.array = []
    overflow = False
    ch_0_re = []; ch_0_im = []; ch_1_re = []; ch_1_im = []
    for i in range(0, len(string), 32):
        # if any number is greater than 2045 return 'False'
        ch_0_re_num = int(string[i+30] + string[i+31] + string[i+28], 16)
        if ch_0_re_num > (2**(num_bits-1)): ch_0_re_num = ch_0_re_num - (2**num_bits)
        ch_0_im_num = int(string[i+29] + string[i+26] + string[i+27], 16)
        if ch_0_im_num > (2**(num_bits-1)): ch_0_im_num = ch_0_im_num - (2**num_bits)
        ch_1_re_num = int(string[i+22] + string[i+23] + string[i+20], 16)
        if ch_1_re_num > (2**(num_bits-1)): ch_1_re_num = ch_1_re_num - (2**num_bits)
        ch_1_im_num = int(string[i+21] + string[i+18] + string[i+19], 16)
        if ch_1_im_num > (2**(num_bits-1)): ch_1_im_num = ch_1_im_num - (2**num_bits)
        if abs(ch_0_re_num) > 2045 or abs(ch_0_im_num) > 2045 or abs(ch_1_re_num) > 2045 or abs(ch_1_im_num) > 2045: overflow = True
        ch_0_re.append(ch_0_re_num); ch_0_im.append(ch_0_im_num); ch_1_re.append(ch_1_re_num); ch_1_im.append(ch_1_im_num)
        ch_0_re_num = int(string[i+14] + string[i+15] + string[i+12], 16)
        if ch_0_re_num > (2**(num_bits-1)): ch_0_re_num = ch_0_re_num - (2**num_bits)
        ch_0_im_num = int(string[i+13] + string[i+10] + string[i+11], 16)
        if ch_0_im_num > (2**(num_bits-1)): ch_0_im_num = ch_0_im_num - (2**num_bits)
        ch_1_re_num = int(string[i+6] + string[i+7] + string[i+4], 16)
        if ch_1_re_num > (2**(num_bits-1)): ch_1_re_num = ch_1_re_num - (2**num_bits)
        ch_1_im_num = int(string[i+5] + string[i+2] + string[i+3], 16)
        if ch_1_im_num > (2**(num_bits-1)): ch_1_im_num = ch_1_im_num - (2**num_bits)
        if abs(ch_0_re_num) > 2045 or abs(ch_0_im_num) > 2045 or abs(ch_1_re_num) > 2045 or abs(ch_1_im_num) > 2045: overflow = True
        ch_0_re.append(ch_0_re_num); ch_0_im.append(ch_0_im_num); ch_1_re.append(ch_1_re_num); ch_1_im.append(ch_1_im_num)
    if invert_arrays:
        temp = ch_0_re
        ch_0_re = ch_0_im
        ch_0_im = temp
        temp = ch_1_re
        ch_1_re = ch_1_im
        ch_1_im = temp
    ch_0 = 0.0; ch_1 = 0.0
    for i in range(len(ch_0_re)):
        ch_0 += (ch_0_re[i]**2) + (ch_0_im[i]**2)
        ch_1 += (ch_1_re[i]**2) + (ch_1_im[i]**2)
    ch_0_pow = 10 * math.log10((ch_0/len(ch_0_re))/(2**22))
    ch_1_pow = 10 * math.log10((ch_1/len(ch_1_re))/(2**22))
return [ch_0_pow, ch_1_pow, overflow]
powers = parse_FFT_hex(hex_string)
print powers
pyplot.figure()
pyplot.plot(powers[0], powers[1])
</code></pre>
<p>Running this gives me:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/salzda/Documents/Python Scripts/untitled1.py", line 82, in <module>
pyplot.figure()
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\matplotlib\pyplot.py", line 527, in figure
**kwargs)
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\matplotlib\backends\backend_qt4agg.py", line 45, in new_figure_manager
thisFig = FigureClass(*args, **kwargs)
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\matplotlib\figure.py", line 325, in __init__
self.dpi = dpi
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\matplotlib\figure.py", line 410, in _set_dpi
self.dpi_scale_trans.clear().scale(dpi, dpi)
File "C:\Users\salzda\AppData\Local\Continuum\Anaconda2\lib\site-packages\matplotlib\transforms.py", line 1965, in scale
np.float_)
TypeError: 'list' object is not callable
</code></pre>
<p>Even though the list prints fine and gives me the values I want, for some reason it can't do anything after.</p>
| -1 | 2016-07-29T20:51:02Z | 38,667,482 | <p>Your problem:</p>
<pre><code>numpy.array = []
</code></pre>
<p>Don't do that. You're turning <code>numpy.array</code> into a list. Remove that line and your code will work just fine, presumably.</p>
| 1 | 2016-07-29T21:05:49Z | [
"python",
"python-2.7",
"numpy",
"matplotlib"
] |
Using Concurrent.Futures.ProcessPoolExecutor to run simultaneous & independent ABAQUS models | 38,667,290 | <p>I wish to run a total of <strong><em>nAnalysis=25</em></strong> Abaqus models, each using X cores, and I can run <strong><em>nParallelLoops=5</em></strong> of these models concurrently. If one of the current 5 analyses finishes, another analysis should start, until all <strong><em>nAnalysis</em></strong> are completed.</p>
<p>I implemented the code below based on the solutions posted in <strong>1</strong> and <strong>2</strong>. However, I am missing something because all <strong><em>nAnalysis</em></strong> try to start at once, the code deadlocks, and no analysis ever completes, since many of them may want to use the same cores as an already started analysis.</p>
<ol>
<li><a href="http://stackoverflow.com/questions/9874042/using-pythons-multiprocessing-module-to-execute-simultaneous-and-separate-seawa">Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs</a></li>
<li><a href="http://stackoverflow.com/questions/37169336/how-to-parallelize-this-nested-loop-in-python-that-calls-abaqus">How to parallelize this nested loop in Python that calls Abaqus</a></li>
</ol>
<pre class="lang-py prettyprint-override"><code>def runABQfile(*args):
import subprocess
import os
inpFile,path,jobVars = args
prcStr1 = (path+'/runJob.sh')
process = subprocess.check_call(prcStr1, stdin=None, stdout=None, stderr=None, shell=True, cwd=path)
def safeABQrun(*args):
import os
try:
runABQfile(*args)
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,nAnalysis)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
</code></pre>
<p>So far, the only way I am able to run this is by modifying the <code>errFunction</code> to use exactly 5 analyses at a time, as below. However, this approach sometimes results in one of the analyses taking much longer than the other 4 in every group (every <code>ProcessPoolExecutor</code> call), and therefore the next group of 5 won't start despite the availability of resources (cores). Ultimately this results in more time to complete all 25 models.</p>
<pre class="lang-py prettyprint-override"><code>def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
# Group 1
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,5)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 2
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(5,10)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 3
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(10,15)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 4
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(15,20)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 5
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(20,25)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
</code></pre>
<p>I tried using the <code>as_completed</code> function but it seems not to work either. </p>
<p>Please can you help me figure out the proper parallelization so I can run all <strong><em>nAnalysis</em></strong> models, with always <strong><em>nParallelLoops</em></strong> of them running concurrently?
Your help is appreciated.
I am using Python 2.7.</p>
<p>Bests,
David P.</p>
<hr>
<p><strong>UPDATE JULY 30/2016</strong>:</p>
<p>I introduced a loop in <code>safeABQrun</code> that manages the 5 different "queues". The loop is necessary to avoid the case of an analysis trying to run in a node while another one is still running. Each analysis is pre-configured to run in one of the requested nodes before any actual analysis starts.</p>
<pre class="lang-py prettyprint-override"><code>def safeABQrun(*list_args):
import os
inpFiles,paths,jobVars = list_args
nA = len(inpFiles)
for k in range(0,nA):
args = (inpFiles[k],paths[k],jobVars[k])
try:
runABQfile(*args) # Actual Run Function
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(safeABQrun, inpF, aPth, jVrs), k) for inpF, aPth, jVrs, k in list_args) # 5Nodes
for f in as_completed(futures):
print("|=== Finish Process Train %d ===|" % futures[f])
if f.exception() is not None:
print('%r generated an exception: %s' % (futures[f], f.exception()))
</code></pre>
| 1 | 2016-07-29T20:51:57Z | 38,668,070 | <p>It looks OK to me, but I can't run your code as-is. How about trying something vastly simpler, then <em>add</em> things to it until "a problem" appears? For example, does the following show the kind of behavior you want? It does on my machine, but I'm running Python 3.5.2. You say you're running 2.7, but <code>concurrent.futures</code> didn't exist in Python 2 - so if you are using 2.7, you must be running someone's backport of the library, and perhaps the problem is in that. Trying the following should help to answer whether that's the case:</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor, wait, as_completed
def worker(i):
from time import sleep
from random import randrange
s = randrange(1, 10)
print("%d started and sleeping for %d" % (i, s))
sleep(s)
if __name__ == "__main__":
nAnalysis = 25
nParallelLoops = 5
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(worker, k), k) for k in range(nAnalysis))
for f in as_completed(futures):
print("got %d" % futures[f])
</code></pre>
<p>Typical output:</p>
<pre><code>0 started and sleeping for 4
1 started and sleeping for 1
2 started and sleeping for 1
3 started and sleeping for 6
4 started and sleeping for 5
5 started and sleeping for 9
got 1
6 started and sleeping for 5
got 2
7 started and sleeping for 6
got 0
8 started and sleeping for 6
got 4
9 started and sleeping for 8
got 6
10 started and sleeping for 9
got 3
11 started and sleeping for 6
got 7
12 started and sleeping for 9
got 5
...
</code></pre>
| 0 | 2016-07-29T21:59:02Z | [
"python",
"multiprocessing",
"python-multiprocessing",
"concurrent.futures",
"abaqus"
] |
Using Concurrent.Futures.ProcessPoolExecutor to run simultaneous & independent ABAQUS models | 38,667,290 | <p>I wish to run a total of <strong><em>nAnalysis=25</em></strong> Abaqus models, each using X cores, and I can run <strong><em>nParallelLoops=5</em></strong> of these models concurrently. If one of the current 5 analyses finishes, another analysis should start, until all <strong><em>nAnalysis</em></strong> are completed.</p>
<p>I implemented the code below based on the solutions posted in <strong>1</strong> and <strong>2</strong>. However, I am missing something because all <strong><em>nAnalysis</em></strong> try to start at once, the code deadlocks, and no analysis ever completes, since many of them may want to use the same cores as an already started analysis.</p>
<ol>
<li><a href="http://stackoverflow.com/questions/9874042/using-pythons-multiprocessing-module-to-execute-simultaneous-and-separate-seawa">Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs</a></li>
<li><a href="http://stackoverflow.com/questions/37169336/how-to-parallelize-this-nested-loop-in-python-that-calls-abaqus">How to parallelize this nested loop in Python that calls Abaqus</a></li>
</ol>
<pre class="lang-py prettyprint-override"><code>def runABQfile(*args):
import subprocess
import os
inpFile,path,jobVars = args
prcStr1 = (path+'/runJob.sh')
process = subprocess.check_call(prcStr1, stdin=None, stdout=None, stderr=None, shell=True, cwd=path)
def safeABQrun(*args):
import os
try:
runABQfile(*args)
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,nAnalysis)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
</code></pre>
<p>So far, the only way I am able to run this is by modifying the <code>errFunction</code> to use exactly 5 analyses at a time, as below. However, this approach sometimes results in one of the analyses taking much longer than the other 4 in every group (every <code>ProcessPoolExecutor</code> call), and therefore the next group of 5 won't start despite the availability of resources (cores). Ultimately this results in more time to complete all 25 models.</p>
<pre class="lang-py prettyprint-override"><code>def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
# Group 1
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,5)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 2
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(5,10)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 3
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(10,15)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 4
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(15,20)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 5
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(20,25)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
</code></pre>
<p>I tried using the <code>as_completed</code> function but it seems not to work either. </p>
<p>Please can you help me figure out the proper parallelization so I can run all <strong><em>nAnalysis</em></strong> models, with always <strong><em>nParallelLoops</em></strong> of them running concurrently?
Your help is appreciated.
I am using Python 2.7.</p>
<p>Bests,
David P.</p>
<hr>
<p><strong>UPDATE JULY 30/2016</strong>:</p>
<p>I introduced a loop in <code>safeABQrun</code> that manages the 5 different "queues". The loop is necessary to avoid the case of an analysis trying to run in a node while another one is still running. Each analysis is pre-configured to run in one of the requested nodes before any actual analysis starts.</p>
<pre class="lang-py prettyprint-override"><code>def safeABQrun(*list_args):
import os
inpFiles,paths,jobVars = list_args
nA = len(inpFiles)
for k in range(0,nA):
args = (inpFiles[k],paths[k],jobVars[k])
try:
runABQfile(*args) # Actual Run Function
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(safeABQrun, inpF, aPth, jVrs), k) for inpF, aPth, jVrs, k in list_args) # 5Nodes
for f in as_completed(futures):
print("|=== Finish Process Train %d ===|" % futures[f])
if f.exception() is not None:
print('%r generated an exception: %s' % (futures[f], f.exception()))
</code></pre>
| 1 | 2016-07-29T20:51:57Z | 38,677,673 | <p>I introduced a loop in <code>safeABQrun</code> that manages the 5 different "queues". The loop is necessary to avoid the case of an analysis trying to run in a node while another one is still running. Each analysis is pre-configured to run in one of the requested nodes before any actual analysis starts.</p>
<pre class="lang-py prettyprint-override"><code>def safeABQrun(*list_args):
import os
inpFiles,paths,jobVars = list_args
nA = len(inpFiles)
for k in range(0,nA):
args = (inpFiles[k],paths[k],jobVars[k])
try:
runABQfile(*args) # Actual Run Function
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(safeABQrun, inpF, aPth, jVrs), k) for inpF, aPth, jVrs, k in list_args) # 5Nodes
for f in as_completed(futures):
print("|=== Finish Process Train %d ===|" % futures[f])
if f.exception() is not None:
print('%r generated an exception: %s' % (futures[f], f.exception()))
</code></pre>
| 0 | 2016-07-30T19:19:16Z | [
"python",
"multiprocessing",
"python-multiprocessing",
"concurrent.futures",
"abaqus"
] |
function to return copy of np.array with some elements replaced | 38,667,350 | <p>I have a Numpy array and a list of indices, as well as an array with the values which need to go into these indices.</p>
<p>The quickest way I know how to achieve this is:</p>
<pre><code>In [1]: a1 = np.array([1,2,3,4,5,6,7])
In [2]: x = np.array([10,11,12])
In [3]: ind = np.array([2,4,5])
In [4]: a2 = np.copy(a1)
In [5]: a2.put(ind,x)
In [6]: a2
Out[6]: array([ 1, 2, 10, 4, 11, 12, 7])
</code></pre>
<p>Notice I had to make a copy of <code>a1</code>. What I'm using this for is to wrap a function which takes an array as input, so I can give it to an optimizer which will vary <em>some</em> of those elements.</p>
<p>So, ideally, I'd like to have something which returns a modified copy of the original, in one line, that works like this:</p>
<pre><code>a2 = np.replace(a1, ind, x)
</code></pre>
<p>The reason for that is that I need to apply it like so:</p>
<pre><code>def somefunction(a):
....
costfun = lambda x: somefunction(np.replace(a1, ind, x))
</code></pre>
<p>With <code>a1</code> and <code>ind</code> constant, that would then give me a costfunction which is only a function of x.</p>
<p>My current fallback solution is to define a small function myself:</p>
<pre><code>def replace(a1, ind, x):
a2 = np.copy(a1)
a2.put(ind,x)
return(a2)
</code></pre>
<p>...but this appears not very elegant to me.</p>
<p>=> Is there a way to turn that into a lambda function?</p>
| 4 | 2016-07-29T20:56:22Z | 38,667,778 | <p>Well you asked for a one-liner, here's one using sparse matrices with <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow"><code>Scipy's csr_matrix</code></a> -</p>
<pre><code>In [280]: a1 = np.array([1,2,3,4,5,6,7])
...: x = np.array([10,11,12])
...: ind = np.array([2,4,5])
...:
In [281]: a1+csr_matrix((x-a1[ind], ([0]*x.size, ind)), (1,a1.size)).toarray()
Out[281]: array([[ 1, 2, 10, 4, 11, 12, 7]])
</code></pre>
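<p>If pulling in scipy feels heavy, a plain-numpy alternative is fancy-index assignment on a copy, which condenses the fallback helper from the question (not a one-liner expression, but short and without <code>put</code>):</p>

```python
import numpy as np

def replace(a, ind, x):
    out = a.copy()   # leave the original untouched
    out[ind] = x     # fancy-index assignment instead of put()
    return out

a1 = np.array([1, 2, 3, 4, 5, 6, 7])
a2 = replace(a1, np.array([2, 4, 5]), np.array([10, 11, 12]))
print(a2)            # [ 1  2 10  4 11 12  7]
```

<p>This drops straight into the <code>costfun = lambda x: somefunction(replace(a1, ind, x))</code> pattern from the question.</p>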
| 2 | 2016-07-29T21:31:54Z | [
"python",
"python-2.7",
"numpy"
] |
Django rest framework GET all data but POST pk | 38,667,368 | <p>I am creating an API and trying to figure out how to return all data from my <code>Shipping</code> model into the GET method, but still sending the pk through the POST method.</p>
<p>I already read some solutions <a href="http://stackoverflow.com/questions/34278408/django-rest-framework-dynamic-serializer-relation-field-post-pk-but-get-hyper">here</a> and <a href="http://stackoverflow.com/questions/37213416/django-rest-framework-get-all-data-in-relation">here</a>, but it didn't solve my entire problem, at some point I need two different behaviors from my API, where the client need to send only the primary key.</p>
<p>I expect this in my GET <code>/Shipping</code>:</p>
<pre><code>[{
"pk": 1,
...
"city_of_origin": {
"pk": 1,
"name": "San Francisco",
"state": {
"pk": 1,
"initial": "CA",
"name": "California"
}
},
"destination_cities": [
{
"pk": 2,
"name": "San Jose",
"state": {
"pk": 1,
"initial": "CA",
"name": "California"
}
},{
"pk": 3,
"name": "Los Angeles",
"state": {
"pk": 1,
"initial": "CA",
"name": "California"
}
}
]
}]
</code></pre>
<p>And this in my POST:</p>
<pre><code>[{
"pk": 1,
...
"city_of_origin": 1,
"destination_cities": [2, 3]
}]
</code></pre>
<p>I've been trying to change my <code>serializers.py</code>:</p>
<pre><code>class StateSerializer(serializers.ModelSerializer):
class Meta:
model = State
class CitySerializer(serializers.ModelSerializer):
state = StateSerializer()
class Meta:
model = City
class ShippingSerializer(serializers.ModelSerializer):
city_of_origin = CitySerializer()
destination_cities = CitySerializer(many=True)
class Meta:
model = Shipping
</code></pre>
<p>It worked well returning all data; however, it changed all my API Root forms, forcing me to create a city and a state nested in my Shipping, where I had a dropdown menu with the cities before I changed my serializer. Showing this dropdown is the behavior I expect on the POST form.</p>
<p>Here it's my <code>models.py</code>:</p>
<pre><code>class Shipping(models.Model):
city_of_origin = models.ForeignKey(City, related_name='origin', default=None)
destination_cities = models.ManyToManyField(City, related_name='destiny', default=None)
class City(models.Model):
name = models.CharField(blank=False, null=True, max_length=255)
state = models.ForeignKey(State, null=True)
def __str__(self):
return self.name
class State(models.Model):
name = models.CharField(blank=False, null=True, max_length=255)
initial = models.CharField(max_length=2, blank=False, null=True)
def __str__(self):
return self.name
</code></pre>
<p>I want to thank in beforehand all the help you guys can provide me.</p>
<p>EDIT:
I'm using Django 1.9.5 and Django rest framework 3.3.3</p>
| 0 | 2016-07-29T20:57:25Z | 38,668,750 | <p>If <code>get</code> and <code>post</code> are being handled by the same rest api view, I think you are using something like a <code>ViewSet</code> (or an appropriately mixed GenericAPIView). Your <code>ViewSet</code> will use a different serializer for getting and for posting.</p>
<p>For getting/listing you will use the one you already created (let's rename it):</p>
<pre><code>class ShippingGetSerializer(serializers.ModelSerializer):
city_of_origin = CitySerializer()
destination_cities = CitySerializer(many=True)
class Meta:
model = Shipping
</code></pre>
<p>For posting:</p>
<pre><code>class ShippingPostSerializer(serializers.ModelSerializer):
    # DRF 3 requires a queryset for writable relational fields
    city_of_origin = serializers.PrimaryKeyRelatedField(queryset=City.objects.all())
    destination_cities = serializers.PrimaryKeyRelatedField(many=True, queryset=City.objects.all())
class Meta:
model = Shipping
</code></pre>
<p>Your ViewSet would have a definition of <code>get_serializer()</code> like this:</p>
<pre><code>class ShippingViewSet(viewsets.ModelViewSet):
queryset = Shipping.objects.all()
def get_serializer_class(self):
if self.request.method == 'POST':
return ShippingPostSerializer
return ShippingGetSerializer
</code></pre>
<p>If you are using two different views for the get and the post entry points, create each one and set its <code>serializer_class</code> attribute to the appropriate serializer, as shown above.</p>
| 1 | 2016-07-29T23:17:13Z | [
"python",
"django",
"django-rest-framework"
] |
Pytest plugin: Overriding pytest_runtest_call and friends | 38,667,429 | <p>I'm developing a test suite using pytest for a project of mine. Because of the nature of the project, I need to create a Pytest plugin that controls how the tests are being run; they are not run locally, but sent to a different process to run. (I know about <code>xdist</code> but I think it doesn't solve my problem.)</p>
<p>I've been writing my own Pytest plugin by overriding the various <code>pytest_runtest_*</code> methods. So far it's been progressing well. Here is where I've hit a wall: I want my implementations of <code>pytest_runtest_setup</code>, <code>pytest_runtest_call</code> and <code>pytest_runtest_teardown</code> to actually be responsible for doing the setup, call and teardown. They're going to do it in a different process. <strong>My problem is:</strong> After Pytest calls my <code>pytest_runtest_setup</code>, it also calls all the other <code>pytest_runtest_setup</code> down the line of plugins. This is because the hook specification for <code>pytest_runtest_setup</code> has <code>firstresult=False</code>.</p>
<p>I don't want this, because I don't want <code>pytest_runtest_setup</code> to actually run on the current process. I want to be responsible for running it on my own. I want to <strong>override</strong> how it's being run, not <strong>add</strong> to it. I want the other implementations of <code>pytest_runtest_setup</code> below my own to <strong>not</strong> be run.</p>
<p>How can I do this? </p>
| 7 | 2016-07-29T21:01:09Z | 38,823,263 | <p><a href="http://doc.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_protocol" rel="nofollow">Generic âruntestâ hooks</a></p>
<p>All runtest related hooks receive a pytest.Item object.</p>
<p>pytest_runtest_protocol(item, nextitem)[source]</p>
<pre><code>implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
Parameters:
    item - test item for which the runtest protocol is performed.
    nextitem - the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().
Return boolean:
True if no further hook implementations should be invoked.
</code></pre>
<p>pytest_runtest_setup(item)[source]</p>
<pre><code>called before pytest_runtest_call(item).
</code></pre>
<p>pytest_runtest_call(item)[source]</p>
<pre><code>called to execute the test item.
</code></pre>
<p>pytest_runtest_teardown(item, nextitem)[source]</p>
<pre><code>called after pytest_runtest_call.
Parameters: nextitem - the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
</code></pre>
<p>pytest_runtest_makereport(item, call)[source]</p>
<pre><code>return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.
</code></pre>
<p>For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.</p>
<p>The _pytest.terminal reporter specifically uses the reporting hook to print information about a test run.</p>
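<p>Building on the hook specs quoted above: since <code>pytest_runtest_protocol</code> stops further hook implementations when it returns <code>True</code>, a plugin can take over the whole setup/call/teardown itself instead of letting the default implementations run. A minimal sketch; <code>run_remotely</code> is a hypothetical placeholder for dispatching the item to the other process:</p>

```python
# conftest.py -- sketch; run_remotely is a hypothetical placeholder
def run_remotely(item):
    """Hypothetical stand-in: ship this item's whole
    setup/call/teardown to the worker process."""
    print("dispatched %r" % (item,))

def pytest_runtest_protocol(item, nextitem):
    run_remotely(item)
    return True  # non-None: pytest invokes no further hook implementations
```

<p>With the protocol hook short-circuited like this, the downstream <code>pytest_runtest_setup</code> implementations never run on the current process.</p>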
| 0 | 2016-08-08T07:14:55Z | [
"python",
"plugins",
"py.test"
] |
Python use regex to match characters for number, letter, space and brackets | 38,667,454 | <p>I know that to match only numbers, letters and spaces, I can use:</p>
<pre><code>re.match(r'[A-Za-z0-9 ]+$', word)
</code></pre>
<p>but I would like to also match brackets such as <code>()</code>, <code>{}</code>, <code>[]</code>. I extended the above regex to:</p>
<pre><code>re.match(r'[A-Z][a-z][0-9][ ][(][)][{][}][[][]]]+$', word)
</code></pre>
<p>but this does not work.</p>
<p>Any ideas what the problem is? Or is there any concise regex guide that I can refer to? </p>
| -2 | 2016-07-29T21:02:59Z | 38,667,470 | <p>The <code>r'[A-Z][a-z][0-9][ ][(][)][{][}][[][]]]+$'</code> matches a <em>sequence</em> of patterns defined:</p>
<ul>
<li><code>[A-Z]</code> - uppercase ASCII letters</li>
<li><code>[a-z]</code> - lowercase ASCII letters</li>
<li><code>[0-9]</code> - ASCII digits</li>
<li><code>[ ]</code> - a space</li>
<li><code>[(]</code> - a <code>(</code></li>
<li><code>[)]</code> - a <code>)</code></li>
<li><code>[{]</code> - a <code>{</code></li>
<li><code>[}]</code> - a <code>}</code></li>
<li><code>[[]</code> - a <code>[</code></li>
<li><code>[]]</code> - a <code>]</code></li>
<li><code>]+</code> - one or more <code>]</code>s (as it is a standalone quantified atom).</li>
<li><code>$</code> - end of string.</li>
</ul>
<p>It matches a string like <a href="https://regex101.com/r/wE3xJ2/1" rel="nofollow"><code>Aa0 (){}[]]</code></a></p>
<p>You just need to add them to the character class:</p>
<pre><code>re.match(r'[A-Za-z0-9 (){}[\]]+$', word)
^^^^^^^
</code></pre>
<p>Note that <code>(</code>, <code>)</code>, <code>{</code>, <code>}</code> and <code>[</code> do not require escaping inside the character class. The <code>]</code> does not have to be escaped when put at the character class start:</p>
<pre><code>re.match(r'[][A-Za-z0-9 (){}]+$', word)
</code></pre>
<p>See the <a href="http://ideone.com/KVYbb7" rel="nofollow">Python demo</a></p>
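<p>A quick sanity check of the extended class:</p>

```python
import re

pattern = re.compile(r'[A-Za-z0-9 (){}[\]]+$')
print(bool(pattern.match('Aa0 (){}[]')))  # True: every character is in the class
print(bool(pattern.match('price: $5')))   # False: ':' and '$' are not allowed
```
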
| 4 | 2016-07-29T21:04:20Z | [
"python",
"regex"
] |
Python use regex to match characters for number, letter, space and brackets | 38,667,454 | <p>I know that to match only numbers, letters and spaces, I can use:</p>
<pre><code>re.match(r'[A-Za-z0-9 ]+$', word)
</code></pre>
<p>but I would like to also match brackets such as <code>()</code>, <code>{}</code>, <code>[]</code>. I extended the above regex to:</p>
<pre><code>re.match(r'[A-Z][a-z][0-9][ ][(][)][{][}][[][]]]+$', word)
</code></pre>
<p>but this does not work.</p>
<p>Any ideas what the problem is? Or is there any concise regex guide that I can refer to? </p>
| -2 | 2016-07-29T21:02:59Z | 38,669,326 | <p>In non-Java character classes, the only special characters are the escape <code>\</code>,<br>
the dash <code>-</code>, the caret <code>^</code>, and the closing bracket <code>]</code>. </p>
<p>Of those - </p>
<ul>
<li><p>If the caret is not at the beginning, it's a literal;<br>
at the beginning, it's a negation.</p></li>
<li><p>If the dash is at the beginning or end, it's a literal;<br>
in the middle, it's a range operator.</p></li>
<li><p>If the closing bracket is at the beginning, or just to the right of a caret at the beginning, it is a literal;<br>
if no other closing bracket follows it, the class is unclosed, which is an error.</p></li>
</ul>
<p>By the time you figure this all out your head will explode. </p>
<p>Save yourself from that and follow these simple rules: </p>
<p>Always escape literal brackets <code>[</code> and <code>]</code> no matter where they are.<br>
Always escape literal dash <code>-</code> no matter where it is.<br>
Always put a literal caret <code>^</code> on the right side of all other characters. </p>
<p>Nothing else needs to be escaped.</p>
<p>Examples: <code>[a-z(){}\-\[\]^]</code> and <code>[^a-z(){}\-\[\]^]</code></p>
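<p>For example, classes written by these rules compile and behave as expected:</p>

```python
import re

keep = re.compile(r'[a-z(){}\-\[\]^]+')   # matches runs of the listed characters
drop = re.compile(r'[^a-z(){}\-\[\]^]+')  # matches runs of everything else
print(keep.findall('ab-[x]^ 123'))        # ['ab-[x]^']
print(drop.findall('ab-[x]^ 123'))        # [' 123']
```
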
| 0 | 2016-07-30T00:48:27Z | [
"python",
"regex"
] |
Pixely Rendering of latex using matplotlib | 38,667,565 | <p>In various cases whenever I use latex in matplotlib, I am getting a very pixely appearance when rendering the figure to an image. When I view the figure in interactive mode it looks fine. For example, I'm setting the yaxis label with:</p>
<p>'Emissions Flux '+r'($\mathregular{(\mu g/m^2 s)}$'</p>
<p>I'm also setting the twin y axis to a log scale and the exponents are presumably latex as well. Non-latex text is crisp. </p>
<p><a href="http://i.stack.imgur.com/A9RhE.png" rel="nofollow"><img src="http://i.stack.imgur.com/A9RhE.png" alt="enter image description here"></a></p>
| 1 | 2016-07-29T21:12:37Z | 38,687,504 | <p>As far as I understand the problem, your images are too pixelated. Often this is the result of saving an image in a bitmap format. To get better images, one should export them as vector graphics, for example <code>pdf</code>.</p>
<p>To export images as vector graphics, your save statement should be something like:</p>
<pre><code>myfig.savefig('myfig.pdf', format='pdf')
</code></pre>
<p>A clear explanation about bitmaps and vector graphics: <a href="http://www.prepressure.com/library/file-formats/bitmap-versus-vector" rel="nofollow">http://www.prepressure.com</a>,
and an important source of information concerning matplotlib: <a href="http://matplotlib.org/faq/howto_faq.html" rel="nofollow">http://matplotlib.org</a> </p>
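<p>A self-contained sketch of the idea (the y-label is borrowed from the question; the <code>dpi</code> line is an extra, hedged suggestion for cases where a bitmap such as <code>png</code> is unavoidable):</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe for scripted saving
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_ylabel(r'Emissions Flux ($\mathregular{\mu g/m^2 s}$)')

fig.savefig('myfig.pdf', format='pdf')  # vector output: scales without pixelation
fig.savefig('myfig.png', dpi=300)       # bitmap fallback: higher dpi means less pixelation
```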
| 1 | 2016-07-31T19:09:03Z | [
"python",
"matplotlib",
"latex"
] |
Python and R: how do you show a plot with Pyper in a Jupyter notebook? | 38,667,572 | <p>I am creating a report using Jupyter. Most of my code is in Python, but I need to use some R functionalities. </p>
<p>I use a package called Pyper to call R from Python. It works well, but I could not figure out how to display a plot made in R (via Pyper) in the Jupyter notebook. Everything seems to work well, but Jupyter does not show the plot. </p>
<p>Here's my test code:</p>
<pre><code>In [17]: from pyper import *
r = R()
r("library(TSA)")
r("data(star)")
r("periodogram(star)")
</code></pre>
<p>And this is the output from Jupyter (without the periodogram plot):</p>
<pre><code>Out[17]: 'try({periodogram(star)})\n'
</code></pre>
| 2 | 2016-07-29T21:13:14Z | 38,667,975 | <p>I have found a workaround if anyone is using Pyper and wants to add a plot to Jupyter:</p>
<pre><code>from pyper import *
r = R()
r("library(TSA)")
r("data(star)")
# Save the figure
r("png('rplot.png');periodogram(star);dev.off()")
# Display the figure in the Jupyter notebook
from IPython.display import Image
Image("rplot.png",width=600,height=400)
</code></pre>
| 1 | 2016-07-29T21:50:22Z | [
"python",
"jupyter",
"rpy2",
"pyper"
] |
wxPython dynamically add pages to wizard | 38,667,611 | <p>I have been working on developing a wxPython-based wizard which I would like to be capable of dynamically increasing in size based on input provided within the wizard itself. This wizard progresses through a series of pages and then prompts the user to enter a number. The goal is to have the wizard then grow by the number entered in the TextCtrl box. I am having difficulty accessing the pageList list within the wizard class responsible for managing the top-level aspects of the wizard. With the following code:</p>
<pre><code>import wx
import wx.wizard as wiz
########################################################################
#----------------------------------------------------------------------
# Wizard Object which contains the list of wizard pages.
class DynaWiz(object):
def __init__(self):
wizard = wx.wizard.Wizard(None, -1, "Simple Wizard")
self.pageList = [TitledPage(wizard, "Page 1"),
TitledPage(wizard, "Page 2"),
TitledPage(wizard, "Page 3"),
TitledPage(wizard, "Page 4"),
AddPage(wizard)]
for i in range(len(self.pageList)-1):
wx.wizard.WizardPageSimple.Chain(self.pageList[i],self.pageList[i+1])
wizard.FitToPage(self.pageList[0])
wizard.RunWizard(self.pageList[0])
wizard.Destroy()
#----------------------------------------------------------------------
#generic wizard pages
class TitledPage(wiz.WizardPageSimple):
def __init__(self, parent, title):
"""Constructor"""
wiz.WizardPageSimple.__init__(self, parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
title = wx.StaticText(self, -1, title)
title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
sizer.Add(title, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)
#----------------------------------------------------------------------
# page used to identify number of pages to add
class AddPage(wiz.WizardPageSimple):
def __init__(self,parent):
self.parent = parent
"""Constructor"""
wiz.WizardPageSimple.__init__(self, parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
self.numPageAdd = wx.TextCtrl(self, -1, "")
self.verifyButton = wx.Button(self, id=wx.ID_ANY, label = "Confirm",name = "confirm")
self.verifyButton.Bind(wx.EVT_BUTTON, self.append_pages)
sizer.Add(self.numPageAdd, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
sizer.Add(self.verifyButton,0,wx.ALIGN_CENTER|wx.ALL, 5)
sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)
#function used to add pages to pageList inside of Wizard Object containing
# this page
def append_pages(self,event):
n = int(self.numPageAdd.GetValue())
for i in range(n):
#Add n number of pages to wizard list "pageList" here....
self.parent.pageList.append(TitledPage(wizard, "Added Page"))
#----------------------------------------------------------------------
if __name__ == "__main__":
app = wx.App(False)
dWiz = DynaWiz()
app.MainLoop()
</code></pre>
<p>Using this code generated the following error message:</p>
<blockquote>
<p>AttributeError: 'Wizard' object has no attribute 'pageList'</p>
</blockquote>
<p>And I understand why that is, because ultimately the parent of the page is the Wizard object and not the DynaWiz object. That being said, is there a way to access the pageList list in the DynaWiz object AND ensure that the current wizard gets reloaded from within the event in the AddPage class?</p>
| 0 | 2016-07-29T21:16:22Z | 38,689,077 | <p>You could just pass the Dynawiz instance to AddPage's constructor. Then AddPage can modify pageList. See below:</p>
<pre><code>import wx
import wx.wizard as wiz
########################################################################
#----------------------------------------------------------------------
# Wizard Object which contains the list of wizard pages.
class DynaWiz(object):
def __init__(self):
wizard = wx.wizard.Wizard(None, -1, "Simple Wizard")
self.pageList = [TitledPage(wizard, "Page 1"),
TitledPage(wizard, "Page 2"),
TitledPage(wizard, "Page 3"),
TitledPage(wizard, "Page 4"),
AddPage(wizard, self)]
for i in range(len(self.pageList)-1):
wx.wizard.WizardPageSimple.Chain(self.pageList[i],self.pageList[i+1])
wizard.FitToPage(self.pageList[0])
wizard.RunWizard(self.pageList[0])
wizard.Destroy()
#----------------------------------------------------------------------
#generic wizard pages
class TitledPage(wiz.WizardPageSimple):
def __init__(self, parent, title):
"""Constructor"""
wiz.WizardPageSimple.__init__(self, parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
title = wx.StaticText(self, -1, title)
title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
sizer.Add(title, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)
#----------------------------------------------------------------------
# page used to identify number of pages to add
class AddPage(wiz.WizardPageSimple):
def __init__(self,parent,dynawiz):
self.parent = parent
self.dynawiz = dynawiz
"""Constructor"""
wiz.WizardPageSimple.__init__(self, parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
self.numPageAdd = wx.TextCtrl(self, -1, "")
self.verifyButton = wx.Button(self, id=wx.ID_ANY, label = "Confirm",name = "confirm")
self.verifyButton.Bind(wx.EVT_BUTTON, self.append_pages)
sizer.Add(self.numPageAdd, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
sizer.Add(self.verifyButton,0,wx.ALIGN_CENTER|wx.ALL, 5)
sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)
#function used to add pages to pageList inside of Wizard Object containing
# this page
def append_pages(self,event):
n = int(self.numPageAdd.GetValue())
for i in range(n):
#Add n number of pages to wizard list "pageList" here....
self.dynawiz.pageList.append(TitledPage(self.parent, "Added Page"))
wx.wizard.WizardPageSimple.Chain(self.dynawiz.pageList[-2],self.dynawiz.pageList[-1])
self.parent.FindWindowById(wx.ID_FORWARD).SetLabel("Next >")
#----------------------------------------------------------------------
if __name__ == "__main__":
app = wx.App(False)
dWiz = DynaWiz()
app.MainLoop()
</code></pre>
| 0 | 2016-07-31T22:59:57Z | [
"python",
"wxpython",
"wizard",
"dynamically-generated"
] |
Runtime error in Scikit-learn during import | 38,667,659 | <p>I am new to Python. So, sorry in advance if this sounds silly but I couldn't find an understandable solution in the forum. I am trying to run my programs in Pycharm and recently changed it from Python 3.5 to Python 2.7.12. After doing so I have started getting the below error while importing from Scikit-learn:</p>
<pre><code>File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/__check_build/__init__.py", line 46, in <module>
raise_build_error(e)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/__check_build/__init__.py", line 41, in raise_build_error
%s""" % (e, local_dir, ''.join(dir_content).strip(), msg))
ImportError: dynamic module does not define init function (init_check_build)
___________________________________________________________________________
Contents of /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/__check_build:
__init__.py __init__.pyc _check_build.so
setup.py setup.pyc
___________________________________________________________________________
It seems that scikit-learn has not been built correctly.
If you have installed scikit-learn from source, please do not forget
to build the package before using it: run `python setup.py install` or
`make` in the source directory.
If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform.
Process finished with exit code 1
</code></pre>
<p>I am using Pycharm IDE 2016.1 on Mac OS with Python 2.7.12. Please let me know how I can get around this.</p>
<p>Thanks</p>
| 0 | 2016-07-29T21:20:04Z | 38,667,753 | <p>It looks like you're still running out of <code>lib/python2.7/site-packages/</code>, per your error message. You need to configure your interpreter to use Anaconda for Python 3. If you've installed Anaconda correctly, you should be able to go to Settings -> NameOfYourProject -> Project Interpreter. Change the Interpreter to point to your Anaconda 3.x stack.</p>
<p>Also, make sure you download/install Anaconda 3.x (not just Python). Anaconda 3 comes with the Python 3 interpreter, so you'll just need to install the most recent version and you should be able to find it in the Pycharm drop down.</p>
| 1 | 2016-07-29T21:29:14Z | [
"python",
"python-2.7",
"python-3.x",
"scikit-learn",
"pycharm"
] |
Django authentication LDAP unable to find user | 38,667,685 | <p>I'm working on a Django project and I'm trying to authenticate my app against an LDAP server.
settings.py:</p>
<pre><code>AUTH_LDAP_SERVER_URI = "ldap://domain.local"
AUTH_LDAP_BIND_DN = "domain\django"
AUTH_LDAP_BIND_PASSWORD = "<Password>"
AUTH_LDAP_USER_SEARCH = LDAPSearch("cn=Users,dc=domain,dc=local",
ldap.SCOPE_SUBTREE, "(uid=%(user)s)")
AUTHENTICATION_BACKENDS = (
'django_auth_ldap.backend.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
)
</code></pre>
<p>Code:</p>
<pre><code>from django_auth_ldap.backend import LDAPBackend
auth = LDAPBackend()
auth.authenticate(username="omers", password="<password>")
</code></pre>
<p>For now I'm just using the shell</p>
<p>When I run tcpdump I see the LDAP packet, but for some reason the LDAP server can't find my user even though I know it exists. What am I missing?</p>
<p>Thanks!</p>
| 0 | 2016-07-29T21:22:19Z | 38,667,777 | <p>Ok,
I found the answer: my AD uses CN rather than UID, so instead of</p>
<pre><code>AUTH_LDAP_USER_SEARCH = LDAPSearch("cn=Users,dc=domain,dc=local",
ldap.SCOPE_SUBTREE, "(uid=%(user)s)")
</code></pre>
<p>I used</p>
<pre><code>AUTH_LDAP_USER_SEARCH = LDAPSearch("cn=Users,dc=domain,dc=local",
ldap.SCOPE_SUBTREE, "(cn=%(user)s)")
</code></pre>
| 0 | 2016-07-29T21:31:24Z | [
"python",
"django",
"ldap"
] |
Python tkinter password checker gui - hashing issue | 38,667,724 | <p>I've almost finished my GUI, which is supposed to check an entered password for strength (i.e. how long it is, upper and lowercase letters, special characters, etc.),
hash that password with MD5, and store the hash in a text file. The user would then re-enter the password, it would be re-hashed, and the text file checked to see whether that hash is in there. However, I can't seem to get the re-entered password to hash correctly and use it to check against the file.</p>
<p>My complete code:</p>
<pre><code>from tkinter import *
import hashlib
import os
import re
myGui = Tk()
myGui.geometry('500x400+700+250')
myGui.title('Password Generator')
guiFont = font = dict(family='Courier New, monospaced', size=18, color='#7f7f7f')
guiFont2 = font1 = dict(family='Courier New, monospaced', size=18, color='9400d3')
#====== Password Entry ==========
eLabel = Label(myGui, text="Please Enter you Password: ", font=guiFont)
eLabel.grid(row=0, column=0)
ePassword = Entry(myGui, show="*")
ePassword.grid(row=0, column=1)
#====== Strength Check =======
def checkPassword():
strength = ['Password can not be Blank', 'Very Weak', 'Weak', 'Medium', 'Strong', 'Very Strong']
score = 1
password = ePassword.get()
if len(password) == 0:
passwordStrength.set(strength[0])
return
if len(password) < 4:
passwordStrength.set(strength[1])
return
if len(password) >= 8:
score += 1
if re.search("[0-9]", password):
score += 1
if re.search("[a-z]", password) and re.search("[A-Z]", password):
score += 1
if re.search(".", password):
score += 1
passwordStrength.set(strength[score])
passwordStrength = StringVar()
checkStrBtn = Button(myGui, text="Check Strength", command=checkPassword, height=2, width=25, font=guiFont)
checkStrBtn.grid(row=2, column=0)
checkStrLab = Label(myGui, textvariable=passwordStrength, font=guiFont2)
checkStrLab.grid(row=2, column=1, sticky=W)
#====== Hash the Password ======
def passwordHash():
hash_obj1 = hashlib.md5()
pwmd5 = ePassword.get().encode('utf-8')
hash_obj1.update(pwmd5)
md5pw.set(hash_obj1.hexdigest())
md5pw = StringVar()
hashBtn = Button(myGui, text="Generate Hash", command=passwordHash, height=2, width=25, font=guiFont)
hashBtn.grid(row=3, column=0)
hashLbl = Label(myGui, textvariable=md5pw, font=guiFont2)
hashLbl.grid(row=3, column=1, sticky=W)
#====== Log the Hash to a file =======
def hashlog():
loghash = md5pw.get()
if os.path.isfile('password_hash_log.txt'):
obj1 = open('password_hash_log.txt', 'a')
obj1.write(loghash)
obj1.write("\n")
obj1.close()
else:
obj2 = open('password_hash_log.txt', 'w')
obj2.write(loghash)
obj2.write("\n")
obj2.close()
btnLog = Button(myGui, text="Log Hash", command=hashlog, height=2, width=25, font=guiFont)
btnLog.grid(row=4, column=0)
#====== Re enter password and check against stored hash ======
def verifyHash():
hashinput = vHash.get()
hashobj2 = hashlib.md5(hashinput.encode('utf-8')).hexidigest()
with open('password_hash_log.txt') as obj3:
for line in obj3:
line = line.rstrip()
if line == hashobj2:
output.set("Password Match")
else:
output.set("Passwords do not match try again")
output = StringVar()
lblVerify = Label(myGui, text="Enter Password to Verify: ", font=guiFont)
lblVerify.grid(row=5, column=0, sticky=W)
vHash = Entry(myGui, show="*")
vHash.grid(row=5, column=1)
vBtn = Button(myGui, text="Verify Password", command=verifyHash, height=2, width=25, font=guiFont)
vBtn.grid(row=6, column=0)
vLbl = Label(myGui, textvariable=output, font=guiFont2)
vLbl.grid(row=6, column=1, sticky=W)
myGui.mainloop()
</code></pre>
<p>I'm so close to finishing what I need to do, so any help would be very much appreciated.</p>
| 0 | 2016-07-29T21:25:53Z | 38,676,467 | <p>In my previous example nothing was actually passed through <code>hexdigest</code> to produce the hash (note the <code>hexidigest</code> typo in the question's code). I have amended the last part of the code and the GUI now works as planned.</p>
<p>The new code:</p>
<pre><code>def verifyHash():
hash_obj2 = hashlib.md5()
pwmd52 = vHash.get().encode('utf-8')
hash_obj2.update(pwmd52)
md5pw2.set(hash_obj2.hexdigest())
with open('password_hash_log.txt') as obj3:
for line in obj3:
line = line.rstrip()
if line == md5pw2.get():
output.set("Password Match")
else:
output.set("Passwords do not match try again")
md5pw2 = StringVar()
</code></pre>
<p>Now md5pw2 holds the hashed value, and verifyHash checks each line of the text file to see if that hash is in there, returning "Password Match" if it finds it.</p>
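<p>The hash/verify round trip itself, stripped of the tkinter code, boils down to the following (illustrative only; MD5 matches the question, but it is not suitable for storing real passwords):</p>

```python
import hashlib

def md5_hex(password):
    # Encode to bytes, hash, and return the hex string that gets logged
    return hashlib.md5(password.encode('utf-8')).hexdigest()

stored = md5_hex('s3cret')
print(md5_hex('s3cret') == stored)  # True: same password, same hash
print(md5_hex('other') == stored)   # False: different password, different hash
```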
| 0 | 2016-07-30T17:07:43Z | [
"python",
"user-interface",
"hash",
"tkinter",
"md5"
] |
Matplotlib Histogram scale y-axis by a constant factor | 38,667,728 | <p>I'm plotting some values in a histogram and want to scale the y-axis, but I only find ways to either normalize the y-axis values or scale them logarithmically.
My values are in 100 ps timesteps and I want to multiply every y-axis value by 0.1 to get a nicer, easier-to-understand ns step size.</p>
<p>How can I scale the y-axis values in a histogram ? </p>
<pre><code>n, bins, patches = plt.hist(values1, 50, facecolor='blue', alpha=0.9, label="Sample1",align='left')
n, bins, patches = plt.hist(values2, 50, facecolor='red', alpha=0.9, label="Sample2",align='left')
plt.xlabel('value')
plt.ylabel('time [100ps]')
plt.title('')
plt.axis([-200, 200, 0, 180])
plt.legend()
plt.show()
</code></pre>
<p>In this graph 10 on the y axis means 1ns:</p>
<p><a href="http://i.stack.imgur.com/m11eG.png" rel="nofollow"><img src="http://i.stack.imgur.com/m11eG.png" alt="30 means 3ns"></a></p>
| 1 | 2016-07-29T21:26:10Z | 38,669,541 | <p>The way I would solve this is very simple: just multiply your arrays values1 and values2 by 0.1 before plotting them. </p>
<p>The reason why log-scaling exists in matplotlib is that log-transformations are very common. For simple multiplicative scaling, it is very easy to just multiply the array(s) that you are plotting.</p>
<p>EDIT: You're right, I was wrong and confused (I did not notice you were dealing with a histogram). What I would do then is use the <code>matplotlib.ticker</code> module to adjust the ticks on your y-axis. See below:</p>
<pre><code># Your code.
n, bins, patches = plt.hist(values1, 50, facecolor='blue', alpha=0.9, label="Sample1",align='left')
n, bins, patches = plt.hist(values2, 50, facecolor='red', alpha=0.9, label="Sample2",align='left')
plt.xlabel('value')
plt.ylabel('time [100ps]')
plt.title('')
plt.axis([-200, 200, 0, 180])
plt.legend()
# My addition.
import matplotlib.ticker as mtick
def div_10(x, *args):
"""
The function that will be applied to your y-axis ticks.
"""
x = float(x)/10
return "{:.1f}".format(x)
# Apply to the major ticks of the y-axis the function that you defined.
ax = plt.gca()
ax.yaxis.set_major_formatter(mtick.FuncFormatter(div_10))
plt.show()
</code></pre>
| 0 | 2016-07-30T01:31:55Z | [
"python",
"matplotlib"
] |
return value in a dictionary that's within a list, the whole thing in a dictionary | 38,667,733 | <p>I have an empty table in SQL Management Studio that I'd like to populate with values for each sentence. The table has 3 columns: SentId, Word, Count.</p>
<p>My sentence has this structure:</p>
<pre><code>sentence = {'features': [{}, {}, {}...] , 'id': 1234}
</code></pre>
<p>-->To fill out SentId values, I call a SQL "insert into table values (provide 3 values for 3 columns here)" statement, entering sentence['id'], which returns 1234. That's simple. The next step is where I have problems.</p>
<p>-->To get values for the Word and Count columns, I need to get inside 'features', which has this structure:</p>
<pre><code>'features': [ {'word': 'hello', 'count': 2}, {'word': 'there', 'count': 1}, {}, {}...]
</code></pre>
<p>I ran this so far:</p>
<pre><code>sentence = {'features': [{'word': 'hello', 'count': 2}, {'word': 'there', 'count':1}] , 'id': 1234}
print(sentence['features'])
#out>> [{'word': 'hello', 'count': 2}, {'word': 'there', 'count': 1}]
</code></pre>
<p>So I need to get inside the dictionary that is within a list.
This didn't work:</p>
<pre><code>print(sentence['features'].get("word"))
</code></pre>
<p>Thanks so much for helping me out. I am new to programming.</p>
| -3 | 2016-07-29T21:26:46Z | 38,667,881 | <p>As you can maybe see for yourself, sentence['features'] returns a list. Not a dictionary.
In order to get an element from a list in Python, you need to index it.</p>
<pre><code>a=[1,2,3]
print(a[0]) #would print 1
</code></pre>
<p>So in your case, that would lead to the following code:</p>
<pre><code>print(sentence['features'][0].get("word"))
</code></pre>
<p>sentence['features'][0] returns the first dictionary, from which you then get the value for the key 'word'.
If you want to loop over all the items in the list you can do:</p>
<pre><code>for i in sentence['features']:
print(i['word'])
</code></pre>
<p>For further info, see: <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow">https://docs.python.org/3/tutorial/datastructures.html</a></p>
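<p>Tying this back to the original goal, the (SentId, Word, Count) rows for the table can be built in one pass; the table and cursor names below are hypothetical, so adapt them to your own connection:</p>

```python
sentence = {'features': [{'word': 'hello', 'count': 2},
                         {'word': 'there', 'count': 1}],
            'id': 1234}

# One (SentId, Word, Count) tuple per feature dictionary
rows = [(sentence['id'], f['word'], f['count']) for f in sentence['features']]
print(rows)  # [(1234, 'hello', 2), (1234, 'there', 1)]

# With a DB-API cursor the insert would then look roughly like:
# cursor.executemany("INSERT INTO SentTable (SentId, Word, Count) VALUES (?, ?, ?)", rows)
```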
| 0 | 2016-07-29T21:41:14Z | [
"python"
] |
Recursive method to build a vector | 38,667,745 | <p>I have a simple recursive function:</p>
<pre><code>def subTree(z,sublevels):
if(z < sublevels):
print "from z = ", z, " to z = ", z+1
subTree(z+1,sublevels)
else:
print "z = ", z, " !"
</code></pre>
<p>this simply goes from z to sublevels, ex:</p>
<pre><code>subTree(2, 6)
from z = 2 to z = 3
from z = 3 to z = 4
from z = 4 to z = 5
from z = 5 to z = 6
z = 6 !
</code></pre>
<p>Now, how can I make it so that the call to the function returns an ordered vector of z?</p>
<p>(in the example it would be: z[2,3,4,5,6])</p>
<p>From keiv's code: </p>
<pre><code>def subTree(z,sublevels,a):
a.append(z)
if(z < sublevels):
subTree(z+1,sublevels,a)
a=[]
subTree(2,6,a)
</code></pre>
| -4 | 2016-07-29T21:28:12Z | 38,667,858 | <p>This code changes the "a" list to [2,3,4,5,6]:</p>
<pre><code>def subTree(z,sublevels,a):
a.append(z)
if(z < sublevels):
subTree(z+1,sublevels,a)
a=[]
subTree(2,6,a)
</code></pre>
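<p>The claim above can be checked directly (same accumulator logic, with Python 3 print syntax):</p>

```python
def subTree(z, sublevels, a):
    # Append the current level, then recurse until sublevels is reached
    a.append(z)
    if z < sublevels:
        subTree(z + 1, sublevels, a)

a = []
subTree(2, 6, a)
print(a)  # [2, 3, 4, 5, 6]
```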
| 0 | 2016-07-29T21:38:44Z | [
"python",
"recursion"
] |
Recursive method to build a vector | 38,667,745 | <p>I have a simple recursive function:</p>
<pre><code>def subTree(z,sublevels):
if(z < sublevels):
print "from z = ", z, " to z = ", z+1
subTree(z+1,sublevels)
else:
print "z = ", z, " !"
</code></pre>
<p>this simply goes from z to sublevels, ex:</p>
<pre><code>subTree(2, 6)
from z = 2 to z = 3
from z = 3 to z = 4
from z = 4 to z = 5
from z = 5 to z = 6
z = 6 !
</code></pre>
<p>Now, how can I make it so that the call to the function returns an ordered vector of z?</p>
<p>(in the example it would be: z[2,3,4,5,6])</p>
<p>From keiv's code: </p>
<pre><code>def subTree(z,sublevels,a):
a.append(z)
if(z < sublevels):
subTree(z+1,sublevels,a)
a=[]
subTree(2,6,a)
</code></pre>
| -4 | 2016-07-29T21:28:12Z | 38,685,866 | <p>I share keiv.fly's bias for a recursive function that returns a result:</p>
<pre><code>def subTree(z, sublevels):
result = [z]
if z < sublevels:
result += subTree(z + 1, sublevels)
return result
a = subTree(2, 6)
</code></pre>
<p>It can be reduced to the slightly less efficient one-liner:</p>
<pre><code>def subTree(z, sublevels):
return [z] + (subTree(z + 1, sublevels) if z < sublevels else [])
</code></pre>
<p>and both can be modified to return a tuple instead of a list:</p>
<pre><code>def subTree(z, sublevels):
return (z,) + (subTree(z + 1, sublevels) if z < sublevels else ())
</code></pre>
<p>And we can easily reverse the order of the result, if desired:</p>
<pre><code>def subTree(z, sublevels):
return (subTree(z + 1, sublevels) if z < sublevels else ()) + (z,)
</code></pre>
<p>Returning:</p>
<pre><code>(6, 5, 4, 3, 2)
</code></pre>
<p>If you want to pass the array into the function, then I suggest you still return it as a value as follows:</p>
<pre><code>def subTree(z, sublevels, array):
array += type(array)([z])
if z < sublevels:
array = subTree(z + 1, sublevels, array)
return array
</code></pre>
<p>By doing <code>type(array)([z])</code>, and the explicit returns, we can make this function work for multiple data types:</p>
<pre><code>a = []
a = subTree(2, 6, a)
print(a)
a = ()
a = subTree(2, 6, a)
print(a)
a = b""
a = subTree(ord('2'), ord('6'), a)
print(a)
</code></pre>
<p>OUTPUT:</p>
<pre><code>[2, 3, 4, 5, 6]
(2, 3, 4, 5, 6)
b'23456'
</code></pre>
| 1 | 2016-07-31T16:04:11Z | [
"python",
"recursion"
] |
python - factory function as static member of class to instantiate | 38,667,756 | <p>I have a class whose existence - or not - depends on the correctness of some input parameters.</p>
<p><strong>QUESTION:</strong><br/>
Would it be ok to create that factory function as a static member of the class I want an instance of? Is that the best way? </p>
<p>I initially tried to do it inside the <code>__new__</code> method, but people said I should use a factory function.</p>
<pre><code>class MyClass:
@staticmethod
def GetAMy(arg):
if arg == 5:
return None
else:
return MyClass(arg)
</code></pre>
| 0 | 2016-07-29T21:29:25Z | 38,667,933 | <p>An alternative to using a <code>@staticmethod</code> would be to put the factory function at module level, in the same file as the class.</p>
<p>myclasses.py</p>
<pre><code>def GetAMy(arg):
if arg == 5:
return None
else:
return MyClass(arg)
class MyClass(object):
...
</code></pre>
<p>You then import the module from your main file:</p>
<pre><code>import myclasses
a=myclasses.GetAMy(5)
</code></pre>
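<p>For comparison, a <code>@classmethod</code> factory is another common pattern; because it receives <code>cls</code>, it keeps returning the right type under subclassing (a minimal sketch with illustrative names):</p>

```python
class MyClass(object):
    def __init__(self, arg):
        self.arg = arg

    @classmethod
    def get_a_my(cls, arg):
        # None for invalid input, otherwise an instance of cls
        if arg == 5:
            return None
        return cls(arg)

class MySubClass(MyClass):
    pass

print(MyClass.get_a_my(5))                        # None
print(MySubClass.get_a_my(7).__class__.__name__)  # MySubClass, not MyClass
```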
| 0 | 2016-07-29T21:45:45Z | [
"python"
] |
how to pass multiple lists of distributions to sklearn randomizedSearchCV | 38,667,784 | <p>I have a customized estimator object in Python (<code>mkl_regressor</code>). One of the learning parameters of such an object is a <code>numpy.array</code> of floats. Usually sklearn estimator objects are tuned by single parameters, like the <code>C</code> of an SVM. Thus the <code>randomizedSearchCV</code> search object takes a distribution or a list of values for picking some value from the given distribution (in my example <code>scipy.stats.expon</code>) for the desired parameter. I have tried to pass a list of distributions, but I had no success, because <code>randomizedSearchCV</code> does not execute the elements in the array of distributions. This is what I tried:</p>
<pre><code>from modshogun import *
import Gnuplot, Gnuplot.funcutils
from numpy import *
from sklearn.metrics import r2_score
class mkl_regressor():
def __init__(self, widths = [0.01, 0.1, 1.0, 10.0, 50.0, 100.0], kernel_weights = [0.01, 0.1, 1.0,], svm_c = 0.01, mkl_c = 1.0, svm_norm = 1, mkl_norm = 1, degree = 2):
self.svm_c = svm_c
self.mkl_c = mkl_c
self.svm_norm = svm_norm
self.mkl_norm = mkl_norm
self.degree = degree
self.widths = widths
self.kernel_weights = kernel_weights
def fit(self, X, y, **params):
for parameter, value in params.items():
setattr(self, parameter, value)
self.feats_train = RealFeatures(X.T)
labels_train = RegressionLabels(y.reshape((len(y), )))
self._kernels_ = CombinedKernel()
for width in self.widths:
kernel = GaussianKernel()
kernel.set_width(width)
kernel.init(self.feats_train,self.feats_train)
self._kernels_.append_kernel(kernel)
del kernel
kernel = PolyKernel(10, self.degree)
self._kernels_.append_kernel(kernel)
del kernel
self._kernels_.init(self.feats_train, self.feats_train)
binary_svm_solver = SVRLight()
self.mkl = MKLRegression(binary_svm_solver)
self.mkl.set_C(self.svm_c, self.svm_c)
self.mkl.set_C_mkl(self.mkl_c)
self.mkl.set_mkl_norm(self.mkl_norm)
self.mkl.set_mkl_block_norm(self.svm_norm)
self.mkl.set_kernel(self._kernels_)
self.mkl.set_labels(labels_train)
self.mkl.train()
self.kernel_weights = self._kernels_.get_subkernel_weights()
def predict(self, X):
self.feats_test = RealFeatures(X.T)
self._kernels_.init(self.feats_train, self.feats_test)
self.mkl.set_kernel(self._kernels_)
return self.mkl.apply_regression().get_labels()
def set_params(self, **params):
for parameter, value in params.items():
setattr(self, parameter, value)
return self
def get_params(self, deep=False):
return {param: getattr(self, param) for param in dir(self) if not param.startswith('__') and not callable(getattr(self,param))}
def score(self, X_t, y_t):
predicted = self.predict(X_t)
return r2_score(predicted, y_t)
if __name__ == "__main__":
from sklearn.grid_search import RandomizedSearchCV as RS
from scipy.stats import randint as sp_randint
from scipy.stats import expon
labels = array([2.0,0.0,2.0,1.0,3.0,2.0])
labels = labels.reshape((len(labels), 1))
data = array([[1.0,2.0,3.0],[1.0,2.0,9.0],[1.0,2.0,3.0],[1.0,2.0,0.0],[0.0,2.0,3.0],[1.0,2.0,3.0]])
labels_t = array([1.,3.,4])
labels_t = labels_t.reshape((len(labels_t), 1))
data_t = array([[20.0,30.0,40.0],[10.0,20.0,30.0],[10.0,20.0,40.0]])
k = 3
param_grid = [ {'svm_c': expon(scale=100, loc=5),
'mkl_c': expon(scale=100, loc=5),
'degree': sp_randint(0, 32),
#'widths': [array([4.0,6.0,8.9,3.0]), array([4.0,6.0,8.9,3.0,2.0, 3.0, 4.0]), array( [100.0, 200.0, 300.0, 400.0])
'widths': [[expon, expon]]
}]
mkl = mkl_regressor()
rs = RS(mkl, param_distributions = param_grid[0], n_iter = 10, n_jobs = 24, cv = k)#, scoring="r2", verbose=True)
rs.fit(data, labels)
preds = rs.predict(data_t)
print "R^2: ", rs.score(data_t, labels_t)
print "Parameters: ", rs.best_params_
</code></pre>
<p>The above code works well by passing numpy arrays as elements of the list <code>'widths'</code> of the dictionary of parameters. However, when I try to pass a list of distributions, the randomizedSearchCV object does not respond as desired:</p>
<pre><code>/home/ignacio/distributionalSemanticStabilityThesis/mkl_test.py in fit(self=<__main__.mkl_regressor instance>, X=array([[ 1., 2., 3.],
[ 1., 2., 0.],
[ 0., 2., 3.],
[ 1., 2., 3.]]), y=array([[ 2.],
[ 1.],
[ 3.],
[ 2.]]), **params={})
24 self.feats_train = RealFeatures(X.T)
25 labels_train = RegressionLabels(y.reshape((len(y), )))
26 self._kernels_ = CombinedKernel()
27 for width in self.widths:
28 kernel = GaussianKernel()
---> 29 kernel.set_width(width)
kernel.set_width = <built-in method set_width of GaussianKernel object>
width = <scipy.stats._continuous_distns.expon_gen object>
30 kernel.init(self.feats_train,self.feats_train)
31 self._kernels_.append_kernel(kernel)
32 del kernel
33
TypeError: in method 'GaussianKernel_set_width', argument 2 of type 'float64_t'
</code></pre>
<p>I wouldn't like to force the estimator to execute each distribution generator, because in such a case <code>randomizedSearchCV</code> wouldn't have control of the used values.</p>
<p>Some suggestions? Thank you.</p>
| 2 | 2016-07-29T21:32:30Z | 38,669,597 | <p>RandomizedSearchCV can take either a <strong>list of parameter values to try</strong> or a <strong>distribution object with an rvs method for sampling</strong>. If you pass it a list, it will assume you passed a discrete set of parameter values to sample from. It does not support a list of distributions for a single parameter. If existing distributions don't suit your needs, make a custom one. </p>
<p>If you need a distribution that returns an array, simply create a class that has an rvs() method to return a random sample and pass an instance of that instead of a list of single-variate distributions.</p>
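<p>A minimal sketch of such a class (the name and the exponential choice are illustrative; the only contract the sampler relies on is the <code>rvs</code> method):</p>

```python
import numpy as np

class WidthArrayDist(object):
    """Distribution-like object whose rvs() returns a whole array of widths."""
    def __init__(self, size, scale=50.0):
        self.size = size
        self.scale = scale

    def rvs(self, random_state=None):
        # Optional random_state keeps newer sklearn versions happy;
        # older ones simply call rvs() with no arguments.
        rng = np.random.RandomState(random_state)
        return rng.exponential(self.scale, self.size)

widths_dist = WidthArrayDist(size=4)
sample = widths_dist.rvs(random_state=0)
print(sample.shape)  # (4,)
```

<p>Passing <code>'widths': WidthArrayDist(size=4)</code> in <code>param_distributions</code> then hands each sampled candidate a fresh numpy array, while <code>RandomizedSearchCV</code> stays in control of the sampling.</p>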
| 1 | 2016-07-30T01:44:02Z | [
"python",
"scipy",
"scikit-learn",
"shogun"
] |
how to pass multiple lists of distributions to sklearn randomizedSearchCV | 38,667,784 | <p>I have a customized estimator object in Python (<code>mkl_regressor</code>). One of the learning parameters of such an object is a <code>numpy.array</code> of floats. Usually sklearn estimator objects are tuned by single parameters, like the <code>C</code> of an SVM. Thus the <code>randomizedSearchCV</code> search object takes a distribution or a list of values for picking some value from the given distribution (in my example <code>scipy.stats.expon</code>) for the desired parameter. I have tried to pass a list of distributions, but I had no success, because <code>randomizedSearchCV</code> does not execute the elements in the array of distributions. This is what I tried:</p>
<pre><code>from modshogun import *
import Gnuplot, Gnuplot.funcutils
from numpy import *
from sklearn.metrics import r2_score
class mkl_regressor():
def __init__(self, widths = [0.01, 0.1, 1.0, 10.0, 50.0, 100.0], kernel_weights = [0.01, 0.1, 1.0,], svm_c = 0.01, mkl_c = 1.0, svm_norm = 1, mkl_norm = 1, degree = 2):
self.svm_c = svm_c
self.mkl_c = mkl_c
self.svm_norm = svm_norm
self.mkl_norm = mkl_norm
self.degree = degree
self.widths = widths
self.kernel_weights = kernel_weights
def fit(self, X, y, **params):
for parameter, value in params.items():
setattr(self, parameter, value)
self.feats_train = RealFeatures(X.T)
labels_train = RegressionLabels(y.reshape((len(y), )))
self._kernels_ = CombinedKernel()
for width in self.widths:
kernel = GaussianKernel()
kernel.set_width(width)
kernel.init(self.feats_train,self.feats_train)
self._kernels_.append_kernel(kernel)
del kernel
kernel = PolyKernel(10, self.degree)
self._kernels_.append_kernel(kernel)
del kernel
self._kernels_.init(self.feats_train, self.feats_train)
binary_svm_solver = SVRLight()
self.mkl = MKLRegression(binary_svm_solver)
self.mkl.set_C(self.svm_c, self.svm_c)
self.mkl.set_C_mkl(self.mkl_c)
self.mkl.set_mkl_norm(self.mkl_norm)
self.mkl.set_mkl_block_norm(self.svm_norm)
self.mkl.set_kernel(self._kernels_)
self.mkl.set_labels(labels_train)
self.mkl.train()
self.kernel_weights = self._kernels_.get_subkernel_weights()
def predict(self, X):
self.feats_test = RealFeatures(X.T)
self._kernels_.init(self.feats_train, self.feats_test)
self.mkl.set_kernel(self._kernels_)
return self.mkl.apply_regression().get_labels()
def set_params(self, **params):
for parameter, value in params.items():
setattr(self, parameter, value)
return self
def get_params(self, deep=False):
return {param: getattr(self, param) for param in dir(self) if not param.startswith('__') and not callable(getattr(self,param))}
def score(self, X_t, y_t):
predicted = self.predict(X_t)
return r2_score(predicted, y_t)
if __name__ == "__main__":
from sklearn.grid_search import RandomizedSearchCV as RS
from scipy.stats import randint as sp_randint
from scipy.stats import expon
labels = array([2.0,0.0,2.0,1.0,3.0,2.0])
labels = labels.reshape((len(labels), 1))
data = array([[1.0,2.0,3.0],[1.0,2.0,9.0],[1.0,2.0,3.0],[1.0,2.0,0.0],[0.0,2.0,3.0],[1.0,2.0,3.0]])
labels_t = array([1.,3.,4])
labels_t = labels_t.reshape((len(labels_t), 1))
data_t = array([[20.0,30.0,40.0],[10.0,20.0,30.0],[10.0,20.0,40.0]])
k = 3
param_grid = [ {'svm_c': expon(scale=100, loc=5),
'mkl_c': expon(scale=100, loc=5),
'degree': sp_randint(0, 32),
#'widths': [array([4.0,6.0,8.9,3.0]), array([4.0,6.0,8.9,3.0,2.0, 3.0, 4.0]), array( [100.0, 200.0, 300.0, 400.0])
'widths': [[expon, expon]]
}]
mkl = mkl_regressor()
rs = RS(mkl, param_distributions = param_grid[0], n_iter = 10, n_jobs = 24, cv = k)#, scoring="r2", verbose=True)
rs.fit(data, labels)
preds = rs.predict(data_t)
print "R^2: ", rs.score(data_t, labels_t)
print "Parameters: ", rs.best_params_
</code></pre>
<p>The above code works well by passing numpy arrays as elements of the list <code>'widths'</code> of the dictionary of parameters. However, when I try to pass a list of distributions, the randomizedSearchCV object does not respond as desired:</p>
<pre><code>/home/ignacio/distributionalSemanticStabilityThesis/mkl_test.py in fit(self=<__main__.mkl_regressor instance>, X=array([[ 1., 2., 3.],
[ 1., 2., 0.],
[ 0., 2., 3.],
[ 1., 2., 3.]]), y=array([[ 2.],
[ 1.],
[ 3.],
[ 2.]]), **params={})
24 self.feats_train = RealFeatures(X.T)
25 labels_train = RegressionLabels(y.reshape((len(y), )))
26 self._kernels_ = CombinedKernel()
27 for width in self.widths:
28 kernel = GaussianKernel()
---> 29 kernel.set_width(width)
kernel.set_width = <built-in method set_width of GaussianKernel object>
width = <scipy.stats._continuous_distns.expon_gen object>
30 kernel.init(self.feats_train,self.feats_train)
31 self._kernels_.append_kernel(kernel)
32 del kernel
33
TypeError: in method 'GaussianKernel_set_width', argument 2 of type 'float64_t'
</code></pre>
<p>I wouldn't like to force the estimator to execute each distribution generator itself, because in that case <code>randomizedSearchCV</code> wouldn't have control over the values used.</p>
<p>Some suggestions? Thank you.</p>
| 2 | 2016-07-29T21:32:30Z | 38,676,863 | <p>The solution @bpachev suggested worked for me. The distribution class:</p>
<pre><code>class expon_vector(stats.rv_continuous):
def __init__(self, loc = 1.0, scale = 50.0, min_size=2, max_size = 10):
self.loc = loc
self.scale = scale
self.min_size = min_size
self.max_size = max_size
self.size = max_size - min_size # Only for initialization
def rvs(self):
self.size = randint.rvs(low = self.min_size,
high = self.max_size, size = 1)
return expon.rvs(loc = self.loc, scale = self.scale, size = self.size)
</code></pre>
<p>Which is included in the dictionary of parameters for the customized estimator I'm using:</p>
<pre><code>param_grid = [ {'svm_c': expon(scale=100, loc=5),
'mkl_c': expon(scale=100, loc=5),
'degree': sp_randint(0, 24),
'widths': expon_vector(loc = 0.1, scale = 100.0,
min_size = 2, max_size = 10) } ]
</code></pre>
| 0 | 2016-07-30T17:53:34Z | [
"python",
"scipy",
"scikit-learn",
"shogun"
] |
Comparing complicated lists | 38,667,785 | <p>So I've got a function in python that has a list containing lists and I'm trying to compare the contents of the list to see if I have a duplicate within it, and then return True if there are duplicates. Essentially something like this;</p>
<pre><code>def dupCheck():
aList = ([3.3,3.2], [1,1], [3.3,3.2], [7,7])
if duplicates in aList:
return True
return False
</code></pre>
| 0 | 2016-07-29T21:32:35Z | 38,667,885 | <p>Normally when checking whether a list has duplicates, you would just create a <code>set</code> from the list, thus removing the duplicates, and then compare the size of that set to the size of the original list. In this case this is not possible, as the nested lists are not hashable and can thus not be added to a set. However, if you always have a list of lists (and not, e.g. a list of lists and other things, or a list of lists of lists) then you can convert the sublists to tuples before putting them into the set.</p>
<pre><code>>>> def dupCheck(lst):
... return len(set(tuple(l) for l in lst)) != len(lst) # duplicates -> true
...
>>> dupCheck([[3.3,3.2], [1,1], [3.3,3.2], [7,7]])
True
</code></pre>
| 2 | 2016-07-29T21:41:40Z | [
"python",
"list",
"duplicates"
] |
Comparing complicated lists | 38,667,785 | <p>So I've got a function in python that has a list containing lists and I'm trying to compare the contents of the list to see if I have a duplicate within it, and then return True if there are duplicates. Essentially something like this;</p>
<pre><code>def dupCheck():
aList = ([3.3,3.2], [1,1], [3.3,3.2], [7,7])
if duplicates in aList:
return True
return False
</code></pre>
| 0 | 2016-07-29T21:32:35Z | 38,668,041 | <p>You can do it lazily, returning True on the first duplicate in place of creating a whole set first:</p>
<pre><code>def dup_check(lst):
seen = set()
for tup in map(tuple, lst):
if tup in seen:
return True
seen.add(tup)
return False
</code></pre>
<p>To keep lazy evaluation you should use <em>itertools.imap</em> in place of <em>map</em> using python2.</p>
<p>If you want a close to one liner solution, you could take advantage of the fact the <em>set.add</em> returns None and combine with <em>any</em>:</p>
<pre><code>def dup_check(lst):
seen = set()
return any(tup in seen or seen.add(tup) for tup in map(tuple, lst))
</code></pre>
<p><code>seen.add(tup)</code> is always going to be <em>None</em> and only if we have already added an identical tuple will <em>tup in seen</em> return True so <em>any</em> will either short-circuit on the first dupe or return False if there are no dupes.</p>
| 4 | 2016-07-29T21:55:51Z | [
"python",
"list",
"duplicates"
] |
Comparing complicated lists | 38,667,785 | <p>So I've got a function in python that has a list containing lists and I'm trying to compare the contents of the list to see if I have a duplicate within it, and then return True if there are duplicates. Essentially something like this;</p>
<pre><code>def dupCheck():
aList = ([3.3,3.2], [1,1], [3.3,3.2], [7,7])
if duplicates in aList:
return True
return False
</code></pre>
| 0 | 2016-07-29T21:32:35Z | 38,674,175 | <p>A solution to the list of list problem:</p>
<pre><code>def remove_doubles(a):
"""Removes list doubles in list where list(set(a)) will raise a TypeError."""
clean_list = []
for b in a:
if b not in clean_list:
clean_list += [b]
return clean_list
</code></pre>
| 1 | 2016-07-30T12:52:06Z | [
"python",
"list",
"duplicates"
] |
matplotlib aggregate multiple points of a series within a pixel | 38,667,838 | <p>I have a pandas series ts with index in datetime and its resolution is 1s; this time series spans 23 hours. I am trying to plot this on multiple charts with a config parameter chart_sec (how many secs showing up in one chart) with a fixed dpi=20; the resolution may not be enough to plot every single point of ts in the chart.</p>
<pre><code>import matplotlib.pyplot as plt
ax = plt.subplot()
dates = [dt.to_datetime() for dt in ts.index]
ax.plot_date(dates, ts, fmt='k-', color='grey')
plt.savefig(fname, dpi=20)
</code></pre>
<p>Question 1: I don't know which value I'm plotting with the above code (is it the value of the first or the last point in a pixel?).
Question 2: if I want to sum the values of the points within one pixel, how do I do it? </p>
| 1 | 2016-07-29T21:37:10Z | 38,763,416 | <p>One idea if you need to distinguish the points is to save the figure at large size and high dpi. I'm guessing a bit what your data looks like from your example.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
ts = pd.DataFrame(50*pd.np.random.rand(7200)+pd.np.array(range(7200)),
index=pd.DatetimeIndex(freq='s',start='01-01-2016',periods=7200))
plt.figure(figsize=(35,35))
# will be a <matplotlib.figure.Figure object>
dates = [dt.to_datetime() for dt in ts.index]
plt.plot_date(dates, ts, fmt='k-', ms=0, color='grey')
# will be a [<matplotlib.lines.Line2D object>]
plt.savefig('hires.png', dpi=600)
</code></pre>
<p>This produces a nearly 3MB png file, which looks like a slightly noisy straight line when viewed zoomed out, but when you zoom to 100% you can see the individual points as shown below:</p>
<p><a href="http://i.stack.imgur.com/vcXWM.png" rel="nofollow"><img src="http://i.stack.imgur.com/vcXWM.png" alt="Small portion of line graph zoomed to 100%"></a></p>
<p>Still not brilliantly well distinguished, but you can play with the picture size vs. dpi to suit your individual requirements.</p>
<p>N.B. If you need to view in the normal matplotlib figure window (rather than save), you can zoom in. You can also save the zoomed in view, which saves exactly what you see on the screen.</p>
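<p>For the second question (summing the points that would share a pixel), a sketch of one approach is to aggregate the series yourself before plotting, e.g. with pandas resampling (this assumes <code>ts</code> has a <code>DatetimeIndex</code>; the 60-second bucket size below is an arbitrary stand-in for whatever your <code>chart_sec</code>/pixel width works out to):</p>

```python
import pandas as pd

# two minutes of 1-second data, value 1.0 at every second
ts = pd.Series(1.0, index=pd.date_range('2016-01-01', periods=120, freq='s'))

# sum all points that fall into each 60-second bucket before plotting
binned = ts.resample('60s').sum()
```

<p>Plotting <code>binned</code> then draws one aggregated value per bucket, instead of leaving the within-pixel aggregation to the renderer.</p>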
| 0 | 2016-08-04T09:23:35Z | [
"python",
"matplotlib"
] |
Comparing data from one fetch request with another | 38,667,931 | <p>I have three postgres tables: users, files, and likes.</p>
<p>The users table has a one to many relationship with the files table.<br>
The files table has a one to many relationship with the likes table. </p>
<p>My application has a feed with two sections: "community" and "likes". The "community" section consists of all the public user files and the "likes" section consists of all the files liked by the user. Upon loading the feed, I fetch the files for both sections and I end up with an array of dictionaries for each section. My queries look like this:</p>
<pre><code>cur.execute('SELECT f.*, u.username from files as f, users as u '
'where u.id = %i AND f.user_id = %i ORDER BY f.id DESC' %
(user.id, user.id))
cur.execute('SELECT f.*, u.username from files as f, users as u, likes as l '
'where l.user_id = %i AND l.file_id = f.id AND f.share = True AND '
'u.id = f.user_id ORDER BY f.id DESC' % (user.id))
</code></pre>
<p>My "community" feed has a like button that changes to "liked" when the file has been liked. Since I have an array for "community" files and an array of "liked" files, what would be the best way to check if the file ID in the "Community" array is also in the "liked" array so I can update the button?</p>
<p>The "community" and "liked" arrays consist of dictionaries like below:</p>
<pre><code>community_files_dict = {'file_id': file[0], 'user_id': file[1], 'title': file[2],
'date': date_final, 'shared': file[4], 'username': file[5]}
liked_files_dict = {'file_id': file[0], 'user_id': file[1], 'title': file[2],
'date': date_final, 'shared': file[4], 'username': file[5]}
</code></pre>
| 0 | 2016-07-29T21:45:36Z | 38,668,643 | <p>Check for liked in the first query:</p>
<pre><code>cur.execute('''
select f.*, u.username, l.user_id is not null as liked
from
files f
inner join
users u on u.id = f.user_id
left join
likes l on l.user_id = u.id and f.share = true
where u.id = %s
order by f.id desc
''', (user.id,))
</code></pre>
<p>Improvements: <code>join</code> syntax, multi line string, passing parameters through the driver in instead of string interpolation.</p>
| 0 | 2016-07-29T23:02:08Z | [
"python",
"postgresql"
] |
Extract text files into multiple columns in python | 38,667,951 | <p>I have different text files and I want to extract the values from them into a csv file.
Each file has the following format</p>
<pre><code>main cost: 30
additional cost: 5
</code></pre>
<p>I managed to do that, but the problem is that I want it to insert the values of each file into a different column. I also want the number of text files to be a user argument. </p>
<p>This is what I'm doing now </p>
<pre><code> numFiles = sys.argv[1]
d = [[] for x in xrange(numFiles+1)]
for i in range(numFiles):
filename = 'mytext' + str(i) + '.text'
with open(filename, 'r') as in_file:
for line in in_file:
items = line.split(' : ')
num = items[1].split('\n')
if i ==0:
d[i].append(items[0])
d[i+1].append(num[0])
grouped = itertools.izip(*d[i] * 1)
if i == 0:
grouped1 = itertools.izip(*d[i+1] * 1)
with open(outFilename, 'w') as out_file:
writer = csv.writer(out_file)
for j in range(numFiles):
for val in itertools.izip(d[j]):
writer.writerow(val)
</code></pre>
<p>This is what I'm getting now, everything in one column </p>
<pre><code>main cost
additional cost
30
5
40
10
</code></pre>
<p>And I want it to be </p>
<pre><code>main cost | 30 | 40
additional cost | 5 | 10
</code></pre>
| 0 | 2016-07-29T21:47:53Z | 38,668,038 | <p>You could use a dictionary to do this where the key will be the "header" you want to use and the value be a list.</p>
<p>So it would look like <code>someDict = {'main cost': [30,40], 'additional cost': [5,10]}</code></p>
<p>edit2: Went ahead and cleaned up this answer so it makes a little more sense.</p>
<p>You can build the dictionary and iterate over it like this:</p>
<pre><code>from collections import OrderedDict
in_file = ['main cost : 30', 'additional cost : 5', 'main cost : 40', 'additional cost : 10']
someDict = OrderedDict()
for line in in_file:
key,val = line.split(' : ')
num = int(val)
if key not in someDict:
someDict[key] = []
someDict[key].append(num)
for key in someDict:
print(key)
for value in someDict[key]:
print(value)
</code></pre>
<p>The code outputs:</p>
<pre><code>main cost
30
40
additional cost
5
10
</code></pre>
<p>Should be pretty straightforward to modify the example to fit your desired output.</p>
<p>I used the example @ <a href="http://stackoverflow.com/questions/3199171/append-multiple-values-for-one-key-in-python-dictionary">append multiple values for one key in Python dictionary</a> and thanks to @wwii for some suggestions.</p>
<p>I used an <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="nofollow">OrderedDict</a> since a dictionary won't keep keys in order.</p>
<p>You can run my example @ <a href="https://ideone.com/myN2ge" rel="nofollow">https://ideone.com/myN2ge</a></p>
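<p>For example, a sketch that joins each key with its collected values in the pipe-separated layout from the question (the sample input is hard-coded here for illustration):</p>

```python
from collections import OrderedDict

lines = ['main cost : 30', 'additional cost : 5',
         'main cost : 40', 'additional cost : 10']

rows = OrderedDict()
for line in lines:
    key, val = line.split(' : ')
    # group every value under its key, preserving first-seen key order
    rows.setdefault(key, []).append(val)

output = [' | '.join([key] + vals) for key, vals in rows.items()]
```
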
| 2 | 2016-07-29T21:55:48Z | [
"python",
"csv",
"extract",
"multiple-columns"
] |
Extract text files into multiple columns in python | 38,667,951 | <p>I have different text files and I want to extract the values from them into a csv file.
Each file has the following format</p>
<pre><code>main cost: 30
additional cost: 5
</code></pre>
<p>I managed to do that, but the problem is that I want it to insert the values of each file into a different column. I also want the number of text files to be a user argument. </p>
<p>This is what I'm doing now </p>
<pre><code> numFiles = sys.argv[1]
d = [[] for x in xrange(numFiles+1)]
for i in range(numFiles):
filename = 'mytext' + str(i) + '.text'
with open(filename, 'r') as in_file:
for line in in_file:
items = line.split(' : ')
num = items[1].split('\n')
if i ==0:
d[i].append(items[0])
d[i+1].append(num[0])
grouped = itertools.izip(*d[i] * 1)
if i == 0:
grouped1 = itertools.izip(*d[i+1] * 1)
with open(outFilename, 'w') as out_file:
writer = csv.writer(out_file)
for j in range(numFiles):
for val in itertools.izip(d[j]):
writer.writerow(val)
</code></pre>
<p>This is what I'm getting now, everything in one column </p>
<pre><code>main cost
additional cost
30
5
40
10
</code></pre>
<p>And I want it to be </p>
<pre><code>main cost | 30 | 40
additional cost | 5 | 10
</code></pre>
| 0 | 2016-07-29T21:47:53Z | 38,668,799 | <p>This is how I might do it. Assumes the fields are the same in all the files. Make a list of names, and a dictionary using those field names as keys, and the list of values as the entries. Instead of running on <code>file1.text</code>, <code>file2.text</code>, etc. run the script with <code>file*.text</code> as a command line argument.</p>
<pre><code>#! /usr/bin/env python
import sys
if len(sys.argv)<2:
print "Give file names to process, with wildcards"
else:
FileList= sys.argv[1:]
FileNum = 0
outFilename = "myoutput.dat"
NameList = []
ValueDict = {}
for InfileName in FileList:
Infile = open(InfileName, 'rU')
for Line in Infile:
Line=Line.strip('\n')
Name,Value = Line.split(":")
if FileNum==0:
NameList.append(Name.strip())
ValueDict[Name] = ValueDict.get(Name,[]) + [Value.strip()]
FileNum += 1 # the last statement in the file loop
Infile.close()
# print NameList
# print ValueDict
with open(outFilename, 'w') as out_file:
for N in NameList:
OutString = "{},{}\n".format(N,",".join(ValueDict.get(N)))
out_file.write(OutString)
</code></pre>
<p>Output for my four fake files was:</p>
<pre><code>main cost,10,10,40,10
additional cost,25.6,25.6,55.6,25.6
</code></pre>
| 0 | 2016-07-29T23:23:41Z | [
"python",
"csv",
"extract",
"multiple-columns"
] |
Identifying a key in one dict, and using it to modify values of another dict | 38,668,066 | <p>For a text-based RPG I'm creating, I have dictionaries outlining various in-game religions, races, etc. Some values of these religions include buffs for player's stats, e.g.: </p>
<pre><code>religion_Dict = {'Way of the White': {'buffs': {'intelligence': 5, 'defense': 3},
'abilities': [...],
'description': '...'}}
</code></pre>
<p>My question comes along when trying to apply a religion's stat buffs to a player's stats. If I have a player class that looks something like: </p>
<pre><code>class Player(object):
def __init__(self):
...
self.religion = None
self.stats = {'intelligence': 10, 'defense': 8}
</code></pre>
<p>Now, let's say the player joins the religion <code>Way of the White</code>, how do I go about identifying the keys <code>intelligence</code> and <code>defense</code> and their respective values -- inside of the dictionary <code>religion_Dict</code> -- and applying them to the values of the player's <code>stats</code> dictionary? </p>
<p>I know I can pull the key names with <code>religion_Dict.keys()</code> or a basic for loop, but how can I use that to correctly modify the corresponding player stat values?</p>
<p>I'm sure I'm just missing a basic concept. In any case, thanks to anyone willing to help answer this simple question! I appreciate it! </p>
| 0 | 2016-07-29T21:58:40Z | 38,668,154 | <p>This adds a method to <code>Player</code> that assigns the values from your dictionary to player stats. It uses <code>get</code> to ensure that the value is in the dictionary and that it contains the field <code>buffs</code>. If so, it gets the values for <code>intelligence</code> and <code>defense</code> and adds them to the player's stats.</p>
<pre><code>class Player(object):
    def __init__(self):
        ...
        self.religion = None
        self.stats = {'intelligence': 10, 'defense': 8}

    def join_religion(self, religion):
        entry = religion_Dict.get(religion)
        if entry and 'buffs' in entry:
            self.stats['intelligence'] += entry['buffs'].get('intelligence', 0)
            self.stats['defense'] += entry['buffs'].get('defense', 0)

p = Player()
p.join_religion('Way of the White')
</code></pre>
| 1 | 2016-07-29T22:07:02Z | [
"python",
"dictionary"
] |
Identifying a key in one dict, and using it to modify values of another dict | 38,668,066 | <p>For a text-based RPG I'm creating, I have dictionaries outlining various in-game religions, races, etc. Some values of these religions include buffs for player's stats, e.g.: </p>
<pre><code>religion_Dict = {'Way of the White': {'buffs': {'intelligence': 5, 'defense': 3},
'abilities': [...],
'description': '...'}}
</code></pre>
<p>My question comes along when trying to apply a religion's stat buffs to a player's stats. If I have a player class that looks something like: </p>
<pre><code>class Player(object):
def __init__(self):
...
self.religion = None
self.stats = {'intelligence': 10, 'defense': 8}
</code></pre>
<p>Now, let's say the player joins the religion <code>Way of the White</code>, how do I go about identifying the keys <code>intelligence</code> and <code>defense</code> and their respective values -- inside of the dictionary <code>religion_Dict</code> -- and applying them to the values of the player's <code>stats</code> dictionary? </p>
<p>I know I can pull the key names with <code>religion_Dict.keys()</code> or a basic for loop, but how can I use that to correctly modify the corresponding player stat values?</p>
<p>I'm sure I'm just missing a basic concept. In any case, thanks to anyone willing to help answer this simple question! I appreciate it! </p>
| 0 | 2016-07-29T21:58:40Z | 38,668,177 | <p>Here is a sketch of how you would go about this:</p>
<pre><code>religion_Dict = {'Way of the White': {'buffs': {'intelligence': 5, 'defense': 3},
'abilities': [...],
'description': '...'}}
buffs = religion_Dict['Way of the White']['buffs']
for key in buffs:
player.stats[key] = player.stats.get(key,0) + buffs[key]
</code></pre>
<p>Of course, you should wrap this logic in a method in your Player class, but the logic above is what you are looking for. Notice the <code>.get</code> method takes a second argument, which is a default value it returns if the key is missing. Thus, this line will add the buff value to whatever stat is already there, and if the stat doesn't exist yet, it adds the buff value to a default of 0.</p>
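<p>A sketch of what that method could look like on the <code>Player</code> class (the buff loop is the same idea as in the snippet above; <code>religion_Dict</code> is trimmed here for brevity):</p>

```python
religion_Dict = {'Way of the White': {'buffs': {'intelligence': 5, 'defense': 3}}}

class Player(object):
    def __init__(self):
        self.religion = None
        self.stats = {'intelligence': 10, 'defense': 8}

    def join_religion(self, name):
        # missing religion or missing 'buffs' both fall through to an empty dict
        buffs = religion_Dict.get(name, {}).get('buffs', {})
        for stat, bonus in buffs.items():
            self.stats[stat] = self.stats.get(stat, 0) + bonus
        self.religion = name
```
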
| 2 | 2016-07-29T22:09:42Z | [
"python",
"dictionary"
] |
Identifying a key in one dict, and using it to modify values of another dict | 38,668,066 | <p>For a text-based RPG I'm creating, I have dictionaries outlining various in-game religions, races, etc. Some values of these religions include buffs for player's stats, e.g.: </p>
<pre><code>religion_Dict = {'Way of the White': {'buffs': {'intelligence': 5, 'defense': 3},
'abilities': [...],
'description': '...'}}
</code></pre>
<p>My question comes along when trying to apply a religion's stat buffs to a player's stats. If I have a player class that looks something like: </p>
<pre><code>class Player(object):
def __init__(self):
...
self.religion = None
self.stats = {'intelligence': 10, 'defense': 8}
</code></pre>
<p>Now, let's say the player joins the religion <code>Way of the White</code>, how do I go about identifying the keys <code>intelligence</code> and <code>defense</code> and their respective values -- inside of the dictionary <code>religion_Dict</code> -- and applying them to the values of the player's <code>stats</code> dictionary? </p>
<p>I know I can pull the key names with <code>religion_Dict.keys()</code> or a basic for loop, but how can I use that to correctly modify the corresponding player stat values?</p>
<p>I'm sure I'm just missing a basic concept. In any case, thanks to anyone willing to help answer this simple question! I appreciate it! </p>
| 0 | 2016-07-29T21:58:40Z | 38,668,184 | <pre><code>self.stats = religion_Dict['Way of the White']['buffs']
</code></pre>
| 0 | 2016-07-29T22:10:21Z | [
"python",
"dictionary"
] |
Temp Conversion Fahrenheit to Celsius | 38,668,343 | <p>I am trying to convert temperature from Fahrenheit to Celsius and vice versa. The ">>>>" button, which converts Fahrenheit to Celsius, doesn't function, while the Celsius-to-Fahrenheit button works. Please assist; I think I have been looking at the code for too long, which is why I can't figure it out. </p>
<pre><code>from Tkinter import *
class Temp(Frame):
def __init__(self):
Frame.__init__(self)
# self._fahren = 0.0
# self._cel = 0.0
self.master.title("TempConver")
self.grid()
self._fahrenLabel = Label(self, text="Fahrenheit")
self._fahrenLabel.grid(row=0, column=0)
self._fahrenVar = DoubleVar()
self._fahrenVar.set(32.0)
self._fahrenEntry = Entry(self, textvariable = self._fahrenVar)
self._fahrenEntry.grid(row=1, column=0)
self._celLabel = Label(self, text="Celcius")
self._celLabel.grid(row=0, column=2)
self._celVar = DoubleVar()
self._celEntry = Entry(self, textvariable = self._celVar)
self._celEntry.grid(row=1, column=2)
self._fahrenButton = Button(self, text = ">>>>", command = self.FtoC)
self._fahrenButton.grid(row = 0, column = 1)
self._celButton = Button(self, text = "<<<<", command = self.CtoF)
self._celButton.grid(row = 1, column = 1)
def FtoC(self):
fahren = self._fahrenVar.get()
cel = (5/9) * (fahren - 32)
self._celVar.set(cel)
def CtoF(self):
cel = self._celVar.get()
fahren = (9/5) * (cel + 32)
self._fahrenVar.set(fahren)
def main():
Temp().mainloop()
main()
</code></pre>
| -1 | 2016-07-29T22:26:46Z | 38,668,435 | <p>Your issue has to do with how division works in Python 2.</p>
<p>Compare:</p>
<pre><code>a=(5/9)
b=(5/9.0)
</code></pre>
<p>In the first case, the result is an integer. In the second case, it is a float. If you divide two integers, it will return an integer that is rounded down, in your case to 0, resulting in a 0 answer in any case. If any of the two is a float, the result will be a float. In Python 3, either case will give the same float result.</p>
<p>This should work:</p>
<pre><code>def FtoC(self):
fahren = self._fahrenVar.get()
cel = (5/9.0) * (fahren - 32)
self._celVar.set(cel)
</code></pre>
<p>By the way, your conversion formula for Celsius to Fahrenheit is incorrect. First multiply by 9/5 before you add 32!
It should be:</p>
<pre><code>fahren = ((9/5.0) * cel) + 32
</code></pre>
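<p>Putting both fixes together, a small sketch of the two conversions as plain functions with float literals (so they behave the same under Python 2 and Python 3):</p>

```python
def f_to_c(fahren):
    # the float literal forces true division under Python 2 as well
    return (5.0 / 9.0) * (fahren - 32)

def c_to_f(cel):
    # multiply first, then add the 32-degree offset
    return (9.0 / 5.0) * cel + 32
```
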
| 3 | 2016-07-29T22:36:11Z | [
"python",
"python-2.7",
"tkinter"
] |
Memory Usage, Filling Pandas DataFrame using Dict vs using key and value Lists | 38,668,376 | <p>I am making a package that reads a binary file and returns data that can be used to initialize a <code>DataFrame</code>, I am now wondering if it is best to return a <code>dict</code> or two lists (one that holds the keys and one that holds the values). </p>
<p>The package I am making is not supposed to be entirely reliant on a <code>DataFrame</code> object, which is why my package currently outputs the data as a <code>dict</code> (for easy access). If there could be some memory and speed savings (which is paramount for my application as I am dealing with millions of data points), I would like to output the key and value lists instead. These iterables would then be used to initialize a <code>DataFrame</code>.</p>
<p>Here is a simple example:</p>
<pre><code>In [1]: d = {(1,1,1): '111',
...: (2,2,2): '222',
...: (3,3,3): '333',
...: (4,4,4): '444'}
In [2]: keyslist=[(1,1,1),(2,2,2),(3,3,3),(4,4,4)]
In [3]: valslist=['111','222','333','444']
In [4]: import pandas as pd
In [5]: dfdict=pd.DataFrame(d.values(), index=pd.MultiIndex.from_tuples(d.keys(), names=['a','b','c']))
In [6]: dfdict
Out[6]:
0
a b c
3 3 3 333
2 2 2 222
1 1 1 111
4 4 4 444
In [7]: dflist=pd.DataFrame(valslist, index=pd.MultiIndex.from_tuples(keyslist, names=['a','b','c']))
In [8]: dflist
Out[8]:
0
a b c
1 1 1 111
2 2 2 222
3 3 3 333
4 4 4 444
</code></pre>
<p>It is my understanding that <code>d.values()</code> and <code>d.keys()</code> are creating new <strong><em>copies</em></strong> of the data. If we disregard the fact that a <code>dict</code> takes more memory than a <code>list</code>, does using <code>d.values()</code> and <code>d.keys()</code> lead to more memory usage than the <code>list</code> pair implementation?</p>
| 3 | 2016-07-29T22:30:40Z | 38,669,276 | <p>I made memory profiling of 1M rows. The winning structure is to use array.array for every numerical index and a list for strings (147MB data and 310MB conversion to pandas).</p>
<p>According to Python manual </p>
<blockquote>
<p>Arrays are sequence types and behave very much like lists, except that
the type of objects stored in them is constrained.</p>
</blockquote>
<p>They even have an <code>append</code> method and most likely very fast appends.</p>
<p>Second place goes to two separate lists. (308MB and 450MB)</p>
<p>The other two options, using a dict and using a list with tuples of four, were the worst. Dict: 339MB, 524MB. List of four: 308MB, 514MB.</p>
<p>Here is the use of array.array:</p>
<pre><code>In [1]: from array import array
In [2]: import gc
In [3]: import pandas as pd
In [4]: %load_ext memory_profiler
In [5]: a1=array("l",range(1000000))
In [6]: a2=array("l",range(1000000))
In [7]: a3=array("l",range(1000000))
In [8]: b=[str(x*111) for x in list(range(1000000))]
In [9]: gc.collect()
Out[9]: 0
In [10]: %memit a1,a2,a3,b
peak memory: 147.64 MiB, increment: 0.32 MiB
In [11]: %memit dfpair=pd.DataFrame(b, index=pd.MultiIndex.from_arrays([a1,a2,a3], names=['a','b','c']))
peak memory: 310.60 MiB, increment: 162.91 MiB
</code></pre>
<p>Here is the rest of the code (very long):</p>
<p>List of tuples of four:</p>
<pre><code>In [1]: import gc
In [2]: import pandas as pd
In [3]: %load_ext memory_profiler
In [4]: a=list(zip(list(range(1000000)),list(range(1000000)),list(range(1000000))))
In [5]: b=[str(x*111) for x in list(range(1000000))]
In [6]: d2=[x+(b[i],) for i,x in enumerate(a)]
In [7]: del a
In [8]: del b
In [9]: gc.collect()
Out[9]: 0
In [10]: %memit d2
peak memory: 308.40 MiB, increment: 0.28 MiB
In [11]: %memit df = pd.DataFrame(d2, columns=['a','b','c','d']).set_index(['a','b','c'])
peak memory: 514.21 MiB, increment: 205.80 MiB
</code></pre>
<p>Dictionary:</p>
<pre><code>In [1]: import gc
In [2]: import pandas as pd
In [3]: %load_ext memory_profiler
In [4]: a=list(zip(list(range(1000000)),list(range(1000000)),list(range(1000000))))
In [5]: b=[str(x*111) for x in list(range(1000000))]
In [6]: d = dict(zip(a, b))
In [7]: del a
In [8]: del b
In [9]: gc.collect()
Out[9]: 0
In [10]: %memit d
peak memory: 339.14 MiB, increment: 0.23 MiB
In [11]: %memit dfdict=pd.DataFrame(list(d.values()), index=pd.MultiIndex.from_tuples(d.keys(), names=['a','b','c']))
peak memory: 524.10 MiB, increment: 184.95 MiB
</code></pre>
<p>Two arrays:</p>
<pre><code>In [1]: import gc
In [2]: import pandas as pd
In [3]: %load_ext memory_profiler
In [4]: a=list(zip(list(range(1000000)),list(range(1000000)),list(range(1000000))))
In [5]: b=[str(x*111) for x in list(range(1000000))]
In [6]: gc.collect()
Out[6]: 0
In [7]: %memit a,b
peak memory: 307.75 MiB, increment: 0.19 MiB
In [8]: %memit dfpair=pd.DataFrame(b, index=pd.MultiIndex.from_tuples(a, names=['a','b','c']))
peak memory: 459.94 MiB, increment: 152.19 MiB
</code></pre>
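<p>As a rough sanity check of why <code>array.array</code> wins, <code>sys.getsizeof</code> can compare the container costs (a sketch; <code>getsizeof</code> on a list counts only the pointer slots, so the per-element <code>int</code> objects have to be added by hand, while the array stores its payload inline with no boxing):</p>

```python
import sys
from array import array

n = 100000
ints = list(range(n))
arr = array('l', ints)

# list: container of pointers plus one boxed int object per element
list_total = sys.getsizeof(ints) + sum(sys.getsizeof(x) for x in ints)

# array: one contiguous buffer of machine-sized longs
array_total = sys.getsizeof(arr)
```

<p>The per-element Python <code>int</code> objects are what makes the list side several times larger, which matches the <code>memory_profiler</code> numbers above.</p>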
| 3 | 2016-07-30T00:40:50Z | [
"python",
"performance",
"list",
"pandas",
"dictionary"
] |
Memory Usage, Filling Pandas DataFrame using Dict vs using key and value Lists | 38,668,376 | <p>I am making a package that reads a binary file and returns data that can be used to initialize a <code>DataFrame</code>, I am now wondering if it is best to return a <code>dict</code> or two lists (one that holds the keys and one that holds the values). </p>
<p>The package I am making is not supposed to be entirely reliant on a <code>DataFrame</code> object, which is why my package currently outputs the data as a <code>dict</code> (for easy access). If there could be some memory and speed savings (which is paramount for my application as I am dealing with millions of data points), I would like to output the key and value lists instead. These iterables would then be used to initialize a <code>DataFrame</code>.</p>
<p>Here is a simple example:</p>
<pre><code>In [1]: d = {(1,1,1): '111',
...: (2,2,2): '222',
...: (3,3,3): '333',
...: (4,4,4): '444'}
In [2]: keyslist=[(1,1,1),(2,2,2),(3,3,3),(4,4,4)]
In [3]: valslist=['111','222','333','444']
In [4]: import pandas as pd
In [5]: dfdict=pd.DataFrame(d.values(), index=pd.MultiIndex.from_tuples(d.keys(), names=['a','b','c']))
In [6]: dfdict
Out[6]:
0
a b c
3 3 3 333
2 2 2 222
1 1 1 111
4 4 4 444
In [7]: dflist=pd.DataFrame(valslist, index=pd.MultiIndex.from_tuples(keyslist, names=['a','b','c']))
In [8]: dflist
Out[8]:
0
a b c
1 1 1 111
2 2 2 222
3 3 3 333
4 4 4 444
</code></pre>
<p>It is my understanding that <code>d.values()</code> and <code>d.keys()</code> create a new <strong><em>copy</em></strong> of the data. If we disregard the fact that a <code>dict</code> takes more memory than a <code>list</code>, does using <code>d.values()</code> and <code>d.keys()</code> lead to more memory usage than the <code>list</code> pair implementation?</p>
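<p>As a rough, stdlib-only sanity check of the container overhead itself (not the shared payload objects), one can compare <code>sys.getsizeof</code> of a dict against the two flat lists; note that <code>getsizeof</code> is shallow and the exact numbers vary by Python version and platform:</p>

```python
import sys

def shallow_container_sizes(n=10000):
    """Build the same n key/value pairs both ways and compare the shallow
    container sizes (the tuple keys and string values are shared, so only
    the dict/list structures themselves differ).  sys.getsizeof does not
    follow references, and exact numbers vary by Python version."""
    keys = [(i, i, i) for i in range(n)]
    vals = [str(i) * 3 for i in range(n)]
    d = dict(zip(keys, vals))
    return sys.getsizeof(d), sys.getsizeof(keys) + sys.getsizeof(vals)

dict_size, pair_size = shallow_container_sizes()
# A dict stores a hash plus key and value pointer per entry in a sparse
# table, so its container overhead is larger than two flat pointer arrays.
```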
| 3 | 2016-07-29T22:30:40Z | 38,669,402 | <p>Here are the benchmarks using <code>memory_profiler</code>:</p>
<pre><code>Filename: testdict.py
Line # Mem usage Increment Line Contents
================================================
4 66.2 MiB 0.0 MiB @profile
5 def testdict():
6
7 66.2 MiB 0.0 MiB d = {}
8
9 260.6 MiB 194.3 MiB for i in xrange(0,1000000):
10 260.6 MiB 0.0 MiB d[(i,i,i)]=str(i)*3
11
12 400.2 MiB 139.6 MiB dfdict=pd.DataFrame(d.values(), index=
pd.MultiIndex.from_tuples(d.keys(), names=['a','b','c']))
Filename: testlist.py
Line # Mem usage Increment Line Contents
================================================
4 66.5 MiB 0.0 MiB @profile
5 def testlist():
6
7 66.5 MiB 0.0 MiB keyslist=[]
8 66.5 MiB 0.0 MiB valslist=[]
9
10 229.3 MiB 162.8 MiB for i in xrange(0,1000000):
11 229.3 MiB 0.0 MiB keyslist.append((i,i,i))
12 229.3 MiB 0.0 MiB valslist.append(str(i)*3)
13
14 273.6 MiB 44.3 MiB dflist=pd.DataFrame(valslist, index=
pd.MultiIndex.from_tuples(keyslist, names=['a','b','c']))
</code></pre>
<p>For the same task and memory types, it does seem the dictionary implementation is not as memory efficient.</p>
<h1>Edit</h1>
<p>For some reason, when I change the values to arrays of numbers (more representative of my data), I get very similar performance. Does anyone know why this is happening?</p>
<pre><code>Filename: testdict.py
Line # Mem usage Increment Line Contents
================================================
4 66.9 MiB 0.0 MiB @profile
5 def testdict():
6
7 66.9 MiB 0.0 MiB d = {}
8
9 345.6 MiB 278.7 MiB for i in xrange(0,1000000):
10 345.6 MiB 0.0 MiB d[(i,i,i)]=[0]*9
11
12 546.2 MiB 200.6 MiB dfdict=pd.DataFrame(d.values(), index=
pd.MultiIndex.from_tuples(d.keys(), names=['a','b','c']))
Filename: testlist.py
Line # Mem usage Increment Line Contents
================================================
4 66.3 MiB 0.0 MiB @profile
5 def testlist():
6
7 66.3 MiB 0.0 MiB keyslist=[]
8 66.3 MiB 0.0 MiB valslist=[]
9
10 314.7 MiB 248.4 MiB for i in xrange(0,1000000):
11 314.7 MiB 0.0 MiB keyslist.append((i,i,i))
12 314.7 MiB 0.0 MiB valslist.append([0]*9)
13
14 515.2 MiB 200.6 MiB dflist=pd.DataFrame(valslist, index=
pd.MultiIndex.from_tuples(keyslist, names=['a','b','c']))
</code></pre>
| 0 | 2016-07-30T01:04:48Z | [
"python",
"performance",
"list",
"pandas",
"dictionary"
] |
Need help outputting json with python | 38,668,389 | <p>I'm in need of help outputting the json key with python. I tried to output the name "carl".</p>
<p>Python code : </p>
<pre><code> from json import loads
import json,urllib2
class yomamma:
def __init__(self):
url = urlopen('http://localhost/name.php').read()
name = loads(url)
print "Hello" (name)
</code></pre>
<p>PHP code (for the JSON which I made):</p>
<pre><code><?php
$arr = array('person_one'=>"Carl", 'person_two'=>"jack");
echo json_encode($arr);
</code></pre>
<p>The output of the PHP is:
{"person_one":"Carl","person_two":"jack"}</p>
| 1 | 2016-07-29T22:31:44Z | 38,668,426 | <p>I'll just assume the PHP code works correctly, I don't know PHP very well.</p>
<p>On the client, I recommend using <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>requests</code></a> (installable through <code>pip install requests</code>):</p>
<pre><code>import requests
r = requests.get('http://localhost/name.php')
data = r.json()
print data['person_one']
</code></pre>
<p>The <code>.json</code> method returns a Python dictionary.</p>
<p>Taking a closer look at your code, it seems you're trying to concatenate two strings by just writing them next to each other. Instead, use either the concatenation operator (<code>+</code>):</p>
<pre><code>print "Hello" + data['person_one']
</code></pre>
<p>Alternatively, you can use the string formatting functionality:</p>
<pre><code>print "Hello {}".format(data['person_one'])
</code></pre>
<p>Or even fancier (but maybe a bit complex to understand for the start):</p>
<pre><code>r = requests.get('http://localhost/name.php')
print "Hello {person_one}".format(**r.json())
</code></pre>
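<p>For reference, the same formatting idea in Python 3 syntax, with a stub dict standing in for <code>r.json()</code> since the localhost endpoint above is only illustrative:</p>

```python
# Stub for the parsed JSON the PHP page returns (stands in for r.json(),
# since the localhost endpoint above is only illustrative).
data = {"person_one": "Carl", "person_two": "jack"}

# Unpack the dict straight into the format string, with print as a function.
greeting = "Hello {person_one}".format(**data)
print(greeting)  # Hello Carl
```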
| 1 | 2016-07-29T22:35:28Z | [
"php",
"python",
"json"
] |
Need help outputting json with python | 38,668,389 | <p>I'm in need of help outputting the json key with python. I tried to output the name "carl".</p>
<p>Python code : </p>
<pre><code> from json import loads
import json,urllib2
class yomamma:
def __init__(self):
url = urlopen('http://localhost/name.php').read()
name = loads(url)
print "Hello" (name)
</code></pre>
<p>PHP code (for the JSON which I made):</p>
<pre><code><?php
$arr = array('person_one'=>"Carl", 'person_two'=>"jack");
echo json_encode($arr);
</code></pre>
<p>The output of the PHP is:
{"person_one":"Carl","person_two":"jack"}</p>
| 1 | 2016-07-29T22:31:44Z | 38,668,470 | <p>try this:</p>
<pre><code>import json
person_data = json.loads(url)
print "Hello {}".format(person_data["person_one"])
</code></pre>
| 0 | 2016-07-29T22:40:33Z | [
"php",
"python",
"json"
] |
how to share a javascript websocket instance with multiple html files | 38,668,391 | <p>I am using websockets in javascript and python flask. </p>
<p>I have a websocket server to which I connect my webpage using javascript websockets. The "/" route contains a form that contains the ip address of the websocket server, and the "/connectToServer" route will establish a websocket connection with the server. </p>
<p>Now, I will have routes from this webpage like /details and /profile. I need to use the same instance of the websocket in all my routes. How do I do it? </p>
<p>P.S. I do not intend to use the websocket client api in python. I need to do it in javascript only.</p>
| -1 | 2016-07-29T22:31:55Z | 38,669,022 | <p>You would need something to make your page persistent. </p>
<p>Client-side libraries such as <a href="https://angularjs.org/" rel="nofollow">Angular</a> or <a href="https://vuejs.org/" rel="nofollow">Vue</a> (just two random examples) will do the job. </p>
<p>However, you should look into a tutorial or something, since it's not very straightforward.</p>
| 0 | 2016-07-29T23:56:45Z | [
"javascript",
"python",
"flask",
"websocket"
] |
PIL not saving PNG images correctly on Windows | 38,668,429 | <p>I am trying to save an image as a PNG using PIL in Python. It works great on any Linux machine I try, but when I try a Windows machine the output image is completely transparent. If I try to save it as a JPEG it works fine. Any ideas?</p>
<pre><code>bg1 = Image.new('RGBA', screen_size, (255,255,255,0))
...
bg1.save(path, 'PNG')
</code></pre>
<p>vs</p>
<pre><code>bg1.save(path, 'JPEG', quality=100)
</code></pre>
| 0 | 2016-07-29T22:35:53Z | 39,329,317 | <p>When you create the new image, the fourth component of your RGBA values is the alpha. By setting it to 0 you are telling every pixel to be completely transparent. Try setting it to 255 if you don't actually want any transparency:</p>
<pre><code>bg1 = Image.new('RGBA', screen_size, (255,255,255,255))
</code></pre>
| 0 | 2016-09-05T11:09:40Z | [
"python",
"python-imaging-library",
"pillow"
] |
Efficient way to find the shortest distance between two arrays? | 38,668,482 | <p>I am trying to find the shortest distance between two sets of arrays. The x- arrays are identical and just contain integers. Here is an example of what I am trying to do:</p>
<pre><code>from math import sqrt
import numpy as np
x1 = x2 = np.linspace(-1000, 1000, 2001)
y1 = (lambda x, a, b: a*x + b)(x1, 2, 1)
y2 = (lambda x, a, b: a*(x-2)**2 + b)(x2, 2, 10)
def dis(x1, y1, x2, y2):
return sqrt((y2-y1)**2+(x2-x1)**2)
min_distance = np.inf
for a, b in zip(x1, y1):
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
>>> min_distance
2.2360679774997898
</code></pre>
<p>This solution works, but the problem is runtime. If x has a length of ~10,000, the solution is infeasible because the program has O(n^2) runtime. Now, I tried making some approximations to speed the program up:</p>
<pre><code>for a, b in zip(x1, y1):
cut = (x2 > a-20)*(x2 < a+20)
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
</code></pre>
<p>But the program is still taking longer than I'd like. Now, from my understanding, it is generally inefficient to loop through a numpy array, so I'm sure there is still room for improvement. Any ideas on how to speed this program up?</p>
 | 0 | 2016-07-29T22:41:55Z | 38,668,562 | <p>This is a difficult problem, and it may help if you are willing to accept approximations. I would check out something like Spotify's <a href="https://github.com/spotify/annoy" rel="nofollow">annoy</a>.</p>
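<p>If pulling in an ANN library is too heavy, the "accept approximations" idea can be sketched with a plain spatial grid using only the stdlib (a crude stand-in for an index like annoy): bucket one point set into square cells, then compare each query point only against the 3x3 block of cells around it. The cell size is an assumed upper bound tuning knob, and the sample points below assume the question's line/parabola setup restricted to a small range:</p>

```python
from collections import defaultdict
import math

def approx_min_distance(points_a, points_b, cell):
    """Bucket points_b into square cells of side `cell`, then for each
    point in points_a check only the 3x3 block of neighbouring cells.
    Any pair closer than `cell` is guaranteed to land in adjacent cells,
    so the result is exact as long as `cell` is at least the true
    minimum distance; otherwise it may overestimate."""
    grid = defaultdict(list)
    for x, y in points_b:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    best_sq = float('inf')
    for x, y in points_a:
        cx, cy = int(x // cell), int(y // cell)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for bx, by in grid.get((cx + dx, cy + dy), ()):
                    best_sq = min(best_sq, (x - bx) ** 2 + (y - by) ** 2)
    return math.sqrt(best_sq)

# Same line/parabola shape as the question, restricted to a small range.
pts1 = [(x, 2 * x + 1) for x in range(-5, 6)]
pts2 = [(x, 2 * (x - 2) ** 2 + 10) for x in range(-5, 6)]
print(approx_min_distance(pts1, pts2, cell=5))  # sqrt(5), the exact minimum here
```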
| 0 | 2016-07-29T22:51:43Z | [
"python",
"arrays",
"numpy",
"runtime",
"ipython"
] |
Efficient way to find the shortest distance between two arrays? | 38,668,482 | <p>I am trying to find the shortest distance between two sets of arrays. The x- arrays are identical and just contain integers. Here is an example of what I am trying to do:</p>
<pre><code>from math import sqrt
import numpy as np
x1 = x2 = np.linspace(-1000, 1000, 2001)
y1 = (lambda x, a, b: a*x + b)(x1, 2, 1)
y2 = (lambda x, a, b: a*(x-2)**2 + b)(x2, 2, 10)
def dis(x1, y1, x2, y2):
return sqrt((y2-y1)**2+(x2-x1)**2)
min_distance = np.inf
for a, b in zip(x1, y1):
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
>>> min_distance
2.2360679774997898
</code></pre>
<p>This solution works, but the problem is runtime. If x has a length of ~10,000, the solution is infeasible because the program has O(n^2) runtime. Now, I tried making some approximations to speed the program up:</p>
<pre><code>for a, b in zip(x1, y1):
cut = (x2 > a-20)*(x2 < a+20)
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
</code></pre>
<p>But the program is still taking longer than I'd like. Now, from my understanding, it is generally inefficient to loop through a numpy array, so I'm sure there is still room for improvement. Any ideas on how to speed this program up?</p>
| 0 | 2016-07-29T22:41:55Z | 38,668,619 | <p>Your problem could also be represented as 2d collision detection, so a <a href="https://en.wikipedia.org/wiki/Quadtree" rel="nofollow">quadtree</a> might help. Insertion and querying both run in O(log n) time, so the whole search would run in O(n log n). </p>
<p>One more suggestion, since sqrt is monotonic, you can compare the squares of distances instead of the distances themselves, which will save you n^2 square root calculations. </p>
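<p>A minimal sketch of that second suggestion, using only the stdlib (the sample points assume the question's line/parabola setup, restricted to a small range):</p>

```python
import math

def min_pair_distance(points_a, points_b):
    """Brute-force closest pair between two point sets, comparing squared
    distances inside the loop; sqrt is monotonic, so it is taken just once
    at the end instead of n^2 times."""
    best_sq = float('inf')
    for ax, ay in points_a:
        for bx, by in points_b:
            d_sq = (ax - bx) ** 2 + (ay - by) ** 2
            if d_sq < best_sq:
                best_sq = d_sq
    return math.sqrt(best_sq)

pts1 = [(x, 2 * x + 1) for x in range(-5, 6)]              # points on the line
pts2 = [(x, 2 * (x - 2) ** 2 + 10) for x in range(-5, 6)]  # points on the parabola
print(min_pair_distance(pts1, pts2))  # sqrt(5) ~ 2.2360679...
```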
| 1 | 2016-07-29T22:58:14Z | [
"python",
"arrays",
"numpy",
"runtime",
"ipython"
] |
Efficient way to find the shortest distance between two arrays? | 38,668,482 | <p>I am trying to find the shortest distance between two sets of arrays. The x- arrays are identical and just contain integers. Here is an example of what I am trying to do:</p>
<pre><code>from math import sqrt
import numpy as np
x1 = x2 = np.linspace(-1000, 1000, 2001)
y1 = (lambda x, a, b: a*x + b)(x1, 2, 1)
y2 = (lambda x, a, b: a*(x-2)**2 + b)(x2, 2, 10)
def dis(x1, y1, x2, y2):
return sqrt((y2-y1)**2+(x2-x1)**2)
min_distance = np.inf
for a, b in zip(x1, y1):
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
>>> min_distance
2.2360679774997898
</code></pre>
<p>This solution works, but the problem is runtime. If x has a length of ~10,000, the solution is infeasible because the program has O(n^2) runtime. Now, I tried making some approximations to speed the program up:</p>
<pre><code>for a, b in zip(x1, y1):
cut = (x2 > a-20)*(x2 < a+20)
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
</code></pre>
<p>But the program is still taking longer than I'd like. Now, from my understanding, it is generally inefficient to loop through a numpy array, so I'm sure there is still room for improvement. Any ideas on how to speed this program up?</p>
| 0 | 2016-07-29T22:41:55Z | 38,670,721 | <p><code>scipy</code> has a <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow"><code>cdist</code> function</a> which calculates distance between all pairs of points:</p>
<pre><code>from scipy.spatial.distance import cdist
import numpy as np
x1 = x2 = np.linspace(-1000, 1000, 2001)
y1 = (lambda x, a, b: a*x + b)(x1, 2, 1)
y2 = (lambda x, a, b: a*(x-2)**2 + b)(x2, 2, 10)
R1 = np.vstack((x1,y1)).T
R2 = np.vstack((x2,y2)).T
dists = cdist(R1,R2) # find all mutual distances
print (dists.min())
# output: 2.2360679774997898
</code></pre>
<p>This runs more than 250 times faster than the original for loop.</p>
| 1 | 2016-07-30T05:42:55Z | [
"python",
"arrays",
"numpy",
"runtime",
"ipython"
] |
Return all keys along with value in nested dictionary | 38,668,680 | <p>I am working on getting all text that exists in several <code>.yaml</code> files placed into a new singular YAML file that will contain the English translations that someone can then translate into Spanish.</p>
<p>Each YAML file has a lot of nested text. I want to print the full 'path', aka all the keys, along with the value, for each value in the YAML file. Here's an example input for a <code>.yaml</code> file that lives in the myproject.section.more_information file:</p>
<pre><code>default:
  heading: Here’s A Title
learn_more:
title: Title of Thing
url: www.url.com
description: description
opens_new_window: true
</code></pre>
<p>and here's the desired output: </p>
<pre><code>myproject.section.more_information.default.heading: Here’s a Title
myproject.section.more_information.default.learn_more.title: Title of Thing
myproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description: description
myproject.section.more_information.default.learn_more.opens_new_window: true
</code></pre>
<p>This seems like a good candidate for recursion, so I've looked at examples such as <a href="http://stackoverflow.com/questions/36808260/python-recursive-search-of-dict-with-nested-keys?rq=1">this answer</a></p>
<p>However, I want to preserve all of the keys that lead to a given value, not just the last key in a value. I'm currently using PyYAML to read/write YAML. </p>
<p>Any tips on how to save each key as I continue to check if the item is a dictionary and then return all the keys associated with each value?</p>
| 1 | 2016-07-29T23:07:58Z | 38,668,875 | <p>What you're wanting to do is flatten nested dictionaries. This would be a good place to start: <a href="http://stackoverflow.com/questions/6027558/flatten-nested-python-dictionaries-compressing-keys">Flatten nested Python dictionaries, compressing keys</a></p>
<p>In fact, I think the code snippet in the top answer would work for you if you just changed the sep argument to <code>.</code>.</p>
<p>edit:</p>
<p>Check this for a working example based on the linked SO answer <a href="http://ideone.com/Sx625B" rel="nofollow">http://ideone.com/Sx625B</a></p>
<pre><code>import collections
some_dict = {
'default': {
        'heading': 'Here’s A Title',
'learn_more': {
'title': 'Title of Thing',
'url': 'www.url.com',
'description': 'description',
'opens_new_window': 'true'
}
}
}
def flatten(d, parent_key='', sep='_'):
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
if isinstance(v, collections.MutableMapping):
items.extend(flatten(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
results = flatten(some_dict, parent_key='', sep='.')
for item in results:
print(item + ': ' + results[item])
</code></pre>
<p>If you want it in order, you'll need an OrderedDict though.</p>
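<p>A sketch of that ordered variant (simplified to check <code>isinstance(v, dict)</code> rather than <code>MutableMapping</code>; on CPython 3.7+ plain dicts already preserve insertion order, but <code>OrderedDict</code> makes the intent explicit):</p>

```python
from collections import OrderedDict

def flatten_ordered(d, parent_key='', sep='.'):
    """Same flattening idea as above, but the accumulator is an
    OrderedDict so key order follows the source document."""
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):           # OrderedDict is a dict subclass
            items.extend(flatten_ordered(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return OrderedDict(items)

data = OrderedDict([
    ('default', OrderedDict([
        ('heading', 'A Title'),
        ('learn_more', OrderedDict([('url', 'www.url.com')])),
    ])),
])
flat = flatten_ordered(data)
# keys come out in source order: 'default.heading', then 'default.learn_more.url'
```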
| 0 | 2016-07-29T23:33:56Z | [
"python",
"dictionary",
"recursion",
"pyyaml"
] |
Return all keys along with value in nested dictionary | 38,668,680 | <p>I am working on getting all text that exists in several <code>.yaml</code> files placed into a new singular YAML file that will contain the English translations that someone can then translate into Spanish.</p>
<p>Each YAML file has a lot of nested text. I want to print the full 'path', aka all the keys, along with the value, for each value in the YAML file. Here's an example input for a <code>.yaml</code> file that lives in the myproject.section.more_information file:</p>
<pre><code>default:
  heading: Here’s A Title
learn_more:
title: Title of Thing
url: www.url.com
description: description
opens_new_window: true
</code></pre>
<p>and here's the desired output: </p>
<pre><code>myproject.section.more_information.default.heading: Here’s a Title
myproject.section.more_information.default.learn_more.title: Title of Thing
myproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description: description
myproject.section.more_information.default.learn_more.opens_new_window: true
</code></pre>
<p>This seems like a good candidate for recursion, so I've looked at examples such as <a href="http://stackoverflow.com/questions/36808260/python-recursive-search-of-dict-with-nested-keys?rq=1">this answer</a></p>
<p>However, I want to preserve all of the keys that lead to a given value, not just the last key in a value. I'm currently using PyYAML to read/write YAML. </p>
<p>Any tips on how to save each key as I continue to check if the item is a dictionary and then return all the keys associated with each value?</p>
| 1 | 2016-07-29T23:07:58Z | 38,668,894 | <p>Keep a simple list of strings, being the most recent key at each indentation depth. When you progress from one line to the next with no change, simply change the item at the end of the list. When you "out-dent", pop the last item off the list. When you indent, append to the list.</p>
<p>Then, each time you hit a colon, the corresponding key item is the concatenation of the strings in the list, something like:</p>
<pre><code>'.'.join(key_list)
</code></pre>
<p>Does that get you moving at an honorable speed?</p>
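<p>A minimal sketch of that push/pop idea, applied to an already-parsed dict rather than to raw indented lines (hypothetical helper name):</p>

```python
def flatten_with_paths(node, key_list=None, out=None):
    """Walk a nested dict, keeping the chain of keys seen so far in
    key_list; at each leaf the full dotted path is '.'.join(key_list)."""
    if key_list is None:
        key_list = []
    if out is None:
        out = {}
    for key, value in node.items():
        key_list.append(key)              # "indent": push the new key
        if isinstance(value, dict):
            flatten_with_paths(value, key_list, out)
        else:
            out['.'.join(key_list)] = value
        key_list.pop()                    # "out-dent": pop it back off
    return out

doc = {'default': {'heading': 'A Title',
                   'learn_more': {'title': 'Thing', 'url': 'www.url.com'}}}
flat = flatten_with_paths(doc)
```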
| 0 | 2016-07-29T23:35:59Z | [
"python",
"dictionary",
"recursion",
"pyyaml"
] |
Return all keys along with value in nested dictionary | 38,668,680 | <p>I am working on getting all text that exists in several <code>.yaml</code> files placed into a new singular YAML file that will contain the English translations that someone can then translate into Spanish.</p>
<p>Each YAML file has a lot of nested text. I want to print the full 'path', aka all the keys, along with the value, for each value in the YAML file. Here's an example input for a <code>.yaml</code> file that lives in the myproject.section.more_information file:</p>
<pre><code>default:
  heading: Here’s A Title
learn_more:
title: Title of Thing
url: www.url.com
description: description
opens_new_window: true
</code></pre>
<p>and here's the desired output: </p>
<pre><code>myproject.section.more_information.default.heading: Here’s a Title
myproject.section.more_information.default.learn_more.title: Title of Thing
myproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description: description
myproject.section.more_information.default.learn_more.opens_new_window: true
</code></pre>
<p>This seems like a good candidate for recursion, so I've looked at examples such as <a href="http://stackoverflow.com/questions/36808260/python-recursive-search-of-dict-with-nested-keys?rq=1">this answer</a></p>
<p>However, I want to preserve all of the keys that lead to a given value, not just the last key in a value. I'm currently using PyYAML to read/write YAML. </p>
<p>Any tips on how to save each key as I continue to check if the item is a dictionary and then return all the keys associated with each value?</p>
 | 1 | 2016-07-29T23:07:58Z | 38,670,751 | <p>Walking over nested dictionaries begs for recursion, and handing the "prefix" of the path into each call saves you from having to do any manipulation on the segments of your path (as @Prune suggests).</p>
<p>There are a few things to keep in mind that make this problem interesting:</p>
<ul>
<li>because you are using multiple files, the same path can occur in multiple files, which you need to handle (at least by throwing an error, as otherwise you might just lose data). In my example I generate a list of values.</li>
<li>dealing with special keys (non-string (convert?), empty string, keys containing a <code>.</code>). My example reports these and exits.</li>
</ul>
<p>Example code using <a href="https://pypi.python.org/pypi/ruamel.yaml/" rel="nofollow">ruamel.yaml</a> ¹:</p>
<pre><code>import sys
import glob
import ruamel.yaml
from ruamel.yaml.comments import CommentedMap, CommentedSeq
from ruamel.yaml.compat import string_types, ordereddict
class Flatten:
def __init__(self, base):
self._result = ordereddict() # key to list of tuples of (value, comment)
self._base = base
def add(self, file_name):
data = ruamel.yaml.round_trip_load(open(file_name))
self.walk_tree(data, self._base)
def walk_tree(self, data, prefix=None):
"""
this is based on ruamel.yaml.scalarstring.walk_tree
"""
if prefix is None:
prefix = ""
if isinstance(data, dict):
for key in data:
full_key = self.full_key(key, prefix)
value = data[key]
if isinstance(value, (dict, list)):
self.walk_tree(value, full_key)
continue
# value is a scalar
comment_token = data.ca.items.get(key)
comment = comment_token[2].value if comment_token else None
self._result.setdefault(full_key, []).append((value, comment))
        elif isinstance(data, list):
print("don't know how to handle lists", prefix)
sys.exit(1)
def full_key(self, key, prefix):
"""
check here for valid keys
"""
if not isinstance(key, string_types):
print('key has to be string', repr(key), prefix)
sys.exit(1)
if '.' in key:
print('dot in key not allowed', repr(key), prefix)
sys.exit(1)
if key == '':
print('empty key not allowed', repr(key), prefix)
sys.exit(1)
return prefix + '.' + key
def dump(self, out):
res = CommentedMap()
for path in self._result:
values = self._result[path]
if len(values) == 1: # single value for path
res[path] = values[0][0]
if values[0][1]:
res.yaml_add_eol_comment(values[0][1], key=path)
continue
res[path] = seq = CommentedSeq()
for index, value in enumerate(values):
seq.append(value[0])
if values[0][1]:
res.yaml_add_eol_comment(values[0][1], key=index)
ruamel.yaml.round_trip_dump(res, out)
flatten = Flatten('myproject.section.more_information')
for file_name in glob.glob('*.yaml'):
flatten.add(file_name)
flatten.dump(sys.stdout)
</code></pre>
<p>If you have an additional input file:</p>
<pre><code>default:
learn_more:
commented: value # this value has a comment
description: another description
</code></pre>
<p>then the result is:</p>
<pre><code>myproject.section.more_information.default.heading: Here’s A Title
myproject.section.more_information.default.learn_more.title: Title of Thing
myproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description:
- description
- another description
myproject.section.more_information.default.learn_more.opens_new_window: true
myproject.section.more_information.default.learn_more.commented: value # this value has a comment
</code></pre>
<p>Of course if your input doesn't have double paths, your output won't have any lists.</p>
<p>Using <code>string_types</code> and <code>ordereddict</code> from <code>ruamel.yaml</code> makes this Python 2 and Python 3 compatible (you don't indicate which version you are using). </p>
<p>The ordereddict preserves the original key ordering, but this is of course dependent on the processing order of the files. If you want the paths sorted, just change <code>dump()</code> to use:</p>
<pre><code> for path in sorted(self._result):
</code></pre>
<p>Also note that the comment on the 'commented' dictionary entry is preserved.</p>
<hr>
<p>¹ <sub>ruamel.yaml is a YAML 1.2 parser that preserves comments and other data on round-tripping (PyYAML does most parts of YAML 1.1). Disclaimer: I am the author of ruamel.yaml</sub></p>
| 0 | 2016-07-30T05:46:52Z | [
"python",
"dictionary",
"recursion",
"pyyaml"
] |
Tokenizing a huge quantity of text in python | 38,668,717 | <p>I have a huge list of text files to tokenize. I have the following code which works for a small dataset. I am having trouble using the same procedure with a huge dataset, however. I am giving the example of a small dataset as below.</p>
<pre><code>In [1]: text = [["It works"], ["This is not good"]]
In [2]: tokens = [(A.lower().replace('.', '').split(' ') for A in L) for L in text]
In [3]: tokens
Out [3]:
[<generator object <genexpr> at 0x7f67c2a703c0>,
<generator object <genexpr> at 0x7f67c2a70320>]
In [4]: list_tokens = [tokens[i].next() for i in range(len(tokens))]
In [5]: list_tokens
Out [5]:
[['it', 'works'], ['this', 'is', 'not', 'good']]
</code></pre>
<p>While all works well with a small dataset, I encounter a problem processing a huge list of lists of strings (more than 1,000,000 lists of strings) with the same code. Although I can still tokenize the strings of the huge dataset as in <code>In [3]</code>, it fails in <code>In [4]</code> (i.e. it is killed in the terminal). I suspect it is just because the body of the text is too big. </p>
<p>I am, therefore, seeking suggestions on improving the procedure so as to obtain the lists of strings in a list, as I have in <code>In [5]</code>. </p>
<p>My actual purpose, however, is to count the words in each list. For instance, in the example of the small dataset above, I will have things as below.</p>
<pre><code>[[0,0,1,0,0,1], [1, 1, 0, 1, 1, 0]] (note: each integer denotes the count of each word)
</code></pre>
<p>If I don't have to convert generators to lists to get the desired results (i.e. word counts), that would also be good. </p>
<p>Please let me know if my question is unclear. I would love to clarify as best as I can. Thank you. </p>
| 0 | 2016-07-29T23:13:17Z | 38,668,948 | <p>You could create a <code>set</code> of unique words, then loop through and count each of those...</p>
<pre><code>#! /usr/bin/env python
text = [["It works works"], ["It is not good this"]]
SplitList = [x[0].split(" ") for x in text]
FlattenList = sum(SplitList,[]) # "trick" to flatten a list
UniqueList = list(set(FlattenList))
CountMatrix = [[x.count(y) for y in UniqueList] for x in SplitList]
print UniqueList
print CountMatrix
</code></pre>
<p>Output is the total list of words, and their counts in each string: </p>
<pre><code>['good', 'this', 'is', 'It', 'not', 'works']
[[0, 0, 0, 1, 0, 2], [1, 1, 1, 1, 1, 0]]
</code></pre>
| 1 | 2016-07-29T23:44:18Z | [
"python",
"string",
"nlp",
"tokenize"
] |
Tokenizing a huge quantity of text in python | 38,668,717 | <p>I have a huge list of text files to tokenize. I have the following code which works for a small dataset. I am having trouble using the same procedure with a huge dataset, however. I am giving the example of a small dataset as below.</p>
<pre><code>In [1]: text = [["It works"], ["This is not good"]]
In [2]: tokens = [(A.lower().replace('.', '').split(' ') for A in L) for L in text]
In [3]: tokens
Out [3]:
[<generator object <genexpr> at 0x7f67c2a703c0>,
<generator object <genexpr> at 0x7f67c2a70320>]
In [4]: list_tokens = [tokens[i].next() for i in range(len(tokens))]
In [5]: list_tokens
Out [5]:
[['it', 'works'], ['this', 'is', 'not', 'good']]
</code></pre>
<p>While all works well with a small dataset, I encounter a problem processing a huge list of lists of strings (more than 1,000,000 lists of strings) with the same code. Although I can still tokenize the strings of the huge dataset as in <code>In [3]</code>, it fails in <code>In [4]</code> (i.e. it is killed in the terminal). I suspect it is just because the body of the text is too big. </p>
<p>I am, therefore, seeking suggestions on improving the procedure so as to obtain the lists of strings in a list, as I have in <code>In [5]</code>. </p>
<p>My actual purpose, however, is to count the words in each list. For instance, in the example of the small dataset above, I will have things as below.</p>
<pre><code>[[0,0,1,0,0,1], [1, 1, 0, 1, 1, 0]] (note: each integer denotes the count of each word)
</code></pre>
<p>If I don't have to convert generators to lists to get the desired results (i.e. word counts), that would also be good. </p>
<p>Please let me know if my question is unclear. I would love to clarify as best as I can. Thank you. </p>
| 0 | 2016-07-29T23:13:17Z | 38,670,117 | <p>There are lots of available tokenizers that are optimized. I would look at <code>CountVectorizer</code> in <code>sklearn</code>, which is built for counting tokens. </p>
<p>You could also use <code>nltk</code> or <code>textblob</code>, if you want more options. The latter is faster, in my experience.</p>
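<p>For a sense of what such a vectorizer does under the hood, here is a stdlib-only sketch built on <code>collections.Counter</code> (not a substitute for the optimized implementations above, just the idea, reusing the question's sample text):</p>

```python
from collections import Counter

def count_vectorize(docs):
    """Build a shared vocabulary over all documents, then emit one count
    vector per document -- the core of what a count vectorizer produces."""
    tokenized = [doc.lower().replace('.', '').split() for doc in docs]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    counts_per_doc = [Counter(toks) for toks in tokenized]
    vectors = [[counts[tok] for tok in vocab] for counts in counts_per_doc]
    return vocab, vectors

vocab, vectors = count_vectorize(["It works", "This is not good"])
# vocab:   ['good', 'is', 'it', 'not', 'this', 'works']
# vectors: [[0, 0, 1, 0, 0, 1], [1, 1, 0, 1, 1, 0]]
```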
| 0 | 2016-07-30T03:36:10Z | [
"python",
"string",
"nlp",
"tokenize"
] |
Pandas drop the first few rows contain nan in each group | 38,668,737 | <p>I have panel data, and I would like to drop the first (few) row(s) which contain NaN in each group. (Or some general method which could drop based on the index within the group and other conditions.)</p>
<pre><code>df = pd.DataFrame(
{'ID': [10001, 10001, 10001, 10002, 10002, 10002, 10003, 10003, 10003, 10003],
'PRICE': [None, 11.5, 14.31, 15.125, 14.44, None, None, None, None, 23.55],
'date': [19920103, 19920106, 19920107, 19920108, 19920109, 19920110,
19920113, 19920114, 19920115, 19920116]},
index = range(1,11))
</code></pre>
<p>The data would look like:</p>
<pre><code> ID PRICE date
1 10001 NaN 19920103
2 10001 11.500 19920106
3 10001 14.310 19920107
4 10002 15.125 19920108
5 10002 14.440 19920109
6 10002 NaN 19920110
7 10003 NaN 19920113
8 10003 NaN 19920114
9 10003 NaN 19920115
10 10003 23.550 19920116
</code></pre>
<p>I would like to drop lines 1 and 7, but not line 9, since line 9 is not one of the first few missing observations. I tried:</p>
<pre><code>def mask_first_missing(x):
result = x.notnull() & x.rank()==1
return result
mask = df.groupby(['ID'])['PRICE'].transform(mask_first_missing).astype(bool)
print(df[mask])
</code></pre>
<p>But it removed rows 1, 7 and 9; apparently row 9 is not the first observation in group 3. </p>
<p>If I do this</p>
<pre><code>df[df.groupby('ID', as_index=False)['PRICE'].nth(0).notnull()]
</code></pre>
<p>Then the index created by the groupby object is not aligned with the original dataframe.</p>
<p>Could anybody help me with this? Thank you</p>
| 2 | 2016-07-29T23:16:07Z | 38,668,904 | <p>This is a way to do it:</p>
<pre><code>notnull = df.PRICE.notnull()
protected = df.index > df.PRICE.last_valid_index()
df[notnull | protected]
</code></pre>
<p><a href="http://i.stack.imgur.com/2dew3.png" rel="nofollow"><img src="http://i.stack.imgur.com/2dew3.png" alt="enter image description here"></a></p>
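<p>For intuition, the "drop only each group's leading missing rows" reading of the question can be sketched without pandas, on plain (ID, PRICE) tuples that are already grouped contiguously (hypothetical stand-in data mirroring the question's frame):</p>

```python
from itertools import groupby

rows = [
    (10001, None), (10001, 11.5), (10001, 14.31),
    (10002, 15.125), (10002, 14.44), (10002, None),
    (10003, None), (10003, None), (10003, None), (10003, 23.55),
]

kept = []
for _, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    # position of the first non-missing PRICE; everything before it is a
    # leading missing row and gets dropped, trailing Nones survive
    first_valid = next((i for i, r in enumerate(grp) if r[1] is not None),
                       len(grp))
    kept.extend(grp[first_valid:])

# group 10001 loses its leading None, group 10002 keeps its trailing None,
# group 10003 keeps only its final valid row
```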
| 2 | 2016-07-29T23:37:47Z | [
"python",
"pandas",
"panel"
] |
Pandas drop the first few rows contain nan in each group | 38,668,737 | <p>I have panel data, and I would like to drop the first (few) row(s) which contain NaN in each group. (Or some general method which could drop based on the index within the group and other conditions.)</p>
<pre><code>df = pd.DataFrame(
{'ID': [10001, 10001, 10001, 10002, 10002, 10002, 10003, 10003, 10003, 10003],
'PRICE': [None, 11.5, 14.31, 15.125, 14.44, None, None, None, None, 23.55],
'date': [19920103, 19920106, 19920107, 19920108, 19920109, 19920110,
19920113, 19920114, 19920115, 19920116]},
index = range(1,11))
</code></pre>
<p>The data would look like:</p>
<pre><code> ID PRICE date
1 10001 NaN 19920103
2 10001 11.500 19920106
3 10001 14.310 19920107
4 10002 15.125 19920108
5 10002 14.440 19920109
6 10002 NaN 19920110
7 10003 NaN 19920113
8 10003 NaN 19920114
9 10003 NaN 19920115
10 10003 23.550 19920116
</code></pre>
<p>I would like to drop lines 1 and 7, but not line 9, since line 9 is not one of the first few missing observations. I tried</p>
<pre><code>def mask_first_missing(x):
    result = x.notnull() & x.rank()==1
    return result
mask = df.groupby(['ID'])['PRICE'].transform(mask_first_missing).astype(bool)
print(df[mask])
</code></pre>
<p>But it removed rows 1, 7 and 9; apparently row 9 is not the first observation in group 3.</p>
<p>If I do this</p>
<pre><code>df[df.groupby('ID', as_index=False)['PRICE'].nth(0).notnull()]
</code></pre>
<p>Then the index created by groupby object is not aligned with the original dataframe</p>
<p>Could anybody help me with this? Thank you</p>
| 2 | 2016-07-29T23:16:07Z | 38,672,921 | <p>alternative approach using custom ranking:</p>
<pre><code>In [49]: %paste
df[df.assign(x=np.where(pd.isnull(df.PRICE), 1, np.nan))
.groupby('ID').x.cumsum().fillna(np.inf) > 1
]
## -- End pasted text --
Out[49]:
ID PRICE date
2 10001 11.500 19920106
3 10001 14.310 19920107
4 10002 15.125 19920108
5 10002 14.440 19920109
6 10002 14.120 19920110
8 10003 16.500 19920114
9 10003 NaN 19920115
</code></pre>
<p>Explanation:</p>
<pre><code>In [50]: df.assign(x=np.where(pd.isnull(df.PRICE), 1, np.nan))
Out[50]:
ID PRICE date x
1 10001 NaN 19920103 1.0
2 10001 11.500 19920106 NaN
3 10001 14.310 19920107 NaN
4 10002 15.125 19920108 NaN
5 10002 14.440 19920109 NaN
6 10002 14.120 19920110 NaN
7 10003 NaN 19920113 1.0
8 10003 16.500 19920114 NaN
9 10003 NaN 19920115 1.0
In [51]: df.assign(x=np.where(pd.isnull(df.PRICE), 1, np.nan)).groupby('ID').x.cumsum().fillna(np.inf)
Out[51]:
1 1.000000
2 inf
3 inf
4 inf
5 inf
6 inf
7 1.000000
8 inf
9 2.000000
Name: x, dtype: float64
In [52]: df.assign(x=np.where(pd.isnull(df.PRICE), 1, np.nan)).groupby('ID').x.cumsum().fillna(np.inf) > 1
Out[52]:
1 False
2 True
3 True
4 True
5 True
6 True
7 False
8 True
9 True
Name: x, dtype: bool
</code></pre>
| 0 | 2016-07-30T10:34:55Z | [
"python",
"pandas",
"panel"
] |
Pandas drop the first few rows contain nan in each group | 38,668,737 | <p>I have panel data, and I would like to drop the first (few) row(s) which contain NaN in each group. (Or some general method which could drop based on the index within the group and other conditions.)</p>
<pre><code>df = pd.DataFrame(
{'ID': [10001, 10001, 10001, 10002, 10002, 10002, 10003, 10003, 10003, 10003],
'PRICE': [None, 11.5, 14.31, 15.125, 14.44, None, None, None, None, 23.55],
'date': [19920103, 19920106, 19920107, 19920108, 19920109, 19920110,
19920113, 19920114, 19920115, 19920116]},
index = range(1,11))
</code></pre>
<p>The data would look like:</p>
<pre><code> ID PRICE date
1 10001 NaN 19920103
2 10001 11.500 19920106
3 10001 14.310 19920107
4 10002 15.125 19920108
5 10002 14.440 19920109
6 10002 NaN 19920110
7 10003 NaN 19920113
8 10003 NaN 19920114
9 10003 NaN 19920115
10 10003 23.550 19920116
</code></pre>
<p>I would like to drop lines 1 and 7, but not line 9, since line 9 is not one of the first few missing observations. I tried</p>
<pre><code>def mask_first_missing(x):
    result = x.notnull() & x.rank()==1
    return result
mask = df.groupby(['ID'])['PRICE'].transform(mask_first_missing).astype(bool)
print(df[mask])
</code></pre>
<p>But it removed rows 1, 7 and 9; apparently row 9 is not the first observation in group 3.</p>
<p>If I do this</p>
<pre><code>df[df.groupby('ID', as_index=False)['PRICE'].nth(0).notnull()]
</code></pre>
<p>Then the index created by groupby object is not aligned with the original dataframe</p>
<p>Could anybody help me with this? Thank you</p>
| 2 | 2016-07-29T23:16:07Z | 38,685,680 | <p>Thank you for your help, but I think neither of the answers fits my task. </p>
<p>I figured out a solution myself, by creating a subindex column.</p>
<pre><code>df = pd.DataFrame(
{'ID': [10001, 10001, 10001, 10001, 10002, 10002, 10002, 10003, 10003, 10003, 10003],
'PRICE': [None, 11.5, None, 14.31, 15.125, 14.44, None, None, None, None, 23.55],
'date': [19920103, 19920106, 19920107, 19920108, 19920109, 19920110,
19920113, 19920114, 19920115, 19920116, 19920122]},
index = range(1,12))
df.loc[:, 'subindex'] = df.groupby('ID').cumcount()
</code></pre>
<p>Then one will obtain</p>
<pre><code> ID PRICE date subindex
1 10001 NaN 19920103 0
2 10001 11.500 19920106 1
3 10001 NaN 19920107 2
4 10001 14.310 19920108 3
5 10002 15.125 19920109 0
6 10002 14.440 19920110 1
7 10002 NaN 19920113 2
8 10003 NaN 19920114 0
9 10003 NaN 19920115 1
10 10003 NaN 19920116 2
11 10003 23.550 19920122 3
</code></pre>
<p>Instead of doing everything based on groupby, now I can select the nth observation of each group based on the column 'subindex'. </p>
<p>Now if I want to drop the first two NaN observation of 'PRICE' of each group, I can create a mask </p>
<pre><code>mask_first_few_nan = (df.loc[:, 'PRICE'].isnull()) & (df.loc[:, 'subindex'] <= 1)
df[~mask_first_few_nan]
</code></pre>
<p>The result is</p>
<pre><code> ID PRICE date subindex
2 10001 11.500 19920106 1
3 10001 NaN 19920107 2
4 10001 14.310 19920108 3
5 10002 15.125 19920109 0
6 10002 14.440 19920110 1
7 10002 NaN 19920113 2
10 10003 NaN 19920116 2
11 10003 23.550 19920122 3
</code></pre>
| 0 | 2016-07-31T15:47:33Z | [
"python",
"pandas",
"panel"
] |
Nested "ifs" on pandas df columns | 38,668,788 | <p>I have a pandas df called data.</p>
<p>I want to do something like:</p>
<pre><code>for i in range(data["col1"].count()):
    if data["col1"][i] > 25:
        count1 += 1
        if data["col2"][i] > 35:
            count2 += 1
</code></pre>
<p>and possibly with more columns so that I can keep track of when several conditions are met together. This works, but it is slow. What is a better way?</p>
| 3 | 2016-07-29T23:22:38Z | 38,668,851 | <p>This is a better way to go:</p>
<pre><code>cond1 = data.col1 > 25
cond2 = data.col2 > 35
count1 = cond1.sum()
count2 = (cond1 & cond2).sum()
</code></pre>
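<p>For illustration, here is a self-contained run of this vectorized approach on a small made-up DataFrame (the sample values below are my own assumption, not from the question):</p>

```python
import pandas as pd

# Hypothetical sample data, not from the question
data = pd.DataFrame({"col1": [10, 30, 40, 20],
                     "col2": [50, 30, 40, 10]})

cond1 = data.col1 > 25            # boolean Series, one entry per row
cond2 = data.col2 > 35
count1 = cond1.sum()              # rows where col1 > 25
count2 = (cond1 & cond2).sum()    # rows where both conditions hold

print(count1, count2)  # 2 1
```

<p>The sums work because boolean values count as 1 and 0, so no Python-level loop over the rows is needed.</p>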
| 4 | 2016-07-29T23:30:24Z | [
"python",
"pandas"
] |
Nested "ifs" on pandas df columns | 38,668,788 | <p>I have a pandas df called data.</p>
<p>I want to do something like:</p>
<pre><code>for i in range(data["col1"].count()):
    if data["col1"][i] > 25:
        count1 += 1
        if data["col2"][i] > 35:
            count2 += 1
</code></pre>
<p>and possibly with more columns so that I can keep track of when several conditions are met together. This works, but it is slow. What is a better way?</p>
| 3 | 2016-07-29T23:22:38Z | 38,668,953 | <pre><code>count1 = df[df["col1"] > 25].count().values
count2 = df[(df["col1"]> 25) & (df["col2"]>35)].count().values
print count1
print count2
</code></pre>
| 1 | 2016-07-29T23:44:47Z | [
"python",
"pandas"
] |
PEG Online Judge Coding | 38,668,807 | <p>I am solving a problem on the PEG Online Judge, which is a site where you can solve lots of problems for practice and fun.</p>
<p>I am having trouble with one in particular. I have posted there for help but am not receiving any.</p>
<p>It is the Caporegime problem: <a href="http://wcipeg.com/problem/capos" rel="nofollow">http://wcipeg.com/problem/capos</a></p>
<p>You can use a number of languages to solve this. I decided on Python (although I have coded it in C++ too). There are 12 datasets the judge uses in testing the code. My code passes 11/12. I have no idea why I can't pass the last test and am hoping someone can help me.</p>
<p>I think it's a set partitioning problem of some kind and I solve it with a breadth first search approach. The problem datasets are not big, so it doesn't get out of hand.</p>
<p>Here is my solution:</p>
<pre><code>import sys
import copy
class SearchState():
    def __init__(self, label, crews):
        self.label = label
        self.crews = crews

    def __repr__(self):
        return "State: %s: %s" % (self.label, str(self.crews))

def crewsSoldierCanBeIn(s, crews, grudges):
    '''
    For a given soldier and a list of crews and grudges,
    return the crews the soldier can go in
    '''
    noGrudgeCrews = []
    for i, crew in enumerate(crews):
        conflict = False
        for c in crew:
            if [s, c] in grudges or [c, s] in grudges:
                conflict = True
                break
        if not conflict:
            noGrudgeCrews.append(i)
    return noGrudgeCrews

def solve(numSoldiers, grudges):
    '''
    Put each soldier in a crew, output min no. of crews and who is in them
    '''
    crews = [[1]]
    numStates = 0
    states = [SearchState(numStates, crews)]
    for s in range(2, numSoldiers+1):
        newStates = []
        for state in states:
            possibleCrews = crewsSoldierCanBeIn(s, state.crews, grudges)
            if len(possibleCrews) > 0:
                for crew in possibleCrews:
                    numStates += 1
                    newCrews = copy.deepcopy(state.crews)
                    newCrews[crew].append(s)
                    newStates.append(SearchState(numStates, newCrews))
            else:
                numStates += 1
                newCrews = copy.deepcopy(state.crews)
                newCrews.append([s])
                newStates.append(SearchState(numStates, newCrews))
        states = copy.deepcopy(newStates)
    minNumCrews = 1000000
    minState = -1
    for i, state in enumerate(states):
        if len(state.crews) < minNumCrews:
            minNumCrews = len(state.crews)
            minState = i
    print(len(states[minState].crews))
    for crew in states[minState].crews:
        for s in crew:
            print("%d " % (s), end = "")
        print()

def readInData(f):
    numSoldiers, numGrudges = map(int, f.readline().strip().split())
    grudges = []
    for _ in range(numGrudges):
        grudges.append(list(map(int, f.readline().strip().split())))
    return numSoldiers, grudges

def main():
    # Read in the data
    f = sys.stdin
    numSoldiers, grudges = readInData(f)
    solve(numSoldiers, grudges)

if __name__ == '__main__':
    main()
</code></pre>
| -2 | 2016-07-29T23:24:18Z | 38,718,477 | <p>Ok, so I've finally solved this.</p>
<p>Basically I needed to use a DFS approach; it can't really be solved (within the online Judge's memory and time constraints) via BFS.</p>
<p>The advantage of DFS is twofold: 1) I can reach a solution (not the best solution) fairly quickly and use this to prune the tree, to get rid of heaps of partial solutions that will never be any good and 2) Very little memory is needed.</p>
<p>So DFS is faster and uses less memory, for this problem.</p>
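<p>To make the idea concrete, here is a minimal sketch of DFS with pruning on this problem. This is my own illustration, not the code actually submitted to the judge; the function name <code>min_crews</code> and the exact pruning rule are assumptions:</p>

```python
def min_crews(num_soldiers, grudges):
    """Hypothetical sketch of the DFS-with-pruning idea described above."""
    # Store grudges as a symmetric set for O(1) conflict checks.
    conflict = {(a, b) for a, b in grudges} | {(b, a) for a, b in grudges}
    best = [num_soldiers]  # one crew per soldier is always a valid upper bound

    def dfs(soldier, crews):
        if len(crews) >= best[0]:
            return  # prune: this branch cannot beat the best solution so far
        if soldier > num_soldiers:
            best[0] = len(crews)  # a complete, strictly better assignment
            return
        for crew in crews:  # try putting the soldier into each existing crew
            if all((soldier, member) not in conflict for member in crew):
                crew.append(soldier)
                dfs(soldier + 1, crews)
                crew.pop()
        crews.append([soldier])  # or open a brand-new crew for this soldier
        dfs(soldier + 1, crews)
        crews.pop()

    dfs(1, [])
    return best[0]

print(min_crews(3, [(1, 2), (2, 3)]))  # 2
```

<p>The first complete assignment found gives an upper bound, and every branch that already uses at least that many crews is abandoned immediately, which is what keeps both the running time and the memory footprint small compared to BFS.</p>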
| 0 | 2016-08-02T10:54:40Z | [
"python",
"algorithm",
"data-structures",
"breadth-first-search"
] |
constrain a series or array to a range of values | 38,668,814 | <p>I have a series of values that I want to have constrained to be within +1 and -1.</p>
<pre><code>s = pd.Series(np.random.randn(10000))
</code></pre>
<p>I know I can use <code>apply</code>, but is there a simple vectorized approach?</p>
<pre><code>s_ = s.apply(lambda x: min(max(x, -1), 1))
s_.head()
0 -0.256117
1 0.879797
2 1.000000
3 -0.711397
4 -0.400339
dtype: float64
</code></pre>
| 3 | 2016-07-29T23:25:23Z | 38,668,821 | <p>Use nested <code>np.where</code></p>
<pre><code>pd.Series(np.where(s < -1, -1, np.where(s > 1, 1, s)))
</code></pre>
<hr>
<h3>Timing</h3>
<p><a href="http://i.stack.imgur.com/Auz4d.png" rel="nofollow"><img src="http://i.stack.imgur.com/Auz4d.png" alt="enter image description here"></a></p>
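<p>As a quick sanity check (my own example, not part of the original answer), the nested <code>np.where</code> call caps values on both sides while leaving in-range values untouched:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([-2.0, -0.5, 0.0, 0.5, 2.0])  # made-up sample values
clipped = pd.Series(np.where(s < -1, -1, np.where(s > 1, 1, s)))
print(clipped.tolist())  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```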
| 2 | 2016-07-29T23:26:04Z | [
"python",
"numpy",
"pandas"
] |
constrain a series or array to a range of values | 38,668,814 | <p>I have a series of values that I want to have constrained to be within +1 and -1.</p>
<pre><code>s = pd.Series(np.random.randn(10000))
</code></pre>
<p>I know I can use <code>apply</code>, but is there a simple vectorized approach?</p>
<pre><code>s_ = s.apply(lambda x: min(max(x, -1), 1))
s_.head()
0 -0.256117
1 0.879797
2 1.000000
3 -0.711397
4 -0.400339
dtype: float64
</code></pre>
| 3 | 2016-07-29T23:25:23Z | 38,668,858 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.Series.clip.html" rel="nofollow"><code>clip</code></a>:</p>
<pre><code>s = s.clip(-1,1)
</code></pre>
<p>Example Input:</p>
<pre><code>s = pd.Series([-1.2, -0.5, 1, 1.1])
0 -1.2
1 -0.5
2 1.0
3 1.1
</code></pre>
<p>Example Output:</p>
<pre><code>0 -1.0
1 -0.5
2 1.0
3 1.0
</code></pre>
| 4 | 2016-07-29T23:31:47Z | [
"python",
"numpy",
"pandas"
] |
constrain a series or array to a range of values | 38,668,814 | <p>I have a series of values that I want to have constrained to be within +1 and -1.</p>
<pre><code>s = pd.Series(np.random.randn(10000))
</code></pre>
<p>I know I can use <code>apply</code>, but is there a simple vectorized approach?</p>
<pre><code>s_ = s.apply(lambda x: min(max(x, -1), 1))
s_.head()
0 -0.256117
1 0.879797
2 1.000000
3 -0.711397
4 -0.400339
dtype: float64
</code></pre>
| 3 | 2016-07-29T23:25:23Z | 38,668,861 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow"><code>between</code></a> Series method:</p>
<pre><code>In [11]: s[s.between(-1, 1)]
Out[11]:
0 -0.256117
1 0.879797
3 -0.711397
4 -0.400339
5 0.667196
...
</code></pre>
<p>Note: This <em>discards</em> the values outside of the between range.</p>
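<p>To make that caveat concrete (my own example), filtering with <code>between</code> shrinks the Series, whereas clipping would keep every row:</p>

```python
import pandas as pd

s = pd.Series([-2.0, -0.5, 2.0])  # made-up sample values
kept = s[s.between(-1, 1)]
print(kept.tolist())       # [-0.5]  (out-of-range rows are dropped entirely)
print(len(kept), len(s))   # 1 3
```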
| 1 | 2016-07-29T23:32:07Z | [
"python",
"numpy",
"pandas"
] |
constrain a series or array to a range of values | 38,668,814 | <p>I have a series of values that I want to have constrained to be within +1 and -1.</p>
<pre><code>s = pd.Series(np.random.randn(10000))
</code></pre>
<p>I know I can use <code>apply</code>, but is there a simple vectorized approach?</p>
<pre><code>s_ = s.apply(lambda x: min(max(x, -1), 1))
s_.head()
0 -0.256117
1 0.879797
2 1.000000
3 -0.711397
4 -0.400339
dtype: float64
</code></pre>
| 3 | 2016-07-29T23:25:23Z | 38,669,142 | <p>One more suggestion:</p>
<pre><code>s[s<-1] = -1
s[s>1] = 1
</code></pre>
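<p>A small self-contained check of this in-place approach (my own example); note that it mutates <code>s</code> rather than returning a new Series:</p>

```python
import pandas as pd

s = pd.Series([-2.0, 0.5, 3.0])  # made-up sample values
s[s < -1] = -1   # cap the low side in place
s[s > 1] = 1     # cap the high side in place
print(s.tolist())  # [-1.0, 0.5, 1.0]
```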
| 0 | 2016-07-30T00:15:01Z | [
"python",
"numpy",
"pandas"
] |
Queue implementation in python gives error on size | 38,669,099 | <p>I'm new to Python, thus the question. This is my implementation of a Queue</p>
<pre><code>class Queue:
    def __init__(self):
        self.top = None
        self.marker = None
        self.size = 0

    def push(self, item):
        self.size += 1
        curr = Node(item)
        if self.top is None:
            self.top = curr
            self.marker = curr
        else:
            self.marker.next = curr
            self.marker = curr

    def pop(self):
        if self.top is None:
            raise Exception("Popping an empty queue")
        curr = self.top
        self.size -= 1
        if self.top is self.marker:
            self.top = None
            self.marker = None
        else:
            self.top = self.top.next
        return curr

    def peek(self):
        return self.top.value

    def size(self):
        return self.size

    def isempty(self):
        return self.size == 0
</code></pre>
<p>The Node class is defined as follows,</p>
<pre><code>class Node:
    def __init__(self, value=None, next=None):
        self.value = value
        self.next = next
</code></pre>
<p>This implementation works fine for most of the methods except when I call size.
This call,</p>
<pre><code> print(queue.size())
</code></pre>
<p>Results in the following exception,</p>
<pre><code>print(queue.size())
TypeError: 'int' object is not callable
</code></pre>
<p>Can't seem to understand what the issue is here.</p>
| 0 | 2016-07-30T00:07:00Z | 38,669,127 | <p>You gave an attribute and a method the same name, <strong>size</strong>. In Python, methods are class attributes, and an instance attribute takes precedence over a class attribute of the same name, so the <strong>self.size = 0</strong> assignment in <strong>__init__</strong> shadows the method. Thus, <strong>queue.size</strong> resolves to an integer, not a function, and calling it raises "'int' object is not callable".</p>
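<p>One possible fix (a sketch of my own, not the only option) is to rename the attribute so that it no longer shadows the method:</p>

```python
class Queue:
    def __init__(self):
        self.top = None
        self.marker = None
        self._size = 0   # renamed: no longer clashes with the size() method

    def size(self):
        return self._size

    def isempty(self):
        return self._size == 0

q = Queue()
print(q.size(), q.isempty())  # 0 True
```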
| 1 | 2016-07-30T00:12:46Z | [
"python",
"queue"
] |
Can set accept-language header but not Connection header? PhantomJS (Selenium Webdriver with Python) | 38,669,108 | <p>I'm able to set the Accept-Language header , but somehow I'm unable to set the Connection header to "keep-alive": </p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
webdriver.DesiredCapabilities.PHANTOMJS['phantomjs.page.customHeaders.Accept-Language'] = 'ru-RU'
webdriver.DesiredCapabilities.PHANTOMJS['phantomjs.page.customHeaders.Connection'] = "keep-alive"
driver = webdriver.PhantomJS("/home/user/bin/phantomjs",service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
driver.set_window_size(1120, 550)
driver.get("http://www.httpbin.org/headers")
print(driver.page_source)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code><html><head></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{
"headers": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Encoding": "gzip, deflate",
"Accept-Language": "ru-RU",
"Host": "www.httpbin.org",
"User-Agent": "Mozilla/5.0 (Unknown; Linux x86_64) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1"
}
}
</pre></body></html>
</code></pre>
<p>I thought maybe, for whatever reason, the header itself or the fields were case sensitive, so I looked up examples of those headers and used them exactly as is, but no dice. How do I set the Connection header or Keep-alive header?</p>
| 5 | 2016-07-30T00:08:38Z | 39,131,180 | <p>It looks like PhantomJS sends <strong>Connection: Keep-Alive</strong> by default; the site you are using to view headers simply does not show the Connection header, even when not using PhantomJS. If you look at your request using Fiddler, you can see that it has the Connection: Keep-Alive header:</p>
<pre><code>GET /headers HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1
<strong>Connection: Keep-Alive</strong>
Accept-Encoding: gzip, deflate
Accept-Language: en-US,*
Host: www.httpbin.org
</code></pre>
| 2 | 2016-08-24T19:02:41Z | [
"python",
"selenium",
"http-headers",
"phantomjs"
] |
Python Subprocess Failure After A Certain Quantity of Data | 38,669,125 | <p>I am using Python and Java together for some scientific computing. Python sends permutations to the Java code to be processed. I use the subprocess object with piped data in and out. This system works great up until about 75k permutations. At that point it crashes. The strangest thing is how consistently it occurs at around 75k permutations regardless of changing other variables. </p>
<p>The Python code sends 50 permutations at a time, but changing this number doesn't affect when it crashes.</p>
<p>Having the python code run the relevant function on smaller parts of the data (40k, then 40k, etc.) doesn't affect when it crashes. </p>
<p>Reducing the number of simultaneous threads from 4 to 1 doesn't affect when it crashes.</p>
<p>Yet, it doesn't crash at a specific permutation, just around 75k (could be at 70k, could be 81k, etc.)</p>
<p>I'm completely mystified.
Here's the error thrown:</p>
<pre><code>[1:84150] //Thread 1, permutation #84150
Number of threads active 2
("Failure","Failure")
Traceback (most recent call last):
File "D:\TMD Projects\BE9\BE9_RestrictHB_2A_Fine\TMD_7.28.16.py", line 836, in run
returnedDataTuple = p.communicate(sentData.encode())
File "C:\Python34\lib\subprocess.py", line 959, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "C:\Python34\lib\subprocess.py", line 1190, in _communicate
self.stderr_thread.start()
File "C:\Python34\lib\threading.py", line 851, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
</code></pre>
<p>Here's the relevant code, nothing too complex:</p>
<pre><code>p = subprocess.Popen(["java","-jar","WatGenDabBatchNoTP.jar","L","6","0",str(BATCH_SIZE)], stdin=subprocess.PIPE , stdout=subprocess.PIPE, stderr=subprocess.PIPE)
returnedData = "Failure"
returnedError = "Failure"
returnedTuple = ("Failure","Failure")
try:
    returnedTuple = p.communicate(sentData.encode())
    returnedData = returnedTuple[0].decode('utf-8')
    returnedError = returnedTuple[1].decode('utf-8')
except:
    PRINT("["+str(me.thread)+":"+str(number)+"]")
    PRINT("Number of threads active "+str(threading.activeCount()))
    PRINT(str(backTuple))
    PRINT(traceback.format_exc())
finally:
    #p.stdout.flush()  # Flushing buffers throws an error
    #p.stdin.flush()
    #p.stderr.flush()
    p.terminate()  # Terminating the process doesn't help
    p.kill()
</code></pre>
<p>The above code is part of a loop. It sends BATCH_SIZE permutations in each run of the loop and crashes when it gets to 75-85k.
It's run on Windows 7 and Python 3.4.2</p>
| 2 | 2016-07-30T00:12:25Z | 38,670,281 | <p>Some things to try:</p>
<ol>
<li>Change BATCH_SIZE to something very small. Does the program still crash? Does the program crash after the same number of iterations through the loop as before, or does it crash after the same number of records have been processed?</li>
</ol>
<p>I suspect you are leaking a thread on every iteration. If that is the case, the program should run out of threads after the same number of iterations through the loop even if BATCH_SIZE is a small number.</p>
<p>On the other hand, the problem may be related to the total number of records processed by the loop. Varying BATCH_SIZE will help to determine
if this is the case.</p>
<ol start="2">
<li><p>Try adding <code>p.wait()</code> in the <code>finally</code> block. I would first try <code>p.wait()</code> alone (without calling <code>p.kill()</code> or <code>p.terminate()</code>).</p></li>
<li><p>Instead of calling your Java program, have it call a simple program which just prints out some dummy data. If the problem persists it would
eliminate the Java program as being part of the problem.</p></li>
<li><p>Simplify the program as much as possible. Use only one processing thread. Remove the use of threading / subprocess from any other part of your program. Instead of generating the permutations, pre-calculate each batch and save it in a file. Then use
a simple for-loop to feed them to your Java program. This will help
determine if the way you are calling <code>subprocess.Popen</code> is the culprit
or not.</p></li>
</ol>
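<p>Building on suggestion 2, one way to guarantee that the pipes are closed and the child is reaped on every iteration is to use <code>Popen</code> as a context manager. This is a sketch of my own, with a trivial stand-in for the Java program:</p>

```python
import subprocess
import sys

def run_batch(data: bytes) -> bytes:
    # The child here is a stand-in for the Java program: it just
    # upper-cases whatever it reads on stdin.
    child = [sys.executable, "-c",
             "import sys; sys.stdout.write(sys.stdin.read().upper())"]
    # The context manager waits for the process and closes all three pipes
    # even if communicate() raises, so no file handles or threads leak
    # across loop iterations.
    with subprocess.Popen(child, stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE) as p:
        out, _err = p.communicate(data)
    return out

print(run_batch(b"hello"))  # b'HELLO'
```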
| 2 | 2016-07-30T04:15:13Z | [
"python",
"multithreading",
"pipe",
"subprocess"
] |
Authorizing a python script to access the GData API without the OAuth2 user flow | 38,669,200 | <p>I'm writing a small python script that will retrieve a list of my Google Contacts (using the <a href="https://developers.google.com/google-apps/contacts/v3/" rel="nofollow">Google Contacts API</a>) and will randomly suggest one person for me to contact (good way to automate keeping in touch with friends!)</p>
<p>This is just a standalone script that I plan to schedule on a cron job. The problem is that Google seems to require OAuth2 style authentication, where the user (me) has to approve the access and then the app receives an authorization token I can then use to query the user's (my) contacts. </p>
<p>Since I'm only accessing my own data, is there a way to "pre-authorize" myself? Ideally I'd love to be able to retrieve some authorization token and then I'd run the script and pass that token as an environment variable</p>
<pre><code>AUTH_TOKEN=12345 python my_script.py
</code></pre>
<p>That way it doesn't require user input/interaction to authorize it one time. </p>
| 0 | 2016-07-30T00:24:30Z | 38,669,860 | <p>The implementation you're describing invokes the full "three-legged" OAuth handshake, which requires explicit user consent. If you don't need user consent, you can instead utilize "two-legged" OAuth via a <a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances" rel="nofollow">Google service account</a>, which is tied to an <em>application</em>, rather than a <em>user</em>. Once you've <a href="https://console.developers.google.com/permissions/serviceaccounts" rel="nofollow">granted permission</a> to your service account to access your contacts, you can use the <a href="https://github.com/google/oauth2client" rel="nofollow"><code>oauth2client</code></a> <a href="https://github.com/google/oauth2client/blob/bb2386ea51b330765b7c44461465bdceb0be09b4/oauth2client/service_account.py#L43-L542" rel="nofollow"><code>ServiceAccountCredentials</code> class</a> to directly access GData without requiring user consent.</p>
<p>Here's the two-legged authentication example from the <a href="https://developers.google.com/api-client-library/python/auth/service-accounts" rel="nofollow">Google service account documentation</a>: </p>
<pre><code>import json
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
scopes = ['https://www.googleapis.com/auth/sqlservice.admin']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
'service-account.json', scopes)
sqladmin = build('sqladmin', 'v1beta3', credentials=credentials)
response = sqladmin.instances().list(project='examinable-example-123').execute()
print response
</code></pre>
| 1 | 2016-07-30T02:38:29Z | [
"python",
"gdata",
"oauth2"
] |