Tree structure for expectiminimax algorithm
38,621,382
<p>So I don't have much formal computer science education, so I apologize in advance if this is a stupid question.</p> <p>I'm currently writing a dice poker game in Python. The rules are <em>not</em> like <a href="http://witcher.wikia.com/wiki/Dice_poker_in_The_Witcher" rel="nofollow">the game found in The Witcher 2</a> but instead based upon an old mobile dice poker game that was taken off of the App Store a while ago.</p> <p>The rules are as follows:</p> <ul> <li>The player and the AI initially roll 5 <a href="https://en.wikipedia.org/wiki/Poker_dice" rel="nofollow">poker dice</a> each.</li> <li>The player and the AI select which dice to hold onto and roll the rest, revealing the results to one another.</li> <li>The above step is repeated.</li> <li><p>Whoever has a hand higher on the ranking (see below) wins, with the absolute high card serving as tie-breaker.</p> <ol> <li>Five of a Kind</li> <li>Four of a Kind</li> <li>Straight</li> <li>Full House</li> <li>Three of a Kind</li> <li>Two Pair</li> <li>One Pair</li> <li>High Card</li> </ol></li> </ul> <p>The relevant code is below:</p> <pre><code>class Tree(object):
    '''Generic tree node'''
    def __init__(self, children=None, value=None, type=None):
        self.children = []
        self.value = value
        self.type = type  # either 'player', 'ai', or 'chance'
        if children is not None:
            for child in children:
                self.add_child(child)

    def add_child(self, node):
        assert isinstance(node, Tree)
        self.children.append(node)

    def is_terminal(self):
        return len(self.children) == 0


def expectiminimax(node):
    '''Adapted from Wikipedia's pseudocode'''
    MAX_INT = 1e20
    if node.is_terminal():
        return node.value
    if node.type == 'player':
        q = MAX_INT
        for child in node.children:
            q = min(q, expectiminimax(child))
    elif node.type == 'ai':
        q = -MAX_INT
        for child in node.children:
            q = max(q, expectiminimax(child))
    elif node.type == 'chance':
        q = 0
        for child in node.children:
            # All children are equally probable
            q += len(node.children)**-1 * expectiminimax(child)
    return q


def ai_choose(ai_hand, player_hand):
    '''
    Given an AI hand and a player hand, choose which
    cards to hold onto for best outcome.
    '''
    def construct_tree(ai_hand, player_hand):
        '''
        Construct a 5-layer (?) tree for use with expectiminimax.

                    Δ                  MAX
                   / \
              O   ...   O              CHANCE - Possible AI moves
             / \       / \
            ∇ .. ∇   ∇ .. ∇            MIN    - Possible card dice rolls
           / \  ........
          O ... O ...........          CHANCE - Possible player moves
         / \   / \
        ▢ .. ▢ ▢ .. ▢ .............    END    - Possible card dice rolls
        '''
        tree_structure = ['ai', 'chance', 'player', 'chance', 'ai']
        tree = Tree(type=tree_structure[0])
        for subset in powerset(ai_hand.hand):
            tree.add_child(Tree(value=subset))
        # ...
</code></pre> <p><strong>What I want to ask is this:</strong> is this layer structure correct? Or should the min, max, and chance layers be rearranged? Other general comments are also welcome.</p>
1
2016-07-27T19:15:24Z
38,621,566
<p>As far as I can see, the layering is correct. I did something similar some time ago, and I think you can implement it without the tree data structure; that should be doable and is likely cleaner, since you then don't need a chance type.</p>
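A minimal sketch of what "without the tree data structure" could look like. The game logic here is a hypothetical stand-in, not the asker's dice rules: a node is just a `(node_type, value)` pair, and children are produced on demand by a `successors()` callback instead of being stored in `Tree` objects.

```python
def expectiminimax(node_type, value, successors):
    """node_type is 'player' (minimizer), 'ai' (maximizer) or 'chance'."""
    children = successors(node_type, value)
    if not children:                           # terminal position
        return value
    scores = [expectiminimax(t, v, successors) for t, v in children]
    if node_type == 'player':
        return min(scores)
    if node_type == 'ai':
        return max(scores)
    return sum(scores) / len(scores)           # chance: equally likely children

# Toy successor function: the mover picks one of two chance nodes, each of
# which resolves to two equally likely terminal payoffs.
def toy_successors(node_type, value):
    if node_type in ('ai', 'player'):
        return [('chance', 1), ('chance', 5)]
    if node_type == 'chance':
        return [('terminal', value), ('terminal', value + 2)]
    return []                                  # 'terminal' has no children

print(expectiminimax('ai', 0, toy_successors))  # 6.0: max(avg(1,3), avg(5,7))
```

The same function handles all three layer types, so the layer ordering question becomes a question of which `successors()` you call at each depth.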
0
2016-07-27T19:26:05Z
[ "python", "tree", "artificial-intelligence", "minimax" ]
"Literally" convert a String into a bytearray
38,621,499
<p>I have a small server sending raw data with a socket in Python which only accepts bytearrays as input. In this bytearray I have to include MAC addresses. These MAC addresses come from a JSON file, imported as a string, e.g. "00 11 22 33 44 55" (actually without the spaces).</p> <p>What I am searching for is an easy way of encoding this string into a bytearray, so the first byte should be 00, the second 11, and so on.</p> <p>All "solutions" I have found will encode any string into a byte array, but this isn't what I want, because they split my MAC address up further: they encode, for example, 0, then 0, then 1, then 1, and so on, so my 6-byte MAC address becomes a 12-byte encoded byte array.</p> <p>Is there any built-in function I can use, or do I have to create my own function to do this?</p> <hr> <p><strong>SOLUTION:</strong> Thanks to you all, and to Arnial for providing the easiest answer. The thing is, I had more or less tried all of these answers with no effect before, <strong>BUT</strong> my problem was not the return type of these methods (which my socket always refused to send); it was actually the length of the message I tried to send. The socket refuses to send messages shorter than 12 bytes (source/destination MAC addresses), but I had only ever tried a short message with this example MAC address converted by one of the methods presented here.</p> <p>So thank you all for your help!</p>
1
2016-07-27T19:21:57Z
38,621,747
<p>Just split the string up into chunks of 2 characters, and interpret the hex value.</p> <pre><code>def str2bytes(string):
    return tuple(int(string[i:i+2], 16) for i in range(0, len(string), 2))

print(str2bytes("001122334455"))
# (0, 17, 34, 51, 68, 85)
</code></pre> <p>If you are looking to have a string version of the above then:</p> <pre><code>def str2bytes(string):
    return "".join(chr(int(string[i:i+2], 16)) for i in range(0, len(string), 2))

print(str2bytes("001122334455"))
# Encoded string '\x00\x11"3DU', same as '\x00\x11\x22\x33\x44\x55'
</code></pre>
1
2016-07-27T19:36:56Z
[ "python", "python-3.x" ]
"Literally" convert a String into a bytearray
38,621,499
<p>I have a small server sending raw data with a socket in python which only accepts bytearrays as input. In this bytearray I have to include mac-addresses. These mac-addresses come from a json-file, imported as a string. e.g "00 11 22 33 44 55" (actually without the spaces)</p> <p>what i am searching for is an easy way of encoding this string into a bytearray. so the first byte should be 00, second 11 and so on.</p> <p>all "solutions" i have found will encode any string into a byte-array, but this isn't what i want, because it will split up my mac-address further because they will encode for example 0, then 0, then 1, then 1 and so on so my 6-byte mac-address becomes a 12 byte encoded byte-array.</p> <p>Is there any built-in function I can use or do I have to create my own function to do this?</p> <hr> <p><strong>SOLUTION:</strong> Thx to you all and Arnial for providing the most easy answer. The thing is, all these answers i have more or less tried out with no effect before <strong>BUT</strong> My problem was not the type of the return-type of these methods (which my socket always refused to send), it was actually the length of the message i tried to send. The socket refuses to send messages shorter then 12 bytes (source/destination mac-addresses), but i only ever tried a short message with this example mac-address converted with one of the here presented methods.</p> <p>So thank you all for your help!</p>
1
2016-07-27T19:21:57Z
38,621,816
<p>Your conversion isn't as literal as you think.</p> <p>The string "001122334455" is 12 characters long, which is why it converts to a 12-byte array.</p> <p>Your MAC looks like a hex-encoded byte string, so you can probably use this:</p> <pre><code>bytes.fromhex("001122334455")
</code></pre> <p>It will create a byte sequence that starts with a zero byte, then 0x11 (17), then 0x22 (34), and so on.</p>
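Since the asker specifically wants a bytearray, it's worth noting that `bytearray` has the same `fromhex` classmethod, and that it also tolerates the space-separated form shown in the question:

```python
mac = bytearray.fromhex("001122334455")
print(len(mac))       # 6
print(list(mac))      # [0, 17, 34, 51, 68, 85]

# Spaces between byte pairs are skipped, so the JSON form with spaces
# would work too:
assert mac == bytearray.fromhex("00 11 22 33 44 55")
```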
2
2016-07-27T19:41:02Z
38,621,826
<p>This is actually built-in! It's <a href="https://docs.python.org/3/library/binascii.html#binascii.unhexlify" rel="nofollow"><code>binascii.unhexlify</code></a>.</p> <pre><code>import binascii binascii.unhexlify('001122334455') </code></pre>
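A quick round-trip check of this approach (wrapping in `bytearray` at the end, since that is what the asker's socket code expects):

```python
import binascii

raw = binascii.unhexlify('001122334455')
print(list(raw))                      # [0, 17, 34, 51, 68, 85] in Python 3

# hexlify is the exact inverse, handy for debugging what you're sending:
assert binascii.hexlify(raw) == b'001122334455'

mac = bytearray(raw)                  # wrap if a mutable bytearray is required
assert len(mac) == 6
```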
0
2016-07-27T19:41:32Z
38,621,830
<p><a href="https://docs.python.org/3/library/binascii.html#binascii.a2b_hex" rel="nofollow">https://docs.python.org/3/library/binascii.html#binascii.a2b_hex</a></p> <pre><code>import binascii

def str2bytes(string):
    return binascii.a2b_hex(string)

print(str2bytes("001122334455"))
</code></pre>
0
2016-07-27T19:41:47Z
[ "python", "python-3.x" ]
python take input when number of input is not specified
38,621,513
<p>I am new to Python and trying to solve a problem on SPOJ.</p> <p>In this question the number of inputs is not specified (a maximum of 10 is given, but it could be anything between 1 and 10), hence my program gives an NZEC error.</p> <p>I tried this:</p> <pre><code>t = 10
while(t &gt; 0):
    t = t - 1
    n = raw_input()
    if(len(n) == 0):
        break
</code></pre> <p>but it does not work.</p> <p>In C we can use EOF to determine this.</p> <p>Please help.</p>
-1
2016-07-27T19:22:42Z
38,621,838
<p>Solved. Use:</p> <pre><code>try:
    while True:
        n = int(raw_input())
        # do something
except EOFError:  # raw_input() raises EOFError when the input runs out
    pass
</code></pre>
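Equivalently, and easier to test, you can treat stdin as the iterable it is: Python's file iteration stops at EOF on its own, with no exception handling needed. A sketch with a helper name of my own choosing (not from the original post):

```python
import sys

def read_ints(stream):
    """Collect one integer per line until EOF (or a blank line)."""
    values = []
    for line in stream:          # iteration stops by itself at EOF
        line = line.strip()
        if not line:
            break
        values.append(int(line))
    return values

# On the judge you would call read_ints(sys.stdin); here, any iterable
# of lines behaves the same way:
print(read_ints(["4\n", "15\n", "8\n"]))   # [4, 15, 8]
```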
0
2016-07-27T19:42:20Z
[ "python", "eof" ]
Pandas : saving Series of dictionaries to disk
38,621,539
<p>I have a python pandas Series of dictionaries :</p> <pre><code>id dicts 1 {'5': 1, '8': 20, '1800': 2} 2 {'2': 2, '8': 1, '1000': 25, '1651': 1} ... ... ... ... ... ... 20000000 {'2': 1, '10': 20} </code></pre> <p>The (key, value) in the dictionaries represent ('feature', count). About 2000 unique features exist.</p> <p>The Series' memory usage in pandas is about 500MB. What would be the best way to write this object to disk (having ideally low disk space usage, and being fast to write and fast to read back in afterwards) ?</p> <p>Options considered (and tried for the first 2) :<br> - to_csv (but treats the dictionaries as strings, so conversion back to dictionaries afterwards is very slow)<br> - cPickle (but ran out of memory during execution)<br> - conversion to a scipy sparse matrix structure </p>
3
2016-07-27T19:24:24Z
38,625,568
<p>I'm curious as to how your <code>Series</code> only takes up 500MB. If you are using the <code>.memory_usage</code> method, this will only return the total memory used by each Python object reference, which is all your Series is storing. That doesn't account for the actual memory of the dictionaries. A rough calculation: 20,000,000 * 288 bytes = 5.76GB should be your memory usage. That 288 bytes is a conservative estimate of the memory required by each dictionary.</p> <h2>Converting to a sparse matrix</h2> <p>Anyway, try the following approach to convert your data into a sparse-matrix representation:</p> <pre><code>import numpy as np, pandas as pd
from sklearn.feature_extraction import DictVectorizer
from scipy.sparse import csr_matrix
import pickle
</code></pre> <p>I would use <code>int</code>s rather than strings as keys, as this will keep the right order later on. So, assuming your series is named <code>dict_series</code>:</p> <pre><code>dict_series = dict_series.apply(lambda d: {int(k): d[k] for k in d})
</code></pre> <p>This might be memory intensive, and you may be better off simply creating your <code>Series</code> of <code>dict</code>s using <code>int</code>s as keys from the start. Or you can simply skip this step. Now, to construct your sparse matrix:</p> <pre><code>dv = DictVectorizer(dtype=np.int32)
sparse = dv.fit_transform(dict_series)
</code></pre> <h2>Saving to disk</h2> <p>Now, essentially, your sparse matrix can be reconstructed from 3 fields: <code>sparse.data</code>, <code>sparse.indices</code>, <code>sparse.indptr</code>, and optionally <code>sparse.shape</code>. The fastest and most memory-efficient way to save and load the arrays <code>sparse.data</code>, <code>sparse.indices</code>, and <code>sparse.indptr</code> is to use the np.ndarray <code>tofile</code> method, which saves the arrays as raw bytes. From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tofile.html" rel="nofollow">documentation</a>:</p> <blockquote> <p>This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness.</p> </blockquote> <p>So this method loses any dtype information and endianness. The former issue can be dealt with simply by making note of the datatype beforehand; you'll be using np.int32 anyway. The latter issue isn't a problem if you are working locally, but if portability is important, you will need to look into alternate ways of storing the information.</p> <pre><code># to save
sparse.data.tofile('data.dat')
sparse.indices.tofile('indices.dat')
sparse.indptr.tofile('indptr.dat')

# don't forget your dict vectorizer!
with open('dv.pickle', 'wb') as f:
    pickle.dump(dv, f)  # pickle your dv to be able to recover your original data!
</code></pre> <h2>To recover everything:</h2> <pre><code>with open('dv.pickle', 'rb') as f:
    dv = pickle.load(f)

sparse = csr_matrix((np.fromfile('data.dat', dtype=np.int32),
                     np.fromfile('indices.dat', dtype=np.int32),
                     np.fromfile('indptr.dat', dtype=np.int32)))

original = pd.Series(dv.inverse_transform(sparse))
</code></pre>
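As an aside, SciPy later grew a built-in helper for exactly this: `scipy.sparse.save_npz` / `load_npz` (added in SciPy 0.19, after this answer was written) bundle `data`, `indices`, `indptr`, and `shape` into a single `.npz` file and preserve dtype, which sidesteps the `tofile` portability caveats. A small round-trip sketch:

```python
import os
import tempfile

import numpy as np
from scipy import sparse

mat = sparse.csr_matrix(np.array([[0, 1, 0], [2, 0, 3]], dtype=np.int32))

path = os.path.join(tempfile.mkdtemp(), 'mat.npz')
sparse.save_npz(path, mat)        # one file holding data/indices/indptr/shape
restored = sparse.load_npz(path)

assert restored.dtype == np.int32   # dtype survives the round trip
assert (restored != mat).nnz == 0   # contents are identical
```

You would still need to pickle the `DictVectorizer` separately to invert the transform.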
2
2016-07-28T01:07:23Z
[ "python", "pandas", "dictionary", "scipy", "sparse-matrix" ]
Compare Across Arrays by Index in Python
38,621,581
<p>I'm looking to compare values of several arrays by index.</p> <p>For example, if I have</p> <pre><code>a = [1, 1, 1, 1, 1, 1] b = [2, 2, 5, 5, 5, 2] c = [3, 3, 3, 3, 3, 3] </code></pre> <p>I'd like to be able to determine that index 2 of array b is out of range of a and c.</p> <p>Even more so, I'd like it to output the index of the third value which is out of range in a row.</p> <p>So far, I have something like: </p> <pre><code>av = [1,1,1,1,1,1,1,1,1,1] sd = [0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.1] x = 1.1 counter = 0 for index, value in enumerate(np.sum([av,sd], axis=0)): if value &gt; x: counter += 1 else: counter = 0 if counter &gt;= 3: print "misbehaving channels" print(index, value) break </code></pre> <p>which will output (4, 1.2), telling me the index after it has been >x for 3 elements in a row, and the value at that index. </p> <p>However, as you can see, this doesn't compare across arrays, just where x = 1.1</p> <p>So, going back to the original example, ideally, the output would be something like (4, 5), where the index is 4, value is 5.</p> <p>Thanks, any help would be greatly appreciated.</p>
0
2016-07-27T19:27:03Z
38,621,763
<p>What about this:</p> <pre><code>import numpy as np

a = [1, 1, 1, 1, 1, 1]
b = [2, 2, 5, 5, 5, 2]
c = [3, 3, 3, 3, 3, 3]

a = np.array(a)
b = np.array(b)
c = np.array(c)

out_of_range = (b &lt; a) | (b &gt; c)
# array([False, False,  True,  True,  True, False], dtype=bool)

third_in_a_row = [i for i in xrange(2, len(a))
                  if out_of_range[i] and out_of_range[i-1] and out_of_range[i-2]]
# [4]
</code></pre>
0
2016-07-27T19:37:49Z
38,621,817
<p>Here is a solution using <code>groupby</code> from <code>itertools</code>, which groups the in-range and out-of-range elements into consecutive runs. We can then loop through the groups and check whether a group is out of range with size greater than or equal to <code>3</code>. If it is, add 3 to the index, look up the corresponding value, and stop; otherwise keep moving forward until such a run is found. If nothing is found, the value stays none and the index stays -1:</p> <pre><code>from itertools import groupby

index = -1
value = None
for k, g in groupby(y &lt; min(x, z) or y &gt; max(x, z) for x, y, z in zip(a, b, c)):
    grp_size = len(list(g))
    if k and grp_size &gt;= 3:
        index += 3
        value = b[index]
        break
    else:
        index += grp_size

if value is not None:
    print((index, value))
# (4, 5)
</code></pre>
0
2016-07-27T19:41:03Z
38,622,058
<p>Here's a vectorized way using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow"><code>np.convolve</code></a> -</p> <pre><code># Get the index of first such occurance idx = np.where(np.convolve(b&gt;(a+c),[1]*3)&gt;=3)[0][0] # Index into b and get the tuple of index and value out = (idx, b[idx]) </code></pre> <p>Sample run -</p> <pre><code>In [265]: a = np.array([1, 1, 1, 1, 1, 1]) ...: b = np.array([2, 2, 5, 5, 5, 2]) ...: c = np.array([3, 3, 3, 3, 3, 3]) ...: In [266]: idx = np.where(np.convolve(b&gt;(a+c),[1]*3)&gt;=3)[0][0] In [267]: (idx, b[idx]) Out[267]: (4, 5) </code></pre>
1
2016-07-27T19:54:32Z
[ "python", "arrays", "numpy", "indexing" ]
Find the set of elements in a list without sorting in Python
38,621,615
<p>I have a list of elements 1,...,K with repeats. For example, for K=4:</p> <pre><code>[4 2 1 1 2 1 1 3 2]
</code></pre> <p>I want to find the order in which 1,...,K first appear in the list (without sorting). For example, for the above sequence, the result would be</p> <pre><code>[4, 2, 1, 3]
</code></pre> <p>How can I write this algorithm efficiently in Python, with low runtime?</p> <p>Thank you!</p>
-2
2016-07-27T19:28:43Z
38,621,716
<p>The normal list-deduping would probably be good enough:</p> <pre><code>def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
</code></pre> <p><a href="http://stackoverflow.com/a/480227/748858">reference</a></p> <p>However, this is by nature <code>O(N)</code>. That's the best you can do in the general case, but you <em>may</em> be able to do better from a practical standpoint for a large class of inputs.</p> <pre><code>import collections

def ordered_dedupe_with_constraints(lst, K):
    output = collections.OrderedDict()
    len_lst = len(lst)
    i = 0
    while len(output) &lt; K and i &lt; len_lst:
        output.setdefault(lst[i], None)
        i += 1
    return list(output)
</code></pre> <p>This second answer uses the fact that you have at most <code>K</code> distinct elements in <code>lst</code> to break early once the <code>K</code>'th distinct element has been added to the output. Although this is still <code>O(N)</code> in the general case, it's possible that you'll get MUCH better performance from this if <code>K &lt;&lt; len_lst</code> and the items are sufficiently shuffled. Of course, you need to know <code>K</code> ahead of time by some means other than iterating to get the <code>max</code> (that would defeat the purpose of our short-circuiting).</p> <p>If these constraints don't hold, you're probably better off going with the function <code>f7</code> from the reference, since the implementation there is likely to be more optimal than the implementation here.</p>
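A note from later Python versions: on Python 3.7+ (and CPython 3.6 as an implementation detail), plain dicts preserve insertion order, so the same order-preserving dedupe can be written without `OrderedDict` or manual seen-set bookkeeping:

```python
def ordered_unique(seq):
    # dict.fromkeys keeps the first occurrence of each key, and dicts
    # preserve insertion order on Python 3.7+ (CPython 3.6 too)
    return list(dict.fromkeys(seq))

print(ordered_unique([4, 2, 1, 1, 2, 1, 1, 3, 2]))   # [4, 2, 1, 3]
```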
1
2016-07-27T19:35:32Z
38,621,732
<pre><code>from collections import OrderedDict

list_numbers = [4, 2, 1, 1, 2, 1, 1, 3, 2]
print list(OrderedDict.fromkeys(list_numbers))
</code></pre> <p>This gives the desired output: [4, 2, 1, 3]</p>
1
2016-07-27T19:36:10Z
38,622,121
<p>Here is another way, which assumes that all numbers in the range 1,...,k appear (as per the problem description):</p> <pre><code>def inOrder(nums):
    k = max(nums)
    indices = [nums.index(n) for n in range(1, k+1)]
    return [n for i, n in sorted(zip(indices, range(1, k+1)))]
</code></pre> <p>For example</p> <pre><code>&gt;&gt;&gt; inOrder([4, 2, 1, 1, 2, 1, 1, 3, 2])
[4, 2, 1, 3]
</code></pre> <p>It is <code>O(nk)</code> where <code>n</code> is the length of the list. On the other hand, it uses built-in methods which are fairly quick, and if on average the first appearance of each number is somewhat early in the list, then the runtime will be much better than the worst case. For example, if you define:</p> <pre><code>nums = [random.randint(1, 1000) for i in range(10**6)]
</code></pre> <p>then the evaluation of <code>inOrder(nums)</code> takes less than a second (even though the list has 1 million entries).</p>
0
2016-07-27T19:58:13Z
38,630,522
<p>This will be O(n), where n is the length of the list.</p> <p>It will go through the list. For each element, if it's the first time that element appears, it will add it to the result list.</p> <p>If there's a possibility that there are numbers in the list larger than k, or other non-integer elements, add an extra check that each element is an integer no greater than k. This code will not ensure that all of the numbers between 1 and k exist in the list.</p> <pre><code>def findSeq(inputList):
    seen = {}
    newList = []
    for elem in inputList:
        if elem not in seen:
            seen[elem] = True  # any value works; we only use the keys
            newList.append(elem)
    return newList
</code></pre> <p>I wrote this next function first because I misunderstood your question... Didn't want it to go to waste :). This checks if the elements of a list appear in another list in order.</p> <pre><code># inList([2, 1, 5], [2, 3, 1, 5]) -&gt; True
# inList([2, 1, 5], [2, 3, 5, 1]) -&gt; False
def inList(small, big):
    i = 0  # index in small
    j = 0  # index in big
    while j &lt; len(big):
        if small[i] == big[j]:
            i += 1
            j += 1
            # Any success is guaranteed to happen here,
            # right after you've found a match
            if i == len(small):
                return True
        else:
            j += 1
    return False
</code></pre>
0
2016-07-28T08:05:56Z
[ "python", "set" ]
Pandas Pivot Time-series by year
38,621,652
<p>Hello and thanks in advance for any help. I have a simple dataframe with two columns. I did not set an index explicitly, but I believe a dataframe gets an integer index that I see along the left side of the output. Question below:</p> <pre><code>df = pandas.DataFrame(res) df.columns = ['date', 'pb'] df['date'] = pandas.to_datetime(df['date']) df.dtypes date datetime64[ns] pb float64 dtype: object date pb 0 2016-04-01 24199.933333 1 2016-03-01 23860.870968 2 2016-02-01 23862.275862 3 2016-01-01 25049.193548 4 2015-12-01 24882.419355 5 2015-11-01 24577.000000 date datetime64[ns] pb float64 dtype: object </code></pre> <p><strong>I would like to pivot the dataframe so that I have years across the top (columns): 2016, 2015, etc and a row for each month: 1 - 12.</strong></p>
3
2016-07-27T19:31:35Z
38,621,739
<p>Using the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#dt-accessor" rel="nofollow"><code>.dt accessor</code></a> you can create columns for year and month and then pivot on those:</p> <pre><code>df['Year'] = df['date'].dt.year df['Month'] = df['date'].dt.month pd.pivot_table(df,index='Month',columns='Year',values='pb',aggfunc=np.sum) </code></pre> <p>Alternately if you don't want those other columns you can do:</p> <pre><code>pd.pivot_table(df,index=df['date'].dt.month,columns=df['date'].dt.year, values='pb',aggfunc=np.sum) </code></pre> <p>With my dummy dataset that produces:</p> <pre><code>Year 2013 2014 2015 2016 date 1 92924.0 102072.0 134660.0 132464.0 2 79935.0 82145.0 118234.0 147523.0 3 86878.0 94959.0 130520.0 138325.0 4 80267.0 89394.0 120739.0 129002.0 5 79283.0 91205.0 118904.0 125878.0 6 77828.0 89884.0 112488.0 121953.0 7 78839.0 94407.0 113124.0 NaN 8 79885.0 97513.0 116771.0 NaN 9 79455.0 99555.0 114833.0 NaN 10 77616.0 98764.0 115872.0 NaN 11 75043.0 95756.0 107123.0 NaN 12 81996.0 102637.0 114952.0 NaN </code></pre>
4
2016-07-27T19:36:37Z
38,622,023
<p>Using <code>stack</code> instead of <code>pivot</code></p> <pre><code>df = pd.DataFrame( dict(date=pd.date_range('2013-01-01', periods=42, freq='M'), pb=np.random.rand(42))) df.set_index([df.date.dt.month, df.date.dt.year]).pb.unstack() </code></pre> <p><a href="http://i.stack.imgur.com/omcVD.png" rel="nofollow"><img src="http://i.stack.imgur.com/omcVD.png" alt="enter image description here"></a></p>
3
2016-07-27T19:52:59Z
[ "python", "pandas" ]
string comparison not recognizing match
38,621,676
<p>I have a list of urls in 'origFile' that will be augmented and written into 'readyFile'. I want to add urls to 'readyFile' only if they are not already in the 'readyFile'. </p> <pre><code>with open('bpBlacklist.txt', 'r') as origFile, open('bpBlacklistReady','r+') as readyFile : for line in origFile: orig_string = line.strip() if orig_string in readyFile.read(): None else: readyFile.write( "'" + orig_string + "'," + '\n' ) origFile.close() readyFile.close() </code></pre> <p>Right now, it just rewrites the whole list into 'readyFile' every time I run it. I tried moving the "'+ str + '" augmentation outside the if statement but the problem persisted. </p>
1
2016-07-27T19:32:39Z
38,621,752
<p>By calling <code>readyFile.write(...)</code> inside the loop you move the file position, so later <code>readyFile.read()</code> calls no longer see the existing content (and the first <code>read()</code> already consumes the whole file, so subsequent reads return an empty string). You should save the readyFile content into a variable right after the <code>with</code> statement:</p> <pre><code>with open('bpBlacklist.txt', 'r') as origFile, open('bpBlacklistReady', 'r+') as readyFile:
    readyFileContent = readyFile.read()
</code></pre>
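A sketch of the resulting flow, pulled out into a helper so the dedup logic is testable on its own (the helper name is my own, not from the original post). It reads the existing content once, then formats only the URLs that are not already present:

```python
def missing_entries(orig_lines, ready_content):
    """Return formatted lines for URLs absent from ready_content."""
    out = []
    for line in orig_lines:
        url = line.strip()
        if url and url not in ready_content:
            out.append("'" + url + "',\n")
    return out

print(missing_entries(["a.example\n", "b.example\n"], "'a.example',\n"))
# ["'b.example',\n"]
```

With the files, that becomes: read `readyFile.read()` into a variable, then `readyFile.write(entry)` for each entry returned by the helper.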
1
2016-07-27T19:37:07Z
[ "python", "string", "comparison" ]
string comparison not recognizing match
38,621,676
<p>I have a list of urls in 'origFile' that will be augmented and written into 'readyFile'. I want to add urls to 'readyFile' only if they are not already in the 'readyFile'. </p> <pre><code>with open('bpBlacklist.txt', 'r') as origFile, open('bpBlacklistReady','r+') as readyFile : for line in origFile: orig_string = line.strip() if orig_string in readyFile.read(): None else: readyFile.write( "'" + orig_string + "'," + '\n' ) origFile.close() readyFile.close() </code></pre> <p>Right now, it just rewrites the whole list into 'readyFile' every time I run it. I tried moving the "'+ str + '" augmentation outside the if statement but the problem persisted. </p>
1
2016-07-27T19:32:39Z
38,621,778
<p>Your condition doesn't work as you expect because <code>read()</code> returns the whole file as a single string, so <code>in</code> performs substring matching (and after the first call the file is exhausted, so later <code>read()</code> calls return <code>''</code>). You need to check membership against a collection of URLs instead.</p> <p>And instead of looping over the file and checking membership for every URL, you can compute the difference with the <code>set.difference()</code> function and then write only the extra URLs (stripping the newlines so the comparison and the write stay consistent):</p> <pre><code>with open('bpBlacklist.txt', 'r') as origFile, open('bpBlacklistReady','r+') as readyFile : current = set(line.strip() for line in origFile) diffs = current.difference(line.strip() for line in readyFile) for url in diffs: readyFile.write(url + '\n') </code></pre>
0
2016-07-27T19:38:51Z
[ "python", "string", "comparison" ]
How do I get python code to execute properly from a button click when an alert is included as action from the button click?
38,621,677
<p>I have been having trouble getting alerts to work with an HTML button press that also calls a URL. In more detail: I have a button that, when pressed, calls a URL which (via my urls.py and views.py files) runs some Python code that plays a video. When I also attach an alert to the button press, the alert shows up without any problem, but the Python code does not get executed. My question is: how do I make these two actions compatible on a single button click so that both are executed properly? I am using the Django framework, if that provides more information. </p> <p>This is the html and javascript</p> <pre><code>&lt;script type="text/javascript"&gt; function show_alert() { alert("The View Is Now Playing in a New Window"); } &lt;/script&gt; &lt;a class="btn btn-default" href="/VideoPlayer" role="button" onclick="show_alert()"&gt;Play Video&lt;/a&gt; </code></pre>
0
2016-07-27T19:32:41Z
38,621,792
<p>If you want to execute server-side and client-side code from the same click, you need to use XHR or simply <a href="http://www.w3schools.com/ajax/" rel="nofollow">Ajax</a>.</p> <p>An <code>&lt;a href=''&gt;&lt;/a&gt;</code> element redirects the page to its <code>href</code> URL once the click handler finishes. To stop that redirect, make the handler return <code>false</code> and forward that value from the inline attribute (<code>onclick="return show_alert()"</code>):</p> <pre><code>function show_alert(){ alert("The View Is Now Playing in a New Window"); // fire your Ajax request to the Django view here instead of navigating return false; } </code></pre> <p>Hopefully this helps.</p>
0
2016-07-27T19:39:41Z
[ "javascript", "python", "html", "django" ]
Logistic Regression Python
38,621,685
<p>I have been trying to implement logistic regression for a classification problem, but it is giving me really bizarre results. I have gotten decent results with gradient boosting and random forests so I thought of getting to basic and see what best can I achieve. Can you help me point out what am I doing wrong that is causing this overfitting? You can get the data from <a href="https://www.kaggle.com/c/santander-customer-satisfaction/data" rel="nofollow">https://www.kaggle.com/c/santander-customer-satisfaction/data</a></p> <p>Here is my code:</p> <pre><code>import pandas as pd import numpy as np train = pd.read_csv("path") test = pd.read_csv("path") test["TARGET"] = 0 fullData = pd.concat([train,test], ignore_index = True) remove1 = [] for col in fullData.columns: if fullData[col].std() == 0: remove1.append(col) fullData.drop(remove1, axis=1, inplace=True) import numpy as np remove = [] cols = fullData.columns for i in range(len(cols)-1): v = fullData[cols[i]].values for j in range(i+1,len(cols)): if np.array_equal(v,fullData[cols[j]].values): remove.append(cols[j]) fullData.drop(remove, axis=1, inplace=True) from sklearn.cross_validation import train_test_split X_train, X_test = train_test_split(fullData, test_size=0.20, random_state=1729) print(X_train.shape, X_test.shape) y_train = X_train["TARGET"].values X = X_train.drop(["TARGET","ID"],axis=1,inplace = False) from sklearn.ensemble import ExtraTreesClassifier clf = ExtraTreesClassifier(random_state=1729) selector = clf.fit(X, y_train) from sklearn.feature_selection import SelectFromModel fs = SelectFromModel(selector, prefit=True) X_t = X_test.drop(["TARGET","ID"],axis=1,inplace = False) X_t = fs.transform(X_t) X_tr = X_train.drop(["TARGET","ID"],axis=1,inplace = False) X_tr = fs.transform(X_tr) from sklearn.linear_model import LogisticRegression log = LogisticRegression(penalty ='l2', C = 1, random_state = 1, ) from sklearn import cross_validation scores = 
cross_validation.cross_val_score(log,X_tr,y_train,cv = 10) print(scores.mean()) log.fit(X_tr,y_train) predictions = log.predict(X_t) predictions = predictions.astype(int) print(predictions.mean()) </code></pre>
0
2016-07-27T19:33:26Z
38,633,268
<p>You are not configuring the C parameter - well, technically you are, but only to the default value - which is one of the usual suspects for overfitting. You can have a look at <a href="http://scikit-learn.org/stable/modules/grid_search.html#grid-search" rel="nofollow">GridSearchCV</a> and play around a bit with several values for the C parameter (e.g. from 10^-5 to 10^5) to see if it eases your problem. Changing the penalty rule to 'l1' might help as well.</p> <p>Besides, there were several challenges with that competition: it is an imbalanced data set, and the distributions between the training set and the private LB were a bit different. All of this is going to play against you, especially when using simple algorithms like LR.</p>
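With current scikit-learn the grid search lives in <code>sklearn.model_selection</code>; a minimal sketch of scanning C (and the penalty) on synthetic data follows. The grid bounds, solver and scorer are illustrative, not tuned for the Santander set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# synthetic stand-in for the real training data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    LogisticRegression(solver="liblinear"),   # liblinear supports both l1 and l2
    param_grid={"C": np.logspace(-5, 5, 11),  # 10^-5 ... 10^5, as the answer suggests
                "penalty": ["l1", "l2"]},
    cv=5,
    scoring="roc_auc",                        # AUC suits imbalanced problems better than accuracy
)
grid.fit(X, y)
print(grid.best_params_)
```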
0
2016-07-28T10:06:30Z
[ "python", "logistic-regression", "data-science" ]
PyQt5 pip installation error 13. Permission denied
38,621,689
<p>I'm trying to install PyQt5 with the command <code>pip install PyQt5</code></p> <p>but I get an error instead.</p> <p>I use Python 3.5, windows 10.</p> <p>error:</p> <pre><code>C:\WINDOWS\system32&gt;pip install PyQt5 Collecting PyQt5 Using cached PyQt5-5.7-cp35-none-win_amd64.whl Collecting sip (from PyQt5) Using cached sip-4.18.1-cp35-none-win_amd64.whl Installing collected packages: sip, PyQt5 Exception: Traceback (most recent call last): File "c:\anaconda3\lib\site-packages\pip\basecommand.py", line 215, in main status = self.run(options, args) File "c:\anaconda3\lib\site-packages\pip\commands\install.py", line 317, in run prefix=options.prefix_path, File "c:\anaconda3\lib\site-packages\pip\req\req_set.py", line 742, in install **kwargs File "c:\anaconda3\lib\site-packages\pip\req\req_install.py", line 831, in install self.move_wheel_files(self.source_dir, root=root, prefix=prefix) File "c:\anaconda3\lib\site-packages\pip\req\req_install.py", line 1032, in move_wheel_files isolated=self.isolated, File "c:\anaconda3\lib\site-packages\pip\wheel.py", line 346, in move_wheel_files clobber(source, lib_dir, True) File "c:\anaconda3\lib\site-packages\pip\wheel.py", line 324, in clobber shutil.copyfile(srcfile, destfile) File "c:\anaconda3\lib\shutil.py", line 115, in copyfile with open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: 'c:\\anaconda3\\Lib\\site-packages\\sip.pyd' </code></pre>
0
2016-07-27T19:33:42Z
39,791,527
<p>With a different Windows (8.2) and a different PyQt (4.4) I had the same problem. What worked for me was: run Task Manager and see if there are any Python tasks running. If there are (there were for me), kill them, as they (probably) lock the sip.pyd file. Then try "pip install" again ("successfully installed" in my case).</p>
1
2016-09-30T12:26:35Z
[ "python", "pip" ]
GetPixel memory leak in Python
38,621,724
<p>I have a script that sits outside a game, reads pixels, and reacts to that information by "pressing keys", "clicking", etc. So to get the pixels, I am using code like this</p> <pre><code>def function(): a = win32gui.GetPixel(win32gui.GetDC(win32gui.GetActiveWindow()), x, y) return a </code></pre> <p>in order to get values of pixels on the screen quickly and have the script quickly react.</p> <p>It starts off fine, is able to execute everything it needs to in time, but it gets progressively slower.</p> <p>I have identified the problem source as GetPixel by trying to use other methods like this</p> <pre><code>def function(): box = (x1, y1, x2, y2) im = ImageOps.grayscale(ImageGrab.grab(box)) a = array(im.getcolors()) a = a.sum() return a </code></pre> <p>which are far too slow, but if I run a while loop containing these other methods, they do not gradually execute slower and slower like the fast method with GetPixel does (memory leak.) </p> <p>I am using local variables that are deleted afterwards, etc. It IS GetPixel that is the problem. I just don't know where the stuff it's not deleting is, how to tell Python to delete it, if that's even possible, etc.</p>
2
2016-07-27T19:35:51Z
38,677,186
<p>You should call <a href="http://docs.activestate.com/activepython/3.2/pywin32/win32gui__ReleaseDC_meth.html" rel="nofollow">win32gui.ReleaseDC</a> for each call of <a href="http://docs.activestate.com/activepython/3.2/pywin32/win32gui__GetDC_meth.html" rel="nofollow">win32gui.GetDC</a> as explained in <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/dd144871(v=vs.85).aspx" rel="nofollow">GetDC</a>:</p> <blockquote> <p>After painting with a common DC, the ReleaseDC function must be called to release the DC.</p> </blockquote> <pre><code>def function(): hwnd = win32gui.GetActiveWindow() hdc = win32gui.GetDC(hwnd) a = win32gui.GetPixel(hdc , x, y) win32gui.ReleaseDC(hwnd,hdc) return a </code></pre>
0
2016-07-30T18:27:06Z
[ "python", "memory", "pixel", "getpixel" ]
Sklearn: find mean centroid location for clusters?
38,621,791
<pre><code>import pandas as pd, numpy as np, scipy import sklearn.feature_extraction.text as text from sklearn import decomposition descs = ["You should not go there", "We may go home later", "Why should we do your chores", "What should we do"] vectorizer = text.CountVectorizer() dtm = vectorizer.fit_transform(descs).toarray() vocab = np.array(vectorizer.get_feature_names()) nmf = decomposition.NMF(3, random_state = 1) topic = nmf.fit_transform(dtm) </code></pre> <p>Printing <code>topic</code> leaves me with:</p> <pre><code>&gt;&gt;&gt; print(topic) [0. , 1.403 , 0. ], [0. , 0. , 1.637 ], [1.257 , 0. , 0. ], [0.874 , 0.056 , 0.065 ] </code></pre> <p>Which are vectors of each element in <code>descs</code>'s likelihood to belong to a certain cluster. How can I get the coordinates of the centroid of each cluster? Ultimately, I want to develop a function to calculate the distance of each element in <code>descs</code> from the centroid of the cluster it was assigned to.</p> <p>Would it be best to just compute the average of each <code>descs</code> element's <code>topic</code> value for each cluster?</p>
0
2016-07-27T19:39:32Z
38,626,076
<p>The <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html" rel="nofollow">docs</a> of <code>sklearn.decomposition.NMF</code> explain how to get the coordinates of the centroid of each cluster:</p> <blockquote> <p><strong>Attributes:</strong> &nbsp;&nbsp;&nbsp; <strong>components_</strong> : array, [n_components, n_features]<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Non-negative components of the data. </p> </blockquote> <p>The basis vectors are arranged row-wise, as shown in the following interactive session:</p> <pre><code>In [995]: np.set_printoptions(precision=2) In [996]: nmf.components_ Out[996]: array([[ 0.54, 0.91, 0. , 0. , 0. , 0. , 0. , 0.89, 0. , 0.89, 0.37, 0.54, 0. , 0.54], [ 0. , 0.01, 0.71, 0. , 0. , 0. , 0.71, 0.72, 0.71, 0.01, 0.02, 0. , 0.71, 0. ], [ 0. , 0.01, 0.61, 0.61, 0.61, 0.61, 0. , 0. , 0. , 0.62, 0.02, 0. , 0. , 0. ]]) </code></pre> <p>As for your second question, I don't see the point of "<em>computing the average of each</em> <code>descs</code> <em>element's topic value for each cluster</em>". In my opinion it makes more sense to perform the classification through the calculated likelihoods.</p>
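For the follow-up question (the distance of each document from the "centroid" of its assigned cluster), one possible sketch treats each row of <code>components_</code> as that cluster's centroid in feature space. The small matrices below are made-up stand-ins for <code>dtm</code>, <code>topic</code> and <code>nmf.components_</code>:

```python
import numpy as np

# stand-ins: 4 documents x 5 features, factorised into 2 components
dtm = np.array([[2., 0., 1., 0., 0.],
                [0., 3., 0., 1., 0.],
                [2., 0., 2., 0., 0.],
                [0., 1., 0., 2., 0.]])
topic = np.array([[1.2, 0.0],
                  [0.0, 1.5],
                  [1.4, 0.1],
                  [0.1, 0.9]])          # like nmf.fit_transform(dtm)
components = np.array([[1.4, 0.0, 1.1, 0.0, 0.0],
                       [0.0, 1.9, 0.0, 1.0, 0.0]])  # like nmf.components_

assigned = topic.argmax(axis=1)         # strongest-weighted component per document
dists = np.linalg.norm(dtm - components[assigned], axis=1)
print(assigned, dists)
```

Each document is assigned to its dominant component, then its Euclidean distance to that component's basis vector is computed in the original word-count space.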
1
2016-07-28T02:16:27Z
[ "python", "scikit-learn" ]
Replacing "/" with "\" doesn't work
38,621,822
<p>I am trying to replace <code>/</code> with <code>\</code> as below, but it doesn't work, why is that?</p> <pre><code>str = "images/companyPkg/Pkg/nib64/" replaced_str = str.replace('//','\\') print replaced_str </code></pre>
0
2016-07-27T19:41:15Z
38,621,849
<p><code>'/'</code> has no special meaning in a string literal, so it does not need to be doubled. The backslash is the escape character, which is why a literal backslash is written <code>'\\'</code> (a lone trailing <code>'\'</code> would escape the closing quote instead of ending the string):</p> <pre><code>s = "images/companyPkg/Pkg/nib64/" replaced_str = s.replace('/','\\') </code></pre> <p>Don't assign anything to the name <code>str</code>: <code>str</code> is a builtin (the class for strings) in Python, and shadowing it makes the builtin unusable later on in your code. You don't want that.</p>
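A quick check of the difference (the doubled backslash exists only in the source literal; the resulting string holds single backslashes):

```python
s = "images/companyPkg/Pkg/nib64/"

wrong = s.replace('//', '\\')   # '//' never occurs in s, so nothing changes
right = s.replace('/', '\\')    # replaces each '/' with one backslash

print(right)  # images\companyPkg\Pkg\nib64\
```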
8
2016-07-27T19:43:07Z
[ "python" ]
Replacing "/" with "\" doesn't work
38,621,822
<p>I am trying to replace <code>/</code> with <code>\</code> as below, but it doesn't work, why is that?</p> <pre><code>str = "images/companyPkg/Pkg/nib64/" replaced_str = str.replace('//','\\') print replaced_str </code></pre>
0
2016-07-27T19:41:15Z
38,621,899
<p>You don't need to escape the <code>/</code> in Python, just the <code>\</code>, so the following line should do the trick:</p> <pre><code>replaced_str = str.replace('/', '\\') </code></pre>
2
2016-07-27T19:45:22Z
[ "python" ]
Replacing "/" with "\" doesn't work
38,621,822
<p>I am trying to replace <code>/</code> with <code>\</code> as below, but it doesn't work, why is that?</p> <pre><code>str = "images/companyPkg/Pkg/nib64/" replaced_str = str.replace('//','\\') print replaced_str </code></pre>
0
2016-07-27T19:41:15Z
38,622,199
<p>You should double the backslash <code>\</code> because it is the escape character: it gives a special meaning to certain characters. For example, <code>n</code> is a <strong>plain 'n'</strong>, but <code>\n</code> is a <strong>newline</strong>. The forward slash <code>/</code> is an ordinary character, so you don't need to double it.</p> <p>You should write <code>replaced_str = str.replace('/','\\')</code></p>
0
2016-07-27T20:03:01Z
[ "python" ]
Find duplicates in python list of dictionaries
38,621,915
<p>I have a list of dictionaries like the one below: </p> <pre><code>a = [{'un': 'a', 'id': "cd"}, {'un': 'b', 'id': "cd"},{'un': 'b', 'id': "cd"}, {'un': 'c', 'id': "vd"}, {'un': 'c', 'id': "a"}, {'un': 'c', 'id': "vd"}, {'un': 'a', 'id': "cm"}] </code></pre> <p>I need to find the duplicate dictionaries by the 'un' key. For example, {'un': 'a', 'id': "cd"} and {'un': 'a', 'id': "cm"} are duplicates by the value of the key 'un'. Secondly, once the duplicates are found, I need to decide which dict to keep based on its second value, the key 'id'; for example, we keep the dict with the pattern value "cm".</p> <p>I have already made the first step, see the code below:</p> <pre><code>from collections import defaultdict temp_ids = [] dup_dict = defaultdict(list) for number, row in enumerate(a): id = row['un'] if id not in temp_ids: temp_ids.append(id) else: dup_dict[id].append(number) </code></pre> <p>Using this code I am more or less able to find the indexes of the duplicate dicts; maybe there is another way to do it. I also need the next-step code that decides which dict to keep and which to omit. I will be very grateful for help.</p>
2
2016-07-27T19:46:21Z
38,622,067
<p>You can use another dictionary in order to categorize your dictionaries based on the <code>'un'</code> key, then choose the expected items based on <code>id</code>:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; &gt;&gt;&gt; d = defaultdict(list) &gt;&gt;&gt; &gt;&gt;&gt; for i in a: ... d[i['un']].append(i) ... &gt;&gt;&gt; d defaultdict(&lt;type 'list'&gt;, {'a': [{'un': 'a', 'id': 'cd'}, {'un': 'a', 'id': 'cm'}], 'c': [{'un': 'c', 'id': 'vd'}, {'un': 'c', 'id': 'a'}, {'un': 'c', 'id': 'vd'}], 'b': [{'un': 'b', 'id': 'cd'}, {'un': 'b', 'id': 'cd'}]}) &gt;&gt;&gt; &gt;&gt;&gt; keeps = {'a': 'cm', 'b':'cd', 'c':'vd'} # the key is 'un' and the value is the 'id' that should be kept for that 'un' &gt;&gt;&gt; &gt;&gt;&gt; [i for key, val in d.items() for i in val if i['id']==keeps[key]] [{'un': 'a', 'id': 'cm'}, {'un': 'c', 'id': 'vd'}, {'un': 'c', 'id': 'vd'}, {'un': 'b', 'id': 'cd'}, {'un': 'b', 'id': 'cd'}] &gt;&gt;&gt; </code></pre> <p>In the last line (the nested list comprehension) we loop over the aggregated dict's items, then over the values, and keep those items within the values that satisfy our condition, which is <code>i['id']==keeps[key]</code> - that is, we keep the items whose <code>id</code> has the value specified in the <code>keeps</code> dictionary.</p> <p>You can break the list comprehension down into something like this:</p> <pre><code>final_list = [] for key, val in d.items(): for i in val: if i['id']==keeps[key]: final_list.append(i) </code></pre> <p>Note that since the iteration in a list comprehension is performed in C, it's much faster than a regular Python loop, and it is the pythonic way to go. But if performance is not important to you, you can use the regular approach.</p>
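Put together as a runnable script (`keeps` is the same illustrative mapping as above, not something derived from the data):

```python
from collections import defaultdict

a = [{'un': 'a', 'id': 'cd'}, {'un': 'b', 'id': 'cd'}, {'un': 'b', 'id': 'cd'},
     {'un': 'c', 'id': 'vd'}, {'un': 'c', 'id': 'a'},  {'un': 'c', 'id': 'vd'},
     {'un': 'a', 'id': 'cm'}]

d = defaultdict(list)
for item in a:
    d[item['un']].append(item)          # group the dicts by their 'un' key

keeps = {'a': 'cm', 'b': 'cd', 'c': 'vd'}  # which 'id' to keep for each 'un'

kept = [i for key, val in d.items() for i in val if i['id'] == keeps[key]]
print(kept)
```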
0
2016-07-27T19:55:10Z
[ "python", "list", "python-2.7", "dictionary" ]
Find duplicates in python list of dictionaries
38,621,915
<p>I have a list of dictionaries like the one below: </p> <pre><code>a = [{'un': 'a', 'id': "cd"}, {'un': 'b', 'id': "cd"},{'un': 'b', 'id': "cd"}, {'un': 'c', 'id': "vd"}, {'un': 'c', 'id': "a"}, {'un': 'c', 'id': "vd"}, {'un': 'a', 'id': "cm"}] </code></pre> <p>I need to find the duplicate dictionaries by the 'un' key. For example, {'un': 'a', 'id': "cd"} and {'un': 'a', 'id': "cm"} are duplicates by the value of the key 'un'. Secondly, once the duplicates are found, I need to decide which dict to keep based on its second value, the key 'id'; for example, we keep the dict with the pattern value "cm".</p> <p>I have already made the first step, see the code below:</p> <pre><code>from collections import defaultdict temp_ids = [] dup_dict = defaultdict(list) for number, row in enumerate(a): id = row['un'] if id not in temp_ids: temp_ids.append(id) else: dup_dict[id].append(number) </code></pre> <p>Using this code I am more or less able to find the indexes of the duplicate dicts; maybe there is another way to do it. I also need the next-step code that decides which dict to keep and which to omit. I will be very grateful for help.</p>
2
2016-07-27T19:46:21Z
38,622,184
<p>you were pretty much on the right track with a defaultdict... here's roughly how I would write it.</p> <pre><code>from collections import defaultdict a = [{'un': 'a', 'id': "cd"}, {'un': 'b', 'id': "cd"},{'un': 'b', 'id': "cd"}, {'un': 'c', 'id': "vd"}, {'un': 'c', 'id': "a"}, {'un': 'c', 'id': "vd"}, {'un': 'a', 'id': "cm"}] items = defaultdict(list) for row in a: items[row['un']].append(row['id']) #make a list of 'id' values for each 'un' key for key in items.keys(): if len(items[key]) &gt; 1: #if there is more than one 'id' newValue = somefunc(items[key]) #decided which of the list items to keep items[key] = newValue #put that new value back into the dictionary </code></pre>
0
2016-07-27T20:01:31Z
[ "python", "list", "python-2.7", "dictionary" ]
How to split a math formula that is composed of variables that contain dashes
38,621,962
<p>Trying to split this line:</p> <pre><code>formula='%abc-def%+%hij-klm%/%opq+rst%-%uvw-xyz%' </code></pre> <p>The variables are contained within the <code>"%"</code> signs and must remain intact. </p> <p>I want to split on <code>+-/*</code> without splitting a variable because of the <code>'-'</code> in its name.</p> <p>Is there an easy way without having to use a for loop to scan each character?</p> <hr> <p><strong>1st Way:</strong></p> <p>Splits the variables (no good):</p> <pre><code>re.compile("[\+\/\-\*]").split(formula) ['%abc', 'def%', '%hij', 'klm%', '%opq', 'rst%', '%uvw', 'xyz%'] </code></pre> <hr> <p><strong>2nd Way:</strong></p> <p>Loses the % (no good):</p> <pre><code>re.compile("%[\+\/\-\*]%").split(formula) ['%abc-def', 'hij-klm', 'opq+rst', 'uvw-xyz%'] </code></pre> <hr> <p><strong>Expected Output:</strong></p> <p>I'm looking for something that'll yield:</p> <pre><code>['%abc-def%', '%hij-klm%', '%opq+rst%', '%uvw-xyz%'] </code></pre> <p>Thanks, Dan</p>
1
2016-07-27T19:49:14Z
38,622,156
<p>You can prevent <code>re.compile("%[\+\/\-\*]%").split(formula)</code> from dropping the <code>%</code> characters by using lookarounds:</p> <pre><code>re.compile(r"(?&lt;=%)[+/*-](?=%)").split(formula) </code></pre> <p>Another solution would be to split on <code>[+/*-]</code> but only if followed by an even number of <code>%</code> characters:</p> <pre><code>re.split(r'[+*/-](?=(?:(?:[^%]*%){2})*$)', formula) </code></pre>
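Both patterns can be verified against the formula from the question:

```python
import re

formula = '%abc-def%+%hij-klm%/%opq+rst%-%uvw-xyz%'
expected = ['%abc-def%', '%hij-klm%', '%opq+rst%', '%uvw-xyz%']

# 1) operator flanked by '%' on both sides (lookbehind + lookahead)
parts1 = re.split(r'(?<=%)[+/*-](?=%)', formula)

# 2) operator followed by an even number of remaining '%' characters,
#    i.e. an operator sitting *between* variables, not inside one
parts2 = re.split(r'[+*/-](?=(?:(?:[^%]*%){2})*$)', formula)

print(parts1 == expected and parts2 == expected)  # True
```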
0
2016-07-27T20:00:10Z
[ "python", "regex", "split" ]
How to split a math formula that is composed of variables that contain dashes
38,621,962
<p>Trying to split this line:</p> <pre><code>formula='%abc-def%+%hij-klm%/%opq+rst%-%uvw-xyz%' </code></pre> <p>The variables are contained within the <code>"%"</code> signs and must remain intact. </p> <p>I want to split on <code>+-/*</code> without splitting a variable because of the <code>'-'</code> in its name.</p> <p>Is there an easy way without having to use a for loop to scan each character?</p> <hr> <p><strong>1st Way:</strong></p> <p>Splits the variables (no good):</p> <pre><code>re.compile("[\+\/\-\*]").split(formula) ['%abc', 'def%', '%hij', 'klm%', '%opq', 'rst%', '%uvw', 'xyz%'] </code></pre> <hr> <p><strong>2nd Way:</strong></p> <p>Loses the % (no good):</p> <pre><code>re.compile("%[\+\/\-\*]%").split(formula) ['%abc-def', 'hij-klm', 'opq+rst', 'uvw-xyz%'] </code></pre> <hr> <p><strong>Expected Output:</strong></p> <p>I'm looking for something that'll yield:</p> <pre><code>['%abc-def%', '%hij-klm%', '%opq+rst%', '%uvw-xyz%'] </code></pre> <p>Thanks, Dan</p>
1
2016-07-27T19:49:14Z
38,622,169
<p>A 2-step solution (note the raw string: without it, escapes like <code>\-</code> trigger invalid-escape-sequence warnings on recent Python versions):</p> <pre><code>import re tempList = re.split(r"(\-|\+|\/|\*)(?=%)", '%abc-def%+%hij-klm%/%opq+rst%-%uvw-xyz%') finalList = [x for x in tempList if "%" in x] ['%abc-def%', '%hij-klm%', '%opq+rst%', '%uvw-xyz%'] </code></pre> <p>I hope this helps.</p>
0
2016-07-27T20:00:55Z
[ "python", "regex", "split" ]
Import package from adjacent module works in Py2 but not in Py3
38,621,975
<p>I'm still getting used to structuring Python projects and relative imports; I thought I mostly understood relative imports until I ran into an issue when testing on Py3.</p> <p>I have a project that is structured like so:</p> <pre><code>scriptA.py package/__init__.py scriptB.py scriptC.py </code></pre> <p>and <code>__init__.py</code> contains the following:</p> <pre><code>from scriptB import functionB from scriptC import functionC </code></pre> <p>In <code>scriptA</code>, <code>import package as _package</code> works in Py2.7, but fails on Py3.5 with the error <code>ImportError: No module named 'scriptB'</code>.</p> <p>How can I import <code>package</code> in a way that is compatible with both Py2 and Py3? Why is this different?</p> <p>I tried doing <code>import .package as _package</code> but that doesn't seem to change anything (still figuring out when to use <code>.</code> and <code>..</code>...</p>
0
2016-07-27T19:50:15Z
38,623,466
<p><strong>No module named 'scriptB'</strong></p> <p>Looks like a path error. Check your path statement. The path statement in Windows can be accessed by typing "system" into the Start menu search bar. From there you can open "Environment Variables", one of which is the "path" statement. There you can see the directories that Python scripts are allowed to access. Make sure your Python 3.x directory and your Python 3.x/Scripts directory are both listed there.</p> <p>Files listed in the path statement are a bit like global variables in a program. If both Python 2 and Python 3 have the <strong>same</strong> file names in their directories, you could be accessing an incompatible file. I've read that there is a Python launcher which solves exactly this problem, so that you can run multiple versions of Python on the same computer. For now, I would test whether this is your problem by:</p> <ul> <li>copying the path statement to a backup file,</li> <li>removing references to Python 2.x,</li> <li>rebooting,</li> <li>running your program.</li> </ul> <p>If you want to run multiple versions of Python, copy the original path statement back and install the Python launcher.</p>
0
2016-07-27T21:27:01Z
[ "python", "python-3.x", "packages", "python-2.x", "importerror" ]
Import package from adjacent module works in Py2 but not in Py3
38,621,975
<p>I'm still getting used to structuring Python projects and relative imports; I thought I mostly understood relative imports until I ran into an issue when testing on Py3.</p> <p>I have a project that is structured like so:</p> <pre><code>scriptA.py package/__init__.py scriptB.py scriptC.py </code></pre> <p>and <code>__init__.py</code> contains the following:</p> <pre><code>from scriptB import functionB from scriptC import functionC </code></pre> <p>In <code>scriptA</code>, <code>import package as _package</code> works in Py2.7, but fails on Py3.5 with the error <code>ImportError: No module named 'scriptB'</code>.</p> <p>How can I import <code>package</code> in a way that is compatible with both Py2 and Py3? Why is this different?</p> <p>I tried doing <code>import .package as _package</code> but that doesn't seem to change anything (still figuring out when to use <code>.</code> and <code>..</code>...</p>
0
2016-07-27T19:50:15Z
38,624,524
<p>So the problem was that the imports in <code>__init__.py</code> should have been relative imports, i.e.:</p> <pre><code>from .scriptB import functionB from .scriptC import functionC </code></pre> <p>I guess this is one of the differences between importing a module and a Python package. Python 3 removed implicit relative imports, so intra-package imports must be written explicitly with a leading dot, which is why Py3.5 threw an error while Py2.7 did not. Unfortunately, the structure I put in the question was not close enough to my actual problem, so this doesn't fully solve it... but a more detailed (and probably more accurate) answer is still welcome.</p>
0
2016-07-27T22:57:28Z
[ "python", "python-3.x", "packages", "python-2.x", "importerror" ]
Validation on CreateView and UpdateView in django always being triggered, even when model doesn't/shouldn't
38,622,020
<p>I am not sure why adding widgets in my GroupForm form in forms.py caused my validations to go haywire. Before that they were respecting my models, now after adding widget attrs for everything it no longer respects the models and says a field is required for everything. Is there some other item I missed when defining the widget?</p> <p>forms.py:</p> <pre><code>class GroupForm(forms.ModelForm): group_name = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'1', 'placeholder':'Groups name'})) group_contact = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'2', 'placeholder':'Groups point of contact person'})) tin = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'3', 'placeholder':'Groups tin#'})) npi = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'4', 'placeholder':'Groups npi#'})) notes = forms.CharField(widget = forms.Textarea(attrs={'tabindex':'5', 'placeholder':'Group notes'})) #notes = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'5', 'placeholder':'Groups notes'})) billing_address = forms.ModelChoiceField(queryset=Address.objects.all(), widget=forms.Select(attrs={'tabindex':'6'})) mailing_address = forms.ModelChoiceField(queryset=Address.objects.all(), widget=forms.Select(attrs={'tabindex':'7'})) start_date = forms.DateField(widget=forms.TextInput(attrs= { 'class':'datepicker', 'tabindex' : '8', 'placeholder' : 'Groups start date' })) end_date = forms.DateField(widget=forms.TextInput(attrs= { 'class':'datepicker', 'tabindex' : '9', 'placeholder' : 'Groups term date' })) change_date = forms.DateField(widget=forms.TextInput(attrs= { 'class':'datepicker', 'tabindex' : '10', 'placeholder' : 'Groups changed date' })) change_text = forms.CharField(widget = forms.TextInput(attrs={'tabindex':'11', 'placeholder':'Reason for date change'})) #term_comment = forms.CharField(widget= forms.TextInput(attrs={'tabindex':'12', 'placeholder':'Note on group term'})) term_comment = forms.CharField(widget = 
forms.Textarea(attrs={'tabindex':'12', 'placeholder':'Note on group term'})) group_phone = forms.RegexField(regex=r'^(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}$', error_message = ("Phone number must be entered in the format: '555-555-5555 or 5555555555'. Up to 15 digits allowed."), widget = forms.TextInput(attrs={'tabindex':'13', 'placeholder': '555-555-5555 or 5555555555'})) group_fax = forms.RegexField(regex=r'^(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}$', error_message = ("Fax number must be entered in the format: '555-555-5555 or 5555555555'. Up to 15 digits allowed."), widget = forms.TextInput(attrs={'tabindex':'15', 'placeholder': '555-555-5555 or 5555555555'})) group_term = forms.ModelChoiceField(queryset=GroupTerm.objects.all(), widget=forms.Select(attrs={'tabindex':'16'})) class Meta: model=Group exclude = ['created_at', 'updated_at'] </code></pre> <p>views.py:</p> <pre><code>class GroupCreateView(CreateView): model = Group form_class = GroupForm template_name = 'ipaswdb/group/group_form.html' success_url = 'ipaswdb/group/' def form_valid(self, form): return super(GroupCreateView, self).form_valid(form) class GroupUpdateView(UpdateView): model = Group form_class = GroupForm template_name = 'ipaswdb/group/group_form.html' success_url = 'ipaswdb/group/' </code></pre> <p>Group model:</p> <pre><code>class Group(models.Model): group_name = models.CharField(max_length=50) group_contact= models.CharField(max_length=50) tin = models.CharField(max_length=50) npi =models.CharField(max_length=50) notes = models.TextField(max_length = 255, null=True, blank=True) billing_address = models.ForeignKey('Address', related_name = 'billing_address', on_delete=models.SET_NULL, null=True) mailing_address = models.ForeignKey('Address', related_name = 'mailing_address', on_delete=models.SET_NULL, null=True, blank=True) start_date = models.DateField(auto_now=False, auto_now_add=False, null=True, blank=True) end_date = models.DateField(auto_now=False, 
auto_now_add=False, null=True, blank=True) change_date = models.DateField(auto_now=False, auto_now_add=False, null=True, blank=True) change_text = models.TextField(max_length = 255, null=True, blank=True) term_comment = models.TextField(max_length = 255, null=True, blank=True) group_phone=models.CharField(max_length=50) group_fax = models.CharField(max_length=50) group_term = models.ForeignKey(GroupTerm, on_delete=models.SET_NULL, null=True, blank=True) #quesiton is can a group be termed many times? created_at=models.DateField(auto_now_add=True) updated_at=models.DateField(auto_now=True) #provider_location = models.ManyToManyField('ProviderLocations', through='GroupLocations') def __str__(self): return self.group_name </code></pre>
0
2016-07-27T19:52:37Z
38,627,053
<p>It's not because you added the widgets, it's because you actually redefined the fields and while redefining them, you did not respect your model's requirements. For example in your model </p> <pre><code> mailing_address = models.ForeignKey(..., null=True, blank=True) </code></pre> <p>mailing address is allowed to be empty, but in your defined form field it's required.</p> <pre><code>mailing_address = forms.ModelChoiceField(queryset=Address.objects.all(), widget=forms.Select(attrs={'tabindex':'7'})) # You need required=False </code></pre> <p>If you want to redefine your own fields for the <code>modelForm</code> you can, then you need to respect your model while doing it. <strong>However</strong>, you can also accomplish what you're trying by using already existing dictionaries in <code>modelForm</code>. For example inside your <code>class Meta</code> you can override the widgets like this:</p> <pre><code> class YourForm(ModelForm): class Meta: model = YourModel fields = ('field_1', 'field_2', 'field_3', ...) widgets = { # CHANGE THE WIDGETS HERE IF YOU WANT TO 'field_1': Textarea(attrs={'cols': 80, 'rows': 20}), } labels ={ # CHANGE THE LABELS HERE IF YOU WANT TO } </code></pre> <p>More info at Django's <code>modelForm</code> docs: <a href="https://docs.djangoproject.com/en/1.9/topics/forms/modelforms/#overriding-the-default-fields" rel="nofollow">Overriding defaults field</a></p>
1
2016-07-28T04:22:03Z
[ "python", "django", "validation", "django-models", "django-forms" ]
Plot histogram of a list of strings in Python
38,622,082
<p>I have a very long list of IDs (the IDs are string values). I want to plot a histogram of this list. There are code samples in other threads on Stack Overflow for plotting a histogram, but the histogram I want should look like the picture below (i.e. the highest values are on the left side and the values gradually decrease as the x-axis increases). </p> <p>This is the code for plotting a regular histogram:</p> <pre><code>import pandas from collections import Counter items = ...  # a long list of ID strings letter_counts = Counter(items) df = pandas.DataFrame.from_dict(letter_counts, orient='index') df.plot(kind='bar') </code></pre> <p><a href="http://i.stack.imgur.com/oc1nk.gif" rel="nofollow"><img src="http://i.stack.imgur.com/oc1nk.gif" alt="The histogram"></a></p>
1
2016-07-27T19:56:07Z
38,622,519
<p>how about something along these lines...</p> <pre><code>from collections import Counter import matplotlib.pyplot as plt import numpy as np counts = Counter(['a','a','a','c','a','a','c','b','b','d', 'd','d','d','d','b']) common = counts.most_common() labels = [item[0] for item in common] number = [item[1] for item in common] nbars = len(common) plt.bar(np.arange(nbars), number, tick_label=labels) plt.show() </code></pre> <p>The <a href="https://docs.python.org/2/library/collections.html#collections.Counter.most_common" rel="nofollow"><code>most_common()</code></a> call is the main innovation of this script. The rest is easily found in the <code>matplotlib</code> documentation (already linked in my comment).</p>
0
2016-07-27T20:22:38Z
[ "python", "matplotlib", "histogram" ]
Is it really necessary to hash the same for classes that compare the same?
38,622,200
<p>Reading <a href="https://stackoverflow.com/a/1608882/143211">this answer</a> it seems that if <code>__eq__</code> is defined in a custom class, <code>__hash__</code> needs to be defined as well. This is understandable.<br> However it is not clear why - effectively - <code>__eq__</code> should be the same as <code>self.__hash__()==other.__hash__()</code></p> <p>Imagining a class like this: </p> <pre><code>class Foo: ... self.Name self.Value ... def __eq__(self,other): return self.Value==other.Value ... def __hash__(self): return id(self.Name) </code></pre> <p>This way class instances could be compared by value, which could be the only reasonable use, but considered identical by name.<br> Then a <code>set</code> could not contain multiple instances with an equal name, but comparison would still work. </p> <p>What could be the problem with such a definition?</p> <p>The reason for defining <code>__eq__</code>, <code>__lt__</code> and others by <code>Value</code> is to be able to sort instances by <code>Value</code> and to be able to use functions like <code>max</code>. For example, the class should represent a physical output of a device (say a heating element). Each of these outputs has a unique Name. The Value is the power of the output device. To find the optimal combination of heating elements to turn on, it is useful to be able to compare them by power (Value). In a set or dictionary, however, it should not be possible to have multiple outputs with the same name. Of course, different outputs with different names might easily have equal power.</p>
3
2016-07-27T20:03:02Z
38,622,299
<p>The problem is that it does not make sense: a hash is used to do efficient bucketing of objects. Consequently, when you have a set, which is implemented as a hash table, each hash points to a bucket, which is usually a list of elements. In order to check if an element is in the set (or another hash based container) you go to the bucket pointed to by the hash and then you iterate over all elements in the list, comparing them one by one.</p> <p>In other words - a hash is not supposed to be a comparator (as it can, and sometimes should, give you false positives). In particular, in your example, your set will not work - it will not recognize duplicates, as they do not compare equal to each other.</p> <pre><code>class Foo: def __eq__(self,other): return self.Value==other.Value def __hash__(self): return id(self.Name) a = set() el = Foo() el.Name = 'x' el.Value = 1 el2 = Foo() el2.Name = 'x' el2.Value = 2 a.add(el) a.add(el2) print len(a) # should be 1, right? Well it is 2 </code></pre> <p>Actually it is even worse than that: if you have 2 objects with the same values but different names, they are not recognized as the same either</p> <pre><code>class Foo: def __eq__(self,other): return self.Value==other.Value def __hash__(self): return id(self.Name) a = set() el = Foo() el.Name = 'x' el.Value = 2 el2 = Foo() el2.Name = 'a' el2.Value = 2 a.add(el) a.add(el2) print len(a) # should be 1, right? Well it is 2 again </code></pre> <p>while doing it properly (thus, "if a == b, then hash(a) == hash(b)") gives:</p> <pre><code>class Foo: def __eq__(self,other): return self.Name==other.Name def __hash__(self): return id(self.Name) a = set() el = Foo() el.Name = 'x' el.Value = 1 el2 = Foo() el2.Name = 'x' el2.Value = 2 a.add(el) a.add(el2) print len(a) # is really 1 </code></pre> <h1>Update</h1> <p>There is also a non-deterministic part, which is hard to reproduce easily, but essentially a hash does not uniquely define a bucket.
Usually it is like</p> <pre><code>bucket_id = hash(object) % size_of_allocated_memory </code></pre> <p>so things that have different hashes can still end up in the same bucket. Consequently, you can get two elements considered equal to each other (inside a set) due to equality of Values even though Names are different, as well as the other way around, depending on the actual internal implementation, memory constraints etc. </p> <p>In general there are many more examples where things can go wrong, as hash is <strong>defined</strong> as a function <code>h : X -&gt; Z</code> such that <code>x == y =&gt; h(x) == h(y)</code>, thus people implementing containers, authorization protocols, and other tools are free to <strong>assume</strong> this property. If you break it - every single tool using hashes can break. Furthermore, it can break <strong>over time</strong>, meaning that you update some library and your code stops working, as a valid update to the underlying libraries (relying on the above assumption) can end up exposing your violation of it. </p> <h1>Update 2</h1> <p>Finally, in order to solve your issue - you simply should not define your <strong>eq</strong> and <strong>lt</strong> operators to handle sorting. They are about <strong>actual comparison of the elements</strong>, which should be compatible with the rest of the behaviours. All you have to do is define a separate <strong>comparator</strong> (or key function) and use it in your sorting routines (sorting in Python accepts any key function, so you do not need to rely on <code>&lt;</code>, <code>&gt;</code> etc.). The other way around is to instead have valid <code>&lt;</code>, <code>&gt;</code>, <code>=</code> defined on values, but in order to keep names unique - keep a set with... well... names, and not the objects themselves. Whichever path you choose - the crucial element here is: <strong>equality and hashing have to be compatible</strong>, that's all.</p>
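The answer's closing suggestion can be sketched as follows. The class name and attributes are made up for illustration: identity (`__eq__`/`__hash__`) both live on the name, while value-based ordering is done with a key function instead of `__lt__`, so equality and hashing stay compatible:

```python
from operator import attrgetter

class Output(object):
    """Hypothetical heating-element output: unique name, comparable power."""
    def __init__(self, name, power):
        self.name = name
        self.power = power

    # equality and hash agree: both are based on the name
    def __eq__(self, other):
        return self.name == other.name

    def __hash__(self):
        return hash(self.name)

# the second 'A' is a duplicate name and is discarded by the set
outputs = {Output('A', 5), Output('B', 2), Output('A', 9)}
print(len(outputs))  # 2

# sort / max by power without touching __eq__ at all
by_power = sorted(outputs, key=attrgetter('power'))
strongest = max(outputs, key=attrgetter('power'))
```

Here `sorted` and `max` never consult `__eq__` or `__hash__`; they only call the key function, which is exactly the separation the answer recommends.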
5
2016-07-27T20:09:12Z
[ "python", "class", "hash", "equivalence" ]
Is it really necessary to hash the same for classes that compare the same?
38,622,200
<p>Reading <a href="https://stackoverflow.com/a/1608882/143211">this answer</a> it seems that if <code>__eq__</code> is defined in a custom class, <code>__hash__</code> needs to be defined as well. This is understandable.<br> However it is not clear why - effectively - <code>__eq__</code> should be the same as <code>self.__hash__()==other.__hash__()</code></p> <p>Imagining a class like this: </p> <pre><code>class Foo: ... self.Name self.Value ... def __eq__(self,other): return self.Value==other.Value ... def __hash__(self): return id(self.Name) </code></pre> <p>This way class instances could be compared by value, which could be the only reasonable use, but considered identical by name.<br> Then a <code>set</code> could not contain multiple instances with an equal name, but comparison would still work. </p> <p>What could be the problem with such a definition?</p> <p>The reason for defining <code>__eq__</code>, <code>__lt__</code> and others by <code>Value</code> is to be able to sort instances by <code>Value</code> and to be able to use functions like <code>max</code>. For example, the class should represent a physical output of a device (say a heating element). Each of these outputs has a unique Name. The Value is the power of the output device. To find the optimal combination of heating elements to turn on, it is useful to be able to compare them by power (Value). In a set or dictionary, however, it should not be possible to have multiple outputs with the same name. Of course, different outputs with different names might easily have equal power.</p>
3
2016-07-27T20:03:02Z
38,622,907
<p>It is possible to implement your class like this and <em>not</em> have any problems. However, you have to be 100% sure that no two different objects will ever produce the same hash. Consider the following example:</p> <pre><code>class Foo: def __init__(self, name, value): self.name= name self.value= value def __eq__(self, other): return self.value == other.value def __hash__(self): return hash(self.name[0]) s= set() s.add(Foo('a', 1)) s.add(Foo('b', 1)) print(len(s)) # output: 2 </code></pre> <p>But you have a problem if a hash collision occurs:</p> <pre><code>s.add(Foo('abc', 1)) print(len(s)) # output: 2 </code></pre> <p>In order to prevent this, you would have to know <strong>exactly</strong> how the hashes are generated (which, if you rely on functions like <code>id</code> or <code>hash</code>, might vary between implementations!) and also the values of the attribute(s) used to generate the hash (<code>name</code> in this example). That's why ruling out the possibility of a hash collision is very difficult, if not impossible. It's basically like begging for unexpected things to happen.</p>
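A simple way to avoid this whole class of bugs is to derive `__hash__` from exactly the attributes that `__eq__` compares, for example by hashing a tuple of them. A minimal sketch:

```python
class Foo(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def _key(self):
        # single source of truth for identity
        return (self.name, self.value)

    def __eq__(self, other):
        return self._key() == other._key()

    def __hash__(self):
        return hash(self._key())

s = {Foo('a', 1), Foo('a', 1), Foo('b', 1)}
print(len(s))  # 2: equal objects can never land in different buckets
```

Because both methods go through `_key()`, equal objects are guaranteed to have equal hashes, so no collision-dependent behaviour can creep in.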
-1
2016-07-27T20:46:54Z
[ "python", "class", "hash", "equivalence" ]
Python block thread if list is empty
38,622,284
<p>Is there a way to make a thread go to sleep if the list is empty and wake it up again when there are items? I don't want to use Queues since I want to be able to index into the data structure.</p>
4
2016-07-27T20:08:26Z
38,622,859
<p>I would go with this:</p> <pre><code>import threading class MyList (list): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._cond = threading.Condition() def append(self, item): with self._cond: super().append(item) self._cond.notify_all() def pop_or_sleep(self): with self._cond: while not len(self): self._cond.wait() return self.pop() </code></pre>
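A minimal usage sketch of the class above (repeated here so the snippet is self-contained): a producer thread appends after a short delay while the main thread blocks in `pop_or_sleep` until the `notify_all()` wakes it:

```python
import threading
import time

class MyList(list):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cond = threading.Condition()

    def append(self, item):
        with self._cond:
            super().append(item)
            self._cond.notify_all()

    def pop_or_sleep(self):
        with self._cond:
            while not len(self):
                self._cond.wait()
            return self.pop()

items = MyList()

def producer():
    time.sleep(0.1)        # let the consumer get there first and sleep
    items.append('hello')

threading.Thread(target=producer).start()
got = items.pop_or_sleep()  # blocks until the producer's notify_all()
print(got)  # hello
```

Since `MyList` is still a `list`, indexing (`items[0]`) keeps working, which was the reason for avoiding `Queue` in the first place.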
1
2016-07-27T20:43:41Z
[ "python", "multithreading", "list" ]
Python block thread if list is empty
38,622,284
<p>Is there a way to make a thread go to sleep if the list is empty and wake it up again when there are items? I don't want to use Queues since I want to be able to index into the data structure.</p>
4
2016-07-27T20:08:26Z
38,638,174
<p>Yes, the solution will probably involve a <a href="https://docs.python.org/3/library/threading.html#condition-objects" rel="nofollow"><code>threading.Condition</code></a> variable as you note in comments.</p> <p>Without more information or a code snippet, it's difficult to know what API suits your needs. How are you producing new elements? How are you consuming them? At base, you could do something like this:</p> <pre><code>cv = threading.Condition() elements = [] # elements is protected by, and signaled by, cv def produce(...): with cv: ... add elements somehow ... cv.notify_all() def consume(...): with cv: while len(elements) == 0: cv.wait() ... remove elements somehow ... </code></pre>
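A concrete, runnable filling-in of that skeleton (the item value is an arbitrary placeholder):

```python
import threading

cv = threading.Condition()
elements = []  # protected by, and signaled through, cv

def produce(item):
    with cv:
        elements.append(item)
        cv.notify_all()

def consume():
    with cv:
        while len(elements) == 0:
            cv.wait()
        return elements.pop(0)

t = threading.Thread(target=produce, args=(42,))
t.start()
result = consume()  # blocks if produce() has not run yet
t.join()
print(result)  # 42
```

The `while` loop (rather than a plain `if`) around `cv.wait()` matters: a woken thread must re-check the predicate, since another consumer may have emptied the list first.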
1
2016-07-28T13:39:29Z
[ "python", "multithreading", "list" ]
Creating a max function from scratch (python)
38,622,287
<p>So I'm trying to create a function that determines the maximum of a set of values, from scratch. </p> <p>I'm not sure how to make the function change the new value of Max after a new Max has been found, I'm also not sure if my usage of args is correct. </p> <pre><code>def Maximum(*args): Max = 0 for item in List: if item &gt; Max: item = Max return Max List = [1,5,8,77,24,95] maxList = Maximum(List) print str(maxList) </code></pre> <p>Any help would be hugely appreciated.</p>
0
2016-07-27T20:08:37Z
38,622,335
<p>You've got one line of code backwards. Your if statement is effectively saying that if item is greater than Max, set item to Max. You need to flip that to say that if item is greater than Max, set Max to item.</p> <pre><code>if item &gt; Max: Max = item return Max </code></pre> <p>Also, I'm not an expert in Python, but I think you need to change the <code>List</code> inside your function to match the parameter name, in this case <code>args</code>.</p>
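Putting both fixes together, and starting the running maximum from the first element instead of 0 so that negative numbers also work, a corrected version might look like this (accepting either a list or separate arguments via `*args` is just one possible convention):

```python
def maximum(*args):
    # allow both maximum([1, 2, 3]) and maximum(1, 2, 3)
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        args = args[0]
    best = args[0]            # start from a real element, not 0
    for item in args:
        if item > best:
            best = item
    return best

print(maximum([1, 5, 8, 77, 24, 95]))  # 95
print(maximum(-7, -3, -12))            # -3
```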
1
2016-07-27T20:11:39Z
[ "python", "python-2.7", "function", "max" ]
Creating a max function from scratch (python)
38,622,287
<p>So I'm trying to create a function that determines the maximum of a set of values, from scratch. </p> <p>I'm not sure how to make the function change the new value of Max after a new Max has been found, I'm also not sure if my usage of args is correct. </p> <pre><code>def Maximum(*args): Max = 0 for item in List: if item &gt; Max: item = Max return Max List = [1,5,8,77,24,95] maxList = Maximum(List) print str(maxList) </code></pre> <p>Any help would be hugely appreciated.</p>
0
2016-07-27T20:08:37Z
38,622,611
<p><code>*args</code> collects positional arguments into a tuple. You are passing a list as a single argument here, so your code should look something like this:</p> <pre><code>def maximum(nums): Max = nums[0] for item in nums: if item &gt; Max: Max = item return Max List = [1,5,8,77,24,95] print maximum(List) </code></pre> <p>This gives the result: 95. (Starting <code>Max</code> from the first element instead of 0 also keeps it correct for lists of negative numbers.)</p> <p>On the other hand, you can use the <code>max</code> built-in function to get the maximum number in the list. </p> <pre><code>print max(List) </code></pre>
0
2016-07-27T20:28:11Z
[ "python", "python-2.7", "function", "max" ]
How to make Wagtail search case-insensitive
38,622,289
<p>I use Wagtail search:</p> <pre><code>query = self.request.query_params questions = models.Questions.objects.filter( answer__isnull=False, owner__isnull=False).exclude(answer__exact='') s = get_search_backend() results = s.search(query[u'question'], questions) </code></pre> <p>And this is how I set up the indexing of my <code>Questions</code> model:</p> <pre><code>search_fields = [ index.SearchField('question', partial_match=True, boost=2), index.FilterField('answer'), index.FilterField('owner_id') ] </code></pre> <p>But it is case-sensitive. So the queries <code>how</code> and <code>How</code> will give different results. </p> <p>I need to make my search behave this way:</p> <p>When I type either <code>how</code> or <code>How</code>, it should return</p> <pre><code>how to... How to... The way how... THE WAY HoW... </code></pre> <p>In other words, it should find all mentions of <code>how</code> in all possible cases.</p> <p>How do I make it work?</p> <p>P.S.: I'm using the default backend, and I'm free to change it if needed.</p>
3
2016-07-27T20:08:39Z
38,665,074
<p>With Wagtail's elasticsearch backend, fields indexed with <code>partial_match=True</code> are tokenized in <a href="https://github.com/torchbox/wagtail/blob/v1.5.3/wagtail/wagtailsearch/backends/elasticsearch.py#L623" rel="nofollow">lowercase</a>. So to accomplish case-insensitive search all you need to do is lowercase the query string:</p> <pre><code>results = s.search(query[u'question'].lower(), questions) </code></pre>
3
2016-07-29T18:10:37Z
[ "python", "django", "search", "elasticsearch", "wagtail" ]
How can I ask for user input using Python 3.x?
38,622,296
<p>I'm creating an assignment for school.</p> <p>First create the pseudocode, etc</p> <p>Then write a compare function that <code>returns 1 if a &gt; b , 0 if a == b , and -1 if a &lt; b</code></p> <p>I've written that part</p> <pre><code>def compare(a, b): return (a &gt; b) - (a &lt; b) </code></pre> <p>BUT then I have to prompt the user to input the numbers for comparison. </p> <p>I have no idea how to write the user input prompt.</p>
-6
2016-07-27T20:09:00Z
38,622,371
<p>Since you're using Python 3.x, you can use:</p> <pre><code>def compare(a, b): a = int(input("Insert value A: ")) b = int(input("Insert value B: ")) return (a &gt; b) - (a &lt; b) </code></pre> <p>Since <code>input()</code> in Python 3.x always returns a string and does not convert the data type for you, you have to explicitly convert to <code>int</code> with <code>int()</code>, like this:</p> <pre><code>a = int(input("Insert value A: ")) </code></pre> <p>But if you want to create a robust function, you'll have to validate the inputs A and B, to make sure your program doesn't accept <code>"one"</code> or <code>"twelve"</code> as input. </p> <p>You can take a deeper look here: <a href="http://stackoverflow.com/questions/23294658/asking-the-user-for-input-until-they-give-a-valid-response">Asking the user for input until they give a valid response</a></p>
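The validation linked above can be sketched as a retry loop around `int()`. The `read` parameter is a made-up hook used here only so the loop can be demonstrated non-interactively; in a real script you would call `ask_int("Insert value A: ")` and let it default to `input`:

```python
def ask_int(prompt, read=input):
    """Keep prompting until the user enters a valid integer."""
    while True:
        raw = read(prompt)
        try:
            return int(raw)
        except ValueError:
            print("Please enter a whole number.")

# non-interactive demo: fake a user who types garbage, then "7"
fake_user = iter(["twelve", "7"])
value = ask_int("Insert value A: ", read=lambda _prompt: next(fake_user))
print(value)  # 7
```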
0
2016-07-27T20:13:41Z
[ "python", "python-3.x", "input" ]
How can I ask for user input using Python 3.x?
38,622,296
<p>I'm creating an assignment for school.</p> <p>First create the pseudocode, etc</p> <p>Then write a compare function that <code>returns 1 if a &gt; b , 0 if a == b , and -1 if a &lt; b</code></p> <p>I've written that part</p> <pre><code>def compare(a, b): return (a &gt; b) - (a &lt; b) </code></pre> <p>BUT then I have to prompt the user to input the numbers for comparison. </p> <p>I have no idea how to write the user input prompt.</p>
-6
2016-07-27T20:09:00Z
38,622,448
<p>Use the command:</p> <pre><code> variable = input("You can write something here: ") </code></pre> <p>Then, when you run the .py file, the terminal will show the prompt:</p> <pre><code> You can write something here: </code></pre> <p>There you can simply type your input and press Enter.</p> <p>And, as said above, you may wish to convert the input to an int or float using either <code>int()</code> or <code>float()</code> to make sure you're getting a valid input.</p>
0
2016-07-27T20:17:26Z
[ "python", "python-3.x", "input" ]
How can I execute JavaScript code from Python?
38,622,385
<p>Let's say that I have this code inside a JavaScript file:</p> <pre><code>var x = 10; x = 10 - 5; console.log(x); function greet() { console.log("Hello World!"); } greet() </code></pre> <p>How would I use Python to execute this code and <em>"print"</em> <code>x</code> and <code>Hello World!</code>?<br /> Here is some pseudo code that further explains what I'm thinking:</p> <pre><code># 1. open the script script = open("/path/to/js/files.js", "r") # 2. get the script content script_content = script.read() # 3. close the file script.close() # 4. execute the script content and "print" "x" and "Hello World!" x = js.exec(script_content) </code></pre> <p>And the expected result would look like this:</p> <pre><code>&gt;&gt;&gt; 5 &gt;&gt;&gt; "Hello World!" </code></pre>
2
2016-07-27T20:14:49Z
38,623,079
<p>The module <code>Naked</code> does exactly this. <code>pip install Naked</code> (or install from source if you prefer) and import the library shell functions as follows:</p> <pre><code>from Naked.toolshed.shell import execute_js, muterun_js response = muterun_js('file.js') if response.exitcode == 0: print(response.stdout) else: sys.stderr.write(response.stderr) </code></pre> <p>For your particular case, with file.js as</p> <pre><code>var x = 10; x = 10 - 5; console.log(x); function greet() { console.log("Hello World!"); } greet() </code></pre> <p>the output is <code>'5\nHello World!\n'</code>, which you can parse as desired.</p>
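If you'd rather avoid an extra dependency, the standard library's `subprocess` can do the same job, assuming a JavaScript runtime such as `node` is installed and on the PATH. That assumption is why `run_js` is only defined, not executed, in this sketch; only the output parsing is demonstrated:

```python
import subprocess

def run_js(path):
    # requires a JS runtime (e.g. node) to be installed and on PATH
    result = subprocess.run(['node', path], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout

def parse_output(stdout):
    # '5\nHello World!\n' -> ['5', 'Hello World!']
    return stdout.splitlines()

lines = parse_output('5\nHello World!\n')
print(lines)  # ['5', 'Hello World!']
```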
3
2016-07-27T20:59:42Z
[ "javascript", "python", "python-3.x", "python-3.5" ]
python2.7 - store boolean values as individual bits when writing to disk
38,622,386
<p>I'm writing code that converts integers into padded 8-bit strings. I would then like to write those strings to a binary file. I am having problems figuring out the proper <code>dtype</code> to be used with the numpy array that I am currently using.</p> <p>In the following code when I have <code>bin_data</code> variable set up with <code>dtype=np.int8</code> the output is:</p> <pre><code>$ python bool_dtype.py a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 1, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True [0 0 0 0 1 0 0 0 0] 16 </code></pre> <p>When <code>bin_data</code> is set as <code>dtype=np.bool_</code> the output is always true as shown below:</p> <pre><code>$ python bool_dtype.py a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 1, bool(a[j]) = True a[j] = 1, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 0, bool(a[j]) = True a[j] = 1, bool(a[j]) = True a[j] = 1, bool(a[j]) = True [ True True True True True True True True True] 16 </code></pre> <p>When I look at the xxd dump of the data when using the <code>dtype=np.int8</code> I see an expected byte being used to represent each bit (1,0) IE 00000001 or 00000000. 
Using <code>dtype=np.bool_</code> leads to the same problem.</p> <h1>So the two main questions I have are</h1> <ol> <li><p>Why is bool always reading as True when reading an array element</p></li> <li><p>How can I more efficiently store the data when I write it to the file such that a single bit is <strong>not</strong> stored as a byte but instead just concatenated onto the previous element?</p></li> </ol> <p>Here is the code in question, Thanks!</p> <pre><code>#!/usr/bin/python2.7 import numpy as np import os # x = np.zeros(200,dtype=np.bool_) # for i in range(0,len(x)): # if i%2 != 1: # x[i] = 1 data_size = 2 data = np.random.randint(0,9,data_size) tx='' for i in range(0,data_size): tx += chr(data[i]) data = tx a = np.zeros(8,dtype=np.int8) bin_data = np.zeros(len(data)*8,dtype=np.bool_) # each i is a character byte in data string for i in range(0,len(data)): # formats data in 8bit binary without the 0b prefix a = format(ord(data[i]),'b').zfill(8) for j in range(0,len(a)): bin_data[i*len(a) + j] = a[j] print("a[j] = {}, bool(a[j]) = {}").format(a[j], bool(a[j])) print bin_data[1:10] print len(bin_data) path = os.getcwd() path = path + '/bool_data.bin' data_file = open(path, "wb") data_file.write(bin_data) data_file.close() </code></pre> <h2>edit:</h2> <p>What I expect to see when using <code>dtype=np.bool_</code></p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.zeros(2,dtype=np.bool_) &gt;&gt;&gt; a array([False, False], dtype=bool) &gt;&gt;&gt; a[1] = 1 &gt;&gt;&gt; a array([False, True], dtype=bool) </code></pre>
3
2016-07-27T20:14:51Z
38,622,864
<ol> <li>The reason that bool is always returning true is that a[j] is a nonempty <strong>string</strong>. You need to cast a[j] to an int before testing with bool (and also before assigning it as an entry to a numpy bool array).</li> <li>You can just call numpy.packbits to compress your boolean array into a uint8 array, (it pads for you if needed) and then call numpy.unpackbits to reverse the operation. </li> </ol> <p><b>Edit:</b><br/><br/> If your boolean array has a length that isn't a multiple of 8, after packing and unpacking your array will be zero-padded to make the length a multiple of 8. In this case, you have two options:</p> <ol> <li>If you can safely truncate your data to have a number of bits that is divisible by 8, then do so. Something like: <code>data=data[:8*(len(data)/8)]</code></li> <li>If you can't afford to truncate, then you are going to record the number of meaningful bits somehow. I suggest making the first byte of your packed data equal to the number of meaningful bits mod 8. This will add only one byte of memory overhead, and not much compute time. Something like:</li> </ol> <h1>Packing</h1> <pre><code>bool_data = np.array([True, True, True]) nbits = len(bool_data) rem = nbits % 8 nbytes = nbits/8 if rem: nbytes += 1 data = np.empty(1+nbytes, dtype=np.uint8) data[0] = rem data[1:] = np.packbits(bool_data) </code></pre> <h1>Unpacking</h1> <pre><code>rem = data[0] bool_data = np.unpackbits(data[1:]) if rem: bool_data = bool_data[:-(8-rem)] </code></pre>
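For environments without numpy, the same length-prefix scheme can be sketched in pure Python, which also makes the bit order and padding explicit (partial final bytes are left-shifted so padding zeros sit on the right, matching `numpy.packbits`' big-endian convention):

```python
def pack_bools(bools):
    """Pack booleans 8-per-byte; first byte records len(bools) % 8."""
    out = bytearray([len(bools) % 8])
    for i in range(0, len(bools), 8):
        chunk = bools[i:i + 8]
        byte = 0
        for bit in chunk:
            byte = (byte << 1) | int(bit)
        byte <<= (8 - len(chunk)) % 8  # pad a partial final byte on the right
        out.append(byte)
    return bytes(out)

def unpack_bools(data):
    rem = data[0]
    bits = []
    for byte in data[1:]:
        for shift in range(7, -1, -1):
            bits.append(bool((byte >> shift) & 1))
    if rem:
        bits = bits[:-(8 - rem)]  # drop the padding bits
    return bits

original = [True, True, False, True, False, False, True, False, True, True, True]
packed = pack_bools(original)
print(len(packed))  # 3 bytes: 1 length byte + 2 data bytes
assert unpack_bools(packed) == original
```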
5
2016-07-27T20:44:10Z
[ "python", "numpy" ]
Change True/False value to discrete value in pandas dataframe with np.where()
38,622,487
<p>I am trying to assign a state name to a list of university names: </p> <pre><code>df = pd.DataFrame({'College': pd.Series(['University of Michigan', 'University of Florida', 'Iowa State'])}) State = ['Michigan', 'Iowa'] df['State'] = np.where(df['College'].str.contains('|'.join(State)), 'state','--') </code></pre> <p>I would like to replace the "state" value that appears when there is a match with the actual name of the state. Example: University of Michigan -> Michigan (rather than "state"). Ultimately, "State" will have all 50 states so I can't write 50 "np.where" statements for each state name. </p> <p>Thank you for your help. </p>
3
2016-07-27T20:20:00Z
38,622,524
<p>You could use <code>str.extract</code> here, instead of <code>np.where</code>:</p> <pre><code>In [290]: df['State'] = df['College'].str.extract('({})'.format('|'.join(State)), expand=True) In [291]: df Out[291]: College State 0 University of Michigan Michigan 1 University of Florida NaN 2 Iowa State Iowa </code></pre>
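If pandas isn't a hard requirement, the same extraction can be sketched with the standard library's `re` module, returning `None` where pandas would give `NaN`:

```python
import re

states = ['Michigan', 'Iowa']
pattern = re.compile('|'.join(map(re.escape, states)))

def extract_state(college):
    match = pattern.search(college)
    return match.group(0) if match else None

colleges = ['University of Michigan', 'University of Florida', 'Iowa State']
print([extract_state(c) for c in colleges])
# ['Michigan', None, 'Iowa']
```

`re.escape` is there as a precaution in case a name ever contains a regex metacharacter; for plain state names it is a no-op.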
3
2016-07-27T20:22:47Z
[ "python", "pandas", "contains", "assign", "extraction" ]
Change True/False value to discrete value in pandas dataframe with np.where()
38,622,487
<p>I am trying to assign a state name to a list of university names: </p> <pre><code>df = pd.DataFrame({'College': pd.Series(['University of Michigan', 'University of Florida', 'Iowa State'])}) State = ['Michigan', 'Iowa'] df['State'] = np.where(df['College'].str.contains('|'.join(State)), 'state','--') </code></pre> <p>I would like to replace the "state" value that appears when there is a match with the actual name of the state. Example: University of Michigan -> Michigan (rather than "state"). Ultimately, "State" will have all 50 states so I can't write 50 "np.where" statements for each state name. </p> <p>Thank you for your help. </p>
3
2016-07-27T20:20:00Z
38,622,590
<pre><code>States = [ 'Washington', 'Wisconsin', 'West Virginia', 'Florida', 'Wyoming', 'New Hampshire', 'New Jersey', 'New Mexico', 'National', 'North Carolina', 'North Dakota', 'Nebraska', 'New York', 'Rhode Island', 'Nevada', 'Guam', 'Colorado', 'California', 'Georgia', 'Connecticut', 'Oklahoma', 'Ohio', 'Kansas', 'South Carolina', 'Kentucky', 'Oregon', 'South Dakota', 'Delaware', 'District of Columbia', 'Hawaii', 'Puerto Rico', 'Texas', 'Louisiana', 'Tennessee', 'Pennsylvania', 'Virginia', 'Virgin Islands', 'Alaska', 'Alabama', 'American Samoa', 'Arkansas', 'Vermont', 'Illinois', 'Indiana', 'Iowa', 'Arizona', 'Idaho', 'Maine', 'Maryland', 'Massachusetts', 'Utah', 'Missouri', 'Minnesota', 'Michigan', 'Montana', 'Northern Mariana Islands', 'Mississippi' ] state_str = '|'.join(States) df.update(df.College.str.extract(r'(?P&lt;State&gt;{})'.format(state_str), expand=True)) df </code></pre> <p><a href="http://i.stack.imgur.com/zw2Et.png" rel="nofollow"><img src="http://i.stack.imgur.com/zw2Et.png" alt="enter image description here"></a></p>
2
2016-07-27T20:26:41Z
[ "python", "pandas", "contains", "assign", "extraction" ]
Maven build with Java: How to execute script located in resources?
38,622,523
<p>I am building my Java project with Maven and I have a script file that ends up in the <code>target/classes/resources</code> folder. While I can access the file itself via <code>this.getClass.getResource("/lookUpScript.py").getPath()</code>, I cannot execute a shell command with <code>"." + this.getClass.getResource("/lookUpScript.py").getPath()</code>; this ultimately ends up being <code>./lookUpScript.py</code>. To execute the shell command I am using a method from my company's codebase, which works fine with any command that does not involve a file. Is there a standard way of accessing files located in the resources area of a Maven build that might fix this? </p>
1
2016-07-27T20:22:45Z
38,622,730
<p>The Maven source path for the artifacts is not the same as the path that gets generated when you run or export the project. You can check this by exporting the project as a Jar/War/Ear file and viewing it with WinRAR or any other archive tool.</p> <p>If it's a jar project, the resources should end up at the root of the jar, parallel to the <code>com</code> directory, but you can double-check it.</p>
1
2016-07-27T20:36:04Z
[ "java", "python", "shell", "maven" ]
Installed Python with 32 bit install, appears as 64 bit
38,622,585
<p>I need to run the 32 bit version of Python. I thought that was what I had running on my machine, as that is the installer I downloaded, and when I rerun the installer it refers to the currently installed version of Python as "Python 3.5 32-Bit". </p> <p>However, when I run <code>platform.architecture()</code> it states that I am running 64 bit. I know this isn't always reliable, so I also ran <code>sys.maxsize</code> and it returns <code>9223372036854775807</code>, so I am definitely running the 64 bit install. </p> <p>I need to run the 32 bit version of Python to interface with the 32 bit Java using pywinauto. I'm running Windows 7 Enterprise, 64-bit.</p>
3
2016-07-27T20:26:22Z
38,622,996
<p>You can determine if your Python is truly 64-bit by running this code while watching Task Manager in Windows (or its equivalent in Linux) to see how much memory the program is able to allocate. If it tops out around 2GB (it could be 3GB in some cases, I am not sure) then it is 32-bit Python; otherwise it is 64-bit. On my computer the program ran until it had allocated about 9GB and then almost hung the machine.</p> <pre><code>a=[] while(True): a.append([1234]*10000000) </code></pre>
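A far less destructive check is to ask the interpreter for the size of a C pointer directly; it is 4 bytes on a 32-bit build and 8 on a 64-bit one:

```python
import struct
import sys

bits = struct.calcsize('P') * 8  # size of a C pointer, in bits
print(bits)  # 32 on a 32-bit Python, 64 on a 64-bit Python

# consistent with the sys.maxsize check from the question
assert (sys.maxsize > 2**32) == (bits == 64)
```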
2
2016-07-27T20:53:30Z
[ "python" ]
Installed Python with 32 bit install, appears as 64 bit
38,622,585
<p>I need to run the 32 bit version of Python. I thought that was what I had running on my machine, as that is the installer I downloaded, and when I rerun the installer it refers to the currently installed version of Python as "Python 3.5 32-Bit". </p> <p>However, when I run <code>platform.architecture()</code> it states that I am running 64 bit. I know this isn't always reliable, so I also ran <code>sys.maxsize</code> and it returns <code>9223372036854775807</code>, so I am definitely running the 64 bit install. </p> <p>I need to run the 32 bit version of Python to interface with the 32 bit Java using pywinauto. I'm running Windows 7 Enterprise, 64-bit.</p>
3
2016-07-27T20:26:22Z
38,623,318
<p>This sounds like you might have multiple instances of Python installed on your machine. Verify that you're calling the correct one by invoking it explicitly via its full path and noting whether it still reports 64-bit or 32-bit.</p> <p>Moving forward, using a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> might simplify any confusion about which Python installation, and which installed packages, are being used.</p>
3
2016-07-27T21:15:27Z
[ "python" ]
mysql source command does nothing inside docker container
38,622,594
<h1>Description</h1> <p>I'm running docker container with mysql in it and I want to run python script after mysql started, which will apply dump on it.</p> <p>Here is a snippet of <strong>Dockerfile</strong>:</p> <pre><code>FROM mysql:5.6 RUN apt-get update &amp;&amp; \ apt-get install -y python ADD apply_dump.py /usr/local/bin/apply_dump.py ADD starter.sh /usr/local/bin/starter.sh CMD ["/usr/local/bin/starter.sh"] </code></pre> <p><strong>starter.sh</strong>:</p> <pre><code>nohup python '/usr/local/bin/apply_dump.py' &amp; mysqld </code></pre> <p><strong>apply_dump.py</strong>:</p> <pre><code>import os import urllib import gzip import shutil import subprocess import time import logging import sys # wait for mysql server time.sleep(5) print "Start dumping" dumpName = "ggg.sql" dumpGzFile = dumpName + ".gz" dumpSqlFile = dumpName + ".sql" print "Loading dump {}...".format(dumpGzFile) urllib.urlretrieve('ftp://ftpUser:ftpPassword@ftpHost/' + dumpGzFile, '/tmp/' + dumpGzFile) print "Extracting dump..." with gzip.open('/tmp/' + dumpGzFile, 'rb') as f_in: with open('/tmp/' + dumpSqlFile, 'wb') as f_out: shutil.copyfileobj(f_in, f_out) print "Dropping database..." subprocess.call(["mysql", "-u", "root", "-proot", "-e", "drop database if exists test_db"]) print "Creating database..." subprocess.call(["mysql", "-u", "root", "-proot", "-e", "create schema test_db"]) print "Applying dump..." subprocess.call(["mysql", "--user=root", "--password=root", "test_db", "-e" "source /tmp/{}".format(dumpSqlFile)]) print "Done" </code></pre> <p>content of <strong>ggg.sql.gz</strong> is pretty simple:</p> <pre><code>CREATE TABLE test_table (id INT NOT NULL,PRIMARY KEY (id)); </code></pre> <h2>Problem</h2> <p>Database created, but table is not. If I'll go to container and will run this script manually, table will be created. If I'll replace <code>source</code> command with direct sql create statement that will work as well. 
But in reality the dump file will be pretty big, and only the <code>source</code> command will cope with it (or is there another way?). Am I doing something wrong?</p> <p>Thanks in advance.</p>
1
2016-07-27T20:26:59Z
38,662,417
<p>Try feeding your source SQL file to the MySQL command on standard input instead of using the <code>-e</code> flag. Note that <code>subprocess.call</code> with an argument list does not go through a shell, so a literal <code>"&lt;"</code> argument would not be interpreted as a redirect; pass an open file as <code>stdin</code> instead:</p> <pre><code>with open("/tmp/%s" % dumpSqlFile) as f: subprocess.call(["mysql", "--user=root", "--password=root", "test_db"], stdin=f) </code></pre> <p>This imports your SQL file using the widely used syntax:</p> <p><code>mysql --user=root --password=root test_db &lt; /tmp/source.sql</code></p>
0
2016-07-29T15:26:33Z
[ "python", "mysql", "docker" ]
Execute IPython notebook cell from python script
38,622,610
<p>In the IPython notebook, you can execute an outside script, say <code>test.py</code>, using the run magic:</p> <pre><code>%run test.py </code></pre> <p>Is there a way to do the opposite, i.e. given an IPython notebook, accessing and then running a particular cell inside it from a python script?</p>
2
2016-07-27T20:28:11Z
38,645,985
<p>A Jupyter (or IPython) file with the "ipynb" extension is a JSON file. The cells are stored under the key "cells" (<code>["cells"]</code>). You choose the cell by its index (e.g. <code>[0]</code>) and take its source with <code>["source"]</code>. In return you get an array with one element per line, so for a one-line cell you take the first element <code>[0]</code>.</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; from pprint import pprint &gt;&gt;&gt; with open('so1.ipynb', 'r') as content_file: ... content = content_file.read() ... &gt;&gt;&gt; data=json.loads(content) &gt;&gt;&gt; data["cells"][0]["source"][0] '1+1' &gt;&gt;&gt; eval(data["cells"][0]["source"][0]) 2 &gt;&gt;&gt; data["cells"][1]["source"][0] '2+2' &gt;&gt;&gt; eval(data["cells"][1]["source"][0]) 4 </code></pre> <p><strong>EDIT:</strong></p> <p>To run other python scripts in cells that have %run:</p> <pre><code>os.system(data["cells"][2]["source"][0].replace("%run ","")) </code></pre> <p>Or replace it with the following if the cell uses the -i option:</p> <pre><code>execfile(data["cells"][2]["source"][0].replace("%run -i ","")) </code></pre> <p>See <a href="http://stackoverflow.com/questions/3781851/run-a-python-script-from-another-python-script-passing-in-args">Run a python script from another python script, passing in args</a> for more info.</p>
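Building on the JSON structure above, a small helper (the function name here is my own) that returns a cell's complete source — joining the per-line list rather than taking only the first element — could look like:

```python
import json

def read_cell_source(path, index):
    """Return the complete source of cell `index` in a notebook file."""
    with open(path) as f:
        nb = json.load(f)
    # "source" is a list of lines; join them to get the whole cell
    return "".join(nb["cells"][index]["source"])
```

`eval`/`exec` of the returned string then behaves like the interactive examples above, but also works for multi-line cells.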
1
2016-07-28T20:18:07Z
[ "python", "ipython" ]
Get list of missing documents from MongodDB
38,622,666
<p>First mongodb project so it's probably obvious but my google fu is insufficient or I'm approaching it wrong.</p> <p>I have a collection in mongodb. Each document has an <code>id</code> field that is unique. In my program I end up with a list of these ids and I want to make a slow expensive call to an api if I don't have a copy of the corresponding document in mongodb. </p> <p>Obviously, I don't want to make the expensive call if I don't have to so the solution seems to be I need to retrieve the full list and compare:</p> <pre><code>ids = [45, 23, 45, 88, 34, 28] # Except many more items = db.items.find({'id': {'$in': ids}}, {'id':1}) missingIds = compareFx(ids, items) for missingId in missingIds: doc = expensiveCall(missingId) db.items.insert_one(doc) </code></pre> <p>That requires me to do a full retrieve of all existing/matching documents when I'm just looking for the ones that don't exist. There is an <code>$exists</code> operator but it seems to only operate on fields not documents, but maybe I just don't see how to use it. </p> <p>Is there some sort of operator/combination of operators to get back the ids that weren't found or is this the right/best way to do this?</p>
1
2016-07-27T20:32:36Z
38,623,536
<p>Using pymongo you can query for documents whose id is <a href="https://docs.mongodb.com/manual/reference/operator/query/in/" rel="nofollow">in</a> your list:</p> <pre><code>db.items.find({'id': {"$in": ids}}, {'id': 1}) </code></pre> <p>or not in the list (<a href="https://docs.mongodb.com/manual/reference/operator/query/nin/" rel="nofollow">nin</a>):</p> <pre><code>db.items.find({'id': {"$nin": ids}}, {'id': 1}) </code></pre>
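Given the documents returned by the `$in` projection above, the comparison step (the `compareFx` from the question) can be a plain set difference. The `found_docs` list below just stands in for a real cursor result:

```python
ids = [45, 23, 45, 88, 34, 28]

# stand-in for: db.items.find({'id': {'$in': ids}}, {'id': 1})
found_docs = [{'id': 45}, {'id': 23}, {'id': 34}]

found = {doc['id'] for doc in found_docs}
missing = sorted(set(ids) - found)
print(missing)  # [28, 88]
```

Each id in `missing` is then a candidate for the expensive call and a follow-up `insert_one`.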
0
2016-07-27T21:32:40Z
[ "python", "mongodb" ]
python - couldn't open a file using radare2: invalid option -- '0'
38,622,681
<p>I have installed radare2 using pip install and then in the python shell I gave the followig lines of code</p> <pre><code>Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import r2pipe &gt;&gt;&gt; r = r2pipe.open("/bin/ls") radare2: invalid option -- '0' </code></pre> <p>I have cross checked that /bin/ls is available. Why am I getting this error?</p>
1
2016-07-27T20:33:40Z
38,623,148
<p>Here's what I did:</p> <ol> <li><p>go to <a href="https://github.com/radare/radare2" rel="nofollow">https://github.com/radare/radare2</a>, clone the project to my laptop and install it by <code>sys/install.sh</code> (radare2 README contains all instructions)</p></li> <li><p>pip install r2pipe (I have python 2.7.6 on Ubuntu 14.04)</p></li> </ol> <p>Here's the output I got from python console:</p> <pre><code>&gt;&gt;&gt; r2 = r2pipe.open("/bin/ls") &gt;&gt;&gt; print(r2.cmd("pd 10")) ;-- entry0: 0x00404890 31ed xor ebp, ebp 0x00404892 4989d1 mov r9, rdx 0x00404895 5e pop rsi 0x00404896 4889e2 mov rdx, rsp 0x00404899 4883e4f0 and rsp, 0xfffffffffffffff0 0x0040489d 50 push rax 0x0040489e 54 push rsp 0x0040489f 49c7c0d01e41. mov r8, 0x411ed0 0x004048a6 48c7c1601e41. mov rcx, 0x411e60 0x004048ad 48c7c7c02840. mov rdi, 0x4028c0 ; "AWAVAUATUH..S..H...." @ 0x4028c0 &gt;&gt;&gt; print(r2.cmdj("pd 10")) r2pipe.cmdj.Error: No JSON object could be decoded None </code></pre> <p>Please make sure you install <code>radare2</code> properly. You can try to <code>uninstall</code> your current radare2 and install it from scratch in case of some version issues on radare side.</p>
1
2016-07-27T21:03:34Z
[ "python" ]
Still confused about reference counting
38,622,715
<p>Out of curiosity, I am trying to understand how reference counting works in Python. These two entries: </p> <ul> <li><a href="http://stackoverflow.com/q/38253757/3001761">Python Ref Counts to Object</a></li> <li><a href="http://stackoverflow.com/q/510406/3001761">Is there a way to get the current ref count of an object in Python?</a></li> </ul> <p>were helpful, but still raised questions.</p> <ol> <li><p>Using <code>sys.getrefcount()</code> returns a different value than <code>len(gc.get_referrers())</code>. For example:</p> <pre><code>&gt;&gt;&gt; a = 3 &gt;&gt;&gt; print sys.getrefcount(a) 38 &gt;&gt;&gt; print len(gc.get_referrers(a)) 23 </code></pre> <p>Why the difference?</p></li> <li><p>As I understand it, the reference count on <code>a</code> is so high because there is already an object holding an integer value of <code>3</code> at the time I bound the name <code>a</code> to it. How does Python keep track of which object is holding <code>3</code> so that it binds the name <code>a</code> to it and increments its reference count accordingly?</p></li> </ol>
1
2016-07-27T20:35:21Z
38,622,937
<ol> <li><code>gc.get_referrers</code> only returns objects the cycle-detecting GC knows about. Objects that couldn't possibly be involved in a reference cycle don't need to be tracked by the cycle detector, so they might not show up in the <code>get_referrers</code> list.</li> <li><p>With <a href="https://hg.python.org/cpython/file/2.7/Objects/intobject.c#l79" rel="nofollow">this array here</a>:</p> <pre><code>static PyIntObject *small_ints[NSMALLNEGINTS + NSMALLPOSINTS]; </code></pre></li> </ol>
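A rough way to see the effect of the small-int cache mentioned above (the exact counts vary by interpreter version, so only the comparison is meaningful):

```python
import sys

# 3 comes from CPython's small-int cache, so many references to it
# already exist throughout the interpreter; a freshly created huge
# int is referenced almost nowhere.
print(sys.getrefcount(3) > sys.getrefcount(10**20))  # True
```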
0
2016-07-27T20:49:05Z
[ "python", "reference", "counting" ]
JavaScript SyntaxError: JSON.parse: unexpected character line 1
38,622,741
<p>In Python I load a variable with data then convert it to json with this code.</p> <pre><code>jsVar = (json.dumps(jsPass)) </code></pre> <p>which produces this output. {"TT1004": [[1004, 45.296109039999997, -75.926546579999993, 66.996664760000002, 150, false]], "TT1001": [[1001, 45.296471220000001, -75.923881289999997, 64.616423409999996, 150, false]], "TT1003": [[1003, 45.296109379999997, -75.926543379999998, 67.240025419999995, 150, false]], "TT1002": [[1002, 45.29626098, -75.924908610000003, 65.300880480000004, 150, true]]} </p> <p>The output passes validation on the JSON Formatter &amp; Validator website. When I run the javaScript code</p> <pre><code>var myVar2 = {}; var myVar2 = JSON.parse(jsVar); </code></pre> <p>I get the following error</p> <p>SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data. </p> <p>I am not very strong with JS and new to JSON. Any constructive comments, website or literature suggestions will be greatly appreciated.</p> <p>Better Focus on the actual Issue. From your very helpful explanations and questions I can see that I don’t need to parse the data. As has been stated here the variable holds valid JavaScript array data. What I’m discovering is that the variable generated in Python is not the variable that I’m accessing in JS, even though they are in the same file location and have the same name. The process is set up to run like this. First, the Python code generates the JSON variable, second the JavaScript code assigns a variable with the data from the variable generated in Python and then the HTML file is fired to execute the JS code. When I look at the output of the console.log or .dir methods for the JavaScript variable there is no data. What I need to learn is how to import the Python variable into the JavaScript Code. I don’t know whether this question can be answered here or should I ask another question in StackOverFlow? </p>
1
2016-07-27T20:36:43Z
38,622,845
<p>I've tested quickly with JSONlint, so you definitely have valid JSON already. There is no need to normalize this with <code>JSON.parse()</code> since it is already fine. The only thing you have to do is:</p> <pre><code>var myVar2 = jsVar; </code></pre> <p>If you check <code>console.log(myVar2.TT1001);</code> it will present the array data as wanted.</p>
0
2016-07-27T20:43:02Z
[ "javascript", "python", "json" ]
JavaScript SyntaxError: JSON.parse: unexpected character line 1
38,622,741
<p>In Python I load a variable with data then convert it to json with this code.</p> <pre><code>jsVar = (json.dumps(jsPass)) </code></pre> <p>which produces this output. {"TT1004": [[1004, 45.296109039999997, -75.926546579999993, 66.996664760000002, 150, false]], "TT1001": [[1001, 45.296471220000001, -75.923881289999997, 64.616423409999996, 150, false]], "TT1003": [[1003, 45.296109379999997, -75.926543379999998, 67.240025419999995, 150, false]], "TT1002": [[1002, 45.29626098, -75.924908610000003, 65.300880480000004, 150, true]]} </p> <p>The output passes validation on the JSON Formatter &amp; Validator website. When I run the javaScript code</p> <pre><code>var myVar2 = {}; var myVar2 = JSON.parse(jsVar); </code></pre> <p>I get the following error</p> <p>SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data. </p> <p>I am not very strong with JS and new to JSON. Any constructive comments, website or literature suggestions will be greatly appreciated.</p> <p>Better Focus on the actual Issue. From your very helpful explanations and questions I can see that I don’t need to parse the data. As has been stated here the variable holds valid JavaScript array data. What I’m discovering is that the variable generated in Python is not the variable that I’m accessing in JS, even though they are in the same file location and have the same name. The process is set up to run like this. First, the Python code generates the JSON variable, second the JavaScript code assigns a variable with the data from the variable generated in Python and then the HTML file is fired to execute the JS code. When I look at the output of the console.log or .dir methods for the JavaScript variable there is no data. What I need to learn is how to import the Python variable into the JavaScript Code. I don’t know whether this question can be answered here or should I ask another question in StackOverFlow? </p>
1
2016-07-27T20:36:43Z
38,622,873
<p>Your data is already parsed and you can simply use it like a JavaScript object. If your data were a JSON string like</p> <pre><code>"{"TT1004":[[1004,45.29610904,-75.92654658,66.99666476,150,false]],"TT1001":[[1001,45.29647122,-75.92388129,64.61642341,150,false]],"TT1003":[[1003,45.29610938,-75.92654338,67.24002542,150,false]],"TT1002":[[1002,45.29626098,-75.92490861,65.30088048,150,true]]}" </code></pre> <p>then you would need to parse it, but the data you pasted above is already a JavaScript object:</p> <pre><code>{ "TT1004": [ [1004, 45.296109039999997, -75.926546579999993, 66.996664760000002, 150, false] ], "TT1001": [ [1001, 45.296471220000001, -75.923881289999997, 64.616423409999996, 150, false] ], "TT1003": [ [1003, 45.296109379999997, -75.926543379999998, 67.240025419999995, 150, false] ], "TT1002": [ [1002, 45.29626098, -75.924908610000003, 65.300880480000004, 150, true] ] } </code></pre>
0
2016-07-27T20:44:30Z
[ "javascript", "python", "json" ]
JavaScript SyntaxError: JSON.parse: unexpected character line 1
38,622,741
<p>In Python I load a variable with data then convert it to json with this code.</p> <pre><code>jsVar = (json.dumps(jsPass)) </code></pre> <p>which produces this output. {"TT1004": [[1004, 45.296109039999997, -75.926546579999993, 66.996664760000002, 150, false]], "TT1001": [[1001, 45.296471220000001, -75.923881289999997, 64.616423409999996, 150, false]], "TT1003": [[1003, 45.296109379999997, -75.926543379999998, 67.240025419999995, 150, false]], "TT1002": [[1002, 45.29626098, -75.924908610000003, 65.300880480000004, 150, true]]} </p> <p>The output passes validation on the JSON Formatter &amp; Validator website. When I run the javaScript code</p> <pre><code>var myVar2 = {}; var myVar2 = JSON.parse(jsVar); </code></pre> <p>I get the following error</p> <p>SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data. </p> <p>I am not very strong with JS and new to JSON. Any constructive comments, website or literature suggestions will be greatly appreciated.</p> <p>Better Focus on the actual Issue. From your very helpful explanations and questions I can see that I don’t need to parse the data. As has been stated here the variable holds valid JavaScript array data. What I’m discovering is that the variable generated in Python is not the variable that I’m accessing in JS, even though they are in the same file location and have the same name. The process is set up to run like this. First, the Python code generates the JSON variable, second the JavaScript code assigns a variable with the data from the variable generated in Python and then the HTML file is fired to execute the JS code. When I look at the output of the console.log or .dir methods for the JavaScript variable there is no data. What I need to learn is how to import the Python variable into the JavaScript Code. I don’t know whether this question can be answered here or should I ask another question in StackOverFlow? </p>
1
2016-07-27T20:36:43Z
38,622,939
<p>Your data is already a JavaScript object. You can do something like this if you aren't sure whether the object you're getting back is already a valid JavaScript object.</p> <pre><code>if(typeof data != 'object') { var data = JSON.parse(data); } </code></pre>
-1
2016-07-27T20:49:11Z
[ "javascript", "python", "json" ]
What are the implications of Python integers not being limited by 32 or 64 bit size?
38,622,776
<p>There's a bit manipulation problem that asks you to sum up two integers without using + or - operators. Below is the code in Java:</p> <pre><code>public int getSum(int a, int b) { while (b != 0) { int carry = a &amp; b; a = a ^ b; b = carry &lt;&lt; 1; } return a; } </code></pre> <p>When you try to sum up -1 and 1, the intermediate values take on [-2, 2], [-4, 4] and so on until the number overflows and the result reaches 0. You can't do the same in Python: the process goes on forever, taking up an entire CPU thread and slowly growing in memory. It seems that on my machine the numbers will grow for a while until no memory is left.</p> <pre><code>def getSum(a, b): while b != 0: carry = a &amp; b a = a ^ b b = carry &lt;&lt; 1 return a if __name__ == '__main__': print getSum(-1, 1) # will run forever </code></pre> <p>This is a rather peculiar example; are there any real-world implications of not having the integers limited in size?</p>
2
2016-07-27T20:39:36Z
38,622,812
<p>The implication is that you must <strong>know</strong> and <strong>enforce</strong> your own integer widths when computing checksums.</p> <p>You make it the size you want:</p> <pre><code>carry = (a &amp; b)&amp;255 a = (a ^ b)&amp;255 b = (carry &lt;&lt; 1)&amp;255 </code></pre> <p>would give one-byte-wide integers.</p>
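As a sketch, enforcing a 32-bit width this way makes the loop from the question terminate in Python too; the final step reinterprets the masked bit pattern as a signed value:

```python
MASK = 0xFFFFFFFF  # pretend we only have 32-bit integers


def get_sum(a, b):
    while b != 0:
        carry = (a & b) & MASK
        a = (a ^ b) & MASK
        b = (carry << 1) & MASK
    # reinterpret the unsigned 32-bit pattern as a signed int
    return a if a < 0x80000000 else a - 0x100000000


print(get_sum(-1, 1))  # 0
```

The mask makes the carry bit "overflow" out of the top of the word exactly as it does in Java's fixed-width `int`.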
3
2016-07-27T20:41:28Z
[ "python", "integer", "bit-manipulation" ]
Add columns to a pivot table (pandas)
38,622,801
<p>I know in R I can use tidyr for the following:</p> <pre><code>data_wide &lt;- spread(data_protein, Fraction, Count) </code></pre> <p>and data_wide will inherit all the columns from data_protein that are not spread.</p> <pre><code>Protein Peptide Start Fraction Count 1 A 122 F1 1 1 A 122 F2 2 1 B 230 F1 3 1 B 230 F2 4 </code></pre> <p>becomes</p> <pre><code>Protein Peptide Start F1 F2 1 A 122 1 2 1 B 230 3 4 </code></pre> <p>But in pandas (Python),</p> <pre><code>data_wide = data_prot2.reset_index(drop=True).pivot('Peptide','Fraction','Count').fillna(0) </code></pre> <p>doesn't inherit anything not specified in the function (index, key, value). Thus, I decided to join it through df.join():</p> <pre><code>data_wide2 = data_wide.join(data_prot2.set_index('Peptide')['Start']).sort_values('Start') </code></pre> <p>But that produces duplicates of the peptides because there are several start values. Is there any more straightforward way to solve this? Or a special parameter for join that omits repeats? Thank you in advance.</p>
3
2016-07-27T20:40:46Z
38,622,949
<p>try this:</p> <pre><code>In [144]: df Out[144]: Protein Peptide Start Fraction Count 0 1 A 122 F1 1 1 1 A 122 F2 2 2 1 B 230 F1 3 3 1 B 230 F2 4 In [145]: df.pivot_table(index=['Protein','Peptide','Start'], columns='Fraction').reset_index() Out[145]: Protein Peptide Start Count Fraction F1 F2 0 1 A 122 1 2 1 1 B 230 3 4 </code></pre> <p>you can also specify <code>Count</code> column explicitly:</p> <pre><code>In [146]: df.pivot_table(index=['Protein','Peptide','Start'], columns='Fraction', values='Count').reset_index() Out[146]: Fraction Protein Peptide Start F1 F2 0 1 A 122 1 2 1 1 B 230 3 4 </code></pre>
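For completeness, a self-contained version of the same call, constructing the frame inline from the values in the question:

```python
import pandas as pd

df = pd.DataFrame({
    'Protein':  [1, 1, 1, 1],
    'Peptide':  ['A', 'A', 'B', 'B'],
    'Start':    [122, 122, 230, 230],
    'Fraction': ['F1', 'F2', 'F1', 'F2'],
    'Count':    [1, 2, 3, 4],
})

# keep the non-spread columns by putting them all in the index
wide = df.pivot_table(index=['Protein', 'Peptide', 'Start'],
                      columns='Fraction', values='Count').reset_index()
print(wide)
```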
1
2016-07-27T20:49:59Z
[ "python", "pandas", "dataframe", "pivot-table", "tidyr" ]
Add columns to a pivot table (pandas)
38,622,801
<p>I know in R I can use tidyr for the following:</p> <pre><code>data_wide &lt;- spread(data_protein, Fraction, Count) </code></pre> <p>and data_wide will inherit all the columns from data_protein that are not spread.</p> <pre><code>Protein Peptide Start Fraction Count 1 A 122 F1 1 1 A 122 F2 2 1 B 230 F1 3 1 B 230 F2 4 </code></pre> <p>becomes</p> <pre><code>Protein Peptide Start F1 F2 1 A 122 1 2 1 B 230 3 4 </code></pre> <p>But in pandas (Python),</p> <pre><code>data_wide = data_prot2.reset_index(drop=True).pivot('Peptide','Fraction','Count').fillna(0) </code></pre> <p>doesn't inherit anything not specified in the function (index, key, value). Thus, I decided to join it through df.join():</p> <pre><code>data_wide2 = data_wide.join(data_prot2.set_index('Peptide')['Start']).sort_values('Start') </code></pre> <p>But that produces duplicates of the peptides because there are several start values. Is there any more straightforward way to solve this? Or a special parameter for join that omits repeats? Thank you in advance.</p>
3
2016-07-27T20:40:46Z
38,623,264
<p>Using <code>unstack</code>:</p> <pre><code>df.set_index(df.columns[:4].tolist()) \ .Count.unstack().reset_index() \ .rename_axis(None, axis=1) </code></pre> <p><a href="http://i.stack.imgur.com/WvvP2.png" rel="nofollow"><img src="http://i.stack.imgur.com/WvvP2.png" alt="enter image description here"></a></p>
2
2016-07-27T21:11:49Z
[ "python", "pandas", "dataframe", "pivot-table", "tidyr" ]
Python - xlutils "filter.py" syntax error - print repr(self.name)
38,622,807
<p>I am developing a script that will eventually use the pdfminer package to convert .pdfs to .txt files. In preparing the subject files for use, I have to import copy from xlutils.copy.</p> <p>On <code>from xlutils.copy import copy</code>, I run into a syntax error from one of the associated python files (<em>line 699</em> of xlutils\filter.py):</p> <pre><code>def method(self,name,*args): if self.name: print repr(self.name), print "%s:%r"%(name,args) </code></pre> <p>The syntax error cursor points to the area of <code>print repr(self.name)</code> between the "r" and the left parenthesis. I discovered <strong>repr</strong> is not defined until line 825 of the filter.py script.</p> <p>What could be the exact cause of the syntax error, and is there any way to correct the script such that filter.py does not trip up the xlutils.copy command?</p>
1
2016-07-27T20:41:02Z
38,902,811
<p>In Python 3, the print statement is gone: print is a function. In particular you can't say <code>print x</code>; you have to say <code>print(x)</code>. See <a href="https://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function" rel="nofollow">https://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function</a></p>
0
2016-08-11T17:38:21Z
[ "python", "xlutils" ]
How to split a list of numbers based on the distance of consecutive elements in Python?
38,622,868
<p>Given a list of numbers, how can I split it whenever the distance of two adjacent elements is larger than <em>n</em>?</p> <p><strong>Input:</strong></p> <pre><code>n = 3 l = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] </code></pre> <p><strong>Output:</strong></p> <pre><code>[[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
0
2016-07-27T20:44:24Z
38,622,869
<p><strong>Code</strong></p> <pre><code>from boltons import iterutils def grouponpairs(l, f): groups = [] g = [] pairs = iterutils.pairwise(l + [None]) for a, b in pairs: g.append(a) if b is None: continue if not f(a, b): groups.append(g) g = [] groups.append(g) return groups </code></pre> <p><strong>Test</strong></p> <pre><code>grouponpairs([1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8], lambda a, b: abs(a - b) &lt;= 3) # [[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
1
2016-07-27T20:44:24Z
[ "python", "list", "split" ]
How to split a list of numbers based on the distance of consecutive elements in Python?
38,622,868
<p>Given a list of numbers, how can I split it whenever the distance of two adjacent elements is larger than <em>n</em>?</p> <p><strong>Input:</strong></p> <pre><code>n = 3 l = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] </code></pre> <p><strong>Output:</strong></p> <pre><code>[[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
0
2016-07-27T20:44:24Z
38,623,043
<p>You can do it using zip:</p> <pre><code># initialization &gt;&gt;&gt; lst = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] &gt;&gt;&gt; n = 3 </code></pre> <p>Find splitting locations using zip:</p> <pre><code>&gt;&gt;&gt; indices = [i + 1 for (x, y, i) in zip(lst, lst[1:], range(len(lst))) if n &lt; abs(x - y)] </code></pre> <p>Slice subslists using previous result:</p> <pre><code># pad start index list with 0 and end index list with length of original list &gt;&gt;&gt; result = [lst[start:end] for start, end in zip([0] + indices, indices + [len(lst)])] &gt;&gt;&gt; result [[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
3
2016-07-27T20:56:55Z
[ "python", "list", "split" ]
How to split a list of numbers based on the distance of consecutive elements in Python?
38,622,868
<p>Given a list of numbers, how can I split it whenever the distance of two adjacent elements is larger than <em>n</em>?</p> <p><strong>Input:</strong></p> <pre><code>n = 3 l = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] </code></pre> <p><strong>Output:</strong></p> <pre><code>[[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
0
2016-07-27T20:44:24Z
38,623,101
<p>Here's a more primitive piece of code that achieves what you want to do, even though it is not efficient (see Reut Sharabani's answer for a more efficient solution.)</p> <pre><code># Input list l = [1, 6, 5, 3, 5, 0, -3, -5, 2] # Difference to split list with n = 3 output = [] t = [l[0]] for i in range(1, len(l)): if abs(l[i] - l[i - 1]) &lt; n: t.append(l[i]) else: output.append(t) t = [l[i]] output.append(t) print(output) </code></pre>
1
2016-07-27T21:00:49Z
[ "python", "list", "split" ]
How to split a list of numbers based on the distance of consecutive elements in Python?
38,622,868
<p>Given a list of numbers, how can I split it whenever the distance of two adjacent elements is larger than <em>n</em>?</p> <p><strong>Input:</strong></p> <pre><code>n = 3 l = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] </code></pre> <p><strong>Output:</strong></p> <pre><code>[[1, 2, 5, 3], [-2, -1], [4, 5, 2, 4], [8]] </code></pre>
0
2016-07-27T20:44:24Z
38,623,590
<pre><code>n = 3 a = [1, 2, 5, 3, -2, -1, 4, 5, 2, 4, 8] b = [abs(i - j) &gt; n for i, j in zip(a[:-1], a[1:])] m = [i + 1 for i, j in enumerate(b) if j] m = [0] + m + [len(a)] result = [a[i: j] for i, j in zip(m[:-1], m[1:])] print(result) </code></pre>
0
2016-07-27T21:35:41Z
[ "python", "list", "split" ]
Python binding of functions within a c++ program
38,622,890
<p>I have a program written in c++ that functions on it's own, however we want to make it accessible to Python. Specifically, we have several functions that are more efficient in c++, but we do a lot of other things with the output using Python scripts. I don't want to rewrite the whole of main() in Python as we make use of Boost's root finding algorithms and other functionalities that'd be a pain to do in Python. </p> <p>Is it possible to add Python binding to these functions while keeping the c++ main()? I've never done Python binding before, but I've looked at <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/tutorial/index.html" rel="nofollow">Boost.python</a> since we're already using Boost. Most of the examples use c++ functions/classes in a hpp file and embed them into a python program, which isn't exactly what we want.</p> <p>What we want is to keep our c++ program as a standalone so it can run as it is if users want, and also allow users to call these functions from a Python program. Being able to use the same Makefile and exe would be great. We don't really want to make a separate c++ library containing the bound functions; we're not interested in making a pythonic version of the code, merely allowing access to these useful functions.</p> <p>Thanks</p>
0
2016-07-27T20:45:52Z
38,624,911
<p>We have an extensive c++ library which we made available to python through use of a python wrapper class which calls an interface that we defined in boost python. </p> <p>One python class handles all the queries in a pythonic manner, by calling a python extension module written in c++ with boost python. The python extension executes c++ code, so it can link and use anything from the original library.</p> <p>You said your c++ is an executable, though. Why can't you use system calls to launch a shell process? You can do that in any language, including python. What I thought was that you want to access individual functions, which means you need all your functions in a static library. </p> <p>You build your c++ exe normally, linking the common code. You make a "boost python extension module" which links the common code, and can be imported by a python script. And of course a unit test executable, which links and tests the common code. My preference is that the common code be a stand-alone static lib (use -fPIC if there's a posix gcc build).</p>
1
2016-07-27T23:42:32Z
[ "python", "c++", "boost", "boost-python", "python-bindings" ]
How do I login to the Django Rest browsable API when I have a custom auth model?
38,623,002
<p>I have a custom user model as follows in <code>account/models.py</code></p> <pre><code>from django.contrib.auth.models import AbstractUser from django.db.models.signals import post_save from rest_framework.authtoken.models import Token from django.db import models from django.dispatch import receiver from django.conf import settings @receiver(post_save, sender=settings.AUTH_USER_MODEL) def create_auth_token(sender, instance=None, created=False, **kwargs): if created: Token.objects.create(user=instance) class UserProfile(AbstractUser): gender = models.CharField(max_length=1,default='') </code></pre> <p>and in <code>settings.py</code></p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.TokenAuthentication', ) } </code></pre> <p>...</p> <pre><code>AUTH_USER_MODEL = "account.UserProfile" </code></pre> <p>However, whenever I try to log into the browsable API, it asks me to use a correct username and password, and I am using credentials of users who are both marked as superusers and staff.</p> <p>The <code>manage.py runserver</code> console shows this status message:</p> <pre><code>[27/Jul/2016 20:41:39] "POST /api-auth/login/ HTTP/1.1" 200 2897 </code></pre>
1
2016-07-27T20:53:39Z
38,626,166
<p>I've run into this before too, and from what I remember it's because the built-in DRF auth form is not using TokenAuthentication, but rather SessionAuthentication. Try adding <code>rest_framework.authentication.SessionAuthentication</code> to your <code>DEFAULT_AUTHENTICATION_CLASSES</code> tuple.</p>
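In other words, the settings from the question would become something like this (keeping TokenAuthentication for API clients and adding SessionAuthentication for the browsable API):

```python
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.TokenAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    )
}
```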
0
2016-07-28T02:26:38Z
[ "python", "django", "rest", "login" ]
Why is numpy's root failing?
38,623,076
<ul> <li><em>Why is my answer from <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html" rel="nofollow"><code>np.roots</code></a> different from <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.polyroots.html" rel="nofollow"><code>np.polynomial.polynomial.polyroots</code></a>?</em></li> </ul> <p>Define the polynomial</p> <pre><code># a specific polynomial x**0 + x**1 + x**2 + x**3 p = [1, -2.8176255165067872, -0.97639120853458261, -0.86023870029448335] </code></pre> <p>Here is a neat example to demonstrate the difference,</p> <pre><code>import numpy as np r1 = np.roots(p); r2 = np.polynomial.polynomial.polyroots(p) f = lambda x: np.sum([x**i*j for i,j in enumerate(p)]) print "{:&gt;10} {:&gt;10}".format("roots","polyroots") for i,j in zip(r1, r2): # test should return 0 print "{:10.5f} {:10.5f}".format(np.abs(f(i)),np.abs(f(j))) </code></pre> <p>The output is clearly not zero</p> <pre><code> roots polyroots 46.41221 0.00000 1.97595 0.00000 1.97595 0.00000 </code></pre> <h2>Correct Case for Comparison</h2> <p>In comparision, Mathematica correctly obtains the roots:</p> <pre><code>fn[x_] := 1.` - 2.817625516506788` x - 0.97639120853458261` x^2 - 0.8602387002944835` x^3 Roots[fn[x] == 0, x] </code></pre> <p>which provides the roots as:</p> <pre><code>x == -0.723475 - 1.78978 I || x == -0.723475 + 1.78978 I || x == 0.311926 </code></pre> <p>Testing verifies this:</p> <pre><code>fn[-0.7234748700272414` - 1.7897835665374093` I] -4.44089*10^-16 - 2.66454*10^-15 </code></pre>
2
2016-07-27T20:59:37Z
38,623,178
<p>The code in <code>numpy.polynomial</code> is newer than <code>numpy.roots</code> (and <code>numpy.poly1d</code>, etc). In the new polynomial code, the convention for the order of the coefficients was changed. In the new code, the coefficients are given in increasing order, while in the old code, the highest order coefficient is given first.</p> <pre><code>In [98]: p = [1, -2.8176255165067872, -0.97639120853458261, -0.86023870029448335] In [99]: np.roots(p[::-1]) Out[99]: array([-0.72347487+1.78978357j, -0.72347487-1.78978357j, 0.31192616+0.j ]) In [100]: np.polynomial.polynomial.polyroots(p) Out[100]: array([-0.72347487-1.78978357j, -0.72347487+1.78978357j, 0.31192616+0.j ]) </code></pre>
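A quick check that the two functions agree once the coefficient order is reversed, using the polynomial from the question:

```python
import numpy as np

p = [1, -2.8176255165067872, -0.97639120853458261, -0.86023870029448335]

# np.roots expects the highest-order coefficient first,
# polyroots expects the lowest-order coefficient first.
r_old = np.sort_complex(np.roots(p[::-1]))
r_new = np.sort_complex(np.polynomial.polynomial.polyroots(p))
print(np.allclose(r_old, r_new))  # True
```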
4
2016-07-27T21:05:36Z
[ "python", "numpy", "polynomial-math", "polynomials" ]
Tweepy - API.Search
38,623,102
<p>I want to be able to run a search for keywords (<code>ContinuousDelivery</code> in this example) and have it return the date created, text and screen name of tweets containing the keyword and store it into a CSV which will ultimately be ported to a relational database. </p> <ol> <li><p>I can get the created and text but not the screen name with code below. </p></li> <li><p>I am also wondering how I can be sure I am getting all of the results pursuant to my request. </p></li> </ol> <p>I have looked at Twitter API documentation and the <code>tweepy</code> GitHub but neither has got me far. </p> <pre><code># --OAuth Headers omitted-- api = tweepy.API(auth) # Open/Create a file to append data csvFile = open('result17.csv', 'a') #Use csv Writer csvWriter = csv.writer(csvFile) for tweet in tweepy.Cursor(api.search, q="ContinuousDelivery", #since="2014-02-14", #until="2014-02-15", lang="en").items(5000000): #Write a row to the csv file/ I use encode utf-8 csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8'), tweet.screen_name]) print tweet.created_at, tweet.text, tweet.screen_name csvFile.close() </code></pre>
1
2016-07-27T21:00:52Z
38,662,431
<p>To get the author's screen name, use <code>tweet.author.screen_name</code>. The profile is its own object inside the tweet.</p>
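<p>A minimal sketch of that nested attribute access, using stand-in objects in place of real tweepy results (the values are invented for illustration; in real code the tweet would come from <code>api.search</code> via <code>tweepy.Cursor</code>):</p>

```python
from collections import namedtuple

# Stand-ins mimicking the nested structure of a tweepy Status object.
User = namedtuple("User", ["screen_name"])
Tweet = namedtuple("Tweet", ["created_at", "text", "author"])

tweet = Tweet("2016-07-27", "hello #ContinuousDelivery", User("example_user"))

# The author's profile is its own object inside the tweet:
screen_name = tweet.author.screen_name
print(screen_name)
```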
0
2016-07-29T15:27:09Z
[ "python", "twitter", "tweepy" ]
VSCode -- how to set working directory for debug
38,623,138
<p>I'm starting to use vscode for Python. I have a simple test program. I want to run it under debug and I need to set the working directory for the run. </p> <p>How/where do I do that?</p>
2
2016-07-27T21:03:05Z
38,623,229
<p>You can set up current working directory for debugged program using <code>cwd</code> argument in <code>launch.json</code></p>
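<p>For reference, a minimal sketch of such a configuration entry (the <code>cwd</code> value below is a placeholder to replace with your own path; the other keys are assumed from a typical Python debug configuration):</p>

```json
{
    "name": "Python",
    "type": "python",
    "cwd": "${workspaceRoot}/path/to/working/dir"
}
```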
0
2016-07-27T21:09:43Z
[ "python", "vscode" ]
VSCode -- how to set working directory for debug
38,623,138
<p>I'm starting to use vscode for Python. I have a simple test program. I want to run it under debug and I need to set the working directory for the run. </p> <p>How/where do I do that?</p>
2
2016-07-27T21:03:05Z
38,637,243
<p>All you need to do is configure the <code>cwd</code> setting in the <code>launch.json</code> file as follows:</p> <pre><code>{ "name": "Python", "type": "python", "pythonPath": "python", .... "cwd": "&lt;Path to the directory&gt;" .... } </code></pre> <p>More information about this can be found here: <a href="https://github.com/DonJayamanne/pythonVSCode/wiki/Debugging#cwd" rel="nofollow">https://github.com/DonJayamanne/pythonVSCode/wiki/Debugging#cwd</a></p>
1
2016-07-28T13:04:05Z
[ "python", "vscode" ]
Terminal hangs after sshing via python subprocess
38,623,221
<p>I've been working on this for a long time, and any help would be appreciated.</p> <p>What I am trying to do here is ssh to a testing server, then cd .., and then print a list of the directories in that folder through python. This code is my best attempt:</p> <pre><code>def subprocess_cmd(command): process = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True) print "Test 1" proc_stdout = process.communicate()[0].strip() #proc_stdout= process.stdout.readlines() (Gives same outcome as communicate) #proc_stdout= process.stdout.read() (Gives same outcome as communicate) print "Test 2" print proc_stdout </code></pre> <p>subprocess_cmd('ssh user@server -p 111;cd ..;ls')</p> <p>For some reason this function always hangs at the "proc_stdout =" step. It never prints "Test 2" or returns a list of files. It works fine if I take out the ssh command, though. What I expect to see in the terminal is something like this, but instead the terminal hangs and I can't interact with it anymore:</p> <pre><code>dredbounds-computer: python file_name.py Test 1 Test 2 FileA FileB FileC </code></pre> <p>Update: I modified the code and put proc_stdout= process.stderr. communicate(). Here is my updated code: </p> <pre><code>def subprocess_cmd(command): process = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True) print "Test 1" proc_stderr= process.stderr. communicate() print "Test 2" print proc_stderr print "Test 3" </code></pre> <p>Running this I am getting the following error in the terminal: </p> <pre><code>dredbounds-computer: python terminal_test.py Test 1 Traceback (most recent call last): File "file_name.py", line 26, in &lt;module&gt; subprocess_cmd('ssh user@server -p 111;cd ..;ls') File "terminal_test.py", line 21, in subprocess_cmd proc_stdout= process.stderr. communicate() AttributeError: 'NoneType' object has no attribute 'communicate' </code></pre> <p>Does anyone know how I can fix this code, or another way of doing the same thing?
Not sure why this is giving me a NoneType error. Is there something wrong with how I call my ssh command? I've entered the same commands manually in the terminal and they return a list of directories, so it should work in theory. Any advice?</p>
2
2016-07-27T21:08:57Z
38,637,265
<p>If you want just to list directory contents, you can send some command over SSH.</p> <p>Bash:</p> <pre><code>ssh 192.168.122.24 ls /tmp </code></pre> <p>or if you want to use "cd" as in your question:</p> <pre><code>ssh 192.168.122.24 "cd /tmp; ls" </code></pre> <p>Python script example:</p> <pre><code>import subprocess HOST = 'server' PORT = '111' USER = 'user' CMD = 'cd /tmp; ls' process = subprocess.Popen(['ssh', '{}@{}'.format(USER, HOST), '-p', PORT, CMD], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) result = process.stdout.readlines() if not result: err = process.stderr.readlines() print('ERROR: {}'.format(err)) else: print(result) </code></pre>
1
2016-07-28T13:04:54Z
[ "python", "bash", "ssh", "subprocess" ]
KeyError: 'plotly_domain'
38,623,249
<p>I am running the code from <a href="https://plot.ly/~jackp/16625/" rel="nofollow">here</a>:</p> <pre><code>import plotly import plotly.plotly as py from plotly.tools import FigureFactory as FF import numpy as np import pandas as pd print(plotly.__version__) dataframe = pd.DataFrame(np.random.randn(100, 3), columns=['Column A', 'Column B', 'Column C']) fig = FF.create_scatterplotmatrix(dataframe, diag='histogram', index='Column A', colormap=['rgb(100, 150, 255)', '#F0963C', 'rgb(51, 255, 153)'], colormap_type='seq', height=800, width=800) py.iplot(fig, filename = 'Custom Sequential Colormap') </code></pre> <p>And I get this error:</p> <pre><code>/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 /Users/mona/PycharmProjects/PythonCodes/plotly_viz.py 1.12.4 This is the format of your plot grid: [ (1,1) x1,y1 ] [ (1,2) x2,y2 ] [ (2,1) x3,y3 ] [ (2,2) x4,y4 ] Traceback (most recent call last): File "/Users/mona/PycharmProjects/PythonCodes/plotly_viz.py", line 14, in &lt;module&gt; py.iplot(fig, filename = 'Custom Sequential Colormap') File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/plotly/plotly/plotly.py", line 175, in iplot return tools.embed(url, **embed_options) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/plotly/tools.py", line 443, in embed != session.get_session_config()['plotly_domain']): KeyError: 'plotly_domain' Process finished with exit code 1 </code></pre> <p>Do you know what is the problem and how can it be solved? I was trying plotly out but even my first try happened to be unsuccessful!</p>
2
2016-07-27T21:10:47Z
38,623,309
<p>Based on this answer, try using <code>py.plot</code> instead of <code>py.iplot</code>.</p> <p><a href="https://stackoverflow.com/questions/34929778/keyerror-plotly-domain-when-using-plotly-to-do-scatter-plot-in-python">KeyError: &#39;plotly_domain&#39; when using plotly to do scatter plot in python</a></p> <p>The reason is that <code>iplot</code> is for ipython sessions.</p>
2
2016-07-27T21:15:09Z
[ "python", "python-3.x", "plot", "plotly", "keyerror" ]
Reading a portion of a large xlsx file with python
38,623,368
<p>I have a large .xlsx file with 1 million rows. I don't want to open the whole file in one go. I was wondering if I can read a chunk of the file, process it and then read the next chunk? (I prefer to use pandas for it.)</p>
1
2016-07-27T21:19:34Z
38,623,540
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow">read_excel()</a> method:</p> <pre><code>chunksize = 10**5 for chunk in pd.read_excel(filename, chunksize=chunksize): # process `chunk` DF </code></pre> <p>If your Excel file has multiple sheets, take a look at <a href="http://stackoverflow.com/a/38623545/5741205" rel="nofollow">bpachev's</a> solution.</p>
1
2016-07-27T21:32:52Z
[ "python", "pandas" ]
Reading a portion of a large xlsx file with python
38,623,368
<p>I have a large .xlsx file with 1 million rows. I don't want to open the whole file in one go. I was wondering if I can read a chunk of the file, process it and then read the next chunk? (I prefer to use pandas for it.)</p>
1
2016-07-27T21:19:34Z
38,623,545
<p>Yes. Pandas supports chunked reading. You would go about reading an Excel file like so:</p> <pre><code>import pandas as pd xl = pd.ExcelFile("myfile.xlsx") for sheet_name in xl.sheet_names: reader = xl.parse(sheet_name, chunksize=1000) for chunk in reader: #parse chunk here </code></pre>
2
2016-07-27T21:33:00Z
[ "python", "pandas" ]
Nested for loop on tupled coordinates
38,623,400
<p>I need to write code that will eventually read in coordinates from across multiple tables of big data stored in excel sheets, but first I wanted to learn to write a nested for loop to analyze the tuples in the code below.</p> <p>All I can find for nested for loops doesn't have anything like this, so I thought it could be good to post up here. </p> <p>What I need this code to do specifically is to take the first coordinate in file1 and compare it to each coordinate in file2, then the second coordinate in file1 to each coordinate in file2, and so on, to loop through each coordinate in file1 compared to every coordinate in file2 and then return whether the two share the specified proximity. </p> <pre><code>import math file1 = ('1.36, 7.11', '1.38, 7.12', '1.5, -7.14', '8.7, 3.33', '8.5, 3.34', '8.63, -3.36') file2 = ('1.46, 7.31', '1.47, 7.32', '1.49, -7.34', '8.56, 3.13', '8.57, 3.14', '8.59, -3.16') dist = file1.apply(lambda row: math.hypot(row['x_diff'], row['y_diff']), axis=1) for dist in file1: for dist in file2: if dist.values &gt;= .5: print 'no match' elif dist.values &lt;= .5: print True, dist </code></pre> <p>My hunch about what's wrong is that I am not using the appropriate command to read the tuples as coordinates. Furthermore, I am having a lot of confusion as to what I ought to write in this statement here: <code>for dist in file1</code>. By that I mean what I am supposed to call and how to label it appropriately. </p> <p>I realize this is probably a mess, but this is my first coding project ever, so if anybody can help steer me in the right direction or provide some feedback as to what I might need to understand better here, I would greatly appreciate it. </p>
0
2016-07-27T21:21:23Z
38,623,585
<p><strong>For-loops in general</strong> :<br> You choose one variable as an iterator (can be anything, but shouldn't be something that is used elsewhere at the same time) which iterates through an iterable (e.g. a list). In my example below, i and j are the iterators while range(10) is the object they iterate through. Within the loop, you write everything down you want to have repeated. In my example below, I append every possible i/j combination to the list. <br> <br> <strong>Nested</strong> for loops require you to use <strong>two different variables</strong>. </p> <p>Example:</p> <pre><code>whatever = [] for j in range(10): for i in range(10): whatever.append([j, i]) </code></pre> <p>After running the code, whatever will look like that:</p> <pre><code>[ [0, 0], [0, 1], [0,2] ,... [1, 0], [1, 1], ... [9, 9] ] </code></pre>
2
2016-07-27T21:35:16Z
[ "python", "for-loop", "math", "coordinates", "nested-loops" ]
Nested for loop on tupled coordinates
38,623,400
<p>I need to write code that will eventually read in coordinates from across multiple tables of big data stored in excel sheets, but first I wanted to learn to write a nested for loop to analyze the tuples in the code below.</p> <p>All I can find for nested for loops doesn't have anything like this, so I thought it could be good to post up here. </p> <p>What I need this code to do specifically is to take the first coordinate in file1 and compare it to each coordinate in file2, then the second coordinate in file1 to each coordinate in file2, and so on, to loop through each coordinate in file1 compared to every coordinate in file2 and then return whether the two share the specified proximity. </p> <pre><code>import math file1 = ('1.36, 7.11', '1.38, 7.12', '1.5, -7.14', '8.7, 3.33', '8.5, 3.34', '8.63, -3.36') file2 = ('1.46, 7.31', '1.47, 7.32', '1.49, -7.34', '8.56, 3.13', '8.57, 3.14', '8.59, -3.16') dist = file1.apply(lambda row: math.hypot(row['x_diff'], row['y_diff']), axis=1) for dist in file1: for dist in file2: if dist.values &gt;= .5: print 'no match' elif dist.values &lt;= .5: print True, dist </code></pre> <p>My hunch about what's wrong is that I am not using the appropriate command to read the tuples as coordinates. Furthermore, I am having a lot of confusion as to what I ought to write in this statement here: <code>for dist in file1</code>. By that I mean what I am supposed to call and how to label it appropriately. </p> <p>I realize this is probably a mess, but this is my first coding project ever, so if anybody can help steer me in the right direction or provide some feedback as to what I might need to understand better here, I would greatly appreciate it. </p>
0
2016-07-27T21:21:23Z
38,623,930
<p>Assuming you're getting your data in as tuples:</p> <pre><code># convert file1 and file2 to lists of 2d points # this is quite sloppy and I'll tidy it up when I get home from work xs = [[float(pq.split(',')[0]),float(pq.split(',')[1])] for pq in list(file1)] ys = [[float(pq.split(',')[0]),float(pq.split(',')[1])] for pq in list(file2)] # generate a cartesian product of the two lists cp = [(x,y) for x in xs for y in ys] # generate distances dists = map(lambda (x,y):math.hypot(x[0]-y[0],x[1]-y[1]),cp) # loop through and find distances below some_threshold for i in range(len(xs)): for j in range(1,len(ys)+1): if dists[i*j] &gt; some_threshold: print i,j,dist else: print 'no match' </code></pre> <p>I would recommend using pandas or numpy if you're going to be reading in any reasonably sized dataset, however.</p>
1
2016-07-27T22:01:51Z
[ "python", "for-loop", "math", "coordinates", "nested-loops" ]
Nested for loop on tupled coordinates
38,623,400
<p>I need to write code that will eventually read in coordinates from across multiple tables of big data stored in excel sheets, but first I wanted to learn to write a nested for loop to analyze the tuples in the code below.</p> <p>All I can find for nested for loops doesn't have anything like this, so I thought it could be good to post up here. </p> <p>What I need this code to do specifically is to take the first coordinate in file1 and compare it to each coordinate in file2, then the second coordinate in file1 to each coordinate in file2, and so on, to loop through each coordinate in file1 compared to every coordinate in file2 and then return whether the two share the specified proximity. </p> <pre><code>import math file1 = ('1.36, 7.11', '1.38, 7.12', '1.5, -7.14', '8.7, 3.33', '8.5, 3.34', '8.63, -3.36') file2 = ('1.46, 7.31', '1.47, 7.32', '1.49, -7.34', '8.56, 3.13', '8.57, 3.14', '8.59, -3.16') dist = file1.apply(lambda row: math.hypot(row['x_diff'], row['y_diff']), axis=1) for dist in file1: for dist in file2: if dist.values &gt;= .5: print 'no match' elif dist.values &lt;= .5: print True, dist </code></pre> <p>My hunch about what's wrong is that I am not using the appropriate command to read the tuples as coordinates. Furthermore, I am having a lot of confusion as to what I ought to write in this statement here: <code>for dist in file1</code>. By that I mean what I am supposed to call and how to label it appropriately. </p> <p>I realize this is probably a mess, but this is my first coding project ever, so if anybody can help steer me in the right direction or provide some feedback as to what I might need to understand better here, I would greatly appreciate it. </p>
0
2016-07-27T21:21:23Z
38,623,996
<p>You represent your tuples as strings which is very inconvenient to work with. "Real" tuples are generally better to start off.</p> <pre><code>file1 = [(1.36, 7.11), (1.38, 7.12), (1.5, -7.14), (8.7, 3.33)] file2 = [(1.46, 7.31), (1.47, 7.32), (1.49, -7.34), (8.56, 3.13)] </code></pre> <p>The next question is, how do we get the distance between those two xy points? For this, we can use <code>scipy.spatial.distance.euclidean</code> as a function which takes two tuples and returns the euclidean norm of the vector in between. For example:</p> <pre><code>&gt; import scipy.spatial.distance as distance &gt; distance.euclidean(file1[0], file2[0]) 0.22360679774997827 </code></pre> <p>Now, we come to core of your question: the nested loop. The logic is as follows. For each element in <code>file1</code>, say <code>coord1</code>, we take each element in <code>file2</code>, say <code>coord2</code> and compute the distance between <code>coord1</code> and <code>coord2</code>.</p> <pre><code>for coord1 in file1: for coord2 in file2: dist = distance.euclidean(coord1, coord2) # do not forget to import distance before if dist &lt; 0.5: print True, dist else: print 'no match' </code></pre> <p>I would name the variables after what they represent. <code>file1</code> is a <code>coordinate_list</code> (<code>first_coordinate_list</code>, <code>coordinate_list_1</code>) and the elements are coordinates, e.g. <code>coordinate</code>, <code>coordinate_1</code>, <code>left_coordinate</code>.</p>
1
2016-07-27T22:07:46Z
[ "python", "for-loop", "math", "coordinates", "nested-loops" ]
Questions on use ginput and print the results with python
38,623,495
<p>I would like to use <code>ginput</code> function in python to get two points from the figure and get the closest int value and save the array. The code is shown below:</p> <pre><code>from __future__ import print_function from pylab import plot, ginput, show, axis import matplotlib.pyplot as plt import numpy as np t = np.arange(10) plt.plot(t, np.sin(t)) print("Please click") x = ginput(2) x=np.ceil(x) **print (x)** plt.show() </code></pre> <p>When I run the code and print the x with brackets, the output is:</p> <pre><code>Please click [[ 2. 1.] [ 8. -0.]] </code></pre> <p>However, if I run the code and print <code>x</code> without the brackets, </p> <pre><code>from __future__ import print_function from pylab import plot, ginput, show, axis import matplotlib.pyplot as plt import numpy as np t = np.arange(10) plt.plot(t, np.sin(t)) print("Please click") x = ginput(2) x=np.ceil(x) **print x** plt.show() </code></pre> <p>the output shows error:</p> <pre><code> File "&lt;ipython-input-27-1d992b01e790&gt;", line 1 print x ^ SyntaxError: invalid syntax </code></pre> <p>I am really confused about this. What is the reason of this?</p>
1
2016-07-27T21:29:16Z
38,623,561
<p>In Python 3, <a href="https://docs.python.org/3/library/functions.html#print" rel="nofollow">print()</a> is a function, therefore it requires parentheses around its input argument like any other function.</p>
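<p>A quick way to see the difference under Python 3 (the Python 2 statement form appears only inside a string, so this snippet itself stays syntactically valid):</p>

```python
# Python 3: print is a built-in function, so parentheses are required.
print("hello")

# The Python 2 statement form `print x` no longer parses in Python 3;
# compile() lets us check that without crashing this script.
try:
    compile('print x', '<string>', 'exec')
    is_syntax_error = False
except SyntaxError:
    is_syntax_error = True

print(is_syntax_error)  # True when run under Python 3
```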
1
2016-07-27T21:33:58Z
[ "python" ]
Visualisation with Python
38,623,570
<p>I want to plot the data from a connected device (which counts the number of steps) used by 100 users. I have about 4000 records for each user.</p> <p>So I plot each point manually in a picture of size 100*4000 using the package PIL: <code>Image.putpixel((CurrentTimePx,CurrentRowPx),(CurrentActivityPixelIntensity,0,0))</code></p> <p>But the picture is too small; I have to zoom in to see the points correctly.</p> <p>Do you have a solution to print the picture point by point but at a correct size?</p> <p>Edit:</p> <p>It's a bit more difficult: for one user, I plot a point (at time 0) for which intensity = number of steps. Then I skip to the next row (for another user) and do the same. Once I've done all the users, I skip to the next column (for time 1) and so on...</p> <p>I attach a picture of the complete visualisation. <a href="http://i.stack.imgur.com/tqnDA.png" rel="nofollow">Final Image</a></p>
0
2016-07-27T21:34:15Z
38,623,687
<p>Try using matplotlib instead of PIL.</p> <p>Matplotlib:</p> <p><a href="http://matplotlib.org/" rel="nofollow">http://matplotlib.org/</a></p> <p>Specifically about bar charts (I think this is what you want):</p> <p><a href="http://matplotlib.org/examples/api/barchart_demo.html" rel="nofollow">http://matplotlib.org/examples/api/barchart_demo.html</a></p>
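<p>A minimal bar-chart sketch along those lines (the step counts are invented; the Agg backend is forced so the snippet also runs headless — remove that line to get an interactive window):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; remove to show a window instead
import matplotlib.pyplot as plt

# Hypothetical step counts for five users at one point in time.
steps = [4243, 3100, 5120, 2875, 4600]
users = range(len(steps))

fig, ax = plt.subplots()
ax.bar(users, steps)
ax.set_xlabel("user")
ax.set_ylabel("steps")
fig.savefig("steps.png")
```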
1
2016-07-27T21:42:17Z
[ "python", "image", "visualization" ]
Visualisation with Python
38,623,570
<p>I want to plot the data from a connected device (which counts the number of steps) used by 100 users. I have about 4000 records for each user.</p> <p>So I plot each point manually in a picture of size 100*4000 using the package PIL: <code>Image.putpixel((CurrentTimePx,CurrentRowPx),(CurrentActivityPixelIntensity,0,0))</code></p> <p>But the picture is too small; I have to zoom in to see the points correctly.</p> <p>Do you have a solution to print the picture point by point but at a correct size?</p> <p>Edit:</p> <p>It's a bit more difficult: for one user, I plot a point (at time 0) for which intensity = number of steps. Then I skip to the next row (for another user) and do the same. Once I've done all the users, I skip to the next column (for time 1) and so on...</p> <p>I attach a picture of the complete visualisation. <a href="http://i.stack.imgur.com/tqnDA.png" rel="nofollow">Final Image</a></p>
0
2016-07-27T21:34:15Z
38,623,714
<p>I would put these data in a (100, 4000) numpy array and plot it using <code>matplotlib</code>. For example:</p> <pre><code>import matplotlib.pyplot as plt # TODO: Put data in numpy array X # TODO: Define the image size you want in my_size (eg, my_size=(10, 20)) plt.figure(figsize=my_size) plt.imshow(X, interpolation="nearest", aspect="auto") plt.savefig("my_plot.pdf") </code></pre>
1
2016-07-27T21:44:31Z
[ "python", "image", "visualization" ]
Comparing values in different time frames (after resample and rolling in Pandas)
38,623,737
<p>I have a fast timeframe (tick data) and want to check if the value is equal to the maximum price of the rolling max on a 1-minute timeframe.</p> <p>The tick data are:</p> <pre><code>2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4243.00 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.75 2016-06-27 08:30:00 4242.50 2016-06-27 08:30:00 4242.50 2016-06-27 08:30:00 4242.50 </code></pre> <p>I calculate the rolling max on a 1-minute timeframe using:</p> <pre><code>rol=ntick.Last.resample('1min').max().rolling(center=False,window=4).max() </code></pre> <p>But what is the fastest way to check if a value from the tick data is equal to the rolling max in rol?</p> <p>I am still quite new to Python, so I could only come up with a very slow way using a loop:</p> <pre><code>mask=[] for x in range(0,len(ntick)): mask.append(ntick.Last[x]==rol[ntick.index[x].replace(second=0)]) </code></pre> <p>and then apply the mask as ntick['mask']=mask</p> <p>This works but is not very efficient. Any tips on how to do this better?</p> <p>EDIT:</p> <p>A list comprehension instead of the loop makes the process 3x faster:</p> <pre><code>mask=[ntick.Last[x]==rol[ntick.index[x].replace(second=0)] for x in range(0,len(ntick))] </code></pre> <p>But I am still wondering if there is a better way.</p>
0
2016-07-27T21:46:43Z
38,647,845
<p>If I understand correctly what you're asking, you may want to use <code>Series.asof</code>, which returns last valid value and can take a list-like argument. I assume <code>ntick</code> (and also <code>rol</code>) has a sorted <code>DatetimeIndex</code> as an index.</p> <pre><code>rol2 = rol.squeeze().asof(ntick.index) </code></pre> <p>Initially, <code>rol</code> is a one-column data frame, so <code>squeeze</code> is necessary to turn it into a <code>Series</code>. Indexes of <code>rol2</code> and <code>ntick</code> are now equal and we can compare:</p> <pre><code>mask = ntick.Last == rol2 </code></pre>
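<p>A small self-contained illustration with toy numbers (same shape as the question's data: one rolling-max value per minute, several ticks inside each minute):</p>

```python
import pandas as pd

# Stand-in for `rol`: one rolling-max value per minute.
rol = pd.Series(
    [4243.0, 4244.0],
    index=pd.to_datetime(["2016-06-27 08:30", "2016-06-27 08:31"]),
)

# Stand-in for `ntick.Last`: tick prices with second-level timestamps.
ticks = pd.Series(
    [4243.0, 4242.5, 4244.0],
    index=pd.to_datetime([
        "2016-06-27 08:30:10",
        "2016-06-27 08:30:40",
        "2016-06-27 08:31:05",
    ]),
)

# For each tick timestamp, asof returns the last rol value at or before it.
rol2 = rol.asof(ticks.index)

# rol2 now shares ticks' index, so the comparison aligns elementwise.
mask = ticks == rol2
print(mask.tolist())  # [True, False, True]
```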
1
2016-07-28T22:37:28Z
[ "python", "pandas" ]
duplicated outputs of np.apply_along_axis when a dict is returned
38,623,768
<p>Check this code snippet:</p> <pre><code>import numpy as np a = np.arange(20).reshape(2,10) # the result is right if there is only 1 key func = lambda x: dict(k1=len(x)) print np.apply_along_axis(func, -1, a) out[1]: [[{'k1': 10}] [{'k1': 10}]] # but if there are more than 1 key in the returned dict # the results are duplicated func = lambda x: dict(k1=1, k2=len(x)) print np.apply_along_axis(func, -1, a) out[2]: [[{'k2': 10, 'k1': 1} {'k2': 10, 'k1': 1}] [{'k2': 10, 'k1': 1} {'k2': 10, 'k1': 1}]] func = lambda x: dict(k1=1, k2=2, k3=len(x)) print np.apply_along_axis(func, -1, a) out[3]: [[{'k3': 10, 'k2': 2, 'k1': 1} {'k3': 10, 'k2': 2, 'k1': 1} {'k3': 10, 'k2': 2, 'k1': 1}] [{'k3': 10, 'k2': 2, 'k1': 1} {'k3': 10, 'k2': 2, 'k1': 1} {'k3': 10, 'k2': 2, 'k1': 1}]] </code></pre> <p>The problem is described in the comments, and the results are shown as well.</p>
0
2016-07-27T21:49:03Z
38,627,495
<p>It seems like <code>np.apply_along_axis</code> is trying to figure out what the resulting shape should be, based on the result of calling <code>func</code>. If your input array has shape <code>(n, m)</code> and your <code>func</code> returns something with length <code>k</code>, then <code>np.apply_along_axis(func, -1, a)</code> will return an array of shape <code>(n, k)</code>. This is true even if your function returns something other than a list or an array. If your function returns a scalar, the resulting shape will be <code>(n,)</code>.</p> <p>Examples:</p> <pre><code># np.diff(a[0]) has length 9. &gt;&gt;&gt; np.apply_along_axis(lambda x: np.diff(x), -1, a).shape (2, 9) # sorted(a[0]) has length 10 &gt;&gt;&gt; np.apply_along_axis(lambda x: sorted(x), -1, a).shape (2, 10) # len(a[0]) is a scalar &gt;&gt;&gt; np.apply_along_axis(lambda x: len(x), -1, a).shape (2,) </code></pre> <p>Now, in your case, since you're returning a <code>dict</code> with length 2, the resulting shape is <code>(2, 2)</code>. A simple workaround would be to wrap the dictionary in something that's a scalar. But apparently, numpy doesn't like custom scalars. So if you try to use a custom <code>DictWrap</code> class like this:</p> <pre><code>class DictWrap(object): def __init__(self, *args, **kwargs): self._d = dict(*args, **kwargs) </code></pre> <p>...it doesn't work:</p> <pre><code>&gt;&gt;&gt; np.apply_along_axis(lambda x: DictWrap(k1=1, k2=len(x)), -1, a) ... TypeError: object of type 'DictWrap' has no len() </code></pre> <p>So either we need to add a custom <code>__len__()</code> method to <code>DictWrap</code> which returns 1, or we can wrap the dictionary in a list:</p> <pre><code>&gt;&gt;&gt; np.apply_along_axis(lambda x: [dict(k1=1, k2=len(x))], -1, a) array([[{'k2': 10, 'k1': 1}], [{'k2': 10, 'k1': 1}]], dtype=object) </code></pre> <p>This has a shape <code>(2, 1)</code>. 
You can call <code>squeeze()</code> on it to get a 1-d array:</p> <pre><code>&gt;&gt;&gt; r = np.apply_along_axis(lambda x: [dict(k1=1, k2=len(x))], -1, a) &gt;&gt;&gt; r.squeeze() array([{'k2': 10, 'k1': 1}, {'k2': 10, 'k1': 1}], dtype=object) </code></pre> <p>Another, and perhaps the easiest, way would be to get rid of the extra dimensions yourself:</p> <pre><code>&gt;&gt;&gt; r = np.apply_along_axis(lambda x: dict(k1=1, k2=len(x)), -1, a) &gt;&gt;&gt; r[:, 0] array([{'k2': 10, 'k1': 1}, {'k2': 10, 'k1': 1}], dtype=object) </code></pre> <p>To see how exactly numpy handles various cases, see <a href="https://github.com/numpy/numpy/blob/v1.11.0/numpy/lib/shape_base.py#L20-L131" rel="nofollow">documentation of <code>apply_along_axis</code></a> (particularly starting at <code>if isscalar(res):</code>).</p>
1
2016-07-28T05:06:57Z
[ "python", "numpy", "dictionary" ]
ImportError: cannot import name 'tree' for sklearn
38,623,912
<p>I've recently installed Scipy, Numpy and Scikit-learn by using pip, but when I run the program below </p> <pre><code>from sklearn import tree features = [[140, 1], [130, 1], [150, 1], [170, 1]] #input labels = [0, 0, 1, 1] #output clf = tree.DecisionTreeClassifier() clf = clf.fit(features, labels) #fit = find patterns in data print (clf.predict([[160, 0]])) </code></pre> <p>The shell prints this error </p> <pre><code>Traceback (most recent call last): File "C:/Machine Learning/sklearn.py", line 1, in &lt;module&gt; from sklearn import tree File "C:/Machine Learning\sklearn.py", line 1, in &lt;module&gt; from sklearn import tree ImportError: cannot import name 'tree' </code></pre> <p>Does anyone know how to solve this? I've tried uninstalling and reinstalling it, but I get the same error. Many thanks in advance! </p>
0
2016-07-27T22:00:11Z
38,624,095
<p>The solution is to rename your "sklearn.py" in the "Machine Learning" folder to anything other than "sklearn.py".</p> <p>Why? Because of Python's module search order. Try prepending these lines to your "sklearn.py":</p> <pre><code>import sys print(sys.path) </code></pre> <p>You'll find the first element of the output list is always an empty string, which means the current directory has the highest priority in the module search. Running <code>from sklearn import tree</code> in the "C:\Machine Learning" folder will therefore import the local file of the same name, "sklearn.py", as the "sklearn" module, instead of importing the machine-learning package installed globally.</p>
0
2016-07-27T22:16:28Z
[ "python", "numpy", "scikit-learn", "importerror" ]
string compared to list in a dict
38,623,921
<p>I will be entering a large data set of strs to compare to a dict with lists. For example, the str 'phd' will be compared against strs from this dict</p> <pre><code> edu_options = {'Completed College' : [ 'bachelor', 'ba', 'be', 'bs'....], 'Grad School' : ['phd','doctor'...] } </code></pre> <p>input str comes from edu_dict</p> <pre><code>edu_dict = { "A.S":"Attended Vocational/Technical", "AS":"Attended Vocational/Technical", "AS,":"Attended Vocational/Technical", "ASS,":"Attended Vocational/Technical", "Associate":"Attended Vocational/Technical", "Associate of Arts (A.A.),":"Attended Vocational/Technical", "Associate of Arts and Sciences (AAS)":"Attended Vocational/Technical", "B-Arch":"Completed College", "B-Tech":"Attended Vocational/Technical", "B.A. B.S":"Completed College", "B.A.,":"Completed College", "B.Arch,":"Completed College", "B.S":"Completed College", "B.S.":"Completed College", "B.S. in Management":"Completed College", "B.S.,":"Completed College", "BA":"Completed College",... *The list is 169 items similar to this* } </code></pre> <p>clean_edu() takes the key from edu_dict, removes the punctuation, spaces...etc. For example 'P.H.D.' becomes 'phd'. If 'phd' matches a str from any of these lists, it should return the correct key, in this case 'Completed Graduate'. 
For most of the inputs I have put in, the correct value has been returned.</p> <pre><code>def clean_edu(edu_entry): lower_case_key = edu_entry.lower() # changing the key to lower case chars_in = "-.,')(" #setting the chars to be translated chars_out = " " char_change = "".maketrans(chars_in, chars_out) # replacing punctuation(char_in) with empty space(char_out) clean = lower_case_key.translate(char_change) #executing char_change cleaned_string = re.sub(r'\s\s{0,}','',clean).strip() return cleaned_string while user == "": for edu_level in edu_options: for option in edu_options[edu_level]: if option in cleaned_string: user = edu_level return user user = "No match" </code></pre> <p>The problem is that 'bs' is correctly triggered for some of the inputs but not for others. When I print the unmatched str and their comparison</p> <pre><code>print ("Not Detected. Adding to txt" + '\t' + edu_entry + '\t' + cleaned_string + '\t' + option) Output: " Not Detected. Adding to txt business nursing </code></pre> <p>where bs is the input and l is the comparison str. In edu_options dict there is no value 'l' so I don't understand where this is coming from. This problem didn't occur for input strs such as 'bs biology' or 'bs business'.</p> <p>Successful run: </p> <p>input str: 'P.H.D' output:'Completed Graduate School' </p>
0
2016-07-27T22:00:45Z
38,624,765
<p>I'm not sure I understand what you want to return when you find a match in a list; maybe the key of that list?</p> <p>In that case, this should work:</p> <pre><code>&gt;&gt;&gt; edu_options = {'Completed College' : [ 'bachelor', 'ba', 'be', 'bs'], 'Grad School': ['phd', 'doctor']} &gt;&gt;&gt; cleaned_string = 'phd' &gt;&gt;&gt; for key, value in edu_options.items(): ... if cleaned_string in value: # value is the list ... print key # inside a function, use return ... &gt;&gt;&gt; Grad School </code></pre> <p>Edit: I think the mistake is in your second loop; look what happens:</p> <pre><code>&gt;&gt;&gt; edu_options = {'Completed College' : [ 'bachelor', 'ba', 'be', 'bs'], 'Grad School': ['phd', 'doctor']} &gt;&gt;&gt; for edu_level in edu_options: ... for option in edu_level: # Right here ... print option ... C o m p l e t e d C o l l e g e G r a d S c h o o l &gt;&gt;&gt; </code></pre> <p>That is where the 'l' comes from: iterating over the dict yields its keys, and iterating over a key string yields its individual characters.</p>
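To make the corrected loop concrete, here is a small self-contained sketch (the data is trimmed from the question, and the function name is illustrative) that iterates over each key's option list rather than over the key string itself:

```python
# Illustrative sketch: look the cleaned string up against each key's
# option list, never against the key string's characters.
edu_options = {
    'Completed College': ['bachelor', 'ba', 'be', 'bs'],
    'Grad School': ['phd', 'doctor'],
}

def match_edu(cleaned_string, options):
    for key, values in options.items():
        for option in values:  # the list's items, not the key's characters
            if option in cleaned_string:
                return key
    return "No match"

print(match_edu('phd', edu_options))         # Grad School
print(match_edu('bs biology', edu_options))  # Completed College
print(match_edu('nursing', edu_options))     # No match
```

Returning from inside the nested loop also avoids the `while`/flag bookkeeping in the question's version.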
1
2016-07-27T23:25:39Z
[ "python", "dictionary", "comparison" ]
Python 2.7.12 Matplotlib x11 forwarding not showing or throwing multiple errors
38,623,934
<p>I am logging in to a remote Linux machine from Windows 7 via PuTTY. In the settings I enabled the X11 forwarding option, and added the -X flag when logging in to the ssh server. On this server I run the following Python code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import numpy as np import pyfits a = raw_input("path: ") #filepath on the server, connected with filename file = pyfits.open (a +'/file.fits', memap = 'True') data = file[0].data print data.shape #shape gets printed correctly plt.figure(1) plt.imshow(data[0,:,:], cmap = 'gray') print 3 plt.show() print 4 </code></pre> <p>I get all the print values, with the output looking like this:</p> <pre><code>(300, 512, 512) 3 4 </code></pre> <p>No error is raised, nor is an X11 window opened. The command line returns as if the program had finished. Is there any way to get the <code>plt.show()</code> command to actually display on the controlling Windows machine?</p>
0
2016-07-27T22:02:02Z
38,635,174
<p>I got it figured out:</p> <p>First, it is as "tcaswell" said: you can't use the <code>'Agg'</code> backend with interactive windows. This error is fixed by simply deleting the first two lines of code. The second problem is that the <code>plt.figure(1)</code> command creates a new figure 1, but the <code>plt.show()</code> command does not specify which figure should show up. This can be solved either by deleting the line that says <code>plt.figure(1)</code> or by putting the number of the figure to plot in the brackets of the <code>plt.show()</code> command: <code>plt.show(1)</code>. This way it is possible to create multiple figures in one file and switch between them.</p>
0
2016-07-28T11:34:41Z
[ "python", "python-2.7", "matplotlib", "x11", "x11-forwarding" ]
How to auto tweet from sqlite?
38,623,945
<p>I have a db.sqlite file that is updated live. If a certain value is saved within the DB, I would like to create a tweet based on this value. Any recommendations on the best way to do this would be appreciated.</p>
-1
2016-07-27T22:03:18Z
38,623,975
<p>I'm not sure you can do it directly in sqlite (and even if you could, it sounds like it would be a very bad decision to shift so much logic into a database unless extremely necessary).</p> <p>You can create tweets from Python, though:</p> <p><a href="https://pypi.python.org/pypi/twitter" rel="nofollow">https://pypi.python.org/pypi/twitter</a> </p>
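One possible shape for the Python side, sketched with the standard library's `sqlite3` (the table name, column, and trigger value here are assumptions, since the question doesn't show the schema); the returned text would then be posted with a Twitter library such as the one linked above:

```python
import sqlite3

def tweet_text_if_match(conn, target):
    """Return tweet text when the latest saved value equals target, else None."""
    row = conn.execute(
        "SELECT value FROM readings ORDER BY id DESC LIMIT 1"
    ).fetchone()
    if row is not None and row[0] == target:
        return "Alert: value %s was just recorded!" % target
    return None

# demo against an in-memory database standing in for db.sqlite
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO readings (value) VALUES ('normal')")
print(tweet_text_if_match(conn, "critical"))  # None
conn.execute("INSERT INTO readings (value) VALUES ('critical')")
print(tweet_text_if_match(conn, "critical"))  # Alert: value critical was just recorded!
```

In a live setup you would call this in a polling loop (or from whatever code writes to the DB) and pass any non-None result to the Twitter API.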
0
2016-07-27T22:05:50Z
[ "python", "sqlite", "twitter" ]
Python - Formatting Data into Excel Spreadsheet Using pandas
38,623,948
<p>I want two columns of data: the team name and the line. However, all my input is just placed into cell B1. (Note that this was with the commented-out code at the bottom of my snippet.) I think I need to iterate through my lists with for loops to get all the teams down column A and the lines down column B, but I just can't wrap my head around doing it with pandas. Any help would be greatly appreciated!</p> <p>Thanks</p> <pre><code>team = [] line = [] # Each row in table find all rows with class name team for tr in table.find_all("tr", class_="team"): # Place all text with identifier 'name' in list named team for td in tr.find_all("td", ["name"]): team.append(td.text.strip()) for tr in table.find_all("tr", class_="team"): for td in tr.find_all("td", ["currentline"]): line.append(td.text.strip()) # Amount of data in lists x = len(team) # Team name is list team, Line in list line. Create relationship between both data sets # Creating Excel Document and Placing Team Name in column A, Placing Line in Column B #Need to add data into Excel somehow for i in range(0,1): for j to line: """ data = {'Team':[team], 'line' : [line]} table = pd.DataFrame(data) writer = pd.ExcelWriter('Scrape.xlsx') table.to_excel(writer, 'Scrape 1') writer.save()""" </code></pre>
1
2016-07-27T22:03:22Z
38,624,783
<p>You should not do this here, because you're making lists of lists (wrapping each list in another list puts the whole list into a single cell):</p> <pre><code>data = {'Team':[team], 'line' : [line]} # ... </code></pre> <p>Instead do:</p> <pre><code>data = {'Team': team, 'line': line} # ... </code></pre>
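A runnable sketch of the corrected construction (the sample team names and lines are made up for illustration); with plain lists as the dict values, each list item becomes its own row:

```python
import pandas as pd

# stand-ins for the lists filled by the scraping loops
team = ['Bears', 'Lions', 'Packers']
line = ['-3.5', '+3.5', '-7']

data = {'Team': team, 'line': line}
table = pd.DataFrame(data)
print(table.shape)  # three rows, two columns
```

From there, `table.to_excel(...)` as in the question's commented-out code writes the two columns to the spreadsheet.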
1
2016-07-27T23:27:42Z
[ "python", "excel", "pandas", "beautifulsoup" ]
youtube-dl download videos with formatid
38,623,977
<blockquote> <p>------------------------------Format Numbers----------------------------------------------------------------------</p> <ul> <li>249 webm audio only DASH audio 52k , opus @ 50k, 629.08KiB</li> <li>250 webm audio only DASH audio 69k , opus @ 70k, 811.98KiB</li> <li>171 webm audio only DASH audio 110k , vorbis@128k, 1.27MiB</li> <li>140 m4a audio only DASH audio 128k , m4a_dash container, mp4a.40.2@128k, 1.56MiB</li> <li>251 webm audio only DASH audio 138k , opus @160k, 1.53MiB</li> <li>278 webm 254x144 144p 82k , webm container, vp9, 13fps, video only, 772.69KiB</li> <li>242 webm 400x226 240p 101k , vp9, 25fps, video only, 884.56KiB</li> <li>160 mp4 254x144 144p 112k , avc1.4d400c, 13fps, video only, 1.31MiB</li> <li>133 mp4 400x226 240p 265k , avc1.4d400d, 25fps, video only, 2.92MiB</li> <li>17 3gp 176x144 small , mp4v.20.3, mp4a.40.2@ 24k</li> <li>36 3gp 320x180 small , mp4v.20.3, mp4a.40.2</li> <li>18 mp4 400x226 medium , avc1.42001E, mp4a.40.2@ 96k</li> <li>43 webm 640x360 medium , vp8.0, vorbis@128k (best)</li> </ul> </blockquote> <p>I want to use format numbers in a program like this:</p> <pre><code>import youtube_dl url = "https://www.youtube.com/watch?v=BaW_jenozKc" ydl_opts = { 'verbose': True, 'format': 'bestaudio/best', #maybe like this 'formatid'= 22 'outtmpl': '%(title)s-%(id)s.%(ext)s', 'noplaylist': True, } with youtube_dl.YoutubeDL(ydl_opts) as ydl: ydl.download([url]) </code></pre> <p>How can I do this?</p>
0
2016-07-27T22:05:50Z
38,624,425
<p>If you really want format 22, then indeed, pass in a <code>format</code> key of <code>22</code>. You can use <code>/best</code> to fall back to the best video format if 22 is not available:</p> <pre><code>ydl_opts = { 'format': '22/best', ... </code></pre>
1
2016-07-27T22:47:11Z
[ "python", "youtube", "embedding", "youtube-dl", "python-embedding" ]
Get substring between strings from a python list
38,623,983
<p>How do I get the content between the strings <code>&amp;quot</code> and <code>autoRefresh</code> (which would be <code>/commander/link/jobDetails/jobs/a2537f238-8622-11ee-a1a0-f0921c14c828?</code>) from a list like the one below? I just need the first match (there could be multiple matches).</p> <pre><code>['something', 'something', ' something top.window.location.href = &amp;quot;/commander/link/jobDetails/jobs/a2537f238-8622-11ee-a1a0-f0921c14c828?autoRefresh=0&amp;amp;s=Jobs&amp;quot;;"&gt;','something'] </code></pre> <p>I tried</p> <pre><code>link = re.search('&amp;quot;(.*?)autoRefresh', big_list) print link.group(1) </code></pre> <p>and got <code>TypeError: expected string or buffer</code></p>
0
2016-07-27T22:06:31Z
38,624,035
<p>You need to iterate over the list, checking each string:</p> <pre><code>import re big_list = ['something', 'something', ' something top.window.location.href = &amp;quot;/commander/link/jobDetails/jobs/a2537f238-8622-11ee-a1a0-f0921c14c828?autoRefresh=0&amp;amp;s=Jobs&amp;quot;;"&gt;','something'] def get_all_subs(lst, pat, grp=0): patt = re.compile(pat) for s in lst: m = patt.search(s) if m: yield m.group(grp) print(list(get_all_subs(big_list, '&amp;quot;(.*?)autoRefresh', 1))) </code></pre> <p>Or call <code>str.join</code> on the list and use <em>findall</em>:</p> <pre><code>print(re.findall('&amp;quot;(.*?)autoRefresh', "".join(big_list))) </code></pre>
0
2016-07-27T22:10:43Z
[ "python" ]
Get substring between strings from a python list
38,623,983
<p>How do I get the content between the strings <code>&amp;quot</code> and <code>autoRefresh</code> (which would be <code>/commander/link/jobDetails/jobs/a2537f238-8622-11ee-a1a0-f0921c14c828?</code>) from a list like the one below? I just need the first match (there could be multiple matches).</p> <pre><code>['something', 'something', ' something top.window.location.href = &amp;quot;/commander/link/jobDetails/jobs/a2537f238-8622-11ee-a1a0-f0921c14c828?autoRefresh=0&amp;amp;s=Jobs&amp;quot;;"&gt;','something'] </code></pre> <p>I tried</p> <pre><code>link = re.search('&amp;quot;(.*?)autoRefresh', big_list) print link.group(1) </code></pre> <p>and got <code>TypeError: expected string or buffer</code></p>
0
2016-07-27T22:06:31Z
38,624,165
<p>You may use the following:</p> <pre><code>re.search(r'(?&lt;=&amp;quot).*?(?=autoRefresh)', ''.join(YourList)) </code></pre>
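A quick check of this approach on a shortened stand-in for the question's data (the URL is abbreviated). Note one small variation from the pattern above: the trailing ';' of the marker is included in the lookbehind here, so the semicolon is excluded from the match:

```python
import re

big_list = [
    'something',
    'href = &quot;/commander/link/jobs/a2537?autoRefresh=0&amp;s=Jobs&quot;',
    'something',
]
# fixed-width lookbehind/lookahead keep only the text between the markers
m = re.search(r'(?<=&quot;).*?(?=autoRefresh)', ''.join(big_list))
print(m.group(0))  # /commander/link/jobs/a2537?
```

Since lookarounds are zero-width, no capture group is needed: `m.group(0)` is already the text between the two markers.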
0
2016-07-27T22:22:59Z
[ "python" ]
Jinja: Check if variable is iterable
38,623,993
<p>Is there a way in Jinja to check if a variable is iterable? I'm working with Django, and depending on whether I use <code>objects.filter</code> or <code>objects.get</code>, the <code>response</code> sent to the Jinja template could be iterable or not.</p> <p>I tried the following:</p> <pre><code>{% extends 'header.html' %} {% block content %} {% if response is iterable %} {% for i in response %} &lt;p&gt;{{ i }}&lt;/p&gt; {% endfor %} {% else %} {{ response }} {% endif %} {% endblock %} </code></pre> <p>However, Django throws: <code>Unused 'is' at end of if expression.</code></p>
0
2016-07-27T22:07:33Z
38,624,197
<p>Try:</p> <p><code>{% if iterable(response) %} </code></p>
1
2016-07-27T22:26:38Z
[ "python", "django", "jinja2" ]
Tuple Comparison with Integers
38,624,020
<p>I am trying to do tuple comparison. I expected 2 as a result, but this bit of code prints out 0. Why?</p> <pre><code>tup1 = (1, 2, 3, 4, 5) tup2 = (2, 7, 9, 8, 5) count = 0 if tup1[0:5] == tup2[0]: count + 1 elif tup1[0:5] == tup2[1]: count + 1 elif tup1[0:5] == tup2[2]: count + 1 elif tup1[0:5] == tup2[3]: count + 1 elif tup1[0:5] == tup2[4]: count + 1 print(count) </code></pre>
2
2016-07-27T22:09:39Z
38,624,054
<p>You can do what you intend with a set intersection:</p> <pre><code>len(set(tup1) &amp; set(tup2)) </code></pre> <p>The <em>intersection</em> returns the common items in both tuples:</p> <pre><code>&gt;&gt;&gt; set(tup1) &amp; set(tup2) {2, 5} </code></pre> <p>Calling <code>len</code> on the result of the intersection gives the number of common items in both tuples. </p> <p>The above will however not give correct results if there are duplicated items in any of the tuples. You will need to do, say a comprehension, to handle this:</p> <pre><code>sum(1 for i in tup1 if i in tup2) # adds one if item in tup1 is found in tup2 </code></pre> <p>You may need to change the order in which the tuples appear depending on which of them has the duplicate. Or if both contain dupes, you could make two runs juxtaposing both tuples and take the max value from both runs.</p>
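One concrete way to handle the duplicate caveat above: `collections.Counter` supports a multiset intersection (the `&` operator takes the element-wise minimum of the counts), which counts each common value as many times as it appears in both tuples. The sample tuples below are modified from the question to include duplicates:

```python
from collections import Counter

tup1 = (1, 2, 2, 3, 5)
tup2 = (2, 2, 5, 5, 7)

common = Counter(tup1) & Counter(tup2)  # element-wise minimum of counts
print(sum(common.values()))  # 3  (two 2s and one 5 in common)
```

With no duplicates present, this gives the same answer as the plain set intersection.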
3
2016-07-27T22:12:39Z
[ "python", "integer", "comparison", "tuples" ]
Tuple Comparison with Integers
38,624,020
<p>I am trying to do tuple comparison. I expected 2 as a result, but this bit of code prints out 0. Why?</p> <pre><code>tup1 = (1, 2, 3, 4, 5) tup2 = (2, 7, 9, 8, 5) count = 0 if tup1[0:5] == tup2[0]: count + 1 elif tup1[0:5] == tup2[1]: count + 1 elif tup1[0:5] == tup2[2]: count + 1 elif tup1[0:5] == tup2[3]: count + 1 elif tup1[0:5] == tup2[4]: count + 1 print(count) </code></pre>
2
2016-07-27T22:09:39Z
38,624,092
<p>You are comparing a slice of one tuple (e.g. <code>tup1[0:5]</code>) to a single element of the other, which happens to be an integer. Therefore, the comparison will always be False. To check whether an element of tup2 also appears in tup1, you may use an intersection or the following:</p> <pre><code>if tup2[n] in tup1: ... </code></pre>
0
2016-07-27T22:16:25Z
[ "python", "integer", "comparison", "tuples" ]
Tuple Comparison with Integers
38,624,020
<p>I am trying to do tuple comparison. I expected 2 as a result, but this bit of code prints out 0. Why?</p> <pre><code>tup1 = (1, 2, 3, 4, 5) tup2 = (2, 7, 9, 8, 5) count = 0 if tup1[0:5] == tup2[0]: count + 1 elif tup1[0:5] == tup2[1]: count + 1 elif tup1[0:5] == tup2[2]: count + 1 elif tup1[0:5] == tup2[3]: count + 1 elif tup1[0:5] == tup2[4]: count + 1 print(count) </code></pre>
2
2016-07-27T22:09:39Z
38,624,401
<p>Your code fails because you are comparing a tuple to an integer. Even if you used <code>in</code> as below, you would still need <code>+=</code>; <code>count + 1</code> does not update the count variable:</p> <pre><code>count = 0 for ele in tup2: if ele in tup1: count += 1 </code></pre> <p>You can do it in linear time and account for duplicate occurrences in tup2 by making only tup1 a set:</p> <pre><code>st = set(tup1) print(sum(ele in st for ele in tup2)) </code></pre> <p>If you wanted the total sum of common elements from both, you could use a <em>Counter dict</em>:</p> <pre><code>tup1 = (1, 2, 3, 4, 5, 4, 2) tup2 = (2, 7, 9, 8, 2, 5) from collections import Counter cn = Counter(tup1) print(sum(cn[i] for i in tup2)) </code></pre>
0
2016-07-27T22:44:41Z
[ "python", "integer", "comparison", "tuples" ]
Assign to masked numpy array without removing mask?
38,624,029
<p>I would like to assign to (a slice of) a masked numpy array but not modify the mask. (Assignment normally clears the mask (unless it is "hard"), which seems completely contrary to the point of masking, but that's what we've got to work with.) I would also like this routine to work for plain unmasked arrays.</p> <p>Is there a better way to do this than saving and restoring the mask?</p> <pre><code>a = np.ma.array([0, 1, 2], mask=[0, 1, 0]) mask = a.mask.copy() if np.ma.is_masked(a) else None # Have to copy because it might be shared a[a &lt; 2] = -1 if mask is not None: a.mask = mask print(a, a.data) # [-1 -- 2] [-1 -1 2] </code></pre> <p>This is Python 2, numpy 1.11.1.</p>
2
2016-07-27T22:10:25Z
38,624,075
<p>In researching the question, I found an answer:</p> <pre><code>np.copyto(a, -1, where=a &lt; 2) </code></pre>
0
2016-07-27T22:14:46Z
[ "python", "numpy" ]
Assign to masked numpy array without removing mask?
38,624,029
<p>I would like to assign to (a slice of) a masked numpy array but not modify the mask. (Assignment normally clears the mask (unless it is "hard"), which seems completely contrary to the point of masking, but that's what we've got to work with.) I would also like this routine to work for plain unmasked arrays.</p> <p>Is there a better way to do this than saving and restoring the mask?</p> <pre><code>a = np.ma.array([0, 1, 2], mask=[0, 1, 0]) mask = a.mask.copy() if np.ma.is_masked(a) else None # Have to copy because it might be shared a[a &lt; 2] = -1 if mask is not None: a.mask = mask print(a, a.data) # [-1 -- 2] [-1 -1 2] </code></pre> <p>This is Python 2, numpy 1.11.1.</p>
2
2016-07-27T22:10:25Z
38,624,143
<p>I think what you want can be done by:</p> <pre><code>a.data[a &lt; 2] = -1 </code></pre>
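A quick check on the question's own example (assuming NumPy is available) that this assignment writes through to the underlying buffer while the mask stays untouched; the boolean index from <code>a &lt; 2</code> is taken from the comparison's underlying data:

```python
import numpy as np

a = np.ma.array([0, 1, 2], mask=[0, 1, 0])
a.data[a < 2] = -1  # assign through the raw data view; the mask is not cleared

print(a.data.tolist())  # [-1, -1, 2]
print(a.mask.tolist())  # [False, True, False]
```

Note that this also overwrites values hidden under the mask (index 1 here), which matches the `[-1 -1 2]` data shown in the question.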
2
2016-07-27T22:20:12Z
[ "python", "numpy" ]
Converting list Dict's to DataFrame: Pandas
38,624,151
<p>I'm doing some web-scraping and I'm storing the variables of interest in form of:</p> <pre><code>a = {'b':[100, 200],'c':[300, 400]} </code></pre> <p>This is for one page, where there were two <code>b</code>'s and two <code>c</code>'s. The next page could have three of each, where I'd store them as:</p> <pre><code>b = {'b':[300, 400, 500],'c':[500, 600, 700]} </code></pre> <p>When I go to create a <code>DataFrame</code> from the list of <code>dict</code>'s, I get:</p> <pre><code>import pandas as pd df = pd.DataFrame([a, b]) df b c 0 [100, 200] [300, 400] 1 [300, 400, 500] [500, 600, 700] </code></pre> <p>What I'm expecting is:</p> <pre><code>df b c 0 100 300 1 200 400 2 300 500 3 400 600 4 500 700 </code></pre> <p>I could create a <code>DataFrame</code> each time I store a page and <code>concat</code> the list of <code>DataFrame</code>'s at the end. However, based on experience, this is very expensive because the construction of thousands of <code>DataFrame</code>'s is much more expensive than creating one <code>DataFrame</code> from a lower-level constructor (i.e., list of <code>dict</code>'s).</p>
2
2016-07-27T22:21:25Z
38,624,388
<p>What about simply merging the dictionaries in each step?</p> <pre><code>import pandas as pd def merge_dicts(trg, src): for k, v in src.items(): trg[k].extend(v) a = {'b':[100, 200],'c':[300, 400]} b = {'b':[300, 400, 500],'c':[500, 600, 700]} merge_dicts(a, b) print(a) # {'c': [300, 400, 500, 600, 700], 'b': [100, 200, 300, 400, 500]} print(pd.DataFrame(a)) # b c # 0 100 300 # 1 200 400 # 2 300 500 # 3 400 600 # 4 500 700 </code></pre>
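The same merge can be written with `collections.defaultdict` (an illustrative variation, not from the answer above), so keys missing from the target dict are handled as well and the inputs are left unmodified:

```python
from collections import defaultdict

def merge_all(dicts):
    merged = defaultdict(list)
    for d in dicts:
        for k, v in d.items():
            merged[k].extend(v)  # extend copies the items, so inputs stay intact
    return dict(merged)

a = {'b': [100, 200], 'c': [300, 400]}
b = {'b': [300, 400, 500], 'c': [500, 600, 700]}
print(merge_all([a, b]))
# {'b': [100, 200, 300, 400, 500], 'c': [300, 400, 500, 600, 700]}
```

The merged dict can then be passed straight to `pd.DataFrame(...)` as in the answer.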
0
2016-07-27T22:43:20Z
[ "python", "pandas", "dictionary", "dataframe" ]
Converting list Dict's to DataFrame: Pandas
38,624,151
<p>I'm doing some web-scraping and I'm storing the variables of interest in form of:</p> <pre><code>a = {'b':[100, 200],'c':[300, 400]} </code></pre> <p>This is for one page, where there were two <code>b</code>'s and two <code>c</code>'s. The next page could have three of each, where I'd store them as:</p> <pre><code>b = {'b':[300, 400, 500],'c':[500, 600, 700]} </code></pre> <p>When I go to create a <code>DataFrame</code> from the list of <code>dict</code>'s, I get:</p> <pre><code>import pandas as pd df = pd.DataFrame([a, b]) df b c 0 [100, 200] [300, 400] 1 [300, 400, 500] [500, 600, 700] </code></pre> <p>What I'm expecting is:</p> <pre><code>df b c 0 100 300 1 200 400 2 300 500 3 400 600 4 500 700 </code></pre> <p>I could create a <code>DataFrame</code> each time I store a page and <code>concat</code> the list of <code>DataFrame</code>'s at the end. However, based on experience, this is very expensive because the construction of thousands of <code>DataFrame</code>'s is much more expensive than creating one <code>DataFrame</code> from a lower-level constructor (i.e., list of <code>dict</code>'s).</p>
2
2016-07-27T22:21:25Z
38,624,739
<p>Try this (I changed the keys for clarity):</p> <pre><code>a = {'e':[100, 200],'f':[300, 400]} b = {'e':[300, 400, 500],'f':[500, 600, 700]} c = {'e':[300, 400, 500],'f':[500, 600, 700]} listDicts = [a,b,c] dd= {} for x in listDicts: for k in listDicts[0].keys(): try: dd[k] = dd[k] + x[k] except: dd[k] = x[k] df = pd.DataFrame(dd) e f 0 100 300 1 200 400 2 300 500 3 400 600 4 500 700 5 100 300 6 200 400 7 300 500 8 400 600 9 500 700 </code></pre>
1
2016-07-27T23:22:29Z
[ "python", "pandas", "dictionary", "dataframe" ]