title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
How to read dictionary in different manner? | 38,504,377 | <p>I have a dictionary whose keys are numbers and whose values are strings containing one or more lists. I want to read this dictionary so that I can separate the keys and values. </p>
<p><strong>Dictionary-</strong> </p>
<pre><code>{
1468332424064000: '[80000,2]',
1468332423282000: '[30000,6]',
1468332421081000: '[40000,2]',
1468332424121000: '[30000,2][40000,2]',
1468332424014000: '[60000,2]',
1468332421131000: '[40000,2][30000,6]',
1468332422921000: '[60000,2]',
1468332421046000: '[40000,2]',
1468332422217000: '[40000,2]',
1468332424921000: '[40000,2]',
1468332421459000: '[30000,6]',
1468332422579000: '[60000,2][30000,6]',
1468332422779000: '[30000,2]',
1468332424161000: '[70000,6]'
}
</code></pre>
<p><strong>Program-Code-</strong></p>
<pre><code>for k, v in latency_obj.d.iteritems():
    li = v.split()
    for l in li:
        print l
</code></pre>
<p><strong>Output-</strong></p>
<pre><code>[80000,2]
[30000,6]
[40000,2]
[30000,2][40000,2]
[60000,2]
[40000,2][30000,6]
[60000,2]
[40000,2]
[40000,2]
[40000,2]
[30000,6]
[60000,2][30000,6]
[30000,2]
[70000,6]
</code></pre>
<p>But I want these two lists as separate lists so that I can retrieve their values. Any idea what I'm missing?</p>
| 0 | 2016-07-21T12:30:34Z | 38,504,638 | <p>Try this,</p>
<pre><code>for key, value in d.items():
    d[key] = eval(value.replace(']', '],', 1))
</code></pre>
<p><strong>output</strong></p>
<pre><code>{1468332421046000: ([40000, 2],),
1468332421081000: ([40000, 2],),
1468332421131000: ([40000, 2], [30000, 6]),
1468332421459000: ([30000, 6],),
1468332422217000: ([40000, 2],),
1468332422579000: ([60000, 2], [30000, 6]),
1468332422779000: ([30000, 2],),
1468332422921000: ([60000, 2],),
1468332423282000: ([30000, 6],),
1468332424014000: ([60000, 2],),
1468332424064000: ([80000, 2],),
1468332424121000: ([30000, 2], [40000, 2]),
1468332424161000: ([70000, 6],),
1468332424921000: ([40000, 2],)}
</code></pre>
<p>So the string is converted into a tuple, and you can access all the values.</p>
<p>You can access the values like this.</p>
<pre><code>In [1]: d[1468332421131000]
Out[1]: ([40000, 2], [30000, 6])
In [2]: d[1468332421131000][0]
Out[2]: [40000, 2]
In [3]: d[1468332421131000][1]
Out[3]: [30000, 6]
In [4]: d[1468332421131000][0][1]
Out[4]: 2
</code></pre>
<p><strong>Concept :</strong></p>
<p><code>replace(']','],',1)</code> replaces the first occurrence of <code>]</code> with <code>],</code>, and the result is evaluated with <code>eval</code> and stored back into the same dictionary. Note that <code>eval</code> executes arbitrary code, so it should only be used on trusted input.</p>
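A safer variation on the same idea (not part of the original answer) is to normalize the string and parse it with `ast.literal_eval`, which accepts only Python literals instead of executing arbitrary code. Wrapping the whole value in an extra pair of brackets makes every entry parse uniformly to a list of lists:

```python
import ast

d = {
    1468332424064000: '[80000,2]',
    1468332421131000: '[40000,2][30000,6]',
}

# Wrap the whole string in brackets and insert commas between "][" pairs,
# so every value parses uniformly to a list of lists.
parsed = {k: ast.literal_eval('[' + v.replace('][', '],[') + ']')
          for k, v in d.items()}

print(parsed[1468332421131000])  # [[40000, 2], [30000, 6]]
print(parsed[1468332424064000])  # [[80000, 2]]
```

Single-list values come back as a one-element list of lists, so indexing works the same way for every key.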
| 0 | 2016-07-21T12:45:02Z | [
"python",
"list",
"dictionary"
] |
How to read dictionary in different manner? | 38,504,377 | <p>I have a dictionary whose keys are numbers and whose values are strings containing one or more lists. I want to read this dictionary so that I can separate the keys and values. </p>
<p><strong>Dictionary-</strong> </p>
<pre><code>{
1468332424064000: '[80000,2]',
1468332423282000: '[30000,6]',
1468332421081000: '[40000,2]',
1468332424121000: '[30000,2][40000,2]',
1468332424014000: '[60000,2]',
1468332421131000: '[40000,2][30000,6]',
1468332422921000: '[60000,2]',
1468332421046000: '[40000,2]',
1468332422217000: '[40000,2]',
1468332424921000: '[40000,2]',
1468332421459000: '[30000,6]',
1468332422579000: '[60000,2][30000,6]',
1468332422779000: '[30000,2]',
1468332424161000: '[70000,6]'
}
</code></pre>
<p><strong>Program-Code-</strong></p>
<pre><code>for k, v in latency_obj.d.iteritems():
    li = v.split()
    for l in li:
        print l
</code></pre>
<p><strong>Output-</strong></p>
<pre><code>[80000,2]
[30000,6]
[40000,2]
[30000,2][40000,2]
[60000,2]
[40000,2][30000,6]
[60000,2]
[40000,2]
[40000,2]
[40000,2]
[30000,6]
[60000,2][30000,6]
[30000,2]
[70000,6]
</code></pre>
<p>But I want these two lists as separate lists so that I can retrieve their values. Any idea what I'm missing?</p>
| 0 | 2016-07-21T12:30:34Z | 38,504,857 | <p>Considering that the 'lists' (i.e., the dictionary values) discussed here are strings, the numbers can be extracted using regular expressions.</p>
<pre><code>>>> import re
>>> lists = '[30000,2][40000,2]'
>>> out_list = re.findall(r'\d+', lists)
>>> out_list
['30000', '2', '40000', '2'] # The elements are actually strings
>>> [int(n) for n in out_list]
[30000, 2, 40000, 2] # List containing numbers
</code></pre>
<p>Hope this is what you are expecting.</p>
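If the goal is to keep each bracketed pair as its own list rather than a flat sequence of numbers, a small extension of the same regex idea (a sketch, not part of the original answer) captures both numbers of every `[a,b]` group in one `findall` call:

```python
import re

value = '[30000,2][40000,2]'

# Capture both numbers of every "[a,b]" group as a tuple of strings,
# then convert each tuple to a list of ints.
pairs = re.findall(r'\[(\d+),(\d+)\]', value)
lists = [[int(a), int(b)] for a, b in pairs]
print(lists)  # [[30000, 2], [40000, 2]]
```

This keeps the structure of the original string instead of flattening it.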
| 0 | 2016-07-21T12:55:05Z | [
"python",
"list",
"dictionary"
] |
Assigning values to multiple names | 38,504,588 | <p>I have seen many Python snippets where they write something like this:</p>
<pre><code>labels, features = targetFeatureSplit(data)
</code></pre>
<p>or something like </p>
<pre><code>ages_train, ages_test, net_worths_train, net_worths_test = train_test_split(ages, net_worths, test_size=0.1, random_state=42)
</code></pre>
<p>How are they assigning these values?</p>
| 0 | 2016-07-21T12:42:26Z | 38,504,721 | <p>So if you have a function that returns two values like so:</p>
<pre><code>def example():
    return 'alice', 'bob'
</code></pre>
<p>You can then call this function and assign its result to the variable <code>test</code>.</p>
<pre><code>test = example()
</code></pre>
<p>where test is a tuple of 'alice' and 'bob'.</p>
<p>You can instead assign what the function returns to two variables, for example:</p>
<pre><code>a, b = example()
</code></pre>
<p>where a is 'alice' and b is 'bob'.</p>
<p>To answer the last bit of your question: if a function does not have a <code>return</code> statement, it returns <code>None</code> when it completes. In that case you can only assign what the function returns to a single variable.</p>
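The same unpacking works for any number of returned values, as in the four-variable `train_test_split` call from the question. An illustrative sketch (`fake_split` is a made-up stand-in, not the real scikit-learn function):

```python
# A function returning four values unpacks into four names the same way
# a two-value function unpacks into two.
def fake_split(ages, net_worths, test_size=0.1):
    cut = int(len(ages) * (1 - test_size))
    return ages[:cut], ages[cut:], net_worths[:cut], net_worths[cut:]

ages = list(range(10))
net_worths = [a * 1000 for a in ages]

ages_train, ages_test, nw_train, nw_test = fake_split(ages, net_worths)
print(len(ages_train), len(ages_test))  # 9 1
```

The number of names on the left must match the number of returned values, otherwise Python raises a `ValueError`.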
| 1 | 2016-07-21T12:48:42Z | [
"python",
"python-2.7",
"variables",
"iterable-unpacking"
] |
Filtering dict of dict with dictionary comprehension | 38,504,605 | <p>I have a problem filtering a dict of dicts using a dict comprehension.</p>
<p>I have dict:</p>
<pre><code>clients = {
'Shop': {'url' : 'url_v', 'customer' : 'cumoster_v',
'some_other_key1' : 'some_value'},
'Gym': {'url' : 'url_v1', 'customer_1' : 'customer_v1', 'customer_2': 'customer_v2',
'some_other_key2' : 'some_value'},
'Bank': {'url' : 'url_v2', 'customer_3' : 'customer_v3',
'some_other_key3' : 'some_value'}
}
</code></pre>
<p>I would like to make another dict that will have only 'customer.*' keys.
So the new dict should look like:</p>
<pre><code>dict_only_cust = {
'Shop': {'customer' : 'cumoster_v'},
'Gym': {'customer_1' : 'customer_v1', 'customer_2': 'customer_v2'},
'Bank': {'customer_3' : 'customer_v3'}
}
</code></pre>
<p>As I'm a big fan of list and dict comprehensions, I'm wondering if it is possible to do it this way.</p>
<p>So far, I've written:</p>
<pre><code>dict_only_cust = {v.pop(t) for k, v in clients.items()
for t, vv in v.items()
if not re.match('.*customer.*', t)}
</code></pre>
<p>Code fails with <code>'RuntimeError: dictionary changed size during iteration'</code></p>
<p>Second time I've tried:</p>
<pre><code>dict_only_cust = {k:{t: vv} for k, v in clients.items()
for t, vv in v.items()
if re.match('.*customer.*', t)}
</code></pre>
<p>It is almost OK, but it is returning</p>
<pre><code>dict_only_cust = {
'Shop' : {'customer' : 'cumoster_v'},
'Gym' : {'customer_1' : 'customer_v1'},
'Bank' : {'customer_3' : 'customer_v3'}
}
</code></pre>
<p>How can I solve this problem using a dict comprehension?
I'm using Python 3.4.</p>
<p>Thanks!</p>
| 2 | 2016-07-21T12:43:12Z | 38,504,793 | <pre><code>>>> {key:{k:v for k,v in dic.items() if 'customer' in k} for key,dic in clients.items()}
{'Shop': {'customer': 'cumoster_v'}, 'Gym': {'customer_2': 'customer_v2', 'customer_1': 'customer_v1'}, 'Bank': {'customer_3': 'customer_v3'}}
</code></pre>
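The same nested comprehension also works with the `re`-based filter from the question; one sketch (compiling the pattern once so it is not re-parsed per key):

```python
import re

clients = {
    'Shop': {'url': 'url_v', 'customer': 'cumoster_v'},
    'Gym': {'url': 'url_v1', 'customer_1': 'customer_v1'},
}

# Inner comprehension filters each sub-dict's keys against the pattern;
# outer comprehension rebuilds the top-level dict.
pat = re.compile('customer')
dict_only_cust = {name: {k: v for k, v in sub.items() if pat.match(k)}
                  for name, sub in clients.items()}
print(dict_only_cust)
```

Because `re.match` anchors at the start of the string, `'customer'` here behaves like the `startswith` filter in the other answer.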
| 4 | 2016-07-21T12:51:47Z | [
"python",
"python-3.x",
"dictionary"
] |
Filtering dict of dict with dictionary comprehension | 38,504,605 | <p>I have a problem filtering a dict of dicts using a dict comprehension.</p>
<p>I have dict:</p>
<pre><code>clients = {
'Shop': {'url' : 'url_v', 'customer' : 'cumoster_v',
'some_other_key1' : 'some_value'},
'Gym': {'url' : 'url_v1', 'customer_1' : 'customer_v1', 'customer_2': 'customer_v2',
'some_other_key2' : 'some_value'},
'Bank': {'url' : 'url_v2', 'customer_3' : 'customer_v3',
'some_other_key3' : 'some_value'}
}
</code></pre>
<p>I would like to make another dict that will have only 'customer.*' keys.
So the new dict should look like:</p>
<pre><code>dict_only_cust = {
'Shop': {'customer' : 'cumoster_v'},
'Gym': {'customer_1' : 'customer_v1', 'customer_2': 'customer_v2'},
'Bank': {'customer_3' : 'customer_v3'}
}
</code></pre>
<p>As I'm a big fan of list and dict comprehensions, I'm wondering if it is possible to do it this way.</p>
<p>So far, I've written:</p>
<pre><code>dict_only_cust = {v.pop(t) for k, v in clients.items()
for t, vv in v.items()
if not re.match('.*customer.*', t)}
</code></pre>
<p>Code fails with <code>'RuntimeError: dictionary changed size during iteration'</code></p>
<p>Second time I've tried:</p>
<pre><code>dict_only_cust = {k:{t: vv} for k, v in clients.items()
for t, vv in v.items()
if re.match('.*customer.*', t)}
</code></pre>
<p>It is almost OK, but it is returning</p>
<pre><code>dict_only_cust = {
'Shop' : {'customer' : 'cumoster_v'},
'Gym' : {'customer_1' : 'customer_v1'},
'Bank' : {'customer_3' : 'customer_v3'}
}
</code></pre>
<p>How can I solve this problem using a dict comprehension?
I'm using Python 3.4.</p>
<p>Thanks!</p>
| 2 | 2016-07-21T12:43:12Z | 38,504,881 | <pre><code>{k:{k1:v1 for k1,v1 in v.items() if k1.startswith('customer')} for k, v in clients.items()}
</code></pre>
<p>Output :</p>
<pre><code>{'Shop': {'customer': 'cumoster_v'}, 'Gym': {'customer_2': 'customer_v2', 'customer_1': 'customer_v1'}, 'Bank': {'customer_3': 'customer_v3'}}
</code></pre>
| 2 | 2016-07-21T12:56:03Z | [
"python",
"python-3.x",
"dictionary"
] |
Calculate percentage of count for a list of arrays | 38,504,737 | <p>Simple problem, but I cannot seem to get it to work. I want to calculate the percentage a number occurs in a list of arrays and output this percentage accordingly.
I have a list of arrays which looks like this:</p>
<pre><code>import numpy as np
# Create some data
listvalues = []
arr1 = np.array([0, 0, 2])
arr2 = np.array([1, 1, 2, 2])
arr3 = np.array([0, 2, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([0, 0, 2]), array([1, 1, 2, 2]), array([0, 2, 2])]
</code></pre>
<p>Now I count the occurrences using collections, which returns a list of collections.Counter objects:</p>
<pre><code>import collections
counter = []
for i in xrange(len(listvalues)):
    counter.append(collections.Counter(listvalues[i]))
counter
>[Counter({0: 2, 2: 1}), Counter({1: 2, 2: 2}), Counter({0: 1, 2: 2})]
</code></pre>
<p>The result I am looking for is an array with 3 columns, representing the values 0 to 2, and len(listvalues) rows. Each cell should be filled with the percentage of that value occurring in the array:</p>
<pre><code># Result
66.66 0 33.33
0 50 50
33.33 0 66.66
</code></pre>
<p>So 0 occurs 66.66% in array 1, 0% in array 2 and 33.33% in array 3, and so on..</p>
<p>What would be the best way to achieve this?
Many thanks!</p>
| 4 | 2016-07-21T12:49:23Z | 38,505,354 | <p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package has a utility function for this, called count_table, which can be used to solve your problem efficiently as such:</p>
<pre><code>import numpy_indexed as npi
arrs = [arr1, arr2, arr3]
idx = [np.ones(len(a))*i for i, a in enumerate(arrs)]
(rows, cols), table = npi.count_table(np.concatenate(idx), np.concatenate(arrs))
table = table / table.sum(axis=1, keepdims=True)
print(table * 100)
</code></pre>
| 2 | 2016-07-21T13:16:52Z | [
"python",
"arrays",
"list",
"numpy"
] |
Calculate percentage of count for a list of arrays | 38,504,737 | <p>Simple problem, but I cannot seem to get it to work. I want to calculate the percentage a number occurs in a list of arrays and output this percentage accordingly.
I have a list of arrays which looks like this:</p>
<pre><code>import numpy as np
# Create some data
listvalues = []
arr1 = np.array([0, 0, 2])
arr2 = np.array([1, 1, 2, 2])
arr3 = np.array([0, 2, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([0, 0, 2]), array([1, 1, 2, 2]), array([0, 2, 2])]
</code></pre>
<p>Now I count the occurrences using collections, which returns a list of collections.Counter objects:</p>
<pre><code>import collections
counter = []
for i in xrange(len(listvalues)):
    counter.append(collections.Counter(listvalues[i]))
counter
>[Counter({0: 2, 2: 1}), Counter({1: 2, 2: 2}), Counter({0: 1, 2: 2})]
</code></pre>
<p>The result I am looking for is an array with 3 columns, representing the values 0 to 2, and len(listvalues) rows. Each cell should be filled with the percentage of that value occurring in the array:</p>
<pre><code># Result
66.66 0 33.33
0 50 50
33.33 0 66.66
</code></pre>
<p>So 0 occurs 66.66% in array 1, 0% in array 2 and 33.33% in array 3, and so on..</p>
<p>What would be the best way to achieve this?
Many thanks!</p>
| 4 | 2016-07-21T12:49:23Z | 38,505,399 | <p>You can create a list with the percentages with the following code:</p>
<pre><code>percentage_list = [((counter[i].get(j) if counter[i].get(j) else 0)*10000)//len(listvalues[i])/100.0 for i in range(len(listvalues)) for j in range(3)]
</code></pre>
<p>After that, create a NumPy array from that list:</p>
<pre><code>results = np.array(percentage_list)
</code></pre>
<p>Reshape it so we get the desired result:</p>
<pre><code>results = results.reshape(3,3)
</code></pre>
<p>This should allow you to get what you wanted.<br>
This is most likely not efficient, and not the best way to do this, but it has the merit of working. </p>
<p>Do not hesitate to ask if you have any questions.</p>
| 0 | 2016-07-21T13:19:08Z | [
"python",
"arrays",
"list",
"numpy"
] |
Calculate percentage of count for a list of arrays | 38,504,737 | <p>Simple problem, but I cannot seem to get it to work. I want to calculate the percentage a number occurs in a list of arrays and output this percentage accordingly.
I have a list of arrays which looks like this:</p>
<pre><code>import numpy as np
# Create some data
listvalues = []
arr1 = np.array([0, 0, 2])
arr2 = np.array([1, 1, 2, 2])
arr3 = np.array([0, 2, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([0, 0, 2]), array([1, 1, 2, 2]), array([0, 2, 2])]
</code></pre>
<p>Now I count the occurrences using collections, which returns a list of collections.Counter objects:</p>
<pre><code>import collections
counter = []
for i in xrange(len(listvalues)):
    counter.append(collections.Counter(listvalues[i]))
counter
>[Counter({0: 2, 2: 1}), Counter({1: 2, 2: 2}), Counter({0: 1, 2: 2})]
</code></pre>
<p>The result I am looking for is an array with 3 columns, representing the values 0 to 2, and len(listvalues) rows. Each cell should be filled with the percentage of that value occurring in the array:</p>
<pre><code># Result
66.66 0 33.33
0 50 50
33.33 0 66.66
</code></pre>
<p>So 0 occurs 66.66% in array 1, 0% in array 2 and 33.33% in array 3, and so on..</p>
<p>What would be the best way to achieve this?
Many thanks!</p>
| 4 | 2016-07-21T12:49:23Z | 38,505,456 | <p>Here's an approach -</p>
<pre><code># Get lengths of each element in input list
lens = np.array([len(item) for item in listvalues])
# Form group ID array to ID elements in flattened listvalues
ID_arr = np.repeat(np.arange(len(lens)),lens)
# Extract all values & considering each row as an indexing perform counting
vals = np.concatenate(listvalues)
out_shp = [ID_arr.max()+1,vals.max()+1]
counts = np.bincount(ID_arr*out_shp[1] + vals)
# Finally get the percentages with dividing by group counts
out = 100*np.true_divide(counts.reshape(out_shp),lens[:,None])
</code></pre>
<p>Sample run with an additional fourth array in input list -</p>
<pre><code>In [316]: listvalues
Out[316]: [array([0, 0, 2]),array([1, 1, 2, 2]),array([0, 2, 2]),array([4, 0, 1])]
In [317]: print out
[[ 66.66666667 0. 33.33333333 0. 0. ]
[ 0. 50. 50. 0. 0. ]
[ 33.33333333 0. 66.66666667 0. 0. ]
[ 33.33333333 33.33333333 0. 0. 33.33333333]]
</code></pre>
| 1 | 2016-07-21T13:21:36Z | [
"python",
"arrays",
"list",
"numpy"
] |
Calculate percentage of count for a list of arrays | 38,504,737 | <p>Simple problem, but I cannot seem to get it to work. I want to calculate the percentage a number occurs in a list of arrays and output this percentage accordingly.
I have a list of arrays which looks like this:</p>
<pre><code>import numpy as np
# Create some data
listvalues = []
arr1 = np.array([0, 0, 2])
arr2 = np.array([1, 1, 2, 2])
arr3 = np.array([0, 2, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([0, 0, 2]), array([1, 1, 2, 2]), array([0, 2, 2])]
</code></pre>
<p>Now I count the occurrences using collections, which returns a a list of collections.Counter:</p>
<pre><code>import collections
counter = []
for i in xrange(len(listvalues)):
counter.append(collections.Counter(listvalues[i]))
counter
>[Counter({0: 2, 2: 1}), Counter({1: 2, 2: 2}), Counter({0: 1, 2: 2})]
</code></pre>
<p>The result I am looking for is an array with 3 columns, representing the value 0 to 2 and len(listvalues) of rows. Each cell should be filled with the percentage of that value occurring in the array:</p>
<pre><code># Result
66.66 0 33.33
0 50 50
33.33 0 66.66
</code></pre>
<p>So 0 occurs 66.66% in array 1, 0% in array 2 and 33.33% in array 3, and so on..</p>
<p>What would be the best way to achieve this?
Many thanks!</p>
| 4 | 2016-07-21T12:49:23Z | 38,506,278 | <p>You can get a list of all values and then simply iterate over the individual arrays to get the percentages:</p>
<p><code>values = sorted(set(y for row in listvalues for y in row))
print [[(a==x).sum()*100.0/len(a) for x in values] for a in listvalues]
</code></p>
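A `collections.Counter`-based variant of the same idea (a sketch building directly on the question's own counting step, assuming the values of interest are 0 to 2) produces the requested rows-by-values percentage matrix without regrouping the data:

```python
from collections import Counter
import numpy as np

listvalues = [np.array([0, 0, 2]), np.array([1, 1, 2, 2]), np.array([0, 2, 2])]

# One row per input array, one column per value 0..2; each cell holds the
# percentage of that value within the array.
result = np.array([[Counter(a.tolist()).get(v, 0) * 100.0 / len(a)
                    for v in range(3)]
                   for a in listvalues])
print(result.round(2))
```

`Counter.get(v, 0)` supplies the zero for values that never occur, which is what makes the 0% cells come out right.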
| 2 | 2016-07-21T13:56:51Z | [
"python",
"arrays",
"list",
"numpy"
] |
Calculate percentage of count for a list of arrays | 38,504,737 | <p>Simple problem, but I cannot seem to get it to work. I want to calculate the percentage a number occurs in a list of arrays and output this percentage accordingly.
I have a list of arrays which looks like this:</p>
<pre><code>import numpy as np
# Create some data
listvalues = []
arr1 = np.array([0, 0, 2])
arr2 = np.array([1, 1, 2, 2])
arr3 = np.array([0, 2, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([0, 0, 2]), array([1, 1, 2, 2]), array([0, 2, 2])]
</code></pre>
<p>Now I count the occurrences using collections, which returns a a list of collections.Counter:</p>
<pre><code>import collections
counter = []
for i in xrange(len(listvalues)):
counter.append(collections.Counter(listvalues[i]))
counter
>[Counter({0: 2, 2: 1}), Counter({1: 2, 2: 2}), Counter({0: 1, 2: 2})]
</code></pre>
<p>The result I am looking for is an array with 3 columns, representing the value 0 to 2 and len(listvalues) of rows. Each cell should be filled with the percentage of that value occurring in the array:</p>
<pre><code># Result
66.66 0 33.33
0 50 50
33.33 0 66.66
</code></pre>
<p>So 0 occurs 66.66% in array 1, 0% in array 2 and 33.33% in array 3, and so on..</p>
<p>What would be the best way to achieve this?
Many thanks!</p>
| 4 | 2016-07-21T12:49:23Z | 38,506,415 | <p>I would like to use a functional approach to solve this problem. For example:</p>
<pre><code>>>> import numpy as np
>>> import pprint
>>>
>>> arr1 = np.array([0, 0, 2])
>>> arr2 = np.array([1, 1, 2, 2])
>>> arr3 = np.array([0, 2, 2])
>>>
>>> arrays = (arr1, arr2, arr3)
>>>
>>> u = np.unique(np.hstack(arrays))
>>>
>>> result = [[1.0 * c.get(uk, 0) / l
... for l, c in ((len(arr), dict(zip(*np.unique(arr, return_counts=True))))
... for arr in arrays)] for uk in u]
>>>
>>> pprint.pprint(result)
[[0.6666666666666666, 0.0, 0.3333333333333333],
[0.0, 0.5, 0.0],
[0.3333333333333333, 0.5, 0.6666666666666666]]
</code></pre>
| 0 | 2016-07-21T14:03:43Z | [
"python",
"arrays",
"list",
"numpy"
] |
Python Pandas DataFrame check if string is other string and fill column | 38,504,740 | <p>I am learning Python and this might be a noob question:</p>
<pre><code>import pandas as pd
A = pd.DataFrame({"A":["house", "mouse", "car", "tree"]})
check_list = ["house", "tree"]
</code></pre>
<p>I want to check row-wise if the string in A is in check_list. The result should be: </p>
<pre><code> A YESorNO
0 house YES
1 mouse NO
2 car NO
3 tree YES
</code></pre>
| 0 | 2016-07-21T12:49:30Z | 38,504,774 | <p>Use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>import pandas as pd
import numpy as np
A = pd.DataFrame({"A":["house", "mouse", "car", "tree"]})
check_list = ["house", "tree"]
A['YESorNO'] = np.where(A['A'].isin(check_list),'YES','NO')
print (A)
A YESorNO
0 house YES
1 mouse NO
2 car NO
3 tree YES
</code></pre>
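An equivalent sketch without numpy: `isin` already yields a boolean Series, and `map` translates the booleans into the two labels.

```python
import pandas as pd

A = pd.DataFrame({"A": ["house", "mouse", "car", "tree"]})
check_list = ["house", "tree"]

# isin produces True/False per row; map converts each boolean to a label.
A['YESorNO'] = A['A'].isin(check_list).map({True: 'YES', False: 'NO'})
print(A)
```

`np.where` is usually faster on large frames, but the `map` form keeps the whole pipeline in pandas.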
| 1 | 2016-07-21T12:51:09Z | [
"python",
"string",
"if-statement",
"pandas"
] |
Python Pandas DataFrame check if string is other string and fill column | 38,504,740 | <p>I am learning Python and this might be a noob question:</p>
<pre><code>import pandas as pd
A = pd.DataFrame({"A":["house", "mouse", "car", "tree"]})
check_list = ["house", "tree"]
</code></pre>
<p>I want to check row-wise if the string in A is in check_list. The result should be: </p>
<pre><code> A YESorNO
0 house YES
1 mouse NO
2 car NO
3 tree YES
</code></pre>
| 0 | 2016-07-21T12:49:30Z | 38,508,711 | <p>If for some reason you don't want to import numpy,</p>
<pre><code>import pandas as pd
A = pd.DataFrame({"A":["house", "mouse", "car", "tree"]})
check_list = ["house", "tree"]
</code></pre>
<p>Here's the one-liner:</p>
<pre><code>A['YESorNO'] = ['YES' if x in check_list else 'NO' for x in A['A']]
</code></pre>
| 1 | 2016-07-21T15:45:28Z | [
"python",
"string",
"if-statement",
"pandas"
] |
Print each n-th row of pandas dataframe | 38,504,756 | <p>Is there an elegant solution to print only every n-th row of a pandas DataFrame? For instance, I would like to print only every 2nd row.</p>
<p>this could be done via</p>
<pre><code>i = 0
for index, row in df.iterrows():
    if i % 2 == 0:
        print(row)
    i += 1
</code></pre>
<p>but is there a more pythonic way to do this?</p>
| 2 | 2016-07-21T12:50:12Z | 38,504,785 | <p>Slice the df with a step parameter using <code>iloc</code>:</p>
<pre><code>print(df.iloc[::2])
In [73]:
df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'))
df
Out[73]:
a b c
0 0.613844 -0.167024 -1.287091
1 0.473858 -0.456157 0.037850
2 0.020583 0.368597 -0.147517
3 0.152791 -1.231226 -0.570839
4 -0.280074 0.806033 -1.610855
In [77]:
print(df.iloc[::2])
a b c
0 0.613844 -0.167024 -1.287091
2 0.020583 0.368597 -0.147517
4 -0.280074 0.806033 -1.610855
</code></pre>
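The step also combines with a start offset, which generalizes the slice to "every n-th row starting from row k". A small sketch:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(12).reshape(6, 2), columns=['a', 'b'])

# Every 3rd row from the top, and every 3rd row starting at row 1.
every_third = df.iloc[::3]
offset_third = df.iloc[1::3]
print(every_third.index.tolist())   # [0, 3]
print(offset_third.index.tolist())  # [1, 4]
```

Because `iloc` is positional, this works regardless of what the index labels are.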
| 4 | 2016-07-21T12:51:31Z | [
"python",
"pandas",
"printing"
] |
Fast calculation of v-disparity with OpenCV-Function calcHist | 38,504,760 | <p>Based on a disparity matrix from a passive stereo-camera system I need to calculate a v-disparity representation for obstacle detection with OpenCV.</p>
<p>A working implementation is <strong>not</strong> the problem. The problem is to do it fast...</p>
<p>(One) Reference for v-Disparity: Labayrade, R., Aubert, D. and Tarel, J.P., "Real time obstacle detection in stereovision on non flat road geometry through v-disparity representation"</p>
<p>In short, to get the v-disparity (figure <a href="http://i.stack.imgur.com/fPvsg.png" rel="nofollow">1</a>), you analyze the rows of the disparity matrix (figure <a href="http://i.stack.imgur.com/YXYvf.png" rel="nofollow">2</a>) and represent the result as a histogram over the disparity values for each row. The u-disparity (figure <a href="http://i.stack.imgur.com/R0qSt.png" rel="nofollow">3</a>) is the same over the columns of the disparity matrix. (All figures are false-colored.)</p>
<p>I have implemented the "same" thing in Python and C++. The speed in Python is acceptable, but in C++ the u- and v-disparity calculation takes about half a second (0.5 s).</p>
<p><em>(1st edit: the separate time measurements show that only the calculation of the u-histogram takes a large amount of time...)</em></p>
<p>This leads me to the following questions:</p>
<ol>
<li><p>Is it possible to avoid the loops for the line-wise calculation of the histograms? Is there a "trick" to do it with one call of the OpenCV <code>calcHist</code> function, perhaps via its dimensions parameters?</p></li>
<li><p>Is my C++ version just badly coded, with the runtime issue not actually related to the loops used for the calculation?</p></li>
</ol>
<p>Thanks, all</p>
<hr>
<p>Working implementation in Python:</p>
<pre><code>#!/usr/bin/env python2
#-*- coding: utf-8 -*-
#
# THIS SOURCE-CODE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED. IN NO EVENT WILL THE AUTHOR BE HELD LIABLE FOR ANY DAMAGES ARISING FROM
# THE USE OF THIS SOURCE-CODE. USE AT YOUR OWN RISK.
import cv2
import numpy as np
import time
def draw_object(image, x, y, width=50, height=100):
    color = image[y, x]
    image[y-height:y, x-width//2:x+width//2] = color

IMAGE_HEIGHT = 600
IMAGE_WIDTH = 800

while True:
    max_disp = 200
    # create fake disparity
    image = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH), np.uint8)
    for c in range(IMAGE_HEIGHT)[::-1]:
        image[c, ...] = int(float(c) / IMAGE_HEIGHT * max_disp)
    draw_object(image, 275, 175)
    draw_object(image, 300, 200)
    draw_object(image, 100, 350)
    # calculate v-disparity
    vhist_vis = np.zeros((IMAGE_HEIGHT, max_disp), np.float)
    for i in range(IMAGE_HEIGHT):
        vhist_vis[i, ...] = cv2.calcHist(images=[image[i, ...]], channels=[0], mask=None, histSize=[max_disp],
                                         ranges=[0, max_disp]).flatten() / float(IMAGE_HEIGHT)
    vhist_vis = np.array(vhist_vis * 255, np.uint8)
    vblack_mask = vhist_vis < 5
    vhist_vis = cv2.applyColorMap(vhist_vis, cv2.COLORMAP_JET)
    vhist_vis[vblack_mask] = 0
    # calculate u-disparity
    uhist_vis = np.zeros((max_disp, IMAGE_WIDTH), np.float)
    for i in range(IMAGE_WIDTH):
        uhist_vis[..., i] = cv2.calcHist(images=[image[..., i]], channels=[0], mask=None, histSize=[max_disp],
                                         ranges=[0, max_disp]).flatten() / float(IMAGE_WIDTH)
    uhist_vis = np.array(uhist_vis * 255, np.uint8)
    ublack_mask = uhist_vis < 5
    uhist_vis = cv2.applyColorMap(uhist_vis, cv2.COLORMAP_JET)
    uhist_vis[ublack_mask] = 0
    image = cv2.applyColorMap(image, cv2.COLORMAP_JET)
    cv2.imshow('image', image)
    cv2.imshow('vhist_vis', vhist_vis)
    cv2.imshow('uhist_vis', uhist_vis)
    cv2.imwrite('disparity_image.png', image)
    cv2.imwrite('v-disparity.png', vhist_vis)
    cv2.imwrite('u-disparity.png', uhist_vis)
    if chr(cv2.waitKey(0)&255) == 'q':
        break
</code></pre>
<hr>
<p>Working implementation in C++:</p>
<pre><code>#include <iostream>
#include <stdlib.h>
#include <ctime>
#include <opencv2/opencv.hpp>
using namespace std;
void draw_object(cv::Mat image, unsigned int x, unsigned int y, unsigned int width=50, unsigned int height=100)
{
    image(cv::Range(y-height, y), cv::Range(x-width/2, x+width/2)) = image.at<unsigned char>(y, x);
}

int main()
{
    unsigned int IMAGE_HEIGHT = 600;
    unsigned int IMAGE_WIDTH = 800;
    unsigned int MAX_DISP = 250;
    unsigned int CYCLE = 0;

    //setenv("QT_GRAPHICSSYSTEM", "native", 1);

    // === PREPARATIONS ===
    cv::Mat image = cv::Mat::zeros(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8U);
    cv::Mat vhist = cv::Mat::zeros(IMAGE_HEIGHT, MAX_DISP, CV_32F);
    cv::Mat uhist = cv::Mat::zeros(MAX_DISP, IMAGE_WIDTH, CV_32F);
    cv::Mat tmpImageMat, tmpHistMat;

    float value_ranges[] = {(float)0, (float)MAX_DISP};
    const float* hist_ranges[] = {value_ranges};
    int channels[] = {0};
    int histSize[] = {(int)MAX_DISP};

    struct timespec start, finish;
    double elapsed;

    while(1)
    {
        CYCLE++;

        // === CLEANUP ===
        image = cv::Mat::zeros(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8U);
        vhist = cv::Mat::zeros(IMAGE_HEIGHT, MAX_DISP, CV_32F);
        uhist = cv::Mat::zeros(MAX_DISP, IMAGE_WIDTH, CV_32F);

        // === CREATE FAKE DISPARITY WITH OBJECTS ===
        for(int i = 0; i < IMAGE_HEIGHT; i++)
            image.row(i) = ((float)i / IMAGE_HEIGHT * MAX_DISP);

        draw_object(image, 200, 500);
        draw_object(image, 525 + CYCLE%100, 275);
        draw_object(image, 500, 300 + CYCLE%100);

        clock_gettime(CLOCK_MONOTONIC, &start);

        // === CALCULATE V-HIST ===
        for(int i = 0; i < IMAGE_HEIGHT; i++)
        {
            tmpImageMat = image.row(i);
            vhist.row(i).copyTo(tmpHistMat);
            cv::calcHist(&tmpImageMat, 1, channels, cv::Mat(), tmpHistMat, 1, histSize, hist_ranges, true, false);
            vhist.row(i) = tmpHistMat.t() / (float) IMAGE_HEIGHT;
        }

        clock_gettime(CLOCK_MONOTONIC, &finish);
        elapsed = (finish.tv_sec - start.tv_sec);
        elapsed += (finish.tv_nsec - start.tv_nsec) * 1e-9;
        cout << "V-HIST-TIME: " << elapsed << endl;

        clock_gettime(CLOCK_MONOTONIC, &start);

        // === CALCULATE U-HIST ===
        for(int i = 0; i < IMAGE_WIDTH; i++)
        {
            tmpImageMat = image.col(i);
            uhist.col(i).copyTo(tmpHistMat);
            cv::calcHist(&tmpImageMat, 1, channels, cv::Mat(), tmpHistMat, 1, histSize, hist_ranges, true, false);
            uhist.col(i) = tmpHistMat / (float) IMAGE_WIDTH;
        }

        clock_gettime(CLOCK_MONOTONIC, &finish);
        elapsed = (finish.tv_sec - start.tv_sec);
        elapsed += (finish.tv_nsec - start.tv_nsec) * 1e-9;
        cout << "U-HIST-TIME: " << elapsed << endl;

        // === PREPARE AND SHOW RESULTS ===
        uhist.convertTo(uhist, CV_8U, 255);
        cv::applyColorMap(uhist, uhist, cv::COLORMAP_JET);
        vhist.convertTo(vhist, CV_8U, 255);
        cv::applyColorMap(vhist, vhist, cv::COLORMAP_JET);

        cv::imshow("image", image);
        cv::imshow("uhist", uhist);
        cv::imshow("vhist", vhist);

        if ((cv::waitKey(1)&255) == 'q')
            break;
    }
    return 0;
}
</code></pre>
<hr>
<p><a href="http://i.stack.imgur.com/fPvsg.png" rel="nofollow"><img src="http://i.stack.imgur.com/fPvsg.png" alt="enter image description here"></a>
Figure 1: v-disparity</p>
<p><a href="http://i.stack.imgur.com/YXYvf.png" rel="nofollow"><img src="http://i.stack.imgur.com/YXYvf.png" alt="fake disparity matrix"></a>
Figure 2: Fake disparity matrix</p>
<p><a href="http://i.stack.imgur.com/R0qSt.png" rel="nofollow"><img src="http://i.stack.imgur.com/R0qSt.png" alt="enter image description here"></a>
Figure 3: u-disparity</p>
<hr>
<ol>
<li>edit:
<ul>
<li>correct name for u- and v-disparity and separate time measurement in c++ example</li>
<li>small typo</li>
</ul></li>
</ol>
| 0 | 2016-07-21T12:50:17Z | 38,956,111 | <p>Today I had the opportunity to reinvestigate the problem. Remembering the OpenCV basics (<a href="http://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html" rel="nofollow">1</a>) of the <code>Mat</code> structure, and the fact that only one calculation takes a huge amount of time, I found the solution.</p>
<p>In OpenCV, each row of an image can be reached via a row pointer. When iterating over columns (as done in the u-disparity calculation), I suspect that OpenCV has to resolve every row pointer plus a column offset to build the histogram.</p>
<p>Changing the code so that OpenCV can use row pointers solves the problem for me.</p>
<pre><code>            | old code [s] | changed [s]
------------+--------------+-------------
V-HIST-TIME | 0.00351909   | 0.00334152
U-HIST-TIME | 0.600039     | 0.00449285
</code></pre>
<p>So for the u-hist loop I transpose the image and reverse the operation after the loop. The line-wise access for the calculation can now be done via the row pointer.</p>
<p>Changed code lines:</p>
<pre><code>// === CALCULATE U-HIST ===
image = image.t();
for(int i = 0; i < IMAGE_WIDTH; i++)
{
    tmpImageMat = image.row(i);
    uhist.col(i).copyTo(tmpHistMat);
    cv::calcHist(&tmpImageMat, 1, channels, cv::Mat(), tmpHistMat, 1, histSize, hist_ranges, true, false);
    uhist.col(i) = tmpHistMat / (float) IMAGE_WIDTH;
}
image = image.t();
</code></pre>
<hr>
<p>Finally, my second question is settled: the runtime issue was not caused by the loop itself. A time of less than 5 ms is (for now) fast enough. </p>
| 0 | 2016-08-15T13:35:21Z | [
"python",
"c++",
"opencv",
"computer-vision"
] |
RE pandas resample | 38,504,852 | <p>I've been trying to take an average of a month's worth of data, but I wanted to check that:</p>
<pre><code>df=df.resample('M').mean()
</code></pre>
<p>does give the monthly mean and NOT the mean of the last calendar day of the month. </p>
<p>Also, I've seen <code>W-MON</code>, which would give an average for each Monday at a weekly frequency. What would be the equivalent to compare the monthly average of October over multiple years?
I thought it would be this, but pandas doesn't seem to recognise the command:</p>
<pre><code>df=df.resample("M-OCT").mean()
</code></pre>
| 2 | 2016-07-21T12:55:01Z | 38,505,500 | <p>try this:</p>
<pre><code>df.assign(y=df.index.year, m=df.index.month).query('m==10').groupby(['y', 'm']).mean()
</code></pre>
<p>PS if you need a neat and tested answer please post sample data set and desired output in your question...</p>
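<p>As a small, reproducible sketch of the suggestion above (the sample data below is made up, not from the question):</p>

```python
import numpy as np
import pandas as pd

# Build a daily time series spanning three years (hypothetical values).
idx = pd.date_range('2014-01-01', '2016-12-31', freq='D')
df = pd.DataFrame({'val': np.arange(len(idx), dtype=float)}, index=idx)

# Average only the October rows, grouped per (year, month).
oct_means = (df.assign(y=df.index.year, m=df.index.month)
               .query('m == 10')
               .groupby(['y', 'm'])
               .mean())
print(oct_means)
```

<p>This yields one row per year, each holding the mean of that year's October values.</p>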
| 1 | 2016-07-21T13:23:37Z | [
"python",
"pandas",
"dataframe",
"time-series"
] |
A more pythonic (or pandorable) way to change a list of columns to different data types | 38,504,885 | <p>Often when wrangling data I have to change datatypes.</p>
<p>For example </p>
<pre><code> In [11]: import pandas as pd
In [12]: import numpy as np
In [13]: df = pd.DataFrame({'col2': {0: 'apples', 1: 'oranges', 2: 'rabbit'}, 'col1': {0: 'white', 1: 'marshmallow', 2: 'bandwagon'}}
)
In [14]: df.dtypes
Out[14]:
col1 object
col2 object
dtype: object
In [15]: for col in df.columns:
df[col] = df[col].astype('category')
....:
In [16]: df.dtypes
Out[16]:
col1 category
col2 category
dtype: object
</code></pre>
<p>Is there a more pandas friendly way to do this - using for example a list comprehension? I feel that the for loop is slow... </p>
<p>This is a really common thing I have to do, and I'm just wondering if there's an idiom I'm not aware of. </p>
| 1 | 2016-07-21T12:56:13Z | 38,505,051 | <p>I think your solution is nice.</p>
<p>Another is:</p>
<pre><code>df[['col1','col2']] = df[['col1','col2']].apply(lambda x: x.astype('category'))
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [32]: %timeit (orig(df))
10 loops, best of 3: 27.8 ms per loop
In [33]: %timeit df.apply(lambda x: x.astype('category'))
10 loops, best of 3: 41.5 ms per loop
In [34]: %timeit pd.concat([df[col].astype('category') for col in df], axis=1)
100 loops, best of 3: 18.7 ms per loop
</code></pre>
<p>Code for timings:</p>
<pre><code>df = pd.DataFrame({'col2': ['apples', 'oranges', 'rabbit'],
'col1': ['white', 'marshmallow', 'bandwagon']})
df = pd.concat([df]*1000)
df = pd.concat([df]*100, axis=1)
df.columns = range(df.shape[1])
df[df.columns] = df[df.columns].apply(lambda x: x.astype('category'))
print (df)
df = pd.concat([df[col].astype('category') for col in df], axis=1)
print (df)
def orig(df):
for col in df.columns:
df[col] = df[col].astype('category')
return df
</code></pre>
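<p>Depending on your pandas version, there may be an even shorter route: <code>DataFrame.astype</code> accepts <code>'category'</code> directly and converts every column in one call (a sketch, not benchmarked here):</p>

```python
import pandas as pd

# Sketch (assumes a pandas version where DataFrame.astype('category') is
# supported): convert every column of the frame in a single call.
df = pd.DataFrame({'col1': ['white', 'marshmallow', 'bandwagon'],
                   'col2': ['apples', 'oranges', 'rabbit']})
df = df.astype('category')
print(df.dtypes)
```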
| 1 | 2016-07-21T13:03:40Z | [
"python",
"pandas"
] |
Reading a .VTK polydata file and converting it into Numpy array | 38,504,907 | <p>I want to convert a .VTK ASCII polydata file into numpy array of just the coordinates of the points. I first tried this: <a href="http://stackoverflow.com/a/11894302">http://stackoverflow.com/a/11894302</a> but it stores a (3,3) numpy array where each entry is actually the coordinates of THREE points that make that particular cell (in this case a triangle). However, I don't want the cells, I want the coordinates of each point (without repeatition). Next I tried this: <a href="http://stackoverflow.com/a/23359921/6619666">http://stackoverflow.com/a/23359921/6619666</a> with some modifications. Here is my final code. Instead of numpy array, the values are being stored as a tuple but I am not sure if that tuple represents each point.</p>
<pre><code>import sys
import numpy
import vtk
from vtk.util.numpy_support import vtk_to_numpy
reader = vtk.vtkPolyDataReader()
reader.SetFileName('Filename.vtk')
reader.ReadAllScalarsOn()
reader.ReadAllVectorsOn()
reader.Update()
nodes_vtk_array= reader.GetOutput().GetPoints().GetData()
print nodes_vtk_array
</code></pre>
<p>Please give suggestions.</p>
| 0 | 2016-07-21T12:57:13Z | 38,512,033 | <p>You can get the point coordinates from a polydata object like so:</p>
<pre><code>polydata = reader.GetOutput()
points = polydata.GetPoints()
array = points.GetData()
numpy_nodes = vtk_to_numpy(array)
</code></pre>
| 1 | 2016-07-21T18:41:36Z | [
"python",
"arrays",
"numpy",
"vtk"
] |
Python: sorting list of lists not functioning as intended | 38,505,170 | <p>I am trying to sort the following list of lists by the first item of each list in ascending order:</p>
<pre><code>framenos = [
['1468', '2877', 'Pos.:', 95],
['3185', '4339', 'Pos.:', 96],
['195', '1460', 'Pos.:', 97]
]
</code></pre>
<p>I am using the following to do so:</p>
<pre><code>framesorted = sorted(framenos, key=lambda x: x[0]) #sort ranges by start numbers
</code></pre>
<p>Which gives:</p>
<pre><code>[['1468', '2877', 'Pos.:', 95], ['195', '1460', 'Pos.:', 97], ['3185', '4339', 'Pos.:', 96]]
</code></pre>
<p>What's going wrong?</p>
| 0 | 2016-07-21T13:09:06Z | 38,505,230 | <p>Your values are strings, so you are sorting <a href="https://en.wikipedia.org/wiki/Lexicographical_order"><em>lexicographically</em></a>, not numerically. <code>'1468'</code> is sorted before <code>'195'</code> because <code>'4'</code> comes before <code>'9'</code> in the ASCII standard, just like <code>'Ask'</code> would be sorted before <code>'Attribution'</code>.</p>
<p>Convert your strings to numbers if you need a numeric sort:</p>
<pre><code>framesorted = sorted(framenos, key=lambda x: int(x[0]))
</code></pre>
<p>Demo:</p>
<pre><code>>>> framenos = [
... ['1468', '2877', 'Pos.:', 95],
... ['3185', '4339', 'Pos.:', 96],
... ['195', '1460', 'Pos.:', 97]
... ]
>>> sorted(framenos, key=lambda x: int(x[0]))
[['195', '1460', 'Pos.:', 97], ['1468', '2877', 'Pos.:', 95], ['3185', '4339', 'Pos.:', 96]]
>>> from pprint import pprint
>>> pprint(_)
[['195', '1460', 'Pos.:', 97],
['1468', '2877', 'Pos.:', 95],
['3185', '4339', 'Pos.:', 96]]
</code></pre>
| 5 | 2016-07-21T13:11:23Z | [
"python",
"list",
"sorting",
"lambda"
] |
Python: sorting list of lists not functioning as intended | 38,505,170 | <p>I am trying to sort the following list of lists by the first item of each list in ascending order:</p>
<pre><code>framenos = [
['1468', '2877', 'Pos.:', 95],
['3185', '4339', 'Pos.:', 96],
['195', '1460', 'Pos.:', 97]
]
</code></pre>
<p>I am using the following to do so:</p>
<pre><code>framesorted = sorted(framenos, key=lambda x: x[0]) #sort ranges by start numbers
</code></pre>
<p>Which gives:</p>
<pre><code>[['1468', '2877', 'Pos.:', 95], ['195', '1460', 'Pos.:', 97], ['3185', '4339', 'Pos.:', 96]]
</code></pre>
<p>What's going wrong?</p>
| 0 | 2016-07-21T13:09:06Z | 38,505,289 | <p>Since the first element of each list is a string, it is sorting these numbers in alphabetical order. In order to sort these lists based on the integer value of the first element, try casting to <code>int</code>:</p>
<pre><code>framesorted = sorted(framenos, key=lambda x: int(x[0]))
</code></pre>
| 0 | 2016-07-21T13:14:06Z | [
"python",
"list",
"sorting",
"lambda"
] |
How to remove double quotes from index of csv file in python | 38,505,250 | <p>I'm trying to read multiple <code>csv</code> files with python. The index of raw data(or the first column) has a little problem, the partial csv file looks like this:</p>
<pre><code>NoDemande;"NoUsager";"Sens";"IdVehiculeUtilise";"NoConducteur";"NoAdresse";"Fait";"HeurePrevue"
42210000003;"42210000529";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000005;"42210001805";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000004;"42210002678";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000003;"42210000529";"â";"265Véh";"42210000032";"42210004900";"1";"25/07/2015 10:50:03"
42210000004;"42210002678";"â";"265Véh";"42210000032";"42210007072";"1";"25/07/2015 11:25:03"
42210000005;"42210001805";"â";"265Véh";"42210000032";"42210004236";"1";"25/07/2015 11:40:03"
</code></pre>
<p>The first index has no <code>""</code>, after reading the file, it looks like: <code>"NoDemande"</code> while others have no <code>""</code>, and the rest of column looks just fine, which makes the result looks like(not the same lines):</p>
<pre><code>"NoDemande" NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse Fait HeurePrevue
42209000003 42209001975 + 245Véh 42209000002 42209005712 1 24/07/2015 06:30:04
42209000004 42209002021 + 245Véh 42209000002 42209005712 1 24/07/2015 06:30:04
42209000005 42209002208 + 245Véh 42209000002 42209005713 1 24/07/2015 06:45:04
42216000357 42216001501 - 190Véh 42216000139 42216001418 1 31/07/2015 17:15:03
42216000139 42216000788 - 309V7pVéh 42216000059 42216006210 1 31/07/2015 17:15:03
42216000118 42216000188 - 198Véh 42216000051 42216006374 1 31/07/2015 17:15:03
</code></pre>
<p>It causes problems identifying the name of the index in later steps. How can I solve this problem?
Here's my code for reading the files:</p>
<pre><code>import pandas as pd
import glob
pd.set_option('expand_frame_repr', False)
path = r'D:\Python27\mypfe\data_test'
allFiles = glob.glob(path + "/*.csv")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
#Read file
df = pd.read_csv(file_,header=0,sep=';',dayfirst=True,encoding='utf8',
dtype='str')
df['Sens'].replace(u'\u2014','-',inplace=True)
list_.append(df)
print"fichier lu ",file_
frame = pd.concat(list_)
print frame
</code></pre>
| 1 | 2016-07-21T13:12:08Z | 39,364,688 | <p>In fact, I was stuck on how to remove the double quotes from the index. After changing my approach, I think it's better to add a new column that copies the values from the original one, and then delete the original. The new column will then have the index you want.
In my case, I did:</p>
<pre><code>frame['NoDemande'] = frame.ix[:, 0]
tl = frame.drop(frame.columns[0],axis=1)
</code></pre>
<p>So I got a new DataFrame with everything I wanted.</p>
| 0 | 2016-09-07T08:19:03Z | [
"python",
"csv",
"pandas",
"dataframe",
"double-quotes"
] |
How to remove double quotes from index of csv file in python | 38,505,250 | <p>I'm trying to read multiple <code>csv</code> files with python. The index of raw data(or the first column) has a little problem, the partial csv file looks like this:</p>
<pre><code>NoDemande;"NoUsager";"Sens";"IdVehiculeUtilise";"NoConducteur";"NoAdresse";"Fait";"HeurePrevue"
42210000003;"42210000529";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000005;"42210001805";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000004;"42210002678";"+";"265Véh";"42210000032";"42210002932";"1";"25/07/2015 10:00:04"
42210000003;"42210000529";"â";"265Véh";"42210000032";"42210004900";"1";"25/07/2015 10:50:03"
42210000004;"42210002678";"â";"265Véh";"42210000032";"42210007072";"1";"25/07/2015 11:25:03"
42210000005;"42210001805";"â";"265Véh";"42210000032";"42210004236";"1";"25/07/2015 11:40:03"
</code></pre>
<p>The first index has no <code>""</code>, after reading the file, it looks like: <code>"NoDemande"</code> while others have no <code>""</code>, and the rest of column looks just fine, which makes the result looks like(not the same lines):</p>
<pre><code>"NoDemande" NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse Fait HeurePrevue
42209000003 42209001975 + 245Véh 42209000002 42209005712 1 24/07/2015 06:30:04
42209000004 42209002021 + 245Véh 42209000002 42209005712 1 24/07/2015 06:30:04
42209000005 42209002208 + 245Véh 42209000002 42209005713 1 24/07/2015 06:45:04
42216000357 42216001501 - 190Véh 42216000139 42216001418 1 31/07/2015 17:15:03
42216000139 42216000788 - 309V7pVéh 42216000059 42216006210 1 31/07/2015 17:15:03
42216000118 42216000188 - 198Véh 42216000051 42216006374 1 31/07/2015 17:15:03
</code></pre>
<p>It causes problems identifying the name of the index in later steps. How can I solve this problem?
Here's my code for reading the files:</p>
<pre><code>import pandas as pd
import glob
pd.set_option('expand_frame_repr', False)
path = r'D:\Python27\mypfe\data_test'
allFiles = glob.glob(path + "/*.csv")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
#Read file
df = pd.read_csv(file_,header=0,sep=';',dayfirst=True,encoding='utf8',
dtype='str')
df['Sens'].replace(u'\u2014','-',inplace=True)
list_.append(df)
print"fichier lu ",file_
frame = pd.concat(list_)
print frame
</code></pre>
| 1 | 2016-07-21T13:12:08Z | 39,364,879 | <p>I think the simpliest is set new column names:</p>
<pre><code>df.columns = ['NoDemande1'] + df.columns[1:].tolist()
print (df)
NoDemande1 NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse \
0 42210000003 42210000529 + 265Véh 42210000032 42210002932
1 42210000005 42210001805 + 265Véh 42210000032 42210002932
2 42210000004 42210002678 + 265Véh 42210000032 42210002932
3 42210000003 42210000529 - 265Véh 42210000032 42210004900
4 42210000004 42210002678 - 265Véh 42210000032 42210007072
5 42210000005 42210001805 - 265Véh 42210000032 42210004236
Fait HeurePrevue
0 1 25/07/2015;10:00:04
1 1 25/07/2015;10:00:04
2 1 25/07/2015;10:00:04
3 1 25/07/2015;10:50:03
4 1 25/07/2015;11:25:03
5 1 25/07/2015;11:40:03
</code></pre>
<p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>strip</code></a> values <code>"</code> from column names:</p>
<pre><code>print (df)
"NoDemande" NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse \
0 42210000003 42210000529 + 265Véh 42210000032 42210002932
1 42210000005 42210001805 + 265Véh 42210000032 42210002932
2 42210000004 42210002678 + 265Véh 42210000032 42210002932
3 42210000003 42210000529 - 265Véh 42210000032 42210004900
4 42210000004 42210002678 - 265Véh 42210000032 42210007072
5 42210000005 42210001805 - 265Véh 42210000032 42210004236
Fait HeurePrevue
0 1 25/07/2015;10:00:04
1 1 25/07/2015;10:00:04
2 1 25/07/2015;10:00:04
3 1 25/07/2015;10:50:03
4 1 25/07/2015;11:25:03
5 1 25/07/2015;11:40:03
df.columns = df.columns.str.strip('"')
print (df)
NoDemande NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse \
0 42210000003 42210000529 + 265Véh 42210000032 42210002932
1 42210000005 42210001805 + 265Véh 42210000032 42210002932
2 42210000004 42210002678 + 265Véh 42210000032 42210002932
3 42210000003 42210000529 - 265Véh 42210000032 42210004900
4 42210000004 42210002678 - 265Véh 42210000032 42210007072
5 42210000005 42210001805 - 265Véh 42210000032 42210004236
Fait HeurePrevue
0 1 25/07/2015;10:00:04
1 1 25/07/2015;10:00:04
2 1 25/07/2015;10:00:04
3 1 25/07/2015;10:50:03
4 1 25/07/2015;11:25:03
5 1 25/07/2015;11:40:03
</code></pre>
| 0 | 2016-09-07T08:28:51Z | [
"python",
"csv",
"pandas",
"dataframe",
"double-quotes"
] |
Trigger loop after n lines in a text file | 38,505,339 | <p>I want to execute a loop if and only if 5 lines have been written inside the text file that's being written to. The reason being, I want the average to be calculated from the final 5 lines of the text file, and if the program doesn't have 5 numbers to work with, then a runtime error is thrown. </p>
<pre><code> #Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1290429
#Loop
logFile = open("PastWinners.txt", "r+")
logFile.truncate()
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint[0:-1]+"\n")
except:
continue
for i, line in enumerate(logFile): #After 5 lines, execute this
if i > 4:
data = [float(line.rstrip()) for line in logFile]
print("Average: " + "{0:0.2f}".format(sum(data[-5:])/len(data[-5:])))
else:
continue
print(crashPoint[0:-1])
pageCount+=1
logFile.close()
</code></pre>
<p>If anyone knows the solution, or knows a better way to go about doing this, it would be helpful, thanks :).</p>
<p><strong>Edit:</strong></p>
<p>Updated Code:</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1290429
lineCount = 0
def FindAverage():
with open('PastWinners.txt') as logFile:
data = [float(line.rstrip()) for line in logFile]
print("Average: " + "{0:0.2f}".format(sum(data[-5:])/len(data[-5:])))
#Loop
logFile = open("PastWinners.txt", "r+")
logFile.truncate()
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
if lineCount > 4:
logFile.close()
FindAverage()
else:
continue
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint[0:-1]+"\n")
except:
continue
print(crashPoint[0:-1])
pageCount+=1
lineCount+=1
logFile.close()
</code></pre>
<p>New Problem:
The program runs as expected; however, once the average is calculated and displayed, the program doesn't loop again, it stops. I want it to work so that after 5 lines it calculates the average, then displays the next number, then displays a new average, and so on and so forth.</p>
| 1 | 2016-07-21T13:16:27Z | 38,506,115 | <p>Your <code>while</code> loop is never going to end. I think you meant to decrement: <code>pageCount-=1</code>.</p>
| 0 | 2016-07-21T13:49:49Z | [
"python",
"file",
"loops",
"if-statement",
"text"
] |
Trigger loop after n lines in a text file | 38,505,339 | <p>I want to execute a loop if and only if 5 lines have been written inside the text file that's being written to. The reason being, I want the average to be calculated from the final 5 lines of the text file, and if the program doesn't have 5 numbers to work with, then a runtime error is thrown. </p>
<pre><code> #Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1290429
#Loop
logFile = open("PastWinners.txt", "r+")
logFile.truncate()
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint[0:-1]+"\n")
except:
continue
for i, line in enumerate(logFile): #After 5 lines, execute this
if i > 4:
data = [float(line.rstrip()) for line in logFile]
print("Average: " + "{0:0.2f}".format(sum(data[-5:])/len(data[-5:])))
else:
continue
print(crashPoint[0:-1])
pageCount+=1
logFile.close()
</code></pre>
<p>If anyone knows the solution, or knows a better way to go about doing this, it would be helpful, thanks :).</p>
<p><strong>Edit:</strong></p>
<p>Updated Code:</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1290429
lineCount = 0
def FindAverage():
with open('PastWinners.txt') as logFile:
data = [float(line.rstrip()) for line in logFile]
print("Average: " + "{0:0.2f}".format(sum(data[-5:])/len(data[-5:])))
#Loop
logFile = open("PastWinners.txt", "r+")
logFile.truncate()
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
if lineCount > 4:
logFile.close()
FindAverage()
else:
continue
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint[0:-1]+"\n")
except:
continue
print(crashPoint[0:-1])
pageCount+=1
lineCount+=1
logFile.close()
</code></pre>
<p>New Problem:
The program runs as expected; however, once the average is calculated and displayed, the program doesn't loop again, it stops. I want it to work so that after 5 lines it calculates the average, then displays the next number, then displays a new average, and so on and so forth.</p>
| 1 | 2016-07-21T13:16:27Z | 38,506,516 | <p>The problem at the end was that the loop wasn't restarting and just finished after the first average calculation. This was because the logFile was closed and never reopened; after reopening it in append mode, the program works just as expected. Thanks to all for the help.</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1290429
lineCount = 0
def FindAverage():
with open('PastWinners.txt') as logFile:
data = [float(line.rstrip()) for line in logFile]
print("Average: " + "{0:0.2f}".format(sum(data[-5:])/len(data[-5:])))
#Loop
logFile = open("PastWinners.txt", "r+")
logFile.truncate()
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint[0:-1]+"\n")
except:
continue
print(crashPoint[0:-1])
pageCount+=1
lineCount+=1
if lineCount > 4:
logFile.close()
FindAverage()
logFile = open("PastWinners.txt", "a+")
else:
continue
logFile.close()
</code></pre>
| 0 | 2016-07-21T14:08:01Z | [
"python",
"file",
"loops",
"if-statement",
"text"
] |
Django storing a lot of data in table | 38,505,345 | <p>Right now, I use this code to save the data to the database-</p>
<pre><code>for i in range(len(companies)):
for j in range(len(final_prices)):
linechartdata = LineChartData()
        linechartdata.foundation = company  # this refers to a foreign key of a different model
linechartdata.date = finald[j]
linechartdata.price = finalp[j]
linechartdata.save()
</code></pre>
<p>Now <code>len(companies)</code> can vary between 3 and 50, and <code>len(final_prices)</code> can vary somewhere between 5000 and 10000. I know it's a very inefficient way to store it in the database and takes a lot of time. What should I do to make it effective and less time consuming?</p>
| 0 | 2016-07-21T13:16:38Z | 38,505,564 | <p>If you really need to store them in the database you might check <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#bulk-create" rel="nofollow">bulk_create</a>. From the documents: </p>
<blockquote>
<p>This method inserts the provided list of objects into the database in an efficient manner (generally only 1 query, no matter how many objects there are):</p>
</blockquote>
<p>Although I never personally used it with that many objects, the docs say it can handle them. This could make your code more efficient in terms of hitting the database, compared to using multiple <code>save()</code> calls. </p>
<p>Basically, to try it: create a list of objects (without saving) and then use <code>bulk_create</code>, like this:</p>
<pre><code>arr = []
for i in range(len(companies)):
for j in range(len(final_prices)):
arr.append(
LineChartData(
foundation = company,
date = finald[j],
price = finalp[j]
)
)
LineChartData.objects.bulk_create(arr)
</code></pre>
| 2 | 2016-07-21T13:26:16Z | [
"python",
"django"
] |
How do I convert text lines to columns? | 38,505,404 | <p>It's probably a very simple thing, but I am totally new to Python, so sorry. The case is, I have a lot of files containing this type of text:</p>
<pre><code>name1
[1.0 2.0 3.0],[1.1 2.1 3.1]
</code></pre>
<p>(the directory /data/text1/1.txt)</p>
<p>the other file for example contains</p>
<pre><code>name2
[4.0 5.0 6.0],[4.1 5.1 6.1]
</code></pre>
<p>(the directory /data/text2/2.txt)</p>
<p>and the output should be:</p>
<pre><code>name1
1.0 1.1
2.0 2.1
3.0 3.1
name2
4.0 4.1
5.0 5.1
6.0 6.1
</code></pre>
<p>What's the best way to do it?</p>
<p>I tried to write the code:</p>
<pre><code>with open('1.txt','r+') as f:
for line in f:
a = line.split(',', 1)
new_line = line[0] + '\n' + line[1]
f.write(new_line)
</code></pre>
<p>(It's probably really stupid.)</p>
<p>Ex.:(it's only two lines)</p>
<pre><code>sm_CCC1OCO1
[ 71.54252843 52.88596242 51.64903087],[ 62.07181336 44.1827499 42.9019055 ]
</code></pre>
| -5 | 2016-07-21T13:19:25Z | 38,505,863 | <p>You can use <code>zip</code> and <code>re</code> to solve your problem.
Below is the code:</p>
<pre><code>>>> import re, os
>>> for file in os.listdir("directory"):
...     with open(os.path.join("directory", file)) as fp:
...         for line in fp.readlines():
...             lists = [re.findall("\d+\.\d+", l) for l in line.split(',')]
...             for a, b in zip(lists[0], lists[1]):
...                 print a, b
</code></pre>
<p>For file with content:</p>
<pre><code>[1.0 2.0 3.0],[1.1 2.1 3.1]
</code></pre>
<p>Output:</p>
<pre><code>1.0 1.1
2.0 2.1
3.0 3.1
</code></pre>
<p>Hope this is what you expect.</p>
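<p>If you also need the name line kept and the two vectors printed as paired columns (as in the desired output), a stdlib-only sketch could look like this (the parsing assumptions here are mine, not from the question):</p>

```python
import re

def columnize(text):
    # Pass "name" lines through unchanged; turn "[a b c],[x y z]" lines
    # into paired "a x" / "b y" / "c z" rows.
    out = []
    for raw in text.strip().splitlines():
        raw = raw.strip()
        parts = [re.findall(r"-?\d+(?:\.\d+)?", p) for p in raw.split(',')]
        if len(parts) == 2 and all(parts):
            out.extend("%s %s" % pair for pair in zip(*parts))
        else:
            out.append(raw)  # a name line such as "name1"
    return "\n".join(out)

sample = "name1\n[1.0 2.0 3.0],[1.1 2.1 3.1]"
print(columnize(sample))
# name1
# 1.0 1.1
# 2.0 2.1
# 3.0 3.1
```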
| 1 | 2016-07-21T13:38:03Z | [
"python"
] |
Pandas implicit type casting from index to series | 38,505,415 | <p>This works:</p>
<pre><code> s['Date'] = s.index.get_level_values('Date')
s['Expire Days'] = (pd.to_datetime(s['Expiration']) - s['Date'])
</code></pre>
<p>But this does not:</p>
<pre><code> s['Expire Days'] = (pd.to_datetime(s['Expiration']) - s.index.get_level_values('Date'))
</code></pre>
<p>The error is:</p>
<pre><code>pandas/index.pyx in pandas.index.IndexEngine.get_indexer_non_unique (pandas/index.c:6148)()
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>s is a Pandas DataFrame with a multi-index.</p>
<p>I'm mostly interested in why one works and not the other. As I see it, both should work.</p>
| 1 | 2016-07-21T13:20:01Z | 38,505,528 | <p>For me, adding <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> to convert the <code>Series</code> to a <code>numpy array</code> works:</p>
<pre><code>s['Expire Days'] = (pd.to_datetime(s['Expiration']).values -
                    s.index.get_level_values('Date'))
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
s = pd.DataFrame({'Expiration': {(pd.Timestamp('2015-03-04 00:00:00'), 1): '2015-03-05',
(pd.Timestamp('2015-03-03 00:00:00'), 2): '2015-03-05'}})
s = s.rename_axis(['Date','a'])
print (s)
Expiration
Date a
2015-03-03 2 2015-03-05
2015-03-04 1 2015-03-05
s['Expire Days'] = (pd.to_datetime(s['Expiration']).values -
                    s.index.get_level_values('Date'))
print (s)
Expiration Expire Days
Date a
2015-03-04 1 2015-03-05 1 days
2015-03-03 1 2015-03-05 2 days
</code></pre>
<p>EDIT by comment:</p>
<pre><code>s['Date'] = s.index.get_level_values('Date')
s['Expire Days'] = (pd.to_datetime(s['Expiration']) - s['Date'])
</code></pre>
<p>works nicely, because the <code>ndarray</code> output of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> is converted to a <code>Series</code> when assigned to the column <code>Expire Days</code>.</p>
<hr>
<pre><code>s['Expire Days'] = (pd.to_datetime(s['Expiration']) - s.index.get_level_values('Date'))
</code></pre>
<p>doesn't work: <code>pd.to_datetime(s['Expiration'])</code> is a <code>Series</code> and <code>s.index.get_level_values('Date')</code> is an <code>ndarray</code>. So you need either two numpy arrays or two Series.</p>
<p>And because of the error:</p>
<blockquote>
<p>"Index._join_level on non-unique index is not implemented."</p>
</blockquote>
<p>raised by <code>pd.to_datetime(s['Expiration']) - s.index.get_level_values('Date').to_series()</code>, convert both to <code>ndarray</code> instead.</p>
| 1 | 2016-07-21T13:24:45Z | [
"python",
"pandas"
] |
Efficient NumPy rows rotation over variable distances | 38,505,440 | <p>Given a 2D <code>M x N</code> NumPy array and a list of rotation distances, I want to rotate all <code>M</code> rows over the distances in the list. This is what I currently have:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
M = 6
N = 8
dists = [2,0,2,1,4,2] # for example
matrix = np.random.randint(0,2,(M,N))
for i in range(M):
matrix[i] = np.roll(matrix[i], -dists[i])
</code></pre>
<p>The last two lines are actually part of an inner loop that gets executed hundreds of thousands of times and it is bottlenecking my performance as measured by cProfile. Is it possible to, for instance, avoid the for-loop and to do this more efficiently?</p>
| 3 | 2016-07-21T13:20:59Z | 38,506,037 | <p>We can simulate the rolling behaviour with modulus operation after adding <code>dists</code> with a <code>range(0...N)</code> array to give us column indices for each row from where elements are to be picked and shuffled in the same row. We can vectorize this process across all rows with the help of <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a>. Thus, we would have an implementation like so -</p>
<pre><code>M,N = matrix.shape         # Store matrix shape
dists = np.asarray(dists)  # Make sure dists is an array so it can broadcast

# Get column indices for all elems for a rolled version with modulus operation
col_idx = np.mod(np.arange(N) + dists[:,None],N)

# Index into matrix with ranged row indices and col indices to get final o/p
out = matrix[np.arange(M)[:,None],col_idx]
</code></pre>
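<p>A quick sanity check (my own, not part of the original answer) that the modulus-based indexing matches a per-row <code>np.roll</code> loop:</p>

```python
import numpy as np

def roll_rows(matrix, dists):
    # Vectorized left-roll of each row i by dists[i], via modulus indexing.
    M, N = matrix.shape
    dists = np.asarray(dists)
    col_idx = np.mod(np.arange(N) + dists[:, None], N)
    return matrix[np.arange(M)[:, None], col_idx]

rng = np.random.RandomState(0)
matrix = rng.randint(0, 2, (6, 8))
dists = [2, 0, 2, 1, 4, 2]

# Reference result: the original per-row loop.
expected = np.vstack([np.roll(row, -d) for row, d in zip(matrix, dists)])
out = roll_rows(matrix, dists)
print(np.array_equal(out, expected))  # True
```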
| 1 | 2016-07-21T13:46:01Z | [
"python",
"python-3.x",
"numpy",
"optimization"
] |
Scapy - How can I hide the report of sendp\sr1 and just get the final?â | 38,505,507 | <p>I am working with scapy and I started to learn how to build packets (if someone has a good example on the internet to learn from it - it will be great! thanks.).</p>
<p>I have the next command in scapy:</p>
<pre><code>srp(Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(pdst=ip)/Padding(load='\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'),timeout=2)
</code></pre>
<p>Which sends an ARP packet at layer 2.
When I run this command, it gives me the following output:</p>
<blockquote>
<p>WARNING: No route found for IPv6 destination :: (no default route?)
Begin emission:
*Finished to send 1 packets.</p>
<p>Received 1 packets, got 1 answers, remaining 0 packets</p>
<p>00:50:56:e9:b8:b1</p>
</blockquote>
<p>for the next code:</p>
<pre><code>def Arp_Req(ip):
packet = srp(Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(pdst=ip)/Padding(load='\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'),timeout=2)
try:
packet[0][0]
return packet[0][0][1].hwsrc
except IndexError:
return "(E2)CANT FIND AN ANSWER FOR "+ip+"."
</code></pre>
<p>I want to hide all of the report output and print just the returned answer. How can I do it?</p>
| 1 | 2016-07-21T13:23:55Z | 38,507,048 | <p>
Part of the output here comes from a warning due to IPv6, which you may avoid by disabling IPv6 support (from scapy), but you also have output generated by the function <code>srp()</code> itself, and for that you need to set the <code>verbose</code> argument:</p>
<pre><code>from scapy.config import conf
conf.ipv6_enabled = False
from scapy.all import *
def Arp_Req(ip):
packet = srp(Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(pdst=ip)/Padding(load='\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'),timeout=2, verbose=0)
try:
packet[0][0]
return packet[0][0][1].hwsrc
except IndexError:
return "(E2)CANT FIND AN ANSWER FOR "+ip+"."
# example
print Arp_Req("192.168.0.254")
</code></pre>
| 2 | 2016-07-21T14:31:01Z | [
"python",
"scapy",
"packets"
] |
How to escape unicode string for regular expressions? | 38,505,706 | <p>I need to build an re pattern based on the unicode string (e.g. I have "word", and I need something like ^"word"| "word"). However the "word" can contain special re characters. To match the "word" as it is, I need to escape special re characters in unicode string. The basic re.escape() function does the job for ascii strings. How can I do this for unicode?</p>
| 0 | 2016-07-21T13:32:24Z | 38,506,087 | <p><code>re.escape()</code> inserts a backslash before every character that's not an ASCII alphanumeric. This may in fact lead to a multitude of unnecessary backslashes to be inserted, however, Python ignores backslashes that don't start a recognized escape sequence, so there is no big harm done (except possibly some performance penalty).</p>
<p>But if you want to build a stricter <code>escape()</code>, you can:</p>
<pre><code>def escape(s):
return re.sub(r"[(){}\[\].*?|^$\\+-]", r"\\\g<0>", s)
</code></pre>
<p>which only touches the actual regex metacharacters. I sure hope I didn't miss any :)</p>
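<p>A quick usage sketch of the stricter <code>escape()</code> above (the sample word is made up for illustration; it works the same on unicode strings):</p>

```python
import re

def escape(s):
    # same stricter escape as above: backslash only the regex metacharacters
    return re.sub(r"[(){}\[\].*?|^$\\+-]", r"\\\g<0>", s)

word = u"prix (approx.)?"
pattern = u"^" + escape(word) + u"$"
print(re.match(pattern, word) is not None)  # True
```

<p>Characters outside the class (letters, digits, spaces) are left untouched, which avoids the spurious backslashes that <code>re.escape()</code> can insert.</p>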
| 1 | 2016-07-21T13:48:31Z | [
"python",
"regex",
"unicode",
"escaping"
] |
jupyter doesn't import numpy after upgrade with anaconda | 38,505,832 | <p>I updated the packages with <code>conda update --all</code> and was using jupyter to work. Before the update, everything was working, but now jupyter doesn't import any module beside the sys, os, copy and time. Numpy, matplotlib and theano are not being imported. But they are definitely in the conda list... the python version is 2.7.12</p>
<p>When I updated with conda, I remember that there was a message that numpy was being deprecated due to conflicts. Now in the <code>conda list</code> I have numpy 1.11.1.</p>
<p>I'm new in python, so I don't understand the import error. Before uninstalling everything again, I would like to understand what the problem is to learn and of course to continue using jupyter ;)
I found this post https://github.com/jupyter/notebook/issues/397 (sorry I can't link it, I'm new here) which seems to be a problem related to mine or similar, but I don't think I understand it so well... so before I break more things I wanted to ask here!</p>
<p>Is jupyter badly "connected" to anaconda? How can I check where the packages are being searched? For any comment on this I would be very grateful!!
Here are the cells of jupyter:</p>
<p><a href="http://i.stack.imgur.com/kMALA.png" rel="nofollow">cells of jupyter</a></p>
<p>and the Error I get:</p>
<p><a href="http://i.stack.imgur.com/m9viP.png" rel="nofollow">ImportError</a></p>
<p>Thanks!</p>
| 0 | 2016-07-21T13:37:04Z | 38,637,838 | <p>I followed the idea as in <a href="http://stackoverflow.com/questions/9386048/ipython-reads-wrong-python-version">here</a> and changed the file that launches the root jupyter command (cf. <code>cat /dir_where_installed/anaconda2/bin/jupyter</code> and the jupyter-notebook ( cf. <code>cat /dir_where_installed/anaconda2/bin/jupyter-notebook</code>). </p>
<p>It was set to the anaconda environment (cf. <code>conda info --envs</code>) as expected (both files had in the first line something like <code>#! /dir_where_installed/anaconda2/bin/python</code>), but for some reason, after the update I did (and even after installing everything again!), jupyter wasn't taking that path; instead it was importing from the 'stock' python (apparently).<br>
Anyway, I changed both lines with <code>#!</code> to take the path as in the output of <code>which python</code>. </p>
<p><strong>Summary</strong>: </p>
<ol>
<li>check path in <code>cat /dir_where_installed/anaconda2/bin/jupyter</code> and <code>cat /dir_where_installed/anaconda2/bin/jupyter-notebook</code> </li>
<li>the <code>which python</code> output should be something like <code>/usr/bin/python</code> </li>
<li>substitute the lines in both files starting with <code>#!</code> with <code>#! /usr/bin/python</code><br>
I'm not sure if this is a good idea, but it worked for me and now I can import all packages in jupyter. If anyone has any idea if this is a bad idea or a better solution, please let me know!</li>
</ol>
| 0 | 2016-07-28T13:27:12Z | [
"python",
"jupyter"
] |
How to delete a function argument early? | 38,505,862 | <p>I'm writing a function which takes a huge argument, and runs for a long time. It needs the argument only halfway. Is there a way for the function to delete the value pointed to by the argument if there are no more references to it?</p>
<p>I was able to get it deleted as soon as the function returns, like this:</p>
<pre><code>def f(m):
print 'S1'
m = None
#__import__('gc').collect() # Uncommenting this doesn't help.
print 'S2'
class M(object):
def __del__(self):
print '__del__'
f(M())
</code></pre>
<p>This prints:</p>
<pre><code>S1
S2
__del__
</code></pre>
<p>I need:</p>
<pre><code>S1
__del__
S2
</code></pre>
<p>I was also trying <code>def f(*args):</code> and <code>def f(**kwargs)</code>, but it didn't help, I still get <code>__del__</code> last.</p>
<p>Please note that my code is relying on the fact that Python has reference counting, and <code>__del__</code> gets called as soon as an object's reference count drops to zero. I want the reference count of a function argument drop to zero in the middle of a function. Is this possible?</p>
<p>Please note that I know of a workaround: passing a list of arguments:</p>
<pre><code>def f(ms):
print 'S1'
del ms[:]
print 'S2'
class M(object):
def __del__(self):
print '__del__'
f([M()])
</code></pre>
<p>This prints:</p>
<pre><code>S1
__del__
S2
</code></pre>
<p>Is there a way to get the early deletion without changing the API (e.g. introducing lists to the arguments)?</p>
<p>If it's hard to get a portable solution which works in many Python implementations, I need something which works in the most recent CPython 2.7. It doesn't have to be documented.</p>
| 1 | 2016-07-21T13:38:02Z | 38,506,738 | <p>From <a href="https://docs.python.org/2/reference/datamodel.html" rel="nofollow">the documentation</a>:</p>
<blockquote>
<p>CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. See the documentation of the gc module for information on controlling the collection of cyclic garbage. Other implementations act differently and CPython may change. <strong>Do not depend on immediate finalization of objects when they become unreachable</strong> (ex: always close files).</p>
</blockquote>
<p>Short of modifying the interpreter yourself, you <em>cannot</em> achieve what you want. <code>__del__</code> will be called when the interpreter decides to do it.</p>
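<p>As a hedged illustration of why the count cannot reach zero mid-call in CPython (the class and numbers below are purely for demonstration): besides the function's local name, the call machinery itself still holds a reference to the temporary argument:</p>

```python
import sys

class M(object):
    pass

def f(m):
    # sys.getrefcount adds one reference of its own; even so, the result
    # shows that more than just the local name keeps the object alive
    # for the duration of the call
    return sys.getrefcount(m)

print(f(M()) > 1)  # True
```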
| 1 | 2016-07-21T14:17:51Z | [
"python",
"python-2.7",
"destructor",
"reference-counting"
] |
How do I filter rows in a dataframe that have whole numbers in one column | 38,505,895 | <p>I am new to Python and Pandas and I am struggling a bit with this.</p>
<p>I have a set of data with an <code>Age</code> column of type <code>float64</code>. Some of the values have a fractional part and some do not. I want to remove all the rows that have whole number values for <code>Age</code>.</p>
<p>This was my attempt at it:</p>
<pre><code>estimatedAges = train[int(train['Age']) < train['Age']]
</code></pre>
<p>But I got this error:</p>
<blockquote>
<p>TypeError Traceback (most recent call last)
in ()
1 #estimatedAges = train[train['Age'] > 1]
----> 2 estimatedAges = train[int(train['Age']) < train['Age']]
3 estimatedAges.info()</p>
<p>C:\Anaconda3\lib\site-packages\pandas\core\series.py in wrapper(self)
76 return converter(self.iloc[0])
77 raise TypeError("cannot convert the series to "
---> 78 "{0}".format(str(converter)))
79
80 return wrapper</p>
<p>TypeError: cannot convert the series to &lt;class 'int'&gt;</p>
</blockquote>
<p>So, it looks like <code>int()</code> does not work on series data and I am going to have to find another approach, I'm just not sure what that other approach is.</p>
| 2 | 2016-07-21T13:39:35Z | 38,505,914 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a> for cast to <code>int</code>:</p>
<pre><code>estimatedAges = train[train['Age'].astype(int) < train['Age']]
</code></pre>
<p>Sample:</p>
<pre><code>train = pd.DataFrame({'Age':[1,2,3.4]})
print (train)
Age
0 1.0
1 2.0
2 3.4
print (train[train['Age'].astype(int) < train['Age']])
Age
2 3.4
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>train = pd.DataFrame({'Age':[1,2,3.4]})
train = pd.concat([train]*10000).reset_index(drop=True)
In [62]: %timeit (train[train['Age'].astype(int) < train['Age']])
The slowest run took 6.59 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 544 µs per loop
In [63]: %timeit (train[train['Age'].apply(int) < train['Age']])
100 loops, best of 3: 11.1 ms per loop
In [64]: %timeit (train[train.Age > train.Age.round(0)])
1000 loops, best of 3: 1.55 ms per loop
</code></pre>
<p>EDIT by comment of <a href="http://stackoverflow.com/questions/38505895/how-do-i-filter-out-whole-numbers/38505914?noredirect=1#comment64411644_38505914">ajcr</a>, thank you:</p>
<p>If values are negative and positive float, use:</p>
<pre><code>train = pd.DataFrame({'Age':[1,-2.8,3.9]})
print (train)
Age
0 1.0
1 -2.8
2 3.9
print (train[train['Age'].astype(int) != train['Age']])
Age
1 -2.8
2 3.9
</code></pre>
| 4 | 2016-07-21T13:40:35Z | [
"python",
"pandas",
"dataframe"
] |
How do I filter rows in a dataframe that have whole numbers in one column | 38,505,895 | <p>I am new to Python and Pandas and I am struggling a bit with this.</p>
<p>I have a set of data with an <code>Age</code> column of type <code>float64</code>. Some of the values have a fractional part and some do not. I want to remove all the rows that have whole number values for <code>Age</code>.</p>
<p>This was my attempt at it:</p>
<pre><code>estimatedAges = train[int(train['Age']) < train['Age']]
</code></pre>
<p>But I got this error:</p>
<blockquote>
<p>TypeError Traceback (most recent call last)
in ()
1 #estimatedAges = train[train['Age'] > 1]
----> 2 estimatedAges = train[int(train['Age']) < train['Age']]
3 estimatedAges.info()</p>
<p>C:\Anaconda3\lib\site-packages\pandas\core\series.py in wrapper(self)
76 return converter(self.iloc[0])
77 raise TypeError("cannot convert the series to "
---> 78 "{0}".format(str(converter)))
79
80 return wrapper</p>
<p>TypeError: cannot convert the series to &lt;class 'int'&gt;</p>
</blockquote>
<p>So, it looks like <code>int()</code> does not work on series data and I am going to have to find another approach, I'm just not sure what that other approach is.</p>
| 2 | 2016-07-21T13:39:35Z | 38,506,092 | <p>try this:</p>
<pre><code>In [179]: train[train.Age != train.Age // 1]
Out[179]:
Age
2 3.4
</code></pre>
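<p>A quick sanity check of the floor-division comparison on plain Python floats, outside pandas (note it also handles negative values, since e.g. <code>-2.8 // 1</code> is <code>-3.0</code>, which still differs from <code>-2.8</code>):</p>

```python
ages = [1.0, -2.8, 3.9]
# keep only the values that have a fractional part
kept = [a for a in ages if a != a // 1]
print(kept)  # [-2.8, 3.9]
```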
| 2 | 2016-07-21T13:48:44Z | [
"python",
"pandas",
"dataframe"
] |
How do I filter rows in a dataframe that have whole numbers in one column | 38,505,895 | <p>I am new to Python and Pandas and I am struggling a bit with this.</p>
<p>I have a set of data with an <code>Age</code> column of type <code>float64</code>. Some of the values have a fractional part and some do not. I want to remove all the rows that have whole number values for <code>Age</code>.</p>
<p>This was my attempt at it:</p>
<pre><code>estimatedAges = train[int(train['Age']) < train['Age']]
</code></pre>
<p>But I got this error:</p>
<blockquote>
<p>TypeError Traceback (most recent call last)
in ()
1 #estimatedAges = train[train['Age'] > 1]
----> 2 estimatedAges = train[int(train['Age']) < train['Age']]
3 estimatedAges.info()</p>
<p>C:\Anaconda3\lib\site-packages\pandas\core\series.py in wrapper(self)
76 return converter(self.iloc[0])
77 raise TypeError("cannot convert the series to "
---> 78 "{0}".format(str(converter)))
79
80 return wrapper</p>
<p>TypeError: cannot convert the series to <class 'int'`></p>
</blockquote>
<p>So, it looks like <code>int()</code> does not work on series data and I am going to have to find another approach, I'm just not sure what that other approach is.</p>
| 2 | 2016-07-21T13:39:35Z | 38,517,722 | <p>I ultimately went with @jezreal's answer because his speed tests were convincing but I wanted to add one more solution that I found to the mix. It requires numpy, but if you have imported pandas then you have more than likely imported numpy as well.</p>
<pre><code>import numpy as np
train[np.floor(train['Age']) != train['Age']]
</code></pre>
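<p>As a hedged aside: <code>np.floor</code> rounds toward negative infinity (unlike truncation toward zero), so this mask also classifies negative non-integer ages correctly:</p>

```python
import numpy as np

ages = np.array([1.0, -2.8, 3.9])
mask = np.floor(ages) != ages  # True where the value has a fractional part
print(ages[mask].tolist())  # [-2.8, 3.9]
```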
| 0 | 2016-07-22T03:32:38Z | [
"python",
"pandas",
"dataframe"
] |
numpy testing assert array NOT equal | 38,506,044 | <p>We have <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.testing.assert_array_equal.html#numpy.testing.assert_array_equal" rel="nofollow"><code>numpy.testing.assert_array_equal</code></a> to assert that two arrays are equal.</p>
<p>But what is the best way to do <code>numpy.testing.assert_array_not_equal</code>, that is, to make sure that two arrays are NOT equal?</p>
| 5 | 2016-07-21T13:46:24Z | 38,506,922 | <p>I don't think there is anything built directly into the NumPy testing framework but you could just use:</p>
<pre><code>np.any(np.not_equal(a1,a2))
</code></pre>
<p>and assert true with the built in unittest framework or check with NumPy as <code>assert_equal</code> to <code>True</code> e.g.</p>
<pre><code>np.testing.assert_equal(np.any(np.not_equal(a1,a2)), True)
</code></pre>
| 3 | 2016-07-21T14:25:54Z | [
"python",
"numpy",
"python-unittest"
] |
numpy testing assert array NOT equal | 38,506,044 | <p>We have <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.testing.assert_array_equal.html#numpy.testing.assert_array_equal" rel="nofollow"><code>numpy.testing.assert_array_equal</code></a> to assert that two arrays are equal.</p>
<p>But what is the best way to do <code>numpy.testing.assert_array_not_equal</code>, that is, to make sure that two arrays are NOT equal?</p>
| 5 | 2016-07-21T13:46:24Z | 38,507,093 | <p>If you want to use specifically NumPy testing, then you can use <code>numpy.testing.assert_array_equal</code> together with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.testing.assert_raises.html#numpy.testing.assert_raises" rel="nofollow">numpy.testing.assert_raises</a> for the opposite result. For example:</p>
<pre><code>assert_raises(AssertionError, assert_array_equal, array_1, array_2)
</code></pre>
<p>Also there is <code>numpy.testing.utils.assert_array_compare</code> (it is used by <code>numpy.testing.assert_array_equal</code>), but I don't see it documented anywhere, so use with caution. This one will check that every element is different, so I guess this is not your use case:</p>
<pre><code>import operator
assert_array_compare(operator.__ne__, array_1, array_2)
</code></pre>
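<p>To make the <code>assert_raises</code> idiom concrete, a small self-contained sketch (the array values are arbitrary):</p>

```python
import numpy as np
from numpy.testing import assert_array_equal, assert_raises

a1 = np.array([1, 2, 3])
a2 = np.array([1, 2, 4])

# passes silently: the arrays really are unequal
assert_raises(AssertionError, assert_array_equal, a1, a2)

# and a NOT-equal assertion should itself fail on equal arrays
try:
    assert_raises(AssertionError, assert_array_equal, a1, a1)
except AssertionError:
    print("equal arrays rejected")
```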
| 3 | 2016-07-21T14:33:00Z | [
"python",
"numpy",
"python-unittest"
] |
Fitting histograms of log-normal distributions in subplots with shared x-axis | 38,506,120 | <p>I have three arrays, of different lengths, say <code>standx</code>, <code>standy</code> and <code>standz</code>, which contain positive only values.
I want to plot their histogram distributions in a similar fashion to <a href="http://i.stack.imgur.com/gZa3k.png" rel="nofollow">this plot</a>, that is, sharing the x-axis (see also the plot below, after the EDIT).
But I want the x-axis to be <code>log</code> in scale, and the bins among the three plots to be of the same size (this latter condition can be relaxed for the moment).</p>
<p>Then I want to fit these distributions with a Gaussian function in the <code>log</code> space (that is, a log-normal distribution). I somehow always mess up things with the fitting, and the Gaussian really does not reproduce the distributions (it usually is much flatter than the actual distribution, or other weird behaviours).</p>
<p><strong>LAST UPDATE</strong>
Here is what I managed to obtain: the fitted curve is not going as expected</p>
<pre><code>import pyfits
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import curve_fit
import pylab as py
def gaussian(x, a, mean, sigma):
return a * np.exp(-((x - mean)**2 / (2 * sigma**2)))
f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True)
bins = np.histogram(standx, bins = 100)[1]
num_1, bins_1 = np.histogram(standx, np.histogram(standx, bins = 100)[1])
bins_01 = np.logspace( np.log10( standx.min() ), np.log10(standx.max() ), 100 )
x_fit = py.linspace(bins_01[0], bins_01[-1], 100)
popt, pcov = curve_fit(gaussian, x_fit, num_1, p0=[1, np.mean(standx), np.std(standx)])
y_fit = gaussian(bins_01, *popt)
counts, edges, patches = ax1.hist(standx, bins_01, facecolor='blue', alpha=0.5) # bins=100
area = sum(np.diff(edges)*counts)
# calculate length of each bin (required for scaling PDF to histogram)
bins_log_len = np.zeros( x_fit.size )
for ii in range( counts.size):
bins_log_len[ii] = edges[ii+1]-edges[ii]
# Create an array of length num_bins containing the center of each bin.
centers = 0.5*(edges[:-1] + edges[1:])
# Make a fit to the samples.
shape, loc, scale = stats.lognorm.fit(standx, floc=0)
# get pdf-values for same intervals as histogram
samples_fit_log = stats.lognorm.pdf( bins_01, shape, loc=loc, scale=scale )
# oplot fitted and scaled PDF into histogram
new_x = np.linspace(np.min(standx), np.max(standx), 100)
pdf = stats.norm.pdf(new_x, loc=np.log(scale), scale=shape)
ax1.plot(new_x, pdf*sum(counts), 'k-')
ax1.plot(bins_01, np.multiply(samples_fit_log, bins_log_len)*sum(counts), 'g--', label='PDF using histogram bins', linewidth=2 )
ax1.set_xscale('log')
ax1.plot(x_fit, stats.norm.pdf(x_fit, popt[1], popt[2])*area,'r--',linewidth=2,label='Fit: $\mu$=%.3f , $\sigma$=%.3f'%(popt[1],popt[2]) )
ax1.legend(loc='best', frameon=False, prop={'size':15})
# And similar for the ax2, ax3 plots
</code></pre>
<p>And here is the resulting plot:
<a href="http://i.stack.imgur.com/w5mWI.png" rel="nofollow"><img src="http://i.stack.imgur.com/w5mWI.png" alt="enter image description here"></a></p>
<p>The fitted Gaussian left wing in the top plot is lifted above zero even when the histogram distribution goes to zero. What am I doing wrong here?</p>
<p><strong>EDIT 2:</strong> Here is an example of data that reproduce the top plot in the figure.</p>
<pre><code>[ 101.51694114 118.85313212 91.69531845 90.26532237 90.28341631 105.12906896 262.7891152 486.49418076 161.05389372 163.73690191 166.77302778 222.02090477 126.19058434 86.05609479 88.91853857 193.97923929 239.15533093 106.52112332 60.84555301 88.45753752 123.02881537 124.81366349 27.19285691 104.71247832 146.07595491 106.56780994 118.54743181 182.01683537 155.86798209 212.47778143 154.97126376 91.52202431 112.49359451 164.37672439 173.27686471 209.55033453 224.81250249 117.96784525 241.48515315 90.20163858 242.82090455 195.16391416 157.28399949 236.17969925 52.60286058 153.19747048 220.8835675 160.28413028 183.82540253 78.87306634 87.7934009 29.2185999 129.05052788 105.9416127 104.47906222 303.81976836 231.82568094 234.7277374 133.87567039 84.21624497 83.77612409 100.3160127 66.60196186 93.82032598 98.88012693 235.07139859 44.74506772 90.43154857 97.83903455 56.6958664 87.39357325 80.4975729 44.50914276 80.04352253 122.69702279 181.73622079 114.35305809 72.8500753 92.97985176 167.82181244 23.89170096 277.56842175 120.27960673 188.24283156 87.85287841 104.65666064 55.56738985 113.74158901 160.78501265 144.7793944 146.26352811 72.42916164 81.58934891 82.03941082 140.62209553 98.12528712 27.80664138 60.33766399 69.16640959 76.58721414 129.2027075 92.0469369 58.65284569 74.47532813 272.38073082 25.6830871 120.49394762 153.67903201 108.99329823 73.31596785 158.44313205 108.06319404 149.67655877 100.98970685 183.89276773 259.99372599 146.67345963 151.87414015 56.50412433 68.30454916 87.91449416 136.98367718 85.89559447 146.20528695 137.48987622 119.43868024 127.65423602 95.12679396 74.19057758 37.78992221 124.93823546 76.83988791 156.26098736 52.77456371 74.56009299 72.83196226 126.33366119 114.75476007 71.07015661 203.58334989 115.37482779 112.41575426 52.67146874 34.41173382 91.43309873 84.56022527 97.52863818 64.69175291 98.82649613 110.33549604 88.73162329 63.33406042 67.50249703 51.80125226 93.77331898 134.86070329 104.78906904 180.36527776 96.10291219 
73.86951609 61.85057464 85.4873267 19.49122558 94.90673405 54.70439619 44.11875268 77.00669426 106.03192447 72.14576138 32.88507942 43.71636039 69.09934896 164.33347129 184.71203014 91.85472367 112.8524319 130.65249146 93.07362972 82.04078274 77.55368682 37.01401147 95.27927068 45.84825324 78.97197286 56.51405138 55.6592834 123.75173665 146.25507348 100.94836797 148.27354976 75.66748311 249.42155118 103.90381969 96.81010983 94.77583435 68.77485119 23.38673989 88.64533289 67.76195191 177.0339476 103.49888373 101.77976527 121.43646273 150.67473968 134.80596161 110.43357052 109.31380389 46.4057108 202.95885552 368.77902191 151.79275675 84.19636911 72.80008013 46.03038795 57.46082639 53.41813204 178.14381109 135.27764511 76.58440241 71.31719469 60.19553618 27.25850013 32.44469416 22.57373214 36.81684014 27.31495127 70.17993686 142.8763359 135.88971259 72.97332852 86.41262044 64.57571923 143.87039206 155.27256205 110.78974448 151.27678795 147.15253312 52.58800732 104.08482961 79.94199525 122.04554796 110.58938546 50.32322361 77.34908774 111.69467931 166.33807553 72.91820982 79.81368763 57.5947018 103.52493188 163.77297985 144.02647916 113.26699317 147.49539845 85.72692319 30.22168157 116.74761705 74.51974655 80.10030241 75.37240728 63.55822184 243.37524675 231.9249136 113.26550804 72.43832113 55.14416523 120.54661712 147.10974035 72.92975739 69.32965749 120.95141745 37.68729105 66.24036939 203.91863535 55.8913402 95.73112443 96.24012717 176.62058262 79.31680757 162.42756296 78.39239957 169.11233776 100.20872299 62.93332374 30.91932801 38.07484721 54.18812526 172.53322492 89.52425567 84.25552157 130.99786509 94.25222458 60.10524134 62.86851886 76.52525125 59.58721735 92.13854969 174.06688353 138.10744182 194.01223744 151.1429943 140.01681885 47.14387464 11.84490967 6.96245414 47.70510341 101.54753328 108.36307095 157.82389186 166.39075768 151.60755493 65.70209698 143.84160067 126.19604257 102.22278009 45.26080872 108.46101698 158.36097588 141.08731145 83.69653695 
118.36827104 118.32749524 143.7909344 37.68873242 115.57921476 139.13432742 138.44656014 94.29691791 94.18191872 85.85732773 51.69086583 66.97353588 59.40691006 95.74665069 92.3880327 75.95646049 18.87321191 54.9681136 114.54996764 131.89699216 123.48381482 72.87593216 139.98739954 122.6154045 143.29503576 271.88908663 262.73039299 155.66868313 101.36700756 216.940961 84.36613486 74.54262361 170.46092396 74.96294713 80.65423117 123.18869993 90.12445866 63.49877742 118.44434098 308.95279788 255.71401823 162.75657523 153.1426693 18.39821795 13.24170647 112.97427259 220.2135291 102.58993152 43.24075783 54.34572251 106.78667036 113.02930818 84.60049337 125.86238265 37.77423088 59.49255685 118.06299299 113.96271631 24.43862174 57.94269235 50.87677692 116.38177017 177.47487286 110.86615691 108.23451165 170.39527188 326.17663873 183.0187635 91.91273324 101.3131493 35.39369149 122.47551828 148.65749349 95.25557961 57.29064772 70.35810775 69.1915958 81.80452845 125.35745323 71.86708276 109.91184751 93.73739808 98.42700723 76.31195397 95.91546147 177.6087925 170.84268012 82.02914243 93.76613621 78.39962097 104.58703334 36.59546855 116.05663747 116.3494942 68.79781642 109.93397594 151.25008586 172.46504215 85.93646199 51.43955677 42.28647472 66.93113746 60.77211697 96.28259636 82.22735049 49.54423262 178.94159839 93.76859479 45.54744672 94.4599803 71.19930623 104.09904187 75.79761794 69.93849545 130.88921733 126.67404755 81.1833829 62.33448081 84.5987729 152.13563736 96.85621001 276.75452386 139.3158367 171.07567204 173.5501148 148.58205472 43.75713099 80.5508343 51.58395044 95.91107361 129.91845099 124.15592207 137.38840679 92.28611414 120.2618697 187.74571371 22.86841981 119.45375294 105.22286286 80.31061238 62.40199987 167.05483245 47.33392878 166.50472376 153.6375309 88.34718903 135.61514556 119.43909776 128.71538875 140.71852651 169.89867936 219.83340846 143.79419523 47.90655796 179.50489278 146.87141422 52.42075947 57.91783746 68.93906889 37.94645557 88.17616503 
112.79640294 103.59258333 134.18698633 116.95667835 70.14118921 56.32427154 125.85321223 61.04903197 43.4000049 87.08489101 40.89691119 79.42038892 106.29486574 74.89994892 104.88572333 152.7553574 172.16266051 117.84344965 89.89983418 73.36633027 101.8498084 71.1734305 63.86839788 52.28033569 87.30368207 58.4308207 54.05836602 149.96873987 54.83900084 64.84848435 309.27088231 138.21289193 122.33905816 89.70053273 39.84886492 98.53375932 95.2274298 92.20005886 90.92608997 81.77090328 104.50069549 78.80647072 131.17258666 163.53527862]
</code></pre>
| 2 | 2016-07-21T13:50:12Z | 38,623,367 | <p>I think you want to fit on log transformed bins and correct the scaling by multiplying at each bin as you do above. </p>
<p><a href="http://i.stack.imgur.com/C5I8E.png" rel="nofollow"><img src="http://i.stack.imgur.com/C5I8E.png" alt="enter image description here"></a></p>
<pre><code>def gaussian(x, a, mean, sigma):
return a * np.exp(-((x - mean)**2 / (2 * sigma**2)))
f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True)
bins = np.histogram(standx, bins = 100)[1]
from scipy.optimize import curve_fit
from scipy import stats
num_1, bins_1 = np.histogram(standx, np.histogram(standx, bins = 100)[1])
#log transform the bins!
bins_log=np.log10(bins_1[:-1])
bins_01 = np.logspace( np.log10( standx.min() ), np.log10(standx.max() ), 100 )
x_fit = np.linspace(bins_01[0], bins_01[-1], 100)
#popt, pcov = curve_fit(gaussian, x_fit, num_1, p0=[1, np.mean(standx), np.std(standx)])
popt, pcov = curve_fit(gaussian, bins_log, num_1, p0=[1, np.mean(standx), np.std(standx)])
#y_fit = gaussian(bins_01, *popt)
y_fit = gaussian(bins_log, *popt)
counts, edges, patches = ax1.hist(standx, bins_01, facecolor='blue', alpha=0.5) # bins=100
area = sum(np.diff(edges)*counts)
# calculate length of each bin (required for scaling PDF to histogram)
bins_log_len = np.zeros( x_fit.size )
for ii in range( counts.size):
bins_log_len[ii] = edges[ii+1]-edges[ii]
# Create an array of length num_bins containing the center of each bin.
centers = 0.5*(edges[:-1] + edges[1:])
# Make a fit to the samples.
shape, loc, scale = stats.lognorm.fit(standx, floc=0)
# get pdf-values for same intervals as histogram
samples_fit_log = stats.lognorm.pdf( bins_01, shape, loc=loc, scale=scale )
# oplot fitted and scaled PDF into histogram
new_x = np.linspace(np.min(standx), np.max(standx), 100)
pdf = stats.norm.pdf(new_x, loc=np.log(scale), scale=shape)
ax1.plot(new_x, pdf*sum(counts), 'k-')
ax1.plot(bins_01, np.multiply(samples_fit_log, bins_log_len)*sum(counts), 'g--', label='PDF using histogram bins', linewidth=2 )
#ax1.plot(x_fit, stats.norm.pdf(x_fit, popt[1], popt[2])*area,'r--',linewidth=2,label='Fit: $\mu$=%.3f , $\sigma$=%.3f'%(popt[1],popt[2]) )
log_adjusted_pdf=np.multiply(bins_log_len,stats.norm.pdf(bins_log, popt[1], popt[2]))
scale_factor=len(standx)/sum(log_adjusted_pdf)
ax1.plot(bins_1[:-1], scale_factor*log_adjusted_pdf,'r--',linewidth=2,label='Fit: $\mu$=%.3f , $\sigma$=%.3f'%(popt[1],popt[2]) )
ax1.set_xscale('log')
ax1.legend(loc='best', frameon=False, prop={'size':15})
# And similar for the ax2, ax3 plots
</code></pre>
| 1 | 2016-07-27T21:19:32Z | [
"python",
"matplotlib",
"gaussian",
"data-fitting"
] |
Trouble using OpenGL mouse callbacks in Pygame | 38,506,181 | <p>I'm using OpenGL through Pygame to render things and I want to get OpenGL mouse information. I know that I can get mouse position & click state directly through Pygame, but I need mouse position in OpenGL's coordinates, not just pixel coordinates of the viewport. The issue is that I can't get OpenGL's mouse callbacks to fire. Consider the following code:</p>
<pre><code>import pygame
from pygame.locals import DOUBLEBUF, OPENGL
from OpenGL.GLUT import glutMouseFunc, glutInit
def mouse_handler(button, state, x, y):
print("mouse handler: ",button, state, x, y)
pygame.init()
display = (800,600)
pygame.display.set_mode(display, DOUBLEBUF|OPENGL)
glutInit()
glutMouseFunc(mouse_handler)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
pygame_mouse_pos = pygame.mouse.get_pos()
print(pygame_mouse_pos)
</code></pre>
<p>When you run this code it will start a continuous printout of the mouse position <strong>according to Pygame</strong>, which is obtained from the line <code>pygame_mouse_pos = pygame.mouse.get_pos()</code>. But I can't get the OpenGL mouse callback, set up with <code>glutMouseFunc(mouse_handler)</code>, to fire. Can somebody tell me what I'm doing wrong? This code runs in both Python 2 and 3 and I get the same results in each.</p>
| 0 | 2016-07-21T13:52:58Z | 38,506,814 | <p>You might be able to just use <code>pygame coordinates / viewport</code> to get coordinates in the [0;1] interval (e.g. for texture coordinates) or <code>pygame coordinates / viewport * 2 - 1</code> for [-1;1] interval (NDC coordinates).</p>
<p>I think using glut for this won't work, because you create your window with pygame and glut knows nothing about that window. I might be wrong though, i know neither glut nor pygames.</p>
<p>And just a tip for googling: OpenGL itself does not know mouse coordinates at all; these are provided by GLUT.</p>
| 0 | 2016-07-21T14:21:03Z | [
"python",
"opengl",
"pygame",
"pyopengl"
] |
recursing a list of 73033 elements with HTML tags and get context from it | 38,506,210 | <p>I have a long list of elements with a length of 73,033. I would like to get the context from it. In the list, each element has the same structure (if the block of the following code helps), and it looks like this <code>&lt;div align="center" class="photocaption"&gt; Author/Designer Carleton Varney with Jim Druckman &lt;/div&gt;</code>. What I am interested in getting is the text <code>Author/Designer Carleton Varney with Jim Druckman.</code> </p>
<p><strong>Main code</strong></p>
<pre><code>NewSoups = [BeautifulSoup(NewR) for NewR in NewRs]
captions = [soup.find_all("div", class_ = "photocaption") for soup in NewSoups]
flattened_captions = []
for x in captions:
for y in x:
flattened_captions.append(y)
print(len(flattened_captions)) #73033
import re
results = [re.sub('<[^>]*>', '', y) for y in flattened_captions] #where the error comes from
</code></pre>
<p><strong>Error</strong></p>
<pre><code>Traceback (most recent call last):
File "picked.py", line 22, in <module>
results = [re.sub('<[^>]*>', '', y) for y in flattened_captions]
File "/opt/conda/lib/python2.7/re.py", line 155, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or buffer
</code></pre>
<p>I am wondering if there is a convenient way to loop through the long list of <code>&lt;div &gt;&lt;/div&gt;</code> elements. If not, what would be the best way to extract all the text that I want? Thank you very much. </p>
| 0 | 2016-07-21T13:54:04Z | 38,511,386 | <p>What I am going to post is not the most elegant or efficient way to deal with the posted problem. As Welbog pointed out, BeautifulSoup itself provides a function for extracting the text. Since I received the error when I posted my original question, however, I was curious where it came from. It turned out that the items returned in flattened_captions were not strings. That is quite simple to solve; the method is below.</p>
<pre><code>str_flattened_captions = [str(flattened_captions[i]) for i in range(len(flattened_captions))]
gains = [re.sub('<[^>]*>', '', item) for item in str_flattened_captions]
</code></pre>
<p>To test</p>
<pre><code>print(gains[:5])
r Barbara Schorr ', ' Architect Joan Dineen with Alyson Liss ', ' Author/Designer Carleton Varney with Jim Druckman ', ' Designers Richard Cerrone, Lisa Hyman and Rhonda Eleish (front) in their room called "Holiday Nod To Nature" ']
</code></pre>
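<p>A minimal stdlib-only sketch of the tag-stripping step on a single caption (the sample markup is copied from the question):</p>

```python
import re

div = '<div align="center" class="photocaption"> Author/Designer Carleton Varney with Jim Druckman </div>'
text = re.sub('<[^>]*>', '', div).strip()
print(text)  # Author/Designer Carleton Varney with Jim Druckman
```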
| 0 | 2016-07-21T18:06:55Z | [
"python",
"regex",
"list"
] |
How to get input from multiple lines? | 38,506,250 | <p>I want to take input from the user, with each value of the input on a consecutive line. This is to be implemented in Python.</p>
<pre><code>while x=int(raw_input()): ##<=showing error at this line
print(x)
gollum(x)
#the function gollum() has to be called if the input is present
</code></pre>
| -2 | 2016-07-21T13:55:45Z | 38,506,376 | <p>The reason why your code does not work is that <code>while</code> wants a condition or an object. As you are assigning a value (<code>x=raw_input()</code>), <code>while</code> does not find anything to test (an assignment does NOT return any value).<br>
You can request an input first, and then run a while loop depending on the value of this input (which is updated inside the while loop):</p>
<pre><code>x = int(raw_input())
while x:
print(x)
gollum(x)
x = int(raw_input())
</code></pre>
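<p>For what it's worth, the same stop-on-falsy pattern can be factored into a helper that works on any iterable of strings, which makes it easy to test without typing at a prompt. This is a sketch; <code>read_ints</code> is a hypothetical name, and it mirrors the <code>while x:</code> loop above, which stops when the value is 0 (falsy).</p>

```python
def read_ints(lines):
    # Collect integers from an iterable of strings, stopping at the
    # first zero, just like `while x:` above (0 is falsy).
    values = []
    for raw in lines:
        x = int(raw)
        if not x:
            break
        values.append(x)
    return values

print(read_ints(['3', '7', '0', '9']))
```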
| 0 | 2016-07-21T14:01:51Z | [
"python"
] |
How to get input from multiple lines? | 38,506,250 | <p>I want to take input from the user, with each value of the input on a consecutive line. This is to be implemented in Python.</p>
<pre><code>while x=int(raw_input()): ##<=showing error at this line
print(x)
gollum(x)
#the function gollum() has to be called if the input is present
</code></pre>
| -2 | 2016-07-21T13:55:45Z | 38,506,440 | <p>That gives you an error because <code>x=int(raw_input())</code> is an assignment, and an assignment cannot be used as the condition of a <code>while</code> loop.
You can try this one:</p>
<pre><code> while True:
x = raw_input()
if x=='':
break
x = int(x)
print(x)
gollum(x)
</code></pre>
<p>that way if you put an empty string (just an enter) the program just stops and doesn't give an annoying error :P</p>
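<p>Incidentally, this loop-until-empty-line pattern also maps onto the two-argument form of <code>iter()</code>, which calls a function repeatedly until it returns a sentinel. A sketch, where a list iterator stands in for <code>raw_input</code>:</p>

```python
# iter(callable, sentinel) keeps calling `callable` until it returns `sentinel`.
fake_input = iter(['5', '9', '', '7']).__next__  # stands in for raw_input
numbers = [int(line) for line in iter(fake_input, '')]
# Stops at the empty string, so '7' is never consumed here.
```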
| 0 | 2016-07-21T14:04:35Z | [
"python"
] |
For loop output to a string in python | 38,506,358 | <p>So I wrote code that takes a string and prints out the ASCII codes, but I'm printing the ASCII value for every letter in a for loop, and in the end I want the result to be a single string of numbers.</p>
<p>This is the code:</p>
<pre><code>getname='test'
for letter in getname:
print ord(letter)
</code></pre>
<p>And the output is:</p>
<pre><code>116
101
115
116
</code></pre>
<p>How can i take the for loop output and make it a string?
in the end i want it to be like this:</p>
<pre><code>116101115116
</code></pre>
<p>Thanks.</p>
| 0 | 2016-07-21T14:00:42Z | 38,506,422 | <p>You want to create a string and append to it, like this:</p>
<pre><code>getname = 'test'
result = ''
for letter in getname:
    result += str(ord(letter))
print result
</code></pre>
<p>Output: </p>
<pre><code>116101115116
</code></pre>
| 1 | 2016-07-21T14:03:55Z | [
"python",
"string",
"for-loop",
"ascii"
] |
For loop output to a string in python | 38,506,358 | <p>So I wrote code that takes a string and prints out the ASCII codes, but I'm printing the ASCII value for every letter in a for loop, and in the end I want the result to be a single string of numbers.</p>
<p>This is the code:</p>
<pre><code>getname='test'
for letter in getname:
print ord(letter)
</code></pre>
<p>And the output is:</p>
<pre><code>116
101
115
116
</code></pre>
<p>How can i take the for loop output and make it a string?
in the end i want it to be like this:</p>
<pre><code>116101115116
</code></pre>
<p>Thanks.</p>
| 0 | 2016-07-21T14:00:42Z | 38,506,429 | <p>You can do a one line statement like this</p>
<pre><code>>>> "".join(str(ord(x)) for x in getname)
'116101115116'
</code></pre>
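<p>An equivalent spelling with <code>map</code>, in case you prefer it (a sketch, using the same sample string as the question):</p>

```python
getname = 'test'
# ord() maps each character to its code point; str() turns each into text.
result = ''.join(map(str, map(ord, getname)))
# result == '116101115116'
```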
| 2 | 2016-07-21T14:04:12Z | [
"python",
"string",
"for-loop",
"ascii"
] |
Randomly concat data frames by row | 38,506,360 | <p>How can I randomly merge, join or concat pandas data frames by row? Suppose I have four data frames something like this (with a lot more rows): </p>
<pre><code>df1 = pd.DataFrame({'col1':["1_1", "1_1"], 'col2':["1_2", "1_2"], 'col3':["1_3", "1_3"]})
df2 = pd.DataFrame({'col1':["2_1", "2_1"], 'col2':["2_2", "2_2"], 'col3':["2_3", "2_3"]})
df3 = pd.DataFrame({'col1':["3_1", "3_1"], 'col2':["3_2", "3_2"], 'col3':["3_3", "3_3"]})
df4 = pd.DataFrame({'col1':["4_1", "4_1"], 'col2':["4_2", "4_2"], 'col3':["4_3", "4_3"]})
</code></pre>
<p>How can I join these four data frames randomly output something like this (they are randomly merged row for row):</p>
<pre><code> col1 col2 col3 col1 col2 col3 col1 col2 col3 col1 col2 col3
0 1_1 1_2 1_3 4_1 4_2 4_3 2_1 2_2 2_3 3_1 3_2 3_3
1 2_1 2_2 2_3 1_1 1_2 1_3 3_1 3_2 3_3 4_1 4_2 4_3
</code></pre>
<p>I was thinking I could do something like this: </p>
<pre><code>my_list = [df1,df2,df3,df4]
my_list = random.sample(my_list, len(my_list))
df = pd.DataFrame({'empty' : []})
for row in df:
new_df = pd.concat(my_list, axis=1)
print new_df
</code></pre>
<p>The above <code>for</code> statement will not work for more than the first row; every row after the first (I have more) will just be the same, i.e. it will only shuffle once: </p>
<pre><code> col1 col2 col3 col1 col2 col3 col1 col2 col3 col1 col2 col3
0 4_1 4_2 4_3 1_1 1_2 1_3 2_1 2_2 2_3 3_1 3_2 3_3
1 4_1 4_2 4_3 1_1 1_2 1_3 2_1 2_2 2_3 3_1 3_2 3_3
</code></pre>
| 2 | 2016-07-21T14:00:43Z | 38,509,931 | <p><strong>UPDATE:</strong> a much better solution from @Divakar:</p>
<pre><code>df1 = pd.DataFrame({'col1':["1_1", "1_1"], 'col2':["1_2", "1_2"], 'col3':["1_3", "1_3"], 'col4':["1_4", "1_4"]})
df2 = pd.DataFrame({'col1':["2_1", "2_1"], 'col2':["2_2", "2_2"], 'col3':["2_3", "2_3"], 'col4':["2_4", "2_4"]})
df3 = pd.DataFrame({'col1':["3_1", "3_1"], 'col2':["3_2", "3_2"], 'col3':["3_3", "3_3"], 'col4':["3_4", "3_4"]})
df4 = pd.DataFrame({'col1':["4_1", "4_1"], 'col2':["4_2", "4_2"], 'col3':["4_3", "4_3"], 'col4':["4_4", "4_4"]})
dfs = [df1, df2, df3, df4]
n = len(dfs)
nrows = dfs[0].shape[0]
ncols = dfs[0].shape[1]
A = pd.concat(dfs, axis=1).values.reshape(nrows,-1,ncols)
sidx = np.random.rand(nrows,n).argsort(1)
out_arr = A[np.arange(nrows)[:,None],sidx,:].reshape(nrows,-1)
df = pd.DataFrame(out_arr)
</code></pre>
<p>Output:</p>
<pre><code>In [203]: df
Out[203]:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 3_1 3_2 3_3 3_4 1_1 1_2 1_3 1_4 4_1 4_2 4_3 4_4 2_1 2_2 2_3 2_4
1 4_1 4_2 4_3 4_4 2_1 2_2 2_3 2_4 3_1 3_2 3_3 3_4 1_1 1_2 1_3 1_4
</code></pre>
<p>Explanation: (c) Divakar</p>
<p><strong>NumPy based solution</strong></p>
<p>Let's have a NumPy based vectorized solution and hopefully a fast one!</p>
<p>1) Let's reshape an array of concatenated values into a <code>3D</code> array "cutting" each row into groups of <code>ncols</code> corresponding to the # of columns in each of the input dataframes -</p>
<pre><code>A = pd.concat(dfs, axis=1).values.reshape(nrows,-1,ncols)
</code></pre>
<p>2) Next up, we trick <code>np.argsort</code> to give us random unique indices ranging from 0 to <code>N-1</code>, where N is the number of input dataframes - </p>
<pre><code>sidx = np.random.rand(nrows,n).argsort(1)
</code></pre>
<p>3) Final trick is NumPy's fancy indexing together with some broadcasting to index into <code>A</code> with <code>sidx</code> to give us the output array - </p>
<pre><code>out_arr = A[np.arange(nrows)[:,None],sidx,:].reshape(nrows,-1)
</code></pre>
<p>4) If needed, convert to dataframe -</p>
<pre><code>df = pd.DataFrame(out_arr)
</code></pre>
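<p>The <code>argsort</code> trick in step 2 can be sanity-checked in isolation: each row of the result is an independent permutation of <code>0..n-1</code>. A sketch, using the newer <code>default_rng</code> API as an assumption (the answer itself uses <code>np.random.rand</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# argsort over a row of random floats yields a random permutation of indices.
sidx = rng.random((4, 3)).argsort(axis=1)
rows_are_permutations = all(sorted(row) == [0, 1, 2] for row in sidx.tolist())
```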
<p><strong>OLD answer:</strong></p>
<p>IIUC you can do it this way:</p>
<pre><code>dfs = [df1, df2, df3, df4]
n = len(dfs)
ncols = dfs[0].shape[1]
v = pd.concat(dfs, axis=1).values
a = np.arange(n * ncols).reshape(n, df1.shape[1])
df = pd.DataFrame(np.asarray([v[i, a[random.sample(range(n), n)].reshape(n * ncols,)] for i in dfs[0].index]))
</code></pre>
<p>Output</p>
<pre><code>In [150]: df
Out[150]:
0 1 2 3 4 5 6 7 8 9 10 11
0 1_1 1_2 1_3 3_1 3_2 3_3 4_1 4_2 4_3 2_1 2_2 2_3
1 2_1 2_2 2_3 1_1 1_2 1_3 3_1 3_2 3_3 4_1 4_2 4_3
</code></pre>
<p>Explanation:</p>
<pre><code>In [151]: v
Out[151]:
array([['1_1', '1_2', '1_3', '2_1', '2_2', '2_3', '3_1', '3_2', '3_3', '4_1', '4_2', '4_3'],
['1_1', '1_2', '1_3', '2_1', '2_2', '2_3', '3_1', '3_2', '3_3', '4_1', '4_2', '4_3']], dtype=object)
In [152]: a
Out[152]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
</code></pre>
| 2 | 2016-07-21T16:44:14Z | [
"python",
"numpy",
"pandas"
] |
How to format a list when I print it | 38,506,394 | <pre><code>values = [[3.5689651969162908, 4.664618442892583, 3.338666695570425],
[6.293153787450157, 1.1285723419142026, 10.923859694586376],
[2.052506259736077, 3.5496423448584924, 9.995488620338277],
[9.41858935127928, 10.034233496516803, 7.070345442417161]]
def flatten(values):
new_values = []
for i in range(len(values)):
for v in range(len(values[0])):
new_values.append(values[i][v])
return new_values
v = flatten(values)
print("A 2D list contains:")
print("{}".format(values))
print("The flattened version of the list is:")
print("{}".format(v))
</code></pre>
<p>I am flattening the 2D list to 1D, but I can't format it. I know <code>v</code> is a list, and I tried using a for loop to print it, but I still can't get the result I want. Are there any ways to format the list? I want to print <code>v</code> with two decimal places, like this:</p>
<blockquote>
<p>[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]</p>
</blockquote>
<p>I am using Eclipse and Python 3.0+.</p>
| 2 | 2016-07-21T14:02:36Z | 38,506,679 | <p>You could use:</p>
<pre><code>print(["{:.2f}".format(val) for val in v])
</code></pre>
<p>Note that you can flatten your list using <code>itertools.chain</code>:</p>
<pre><code>import itertools
v = list(itertools.chain(*values))
</code></pre>
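<p>On Python 3.6+ the same formatting can be written with f-strings, which read a little more directly (a sketch, on a few of the question's values):</p>

```python
v = [3.5689651969162908, 1.1285723419142026, 9.995488620338277]
# :.2f rounds each value to two decimal places while formatting.
formatted = [f"{val:.2f}" for val in v]
```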
| 3 | 2016-07-21T14:15:09Z | [
"python",
"python-3.x"
] |
How to format a list when I print it | 38,506,394 | <pre><code>values = [[3.5689651969162908, 4.664618442892583, 3.338666695570425],
[6.293153787450157, 1.1285723419142026, 10.923859694586376],
[2.052506259736077, 3.5496423448584924, 9.995488620338277],
[9.41858935127928, 10.034233496516803, 7.070345442417161]]
def flatten(values):
new_values = []
for i in range(len(values)):
for v in range(len(values[0])):
new_values.append(values[i][v])
return new_values
v = flatten(values)
print("A 2D list contains:")
print("{}".format(values))
print("The flattened version of the list is:")
print("{}".format(v))
</code></pre>
<p>I am flattening the 2D list to 1D, but I can't format it. I know <code>v</code> is a list, and I tried using a for loop to print it, but I still can't get the result I want. Are there any ways to format the list? I want to print <code>v</code> with two decimal places, like this:</p>
<blockquote>
<p>[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]</p>
</blockquote>
<p>I am using Eclipse and Python 3.0+.</p>
| 2 | 2016-07-21T14:02:36Z | 38,506,682 | <p>You can first flatten the list (<a href="http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python">as described here</a>) and then use <code>round</code> to solve this:</p>
<pre><code>flat_list = [number for sublist in values for number in sublist]
# All numbers are in the same list now
print(flat_list)
[3.5689651969162908, 4.664618442892583, 3.338666695570425, 6.293153787450157, ..., 7.070345442417161]
rounded_list = [round(number, 2) for number in flat_list]
# The numbers are rounded to two decimals (but still floats)
print(rounded_list)
[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]
</code></pre>
<p>This can be written shorter if we put the rounding directly into the list comprehension:</p>
<pre><code>print([round(number, 2) for sublist in values for number in sublist])
</code></pre>
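<p>A quick check of the one-liner on a small nested input (a sketch):</p>

```python
values = [[3.5689, 4.6646], [10.9238]]
# Flatten and round in a single comprehension.
flat_rounded = [round(number, 2) for sublist in values for number in sublist]
```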
| 0 | 2016-07-21T14:15:21Z | [
"python",
"python-3.x"
] |
How to format a list when I print it | 38,506,394 | <pre><code>values = [[3.5689651969162908, 4.664618442892583, 3.338666695570425],
[6.293153787450157, 1.1285723419142026, 10.923859694586376],
[2.052506259736077, 3.5496423448584924, 9.995488620338277],
[9.41858935127928, 10.034233496516803, 7.070345442417161]]
def flatten(values):
new_values = []
for i in range(len(values)):
for v in range(len(values[0])):
new_values.append(values[i][v])
return new_values
v = flatten(values)
print("A 2D list contains:")
print("{}".format(values))
print("The flattened version of the list is:")
print("{}".format(v))
</code></pre>
<p>I am flattening the 2D list to 1D, but I can't format it. I know <code>v</code> is a list, and I tried using a for loop to print it, but I still can't get the result I want. Are there any ways to format the list? I want to print <code>v</code> with two decimal places, like this:</p>
<blockquote>
<p>[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]</p>
</blockquote>
<p>I am using Eclipse and Python 3.0+.</p>
| 2 | 2016-07-21T14:02:36Z | 38,506,724 | <p>I would use the built-in function <code>round()</code>, and while I was about it I would simplify your <code>for</code> loops:</p>
<pre><code>def flatten(values):
new_values = []
for i in values:
for v in i:
new_values.append(round(v, 2))
return new_values
</code></pre>
| 0 | 2016-07-21T14:17:12Z | [
"python",
"python-3.x"
] |
How to format a list when I print it | 38,506,394 | <pre><code>values = [[3.5689651969162908, 4.664618442892583, 3.338666695570425],
[6.293153787450157, 1.1285723419142026, 10.923859694586376],
[2.052506259736077, 3.5496423448584924, 9.995488620338277],
[9.41858935127928, 10.034233496516803, 7.070345442417161]]
def flatten(values):
new_values = []
for i in range(len(values)):
for v in range(len(values[0])):
new_values.append(values[i][v])
return new_values
v = flatten(values)
print("A 2D list contains:")
print("{}".format(values))
print("The flattened version of the list is:")
print("{}".format(v))
</code></pre>
<p>I am flattening the 2D list to 1D, but I can't format it. I know <code>v</code> is a list, and I tried using a for loop to print it, but I still can't get the result I want. Are there any ways to format the list? I want to print <code>v</code> with two decimal places, like this:</p>
<blockquote>
<p>[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]</p>
</blockquote>
<p>I am using Eclipse and Python 3.0+.</p>
| 2 | 2016-07-21T14:02:36Z | 38,506,768 | <p>How to flatten and transform the list in one line:</p>
<pre><code>[round(x, 2) for b in values for x in b]
</code></pre>
<p>It returns a flat list with each number rounded to two decimal places.</p>
| 0 | 2016-07-21T14:19:07Z | [
"python",
"python-3.x"
] |
How to format a list when I print it | 38,506,394 | <pre><code>values = [[3.5689651969162908, 4.664618442892583, 3.338666695570425],
[6.293153787450157, 1.1285723419142026, 10.923859694586376],
[2.052506259736077, 3.5496423448584924, 9.995488620338277],
[9.41858935127928, 10.034233496516803, 7.070345442417161]]
def flatten(values):
new_values = []
for i in range(len(values)):
for v in range(len(values[0])):
new_values.append(values[i][v])
return new_values
v = flatten(values)
print("A 2D list contains:")
print("{}".format(values))
print("The flattened version of the list is:")
print("{}".format(v))
</code></pre>
<p>I am flattening the 2D list to 1D, but I can't format it. I know <code>v</code> is a list, and I tried using a for loop to print it, but I still can't get the result I want. Are there any ways to format the list? I want to print <code>v</code> with two decimal places, like this:</p>
<blockquote>
<p>[3.57, 4.66, 3.34, 6.29, 1.13, 10.92, 2.05, 3.55, 10.00, 9.42, 10.03, 7.07]</p>
</blockquote>
<p>I am using Eclipse and Python 3.0+.</p>
| 2 | 2016-07-21T14:02:36Z | 38,506,845 | <p>Once you have <code>v</code> you can use a list comprehension like:</p>
<pre><code>formattedList = ["%.2f" % member for member in v]
</code></pre>
<p>output was as follows:</p>
<pre><code>['3.57', '4.66', '3.34', '6.29', '1.13', '10.92', '2.05', '3.55', '10.00', '9.42', '10.03', '7.07']
</code></pre>
<p>Hope that helps!</p>
| 0 | 2016-07-21T14:22:34Z | [
"python",
"python-3.x"
] |
How do I convert greyscale images represented as strings of space separated pixel values to column of features to train a classifier in scikit-learn? | 38,506,413 | <pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
data = pd.read_csv('fer2013.csv')
data.head()
</code></pre>
<p><img src="http://i.stack.imgur.com/XeKSW.jpg" alt="First 5 entries in training set"></p>
<pre><code>face1 = np.fromstring(data['pixels'][0], dtype=int, sep=' ')
exp1 = np.zeros((48,48))
k = 0
for i in range(len(exp1)):
for j in range(len(exp1[0])):
exp1[i][j] = face1[k]
k = k + 1
imgplot = plt.imshow(exp1, cmap="Greys_r")
plt.show()
mpimg.imsave('save.png', exp1)
</code></pre>
<p><img src="http://i.stack.imgur.com/bUfTB.png" alt="This is the image that i get"></p>
<p>The images are 48 x 48 pixels represented as a string ("12 34 12 34 55 ... "). So the first value in the string corresponds to the first pixel value.</p>
<p>Hence, my question is: how do I convert the string of space-separated pixel values to columns of features that I can use to train an SVM classifier, and why is the image not greyscale? The training part I can do myself.</p>
<p>There are 35887 training examples denoting 7 different expressions, so I need an efficient way of doing this.</p>
<p>P.S. The problem originated from attempting Challenges in Representation Learning: Facial Expression Recognition Challenge (Kaggle.com)</p>
| 0 | 2016-07-21T14:03:33Z | 38,506,672 | <p>You should show current attempts/ research you've done already to solve the problem when positing questions on SO.</p>
<p>You can load an image in Python easily using OpenCV, the result <code>img</code> is a numpy array, so you can just print it as a string e.g.</p>
<pre><code>import numpy as np
import cv2
# Load image
img = cv2.imread('image.jpg',0)
print img
</code></pre>
<p><em>Update after question revision:</em></p>
<p>If you want to just convert the string of numbers to an image, you can use something like the following:</p>
<pre><code>import numpy as np
image = '1 2 3 4 5 6'
image_width, image_height = 2, 3
result = np.fromstring(image, dtype=int, sep=" ").reshape((image_height, image_width))
</code></pre>
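<p>One caveat worth flagging: an equivalent that sidesteps <code>np.fromstring</code> (whose default binary mode is deprecated in newer NumPy releases) is to split the string yourself and build the array directly (a sketch):</p>

```python
import numpy as np

image = '1 2 3 4 5 6'
image_width, image_height = 2, 3
# Split on whitespace, convert to int, then reshape to (rows, cols).
result = np.array(image.split(), dtype=int).reshape(image_height, image_width)
```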
| 0 | 2016-07-21T14:14:58Z | [
"python",
"numpy",
"pandas",
"matplotlib",
"scikit-learn"
] |
How do I convert greyscale images represented as strings of space separated pixel values to column of features to train a classifier in scikit-learn? | 38,506,413 | <pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
data = pd.read_csv('fer2013.csv')
data.head()
</code></pre>
<p><img src="http://i.stack.imgur.com/XeKSW.jpg" alt="First 5 entries in training set"></p>
<pre><code>face1 = np.fromstring(data['pixels'][0], dtype=int, sep=' ')
exp1 = np.zeros((48,48))
k = 0
for i in range(len(exp1)):
for j in range(len(exp1[0])):
exp1[i][j] = face1[k]
k = k + 1
imgplot = plt.imshow(exp1, cmap="Greys_r")
plt.show()
mpimg.imsave('save.png', exp1)
</code></pre>
<p><img src="http://i.stack.imgur.com/bUfTB.png" alt="This is the image that i get"></p>
<p>The images are 48 x 48 pixels represented as a string ("12 34 12 34 55 ... "). So the first value in the string corresponds to the first pixel value.</p>
<p>Hence, my question is: how do I convert the string of space-separated pixel values to columns of features that I can use to train an SVM classifier, and why is the image not greyscale? The training part I can do myself.</p>
<p>There are 35887 training examples denoting 7 different expressions, so I need an efficient way of doing this.</p>
<p>P.S. The problem originated from attempting Challenges in Representation Learning: Facial Expression Recognition Challenge (Kaggle.com)</p>
| 0 | 2016-07-21T14:03:33Z | 38,531,397 | <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn import svm, metrics
#Read csv file
data = pd.read_csv('fer2013.csv')
#Number of samples
n_samples = len(data)
n_samples_train = 28709
n_samples_test = 3589
n_samples_validation = 3589
#Pixel width and height
w = 48
h = 48
#Separating labels and features respectively
y = data['emotion']
X = np.zeros((n_samples, w, h))
for i in range(n_samples):
X[i] = np.fromstring(data['pixels'][i], dtype=int, sep=' ').reshape(w, h)
#Training set
X_train = X[:n_samples_train].reshape(n_samples_train, -1)
y_train = y[:n_samples_train]
#Classifier
clf = svm.SVC(gamma=0.001, kernel='rbf', class_weight='balanced')
print('Training Classifier...')
clf.fit(X_train, y_train)
print('Done!!!')
#Testing set
X_test = X[n_samples_train : (n_samples_train + n_samples_test)].reshape(n_samples_test, -1)
y_test = y[n_samples_train : (n_samples_train + n_samples_test)]
#Prediction
expected = y_test
predicted = clf.predict(X_test)
#Results
print("Classification report for classifier %s:\n%s\n" % (clf, metrics.classification_report(expected, predicted)))
</code></pre>
<p>Here is my solution! Kindly let me know if certain things can be done more efficiently. Thank you Mark and Tom for all your help.</p>
| 0 | 2016-07-22T16:32:03Z | [
"python",
"numpy",
"pandas",
"matplotlib",
"scikit-learn"
] |
Python download images with changing variables | 38,506,431 | <p>I was trying to download images with URLs that change, but got an error.</p>
<pre><code>import urllib.request
import random
random_number=random.randint(500,600)
url_image="'https://csgostash.com/img/skins/s"+str(random_number)+"fn.png'"
image=urllib.request.urlretrieve(url_image, 'skin.png')
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
  File "C:/Users/luke/Desktop/scraper/test image download/cs test.py", line 8, in
    image=urllib.request.urlretrieve(url_image, 'skin.png')
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 187, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 162, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 465, in open
    response = self._open(req, data)
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 488, in _open
    'unknown_open', req)
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 443, in _call_chain
    result = func(*args)
  File "C:\Users\luke\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 1310, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError:
</code></pre>
| 0 | 2016-07-21T14:04:17Z | 38,506,903 | <p>First, <code>url_image</code> has a weird syntax: the extra inner quotes end up inside the URL.</p>
<pre><code> url_image="https://csgostash.com/img/skins/s"+str(random_number)+"fn.png"
</code></pre>
<p>If you fix this, you will notice a 403 error: protection against bots. Use a user agent.</p>
<pre><code>import urllib.request
import random
random_number=random.randint(500,600)
url_image="https://csgostash.com/img/skins/s"+str(random_number)+"fn.png"
user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
headers = {'User-Agent': user_agent}
req = urllib.request.Request(url_image, None, headers)
print(url_image)
#image, h = urllib.request.urlretrieve(url_image)
with urllib.request.urlopen(req) as response:
the_page = response.read()
print (the_page)
</code></pre>
<p>Edit: of course you may save it to a file:</p>
<pre><code>with open('skin.png', 'wb') as f:
f.write(the_page)
</code></pre>
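<p>If you want to confirm the user-agent header is actually attached before hitting the network, the <code>Request</code> object can be inspected directly. A sketch; the image id in the URL is made up:</p>

```python
import urllib.request

url_image = "https://csgostash.com/img/skins/s512fn.png"  # hypothetical id
user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
req = urllib.request.Request(url_image, None, {'User-Agent': user_agent})
# urllib stores header names capitalized, so query with 'User-agent'.
attached = req.get_header('User-agent')
```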
| 0 | 2016-07-21T14:24:58Z | [
"python",
"image",
"url",
"urllib",
"python-3.5"
] |
Calling a python script with arguments using subprocess | 38,506,442 | <p>I have a Python script which calls another Python script from another directory. To do that I used <code>subprocess.Popen</code>:</p>
<pre><code>import os
import subprocess
arg_list = [project, profile, reader, file, str(loop)]
</code></pre>
<p>where all the args are strings (<code>str(loop)</code> converts the one that is not)</p>
<pre><code>f = open(project_path + '/log.txt','w')
proc = subprocess.Popen([sys.executable, python_script] + arg_list, stdin=subprocess.PIPE, stdout=f, stderr=f)
streamdata = proc.communicate()[0]
retCode = proc.returncode
f.close()
</code></pre>
<p>This part works well, because of the log file I can see errors that occurs on the called script. Here's the python script called:</p>
<pre><code>import time
import csv
import os
class loading(object):
def __init__(self, project=None, profile=None, reader=None, file=None, loop=None):
self.project=project
self.profile=profile
self.reader=reader
self.file=file
self.loop=loop
def csv_generation(self):
f=open(self.file,'a')
try:
writer=csv.writer(f)
if self.loop==True:
writer.writerow((self.project,self.profile,self.reader))
else:
raise('File already completed')
finally:
file.close()
def main():
p = loading(project, profile, reader, file, loop)
p.csv_generation()
if __name__ == "__main__":
main()
</code></pre>
<p>When I launch my subprocess.Popen, I get an error from the called script which tells me that <code>'project' is not defined</code>. It looks like the Popen call doesn't pass the arguments to that script. I think I'm doing something wrong; does anyone have an idea?</p>
| 0 | 2016-07-21T14:04:38Z | 38,506,932 | <p>When you pass parameters to a new process they are passed positionally, the names from the parent process do not survive, only the values. You need to add:</p>
<pre><code>import sys
def main():
if len(sys.argv) == 6:
project, profile, reader, file, loop = sys.argv[1:]
else:
        raise ValueError("incorrect number of arguments")
p = loading(project, profile, reader, file, loop)
p.csv_generation()
</code></pre>
<p>We are testing the length of <code>sys.argv</code> before the assignment (the first element is the name of the program).</p>
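<p>An alternative to hand-checking <code>len(sys.argv)</code> is <code>argparse</code>, which validates the argument count and produces a usage error for you. A sketch; the attribute names mirror the script's parameters:</p>

```python
import argparse

def parse_args(argv):
    # Five required positional arguments, matching the parent's arg_list.
    parser = argparse.ArgumentParser()
    for name in ('project', 'profile', 'reader', 'file', 'loop'):
        parser.add_argument(name)
    return parser.parse_args(argv)

ns = parse_args(['proj', 'default', 'csv', 'out.csv', 'True'])
```

Note that all parsed values arrive as strings, so <code>loop</code> would still need an explicit conversion if a boolean is wanted.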
| 2 | 2016-07-21T14:26:22Z | [
"python",
"arguments",
"subprocess",
"popen"
] |
I keep getting errors like 'ResultSet' object has no attribute 'get' and 'NoneType' object has no attribute 'get' | 38,506,478 | <p>I am trying to scrape the YouTube watermark <code>a</code> element's href, but I can't seem to grab it.</p>
<p>if I try </p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find_all('a', {'class': 'ytp-watermark'})
entries = video_row.get('href')
return entries
</code></pre>
<p>I get </p>
<pre><code>'ResultSet' object has no attribute 'get'
</code></pre>
<p>If I try</p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'class': 'ytp-watermark'})
entries = video_row.get('href')
return entries
</code></pre>
<p>I get </p>
<pre><code>'NoneType' object has no attribute 'get'
</code></pre>
<p>If I try</p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'target': '_blank'})
entries = video_row.get('href')[24]
return entries
</code></pre>
<p>I get a single character</p>
<pre><code>'s'
</code></pre>
<p>if i try</p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'target': '_blank'})[24]
entries = video_row.get('href')
return entries
</code></pre>
<p>i get </p>
<pre><code>24
</code></pre>
<p>if i try</p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'target': '_blank'})[24:]
entries = video_row.get('href')
return entries
</code></pre>
<p>I get</p>
<pre><code>unhashable type: 'slice'
</code></pre>
<p>if I try</p>
<pre><code>def panties():
from lxml import html
pan_url = 'http://www.panvideos.com'
shtml = requests.get(pan_url, headers=headers)
soup = BeautifulSoup(shtml.text, 'html5lib')
video_row = soup.find_all('div', {'class': 'video'})
def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'target': '_blank'})
entries = [{'text': div.get('href'),
} for div in video_row][24]
return entries
</code></pre>
<p>I get</p>
<pre><code>'NavigableString' object has no attribute 'get'
</code></pre>
<p>if i try</p>
<pre><code> def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find_all('a', {'class': 'ytp-title-link'})
entries = [{'text': div.get('href'),
} for div in video_row]
return entries
</code></pre>
<p>I get </p>
<pre><code> []
</code></pre>
<p>If I use the chrome inspect and hover over the water mark I get</p>
<pre><code> <a class="ytp-watermark yt-uix-sessionlink" target="_blank" aria-label="Watch on www.youtube.com" data-sessionlink="feature=player-watermark" href="https://www.youtube.com/watch?v=Xjww1pgKgnU" data-layer="7">
<svg xmlns:xlink="http://www.w3.org/1999/xlink" height="100%" version="1.1" viewBox="0 0 77 34" width="100%">
........
</svg>
</a>
</code></pre>
<p>but if I use the search ability of inspect and type _blank I get</p>
<pre><code><a class="ytp-title-link yt-uix-sessionlink" target="_blank" data-sessionlink="feature=player-title" href="https://www.youtube.com/watch?v=Xjww1pgKgnU">
<span class="ytp-title-playlist-icon" style="display: none;">
.....
</span>
<span>Packer Luther King Feat Mgp the Saw -BIEN MALA (Video Oficial)</span></a>
</code></pre>
<p>neither one of these are returning results. Is my syntax wrong? any help will be appreciated </p>
<p>this is my whole function</p>
<pre><code>def panties():
from lxml import html
pan_url = 'http://www.panvideos.com'
shtml = requests.get(pan_url, headers=headers)
soup = BeautifulSoup(shtml.text, 'html5lib')
video_row = soup.find_all('div', {'class': 'video'})
def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find('a', {'class': 'ytp-title-link yt-uix-sessionlink'})
entries = [{'text': div.get('href'),
} for div in video_row]
return entries
entries = [{'text': div.h4.text,
'href': div.a.get('href'),
'tube': youtube_link(div.a.get('href')),
} for div in video_row][:1]
return entries
</code></pre>
<p>It gets fed a URL, uses that URL to get the detail page, extracts the info from that page, and returns it. For some reason the link is being returned as None. If I try <code>find_all</code> or <code>find</code> it will not return a single <code>a</code> element, but if I look for <code>h1</code> it works.</p>
<p>EDIT I have tried different parsers</p>
<p>html.parser, lxml, and html5lib</p>
<p>EDIT:</p>
<p>I believe the data can't be scraped because it is coming from the media player. When I did</p>
<pre><code> video_row = soupdata.find_all('body')
</code></pre>
<p>the data I was looking for did not show up. So it's not me, and I don't think it's a bug or anything; it's just not obtainable by normal means. Link tags, meta tags, and a few other tags cannot be grabbed.</p>
| 1 | 2016-07-21T14:06:10Z | 38,506,855 | <p>When I used the full value for the class, I got the href:</p>
<pre><code>video_row = soupdata.find('a', {'class': 'ytp-watermark yt-uix-sessionlink'})
</code></pre>
<p>and if you want to use <code>findAll</code> you have to iterate over the results. For example, create an additional list <code>entries_final</code> and do this:</p>
<pre><code>video_rows = soupdata.findAll('a', {'class': 'ytp-watermark yt-uix-sessionlink'})
entries_final = []
for row in video_rows:
entries_final.append(row.get('href'))
</code></pre>
<p>and then <code>return entries_final</code></p>
| 0 | 2016-07-21T14:22:55Z | [
"python",
"django",
"web-scraping",
"beautifulsoup"
] |
How can I edit an object without saving it in Django | 38,506,481 | <p>There is a problem with my Django project: when I add an object it saves immediately, after which I am redirected by object id to <strong>server_edit</strong> where I can fill in the fields. If I fill in no fields and push the browser's "back" button, the object is saved without any data even though the <strong>Save</strong> button was not pushed on the template. </p>
<p>Is there any way not to save an object when no fields were filled in?</p>
<p>How can I edit an object without saving it?</p>
<p>I have a model "Server" that contains a few CharField</p>
<pre><code>class Server(models.Model):
name = models.CharField(max_length=256)
</code></pre>
<p>I add and save an object: </p>
<pre><code>def server_add(request):
server = Server()
server.save()
return HttpResponseRedirect(reverse('server:server_edit', args=(server.id,)))
</code></pre>
<p>after this I redirect to edit page:</p>
<pre><code>def server_edit(request, server_id):
server = get_object_or_404(Server, pk=server_id)
return render(request, 'server/server_edit.html'{'server': server})
</code></pre>
<p>Fields will be edited on html template:</p>
<pre><code><form action="{% url 'server:server_edit_post' server.id %}" method="post">
{% csrf_token %}
<tr>
<td>{% trans "Name:" %}</td>
<td><input type="text" name="name" maxlength="256" value="{{server.name}}" required></td>
</tr>
<button type="submit" class="btn btn-success">{% trans "Save" %}</button>
</form>
</code></pre>
<p>This view gets data from the template and allows to edit them:</p>
<pre><code>def server_edit_post(request, server_id):
server= get_object_or_404(Server, pk=server_id)
name = request.POST['name']
server.name = name
server.save()
return HttpResponseRedirect(reverse('server:server_index', args=()))
</code></pre>
 | 0 | 2016-07-21T14:06:22Z | 38,507,873 | <p>You should avoid saving an object and <strong>then</strong> filling it with data in a different view.</p>
<p>Try using generic edit views such as CreateView/EditView or FormView with Django forms (<a href="https://docs.djangoproject.com/en/1.9/ref/class-based-views/generic-editing/" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/class-based-views/generic-editing/</a>).</p>
<p>Example:</p>
<pre><code>class ServerCreateView(CreateView):
form_class = ServerCreateForm
template_name = 'servers/add.html'
</code></pre>
<p>With this, all validation is done automatically.</p>
| 1 | 2016-07-21T15:06:59Z | [
"python",
"django",
"django-forms"
] |
How can I edit an object without saving it in Django | 38,506,481 | <p>There is a problem with my Django project: when I add an object it saves immediately, after which I am redirected by object id to <strong>server_edit</strong> where I can fill fields. If I fill no fields and push the "back" (go to previous page) browser button, the object will be saved without any data even if the <strong>Save</strong> button was not pushed on the template. </p>
<p>Is there any way to not save an object when no fields were filled?</p>
<p>How can I edit an object without saving it?</p>
<p>I have a model "Server" that contains a few CharField</p>
<pre><code>class Server(models.Model):
name = models.CharField(max_length=256)
</code></pre>
<p>I add and save an object: </p>
<pre><code>def server_add(request):
server = Server()
server.save()
return HttpResponseRedirect(reverse('server:server_edit', args=(server.id,)))
</code></pre>
<p>after this I redirect to edit page:</p>
<pre><code>def server_edit(request, server_id):
server = get_object_or_404(Server, pk=server_id)
    return render(request, 'server/server_edit.html', {'server': server})
</code></pre>
<p>Fields will be edited on html template:</p>
<pre><code><form action="{% url 'server:server_edit_post' server.id %}" method="post">
{% csrf_token %}
<tr>
<td>{% trans "Name:" %}</td>
<td><input type="text" name="name" maxlength="256" value="{{server.name}}" required></td>
</tr>
<button type="submit" class="btn btn-success">{% trans "Save" %}</button>
</form>
</code></pre>
<p>This view gets data from the template and allows to edit them:</p>
<pre><code>def server_edit_post(request, server_id):
server= get_object_or_404(Server, pk=server_id)
name = request.POST['name']
server.name = name
server.save()
return HttpResponseRedirect(reverse('server:server_index', args=()))
</code></pre>
 | 0 | 2016-07-21T14:06:22Z | 38,509,199 | <p>Http calls should be stateless. The connection can be dropped at any time, which leaves the DB in the same inconsistent state you are trying to avoid here. Instead of using a form which only contains the name, you could just redirect the user to a new page with a form for the rest of the data, pass the previously entered name as a GET parameter, and pre-fill that form with the name on that page.</p>
| 0 | 2016-07-21T16:08:52Z | [
"python",
"django",
"django-forms"
] |
Tornado Escape Pound # character | 38,506,499 | <p>I am writing a program/service that handles text and its attributes.
localhost:8000/title?img=img_url.jpg&text_color=#FF0000 </p>
<p>and In my handler I have something like:</p>
<pre><code>application = tornado.web.Application([
(r"/title_overlay", MainHandler),
])
class MainHandler(tornado.web.RequestHandler):
def check_origin(self, origin):
return True
def get(self):
image_url = self.get_argument("img", None, True)
image_local_file = 'image' + "_" + image_url.split('/')[-1]
urllib.urlretrieve(image_url, image_local_file)
text_color = self.get_argument('text_color', '', True)
.....
.....
.....
</code></pre>
<p>I'm unable to get the text_color value, i.e. #FF0000. Extracting the img url works, but not text_color.
Does this have something to do with the # character?</p>
| 0 | 2016-07-21T14:07:17Z | 38,510,311 | <p>Yes, it's because the portion after '#' is called the <a href="https://en.wikipedia.org/wiki/Fragment_identifier" rel="nofollow">fragment identifier</a>, which the browser does not even send to the server when it fetches the URL. In order to encode a color in your URL you'll have to omit the '#' character, or URL-encode it:</p>
<p><code>text_color=%23FF0000</code></p>
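<p>As a quick sanity check (a minimal sketch, not part of the original answer), the standard library's <code>quote</code> produces exactly this escape:</p>

```python
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote  # Python 2

# '#' is percent-encoded so it survives as part of the query string
assert quote("#FF0000", safe="") == "%23FF0000"
```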
| 1 | 2016-07-21T17:05:19Z | [
"python",
"python-2.7",
"web-services",
"url",
"tornado"
] |
Python to convert special unicode characters (like ♡) to their hexadecimal literals in a string (like 0x2661) | 38,506,568 | <p>I'm writing a program that pulls screen names and tweets from twitter into a txt file. Some screen names contain special unicode characters like ♡. In my Bash terminal these characters show up as an empty box. My sql fails when I try to insert this character and tells me it contains an untranslatable character. Is there a way to convert only special characters in python to their hexadecimal form? I would also be happy just replacing these special characters with a generic placeholder. </p>
<p>Ideally "screenName♡" would convert to "screenName0x2661" or just replace special characters to something like "screenName#REPLACE#"</p>
<p>Thanks!</p>
| 0 | 2016-07-21T14:10:21Z | 38,506,689 | <p>You can achieve this using the <code>encode</code> method, explained <a href="https://docs.python.org/2.7/howto/unicode.html" rel="nofollow">here</a>. From the docs:</p>
<blockquote>
<p>Another important method is .encode([encoding], [errors='strict']),
which returns an 8-bit string version of the Unicode string, encoded
in the requested encoding. The errors parameter is the same as the
parameter of the unicode() constructor, with one additional
  possibility; as well as 'strict', 'ignore', and 'replace', you can
  also pass 'xmlcharrefreplace' which uses XML's character references.
The following example shows the different results:</p>
</blockquote>
<pre><code>>>> u = unichr(40960) + u'abcd' + unichr(1972)
>>> u.encode('utf-8')
'\xea\x80\x80abcd\xde\xb4'
>>> u.encode('ascii')
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\ua000' in position 0: ordinal not in range(128)
>>> u.encode('ascii', 'ignore')
'abcd'
>>> u.encode('ascii', 'replace')
'?abcd?'
>>> u.encode('ascii', 'xmlcharrefreplace')
'&#40960;abcd&#1972;'
</code></pre>
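<p>If the goal is specifically the <code>screenName0x2661</code> form from the question, a small helper (a sketch, not from the docs) can map each non-ASCII character to its hex codepoint:</p>

```python
def hex_escape(s):
    # keep ASCII characters as-is, replace anything else with its hex codepoint
    return "".join(c if ord(c) < 128 else "0x%x" % ord(c) for c in s)

assert hex_escape(u"screenName\u2661") == "screenName0x2661"
```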
| 0 | 2016-07-21T14:15:42Z | [
"python",
"string",
"bash",
"unicode",
"tweepy"
] |
Failed to parse arguments | 38,506,569 | <p>I'm trying to run this code:</p>
<pre><code>os.system("""gnome-terminal -e 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" ' """)
</code></pre>
<p>and the error is:</p>
<p>"Failed to parse arguments: Argument to "--command/-e" is not a valid command: Text ended before matching quote was found for ". (The text was 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" ')"</p>
<p>Here's my entire code:</p>
<pre><code>import os
import time
def drift():
global gateway
gateway = raw_input("Gateway IP > ")
time.sleep(0.5)
global target
target = raw_input("Target IP > ")
time.sleep(0.5)
global inter
inter = raw_input("Interface > ")
drift()
os.system("""gnome-terminal -e 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" ' """)
</code></pre>
<p>So for those of you who don't know what "Driftnet" is, it's a MITM attack program to pick up pictures. To set it up you have to type in one terminal</p>
<p>"arpspoof -i -t "</p>
<p>Then open a new terminal and type the same thing but with the order of gateway IP and target IP switched, to trick your target into thinking you're a router. </p>
<p>I want my program to ask for gateway IP, target IP, interface, then run
"arpspoof -i -t "</p>
<p>Then open a new terminal and and type out the same thing except switch the order of the gateway IP and target IP to where the target is first and gateway is second without the user having to type anything, and I'm trying to use <code>os.system("""gnome-terminal -e 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" ' """)</code> to do that, but it returns the error:</p>
<p>"Failed to parse arguments: Argument to "--command/-e" is not a valid command: Text ended before matching quote was found for ". (The text was 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" ')"</p>
<p>Thanks.</p>
| 1 | 2016-07-21T14:10:21Z | 38,506,694 | <p>The issue is that you're trying to add strings in a triple quoted string. You seem to be trying to put the value of your variables into your triple quoted string, but you're actually passing the literal string <code>gnome-terminal -e 'bash -c "arpspoof -i " + inter + " -t " + target + " " + gateway" '</code> to <code>os.system()</code>.</p>
<p>What you need to do is use <code>format</code>.</p>
<pre><code>os.system("""gnome-terminal -e 'bash -c "arpspoof -i {inter} -t {target} {gateway}" ' """.format(inter=inter, target=target, gateway=gateway))
</code></pre>
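<p>The same <code>format</code> pattern can be verified in isolation before handing the string to <code>os.system</code> (the values below are made up for illustration):</p>

```python
inter, target, gateway = "eth0", "10.0.0.5", "10.0.0.1"

cmd = """gnome-terminal -e 'bash -c "arpspoof -i {inter} -t {target} {gateway}" ' """.format(
    inter=inter, target=target, gateway=gateway)

# the placeholders are substituted, not passed literally
assert cmd == """gnome-terminal -e 'bash -c "arpspoof -i eth0 -t 10.0.0.5 10.0.0.1" ' """
```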
| 1 | 2016-07-21T14:15:46Z | [
"python",
"linux",
"bash"
] |
Cython: python int to uint8_t | 38,506,662 | <p>For my work, I need to use this C++ function with python.</p>
<pre><code>std::vector<std::string> pinCertificate(const std::vector<uint8_t>& certificate, bool local)
</code></pre>
<p>I've already translated the prototype to this in Cython</p>
<pre><code>vector[string] pinCertificate(const vector[uint8_t]& certificate, const boolean& local)
</code></pre>
<p>But the real problem comes when I try to use it. I always get the following error, or a segfault.</p>
<pre><code>TypeError: an integer is required
</code></pre>
<p>Here's how I call my function:</p>
<pre><code># cert_id is a simple string
certificate = [np.uint8(x) for x in list(cert_id.encode())]
result = self.dring.config.pin_certificate(certificate, local)
</code></pre>
<p>I don't know why it is crashing, certificate contains only <code>numpy.uint8</code>'s.</p>
<p>Is there anything that I did wrong? Thanks in advance.</p>
| 1 | 2016-07-21T14:14:37Z | 38,513,407 | <p>Well, it seems that the problem came from the internal C++ code I had.</p>
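<p>For completeness: if the crash had been on the Python side, passing plain ints instead of NumPy scalars is a safer way to fill a <code>vector[uint8_t]</code>, since Cython coerces ordinary Python ints directly (a sketch, assuming byte values stay in 0-255):</p>

```python
cert_id = "example"  # stand-in for the real certificate id

# bytearray yields plain Python ints in range(256)
certificate = list(bytearray(cert_id.encode("utf-8")))

assert certificate == [101, 120, 97, 109, 112, 108, 101]
assert all(isinstance(b, int) and 0 <= b < 256 for b in certificate)
```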
| 0 | 2016-07-21T20:05:31Z | [
"python",
"c++",
"type-conversion",
"cython",
"typeerror"
] |
What does this decorator of decorator do? | 38,506,699 | <p>I'm reading <a href="http://stackoverflow.com/a/1594484/5399734">this answer</a> to understand what decorators are and what can they do, from which my question emerges. The author provide a bonus snippet, which can make any decorator accept generically any argument, and this snippet really confused me:</p>
<pre><code>def decorator_with_args(decorator_to_enhance):
def decorator_maker(*args, **kwargs):
def decorator_wrapper(func):
return decorator_to_enhance(func, *args, **kwargs)
return decorator_wrapper
return decorator_maker
@decorator_with_args
def decorated_decorator(func, *args, **kwargs):
def wrapper(function_arg1, function_arg2):
print "Decorated with", args, kwargs
return func(function_arg1, function_arg2)
return wrapper
@decorated_decorator(42, 404, 1024)
def decorated_function(function_arg1, function_arg2):
print "Hello", function_arg1, function_arg2
decorated_function("Universe and", "everything")
</code></pre>
<p>This outputs:</p>
<pre><code>Decorated with (42, 404, 1024) {}
Hello Universe and everything
</code></pre>
<p><strong>My question is:</strong> What exactly does <code>decorator_with_args</code> do? </p>
<p>Seems that it takes a decorator as its argument, wrap it with a decorator maker that accept arbitrary arguments, which are passed to the argument decorator, and return that decorator maker. This means <code>decorator_with_args</code> actually turns a decorator into a decorator maker. Sounds impossible, right? Anyway, I think it's tricky to tell its function.</p>
<p>And yes, the original code contains many comments, but I failed to get the answer from them, so I removed them to make the code shorter and cleaner.</p>
<hr>
| 0 | 2016-07-21T14:15:55Z | 38,508,431 | <p>In short, <code>decorator_with_args</code> turns a function call like this:</p>
<pre><code>function(func, *args, **kwargs)
</code></pre>
<p>to this form:</p>
<pre><code>function(*args, **kwargs)(func)
</code></pre>
<p>Note that both function calls return another function.</p>
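<p>A minimal runnable illustration of that transformation, using a made-up <code>tag</code> decorator:</p>

```python
def decorator_with_args(decorator_to_enhance):
    def decorator_maker(*args, **kwargs):
        def decorator_wrapper(func):
            return decorator_to_enhance(func, *args, **kwargs)
        return decorator_wrapper
    return decorator_maker

@decorator_with_args
def tag(func, label):
    # 'label' arrives via the decorator_maker(*args) call
    def wrapper(*a, **kw):
        return "%s:%s" % (label, func(*a, **kw))
    return wrapper

@tag("hi")          # tag("hi") returns decorator_wrapper, which receives greet
def greet(name):
    return name

assert greet("bob") == "hi:bob"
```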
| 1 | 2016-07-21T15:32:43Z | [
"python",
"python-2.7",
"decorator",
"python-decorators"
] |
How can I get a list of genes from a list of refseq accession numbers (NM_<num> and NR_<num>) | 38,506,744 | <p>I am using python 2.7 and trying to use biopython or pyensembl to obtain a list of genes from a list of refseq accession numbers. Is there a simple way I can do this? </p>
| 0 | 2016-07-21T14:18:05Z | 38,513,914 | <p>Yes, there is a simple way. But you should always show some effort, some code you tried before asking. This is the code you need:</p>
<pre><code>from Bio import Entrez
Entrez.email = "trouselife@gmail.com"
handle = Entrez.efetch(db="nucleotide", id="NM_123456", retmode="xml")
record = Entrez.read(handle)
for feature in record[0]["GBSeq_feature-table"]:
for qualifier in feature["GBFeature_quals"]:
if "gene" in qualifier["GBQualifier_name"]:
print(qualifier["GBQualifier_value"])
# MHK7.14; MHK7_14
</code></pre>
<p>You should fetch some sample records with the <code>efetch</code> line first, and then find the data you want to extract from the handle you get.</p>
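<p>The traversal itself can be checked without a network call by mocking the parsed record structure (field names taken from the answer above):</p>

```python
# mocked Entrez.read() result with the same nesting as a real GenBank record
record = [{"GBSeq_feature-table": [
    {"GBFeature_quals": [
        {"GBQualifier_name": "gene", "GBQualifier_value": "MHK7.14"},
        {"GBQualifier_name": "note", "GBQualifier_value": "ignored"}]}]}]

genes = [q["GBQualifier_value"]
         for feature in record[0]["GBSeq_feature-table"]
         for q in feature["GBFeature_quals"]
         if "gene" in q["GBQualifier_name"]]

assert genes == ["MHK7.14"]
```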
| 0 | 2016-07-21T20:35:37Z | [
"python",
"bioinformatics",
"biopython"
] |
Random number does not update every time I call the function | 38,506,759 | <p>I am writing a game, "Guess the number". Initially, the computer chooses a random number in the range 0-99 and the player guesses what number it is. Once the player's guessed the number, the game starts again.</p>
<p>The player can restart the game at any time, using two buttons:
"New game. Range is 0-100"
and
"New game. Range is 0-1000"</p>
<p><a href="http://i.stack.imgur.com/wCn3M.png" rel="nofollow"><img src="http://i.stack.imgur.com/wCn3M.png" alt="enter image description here"></a></p>
<p>Here's the link to an online editor, where you can view and edit the file:</p>
<p><a href="http://www.codeskulptor.org/#user41_1sYMUy5rDi_0.py" rel="nofollow" title="CodeSkulptor">http://www.codeskulptor.org/#user41_1sYMUy5rDi_0.py</a></p>
<p>For now, the number that the computer chooses is always printed, but once the game works correctly, it'll be removed.</p>
<p>Anyway, the problem is, that when the user enters the number the computer chose, the game restarts, but the same number is chosen by the computer as last time. But if the user clicks on the new game button, a different number is chosen by the computer, which is correct.</p>
<p>The logical error occurs here:</p>
<pre><code>num_range = random.randrange(0,100)
# helper function to start and restart the game
def new_game():
print "Guess the number!"
global secret_number
global num_range
secret_number = num_range
print secret_number
</code></pre>
<p>If I remove the </p>
<blockquote>
<p>secret_number = num_range</p>
</blockquote>
<p>line from the new_game() function and replace it with </p>
<blockquote>
<p>num_range = random.randrange(0,100)</p>
</blockquote>
<p>every time the user correctly guesses the number, a new game starts with a different number in the same range, which is correct, but I need a variable, so that the two buttons work. Do you know how to use the variable num_range so that every time the game starts automatically after the user has guessed the number, the secret number is different than the last time?</p>
<p>Here's the entire program:</p>
<pre><code>import simplegui
import random
import math
num_range = random.randrange(0,100)
# helper function to start and restart the game
def new_game():
print "Guess the number!"
global secret_number
global num_range
secret_number = num_range
print secret_number
#event handlers for control panel
def range100():
# button that changes the range to [0,100) and starts a new game
global num_range
num_range = random.randrange(0,100)
global secret_number
secret_number = num_range
print "The range is 0-100"
new_game()
def range1000():
# button that changes the range to [0,1000) and starts a new game
global secret_number
global num_range
num_range = random.randrange(0, 1000)
secret_number = num_range
print "The range is 0-1000"
new_game()
def input_guess(guess):
# main game logic
g = int(guess)
# remove this when you add your code
print "Guess was", g
if g <secret_number :
        print "Higher"
elif g > secret_number:
print "Lower"
else:
print "Correct"
print "Starting a new game..."
print ""
new_game()
# create frame
frame = simplegui.create_frame("Guess the number",200, 200)
frame.add_input("Enter your guess", input_guess, 200)
frame.add_button("New game. Range is 0-100", range100, 200)
frame.add_button("New game. Range is 0-1000", range1000, 200)
# register event handlers for control elements and start frame
# call new_game
new_game()
</code></pre>
| -1 | 2016-07-21T14:18:48Z | 38,507,219 | <p>Let's do it the quick but not so ugly way.</p>
<p>First, let's create a new global variable <code>max_number = 100</code></p>
<p>Then, the changes you must do to <code>new_game()</code></p>
<pre><code>def new_game():
print "Guess the number!"
global secret_number
global max_number
secret_number = random.randrange(0,max_number)
print secret_number
</code></pre>
<p>Finally, replace <code>range100()</code> and <code>range1000()</code>'s definitions by :</p>
<pre><code>def range100():
global max_number
max_number = 100
print "The range is 0-"+str(max_number)
new_game()
def range1000():
global max_number
max_number = 1000
print "The range is 0-"+str(max_number)
new_game()
</code></pre>
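<p>The key point is that <code>random.randrange</code> is called again on every <code>new_game()</code>, instead of reusing one cached value. Outside of simplegui the difference can be seen in a standalone sketch:</p>

```python
import random

def new_secret(max_number):
    # a fresh draw each call, mirroring the fixed new_game()
    return random.randrange(0, max_number)

values = {new_secret(100) for _ in range(50)}
assert all(0 <= v < 100 for v in values)
assert len(values) > 1  # 50 fresh draws are (practically) never all identical
```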
| 2 | 2016-07-21T14:38:20Z | [
"python"
] |
Deleting previous token in a sentence if same as current token python | 38,506,857 | <p>I have 2 dictionaries of key, value pairs like:</p>
<pre><code>tokenIDs2number = {(6, 7): 1000000000.0, (22,): 700.0, (12,): 3000.0}
tokenIDs2location = {(27, 28): u'South Asia'}
</code></pre>
<p>The keys are tuples of the index locations of number and location slots in the sentence:</p>
<pre><code>GDP in 2007 totaled about $ 1 billion , or about $ 3,000 per capita -LRB- exceeding the average of about $ 700 in the rest of South Asia -RRB- .
</code></pre>
<p>I want to loop through all the tuples for both the numbers and locations, and remove values from the tuples if they are next to each other, e.g. make them:</p>
<pre><code>tokenIDs2number = {(7,): 1000000000.0, (22,): 700.0, (12,): 3000.0}
tokenIDs2number = {(28,): u'South Asia'}
</code></pre>
<p>So that later on, I can fill this sentence token in with location and number slots, so the sentence becomes:</p>
<pre><code>GDP in 2007 totaled about $ NUMBER_SLOT , or about $ NUMBER_SLOT per capita -LRB- exceeding the average of about $ NUMBER_SLOT in the rest of LOCATION_SLOT -RRB- .
</code></pre>
<p>Instead of:</p>
<pre><code>GDP in 2007 totaled about $ NUMBER_SLOT NUMBER_SLOT , or about $ NUMBER_SLOT per capita -LRB- exceeding the average of about $ 700 in the rest of LOCATION_SLOT LOCATION_SLOT -RRB- .
</code></pre>
<p>Current code:</p>
<pre><code>for locationTokenIDs, location in tokenIDs2location.items():
for numberTokenIDs, number in tokenIDs2number.items():
prevNoID=numberTokenIDs[0]
prevLocID=locationTokenIDs[0]
for numberTokenID in numberTokenIDs:
for locationTokenID in locationTokenIDs:
if numberTokenID==prevNoID+1:
numberTokenIDs.remove(numberTokenIDs[prevNoID])
if numberTokenID>0 and numberTokenID<(len(sampleTokens)-1):
prevNoID = numberTokenID
if locationTokenID==prevLocID+1:
locationTokenIDs.remove(locationTokenIDs[prevLocID])
if locationTokenID>0 and locationTokenID<(len(sampleTokens)-1):
prevLocID = locationTokenID
</code></pre>
<p>However, it seems I cannot just remove numbers from a tuple, so I am struggling to figure out how to do this.</p>
| 1 | 2016-07-21T14:23:04Z | 38,507,047 | <p>Since <code>tuple</code>s (and usually <code>dict</code> keys in general) are immutable, you can not change the keys directly. However, you can use a dictionary comprehension to transform your dict to what you need in one line:</p>
<pre><code>tokenIDs2number = {(6, 7): 1000000000.0, (22,): 700.0, (12,): 3000.0}
tokenIDs2number = {(k[-1],): v for k, v in tokenIDs2number.items()}
</code></pre>
<p>Using <code>k[-1]</code> to always access the last element lets you handle tuples of any length the same way.</p>
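<p>Applied to the data from the question, this gives exactly the desired result:</p>

```python
def keep_last_index(d):
    # collapse each tuple key to a 1-tuple holding its final index
    return {(k[-1],): v for k, v in d.items()}

tokenIDs2number = {(6, 7): 1000000000.0, (22,): 700.0, (12,): 3000.0}
assert keep_last_index(tokenIDs2number) == {(7,): 1000000000.0, (22,): 700.0, (12,): 3000.0}

tokenIDs2location = {(27, 28): u'South Asia'}
assert keep_last_index(tokenIDs2location) == {(28,): u'South Asia'}
```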
| 1 | 2016-07-21T14:30:59Z | [
"python",
"dictionary",
"tuples"
] |
Python Sqlite3- Python shell can create and edit databases but why can't a .py file create or access any database | 38,506,909 | <pre><code>import sqlite3
db=sqlite3.connect('new.db')
cursor=db.cursor()
cursor.execute('''CREATE TABLE hello(id INTEGER PRIMARY KEY,
Message_type, time_sent, time_received, response)''')
</code></pre>
<p>The above program, when executed from the python shell, will execute and create a database by the said name, but when I run the same program from a .py file, it won't create any table or database.</p>
| 0 | 2016-07-21T14:25:12Z | 38,507,004 | <p>I suppose that the .py file did create the database file, but you aren't looking for the database file at the correct location. Add this to the end of your program:</p>
<pre><code>import os
print(os.path.join(os.getcwd(), 'new.db'))
</code></pre>
<p>Whatever it prints, look there for your database file.</p>
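<p>A self-contained sketch of the same idea, connecting with an explicit absolute path so there is no ambiguity about where the file ends up (the system temp directory is used here purely for illustration):</p>

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.gettempdir(), "new.db")
db = sqlite3.connect(db_path)  # creates the file at db_path if missing
db.execute("CREATE TABLE IF NOT EXISTS hello(id INTEGER PRIMARY KEY)")
db.commit()
db.close()

assert os.path.exists(db_path)
```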
| 0 | 2016-07-21T14:29:21Z | [
"python",
"sqlite"
] |
Python Sqlite3- Python shell can create and edit databases but why can't a .py file create or access any database | 38,506,909 | <pre><code>import sqlite3
db=sqlite3.connect('new.db')
cursor=db.cursor()
cursor.execute('''CREATE TABLE hello(id INTEGER PRIMARY KEY,
Message_type, time_sent, time_received, response)''')
</code></pre>
<p>The above program, when executed from the python shell, will execute and create a database by the said name, but when I run the same program from a .py file, it won't create any table or database.</p>
| 0 | 2016-07-21T14:25:12Z | 38,507,282 | <pre><code>import sqlite3
from os.path import expanduser
db_dir = expanduser("~")
db=sqlite3.connect(db_dir+'/new.db')
cursor=db.cursor()
cursor.execute('''CREATE TABLE hello(id INTEGER PRIMARY KEY,
Message_type, time_sent, time_received, response)''')
</code></pre>
<p>Now look in your home directory for <code>new.db</code></p>
| 0 | 2016-07-21T14:41:01Z | [
"python",
"sqlite"
] |
Copy Image to Local Folder Using Python Selenium | 38,506,978 | <p>How can we copy an image from a web page to a local folder using Python and Selenium?</p>
<p>Kindly help me.</p>
| 0 | 2016-07-21T14:28:25Z | 38,813,784 | <p>Here you go:</p>
<pre><code>import urllib
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://www.python.org/')
# get the image source
img = driver.find_element_by_xpath('//img[@class="python-logo"]')
src = img.get_attribute('src')
# download the image
urllib.urlretrieve(src, "python-logo.png")
driver.close()
</code></pre>
| 0 | 2016-08-07T11:45:15Z | [
"python",
"selenium",
"selenium-webdriver",
"selenium-ide",
"selenium-rc"
] |
Creating an array of pyqtSignal | 38,506,979 | <p>For a QThread I would like to create an array of pyqtSignal</p>
<pre><code>class MyThread(QtCore.QThread):
Trigger = []
for i in range(0,10):
Trigger.append(QtCore.pyqtSignal(int))
def __init__(self, Function):
self.Function = Function
super(MyThread, self).__init__(None)
def run(self):
self.Function()
</code></pre>
<p>The main part of the following code looks like:</p>
<pre><code>class Main(QtWidgets.QMainWindow):
def __init__(self):
self.MyQThread = MyThread(lambda: self.PrintTest(5))
def StartTestThread(self):
self.MyQThread.Trigger[0].connect(self.update_text)
self.MyQThread.start()
def PrintTest(self,InputValue):
for i in range (0,100):
print(InputValue*i)
time.sleep(0.2)
self.MyQThread.Trigger[0].emit(5)
def update_text(self, thread_no):
self.ui.MY_LISTWIDGET.addItem('123')
</code></pre>
<p>executing the StartTestThread leads to the following error</p>
<blockquote>
<p>AttributeError: 'PyQt5.QtCore.pyqtSignal' object has no attribute
'connect'</p>
</blockquote>
<p>If I initialize the pyqtSignal without it being an array, it works.
What am I doing wrong? Thanks for the help in advance! </p>
| 0 | 2016-07-21T14:28:30Z | 38,533,205 | <p>You cannot create a list of pyqtSignal(s).</p>
<p>Unfortunately the way PyQt implements signals uses a bit of Python magic, and the <code>pyqtSignal</code> objects are actually "converted" into <code>pyqtBoundSignal</code> when a <code>QObject</code> subclass (technically, a class that has <code>pyqtWrapperType</code> as its metaclass) is loaded.</p>
<p>You can solve your problem in different ways:</p>
<h2>1) Wrap the signal</h2>
<p>I'm not 100% sure about this, but it's a modified version of your attempt:</p>
<pre><code>class FooWrap(QObject):
Signal = QtCore.pyqtSignal(int)
class MyThread(QtCore.QThread):
Trigger = [FooWrap] * 10
</code></pre>
<h2>2) Don't use a list</h2>
<p>If the number of signals is fixed, just create them directly as <code>signal1</code>, <code>signal2</code>, <code>signalN</code>, then you can call them directly</p>
<p>If you can determine which signal to call only at runtime, you can get the signal you need using the <a href="https://docs.python.org/3/library/functions.html#getattr" rel="nofollow"><code>getattr(object, name)</code></a> function, for example:</p>
<pre><code>getattr(self.MyQThread, 'signal' + str(n)).connect(self.update_text)
</code></pre>
<p>and</p>
<pre><code>getattr(self.MyQThread, 'signal' + str(n)).emit(value)
</code></pre>
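<p>The <code>getattr</code> lookup itself is plain Python and can be illustrated without Qt (the <code>Worker</code> class below is hypothetical):</p>

```python
class Worker(object):
    # stand-ins for signal0..signalN attributes on the thread object
    def signal0(self, value):
        return "s0:%s" % value
    def signal3(self, value):
        return "s3:%s" % value

w = Worker()
n = 3
# build the attribute name at runtime, exactly like the Qt example above
assert getattr(w, "signal" + str(n))(42) == "s3:42"
```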
<h2>3) Use only one signal</h2>
<p>Use only one signal that emits two values: one identifying the "step" in which the signal is emitted, and the second carrying your value. The connected functions can then decide what to do based on the "step" value.</p>
| 0 | 2016-07-22T18:33:13Z | [
"python",
"arrays",
"qt5",
"pyqt5"
] |
Implementing keyPressEvent in QWidget | 38,507,011 | <p>I have a QDialog window that has a continue button. The continue button is the default button because whenever I press the enter key, the continue button is pressed. I discovered something strange: when I press the enter key three times, the continue button presses three times. However, when I press it a fourth time, the whole window closes. I have a cancel button right below the continue button that closes the window, but I don't make the cancel button the default button or anything. </p>
<p>I wanted to override the <code>keyPressEvent</code> so that whenever I'm in the window, the enter button will always be connected to the continue button. </p>
<p>This is what I have right now:</p>
<pre><code>class ManualBalanceUI(QtGui.QWidget):
keyPressed = QtCore.pyqtSignal()
def __init__(self, cls):
super(QtGui.QWidget, self).__init__()
self.window = QtGui.QDialog(None, QtCore.Qt.WindowSystemMenuHint)
self.ui = uic.loadUi('ManualBalanceUI.ui', self.window)
self.keyPressed.connect(self.on_key)
def keyPressEvent(self, event):
super(ManualBalanceUI, self).keyPressEvent(event)
self.keyPressed.emit(event)
def on_key(self, event):
if event.key() == QtCore.Qt.Key_Enter and self.ui.continueButton.isEnabled():
self.proceed() # this is called whenever the continue button is pressed
elif event.key() == QtCore.Qt.Key_Q:
self.window.close() # a test I implemented to see if pressing 'Q' would close the window
def proceed(self):
...
...
</code></pre>
<p>However, this doesn't seem to be doing anything right now. Pressing 'Q' doesn't close the window, and I can't really tell if the 'enter' key is working or not. </p>
<p>I looked at this question beforehand: <a href="http://stackoverflow.com/questions/27475940/pyqt-connect-to-keypressevent">PyQt Connect to KeyPressEvent</a></p>
<p>I also reviewed all the documentation on SourceForge. Any help would be greatly appreciated!</p>
 | 0 | 2016-07-21T14:29:43Z | 38,514,893 | <p>You can do it two ways, and one is to simply reimplement <code>keyPressEvent</code> without any fancy work, like this:</p>
<pre><code>from PyQt4 import QtCore, QtGui
import sys
class Example(QtGui.QWidget):
def __init__(self):
super(Example, self).__init__()
self.setGeometry(300, 300, 250, 150)
self.show()
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Q:
print "Killing"
self.deleteLater()
elif event.key() == QtCore.Qt.Key_Enter:
self.proceed()
event.accept()
def proceed(self):
print "Call Enter Key"
def main():
app = QtGui.QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
<p>Or, as you tried, with signals: in your case you were missing the proper signal implementation; here is an updated version.</p>
<pre><code>class Example(QtGui.QWidget):
keyPressed = QtCore.pyqtSignal(QtCore.QEvent)
def __init__(self):
super(Example, self).__init__()
self.setGeometry(300, 300, 250, 150)
self.show()
self.keyPressed.connect(self.on_key)
def keyPressEvent(self, event):
super(Example, self).keyPressEvent(event)
self.keyPressed.emit(event)
def on_key(self, event):
        if event.key() == QtCore.Qt.Key_Enter:  # Example has no self.ui, so the isEnabled() guard from the question is dropped here
self.proceed() # this is called whenever the continue button is pressed
elif event.key() == QtCore.Qt.Key_Q:
print "Killing"
self.deleteLater() # a test I implemented to see if pressing 'Q' would close the window
def proceed(self):
print "Call Enter Key"
def main():
app = QtGui.QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-07-21T21:46:23Z | [
"python",
"event-handling",
"pyqt",
"keypress"
] |
How can I choose unique users in mongodb | 38,507,020 | <p>I have a database with events that have happened in a game. I am trying to retrieve information about how many unique players have started a level. I use this piece of code for that: </p>
<pre><code>heh = list(db.events.aggregate(
[
{ "$match": {"status": 'start'}},
{"$group": {"_id": "$eventName", "players": {"$sum": 1}}},
]))
print(heh)
</code></pre>
<p>But I am getting information about how many times the level was started. How can I change my code to get the right info? Unique users have unique "uid".</p>
 | 0 | 2016-07-21T14:30:04Z | 38,508,339 | <p>Try this: <code>$addToSet</code> collects the unique <code>uid</code> values, and <code>$size</code> then counts the records in the array.</p>
<pre><code>db.events.aggregate(
[
{ "$match": {"status": 'start'}},
{"$group": {"_id": "$eventName", "players": {"$addToSet": "$uid"}}},
{"$project": {"_id": 1, "Count": {"$size": "$players"}}}
])
</code></pre>
<p><a href="http://i.stack.imgur.com/YMiuB.png" rel="nofollow"><img src="http://i.stack.imgur.com/YMiuB.png" alt="Screen"></a></p>
<pre><code>db.test1.aggregate(
[
{ "$match": {"status": 'start'}},
{"$group": {"_id": "$eventName", "players": {"$addToSet": "$uid"}}},
{$unwind:"$players"},
{$group:{_id:"$_id",count:{$sum:1}}}
]
)
</code></pre>
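<p>For checking expectations against the pipeline, the same per-event unique-player count can be reproduced in plain Python (the sample documents below are made up for illustration):</p>

```python
from collections import defaultdict

events = [
    {"status": "start", "eventName": "level1", "uid": "a"},
    {"status": "start", "eventName": "level1", "uid": "a"},  # duplicate uid
    {"status": "start", "eventName": "level1", "uid": "b"},
    {"status": "end",   "eventName": "level1", "uid": "c"},  # filtered by $match
]

players = defaultdict(set)            # mirrors $addToSet
for e in events:
    if e["status"] == "start":        # mirrors $match
        players[e["eventName"]].add(e["uid"])

counts = {k: len(v) for k, v in players.items()}  # mirrors $size
assert counts == {"level1": 2}
```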
| 0 | 2016-07-21T15:28:12Z | [
"python",
"mongodb",
"python-2.7",
"pymongo"
] |
How to make a flight path projection if possible in Python? | 38,507,069 | <p>I have latitude, longitude and altitude data and want to make a plot such as in the image below using Python. The map can be left out, it is unnecessary.
<a href="http://i.stack.imgur.com/PkV0h.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/PkV0h.jpg" alt="enter image description here"></a></p>
<p>I have tried using the <a href="http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#polygon-plots" rel="nofollow">mplot3d polygon plot tutorial</a> but can't figure out how to have varying x and y values such that it is not a straight line. Any ideas?</p>
| 1 | 2016-07-21T14:31:51Z | 38,636,531 | <p>It seems that the easiest way to do this is to use Google Earth. </p>
<p>Now there are two ways to tackle this. The first method is simple if you want a static picture. The second method allows animation of the flight path. Both are explained in this answer.</p>
<h3>Method 1</h3>
<p>Using the following Python code I have made a <code>KML</code> file which allows me to create a static visualization. </p>
<pre><code>f = open('flight.kml', 'w')
#Writing the kml file.
f.write("<?xml version='1.0' encoding='UTF-8'?>\n")
f.write("<kml xmlns='http://earth.google.com/kml/2.2'>\n")
f.write("<Document>\n")
f.write("<Placemark>\n")
f.write(" <name>flight</name>\n")
f.write(" <LineString>\n")
f.write(" <extrude>1</extrude>\n")
f.write(" <altitudeMode>absolute</altitudeMode>\n")
f.write(" <coordinates>\n")
for i in range(0,len(data['altitude']),10): #Here I skip some data
f.write(" "+str(data['LON_GPS'][i]) + ","+ str(data['LAT_GPS'][i]) + "," + str(data['altitude'][i]) +"\n")
f.write(" </coordinates>\n")
f.write(" </LineString>\n")
f.write("</Placemark>\n")
f.write("</Document>")
f.write("</kml>\n")
f.close()
</code></pre>
<p>The code results in a <code>KML</code> file which in general looks like this:</p>
<pre><code><?xml version='1.0' encoding='UTF-8'?>
<kml xmlns='http://earth.google.com/kml/2.2'>
<Document>
<Placemark>
<name>flight</name>
<LineString>
<extrude>1</extrude>
<altitudeMode>absolute</altitudeMode>
<coordinates>
54.321976,-4.90948,39232.0
54.320946,-4.90621,39232.0
...
...
52.329865,4.71601,0
52.329693,4.71619,0
</coordinates>
</LineString>
</Placemark>
</Document></kml>
</code></pre>
<p>Once the <code>*.kml</code> file is made using the code above, one can simply import it in Google Earth using <em>File, Import...</em>; Google Earth then automatically displays the image you see below.</p>
<p><a href="http://i.stack.imgur.com/Z4GKc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Z4GKc.jpg" alt="Result"></a></p>
<h3>Method 2</h3>
<p>I also finally figured out how to animate the flight. The solution I found is to split the single <code><Placemark></code> from the static answer into several Placemarks. Adding <code><TimeSpan></code> info to each placemark then allows an animation to take place. To get the same visual I had to give each Placemark both beginning and end coordinates so that it forms a proper <code><LineString></code>. The result can be found <a href="https://youtu.be/MryXTZLvitA" rel="nofollow">in this video</a>, which was recorded with the <code>Record a Tour</code> button in Google Earth.</p>
<pre><code>f = open(fname+'.kml', 'w')
#Writing the kml file.
f.write("<?xml version='1.0' encoding='UTF-8'?>\n")
f.write("<kml xmlns='http://earth.google.com/kml/2.2'>\n")
f.write("<Document>\n")
f.write(" <name>flight</name>\n")
for i in range(1,len(data['altitude'])):
f.write("<Placemark>\n")
f.write(" <TimeSpan>\n <begin>" + '2015-12-02T%02i:%02i:%02iZ' % (data['UTC_HOUR'][i], data['UTC_MIN'][i], data['UTC_SEC'][i]) + "</begin>\n </TimeSpan>\n")
f.write(" <LineString>\n")
f.write(" <extrude>1</extrude>\n")
f.write(" <altitudeMode>absolute</altitudeMode>\n")
f.write(" <coordinates>" +str(data['LON_GPS'][i-1]) + ","+ str(data['LAT_GPS'][i-1]) + "," + str(data['altitude'][i-1]) + " " +str(data['LON_GPS'][i]) + ","+ str(data['LAT_GPS'][i]) + "," + str(data['altitude'][i]) +"</coordinates>\n")
f.write(" </LineString>\n")
f.write("</Placemark>\n")
f.write("</Document>")
f.write("</kml>\n")
f.close()
</code></pre>
<p>The resulting <code>KML</code> file looks like this:</p>
<pre><code><?xml version='1.0' encoding='UTF-8'?>
<kml xmlns='http://earth.google.com/kml/2.2'>
<Document>
<name>A332_Conventional</name>
<Placemark>
<TimeSpan>
<begin>2015-12-02T08:45:13Z</begin>
</TimeSpan>
<LineString>
<extrude>1</extrude>
<altitudeMode>absolute</altitudeMode>
<coordinates>-0.85058,53.338535,39200 -0.81538,53.332012,39200</coordinates>
</LineString>
</Placemark>
...
...
<Placemark>
<TimeSpan>
<begin>2015-12-02T09:27:03Z</begin>
</TimeSpan>
<LineString>
<extrude>1</extrude>
<altitudeMode>absolute</altitudeMode>
<coordinates>4.71361,52.331066,0 4.71498,52.330379,0</coordinates>
</LineString>
</Placemark>
</Document></kml>
</code></pre>
| 1 | 2016-07-28T12:34:35Z | [
"python",
"animation",
"3d",
"kml",
"altitude"
] |
Tkinter file dialog combining save and load dialogs | 38,507,251 | <p>I have an entry widget where the user can type in a file location, and underneath that a "save" button and a "load" button. Depending on which button is clicked, the file specified in the entry widget is either opened for writing, or for reading.</p>
<p>This all works fine and dandy.</p>
<p>Now I want to add a "browse" button, which the user can click to open a file dialog to select a file. When a file is selected, the filename is copied into the entry. From there on, the save and load buttons should work fine.</p>
<p>However, I can't figure out how to get the file dialog to work for both reading a file and writing. I can't use <code>tkFileDialog.asksaveasfilename</code> because that's going to complain to the user if a file already exists (which, if the user intends to "load", it should) and the <code>tkFileDialog.askopenfilename</code> function doesn't let the user select a file which doesn't exist yet (which, if the user intends to "save", should be fine as well).</p>
<p>Is it possible to create a dialog which displays neither of these functionalities?</p>
| 0 | 2016-07-21T14:39:25Z | 38,525,367 | <p>Is this what you're looking for:</p>
<pre><code>from tkinter import *
from tkinter.filedialog import *
root = Tk()
root.title("Save and Load")
root.geometry("600x500-400+50")
def importFiles():
try:
filenames = askopenfilenames()
global file
for file in filenames:
fileList.insert(END, file)
except:
pass
def removeFiles():
try:
fileList.delete(fileList.curselection())
except:
pass
def openFile():
try:
        text.delete(0.0, END)
fob = open(file, 'r')
text.insert(0.0, fob.read())
except:
pass
def saveFile():
try:
fob = open(file, 'w')
fob.write(text.get(0.0, 'end-1c'))
fob.close()
except:
pass
listFrame = Frame(root)
listFrame.pack()
sby = Scrollbar(listFrame, orient='vertical')
sby.pack(side=RIGHT, fill=Y)
fileList = Listbox(listFrame, width=100, height=5, yscrollcommand=sby.set)
fileList.pack()
sby.config(command=fileList.yview)
buttonFrame = Frame(root)
buttonFrame.pack()
importButton = Button(buttonFrame, text="Import", command=importFiles)
importButton.pack(side=LEFT)
removeButton = Button(buttonFrame, text="Remove", command=removeFiles)
removeButton.pack(side=LEFT)
openButton = Button(buttonFrame, text="Open", command=openFile)
openButton.pack(side=LEFT)
saveButton = Button(buttonFrame, text="Save", command=saveFile)
saveButton.pack(side=LEFT)
text = Text(root)
text.pack()
root.mainloop()
</code></pre>
<p>"I want one dialog which returns a filename that can be used for both saving and loading."<br>
You can import file names through one dialog window, remove a selected file name from the list (an extra convenience), open the file you selected, and finally edit and save it.<br>
P.S.: There may be some bugs in my code, but I think the <i>algorithm</i> does what the question asks.</p>
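<p>For the literal question (one dialog that works for both saving and loading) there is also a shortcut worth knowing. The sketch below relies on the <code>confirmoverwrite</code> option of <code>asksaveasfilename</code>; treat its availability as an assumption to verify, since it needs a reasonably recent Tk build. With it, the dialog neither warns about existing files nor rejects names that do not exist yet:</p>

```python
import tkinter.filedialog as fd

def browse(entry):
    # Assumption: confirmoverwrite is supported by the installed Tk.
    # It suppresses the "file exists" prompt, while asksaveasfilename
    # already allows picking names that do not exist yet.
    filename = fd.asksaveasfilename(confirmoverwrite=False)
    if filename:
        entry.delete(0, 'end')
        entry.insert(0, filename)
```

<p>Wire it to the browse button with <code>command=lambda: browse(my_entry)</code>; the existing save and load buttons then keep working unchanged.</p>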
| 0 | 2016-07-22T11:30:51Z | [
"python",
"tkinter"
] |
Trouble creating pandas pivot table with preexisting Excel file | 38,507,320 | <p>I am very new to the Pandas module, and am trying to create a pivot table from my Excel file. </p>
<p>Here's my code:</p>
<pre><code>excel = pd.ExcelFile(filename)
df = excel.parse
df1 = df[['Product Description', 'Supervisor']]
table1 = pd.pivot_table(df1, index = ['Supervisor'],
columns = ['Product Description'],
values = ['Product Description'],
aggfunc = [lambda x: len(x)], fill_value = 0)
writer = pd.ExcelWriter(filename)
table1.to_excel(writer, 'Pivot Table')
writer.save()
workbook.save(filename)
</code></pre>
<p>It's giving me this error: <code>TypeError: 'instancemethod' object has no attribute '__getitem__'</code></p>
<p>Supervisor and Product Description are the two columns that i'm using to create the pivot table. Is this error happening because I can't reference the columns like that? Supervisor and Product description are the values in the first cell of each column. Do I have to reference the columns in some other way?</p>
| 0 | 2016-07-21T14:42:42Z | 38,508,393 | <p><code>parse</code> is a method (a function attached to an object), so you need parentheses after the method name, i.e. <code>df = excel.parse()</code>.</p>
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.ExcelFile.parse.html" rel="nofollow">the docs</a> for details on how to supply arguments to the function call.</p>
| 0 | 2016-07-21T15:31:16Z | [
"python",
"pandas"
] |
Image to text python | 38,507,426 | <p>I am using python 3.x and using the following code to convert image into text:</p>
<pre><code>from PIL import Image
from pytesseract import image_to_string
image = Image.open('image.png', mode='r')
print(image_to_string(image))
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/hp/Desktop/GII/Image_to_text.py", line 12, in <module>
print(image_to_string(image))
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\site-packages\pytesseract\pytesseract.py", line 161, in image_to_string
config=config)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\site-packages\pytesseract\pytesseract.py", line 94, in run_tesseract
stderr=subprocess.PIPE)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\subprocess.py", line 950, in __init__
restore_signals, start_new_session)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\subprocess.py", line 1220, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>Please note that I have put the image in the same directory where my python is present. Also It does not raise error on <code>image = Image.open('image.png', mode='r')</code> but it raises on the line <code>print(image_to_string(image))</code>. </p>
<p>Any idea what might be wrong here? Thanks</p>
| 1 | 2016-07-21T14:47:13Z | 38,509,098 | <p>Your "current" directory is not where you think.</p>
<p>==> You may specify the full path to the image, for example:</p>
<pre><code>image = Image.open(r'C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\image.png', mode='r')
</code></pre>
| -1 | 2016-07-21T16:04:09Z | [
"python",
"python-3.x",
"pytesser"
] |
Image to text python | 38,507,426 | <p>I am using python 3.x and using the following code to convert image into text:</p>
<pre><code>from PIL import Image
from pytesseract import image_to_string
image = Image.open('image.png', mode='r')
print(image_to_string(image))
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/hp/Desktop/GII/Image_to_text.py", line 12, in <module>
print(image_to_string(image))
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\site-packages\pytesseract\pytesseract.py", line 161, in image_to_string
config=config)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\site-packages\pytesseract\pytesseract.py", line 94, in run_tesseract
stderr=subprocess.PIPE)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\subprocess.py", line 950, in __init__
restore_signals, start_new_session)
File "C:\Users\hp\Downloads\WinPython-64bit-3.5.1.2\python-3.5.1.amd64\lib\subprocess.py", line 1220, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>Please note that I have put the image in the same directory where my python is present. Also It does not raise error on <code>image = Image.open('image.png', mode='r')</code> but it raises on the line <code>print(image_to_string(image))</code>. </p>
<p>Any idea what might be wrong here? Thanks</p>
| 1 | 2016-07-21T14:47:13Z | 38,509,168 | <p>You have to have <code>tesseract</code> installed and accessible on your <code>PATH</code>.</p>
<p><a href="https://github.com/madmaze/pytesseract/blob/master/src/pytesseract.py#L74" rel="nofollow">According to the source</a>, <code>pytesseract</code> is merely a wrapper around <code>subprocess.Popen</code>, with the tesseract binary as the command to run. It does not perform any OCR itself.</p>
<p>Relevant part of sources:</p>
<pre><code>def run_tesseract(input_filename, output_filename_base, lang=None, boxes=False, config=None):
'''
runs the command:
`tesseract_cmd` `input_filename` `output_filename_base`
returns the exit status of tesseract, as well as tesseract's stderr output
'''
command = [tesseract_cmd, input_filename, output_filename_base]
if lang is not None:
command += ['-l', lang]
if boxes:
command += ['batch.nochop', 'makebox']
if config:
command += shlex.split(config)
proc = subprocess.Popen(command,
stderr=subprocess.PIPE)
return (proc.wait(), proc.stderr.read())
</code></pre>
<p>Quoting another part of source:</p>
<pre><code># CHANGE THIS IF TESSERACT IS NOT IN YOUR PATH, OR IS NAMED DIFFERENTLY
tesseract_cmd = 'tesseract'
</code></pre>
<p>So a quick way of changing the tesseract path is:</p>
<pre><code>import pytesseract
pytesseract.pytesseract.tesseract_cmd = "/absolute/path/to/tesseract" # this should be done only once
pytesseract.image_to_string(img)
</code></pre>
| 0 | 2016-07-21T16:07:22Z | [
"python",
"python-3.x",
"pytesser"
] |
Check if list A contains a prefix of an item in list B | 38,507,451 | <p>I have two list, which we can call <code>A</code> and <code>B</code>. I need to check the items in list <code>A</code> and see if items in <code>B</code> starts with an item from <code>A</code> and then stop the check.</p>
<p>Example of content in A:</p>
<pre><code>https://some/path
http://another/path
http://another.some/path
</code></pre>
<p>Example of content in B:</p>
<pre><code>http://another/path
http://this/wont/match/anything
</code></pre>
<p>Currently I'm doing this:</p>
<pre><code>def check_comps(self, comps):
for a in self.A:
for b in comps:
if b.startswith(a):
return a
</code></pre>
<p>Is there a better way to do this?</p>
| 1 | 2016-07-21T14:48:20Z | 38,556,585 | <p>Your solution has the worst-case O(nm) time complexity, that is O(n^2) if n ~ m. You can easily reduce it to O(n log(n)) and even O(log(n)). Here is how. </p>
<p>Consider a list of prefix words (your <code>self.A</code>) and a target (one element of your <code>comps</code>):</p>
<pre><code>words = ['abdc', 'abd', 'acb', 'abcabc', 'abc']
target = "abcd"
</code></pre>
<p>Observe, that by sorting the list of words in lexicographical order, you get a list of prefixes</p>
<pre><code>prefixes = ['abc', 'abcabc', 'abd', 'abdc', 'acb']
</code></pre>
<p>It is degenerate, because <code>prefixes[0]</code> is a prefix of <code>prefixes[1]</code>, hence everything that starts with <code>prefixes[1]</code> starts with <code>prefixes[0]</code> just as well. This is a bit problematic. Let's see why. Let's use the fast (binary) search to find the proper place of the target in the <code>prefix</code> list.</p>
<pre><code>import bisect
bisect.bisect(prefixes, target) # -> 2
</code></pre>
<p>This is because the <code>target</code> and <code>prefixes[1]</code> share a prefix, but <code>target[3] > prefixes[1][3]</code>, hence lexicographically it should go after. Hence, if there is a prefix of the <code>target</code> in the <code>prefixes</code>, it should be to the left of index <code>2</code>. Obviously, the <code>target</code> doesn't start with <code>prefixes[1]</code> hence in the worst case we would have to search all the way to the left to find whether there is a prefix. Now observe, that if we transform these <code>prefixes</code> into a nondegenerate list, the only possible prefix of a target will always be to the left of the position returned by <code>bisect.bisect</code>. Let's reduce the list of prefixes and write a helper function that will check whether there is a prefix of a target. </p>
<pre><code>from functools import reduce
def minimize_prefixes(prefixes):
"""
Note! `prefixes` must be sorted lexicographically !
"""
def accum_prefs(prefixes, prefix):
if not prefix.startswith(prefixes[-1]):
return prefixes.append(prefix) or prefixes
return prefixes
prefs_iter = iter(prefixes)
return reduce(accum_prefs, prefs_iter, [next(prefs_iter)]) if prefixes else []
def hasprefix(minimized_prefixes, target):
position = bisect.bisect(minimized_prefixes, target)
return target.startswith(minimized_prefixes[position-1]) if position else False
</code></pre>
<p>Now let's see</p>
<pre><code>min_prefixes = minimize_prefixes(prefixes)
print(min_prefixes) # -> ['abc', 'abd', 'acb']
hasprefix(min_prefixes, target) # -> True
</code></pre>
<p>Let's make a test that must fail:</p>
<pre><code>min_prefs_fail = ["abcde"]
hasprefix(min_prefs_fail, target) # -> False
</code></pre>
<p>This way you get an O(n log(n)) search, which is asymptotically faster than your O(n^2) solution. Note! You can (and really should) store the <code>minimize_prefixes(sorted(self.A))</code> prefix set as an attribute on your object, making each later prefix search O(log(n)), which is faster still.</p>
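<p>To tie this back to the question's <code>check_comps</code>, here is a self-contained sketch (a plain loop instead of <code>reduce</code>, with the sample URLs from the question) that returns the matching prefix just like the original method:</p>

```python
import bisect

def minimize_prefixes(sorted_prefixes):
    # Drop every entry that merely extends the previously kept entry.
    minimized = []
    for p in sorted_prefixes:
        if not minimized or not p.startswith(minimized[-1]):
            minimized.append(p)
    return minimized

def find_prefix(min_prefixes, target):
    # Return the prefix of `target` from the minimized list, or None.
    pos = bisect.bisect(min_prefixes, target)
    if pos and target.startswith(min_prefixes[pos - 1]):
        return min_prefixes[pos - 1]
    return None

A = ['https://some/path', 'http://another/path', 'http://another.some/path']
comps = ['http://another/path', 'http://this/wont/match/anything']

min_prefixes = minimize_prefixes(sorted(A))  # do this once, not per query
matches = [find_prefix(min_prefixes, b) for b in comps]
# matches -> ['http://another/path', None]
```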
| 1 | 2016-07-24T20:42:23Z | [
"python",
"python-3.x",
"for-loop"
] |
With Beautifulsoup, Extract Tags of Element Except Those specified | 38,507,514 | <p>I'm using BeautifulSoup 4 and Python 3.5+ to extract webdata. I have the following html, from which I am extracting:</p>
<pre><code><div class="the-one-i-want">
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<ol>
<li>
list item
</li>
<li>
list item
</li>
</ol>
<div class='something-i-don't-want>
content
</div>
<script class="something-else-i-dont-want'>
script
</script>
<p>
content
</p>
</div>
</code></pre>
<p>All of the content that I want to extract is found within the <code><div class="the-one-i-want"></code> element. Right now, I'm using the following methods, which work most of the time:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll('p')
</code></pre>
<p>This excludes scripts, weird insert <code>div</code>'s and otherwise un-predictable content such as ads or 'recommended content' type stuff.</p>
<p>Now, there are some instances in which there are elements other than just the <code><p></code> tags, which has content that is contextually important to the main content, such as lists.</p>
<p>Is there a way to get the content from the <code><div class="the-one-i-want"></code> in a manner as such:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll(desired-content-elements)
</code></pre>
<p>Where <code>desired-content-elements</code>would be inclusive of every element that I deemed fit for that particular content? Such as, all <code><p></code> tags, all <code><ol></code> and <code><li></code> tags, but no <code><div></code> or <code><script></code> tags.</p>
<p>Perhaps noteworthy, is my method of saving the content:</p>
<pre><code>content_string = ''
for p in content:
content_string += str(p)
</code></pre>
<p>This approach collects the data, in order of occurrence, which would prove difficult to manage if I simply found different element types through different iteration processes. I'm looking to NOT have to manage re-construction of split lists to re-assemble the order in which each element originally occurred in the content, if possible.</p>
| 2 | 2016-07-21T14:51:35Z | 38,510,730 | <p>Does this work for you? It should loop through the content adding the text you want while ignoring the div and script tags.</p>
<pre><code>for p in content:
if p.find('div') or p.find('script'):
continue
content_string += str(p)
</code></pre>
| -1 | 2016-07-21T17:30:19Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
With Beautifulsoup, Extract Tags of Element Except Those specified | 38,507,514 | <p>I'm using BeautifulSoup 4 and Python 3.5+ to extract webdata. I have the following html, from which I am extracting:</p>
<pre><code><div class="the-one-i-want">
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<ol>
<li>
list item
</li>
<li>
list item
</li>
</ol>
<div class='something-i-don't-want>
content
</div>
<script class="something-else-i-dont-want'>
script
</script>
<p>
content
</p>
</div>
</code></pre>
<p>All of the content that I want to extract is found within the <code><div class="the-one-i-want"></code> element. Right now, I'm using the following methods, which work most of the time:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll('p')
</code></pre>
<p>This excludes scripts, weird insert <code>div</code>'s and otherwise un-predictable content such as ads or 'recommended content' type stuff.</p>
<p>Now, there are some instances in which there are elements other than just the <code><p></code> tags, which has content that is contextually important to the main content, such as lists.</p>
<p>Is there a way to get the content from the <code><div class="the-one-i-want"></code> in a manner as such:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll(desired-content-elements)
</code></pre>
<p>Where <code>desired-content-elements</code>would be inclusive of every element that I deemed fit for that particular content? Such as, all <code><p></code> tags, all <code><ol></code> and <code><li></code> tags, but no <code><div></code> or <code><script></code> tags.</p>
<p>Perhaps noteworthy, is my method of saving the content:</p>
<pre><code>content_string = ''
for p in content:
content_string += str(p)
</code></pre>
<p>This approach collects the data, in order of occurrence, which would prove difficult to manage if I simply found different element types through different iteration processes. I'm looking to NOT have to manage re-construction of split lists to re-assemble the order in which each element originally occurred in the content, if possible.</p>
| 2 | 2016-07-21T14:51:35Z | 38,520,734 | <p>You can pass a list of tags that you want:</p>
<pre><code> content = soup.find('div', class_='the-one-i-want').find_all(["p", "ol", "whatever"])
</code></pre>
<p>If we run something similar on your question url looking for p and pre tags, you can see we get both:</p>
<pre><code> ...: for ele in soup.select_one("td.postcell").find_all(["pre","p"]):
...: print(ele)
...:
<p>I'm using Beutifulsoup 4 and Python 3.5+ to extract webdata. I have the following html, from which I am extracting:</p>
<pre><code>&lt;div class="the-one-i-want"&gt;
&lt;p&gt;
content
&lt;/p&gt;
&lt;p&gt;
content
&lt;/p&gt;
&lt;p&gt;
content
&lt;/p&gt;
&lt;p&gt;
content
&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
list item
&lt;/li&gt;
&lt;li&gt;
list item
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class='something-i-don't-want&gt;
content
&lt;/div&gt;
&lt;script class="something-else-i-dont-want'&gt;
script
&lt;/script&gt;
&lt;p&gt;
content
&lt;/p&gt;
&lt;/div&gt;
</code></pre>
<p>All of the content that I want to extract is found within the <code>&lt;div class="the-one-i-want"&gt;</code> element. Right now, I'm using the following methods, which work most of the time:</p>
<pre><code>soup = Beautifulsoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll('p')
</code></pre>
<p>This excludes scripts, weird insert <code>div</code>'s and otherwise un-predictable content such as ads or 'recommended content' type stuff.</p>
<p>Now, there are some instances in which there are elements other than just the <code>&lt;p&gt;</code> tags, which has content that is contextually important to the main content, such as lists.</p>
<p>Is there a way to get the content from the <code>&lt;div class="the-one-i-want"&gt;</code> in a manner as such:</p>
<pre><code>soup = Beautifulsoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll(desired-content-elements)
</code></pre>
<p>Where <code>desired-content-elements</code>would be inclusive of every element that I deemed fit for that particular content? Such as, all <code>&lt;p&gt;</code> tags, all <code>&lt;ol&gt;</code> and <code>&lt;li&gt;</code> tags, but no <code>&lt;div&gt;</code> or <code>&lt;script&gt;</code> tags.</p>
<p>Perhaps noteworthy, is my method of saving the content:</p>
<pre><code>content_string = ''
for p in content:
content_string += str(p)
</code></pre>
<p>This approach collects the data, in order of occurrence, which would prove difficult to manage if I simply found different element types through different iteration processes. I'm looking to NOT have to manage re-construction of split lists to re-assemble the order in which each element originally occurred in the content, if possible.</p>
</code></pre>
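<p>For the question's <code>content_string</code> loop this matters because the list-style <code>find_all</code> returns its matches in document order, so nothing has to be re-assembled afterwards. A runnable sketch on a cut-down version of the sample markup (using the built-in <code>html.parser</code> so that <code>lxml</code> is not required):</p>

```python
from bs4 import BeautifulSoup

html = """<div class="the-one-i-want">
  <p>one</p>
  <ol><li>list item</li></ol>
  <div class="ad">unwanted</div>
  <script>unwanted</script>
  <p>two</p>
</div>"""

soup = BeautifulSoup(html, "html.parser")
wanted = soup.find("div", class_="the-one-i-want").find_all(["p", "ol"])
content_string = "".join(str(tag) for tag in wanted)  # document order
```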
| 1 | 2016-07-22T07:35:01Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
With Beautifulsoup, Extract Tags of Element Except Those specified | 38,507,514 | <p>I'm using BeautifulSoup 4 and Python 3.5+ to extract webdata. I have the following html, from which I am extracting:</p>
<pre><code><div class="the-one-i-want">
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<p>
content
</p>
<ol>
<li>
list item
</li>
<li>
list item
</li>
</ol>
<div class='something-i-don't-want>
content
</div>
<script class="something-else-i-dont-want'>
script
</script>
<p>
content
</p>
</div>
</code></pre>
<p>All of the content that I want to extract is found within the <code><div class="the-one-i-want"></code> element. Right now, I'm using the following methods, which work most of the time:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll('p')
</code></pre>
<p>This excludes scripts, weird insert <code>div</code>'s and otherwise un-predictable content such as ads or 'recommended content' type stuff.</p>
<p>Now, there are some instances in which there are elements other than just the <code><p></code> tags, which has content that is contextually important to the main content, such as lists.</p>
<p>Is there a way to get the content from the <code><div class="the-one-i-want"></code> in a manner as such:</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='the-one-i-want').findAll(desired-content-elements)
</code></pre>
<p>Where <code>desired-content-elements</code>would be inclusive of every element that I deemed fit for that particular content? Such as, all <code><p></code> tags, all <code><ol></code> and <code><li></code> tags, but no <code><div></code> or <code><script></code> tags.</p>
<p>Perhaps noteworthy, is my method of saving the content:</p>
<pre><code>content_string = ''
for p in content:
content_string += str(p)
</code></pre>
<p>This approach collects the data, in order of occurrence, which would prove difficult to manage if I simply found different element types through different iteration processes. I'm looking to NOT have to manage re-construction of split lists to re-assemble the order in which each element originally occurred in the content, if possible.</p>
| 2 | 2016-07-21T14:51:35Z | 38,521,123 | <p>You can do that quite easily using</p>
<pre><code>soup = BeautifulSoup(html.text, 'lxml')
desired_tags = {'p', 'ol'}  # add what you need
content = filter(lambda x: x.name in desired_tags,
                 soup.find('div', class_='the-one-i-want').children)
</code></pre>
<p>This will go through all <em>direct</em> children of the <code>div</code> tag. If you want this to happen recursively (you said something about adding <code>li</code> tags), you should use <code>.descendants</code> instead of <code>.children</code>. Happy crawling!</p>
| 0 | 2016-07-22T07:55:31Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
Split Python list into custom chunk size based on second list | 38,507,555 | <p>I am looking for a way to split a list into predefined slices: </p>
<pre><code>a = list(range(1, 1001))
b = [200, 500, 300]
</code></pre>
<p>List <code>a</code> should be sliced into <code>len(b)</code> sublists containing the first 200 elements of a, the following 500, and the last 300. It is safe to assume that <code>sum(b) == len(a)</code>. </p>
<p>Is there a common function for this? </p>
| 4 | 2016-07-21T14:53:05Z | 38,507,713 | <p>Make an iterator from the list (if it isn't one already) and get <code>n</code> times the <code>next</code> element from the iterator for each <code>n</code> in <code>b</code>.</p>
<pre><code>>>> a = range(1, 1001)
>>> b = [200, 500, 300]
>>> a_iter = iter(a)
>>> [[next(a_iter) for _ in range(n)] for n in b]
[[1,
2,
...
199,
200],
[201,
...
700],
[701,
702,
...
999,
1000]]
</code></pre>
| 5 | 2016-07-21T14:59:58Z | [
"python",
"list",
"slice"
] |
Split Python list into custom chunk size based on second list | 38,507,555 | <p>I am looking for a way to split a list into predefined slices: </p>
<pre><code>a = list(range(1, 1001))
b = [200, 500, 300]
</code></pre>
<p>List <code>a</code> should be sliced into <code>len(b)</code> sublists containing the first 200 elements of a, the following 500, and the last 300. It is safe to assume that <code>sum(b) == len(a)</code>. </p>
<p>Is there a common function for this? </p>
| 4 | 2016-07-21T14:53:05Z | 38,507,781 | <p>You could also use <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice</code></a> to consume an iterator of <code>a</code> in the predefined chunks:</p>
<pre><code>>>> from itertools import islice
>>> b = [200, 500, 300]
>>> a = range(1, 1001)
>>> it = iter(a)
>>> [list(islice(it, i)) for i in b]
</code></pre>
| 3 | 2016-07-21T15:02:54Z | [
"python",
"list",
"slice"
] |
Split Python list into custom chunk size based on second list | 38,507,555 | <p>I am looking for a way to split a list into predefined slices: </p>
<pre><code>a = list(range(1, 1001))
b = [200, 500, 300]
</code></pre>
<p>List <code>a</code> should be sliced into <code>len(b)</code> sublists containing the first 200 elements of a, the following 500, and the last 300. It is safe to assume that <code>sum(b) == len(a)</code>. </p>
<p>Is there a common function for this? </p>
| 4 | 2016-07-21T14:53:05Z | 38,507,786 | <p><code>itertools.islice</code>:</p>
<pre><code>from itertools import islice
a = range(1, 1001)
b = [200, 500, 300]
c = iter(a)
results = [list(islice(c, length)) for length in b]
</code></pre>
<p><code>islice</code> behaves like slice, except that slice takes sequence and returns sequence, whereas <code>islice</code> takes iterable and returns iterable.</p>
<p>Iterators are "disposable" -- once you extract element from it, it's no longer there, and next element becomes new first element.</p>
| 3 | 2016-07-21T15:03:15Z | [
"python",
"list",
"slice"
] |
How to download file using python when url doesn't change | 38,507,599 | <p>I want to download a file from a webpage. That webpage only has one .zip file (that's what I want to download), but when I click on the .zip file, it starts download but the url doesn't change (the url still remains of the form <a href="http://ldn2800:8080/id=2800" rel="nofollow">http://ldn2800:8080/id=2800</a>). How can I download this using python, considering that there is no url of the form <code>http://example.com/1.zip</code>?</p>
<p>Also, when I directly go to the page <a href="http://ldn2800:8080/id=2800" rel="nofollow">http://ldn2800:8080/id=2800</a>, it just opens that page with the .zip file but doesn't download it without clicking. How do download it using python?</p>
<p>UPDATE: Right now I'm doing it this way:</p>
<pre><code>if (str(dict.get('id')) == winID):
#or str(dict.get('id')) == linuxID):
#if str(dict.get('number')) == buildNo:
buildTypeId = dict.get('id')
ID = dict.get('id')
downloadURL = "http://example:8080/viewType.html?buildId=26009&tab=artifacts&buildTypeId=" + ID
directory = BindingsDest + "\\" + buildNo
if not os.path.exists(directory):
os.makedirs(directory)
fileName = None
if buildTypeId == linuxID:
fileName = linuxLib + "-" + buildNo + ".zip"
elif buildTypeId == winID:
fileName = winLib + "-" + buildNo + ".zip"
if fileName is not None:
print(dict)
downloadFile(downloadURL, directory, fileName)
def downloadFile(downloadURL, directory, fileName, user=user, password=password):
if user is not None and password is not None:
request = requests.get(downloadURL, stream=True, auth=(user, password))
else:
request = requests.get(downloadURL, stream=True)
with open(directory + "\\" + fileName, 'wb') as handle:
for block in request.iter_content(1024):
if not block:
break
handle.write(block)
</code></pre>
<p>But it just creates a zip in the required location that can't be opened and contains nothing.
Can something like this be done: searching for the filename on the webpage and then downloading whatever matches that pattern?</p>
| 1 | 2016-07-21T14:54:49Z | 38,509,911 | <p>Check the HTTP status code to make sure that no error happened. You may use the builtin method raise_for_status to do so: <a href="https://requests.readthedocs.io/en/master/api/#requests.Response.raise_for_status" rel="nofollow">https://requests.readthedocs.io/en/master/api/#requests.Response.raise_for_status</a></p>
<pre><code>def downloadFile(downloadURL, directory, fileName, user=user, password=password):
if user is not None and password is not None:
request = requests.get(downloadURL, stream=True, auth=(user, password))
else:
request = requests.get(downloadURL, stream=True)
request.raise_for_status()
with open(directory + "\\" + fileName, 'wb') as handle:
for block in request.iter_content(1024):
if not block:
break
handle.write(block)
</code></pre>
<p>Are you sure that there is no networking issue such as a proxy or firewall?</p>
<p>EDIT: according to your above comment, I'm not sure that this answers your actual problem. Revised answer:</p>
<p>You access a web page containing a link to a zip file. This link, you say, is the same as the page itself. But if you click on it in a browser, it downloads the file instead of reaching the HTML page again. That's <strong>strange</strong> but can be explained in various ways. Please copy/paste the whole HTML page code (including the link to the zip file), that will probably help us understanding the issue.</p>
| 1 | 2016-07-21T16:43:18Z | [
"python",
"file",
"url",
"downloadfile"
] |
Summing elements in a sliding window - NumPy | 38,507,672 | <p>Is there a numpy way to sum each three consecutive elements (a sliding window)? For example:</p>
<pre><code>import numpy as np
mydata = np.array([4, 2, 3, 8, -6, 10])
</code></pre>
<p>I would like to get this result:</p>
<pre><code>np.array([9, 13, 5, 12])
</code></pre>
| 0 | 2016-07-21T14:57:49Z | 38,507,725 | <p>We can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow"><code>np.convolve</code></a> -</p>
<pre><code>np.convolve(mydata,np.ones(3,dtype=int),'valid')
</code></pre>
<p>The basic idea with <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow"><code>convolution</code></a> is that we have a kernel that we slide through the input array and the convolution operation sums the elements multiplied by the kernel elements as the kernel slides through. So, to solve our case for a window size of <code>3</code>, we are using a kernel of three <code>1s</code> generated with <code>np.ones(3)</code>.</p>
<p>Sample run -</p>
<pre><code>In [334]: mydata
Out[334]: array([ 4, 2, 3, 8, -6, 10])
In [335]: np.convolve(mydata,np.ones(3,dtype=int),'valid')
Out[335]: array([ 9, 13, 5, 12])
</code></pre>
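<p>If you want to sanity-check the result without NumPy, the same window sums can be computed in plain Python (shown purely for illustration; the <code>convolve</code> version is the vectorized one):</p>

```python
mydata = [4, 2, 3, 8, -6, 10]
window = 3
# sum of each consecutive slice of length `window`
sums = [sum(mydata[i:i + window]) for i in range(len(mydata) - window + 1)]
print(sums)  # [9, 13, 5, 12]
```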
| 4 | 2016-07-21T15:00:35Z | [
"python",
"numpy"
] |
Python argparse - Mutually exclusive group with default if no argument is given | 38,507,675 | <p>I'm writing a Python script to process a machine-readable file and output a human-readable report on the data contained within.<br>
I would like to give the option of outputting the data to <code>stdout (-s)</code> (by default) or to a txt <code>(-t)</code> or csv <code>(-c)</code> file. I would like to have a switch for the default behaviour, as many commands do.</p>
<p>In terms of <code>Usage:</code>, I'd like to see something like <code>script [-s | -c | -t] input file</code>, and have <code>-s</code> be the default if no arguments are passed.</p>
<p>I currently have (for the relevant args, in brief):</p>
<pre><code>parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--stdout', action='store_true')
group.add_argument('-c', '--csv', action='store_true')
group.add_argument('-t', '--txt', action='store_true')
args = parser.parse_args()
if not any((args.stdout, args.csv, args.txt)):
args.stdout = True
</code></pre>
<p>So if none of <code>-s</code>, <code>-t</code>, or <code>-c</code> are set, <code>stdout (-s)</code> is forced to True, exactly as if <code>-s</code> had been passed.</p>
<p>Is there a better way to achieve this? Or would another approach entirely be generally considered 'better' for some reason?</p>
<p>Note: I'm using Python 3.5.1/2 and I'm not worried about compatibility with other versions, as there is no plan to share this script with others at this point. It's simply to make my life easier.</p>
| 2 | 2016-07-21T14:57:56Z | 38,508,118 | <p>You could have each of your actions update the same variable, supplying stdout as the default value for that variable.</p>
<p>Consider this program:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument(
'-s', '--stdout', action='store_const', dest='type', const='s', default='s')
group.add_argument(
'-c', '--csv', action='store_const', dest='type', const='c')
group.add_argument(
'-t', '--txt', action='store_const', dest='type', const='t')
args = parser.parse_args()
print(args)
</code></pre>
<p>Your code could look like:</p>
<pre><code>if args.type == 's':
ofile = sys.stdout
elif args.type == 'c':
ofile = ...
...
</code></pre>
<h2>First alternative:</h2>
<p>Rather than arbitrarily choose one of the <code>.add_argument()</code>s to specify the default type, you can use <code>parser.set_defaults()</code> to specify the default type. </p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--stdout', action='store_const', dest='type', const='s')
group.add_argument('-c', '--csv', action='store_const', dest='type', const='c')
group.add_argument('-t', '--txt', action='store_const', dest='type', const='t')
parser.set_defaults(type='s')
args = parser.parse_args()
print(args)
</code></pre>
<h2>Second alternative:</h2>
<p>Rather than specify the type as an enumerated value, you could store a callable into the type, and then invoke the callable:</p>
<pre><code>import argparse
def do_stdout():
# do everything that is required to support stdout
print("stdout!")
return
def do_csv():
    # do everything that is required to support csv
print("csv!")
return
def do_text():
    # do everything that is required to support text
print("text!")
return
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--stdout', action='store_const', dest='type', const=do_stdout)
group.add_argument('-c', '--csv', action='store_const', dest='type', const=do_csv)
group.add_argument('-t', '--txt', action='store_const', dest='type', const=do_text)
parser.set_defaults(type=do_stdout)
args = parser.parse_args()
print(args)
args.type()
</code></pre>
| 5 | 2016-07-21T15:18:06Z | [
"python",
"python-3.x",
"argparse"
] |
Python argparse - Mutually exclusive group with default if no argument is given | 38,507,675 | <p>I'm writing a Python script to process a machine-readable file and output a human-readable report on the data contained within.<br>
I would like to give the option of outputting the data to <code>stdout (-s)</code> (by default) or to a txt <code>(-t)</code> or csv <code>(-c)</code> file. I would like to have a switch for the default behaviour, as many commands do.</p>
<p>In terms of <code>Usage:</code>, I'd like to see something like <code>script [-s | -c | -t] input file</code>, and have <code>-s</code> be the default if no arguments are passed.</p>
<p>I currently have (for the relevant args, in brief):</p>
<pre><code>parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--stdout', action='store_true')
group.add_argument('-c', '--csv', action='store_true')
group.add_argument('-t', '--txt', action='store_true')
args = parser.parse_args()
if not any((args.stdout, args.csv, args.txt)):
args.stdout = True
</code></pre>
<p>So if none of <code>-s</code>, <code>-t</code>, or <code>-c</code> are set, <code>stdout (-s)</code> is forced to True, exactly as if <code>-s</code> had been passed.</p>
<p>Is there a better way to achieve this? Or would another approach entirely be generally considered 'better' for some reason?</p>
<p>Note: I'm using Python 3.5.1/2 and I'm not worried about compatibility with other versions, as there is no plan to share this script with others at this point. It's simply to make my life easier.</p>
| 2 | 2016-07-21T14:57:56Z | 38,508,442 | <p>You can "cheat" with <code>sys.argv</code>:</p>
<pre><code>import argparse
import sys
def main():
if len(sys.argv) == 2 and sys.argv[1] not in ['-s', '-c', '-t', '-h']:
filename = sys.argv[1]
print "mode : stdout", filename
else:
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--stdout')
group.add_argument('-c', '--csv')
group.add_argument('-t', '--txt')
args = parser.parse_args()
if args.stdout:
print "mode stdout :", args.stdout
if args.csv:
print "mode csv :", args.csv
if args.txt:
print "mode txt :", args.txt
if __name__ == "__main__":
main()
</code></pre>
| 0 | 2016-07-21T15:33:11Z | [
"python",
"python-3.x",
"argparse"
] |
How to unpack a binary file buffer into two variables? | 38,507,704 | <p>I have a binary file that contains 4-byte binary values that represent a set of two <code>short int</code> each. I know that I can unpack a single 4-byte binary value into two short integers like this:</p>
<pre><code>from struct import unpack
fval = b'\xba\x1e\x99\x01'  # actually read from some file
qualdip, azi = unpack('hh', fval)
print(type(qualdip), qualdip)
print(type(azi), azi)
>>> <class 'int'> 7866
>>> <class 'int'> 409
</code></pre>
<p>Now, I want to unpack the entire buffer. For the moment I am doing:</p>
<pre><code>qualdips = []
azis = []
with open(bfile, 'rb') as buf:
fval = buf.read(4)
while fval:
qualdip, azi = unpack('hh', fval)
azis.append(azi)
qualdips.append(qualdip)
fval = buf.read(4)
</code></pre>
<p>Which takes over a minute for a 277MB file and seems to produce a huge memory overhead.</p>
<p>I would like to unpack the entire filebuffer directly into the two variables. How do I accomplish this?</p>
<p>I suspect that <a href="https://docs.python.org/2/library/struct.html" rel="nofollow" title="optional title"><code>struct.unpack_from</code></a> is my friend, but I am unsure how to formulate the format.</p>
<pre><code>with open(bfile, 'rb') as buf:
qualdip, azi = unpack_from('hh', buf)
</code></pre>
<p>only extracts two values, and (i know the number of elements of my file)</p>
<pre><code>with open(bfile, 'rb') as buf:
qualdip, azi = unpack_from('72457091h72457091h', buf)
</code></pre>
<p>expects this ridiculous amount of output variables. So:</p>
<p>How <em>do</em> I unpack the entire filebuffer directly into the two variables?</p>
| 1 | 2016-07-21T14:59:19Z | 38,508,181 | <p>I don't know a way to unpack the values directly into two lists, but you can unpack the entire file into a tuple and then slice it in two:</p>
<pre><code>fval = b'\xba\x1e\x99\x01' * 3
unpacked = unpack('3h3h', fval)
qualdip = unpacked[0::2]
azi = unpacked[1::2]
</code></pre>
<p>Alternatively, use <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>islice</code></a> to create a <a class='doc-link' href="http://stackoverflow.com/documentation/python/292/generators-yield">generator object</a> to reduce memory consumption.</p>
<pre><code>qualdip = islice(unpacked, 0, None, 2)
azi = islice(unpacked, 1, None, 2)
</code></pre>
| 1 | 2016-07-21T15:21:18Z | [
"python",
"unpack"
] |
How to unpack a binary file buffer into two variables? | 38,507,704 | <p>I have a binary file that contains 4-byte binary values that represent a set of two <code>short int</code> each. I know that I can unpack a single 4-byte binary value into two short integers like this:</p>
<pre><code>from struct import unpack
fval = b'\xba\x1e\x99\x01'  # actually read from some file
qualdip, azi = unpack('hh', fval)
print(type(qualdip), qualdip)
print(type(azi), azi)
>>> <class 'int'> 7866
>>> <class 'int'> 409
</code></pre>
<p>Now, I want to unpack the entire buffer. For the moment I am doing:</p>
<pre><code>qualdips = []
azis = []
with open(bfile, 'rb') as buf:
fval = buf.read(4)
while fval:
qualdip, azi = unpack('hh', fval)
azis.append(azi)
qualdips.append(qualdip)
fval = buf.read(4)
</code></pre>
<p>Which takes over a minute for a 277MB file and seems to produce a huge memory overhead.</p>
<p>I would like to unpack the entire filebuffer directly into the two variables. How do I accomplish this?</p>
<p>I suspect that <a href="https://docs.python.org/2/library/struct.html" rel="nofollow" title="optional title"><code>struct.unpack_from</code></a> is my friend, but I am unsure how to formulate the format.</p>
<pre><code>with open(bfile, 'rb') as buf:
qualdip, azi = unpack_from('hh', buf)
</code></pre>
<p>only extracts two values, and (i know the number of elements of my file)</p>
<pre><code>with open(bfile, 'rb') as buf:
qualdip, azi = unpack_from('72457091h72457091h', buf)
</code></pre>
<p>expects this ridiculous amount of output variables. So:</p>
<p>How <em>do</em> I unpack the entire filebuffer directly into the two variables?</p>
| 1 | 2016-07-21T14:59:19Z | 38,509,977 | <p>I think this might be a faster way to do it:</p>
<pre><code>import os
import struct
def pairwise(iterable):
"s -> (s0,s1), (s2,s3), (s4, s5), ..."
a = iter(iterable)
return zip(a, a)
bfile = 'bfile.bin'
filesize = os.stat(bfile).st_size
numvals = filesize // 2
with open(bfile, 'rb') as bf:
fmt = '{}h'.format(numvals)
    values = struct.unpack(fmt, bf.read())
qualdips, azis = zip(*pairwise(values))
</code></pre>
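<p>On Python 3.4+ there is also <code>struct.iter_unpack</code>, which walks the buffer record by record without building a format string for the whole file. A sketch using the sample bytes from the question (<code>&lt;</code> forces little-endian, so the result is platform-independent):</p>

```python
import struct

buf = b'\xba\x1e\x99\x01' * 3  # three (qualdip, azi) records
pairs = list(struct.iter_unpack('<hh', buf))  # one (qualdip, azi) tuple per record
qualdips = [q for q, a in pairs]
azis = [a for q, a in pairs]
print(qualdips)  # [7866, 7866, 7866]
print(azis)      # [409, 409, 409]
```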
| 0 | 2016-07-21T16:47:00Z | [
"python",
"unpack"
] |
Cython __pyx_r may be used uninitialized in this function | 38,507,806 | <p>I am trying to utilize cython to provide a wrapper for my C++ utilities. One such function I am trying to make accessible is an accessor that returns an enum based on file type.</p>
<p>Here is how I re-define the function in cython:</p>
<pre><code>cdef extern from "reader.h" namespace "magic_number":
enum mcr_magic_number_t:
MDI = 0
EOT
RV
UNKNOWN
</code></pre>
<p>and then in my <code>reader.pxd</code> file I have</p>
<pre><code>cpdef mcr_magic_number_t magic_number(self)
</code></pre>
<p>and then in my <code>reader.pyx</code> file I have</p>
<pre><code>cpdef mcr_magic_number_t magic_number(self):
"""
:return: the magic_number enum
:rtype: mcr_magic_number_t
"""
return self.thisptr.magic_number()
</code></pre>
<p>Now, when I go to compile this, I get a warning</p>
<p><code>warning: ‘__pyx_r’ may be used uninitialized in this function</code></p>
<p>Anyone know how is best to get around this? I tried searching for solutions on google but all I got were pages of other people reporting the same __pyx_r warning. Maybe there is a way to set a default value or to make sure that it is always initialized within cython?</p>
| 1 | 2016-07-21T15:03:52Z | 38,523,517 | <p>Try checking self.thisptr for non-NULL value:</p>
<pre><code>if <void*>self.thisptr != NULL:
return self.thisptr.magic_number()
</code></pre>
| 0 | 2016-07-22T09:58:07Z | [
"python",
"c++",
"cython"
] |
ImportError: No module named 'users' | 38,507,856 | <p>When I run <code>python /manage.py runserver</code>, it generates the following error.</p>
<pre><code>ImportError: No module named 'users'
</code></pre>
<p>I was thinking about this error; maybe I made a mistake in the app settings.</p>
<p><strong>$tree</strong></p>
<pre><code>.
├── LICENSE
├── README.md
├── functional_test.py
├── requirement
│   ├── development.txt
│   └── production.txt
├── users
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-35.pyc
│   │   ├── tests.cpython-35.pyc
│   │   └── views.cpython-35.pyc
│   ├── tests.py
│   └── views.py
└── wef
    ├── db.sqlite3
    ├── manage.py
    └── wef
        ├── __init__.py
        ├── __pycache__
        │   ├── __init__.cpython-35.pyc
        │   ├── settings.cpython-35.pyc
        │   └── urls.cpython-35.pyc
        ├── settings.py
        ├── urls.py
        └── wsgi.py
</code></pre>
<p>I think it is not a problem.</p>
<p>Second, maybe I didn't insert <code>'users'</code> in <code>settings.py</code>.</p>
<p><strong>In settings.py</strong></p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'debug_toolbar',
'django_extensions',
'users',
]
</code></pre>
<p>I have to double-check these situations.</p>
<p>Here's my code:</p>
<p><strong>urls.py</strong></p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from users.views import JoinUsView
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', user, name='home'),
]
</code></pre>
<p><strong>users/views.py</strong></p>
<pre><code>from django.http import HttpResponse
def user(request):
return HttpResponse("hello world")
</code></pre>
| 0 | 2016-07-21T15:06:03Z | 38,507,938 | <p>You get the import error because the <code>users</code> directory is not on the Python path. The easiest solution is to move the <code>users</code> directory into the project <code>wef</code> directory (the one that contains <code>manage.py</code>). </p>
<pre><code>└── wef
    ├── db.sqlite3
    ├── manage.py
    ├── users
    │   ├── __init__.py
    │   ├── __pycache__
    │   ...
    └── wef
        ├── __init__.py
        ├── __pycache__
        │   ├── __init__.cpython-35.pyc
        │   ├── settings.cpython-35.pyc
        │   └── urls.cpython-35.pyc
        ├── settings.py
        ├── urls.py
        └── wsgi.py
</code></pre>
<p>This will work because <code>./manage.py</code> adds the project directory to the Python path. If the <code>users</code> directory is outside of the project directory, then you will have to modify the python path yourself.</p>
| 3 | 2016-07-21T15:09:45Z | [
"python",
"django"
] |
Capturing output of scp from python | 38,507,920 | <p>I have a django server, and I have to upload some data through scp. I have this view:</p>
<pre><code>pathFile = '/home/user1/foo.json'
user = 'user1scp'
server = 'someserver.com'
pathServer = '/var/www/foo.json'
os.system("scp %s %s@%s:%s" % (pathFile, user, server, pathServer))
</code></pre>
<p>On the console window where the page is running (i.e. where I called the command 'runserver') I have this output shown:</p>
<pre><code>[21/Jul/2016 18:55:12] "GET /someurl/upload HTTP/1.1" 301 0
foo.json 100% 609 0.6KB/s 00:00
</code></pre>
<p>I want to be able to manipulate that output, so I can notify the user that all the files (there are multiple files to upload) were uploaded correctly, or which were not.</p>
<p>I tried the solution in this answer <a href="http://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call">How to capture stdout output from a Python function call?</a> but it didn't work. I tried Popen and subprocess and had no results as well. Maybe I'm doing something wrong?</p>
| 0 | 2016-07-21T15:08:53Z | 38,508,039 | <p>This doesn't directly answer your question, but I wouldn't run the raw <code>scp</code> command from python because its output is hard to parse even if you capture it. You should consider using a tool like <a href="http://docs.fabfile.org/en/1.11/index.html" rel="nofollow"><code>fabric</code></a> to handle this. It's pythonic and you have full control over the input/output. The fabric operation equivalent to <code>scp</code> is <a href="http://docs.fabfile.org/en/1.11/api/core/operations.html#fabric.operations.put" rel="nofollow"><code>put</code></a>. For an example you could check this <a href="http://stackoverflow.com/questions/5314711/how-do-i-copy-a-directory-to-a-remote-machine-using-fabric?answertab=votes#tab-top">SO answer</a>.</p>
<p><sub>Almost all command line operations can be done using <code>fabric</code>, you won't regret learning it.</sub></p>
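<p>That said, if you do keep the raw <code>scp</code> call, replacing <code>os.system</code> with <code>subprocess</code> at least lets you capture the exit status and streams, which tells you whether each copy succeeded. A sketch (the real call would use the <code>scp</code> arguments from the question; the commented lines are illustrative):</p>

```python
import subprocess

def run_and_capture(cmd):
    """Run a command, returning (returncode, stdout, stderr) as text."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True)
    out, err = p.communicate()
    return p.returncode, out, err

# In the view this would be something like:
# rc, out, err = run_and_capture(["scp", pathFile, "%s@%s:%s" % (user, server, pathServer)])
# where a non-zero rc means that file failed to copy.
```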
| 1 | 2016-07-21T15:14:18Z | [
"python",
"django",
"scp"
] |
Save a file generated by app running on docker to a given path in the host machine | 38,507,929 | <p>I have a python app running on a docker container and it generates a pdf file. I want to store the generated pdf file in a given path in the host machine.</p>
<p>I am not sure how this can be achieved. Any ideas?</p>
| 2 | 2016-07-21T15:09:21Z | 38,508,158 | <p>Use volumes to mount a directory in the container to a host directory:</p>
<pre><code>docker run -v /MY/HOST_DIR:/MY/CONTAINER_DIR
</code></pre>
<p>Your pdf file will be stored in /MY/HOST_DIR on host.</p>
| 0 | 2016-07-21T15:20:07Z | [
"python",
"docker",
"docker-compose"
] |
Save a file generated by app running on docker to a given path in the host machine | 38,507,929 | <p>I have a python app running on a docker container and it generates a pdf file. I want to store the generated pdf file in a given path in the host machine.</p>
<p>I am not sure how this can be achieved. Any ideas?</p>
| 2 | 2016-07-21T15:09:21Z | 38,508,482 | <p>Mount a volume in your container mapped to the desired path in your host</p>
<pre><code>docker run -d -v /host/path:/python_app/output your_docker_image
</code></pre>
<p>Where <code>/python_app/output</code> is the path inside the container where your app is writing the pdf file.</p>
<p>Note that <code>/host/path</code> should have enough permissions</p>
<pre><code>chmod 777 /host/path
</code></pre>
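<p>If you use docker-compose (as the tags suggest), the same bind mount can be declared in the compose file; the service name, image and paths below are placeholders:</p>

```yaml
services:
  python_app:
    image: your_docker_image
    volumes:
      - /host/path:/python_app/output
```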
| 2 | 2016-07-21T15:34:49Z | [
"python",
"docker",
"docker-compose"
] |
Transform every two fields of rows into columns containing two rows | 38,507,958 | <p>I would like to transform every two fields of each row into columns containing two rows, and loop this transformation for each row.</p>
<p>This is the input:</p>
<pre><code>id refpop001 altpop001 refpop002 altpop002 refpop003 altpop003
id1 6 274 2 93 5 95
id2 202 0 220 0 73 0
id3 166 159 0 173 114 90
</code></pre>
<p>This is the desired output:</p>
<pre><code>id pop001 pop002 pop003
id1ref 6 2 5
id1alt 274 93 95
id2ref 202 220 73
id2alt 0 0 0
id3ref 166 0 114
id3alt 159 173 90
</code></pre>
<p>Header and id column are only indicated for clarification and are not required in the output</p>
| -3 | 2016-07-21T15:11:12Z | 38,508,191 | <p>You can loop through the input and then split it up, maybe something along the lines of this</p>
<pre><code>i = 0
for row in input:
    row_array = row.split()
    i += 1
    ref = row_array[1] + " " + row_array[3] + " " + row_array[5]
    alt = row_array[2] + " " + row_array[4] + " " + row_array[6]
    print "id" + str(i) + "ref " + ref
    print "id" + str(i) + "alt " + alt
</code></pre>
<p>Didn't actually run this code, but the idea is there so manipulate this as necessary.</p>
| 0 | 2016-07-21T15:21:54Z | [
"python",
"loops",
"rows"
] |
Transform every two fields of rows into columns containing two rows | 38,507,958 | <p>I would like to transform every two fields of each row into columns containing two rows, and loop this transformation for each row.</p>
<p>This is the input:</p>
<pre><code>id refpop001 altpop001 refpop002 altpop002 refpop003 altpop003
id1 6 274 2 93 5 95
id2 202 0 220 0 73 0
id3 166 159 0 173 114 90
</code></pre>
<p>This is the desired output:</p>
<pre><code>id pop001 pop002 pop003
id1ref 6 2 5
id1alt 274 93 95
id2ref 202 220 73
id2alt 0 0 0
id3ref 166 0 114
id3alt 159 173 90
</code></pre>
<p>Header and id column are only indicated for clarification and are not required in the output</p>
| -3 | 2016-07-21T15:11:12Z | 38,508,197 | <p>Given that you are transforming tab-delimited plain text stored in a file and your data size is not changing, a straightforward approach is:</p>
<pre><code>lines=open('file_or_stream_name.txt','r').readlines();
newLines=[]
newLines.append('\t'.join('id','pop001','pop002','pop003')) #header line
for line in lines[1:]:
elements=line.split('\t')
newLine=[]
newLine.append(elements[0]+'ref')
newLine.extend(elements[1::2])
newLines.append('\t'.join(newLine))
newLine=[]
newLine.append(elements[0]+'alt')
newLine.extend(elements[2::2])
newLines.append('\t'.join(newLine))
newText='\n'.join(newLines) #or '\r\n'.join(...), if you're in Windows
</code></pre>
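<p>For reference, the same reshaping packaged as a small self-contained function over whitespace-separated sample rows (the function name and tab output format are illustrative choices):</p>

```python
def transpose_refalt(rows):
    """Turn 'id ref1 alt1 ref2 alt2 ...' rows into separate ref/alt lines."""
    out = []
    for row in rows:
        fields = row.split()
        out.append(fields[0] + "ref\t" + "\t".join(fields[1::2]))  # odd columns
        out.append(fields[0] + "alt\t" + "\t".join(fields[2::2]))  # even columns
    return out

print(transpose_refalt(["id1 6 274 2 93 5 95"]))
# ['id1ref\t6\t2\t5', 'id1alt\t274\t93\t95']
```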
| 0 | 2016-07-21T15:22:11Z | [
"python",
"loops",
"rows"
] |
Python. Send Uploaded File To Remote Server | 38,508,111 | <p>In my Flask app, I want to upload a file to a remote server.</p>
<p>I tried this code but I get an error:</p>
<pre><code>import subprocess
import os
c_dir = os.path.dirname(os.path.abspath(__file__))
myfile = open(c_dir + '\\cape-kid.png')
p = subprocess.Popen(["scp", myfile, destination])
sts = os.waitpid(p.pid, 0)
</code></pre>
<p>This was just a test file. There's an image in the same directory as my test python file. The error said:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\waite-ryan-m\Desktop\remote-saving\test-send.py", line 20,
in
p = subprocess.Popen(["scp", c_dir + '\cape-kid.png', 'destination']) File
"C:\Users\waite-ryan-m\Desktop\WPython\WinPython-64bit-2.7.12.1Zero\python-2.7.12.amd64\lib\subprocess.py",
line 711, in __init__
errread, errwrite) File "C:\Users\waite-ryan-m\Desktop\WPython\WinPython-64bit-2.7.12.1Zero\python-2.7.12.amd64\lib\subprocess.py",
line 959, in _execute_child
startupinfo) WindowsError: [Error 2] The system cannot find the file specified</p>
</blockquote>
| 1 | 2016-07-21T15:17:46Z | 38,539,097 | <p>With <code>open()</code> you open a file to read from or write to it. What you want is to concatenate the strings and use the resulting path as a parameter for scp. Maybe the file you want to copy doesn't exist - have you tried printing the path you constructed and checking it manually?
And have you defined <code>destination</code> anywhere? This message could also mean that the system cannot find <code>scp</code>.</p>
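<p>In other words, hand <code>scp</code> a path string rather than a file object; a minimal sketch (the destination is a made-up placeholder):</p>

```python
import os

c_dir = os.getcwd()  # stand-in for os.path.dirname(os.path.abspath(__file__))
src = os.path.join(c_dir, "cape-kid.png")  # a plain path string, not open(...)
destination = "user@host:/remote/dir/"     # hypothetical target
cmd = ["scp", src, destination]            # the argument list for subprocess.Popen
```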
| 1 | 2016-07-23T06:58:51Z | [
"python"
] |
Replace values in a large list of arrays (performance) | 38,508,116 | <p>I have a performance problem with replacing values of a list of arrays using a dictionary. </p>
<p>Let's say this is my dictionary:</p>
<pre><code># Create a sample dictionary
keys = [1, 2, 3, 4]
values = [5, 6, 7, 8]
dictionary = dict(zip(keys, values))
</code></pre>
<p>And this is my list of arrays:</p>
<pre><code># import numpy as np
# List of arrays
listvalues = []
arr1 = np.array([1, 3, 2])
arr2 = np.array([1, 1, 2, 4])
arr3 = np.array([4, 3, 2])
listvalues.append(arr1)
listvalues.append(arr2)
listvalues.append(arr3)
listvalues
>[array([1, 3, 2]), array([1, 1, 2, 4]), array([4, 3, 2])]
</code></pre>
<p>I then use the following function to replace all values in a nD nummpy array using a dictionary:</p>
<pre><code># Replace function
def replace(arr, rep_dict):
rep_keys, rep_vals = np.array(list(zip(*sorted(rep_dict.items()))))
idces = np.digitize(arr, rep_keys, right=True)
return rep_vals[idces]
</code></pre>
<p>This function is really fast, however I need to iterate over my list of arrays to apply this function to each array:</p>
<pre><code>replaced = []
for i in xrange(len(listvalues)):
replaced.append(replace(listvalues[i], dictionary))
</code></pre>
<p>This is the bottleneck of the process, as it needs to iterate over thousands of arrays.
How could I achieve the same result without using the for-loop? It is important that the result is in the same format as the input (a list of arrays with replaced values).</p>
<p>Many thanks guys!!</p>
| 1 | 2016-07-21T15:17:59Z | 38,509,123 | <p>This will do the trick efficiently, using the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package. It can be further simplified if all values in 'listvalues' are guaranteed to be present in 'keys'; but I'll leave that as an exercise to the reader.</p>
<pre><code>import numpy_indexed as npi
arr = np.concatenate(listvalues)
idx = npi.indices(keys, arr, missing='mask')
remap = np.logical_not(idx.mask)
arr[remap] = np.array(values)[idx[remap]]
replaced = np.array_split(arr, np.cumsum([len(a) for a in listvalues][:-1]))
</code></pre>
| 2 | 2016-07-21T16:05:07Z | [
"python",
"arrays",
"performance",
"numpy",
"for-loop"
] |
Django: how to cache a function | 38,508,240 | <p>I have a web application that runs python in the back-end. When my page loads, a django function is called which runs a SQL query and that query takes about 15-20 seconds to run and return the response. And that happens every time the page loads and it would be very annoying for the user to wait 15-20 secs every time the page refreshes. </p>
<p>So I wanted to know if there is a way to cache the response from the query and store it somewhere in the browser when the page loads the first time. And whenever, the page refreshes afterwards, instead of running the query again, I would just get the data from browser's cache and so the page would load quicker. </p>
<p>This is the function that runs when the page loads</p>
<pre><code>def populateDropdown(request):
database = cx_Oracle.connect('username', 'password', 'host')
cur = database.cursor()
cur.execute("select distinct(item) from MY_TABLE")
dropList = list(cur)
dropList = simplejson.dumps({"dropList": dropList})
return HttpResponse(dropList, content_type="application/json")
</code></pre>
<p>I can't seem to find an example on how to do this. I looked up Django's documentation on caching but it shows how to cache entire page not a specific function. It would be great if you can provide a simple example or link to a tutorial. Thanks :)</p>
| 1 | 2016-07-21T15:24:37Z | 38,508,402 | <p>You can cache the result of the view that runs that query:</p>
<pre><code>from django.views.decorators.cache import cache_page
@cache_page(600) # 10 minutes
def populateDropdown(request):
...
</code></pre>
<p>Or cache the expensive functions in the view, which in your case is almost synonymous with caching the entire view:</p>
<pre><code>from django.core.cache import cache
def populateDropdown(request):
if not cache.get('droplist'): # check if droplist has expired in cache
database = cx_Oracle.connect('username', 'password', 'host')
cur = database.cursor()
cur.execute("select distinct(item) from MY_TABLE")
dropList = list(cur)
dropList = simplejson.dumps({"dropList": dropList})
cache.set('droplist', dropList, 600) # 10 minutes
return HttpResponse(cache.get('droplist'), content_type="application/json")
</code></pre>
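<p>Stripped of Django, the pattern in the second snippet is just "compute once, reuse until the entry expires"; an illustrative timestamp-based sketch (not a replacement for Django's cache backends):</p>

```python
import time

_cache = {}

def get_or_compute(key, compute, ttl=600):
    """Return the cached value for key, recomputing it after ttl seconds."""
    entry = _cache.get(key)
    now = time.time()
    if entry is None or now - entry[1] > ttl:
        entry = (compute(), now)  # store (value, timestamp)
        _cache[key] = entry
    return entry[0]
```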
| 2 | 2016-07-21T15:31:32Z | [
"python",
"django",
"caching"
] |
How to get the max/min value in Pandas DataFrame when nan value in it | 38,508,294 | <p>One column of my pandas dataframe has <code>nan</code> values, so when I want to get the max value of that column, it just returns an error. </p>
<pre><code>>>> df.iloc[:, 1].max()
'error:512'
</code></pre>
<p>How can I skip that <code>nan</code> value and get the max value of that column?</p>
| 2 | 2016-07-21T15:26:32Z | 38,508,323 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow">Series.dropna</a>.</p>
<pre><code>res = df.iloc[:, 1].dropna().max()
</code></pre>
<p>Note that <code>Series.max</code> already skips <code>NaN</code> by default (<code>skipna=True</code>), so an error like the one above usually means the column also contains non-numeric values such as strings.</p>
| 4 | 2016-07-21T15:27:46Z | [
"python",
"pandas"
] |
How to get the max/min value in Pandas DataFrame when nan value in it | 38,508,294 | <p>One column of my pandas dataframe has <code>nan</code> values, so when I want to get the max value of that column, it just returns an error. </p>
<pre><code>>>> df.iloc[:, 1].max()
'error:512'
</code></pre>
<p>How can I skip that <code>nan</code> value and get the max value of that column?</p>
| 2 | 2016-07-21T15:26:32Z | 38,508,465 | <p>You can use <code>NumPy</code>'s help with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmax.html" rel="nofollow"><code>np.nanmax</code></a>, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmin.html" rel="nofollow"><code>np.nanmin</code></a> :</p>
<pre><code>In [28]: df
Out[28]:
A B C
0 7 NaN 8
1 3 3 5
2 8 1 7
3 3 0 3
4 8 2 7
In [29]: np.nanmax(df.iloc[:, 1].values)
Out[29]: 3.0
In [30]: np.nanmin(df.iloc[:, 1].values)
Out[30]: 0.0
</code></pre>
| 3 | 2016-07-21T15:34:03Z | [
"python",
"pandas"
] |