title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,709 | <p>First take your big-list and join it together as a string.</p>
<p><code>new_list = ''.join(biglist)</code></p>
<p>then you have <code>new_list = 'ABCDA12AX31CDABC'</code></p>
<p>split up new_list on 'A'</p>
<p><code>split_list = new_list.split('A')</code></p>
<p>then you have <code>split_list = ['', 'BCD', '12', 'X31CD', 'BC']</code></p>
<p>then add the 'A's back in there</p>
<p><code>final_list = ['A'+x for x in split_list if x]</code></p>
<p>altogether:</p>
<pre><code>new_list = ''.join(biglist)
split_list = new_list.split('A')
final_list = ['A'+x for x in split_list if x]
>>> final_list
['ABCD', 'A12', 'AX31CD', 'ABC']
</code></pre>
<p>or in neato one line format: </p>
<pre><code>final_list = ['A'+x for x in ''.join(biglist).split('A') if x]
</code></pre>
<p>slap it into a dictionary:</p>
<pre><code>dict_lists = {}
for i, v in enumerate(final_list):
    dict_lists['list{}'.format(i)] = v
</code></pre>
<p>and access them like</p>
<pre><code>>>> dict_lists['list0']
'ABCD'
</code></pre>
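<p>A caveat: the join trick above assumes every element is a single character (and never contains 'A' as a substring), which matches the original example but not the edited question. A sketch of a list-based variant that also handles multi-character elements (the helper name is illustrative, not from the original code):</p>

```python
def split_on_marker(items, marker='A'):
    # Compare whole elements, so 'B1' or 'C00' are never split apart.
    groups = []
    for item in items:
        # start a new group at each marker (or for a list not starting with one)
        if item == marker or not groups:
            groups.append([])
        groups[-1].append(item)
    return groups

biglist = ['A', 'B1', 'C00', 'D', 'A', '1', '2000', 'A', 'X', '3', '1', 'C', 'D', 'A', 'B', 'C']
print(split_on_marker(biglist))
# [['A', 'B1', 'C00', 'D'], ['A', '1', '2000'], ['A', 'X', '3', '1', 'C', 'D'], ['A', 'B', 'C']]
```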
| 0 | 2016-07-27T16:37:25Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,719 | <p>You could put the results in a dictionary with <code>'list1'</code>, <code>'list2'</code>, <code>...</code> as keys. The <code>defaultdict</code> creates a new key with an empty list every time an <code>A</code> is encountered in the list. The items following <code>'A'</code> are added to this list until another <code>'A'</code> is encountered.</p>
<pre><code>from collections import defaultdict
from itertools import count
biglist = ['A','B','C','D','A','1','2','A','X','3','1','C','D','A','B','C']
c = count(1)
d = defaultdict(list)
for i in biglist:
    if i == 'A':
        j = str(next(c))
    d['list' + j].append(i)
print(d)
# defaultdict(<class 'list'>, {'list2': ['A', '1', '2'], 'list3': ['A', 'X', '3', '1', 'C', 'D'], 'list1': ['A', 'B', 'C', 'D'], 'list4': ['A', 'B', 'C']})
</code></pre>
<p>The first list can be accessed via <code>d['list1']</code> and generally <code>d['listn']</code>, where <code>n</code> runs from 1 up to the number of lists in the dictionary.</p>
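<p>Because the comparison is against the whole element (<code>i == 'A'</code>), this approach also works unchanged with the multi-character strings from the edited question; a quick sketch (Python 3 print syntax shown):</p>

```python
from collections import defaultdict
from itertools import count

biglist = ['A', 'B1', 'C00', 'D', 'A', '1', '2000', 'A', 'X', '3', '1', 'C', 'D', 'A', 'B', 'C']
c = count(1)
d = defaultdict(list)
j = '0'  # only used if the list didn't start with 'A'
for i in biglist:
    if i == 'A':
        j = str(next(c))     # each 'A' starts a fresh 'listN' key
    d['list' + j].append(i)

print(d['list1'])  # ['A', 'B1', 'C00', 'D']
print(d['list2'])  # ['A', '1', '2000']
```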
| 0 | 2016-07-27T16:38:06Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,749 | <p>Aside from the other excellent suggestions, you could write a generator which will give you things you can enumerate over later. This could be tidier, but...</p>
<pre><code>def group(stuff):
    item = []
    for thing in stuff:
        if thing != 'A':
            item.append(thing)
            continue
        if len(item) > 0:
            yield item
        item = ['A']
    yield item

if __name__ == '__main__':
    biglist = ['A','B','C','D','A','1','2','A','X','3','1','C','D','A','B','C']
    for i in group(biglist):
        print i
</code></pre>
| 1 | 2016-07-27T16:39:56Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,782 | <p>It can be done fairly simply with a generator:</p>
<pre><code>def split(biglist):
    last = None
    for x in biglist:
        if x == "A":
            if last:
                yield last
            last = [x]
        else:
            if last is None:  # in case the list didn't start with 'A'
                last = []
            last.append(x)
    if last:  # don't drop the final group
        yield last

for x in split(biglist):
    print x

['A', 'B', 'C', 'D']
['A', '1', '2']
['A', 'X', '3', '1', 'C', 'D']
['A', 'B', 'C']
</code></pre>
| 2 | 2016-07-27T16:41:37Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,800 | <p>I'd probably use <code>itertools.groupby</code>:</p>
<pre><code>from itertools import groupby

def group_stuff(iterable, partition='A'):
    out = []
    for k, v in groupby(iterable, key=lambda x: x != partition):
        if not k:
            out = list(v)
        else:
            out.extend(v)
            yield out
            out = []
    if out:
        yield out

# Test cases
biglist = ['A','B','C','D','A','1','2','A','X','3','1','C','D','A','B','C']
for item in group_stuff(biglist):
    print(item)
print('*' * 80)

biglist.append('A')
for item in group_stuff(biglist):
    print(item)
print('*' * 80)

biglist.pop(0)
for item in group_stuff(biglist):
    print(item)
</code></pre>
<p>Basically, we notice that in your list we have 2 separate groups... The first group is "It's an A!", the second group is "It isn't an A". <code>groupby</code> will partition your iterable into those two groups trivially. All that remains is a little logic to merge the groups appropriately (adding a "It's an A!" group -- if it exists -- to the start of an "It's not an A" group).</p>
<p>If you have consecutive <code>'A'</code> in your list, this will give you a list that has more than one <code>'A'</code> at the beginning. If that's a problem, we can modify the logic in the <code>if not k:</code> block slightly to yield all but the last value as a list...</p>
<pre><code>if not k:
    values = list(v)
    for item in values[:-1]:
        yield [item]
    out = [values[-1]]
</code></pre>
<p>As for setting that output as names in the local namespace, there are <em>LOTS</em> of questions around here which point out that this is generally a bad idea. <a href="http://nedbatchelder.com/blog/201112/keep_data_out_of_your_variable_names.html" rel="nofollow">Here's</a> an external post which talks about it. The gist of it is that you'll do much better if you just use a list to hold the data. Instead of</p>
<pre><code>list0 = ...
list1 = ...
</code></pre>
<p>do:</p>
<pre><code>lst[0] = ...
lst[1] = ...
</code></pre>
<p>etc. Your code will end up being much easier to work with.</p>
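<p>A quick sketch of that last point (the values below are illustrative): keeping one container instead of <code>list0</code>, <code>list1</code>, ... variables means you can index and iterate over all the groups, which named variables don't allow:</p>

```python
# One list holds every group; no need to invent listN variable names.
lists = [['A', 'B', 'C', 'D'], ['A', '1', '2'], ['A', 'X', '3', '1', 'C', 'D']]

print(lists[0])           # what would have been "list0"
for n, lst in enumerate(lists):
    print(n, lst)         # trivially loop over all groups, unlike named variables
```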
| 1 | 2016-07-27T16:42:36Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,851 | <p>You can convert the list of characters into a string and use the split() function to divide the string at each occurrence of 'A'.</p>
<pre><code>biglist = ['A','B','C','D','A','1','2','A','X','3','1','C','D','A','B','C']
lists = [list('A'+x) for x in ''.join(biglist).split("A") if x]
</code></pre>
<p>will give you a list of character lists, as required (note that, like the other join-based approach, this assumes single-character elements).</p>
<pre><code>>>> lists
[['A', 'B', 'C', 'D'], ['A', '1', '2'], ['A', 'X', '3', '1', 'C', 'D'], ['A', 'B', 'C']]
</code></pre>
| 0 | 2016-07-27T16:45:05Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,618,966 | <p>One option is to use <code>groupby</code> from itertools:</p>
<pre><code># create a group variable by looping through the list
from itertools import groupby
acc, grp = 0, []
for e in biglist:
    acc += (e == 'A')
    grp.append(acc)
# split the original list by the group variable
[[i[0] for i in g] for _, g in groupby(zip(biglist, grp), lambda x: x[1])]
# [['A', 'B', 'C', 'D'],
# ['A', '1', '2'],
# ['A', 'X', '3', '1', 'C', 'D'],
# ['A', 'B', 'C']]
</code></pre>
<p>We can also use <code>pandas</code>:</p>
<pre><code>import pandas as pd
s = pd.Series(biglist)
[list(g) for _, g in s.groupby((s == 'A').cumsum())]
# [['A', 'B', 'C', 'D'],
# ['A', '1', '2'],
# ['A', 'X', '3', '1', 'C', 'D'],
# ['A', 'B', 'C']]
</code></pre>
| 0 | 2016-07-27T16:50:59Z | [
"python",
"python-2.7"
] |
I need to split a python list into several python lists but the new lists need to contain fields between certain strings | 38,618,526 | <p>I have a fairly large Python 2.7 list containing strings like this: </p>
<pre><code>biglist = ['A','B1','C00','D','A','1','2000','A','X','3','1','C','D','A','B','C']
</code></pre>
<p>I need to cut this up into several separate lists, split each time it finds an 'A' string in the list; each new list then contains everything until the next 'A'. So the result is this:</p>
<pre><code>list1 = ['A','B1','C00','D']
list2 = ['A','1','2000']
list3 = ['A','X','3','1','C','D']
list4 = ['A','B','C']
listx = ...
</code></pre>
<p>The number of newly created lists also varies.</p>
<p>I'm completely stuck on this and it's completely over my head; I've researched all day and can't find anything. Thank you for helping me out. I use Python 2.7.</p>
<p><em>EDITED: MY STRINGS IN THE BIGLIST ARE NOT ALL 1 CHAR, THEY ARE DIFFERENT IN SIZE, THANK YOU FOR THE HELP.</em></p>
| 0 | 2016-07-27T16:27:32Z | 38,619,103 | <p>This will create module-level variables list1, ..., listn.</p>
<p><strong>If it possible to use <em>list of lists</em> or <em>dict of lists</em> you should prefer other answers.</strong></p>
<p>This answer is based on the Python function <a href="https://docs.python.org/2/library/functions.html#globals" rel="nofollow"><code>globals</code></a>, which returns the dict of the current global namespace. It modifies this dict to create variables on the fly. There is also an equivalent function for getting local variables, but its documentation carries a "note" warning that it is a bad idea to change that dict. However, there is no such warning for <code>globals</code>, so, I hope, the code is safe.</p>
<pre><code>biglist = ['A','B','C','D','A','1','2','A','X','3','1','C','D','A','B','C']

last_arr_index = 1
tmp_list = []
for idx, letter in enumerate(biglist):
    if letter == 'A' and idx > 0:
        globals()['list' + str(last_arr_index)] = tmp_list
        last_arr_index += 1
        tmp_list = ['A']
    else:
        tmp_list.append(letter)
globals()['list' + str(last_arr_index)] = tmp_list  # keep the final group too

print(list1)
print(list2)
print(list3)
</code></pre>
| 0 | 2016-07-27T16:58:38Z | [
"python",
"python-2.7"
] |
how to get value of an xml element not directly under root | 38,618,778 | <p>I am trying to parse an XML file and get the value of <code>dir_path</code> as below; however, I don't seem to get the desired output. What's wrong here and how do I fix it?</p>
<p><strong>input.xml</strong></p>
<pre><code><?xml version="1.0" ?>
<data>
<software>
<name>xyz</name>
<role>xyz</role>
<future>unknown</future>
</software>
<software>
<name>abc</name>
<role>abc</role>
<future>clear</future>
<dir_path cmm_root_path_var="COMP_softwareROOT">\\location\software\INR\</dir_path>
<loadit reduced="true">
<RW>yes</RW>
<readonly>R/</readonly>
</loadit>
<upload reduced="true">
</upload>
</software>
<software>
<name>def</name>
<role>def</role>
<future>clear</future>
<dir_path cmm_root_path_var="COMP2_softwareROOT">\\location1\software\INR\</dir_path>
<loadit reduced="true">
<RW>yes</RW>
<readonly>R/</readonly>
</loadit>
<upload reduced="true">
</upload>
</software>
</data>
</code></pre>
<p>CODE:-</p>
<pre><code>tree = ET.parse('input.xml')
root = tree.getroot()
dir_path = root.find(".//dir_path")
print dir_path.text
</code></pre>
<p>OUTPUT:-</p>
<pre><code>.\
</code></pre>
<p>EXPECTED OUTPUT:-</p>
<pre><code>\\location\software\INR\
</code></pre>
| 0 | 2016-07-27T16:41:21Z | 38,619,959 | <p>Try the following:</p>
<pre><code>from xml.etree import ElementTree as ET
tree = ET.parse('filename.xml')
item = tree.find('software/[name="abc"]/dir_path')
print(item.text if item is not None else None)
</code></pre>
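<p>If instead you want every <code>dir_path</code> (one per <code>software</code> entry), <code>findall</code> works too; a minimal sketch with an inline document (the placeholder path values here are illustrative, not the OP's real paths):</p>

```python
import xml.etree.ElementTree as ET

xml = """<data>
  <software><name>abc</name>
    <dir_path cmm_root_path_var="COMP_softwareROOT">path-to-abc</dir_path>
  </software>
  <software><name>def</name>
    <dir_path cmm_root_path_var="COMP2_softwareROOT">path-to-def</dir_path>
  </software>
</data>"""

root = ET.fromstring(xml)
# findall with './/' matches dir_path elements at any depth, in document order
for dp in root.findall('.//dir_path'):
    print(dp.get('cmm_root_path_var'), dp.text)
```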
| 1 | 2016-07-27T17:49:57Z | [
"python"
] |
merging json with python | 38,618,824 | <p>I have the following code to merge values in my JSON:</p>
<pre><code>from jsonmerge import merge
with open('env.json') as data_file:
data = json.load(data_file)
result2 = merge("", data.get('default_attributes'))
result3 = merge(result2, data.get('normal_attributes'))
result4 = merge(result3, data.get('override_attributes'))
result5 = merge(result4, data.get('force_override_attributes'))
print result4, result5
result6 = merge(result5, data.get('automatic_attributes'))
cookbook_versions = {"cookbook_versions" : data.get('cookbook_versions')}
result7 = merge(result6, cookbook_versions)
</code></pre>
<p>Now when I print result4, result5 I get :</p>
<blockquote>
<p>result4 = {u'modmon': {u'env': u'dev'}, u'default': {u'env':
u'developmen-jq'}, u'paypal': {u'artifact': u'%5BINTEGRATION%5D'},
u'windows': {u'password': u'Pib1StheK1N5'}, u'task_sched':
{u'credentials': u'kX?rLQ4XN$q'}, u'seven_zip': {u'url':
u'<a href="https://.io/artifactory/djcm-zip-local/djcm/chef/paypal/7z1514-x64.msi" rel="nofollow">https://.io/artifactory/djcm-zip-local/djcm/chef/paypal/7z1514-x64.msi</a>'},
u'7-zip': {u'home': u'%SYSTEMDRIVE%\7-zip'}}</p>
<p>result5 = None</p>
</blockquote>
<p>which doesn't make sense to me: in result5 I'm merging result4, which already has content in it, so why does it come out as None?</p>
| 1 | 2016-07-27T16:43:41Z | 38,618,930 | <p>If <code>data.get('force_override_attributes')</code> is <code>None</code> then <code>merge(result4, data.get('force_override_attributes'))</code> is <code>None</code></p>
<pre><code>>>> a = {"a":10}
>>> b = merge(a, None)
>>> print b
None
</code></pre>
<p>What you can do is:</p>
<pre><code>result5 = merge(result4, data.get('force_override_attributes') or {})
</code></pre>
<p>So even if it is an <code>None</code> the value of result4 will be retained.</p>
<p>Another option is to reverse the order; this should also work:</p>
<pre><code>result5 = merge(data.get('force_override_attributes'), result4)
</code></pre>
| 2 | 2016-07-27T16:49:42Z | [
"python",
"json",
"python-2.7",
"python-3.x"
] |
Python package recognized in Pycharm, not in terminal | 38,618,867 | <p>I'm developing a Django project which imports django-imagekit; everything works fine on my Windows machine. On my Linux-Ubuntu laptop though, Pycharm recognizes the package in the editor, it's listed in the project's interpreter's packages but it's not recognized from the command line:</p>
<pre><code> simon@Simon-Swanky:~/PycharmProjects/tcspt$ python manage.py check
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/simon/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/simon/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/home/simon/.local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/simon/.local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/simon/.local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/simon/PycharmProjects/tcspt/assetmanage/models.py", line 3, in <module>
from imagekit.models import ProcessedImageField
ImportError: No module named imagekit.models
</code></pre>
<p>It seems to be looking in python 2's packages but I'm using python 3 for this project. I tried a few things like adding the path to the project variables but so far I can't get it to work.</p>
<p>Trying to import imagekit from python 2's shell:</p>
<pre><code>Python 2.7.11+ (default, Apr 17 2016, 14:00:29)
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import imagekit
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named imagekit
</code></pre>
<p>Trying to import imagekit from python 3's shell:</p>
<pre><code>>>> import imagekit
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/imagekit/__init__.py", line 2, in <module>
from . import conf
File "/usr/local/lib/python3.5/dist-packages/imagekit/conf.py", line 5, in <module>
class ImageKitConf(AppConf):
File "/usr/local/lib/python3.5/dist-packages/appconf/base.py", line 74, in __new__
new_class._configure()
File "/usr/local/lib/python3.5/dist-packages/appconf/base.py", line 100, in _configure
value = getattr(obj._meta.holder, prefixed_name, default_value)
File "/home/simon/.local/lib/python3.5/site-packages/django/conf/__init__.py", line 55, in __getattr__
self._setup(name)
File "/home/simon/.local/lib/python3.5/site-packages/django/conf/__init__.py", line 41, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting IMAGEKIT_DEFAULT_CACHEFILE_BACKEND, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
| 0 | 2016-07-27T16:46:17Z | 38,618,956 | <p>If you are using Python 3, then you should be using <code>python3 manage.py check</code>. It might be better to use a virtual environment, in which case you would activate the virtual environment before running <code>python manage.py check</code>.</p>
<p>The import fails in the Python 3 shell because you have not set the <code>DJANGO_SETTINGS_MODULE</code> environment variable (see <a href="https://docs.djangoproject.com/en/1.9/topics/settings/#designating-the-settings" rel="nofollow">the docs</a> for more info). The easiest fix is to use the Django shell, which takes care of this for you.</p>
<pre><code>$ python3 manage.py shell
>>> import imagekit
</code></pre>
| 1 | 2016-07-27T16:50:43Z | [
"python",
"django",
"packages",
"django-imagekit"
] |
Exiting subprocess on control-D? | 38,618,884 | <p>I am trying to stop a server which is a subprocess when the parent process gets killed by a Ctrl-D (EOF on stdin). I tried many ways, including reading stdin in the subprocess, but that blocks all keyboard input. Is there a way to kill the subprocess when the parent process encounters an EOF?</p>
<p>Creating a subprocess in python via <code>subprocess.Popen</code></p>
<p>polling for EOF in subprocess by this:</p>
<pre><code>self.t = threading.Thread(target=self.server.serve_forever)
self.t.start()

# quit on cntrl-d (EOF)
while True:
    if len(sys.stdin.readline()) == 0:
        self.stop()

def stop(self):
    manager.save()
    # shutdown bottle
    self.server.shutdown()
    # close socket
    self.server.server_close()
    self.t.join()
    sys.exit()
</code></pre>
| 0 | 2016-07-27T16:47:18Z | 38,619,786 | <p>With @thatotherguy's suggestion of using os.getppid(), here is the new working solution to end the child process when it is orphaned by the parent (i.e. when a Ctrl-D occurs on the parent and it closes without signaling the child):</p>
<pre><code>self.t = threading.Thread(target=self.server.serve_forever)
self.t.start()

# quit when orphaned (parent exited on cntrl-d without signaling us)
if os.getppid() != 1:
    while True:
        if os.getppid() == 1:
            self.stop()
        else:
            time.sleep(1)
</code></pre>
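<p>An event-driven alternative to polling the parent PID, sketched under the assumption the parent spawns the child itself: give the child a pipe from the parent. When the parent dies for any reason, its end of the pipe closes and the child's blocking read returns EOF immediately:</p>

```python
import subprocess
import sys

# The child blocks reading its stdin; it only unblocks when the parent's
# end of the pipe closes (e.g. the parent exited after Ctrl-D).
child = subprocess.Popen(
    [sys.executable, '-c',
     'import sys; sys.stdin.read(); print("parent gone, shutting down")'],
    stdin=subprocess.PIPE,
)
child.stdin.close()  # simulate the parent going away
child.wait()
```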
| 0 | 2016-07-27T17:40:24Z | [
"python",
"linux",
"subprocess"
] |
Joining strings of different lengths in a loop | 38,618,888 | <p>I'm working on transforming a file from this format:</p>
<pre class="lang-none prettyprint-override"><code># SampleNamea seq1a seq2a
# SampleNameb seq1b seq2b
# SampleNamec seq1c seq2c
# SampelNamed seq1d seq2d
</code></pre>
<p>To this format:</p>
<pre class="lang-none prettyprint-override"><code># SampleNamea SampleNameb 0 0 0 0 s s e e q q 1 1 a b s s e e q q 2 2 a b
# SampleNamec SampleNamed 0 0 0 0 s s e e q q 1 1 c d s s e e q q 2 2 c d
</code></pre>
<p>Currently the script I have works if the <code>seq1a</code>, <code>seq1b</code>, etc are the same length. But in the dataset I have the length of the strings vary. If I try to run the script on my dataset, I get the message <code>IndexError: string index out of range</code>.</p>
<p>This is the portion of the script that: figures out the length of the string (i.e. <code>seq1aseq2a</code>, <code>seq1bseq2b</code>) that was appended into the <code>InputMasterList</code>, and adds the <code>SampleName</code>s with the extra zeros to the <code>OutputMasterList</code>. Then it is supposed to append to <code>OutputMasterList</code> the strings, selecting each consecutive element starting with element[0] from the <code>InputMasterList[LineEven]</code> string (<code>seq1aseq2a</code>) and the <code>InputMasterList[LineOdd]</code> string (<code>seq1bseq2b</code>) and grouping them together into <code>OutputMasterList</code>. So the results would be (<code>s s e e q q 1 1 a b s s e e q q 2 2 a b</code>).</p>
<p>How can I get this script to work on different string lengths?</p>
<pre><code>LineEven = 0
LineOdd = 1
RecordNum = 1
while RecordNum < (NumofLinesInFile/2):
    for i in range(len(InputMasterList[LineEven])):
        if i == 0:
            OutputMasterList.append(SampleList[LineEven]+'\t'+ SampleList[LineEven]+'\t'+'0'+'\t'+'0'+'\t'+'0'+'\t'+'0'+'\t')
        OutputMasterList[RecordNum] = InputMasterList[LineEven][i]+'\t'+InputMasterList[LineOdd][i]+'\t'
    RecordNum = RecordNum + 1
    LineEven = LineEven + 2
    LineOdd = LineOdd + 2
</code></pre>
<p>I am very much a beginner so I know this code is quite cumbersome, but any help would be appreciated. If you need clarification about what I'm attempting to do with this script please don't hesitate to ask.</p>
<p><strong><em>Update:</em></strong> Thank you for your prompt responses. Due to your feedback I realized that I had to change the nature of my question. In my dataset I have missing sequences that my script does not like and I need to account for this missing data with a placeholder which would be the same length as its counterpart. </p>
<p>Old format:</p>
<pre><code># SampleNamea seq1a seq2a
# SampleNameb '.' seq2b
</code></pre>
<p>New format:</p>
<pre><code># SampleNamea seq1a seq2a
# SampleNameb NNNNN seq2b
</code></pre>
<p>Then I believe my script will work!</p>
<p><strong>TL;DR</strong> - Based on your feedback I have a basis on what my next steps should be.</p>
| 1 | 2016-07-27T16:47:33Z | 38,626,566 | <p>InputMasterList[LineOdd]Â string might look like (.seq2b)Â as per your new update.</p>
<p>Then, before proceeding to append, do a check on InputMasterList:</p>
<pre><code>if '.' in InputMasterList[LineOdd]:
    InputMasterList[LineOdd] = InputMasterList[LineOdd].replace('.', 'NNNNN', 1)
</code></pre>
<p>You can do this for both LineOdd and LineEven</p>
<p><strong>Note</strong>: This is based on your new input</p>
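<p>If the placeholder should match the counterpart's length rather than a hard-coded 'NNNNN', the padding can be computed (a sketch; the function and variable names are illustrative):</p>

```python
def fill_missing(pair, missing='.', pad_char='N'):
    # Replace a missing sequence with pad_char repeated to the partner's length.
    width = max(len(s) for s in pair if s != missing)
    return [pad_char * width if s == missing else s for s in pair]

print(fill_missing(['seq1a', '.']))   # ['seq1a', 'NNNNN']
print(fill_missing(['.', 'seq2bb']))  # ['NNNNNN', 'seq2bb']
```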
| 0 | 2016-07-28T03:24:06Z | [
"python"
] |
sum up two pandas dataframes with different indexes element by element | 38,618,911 | <p>I have two pandas dataframes, say df1 and df2, of some size each but with different indexes and I would like to sum up the two dataframes element by element. I provide you an easy example to better understand the problem:</p>
<pre><code>dic1 = {'a': [3, 1, 5, 2], 'b': [3, 1, 6, 3], 'c': [6, 7, 3, 0]}
dic2 = {'c': [7, 3, 5, 9], 'd': [9, 0, 2, 5], 'e': [4, 8, 3, 7]}
df1 = pd.DataFrame(dic1)
df2 = pd.DataFrame(dic2, index = [4, 5, 6, 7])
</code></pre>
<p>so df1 will be</p>
<pre><code> a b c
0 3 3 6
1 1 1 7
2 5 6 3
3 2 3 0
</code></pre>
<p>and df2 will be</p>
<pre><code> c d e
4 7 9 4
5 3 0 8
6 5 2 3
7 9 5 7
</code></pre>
<p>now if type</p>
<pre><code>df1 + df2
</code></pre>
<p>what I get is</p>
<pre><code> a b c d e
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN
6 NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN
</code></pre>
<p>How can I make pandas understand that I want to sum up the two dataframe just element by element?</p>
| 3 | 2016-07-27T16:48:18Z | 38,618,948 | <p><strong>UPDATE:</strong> much better solution from <a href="http://stackoverflow.com/questions/38618911/sum-up-two-pandas-dataframes-with-different-indexes-element-by-element/38618948#comment64623588_38618948">piRSquared</a>:</p>
<pre><code>In [39]: df1 + df2.values
Out[39]:
a b c
0 10 12 10
1 4 1 15
2 10 8 6
3 11 8 7
</code></pre>
<p><strong>Old answer:</strong></p>
<pre><code>In [37]: df1.values + df2.values
Out[37]:
array([[10, 12, 10],
[ 4, 1, 15],
[10, 8, 6],
[11, 8, 7]], dtype=int64)
In [38]: pd.DataFrame(df1.values + df2.values, columns=df1.columns)
Out[38]:
a b c
0 10 12 10
1 4 1 15
2 10 8 6
3 11 8 7
</code></pre>
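<p>For intuition, the same positional (label-ignoring) addition that <code>df1 + df2.values</code> performs can be sketched in plain Python, pairing rows and cells by position alone:</p>

```python
m1 = [[3, 3, 6], [1, 1, 7], [5, 6, 3], [2, 3, 0]]  # df1's values
m2 = [[7, 9, 4], [3, 0, 8], [5, 2, 3], [9, 5, 7]]  # df2's values

# zip pairs rows positionally, then cells within each row -- no labels involved
summed = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(m1, m2)]
print(summed)
# [[10, 12, 10], [4, 1, 15], [10, 8, 6], [11, 8, 7]]
```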
| 5 | 2016-07-27T16:50:25Z | [
"python",
"numpy",
"pandas",
"dataframe"
] |
Creating multiple runloops that share data | 38,618,939 | <p>I am fetching data from a server every 5 seconds and updating a list. In addition, I am also listening for a button press.</p>
<p>So what I need are two independent loops (pulling data and listening to a physical button on a Raspberry Pi via GPIO) and both need access to a shared list.</p>
<p>For just one loop I could use a simple <code>while True:</code> with a <code>time.sleep(5.0)</code>, but how can I run two infinite run loops at the same time that both access a shared variable and don't block each other? Also, pressing the button should always be responsive.</p>
<p>How can I do this? Do I need threads for this?</p>
| 0 | 2016-07-27T16:50:04Z | 38,619,334 | <p>So, if I am understanding you correctly, the problem is that you want a button-checking loop that loops every, say, millisecond, while your server polling loop should only run every 5 seconds. Is that correct?</p>
<p>The simple solution would be to just have the server polling code execute inside the faster loop every time 5 seconds have passed since the last poll.
If the poll is time consuming and it becomes a problem that the button detection is blocked during the poll, I think you will have to run the two loops in parallel processes. However, that makes the problem more complex, especially since they are sharing resources.</p>
<p>To implement the 5 second interval inside the fast loop, you could do something like</p>
<pre><code>from datetime import datetime, timedelta
# [...] other code
# inside fast loop
if datetime.now() - last_poll_time >= timedelta(seconds=5):
    poll_again()
    last_poll_time = datetime.now()
</code></pre>
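<p>If the two loops are run as threads instead (a sketch, not tested on a Pi; the GPIO read and the server poll below are stand-ins), the usual pattern is one thread per loop, a <code>Lock</code> guarding the shared list, and an <code>Event</code> that doubles as an interruptible sleep:</p>

```python
import threading
import time

shared = []                  # the list both loops read/write
lock = threading.Lock()      # guards every access to `shared`
stop = threading.Event()     # set once to shut both loops down

def fetch_loop():
    while not stop.is_set():
        with lock:
            shared.append('fetched')   # stand-in for the server request
        stop.wait(5.0)                 # like time.sleep(5), but wakes on stop

def button_loop():
    while not stop.is_set():
        # stand-in for reading the GPIO pin; handle presses here
        stop.wait(0.01)

threads = [threading.Thread(target=fetch_loop),
           threading.Thread(target=button_loop)]
for t in threads:
    t.start()
time.sleep(0.1)   # let both loops run briefly for the demo
stop.set()
for t in threads:
    t.join()
print(shared)     # one fetch happened before shutdown
```

<p>On the Pi, the polling thread could also be replaced by RPi.GPIO's own event mechanism (<code>add_event_detect</code>), but the lock around the shared list is needed either way.</p>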
| 0 | 2016-07-27T17:12:48Z | [
"python",
"python-2.7",
"raspberry-pi"
] |
Pull out sections from XML in Python | 38,618,964 | <p>Please note that I have some Python experience but not a lot of deep experience so please bear with me.</p>
<p>I have a very large XML file, ~100 megs, that has many, many sections and subsections. I need to pull out each subsection of a certain type (and there are a lot with this type) and write each to a different file. The writing I can handle, but I'm staring at ElementTree documentation trying to make sense of how to traverse the tree, find an element declared this way, yank out just the data between those tags and process it, then continue down the file.</p>
<p>The structure is similar to this (slightly obfuscated). What I want to do is pull out each section labeled "data" individually.</p>
<pre><code><filename>
<config>
<collections>
<datas>
<data>
...
</data>
<data>
...
</data>
<data>
...
</data>
</datas>
</collections>
</config>
</filename>
</code></pre>
| 0 | 2016-07-27T16:50:57Z | 38,621,943 | <p>Consider an <a href="https://www.w3.org/Style/XSL/" rel="nofollow">XSLT</a> solution with Python's third-party module, <a href="http://lxml.de/" rel="nofollow"><code>lxml</code></a>. Specifically, you <code>xpath()</code> for the length of <code><data></code> nodes and then iteratively build a dynamic XSLT script parsing only the needed element by node index <code>[#]</code> for outputted individual XML files:</p>
<pre><code>import lxml.etree as et
dom = et.parse('Input.xml')
datalen = len(dom.xpath("//data"))
for i in range(1, datalen+1):
xsltstr = '''<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output version="1.0" encoding="UTF-8" indent="yes" />
<xsl:strip-space elements="*"/>
<xsl:template match="datas">
<xsl:apply-templates select="data[{0}]" />
</xsl:template>
<xsl:template match="data[{0}]">
<xsl:copy>
<xsl:copy-of select="*"/>
</xsl:copy>
</xsl:template>
</xsl:transform>'''.format(i)
xslt = et.fromstring(xsltstr)
transform = et.XSLT(xslt)
newdom = transform(dom)
tree_out = et.tostring(newdom, encoding='UTF-8', pretty_print=True,
xml_declaration=True)
xmlfile = open('Data{}.xml'.format(i), 'wb')
xmlfile.write(tree_out)
xmlfile.close()
</code></pre>
| -1 | 2016-07-27T19:48:07Z | [
"python",
"xml"
] |
Pull out sections from XML in Python | 38,618,964 | <p>Please note that I have some Python experience but not a lot of deep experience so please bear with me.</p>
<p>I have a very large XML file, ~100 megs, that has many, many sections and subsections. I need to pull out each subsection of a certain type (and there are a lot with this type) and write each to a different file. The writing I can handle, but I'm staring at ElementTree documentation trying to make sense of how to traverse the tree, find an element declared this way, yank out just the data between those tags and process it, then continue down the file.</p>
<p>The structure is similar to this (slightly obfuscated). What I want to do is pull out each section labeled "data" individually.</p>
<pre><code><filename>
<config>
<collections>
<datas>
<data>
...
</data>
<data>
...
</data>
<data>
...
</data>
</datas>
</collections>
</config>
</filename>
</code></pre>
| 0 | 2016-07-27T16:50:57Z | 38,622,970 | <p>I think you can read in each <code>data</code> element using <code>iterparse</code> and then write it out, the following simply prints the element using the <code>print</code> function but you could of course instead write it to a file:</p>
<pre><code>import xml.etree.ElementTree as ET
for event, elem in ET.iterparse("input.xml"):
if elem.tag == 'data':
print(ET.tostring(elem, 'UTF-8', 'xml'))
elem.clear()
</code></pre>
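<p>Building on the same idea, here is a sketch that writes each <code>data</code> element to its own file (the tiny inline document and the output naming are assumptions for the demo; with the real ~100 MB file you would pass its path to <code>iterparse</code> directly):</p>

```python
import io
import os
import tempfile
import xml.etree.ElementTree as ET

# Tiny stand-in for the large input file.
xml = b"""<filename><config><collections><datas>
<data><v>1</v></data>
<data><v>2</v></data>
</datas></collections></config></filename>"""

outdir = tempfile.mkdtemp()
paths = []
for event, elem in ET.iterparse(io.BytesIO(xml)):
    if elem.tag == 'data':
        path = os.path.join(outdir, 'data{}.xml'.format(len(paths) + 1))
        with open(path, 'wb') as f:
            f.write(ET.tostring(elem, 'UTF-8', 'xml'))
        paths.append(path)
        elem.clear()   # drop the subtree so memory use stays flat
print(paths)
```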
| 1 | 2016-07-27T20:51:46Z | [
"python",
"xml"
] |
Why is my @classmethod variable "not defined"? | 38,618,985 | <p>I'm currently writing code in Python 2.7, which involves creating an object, in which I have two class methods and other regular methods. I need to use this specific combination of methods because of the larger context of the code I am writing- it's not relevant to this question, so I won't go into depth.</p>
<p>Within my __init__ function, I am creating a Pool (a multiprocessing object). In the creation of that, I call a setup function. This setup function is a @classmethod. I define a few variables in this setup function by using the cls.variablename syntax. As I mentioned, I call this setup function within my init function (inside the Pool creation), so these variables should be getting created, based on what I understand.</p>
<p>Later in my code, I call a few other functions, which eventually leads to me calling another @classmethod within the same object I was talking about earlier (same object as the first @classmethod). Within this @classmethod, I try to access the cls.variables I created in the first @classmethod. However, Python is telling me that my object doesn't have an attribute "cls.variable" (using general names here, obviously my actual names are specific to my code).</p>
<p>ANYWAYS...I realize that's probably pretty confusing. Here's some (very) generalized code example to illustrate the same idea:</p>
<pre><code>class General(object):
def __init__(self, A):
# this is correct syntax based on the resources I'm using,
# so the format of argument isn't the issue, in case anyone
# initially thinks that's the issue
        self.pool = Pool(processes = 4, initializer=self._setup, initargs=(A,))
@classmethod
def _setup(cls, A):
cls.A = A
#leaving out other functions here that are NOT class methods, just regular methods
@classmethod
def get_results(cls):
print cls.A
</code></pre>
<p>The error I'm getting when I get to the equivalent of the <code>print cls.A</code> line is this:</p>
<pre class="lang-none prettyprint-override"><code>AttributeError: type object 'General' has no attribute 'A'
</code></pre>
<p>edit to show usage of this code:
The way I'm calling this in my code is as such:</p>
<pre><code>G = General(5)
G.get_results()
</code></pre>
<p>So, I'm creating an instance of the object (in which I create the Pool, which calls the setup function), and then calling get_results.</p>
<p>What am I doing wrong?</p>
| 0 | 2016-07-27T16:51:51Z | 38,620,138 | <p>The reason <code>General.A</code> does not get defined in the main process is that <a href="https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.pool" rel="nofollow"><code>multiprocessing.Pool</code></a> only runs <code>General._setup</code> in the <em>sub</em>processes. This means that it will <em>not</em> be called in the main process (where you call <code>Pool</code>).</p>
<p>You end up with 4 processes where in each of them there is <code>General.A</code> is defined, but not in the main process. You don't actually initialize a Pool like that (see <a href="http://stackoverflow.com/a/10118250/5754656">this answer</a> to the question <a href="http://stackoverflow.com/questions/10117073/how-to-use-initializer-to-set-up-my-multiprocess-pool"><em>How to use initializer to set up my multiprocess pool?</em></a>)</p>
<p>You want an <a href="https://en.wikipedia.org/wiki/Object_pool_pattern" rel="nofollow">Object Pool</a> which is not natively impemented in Python. There's a <a href="http://stackoverflow.com/questions/1514120/python-implementation-of-the-object-pool-design-pattern"><em>Python Implementation of the Object Pool Design Pattern</em></a> question here on StackOverflow, but you can find a bunch by just searching online.</p>
| 1 | 2016-07-27T18:01:26Z | [
"python",
"object",
"multiprocessing",
"pool",
"class-method"
] |
Download images automatically | 38,619,100 | <p>I have written this piece of python code which downloads a number of images from a repository of images and saves them in specified folder. The code looks like this: </p>
<pre><code>import urllib.request
import cv2
import numpy as np
import os
def store_raw_images():
neg_images_link = 'http://image-net.org/api/text/imagenet.synset.geturls?wnid=n00464651'
neg_images_urls = urllib.request.urlopen(neg_images_link).read().decode()
if not os.path.exists('neg'):
os.makedirs('neg')
pic_num = 1
for i in neg_images_urls.split('\n'):
try:
print(i)
urllib.request.urlretrieve(i, "neg/{}.jpg".format(pic_num))
img = cv2.imread("neg/{}.jpg".format(pic_num) + cv2.IMREAD_GRAYSCALE)
resized_image = cv2.resize(img, (100, 100))
cv2.imwrite("neg/{}.jpg".format(pic_num), resized_image)
pic_num = pic_num + 1
print(pic_num)
except Exception as e:
print(str(e))
store_raw_images()
</code></pre>
<p>For some reason the images are replaced and I do NOT see all images. I keep seeing one image <code>1.jpg</code> and all the images seem to replaced, though I expect the name of the images to go <code>1.jpg</code>, <code>2.jpg</code> , ... .</p>
<p>I also see this warning/error but I am not sure if it is relevant to this problem or not. </p>
<pre class="lang-none prettyprint-override"><code>Can't convert 'int' object to str
http://www.azjeugd.nl/site/modules/xcgal/albums/20082009seizoen/a1/groningen_thuis/IMG_7798.jpg
HTTP Error 403: Forbidden
http://www.ga-eagles.nl/images/duels1e0809/gaetel6.jpg
</code></pre>
<p>Where do you think the problem lies? </p>
<p>Note that I am incrementing the image number: </p>
<pre><code> pic_num = pic_num + 1
</code></pre>
| 1 | 2016-07-27T16:58:31Z | 38,619,287 | <p>You have everything in one <code>try/except</code> block. Assuming <code>cv2.imwrite</code> fails but all the other lines are executed without any problems, your code will never reach <code>picnum = picnum + 1</code>.
Try rearranging your code where you first increase <code>picnum</code> and check which lines actually gives you the error.</p>
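<p>A network-free sketch of that failure mode (the <code>raise</code> stands in for the broken <code>cv2.imread</code> line): because the exception fires before the increment, every iteration reuses the same file name.</p>

```python
def buggy_loop(urls):
    """Mimics the question's loop: the download 'succeeds', the next step raises."""
    saved = []
    pic_num = 1
    for url in urls:
        try:
            saved.append('neg/{}.jpg'.format(pic_num))   # the urlretrieve part
            raise TypeError("Can't convert 'int' object to str")
            pic_num = pic_num + 1                        # never reached
        except Exception:
            pass
    return saved

print(buggy_loop(['u1', 'u2', 'u3']))  # ['neg/1.jpg', 'neg/1.jpg', 'neg/1.jpg']
```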
| 1 | 2016-07-27T17:10:45Z | [
"python",
"download",
"urllib"
] |
Django not updating SQLite tables properly | 38,619,116 | <p>I've been working on a project that involves an authentication page using Django and AngularJS. I have created an extended version of the User class and have added "company" and "phone_number" as fields.</p>
<p>Here's my code for models.py:</p>
<pre><code>from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
from django.db import models
from django.core.validators import RegexValidator
class AccountManager(BaseUserManager):
def create_user(self, email, password=None, **kwargs):
if not email:
raise ValueError('Users must have a valid email address')
#if not kwargs.get('username'):
#raise ValueError('Users must have a valid username')
#if access_code not in ['password']:
#raise ValueError('Sorry you are not eligible to join')
account = self.model(
email=self.normalize_email(email))
account.set_password(password)
account.save()
return account
def create_superuser(self, email, password, **kwargs):
account = self.create_user(email, password, **kwargs)
account.is_admin = True
account.save()
return account
class Account(AbstractBaseUser):
email = models.EmailField(unique=True)
first_name = models.CharField(max_length=40, blank=False)
last_name = models.CharField(max_length=40, blank=False)
company = models.CharField(max_length=40, blank=False)
phone_regex = RegexValidator(regex=r'^\+?1?\d{9,15}$', message="Phone number must be entered in the format: '+999999999'. Up to 15 digits allowed.")
phone_number = models.IntegerField(validators=[phone_regex], blank=False, null=True) # validators should be a list
# access_code = models.CharField(max_length=40, blank=False, default='SOME STRING')
is_admin = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
objects = AccountManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['first_name', 'last_name', 'company', 'phone_number']
def __unicode__(self):
return self.email
def get_full_name(self):
return ' '.join([self.first_name, self.last_name])
def get_short_name(self):
return self.first_name
</code></pre>
<p>Now when I go to terminal and perform python manage.py createsuperuser all the field options pop up for me to enter text. However when I check the database afterwards, only the email and password fields are updated. Company, phone number, first name, and last name return as ' '.</p>
<p>Any clue what I am doing wrong? I've been spending too much time trying to fix this problem. </p>
<p>Thanks</p>
| 0 | 2016-07-27T16:59:20Z | 38,619,324 | <p>In <code>create_user</code>, you haven't passed any of the other arguments to the <code>self.model</code> call, so you only set the email and, later, the password. You need to pass the kwargs in there too.</p>
<pre><code> account = self.model(email=self.normalize_email(email), **kwargs)
</code></pre>
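<p>A Django-free sketch of why the fix matters (plain-Python stand-ins, not the real model machinery): any field that is not forwarded to the constructor is silently dropped.</p>

```python
class FakeModel(object):
    """Stand-in for the Django model: stores whatever fields it is given."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

def create_user_broken(email, password=None, **kwargs):
    return FakeModel(email=email)            # kwargs silently dropped

def create_user_fixed(email, password=None, **kwargs):
    return FakeModel(email=email, **kwargs)  # kwargs become model fields

broken = create_user_broken('a@b.com', first_name='Ada', company='ACME')
fixed = create_user_fixed('a@b.com', first_name='Ada', company='ACME')
print(hasattr(broken, 'first_name'))     # False
print(fixed.first_name, fixed.company)   # Ada ACME
```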
| 0 | 2016-07-27T17:12:29Z | [
"python",
"django",
"sqlite"
] |
Convert Python sequence to NumPy array, filling missing values | 38,619,143 | <p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array cause the array to be of type <em>object</em>.</p>
<pre><code>v = [[1], [1, 2]]
np.array(v)
>>> array([[1], [1, 2]], dtype=object)
</code></pre>
<p>Trying to force another type will cause an exception:</p>
<pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre>
<p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p>
<p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder</p>
<pre><code>array([[1, 0], [1, 2]], dtype=int32)
</code></pre>
| 12 | 2016-07-27T17:01:10Z | 38,619,278 | <p>Pandas and its <code>DataFrame</code>-s deal beautifully with missing data.</p>
<pre><code>import numpy as np
import pandas as pd
v = [[1], [1, 2]]
print(pd.DataFrame(v).fillna(0).values.astype(np.int32))
# array([[1, 0],
# [1, 2]], dtype=int32)
</code></pre>
| 9 | 2016-07-27T17:10:13Z | [
"python",
"arrays",
"numpy",
"sequence",
"variable-length-array"
] |
Convert Python sequence to NumPy array, filling missing values | 38,619,143 | <p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array cause the array to be of type <em>object</em>.</p>
<pre><code>v = [[1], [1, 2]]
np.array(v)
>>> array([[1], [1, 2]], dtype=object)
</code></pre>
<p>Trying to force another type will cause an exception:</p>
<pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre>
<p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p>
<p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder</p>
<pre><code>array([[1, 0], [1, 2]], dtype=int32)
</code></pre>
| 12 | 2016-07-27T17:01:10Z | 38,619,333 | <p>You can use <a href="https://docs.python.org/3.4/library/itertools.html#itertools.zip_longest">itertools.zip_longest</a>:</p>
<pre><code>import itertools
np.array(list(itertools.zip_longest(*v, fillvalue=0))).T
Out:
array([[1, 0],
[1, 2]])
</code></pre>
<p>Note: For Python 2, it is <a href="https://docs.python.org/2/library/itertools.html#itertools.izip_longest">itertools.izip_longest</a>.</p>
| 9 | 2016-07-27T17:12:47Z | [
"python",
"arrays",
"numpy",
"sequence",
"variable-length-array"
] |
Convert Python sequence to NumPy array, filling missing values | 38,619,143 | <p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array cause the array to be of type <em>object</em>.</p>
<pre><code>v = [[1], [1, 2]]
np.array(v)
>>> array([[1], [1, 2]], dtype=object)
</code></pre>
<p>Trying to force another type will cause an exception:</p>
<pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre>
<p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p>
<p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder</p>
<pre><code>array([[1, 0], [1, 2]], dtype=int32)
</code></pre>
| 12 | 2016-07-27T17:01:10Z | 38,619,337 | <pre><code>max_len = max(len(sub_list) for sub_list in v)
result = np.array([sub_list + [0] * (max_len - len(sub_list)) for sub_list in v])
>>> result
array([[1, 0],
[1, 2]])
>>> type(result)
numpy.ndarray
</code></pre>
| 1 | 2016-07-27T17:13:02Z | [
"python",
"arrays",
"numpy",
"sequence",
"variable-length-array"
] |
Convert Python sequence to NumPy array, filling missing values | 38,619,143 | <p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array cause the array to be of type <em>object</em>.</p>
<pre><code>v = [[1], [1, 2]]
np.array(v)
>>> array([[1], [1, 2]], dtype=object)
</code></pre>
<p>Trying to force another type will cause an exception:</p>
<pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre>
<p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p>
<p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder</p>
<pre><code>array([[1, 0], [1, 2]], dtype=int32)
</code></pre>
| 12 | 2016-07-27T17:01:10Z | 38,619,350 | <p>Here's an almost* vectorized boolean-indexing based approach that I have used in several other posts -</p>
<pre><code>def boolean_indexing(v):
lens = np.array([len(item) for item in v])
mask = lens[:,None] > np.arange(lens.max())
out = np.zeros(mask.shape,dtype=int)
out[mask] = np.concatenate(v)
return out
</code></pre>
<p><strong>Sample run</strong></p>
<pre><code>In [27]: v
Out[27]: [[1], [1, 2], [3, 6, 7, 8, 9], [4]]
In [28]: out
Out[28]:
array([[1, 0, 0, 0, 0],
[1, 2, 0, 0, 0],
[3, 6, 7, 8, 9],
[4, 0, 0, 0, 0]])
</code></pre>
<p>*Please note that this is termed almost vectorized because the only looping performed here is at the start, where we are getting the lengths of the list elements. Since that part is not computationally demanding, it should have minimal effect on the total runtime.</p>
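<p>To see why the boolean mask lines up with <code>np.concatenate</code>, here are the intermediate values for the question's sample input (the same steps as the function, just printed out):</p>

```python
import numpy as np

v = [[1], [1, 2]]
lens = np.array([len(item) for item in v])     # array([1, 2])
mask = lens[:, None] > np.arange(lens.max())   # row i holds lens[i] Trues
print(mask)
# [[ True False]
#  [ True  True]]
out = np.zeros(mask.shape, dtype=int)
out[mask] = np.concatenate(v)  # True cells filled row-major with 1, 1, 2
print(out)
# [[1 0]
#  [1 2]]
```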
<p><strong>Runtime test</strong></p>
<p>In this section I am timing <a href="http://stackoverflow.com/a/38619278/3293881"><code>DataFrame-based solution by @Alberto Garcia-Raboso</code></a>, <a href="http://stackoverflow.com/a/38619333/3293881"><code>itertools-based solution by @ayhan</code></a> as they seem to scale well and the boolean-indexing based one from this post for a relatively larger dataset with three levels of size variation across the list elements.</p>
<p>Case #1 : Larger size variation</p>
<pre><code>In [44]: v = [[1], [1,2,4,8,4],[6,7,3,6,7,8,9,3,6,4,8,3,2,4,5,6,6,8,7,9,3,6,4]]
In [45]: v = v*1000
In [46]: %timeit pd.DataFrame(v).fillna(0).values.astype(np.int32)
100 loops, best of 3: 9.82 ms per loop
In [47]: %timeit np.array(list(itertools.izip_longest(*v, fillvalue=0))).T
100 loops, best of 3: 5.11 ms per loop
In [48]: %timeit boolean_indexing(v)
100 loops, best of 3: 6.88 ms per loop
</code></pre>
<p>Case #2 : Lesser size variation</p>
<pre><code>In [49]: v = [[1], [1,2,4,8,4],[6,7,3,6,7,8]]
In [50]: v = v*1000
In [51]: %timeit pd.DataFrame(v).fillna(0).values.astype(np.int32)
100 loops, best of 3: 3.12 ms per loop
In [52]: %timeit np.array(list(itertools.izip_longest(*v, fillvalue=0))).T
1000 loops, best of 3: 1.55 ms per loop
In [53]: %timeit boolean_indexing(v)
100 loops, best of 3: 5 ms per loop
</code></pre>
<p>Case #3 : Larger number of elements (100 max) per list element</p>
<pre><code>In [139]: # Setup inputs
...: N = 10000 # Number of elems in list
...: maxn = 100 # Max. size of a list element
...: lens = np.random.randint(0,maxn,(N))
...: v = [list(np.random.randint(0,9,(L))) for L in lens]
...:
In [140]: %timeit pd.DataFrame(v).fillna(0).values.astype(np.int32)
1 loops, best of 3: 292 ms per loop
In [141]: %timeit np.array(list(itertools.izip_longest(*v, fillvalue=0))).T
1 loops, best of 3: 264 ms per loop
In [142]: %timeit boolean_indexing(v)
10 loops, best of 3: 95.7 ms per loop
</code></pre>
<p>To me, it seems <strike><code>itertools.izip_longest</code> is doing pretty well!</strike> there's no clear winner, but would have to be taken on a case-by-case basis!</p>
| 6 | 2016-07-27T17:13:39Z | [
"python",
"arrays",
"numpy",
"sequence",
"variable-length-array"
] |
Convert Python sequence to NumPy array, filling missing values | 38,619,143 | <p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array cause the array to be of type <em>object</em>.</p>
<pre><code>v = [[1], [1, 2]]
np.array(v)
>>> array([[1], [1, 2]], dtype=object)
</code></pre>
<p>Trying to force another type will cause an exception:</p>
<pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre>
<p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p>
<p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder</p>
<pre><code>array([[1, 0], [1, 2]], dtype=int32)
</code></pre>
| 12 | 2016-07-27T17:01:10Z | 38,619,419 | <p>Here is a general way:</p>
<pre><code>>>> v = [[1], [2, 3, 4], [5, 6], [7, 8, 9, 10], [11, 12]]
>>> max_len = max(len(i) for i in v)
>>> np.hstack(np.insert(v, range(1, len(v)+1),[[0]*(max_len-len(i)) for i in v])).astype('int32').reshape(len(v), max_len)
array([[ 1, 0, 0, 0],
[ 2, 3, 4, 0],
[ 5, 6, 0, 0],
[ 7, 8, 9, 10],
[11, 12, 0, 0]], dtype=int32)
</code></pre>
| 0 | 2016-07-27T17:17:50Z | [
"python",
"arrays",
"numpy",
"sequence",
"variable-length-array"
] |
ValueError: need more than 1 value to unpack psychopy: haven't touched the script since it was working yesterday and am suddenly getting this error | 38,619,151 | <p>I recognize there are a decent amount of ValueError questions on here, but it seems none are specifically related to psychopy or my issue. I am coding an experiment from scratch on psychopy (no builder involved). Yesterday, my script was running totally fine. Today I tried running it without adding anything new or taking anything away and it's suddenly giving me this error:
</p>
<pre class="lang-py prettyprint-override"><code>File "/Users/vpam/Documents/fMRI_binding/VSTMbindingpaige.py", line 53, in <module>
script, filename = argv
ValueError: need more than 1 value to unpack
</code></pre>
<p>These are lines 52 and 53, apparently something in 53 (the last one) is making this happen, but I can't imagine what since it was working just fine yesterday. Anyone know why it's doing that? (I am running the oldest version of python in order to be able to include corrective audio feedback, but I have been running it on that with success):</p>
<pre class="lang-py prettyprint-override"><code>from sys import argv
script, filename = argv
</code></pre>
<p>This is what I'm calling the filename (in the script it is above those other lines)</p>
<pre class="lang-py prettyprint-override"><code>from sys import argv
script, filename = argv
from psychopy import gui
myDlg = gui.Dlg(title="Dr. S's experiment")
myDlg.addField('Subject ID','PJP')
ok_data = myDlg.show()
if myDlg.OK:
print(ok_data)
else:
print('user cancelled')
[sID]=myDlg.data
# Data file name stem = absolute path + name; later add .psyexp, .csv, .log, etc
data_file = sID + '_VSTMbinding.txt'
f = open(data_file,'a') #name file here
f.write(sID)
print myDlg.data
</code></pre>
| -1 | 2016-07-27T17:01:41Z | 38,619,487 | <p>It looks like you're using Python2. Python3 gives a more detailed information in it's error message. The problem is that argv only contains a single value and you're trying to unpack it into two variables. <code>argv</code> contains the command line variables -- if this was running yesterday "without any changes" as you suggest, it's because you were providing a filename as a command-line argument.</p>
<h2>py2.py</h2>
<pre><code>#!/usr/bin/env python
from sys import argv
script, filename = argv
print("Script: {0}\nFilename: {1}".format(script, filename))
</code></pre>
<h2>py3.py</h2>
<pre><code>#!/usr/bin/env python3
from sys import argv
script, filename = argv
print("Script: {0}\nFilename: {1}".format(script, filename))
</code></pre>
<h2>Running py2.py:</h2>
<pre><code>$ charlie on laptop in ~
❯❯ ./py2.py
Traceback (most recent call last):
File "./py2.py", line 4, in <module>
script, filename = argv
ValueError: need more than 1 value to unpack
$ charlie on laptop in ~
❯❯ ./py2.py filename
Script: ./py2.py
Filename: filename
</code></pre>
<h2>Running py3.py:</h2>
<pre><code>$ charlie on laptop in ~
❯❯ ./py3.py
Traceback (most recent call last):
File "./py3.py", line 4, in <module>
script, filename = argv
ValueError: not enough values to unpack (expected 2, got 1)
$ charlie on laptop in ~
❯❯ ./py3.py filename
Script: ./py3.py
Filename: filename
</code></pre>
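<p>If the script should also survive being launched with no argument at all (e.g. from an IDE's Run button), a defensive sketch (the default file name is made up):</p>

```python
def parse_args(argv):
    """Return (script, filename); fall back to a default data file
    instead of crashing on the unpack when no argument is given."""
    if len(argv) >= 2:
        return argv[0], argv[1]
    return (argv[0] if argv else ''), 'default_VSTMbinding.txt'

# In the experiment this would be: script, filename = parse_args(sys.argv)
print(parse_args(['exp.py', 'subj01.txt']))  # ('exp.py', 'subj01.txt')
print(parse_args(['exp.py']))                # ('exp.py', 'default_VSTMbinding.txt')
```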
| 1 | 2016-07-27T17:21:25Z | [
"python",
"psychopy"
] |
How can I upload a picture to my django app on heroku and get it to be displayed? | 38,619,166 | <p>I have a django application on heroku where some data is added in the admin settings. It is linked to my github. One of the things you add is a picture. It doesn't show up on the site after its uploaded. What could be the cause and solution?</p>
| 0 | 2016-07-27T17:03:01Z | 38,627,826 | <p>I've been running my django app in heroku for about 6 months now and I've never experienced <code>db gets reset</code> when ever I updated/deploy/push to heroku
<strong>note</strong> I'm using heroku postgress for db</p>
| 0 | 2016-07-28T05:35:36Z | [
"python",
"django",
"database",
"heroku",
"web-applications"
] |
Upload file with hidden input with Selenium WebDriver Python | 38,619,191 | <p>Html:</p>
<pre><code><div id="js-cert-file" class="form-group">
<button id="js-ob-browse-n-upload" class="btn btn-ob browse-and-upload-onboarding-ssl-button" style=""> BROWSE & UPLOAD </button>
<input id="js-cert-file" class="hidden btn btn-ob" type="file" accept=".p12, .pem, .pfx" name="file">
<input id="file-name" type="text" disabled="" value="File Name" style="display:none">
</div>
</code></pre>
<p>I have tried uploading the document using the xpath and css selector but not able to do it since the input is hidden. I have spend few days banging my head on this and still not able to figure it out so thought it was time to ask the experts, please help!</p>
<p>Issue is that, I want to upload the file without clicking the "Browse and Upload" Button, but like I said am not able to do it since the input is hidden.</p>
<p>Here the my python code:</p>
<pre><code>BrowseAndUpload = driver.find_element_by_xpath("/html/body/div[3]/div/div[2]/div/div/div[1]/div[1]/input[1]")
BrowseAndUpload.send_keys('file full path')
</code></pre>
| 0 | 2016-07-27T17:04:44Z | 38,619,517 | <p>Try to make input field visible and upload file with following code:</p>
<pre><code>driver.execute_script('document.getElementById("js-cert-file").style.visibility="visible";')
driver.execute_script('document.getElementById("js-cert-file").style.display="block";')
driver.find_element_by_xpath('//input[@id="js-cert-file"]').send_keys('file full path')
</code></pre>
| 0 | 2016-07-27T17:23:19Z | [
"jquery",
"python",
"html",
"css",
"selenium"
] |
Many-to-Many Database design with Django | 38,619,218 | <p>Oftentimes a single drug can have lots of "nick names" which gets confusing. So, I'm trying to build a small Django app to help me with the issue. </p>
<p>What it should do is either refer the drugs acutal name (non_proprietary_name) to its "nick name" (proprietary_name) or vice versa. </p>
<p>For instance, "Aspirin" and "ASS" are the proprietary_name for "acetylsalicylic acid".</p>
<p>To complicate it a bit further I've decided to add a small Wiki page and categories (also, a drug can fall into many different categories).</p>
<p>Sadly, I'm not very familiar with database design, so that's where I need a bit of help. </p>
<p>What I've got so far:</p>
<pre><code>from django.db import models
# Create your models here.
class Proprietary_name(models.Model):
proprietary_name = models.CharField(max_length = 100, unique = True) #nick name
def __str__(self):
return self.proprietary_name
class Category(models.Model):
category = models.CharField(max_length = 100, unique = True)
def __str__(self):
return self.category
class Mediwiki(models.Model):
proprietary_name = models.ManyToManyField(Proprietary_name)
non_proprietary_name = models.CharField(max_length = 100, unique = True) # actual name
category = models.ManyToManyField(Category)
wiki_page = models.TextField()
def __str__(self):
return self.non_proprietary_name
~
</code></pre>
<p>So, if I've got the proprietary_name I can relate to the non_proprietary_name:</p>
<pre><code>>>> Mediwiki.objects.get(proprietary_name__proprietary_name='Aspirin')
<Mediwiki: acetylsalicylic acid>
</code></pre>
<p>However, I'm having trouble getting all the non_proprietary_names when I enter the proprietary one. Is this an issue with my Database or am I missing something else? </p>
<p>EDIT:</p>
<p>New models.py based on the comments:</p>
<pre><code>from django.db import models
# Create your models here.
class Category(models.Model):
category = models.CharField(max_length = 100, unique = True)
def __str__(self):
return self.category
class Mediwiki(models.Model):
non_proprietary_name = models.CharField(max_length = 100, unique = True)
category = models.ManyToManyField(Category)
wiki_page = models.TextField()
def __str__(self):
return self.non_proprietary_name
class ProprietaryName(models.Model):
proprietary_name = models.CharField(max_length = 100, unique = True)
non_proprietary_name = models.ForeignKey(Mediwiki)
def __str__(self):
return self.proprietary_name
</code></pre>
<p>So, it works! But I'm not so sure why. Is this the best way to do it? Also, what about the categories? Should they be changed to foreign keys as well?</p>
<pre><code>>>> Mediwiki.objects.get(proprietaryname__proprietary_name="Aspirin")
<Mediwiki: acetylsalicylic acid>
>>>
>>> ProprietaryName.objects.get(proprietary_name="Aspirin").non_proprietary_name
<Mediwiki: acetylsalicylic acid> # Works also, what's preferable?
>>>ProprietaryName.objects.filter(non_proprietary_name__non_proprietary_name="acetylsalicylic acid")
[<ProprietaryName: Aspirin>, <ProprietaryName: ASS>]
>>>
</code></pre>
| 0 | 2016-07-27T17:06:32Z | 38,619,543 | <p>First your model name <code>Mediwiki</code> is not straightforward because each entry is just recording all information about a <code>Drug</code>. So just change that to <code>Drug</code> would make more sense.</p>
<p>In your current design, you are using m2m fields, which indicates that one <code>proprietary_name</code> could be used on multiple drugs. If you want all <code>Drug</code> non_proprietary_names return when you enter <code>proprietary_name</code>, just do:</p>
<pre><code>Drug.objects.filter(proprietary_name__proprietary_name='Aspirin') \
.values_list('non_proprietary_name', flat=True).distinct()
</code></pre>
<p>Check django doc about <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#values-list" rel="nofollow">values_list</a>.</p>
<p>If however, one <code>proprietary_name</code> can only describe one drug, you should make <code>Drug</code> as foreign key on model <code>Proprietary_name</code> to indicate one-to-many relationship:</p>
<pre><code>class Proprietary_name(models.Model):
proprietary_name = models.CharField(max_length=100, unique=True)
drug = models.ForeignKey(Drug)
</code></pre>
| 1 | 2016-07-27T17:24:41Z | [
"python",
"django",
"database",
"database-design"
] |
Many-to-Many Database design with Django | 38,619,218 | <p>Oftentimes a single drug can have lots of "nick names" which gets confusing. So, I'm trying to build a small Django app to help me with the issue. </p>
<p>What it should do is either refer the drugs acutal name (non_proprietary_name) to its "nick name" (proprietary_name) or vice versa. </p>
<p>For instance, "Aspirin" and "ASS" are the proprietary_name for "acetylsalicylic acid".</p>
<p>To complicate it a bit further I've decided to add a small Wiki page and categories (also, a drug can fall into many different categories).</p>
<p>Sadly, I'm not very familiar with database design, so that's where I need a bit of help. </p>
<p>What I've got so far:</p>
<pre><code>from django.db import models
# Create your models here.
class Proprietary_name(models.Model):
proprietary_name = models.CharField(max_length = 100, unique = True) #nick name
def __str__(self):
return self.proprietary_name
class Category(models.Model):
category = models.CharField(max_length = 100, unique = True)
def __str__(self):
return self.category
class Mediwiki(models.Model):
proprietary_name = models.ManyToManyField(Proprietary_name)
non_proprietary_name = models.CharField(max_length = 100, unique = True) # actual name
category = models.ManyToManyField(Category)
wiki_page = models.TextField()
def __str__(self):
return self.non_proprietary_name
~
</code></pre>
<p>So, if I've got the proprietary_name I can relate to the non_proprietary_name:</p>
<pre><code>>>> Mediwiki.objects.get(proprietary_name__proprietary_name='Aspirin')
<Mediwiki: acetylsalicylic acid>
</code></pre>
<p>However, I'm having trouble getting all the non_proprietary_names when I enter the proprietary one. Is this an issue with my Database or am I missing something else? </p>
<p>EDIT:</p>
<p>New models.py based on the comments:</p>
<pre><code>from django.db import models
# Create your models here.
class Category(models.Model):
category = models.CharField(max_length = 100, unique = True)
def __str__(self):
return self.category
class Mediwiki(models.Model):
non_proprietary_name = models.CharField(max_length = 100, unique = True)
category = models.ManyToManyField(Category)
wiki_page = models.TextField()
def __str__(self):
return self.non_proprietary_name
class ProprietaryName(models.Model):
proprietary_name = models.CharField(max_length = 100, unique = True)
non_proprietary_name = models.ForeignKey(Mediwiki)
def __str__(self):
return self.proprietary_name
</code></pre>
<p>So, it works! But I'm not so sure why.. Is this the best way to do it? Also, what about the Categories? Should they be change to foreign keys aswell? </p>
<pre><code>>>> Mediwiki.objects.get(proprietaryname__proprietary_name="Aspirin")
<Mediwiki: acetylsalicylic acid>
>>>
>>> ProprietaryName.objects.get(proprietary_name="Aspirin").non_proprietary_name
<Mediwiki: acetylsalicylic acid> # Works also, what's preferable?
>>>ProprietaryName.objects.filter(non_proprietary_name__non_proprietary_name="acetylsalicylic acid")
[<ProprietaryName: Aspirin>, <ProprietaryName: ASS>]
>>>
</code></pre>
| 0 | 2016-07-27T17:06:32Z | 38,619,684 | <p>There can be many proprietary names, but no identical proprietary name can be assigned to more than one non-proprietary (generic) drug, so you need to change your relation from Many-To-Many to One-To-Many (nb. use CamelCase in class names, not underscores):</p>
<pre><code>class Mediwiki(models.Model):
non_proprietary_name = models.CharField(max_length = 100, unique = True) # actual name
category = models.ManyToManyField(Category)
wiki_page = models.TextField()
class ProprietaryName(models.Model):
proprietary_name = models.CharField(max_length = 100, unique = True) #nick name
non_proprietary_name = models.ForeignKey(Mediwiki)
</code></pre>
<p>then you can get all proprietary names for a non-proprietary drug using <code>Mediwiki</code>'s <code>proprietaryname_set</code> manager, and <code>ProprietaryName</code>'s <code>non_proprietary_name</code> attribute for the other lookup. More on that in the <a href="https://docs.djangoproject.com/ja/1.9/topics/db/queries/" rel="nofollow">documentation</a>.</p>
| 1 | 2016-07-27T17:33:18Z | [
"python",
"django",
"database",
"database-design"
] |
How to count objects in image using python? | 38,619,382 | <p>I am trying to count the number of drops in this image and the coverage percentage of the area covered by those drops.
I tried to convert this image into black and white, but the center color of those drops seems too similar to the background. So I only got something like the second picture.
Is there any way to solve this problem or any better ideas?
Thanks a lot.</p>
<p><img src="http://i.stack.imgur.com/ba3g0.jpg" alt="source image"></p>
<p><img src="http://i.stack.imgur.com/xG91h.jpg" alt="converted image"></p>
| -5 | 2016-07-27T17:15:38Z | 38,632,224 | <p>I used the following code to detect the number of contours in the image using openCV and python. </p>
<pre><code>import cv2

img = cv2.imread('ba3g0.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,1)
contours,h = cv2.findContours(thresh,1,2)
for cnt in contours:
cv2.drawContours(img,[cnt],0,(0,0,255),1)
</code></pre>
<p><a href="http://i.stack.imgur.com/xrHZC.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/xrHZC.jpg" alt="Result"></a>
To further remove contours nested inside another contour, you need to iterate over the entire list, compare them, and drop the internal ones. After that, the length of <code>contours</code> will give you the count.</p>
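<p>That filtering step can be sketched on axis-aligned bounding rectangles — the <code>(x, y, w, h)</code> tuples that <code>cv2.boundingRect</code> gives per contour — rather than on full contour geometry; the helper names below are hypothetical:</p>

```python
def contains(outer, inner):
    # True if the box `inner` lies completely inside the box `outer`
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def outer_only(boxes):
    # keep only boxes that are not fully contained in some other box
    return [b for b in boxes
            if not any(contains(o, b) for o in boxes if o != b)]

boxes = [(0, 0, 10, 10), (2, 2, 3, 3), (20, 0, 5, 5)]
print(outer_only(boxes))  # [(0, 0, 10, 10), (20, 0, 5, 5)]
```

<p>Alternatively, calling <code>cv2.findContours</code> with the <code>cv2.RETR_EXTERNAL</code> retrieval mode returns only the outermost contours in the first place, avoiding the manual filtering.</p>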
| 1 | 2016-07-28T09:24:51Z | [
"python",
"image-processing"
] |
How to count objects in image using python? | 38,619,382 | <p>I am trying to count the number of drops in this image and the coverage percentage of the area covered by those drops.
I tried to convert this image into black and white, but the center color of those drops seems too similar to the background. So I only got something like the second picture.
Is there any way to solve this problem or any better ideas?
Thanks a lot.</p>
<p><img src="http://i.stack.imgur.com/ba3g0.jpg" alt="source image"></p>
<p><img src="http://i.stack.imgur.com/xG91h.jpg" alt="converted image"></p>
| -5 | 2016-07-27T17:15:38Z | 38,636,485 | <p>The idea is to separate the background from the insides of the drops, which look like the background.
I found the connected components of the background and the drop interiors, took the largest connected component (the true background), and changed its value to the foreground value. That left me with an image in which the drop interiors have a different value than the background.
Then I used this image to fill in the original thresholded image.
In the end, using the filled image, I calculated the relevant values.</p>
<pre><code>import cv2
import numpy as np
from matplotlib import pyplot as plt
# Read image
I = cv2.imread('drops.jpg',0);
# Threshold
IThresh = (I>=118).astype(np.uint8)*255
# Remove from the image the biggest connected component
# Find the area of each connected component
connectedComponentProps = cv2.connectedComponentsWithStats(IThresh, 8, cv2.CV_32S)
IThreshOnlyInsideDrops = np.zeros_like(connectedComponentProps[1])
IThreshOnlyInsideDrops = connectedComponentProps[1]
stat = connectedComponentProps[2]
maxArea = 0
for label in range(connectedComponentProps[0]):
cc = stat[label,:]
if cc[cv2.CC_STAT_AREA] > maxArea:
maxArea = cc[cv2.CC_STAT_AREA]
maxIndex = label
# Convert the background value to the foreground value
for label in range(connectedComponentProps[0]):
cc = stat[label,:]
if cc[cv2.CC_STAT_AREA] == maxArea:
IThreshOnlyInsideDrops[IThreshOnlyInsideDrops==label] = 0
else:
IThreshOnlyInsideDrops[IThreshOnlyInsideDrops == label] = 255
# Fill in all the IThreshOnlyInsideDrops as 0 in original IThresh
IThreshFill = IThresh
IThreshFill[IThreshOnlyInsideDrops==255] = 0
IThreshFill = np.logical_not(IThreshFill/255).astype(np.uint8)*255
plt.imshow(IThreshFill)
# Get number of drops and cover percentage
connectedComponentPropsFinal = cv2.connectedComponentsWithStats(IThreshFill, 8, cv2.CV_32S)
NumberOfDrops = connectedComponentPropsFinal[0]
CoverPercentage = float(np.count_nonzero(IThreshFill==0))/float(IThreshFill.size)
# Print
print "Number of drops = " + str(NumberOfDrops)
print "Cover percentage = " + str(CoverPercentage)
</code></pre>
| 1 | 2016-07-28T12:32:37Z | [
"python",
"image-processing"
] |
How to count objects in image using python? | 38,619,382 | <p>I am trying to count the number of drops in this image and the coverage percentage of the area covered by those drops.
I tried to convert this image into black and white, but the center color of those drops seems too similar to the background. So I only got something like the second picture.
Is there any way to solve this problem or any better ideas?
Thanks a lot.</p>
<p><img src="http://i.stack.imgur.com/ba3g0.jpg" alt="source image"></p>
<p><img src="http://i.stack.imgur.com/xG91h.jpg" alt="converted image"></p>
| -5 | 2016-07-27T17:15:38Z | 38,672,355 | <p>You can fill the holes of your binary image using <code>scipy.ndimage.binary_fill_holes</code>. I also recommend using an automatic thresholding method such as Otsu's (available in <code>scikit-image</code>).<a href="http://i.stack.imgur.com/nE1fZ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/nE1fZ.jpg" alt="enter image description here"></a> </p>
<pre><code>from skimage import io, filters
from scipy import ndimage
import matplotlib.pyplot as plt
im = io.imread('ba3g0.jpg', as_grey=True)
val = filters.threshold_otsu(im)
drops = ndimage.binary_fill_holes(im < val)
plt.imshow(drops, cmap='gray')
plt.show()
</code></pre>
<p>For the number of drops you can use another function of <code>scikit-image</code></p>
<pre><code>from skimage import measure
labels = measure.label(drops)
print(labels.max())
</code></pre>
<p>And for the coverage</p>
<pre><code>print('coverage is %f' %(drops.mean()))
</code></pre>
| 0 | 2016-07-30T09:23:14Z | [
"python",
"image-processing"
] |
Attaching event to self (canvas) tkinter | 38,619,393 | <p>I have created a class in Python that extends the tkinter Canvas. I am trying to attach an event to this canvas to handle clicks within the class. It works if I attach the event outside of the class itself, but when binding within the class the click event only fires once and then does nothing at all, only the first click being handled:</p>
<pre><code>class myCanvas(Canvas):
def callback(event):
print('clicked at', event.x, event.y)
def __init__(self, parent, **kwargs):
Canvas.__init__(self, parent, **kwargs)
self.bind("<Button-1>", self.callback())
self.height = self.winfo_reqheight()
self.width = self.winfo_reqwidth()
</code></pre>
<p>The event binding works correctly only if I attach the event outside of the class. Any help in finding a way to attach the event to the extended canvas would be appreciated.</p>
| 0 | 2016-07-27T17:16:15Z | 38,619,785 | <p>The problem is in this line:</p>
<pre><code>self.bind("<Button-1>", self.callback())
</code></pre>
<p>You need to connect something callable (in other words, a function) to the event. The function is referenced as <code>self.callback</code>. If you call the function (<code>self.callback()</code>) then you're connecting the <em>return value</em> of <code>self.callback()</code> to the click event instead of the function itself.</p>
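<p>The difference is easy to demonstrate without a GUI; here is a toy stand-in for <code>bind</code> (hypothetical, not Tkinter) that simply stores whatever it is handed:</p>

```python
handlers = {}

def bind(event_name, handler):
    # toy stand-in for widget.bind(): remember the handler for later dispatch
    handlers[event_name] = handler

def callback(event):
    return "clicked at %s,%s" % event

bind("<Button-1>", callback)          # stores the function itself -- correct
bind("<Button-2>", callback((0, 0)))  # stores the *return value*, a string

print(callable(handlers["<Button-1>"]))  # True  -- can be invoked on every click
print(callable(handlers["<Button-2>"]))  # False -- nothing is left to call later
```

<p>With Tkinter itself, the fix is <code>self.bind("<Button-1>", self.callback)</code> — and note the method also needs the usual instance signature, <code>def callback(self, event):</code>.</p>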
| 1 | 2016-07-27T17:40:18Z | [
"python",
"canvas",
"tkinter",
"tk"
] |
Pandas way of getting intersection between two rows in a python Pandas dataframe | 38,619,427 | <p>Say I have some data that looks like below. I want to get the count of ids that have two tags at the same time.</p>
<pre><code>tag id
a A
b B
a B
b A
c A
</code></pre>
<p>What I desire the result:</p>
<pre><code>tag1 tag2 count
a b 2
a c 1
b c 1
</code></pre>
<p>In plain python I could write pseudocode:</p>
<pre><code>d = defaultdict(set)
d[tag].add(id)
for tag1, tag2 in itertools.combinations(d.keys(), 2):
print tag1, tag2, len(d[tag1] & d[tag2])
</code></pre>
<p>Not the most efficient way but it should work. Now I already have the data stored in Pandas dataframe. Is there a more pandas-way to achieve the same result? </p>
| 2 | 2016-07-27T17:18:14Z | 38,620,118 | <p>Here is my attempt:</p>
<pre><code>from itertools import combinations
import pandas as pd
import numpy as np
In [123]: df
Out[123]:
tag id
0 a A
1 b B
2 a B
3 b A
4 c A
In [124]: a = np.asarray(list(combinations(df.tag, 2)))
In [125]: a
Out[125]:
array([['a', 'b'],
['a', 'a'],
['a', 'b'],
['a', 'c'],
['b', 'a'],
['b', 'b'],
['b', 'c'],
['a', 'b'],
['a', 'c'],
['b', 'c']],
dtype='<U1')
In [126]: a = a[a[:,0] != a[:,1]]
In [127]: a
Out[127]:
array([['a', 'b'],
['a', 'b'],
['a', 'c'],
['b', 'a'],
['b', 'c'],
['a', 'b'],
['a', 'c'],
['b', 'c']],
dtype='<U1')
In [129]: np.ndarray.sort(a)
In [130]: pd.DataFrame(a).groupby([0,1]).size()
Out[130]:
0 1
a b 4
c 2
b c 2
dtype: int64
</code></pre>
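<p>Condensed from the IPython session above into a standalone script (same toy data, same counts):</p>

```python
from itertools import combinations

import numpy as np
import pandas as pd

df = pd.DataFrame({'tag': ['a', 'b', 'a', 'b', 'c'],
                   'id':  ['A', 'B', 'B', 'A', 'A']})

pairs = np.asarray(list(combinations(df.tag, 2)))  # every pick of two rows
pairs = pairs[pairs[:, 0] != pairs[:, 1]]          # drop same-tag pairs
np.ndarray.sort(pairs)                             # canonical order inside each pair

counts = pd.DataFrame(pairs).groupby([0, 1]).size()
print(counts)
```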
| 2 | 2016-07-27T18:00:04Z | [
"python",
"numpy",
"pandas",
"dataframe"
] |
alpha parameter from an alpha-stable distribution | 38,619,463 | <p>Considering I have a collection of data. Let's say for example they are length 100. My hypothesis say that these data follow the alpha-stable distribution. Is there a way to calculate the alpha parameter of these data? </p>
<p>I would like to do that in python more specifically. All I found was that package
<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levy_stable.html#scipy.stats.levy_stable" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levy_stable.html#scipy.stats.levy_stable</a></p>
<p>which just calculates an alpha-stable distribution given the parameters of the distribution.</p>
<p>I am not that familiar with alpha-stable distributions. I will try to make it more clear using an example of Poisson distribution. If I have some data that I know they follow Poisson distribution isn't it possible to calculate the λ of that distribution? (Is this possible or am I miss something from statistics theory?)</p>
| 1 | 2016-07-27T17:20:08Z | 38,625,671 | <p><code>If I have some data that I know they follow Poisson distribution isn't it possible to calculate the λ of that distribution? (Is this possible or am I miss something from statistics theory?)</code></p>
<p>Sure. The mean of a Poisson is equal to <strong>λ</strong>, so compute the mean of your data and try to use it. Because the variance is equal to <strong>λ</strong> as well, there is a quick check of how Poisson your data are - compute the variance as well and compare it to the mean/<strong>λ</strong>. If they are comparable, you're on a good track, though some MC sampling at the end might help as well.</p>
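<p>A quick sketch of that check on simulated data (the λ of 4.2 here is made up):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.poisson(lam=4.2, size=100000)  # simulated stand-in for the observed data

lam_hat = data.mean()  # the MLE of lambda is simply the sample mean
var_hat = data.var()   # for genuinely Poisson data this stays close to lam_hat

print(round(lam_hat, 2), round(var_hat, 2))
```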
<p>Wrt the alpha-stable distribution, I would start by computing the data's skewness, mean, median and mode. If the data has little or no skew and the mean, median and mode are close, then one can assume that <code>beta</code> is 0 and <code>mu</code> is known. You have only two parameters left to determine (<code>alpha</code> and <code>c</code>), and building the PDF as a Fourier transform (of the characteristic function) and fitting might work.</p>
| 0 | 2016-07-28T01:22:51Z | [
"python",
"statistics",
"probability",
"distribution",
"probability-density"
] |
Google Search Web Scraping with Python | 38,619,478 | <p>I've been learning a lot of python lately to work on some projects at work. </p>
<p>Currently I need to do some web scraping with Google search results. I found several sites that demonstrated how to use the AJAX Google API to search; however, after attempting to use it, it appears to no longer be supported. Any suggestions? </p>
<p>I've been searching for quite a while to find a way but can't seem to find any solutions that currently work. </p>
| 0 | 2016-07-27T17:20:57Z | 38,620,894 | <p>You can always directly scrape Google results. To do this, you can use the URL <code>https://google.com/search?q=<Query></code>, which will return the top 10 search results.</p>
<p>Then you can use <a href="http://lxml.de" rel="nofollow">lxml</a>, for example, to parse the page. Depending on what you use, you can query the resulting node tree either via a CSS selector (<code>.r a</code>) or via an XPath selector (<code>//h3[@class="r"]/a</code>)</p>
<p>In some cases the resulting URL will redirect to Google. Usually it contains a query parameter <code>q</code> which will contain the actual request URL.</p>
<p>Example code using lxml and requests</p>
<pre><code>from urllib.parse import urlencode, urlparse, parse_qs
from lxml.html import fromstring
from requests import get
raw = get("https://www.google.com/search?q=StackOverflow").text
page = fromstring(raw)
for result in page.cssselect(".r a"):
url = result.get("href")
if url.startswith("/url?"):
url = parse_qs(urlparse(url).query)['q']
print(url[0])
</code></pre>
<p>A note on Google banning your IP: in my experience, Google only bans you if you start spamming it with search requests. It will respond with a 503 if Google thinks you are a bot.</p>
| 0 | 2016-07-27T18:46:07Z | [
"python",
"python-2.7",
"google-search",
"google-search-api"
] |
How to use PyOrient to create functions (stored procedures) in OrientDB? | 38,619,508 | <p>I'm trying to create an OrientDB graph database using PyOrient, and I can't find enough documentation to allow me to get Functions working. I've been able to create a function using <code>record_create</code> into the <code>ofunction</code> cluster, but although it doesn't crash, it doesn't appear to work either.
Here's my code:</p>
<pre><code>#!/usr/bin/python
import pyorient
ousername="user"
opassword="pass"
client = pyorient.OrientDB("localhost", 2424)
session_id = client.connect( ousername, opassword )
db_name="database"
client.db_create( db_name, pyorient.DB_TYPE_GRAPH, pyorient.STORAGE_TYPE_PLOCAL )
# Set up the schema of the database
client.command( "create class URL extends V" )
client.command( "CREATE PROPERTY URL.url STRING")
client.command( "CREATE PROPERTY URL.id INTEGER")
client.command( "CREATE SEQUENCE urlseq")
client.command( "CREATE INDEX urls ON URL (url) UNIQUE")
# Get the id numbers of all the clusters
info=client.db_reload()
clusters={}
for c in info:
clusters[c.name]=c.id
print(clusters)
# Construct a test function
# All this should do is create a new URL vertex. Eventually it will check for uniqueness of url, etc.
code="INSERT INTO URL SET id = sequence('urlseq').next(), url='?'"
addURL_func = { '@OFunction': { 'name': 'addURL', 'code':'orient.getGraph().command("sql","%s",[urlparam]);' % code, 'language':'javascript', 'parameters':'urlparam', 'idempotent':False } }
client.record_create( clusters['ofunction'], addURL_func )
# Assume allURLs contains the list of URLs I want to store
for url in allURLs:
client.command("select addURL('%s')" % url)
vs = client.command("select * from URL")
for v in vs:
print(v.url)
</code></pre>
<p>Doing all the <code>select addURL</code> bits runs happily, but doing <code>select * from URL</code> simply times out. Presumably because (as I've discovered by examining the database in Studio) there are still no <code>URL</code> vertices. Although why that should timeout rather than returning an empty list or giving a useful error message, I'm not sure.</p>
<p>What am I doing wrong, and is there an easier way to create Functions through PyOrient?</p>
<p>I don't want to just write the Functions in Studio, because I am prototyping and want them written from the Python code rather than being lost every time I drop the mangled experimental graph!</p>
<p>I've mainly been using the <a href="http://orientdb.com/docs/2.0/orientdb.wiki/Functions.html" rel="nofollow">OrientDB wiki page</a> to find out about OrientDB functions, and the <a href="https://github.com/mogui/pyorient" rel="nofollow">PyOrient github page</a> as almost my only source of documentation for that.
<hr>
Edit: I've been able to create a working Function in SQL (see my own answer below) but I still can't create a working Javascript Function which creates a vertex. My current best attempt is:</p>
<pre><code>code2="""var g=orient.getGraph();g.command('sql','CREATE VERTEX URL SET id = sequence(\\"urlseq\\").next(), url = \\"'+urlparam+'\\"',[urlparam]);"""
myFunction2 = 'CREATE FUNCTION addURL2 "' + code2 + '" parameters [urlparam] idempotent false language javascript'
client.command(myFunction2)
</code></pre>
<p>which runs without crashing when called from PyOrient, but doesn't actually create any vertices. But if I call it from Studio, it works!?! I have no idea what's going on.</p>
| 0 | 2016-07-27T17:22:32Z | 38,631,949 | <p>You could try something like:</p>
<pre><code>code="var g=orient.getGraph();\ng.command(\\'sql\\',\\'%s\\',[urlparam]);"
myFunction = "CREATE FUNCTION addURL '" + code + "' parameters [urlparam] idempotent false language javascript"
client.command(myFunction);
</code></pre>
<p><strong>UPDATE</strong> </p>
<p>I used this code (version 2.2.5) and it worked for me</p>
<pre><code>code="var g=orient.getGraph().command(\\'sql\\',\\'%s\\',[urlparam]);"
myFunction = "CREATE FUNCTION addURL '" + code + "' parameters [urlparam] idempotent false language javascript"
client.command(myFunction);
</code></pre>
<p>Hope it helps</p>
| 0 | 2016-07-28T09:12:24Z | [
"python",
"stored-procedures",
"orientdb",
"pyorient"
] |
How to use PyOrient to create functions (stored procedures) in OrientDB? | 38,619,508 | <p>I'm trying to create an OrientDB graph database using PyOrient, and I can't find enough documentation to allow me to get Functions working. I've been able to create a function using <code>record_create</code> into the <code>ofunction</code> cluster, but although it doesn't crash, it doesn't appear to work either.
Here's my code:</p>
<pre><code>#!/usr/bin/python
import pyorient
ousername="user"
opassword="pass"
client = pyorient.OrientDB("localhost", 2424)
session_id = client.connect( ousername, opassword )
db_name="database"
client.db_create( db_name, pyorient.DB_TYPE_GRAPH, pyorient.STORAGE_TYPE_PLOCAL )
# Set up the schema of the database
client.command( "create class URL extends V" )
client.command( "CREATE PROPERTY URL.url STRING")
client.command( "CREATE PROPERTY URL.id INTEGER")
client.command( "CREATE SEQUENCE urlseq")
client.command( "CREATE INDEX urls ON URL (url) UNIQUE")
# Get the id numbers of all the clusters
info=client.db_reload()
clusters={}
for c in info:
clusters[c.name]=c.id
print(clusters)
# Construct a test function
# All this should do is create a new URL vertex. Eventually it will check for uniqueness of url, etc.
code="INSERT INTO URL SET id = sequence('urlseq').next(), url='?'"
addURL_func = { '@OFunction': { 'name': 'addURL', 'code':'orient.getGraph().command("sql","%s",[urlparam]);' % code, 'language':'javascript', 'parameters':'urlparam', 'idempotent':False } }
client.record_create( clusters['ofunction'], addURL_func )
# Assume allURLs contains the list of URLs I want to store
for url in allURLs:
client.command("select addURL('%s')" % url)
vs = client.command("select * from URL")
for v in vs:
print(v.url)
</code></pre>
<p>Doing all the <code>select addURL</code> bits runs happily, but doing <code>select * from URL</code> simply times out. Presumably because (as I've discovered by examining the database in Studio) there are still no <code>URL</code> vertices. Although why that should timeout rather than returning an empty list or giving a useful error message, I'm not sure.</p>
<p>What am I doing wrong, and is there an easier way to create Functions through PyOrient?</p>
<p>I don't want to just write the Functions in Studio, because I am prototyping and want them written from the Python code rather than being lost every time I drop the mangled experimental graph!</p>
<p>I've mainly been using the <a href="http://orientdb.com/docs/2.0/orientdb.wiki/Functions.html" rel="nofollow">OrientDB wiki page</a> to find out about OrientDB functions, and the <a href="https://github.com/mogui/pyorient" rel="nofollow">PyOrient github page</a> as almost my only source of documentation for that.
<hr>
Edit: I've been able to create a working Function in SQL (see my own answer below) but I still can't create a working Javascript Function which creates a vertex. My current best attempt is:</p>
<pre><code>code2="""var g=orient.getGraph();g.command('sql','CREATE VERTEX URL SET id = sequence(\\"urlseq\\").next(), url = \\"'+urlparam+'\\"',[urlparam]);"""
myFunction2 = 'CREATE FUNCTION addURL2 "' + code2 + '" parameters [urlparam] idempotent false language javascript'
client.command(myFunction2)
</code></pre>
<p>which runs without crashing when called from PyOrient, but doesn't actually create any vertices. But if I call it from Studio, it works!?! I have no idea what's going on.</p>
| 0 | 2016-07-27T17:22:32Z | 38,641,371 | <p>OK, after a lot of hacking and Googling, I've got it working:</p>
<pre><code>code="CREATE VERTEX URL SET id = sequence('urlseq').next(), url = :urlparam;"
myFunction = 'CREATE FUNCTION addURL "' + code + '" parameters [urlparam] idempotent false language sql'
client.command(myFunction)
</code></pre>
<p>The key here seems to be the use of a colon before parameter names in OrientDB's version of SQL. I couldn't find any reference to this anywhere in the OrientDB docs, but someone online had discovered it somehow.
I'm answering my own question in the hope that this will help others struggling with ODB's poor documentation!</p>
| 0 | 2016-07-28T15:58:53Z | [
"python",
"stored-procedures",
"orientdb",
"pyorient"
] |
Working with variable-length text in Tensorflow | 38,619,526 | <p>I am building a Tensorflow model to perform inference on text phrases.
For the sake of simplicity, assume I need a classifier with a fixed number of output classes but <em>variable-length text</em> as input. In other words, my minibatch would be a sequence of phrases, but not all phrases have the same length. </p>
<pre><code>data = ['hello',
'my name is Mark',
'What is your name?']
</code></pre>
<p>My first preprocessing step was to build a dictionary of all possible words in the dictionary and map each word to its integer word-Id. The input becomes:</p>
<pre><code>data = [[1],
[2, 3, 4, 5],
[6, 4, 7, 3]
</code></pre>
<p>What's the best way to handle this kind of input? Can <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/io_ops.html#placeholder" rel="nofollow">tf.placeholder()</a> handle variable-size input within the same batch of data?
Or should I pad all strings so that they all have the same length, equal to the length of the longest string, using some placeholder for the missing words? This seems very memory-inefficient if some strings are much longer than most of the others.</p>
<p>-- EDIT --</p>
<p>Here is a concrete example.</p>
<p>When I know the size of my datapoints (and all the datapoint have the same length, eg. 3) I normally use something like:</p>
<pre><code>input = tf.placeholder(tf.int32, shape=(None, 3)
with tf.Session() as sess:
print(sess.run([...], feed_dict={input:[[1, 2, 3], [1, 2, 3]]}))
</code></pre>
<p>where the first dimension of the placeholder is the minibatch size.</p>
<p>What if the input sequences are words in sentences of different length?</p>
<pre><code>feed_dict={input:[[1, 2, 3], [1]]}
</code></pre>
| 1 | 2016-07-27T17:23:41Z | 38,634,226 | <p>I was building a sequence to sequence translator the other day. What I decided to do was make it for a fixed length of 32 words (a bit above the average sentence length), although you can make it as long as you want. I then added a NULL word to the dictionary and padded all my sentence vectors with it. That way I could tell the model where the end of my sequence was, and the model would just output NULL at the end of its output. For instance, take the expression "Hi what is your name?" This would become "Hi what is your name? NULL NULL NULL NULL ... NULL". It worked pretty well, but your loss and accuracy during training will appear a bit better than they actually are, since the model usually gets the NULLs right and they count towards the cost.</p>
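<p>That padding step can be sketched in a few lines (0 as a hypothetical NULL id, and a fixed length of 8 instead of 32 for brevity):</p>

```python
NULL_ID = 0
MAX_LEN = 8  # the fixed sequence length the model is built for

def pad(seq, max_len=MAX_LEN, null_id=NULL_ID):
    # truncate overly long sequences, right-pad short ones with the NULL id
    return (seq + [null_id] * max_len)[:max_len]

batch = [[1], [2, 3, 4, 5], [6, 4, 7, 3]]
print([pad(s) for s in batch])
# [[1, 0, 0, 0, 0, 0, 0, 0], [2, 3, 4, 5, 0, 0, 0, 0], [6, 4, 7, 3, 0, 0, 0, 0]]
```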
<p>There is another approach called masking. This too allows you to build a model for a fixed length sequence but only evaluate the cost up to the end of a shorter sequence. You could search for the first instance of NULL in the output sequence (or expected output, whichever is greater) and only evaluate the cost up to that point. Also I think some tensor flow functions like tf.dynamic_rnn support masking which may be more memory efficient. I am not sure since I have only tried the first approach of padding.</p>
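<p>The masking arithmetic can likewise be sketched in NumPy on made-up per-step losses (this is just the idea, not the actual <code>tf.dynamic_rnn</code> API):</p>

```python
import numpy as np

per_step_loss = np.array([[0.5, 0.2, 0.9, 0.4],
                          [0.3, 0.8, 0.1, 0.6]])  # shape (batch, max_len)
lengths = np.array([2, 3])  # true sequence lengths before padding

# mask[i, t] is True only for time steps before sequence i's end
mask = np.arange(per_step_loss.shape[1]) < lengths[:, None]
masked_mean = (per_step_loss * mask).sum(axis=1) / lengths
print(masked_mean)  # mean loss over the valid steps only: 0.35 and 0.4
```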
<p>Finally, I think in the TensorFlow Seq2Seq example they use buckets for different-sized sequences. This would probably solve your memory issue. I think you could share the variables between the different-sized models.</p>
| 0 | 2016-07-28T10:50:45Z | [
"python",
"string",
"text",
"tensorflow",
"variable-length-array"
] |
matching tuples in pandas and data processing | 38,619,620 | <p>The following is a simplified blob of my dataframe. I want to process</p>
<p><a href="https://gist.github.com/ecenm/97248a1d1f6f301d662c6f238b6168df" rel="nofollow">first.csv</a></p>
<pre><code>,No.,Time,Source,Destination,Protocol,Length,Info,src_dst_pair
325778,112.305107,02:e0,Broadcast,ARP,64,Who has 253.244.230.77? Tell 253.244.230.67,"('02:e0', 'Broadcast')"
801130,261.868118,02:e0,Broadcast,ARP,64,Who has 253.244.230.156? Tell 253.244.230.67,"('02:e0', 'Broadcast')"
700094,222.055094,02:e0,Broadcast,ARP,60,Who has 253.244.230.77? Tell 253.244.230.156,"('02:e0', 'Broadcast')"
766542,766543,247.796156,100.118.138.150,41.177.26.176,TCP,66,32222 > http [SYN] Seq=0,"('100.118.138.150', '41.177.26.176')"
767405,248.073313,100.118.138.150,41.177.26.176,TCP,64,32222 > http [ACK] Seq=1,"('100.118.138.150', '41.177.26.176')"
767466,248.083268,100.118.138.150,41.177.26.176,HTTP,380,Continuation [Packet capture],"('100.118.138.150', '41.177.26.176')"
</code></pre>
<p>I have all the unique elements of the (last element) src_dst_pair </p>
<pre><code>uniq_src_dst_pair = numpy.unique(data.src_dst_pair.ravel())
[('02:e0', 'Broadcast') ('100.118.138.150', '41.177.26.176')]
</code></pre>
<p>How can I do the following in pandas</p>
<p>for each element in uniq_src_dst_pair, check against the df.src_dst_pair. If it matches, add df.Length and store it in a separate column</p>
<p><strong>my expected result is</strong></p>
<pre><code>('02:e0', 'Broadcast') : 188
('100.118.138.150', '41.177.26.176') : 510
</code></pre>
<p>How can I do this?</p>
<p><strong>Below is my try</strong></p>
<pre><code>import pandas
import numpy
data = pandas.read_csv('first.csv')
print data
uniq_src_dst_pair = numpy.unique(data.src_dst_pair.ravel())
print uniq_src_dst_pair
print len(uniq_src_dst_pair)
# following is hardcoded, but need to be more general for the above list
match1 = data[data.src_dst_pair == "('02:e0:ed:0a:fb:5f', 'Broadcast')"] # doesn't work
</code></pre>
| 3 | 2016-07-27T17:29:37Z | 38,620,048 | <p>Your csv file is messed up. You shouldn't have the first comma in the header, and you have an extra field in your 4th non-header row. Fixing that, you could use:</p>
<pre><code>In [6]: data.groupby('src_dst_pair').Length.sum()
Out[6]:
src_dst_pair
('02:e0', 'Broadcast') 188
('100.118.138.150', '41.177.26.176') 510
Name: Length, dtype: int64
</code></pre>
<p>However, your final field, 'src_dst_pair' is superfluous if this is what you wanted to accomplish because you can simply do something like the following:</p>
<pre><code>In [8]: data.groupby(['Source','Destination']).Length.sum()
Out[8]:
Source Destination
02:e0 Broadcast 188
100.118.138.150 41.177.26.176 510
Name: Length, dtype: int64
</code></pre>
| 2 | 2016-07-27T17:55:40Z | [
"python",
"csv",
"pandas"
] |
OpenCV fails to capture from more than 8 webcams on Linux | 38,619,801 | <p>OpenCV fails to open VideoCaptures for more than 8 webcams on Linux. Here is a simple example:</p>
<pre><code># "opencap.py"
import cv2, sys
dev = int(sys.argv[1])
cap = cv2.VideoCapture(dev)
print "device %d: %s" %(dev, "success" if cap.isOpened() else "failure")
</code></pre>
<p>For my setup (OpenCV 2.4.11, Ubuntu 14.04) with, say, 9 webcams, opencap.py succeeds for the first 8 webcams (0-7), but for the last one I get</p>
<pre><code>> python opencap.py 8
HIGHGUI ERROR: V4L: index 8 is not correct!
device 8: failure
</code></pre>
<p>Note: <code>v4l2-ctl --list-devices</code> correctly lists the 9 webcams (/dev/video0, ..., /dev/video8).</p>
| 2 | 2016-07-27T17:41:13Z | 38,619,802 | <p>The problem is caused by this line in the OpenCV source code:</p>
<pre class="lang-cpp prettyprint-override"><code>#define MAX_CAMERAS 8
</code></pre>
<p>Simply changing the <code>MAX_CAMERAS</code> value and rebuilding OpenCV solves the problem. The file to change is modules/highgui/src/cap_libv4l.cpp (<a href="https://github.com/opencv/opencv/blob/2.4.11/modules/highgui/src/cap_libv4l.cpp#L260" rel="nofollow">line 260</a>) for a libv4l build, and cap_v4l.cpp for a v4l build. (See, e.g., this <a href="http://stackoverflow.com/a/36756451/1628638">answer</a> for more on the two build options.) For OpenCV 3.0, the directory changed to modules/videoio/src/.</p>
<p>Note: typically one runs into USB bandwidth issues with webcams before reaching the 8-camera limit. See, e.g., this <a href="http://stackoverflow.com/a/35161718/1628638">answer</a>.</p>
| 4 | 2016-07-27T17:41:13Z | [
"python",
"linux",
"opencv",
"limit",
"webcam"
] |
jquery tablesorter custom order | 38,619,805 | <p>I'm using this plugin on python.</p>
<p>Is it possible to make it order a column by a custom sequence?</p>
<p>In the <code>quality</code> column I can have only: 'low', 'mediocre', 'good', 'great' and I want ordered in that way (or reversed).</p>
<p>In the <code>Name</code> column I have (by the view) a custum order but I want to give the possibility to order alphabetically too and then return on the original order...</p>
<p>My <code>views.py</code>:</p>
<pre><code>def aff_list(request):
context_dict = {}
lista_aff_a=[]
lista_aff_s=[]
for aff in Aff.objects.all():
if aff.price=='luxury':
lista_aff_a.append(aff)
elif aff.price=='cheap':
lista_aff_s.append(aff)
lista_aff=lista_aff_a + lista_aff_s #this way is ordered before lista_aff_a, then lista_aff_s
context_dict['lista_aff'] = lista_aff
return render(request, 'aff_list.html', context_dict)
</code></pre>
<p>My <code>aff_list.html</code>:</p>
<pre><code><script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
<script src="{% static "js/jquery-tablesorter/jquery.tablesorter.js" %}"></script>
<link rel="stylesheet" href="{% static "css/tablesorter.css" %}" type="text/css" />
<script src="{% static "js/script-jquery.js" %}"></script>
...
<div class="panel panel-default">
<table id="lista_aff" class="tablesorter table table-hover table table-bordered table table-condensed">
<thead>
<tr>
<th>Name</th>
<th>Quality</th>
</tr>
</thead>
<tbody>
{% for aff in lista_aff %}
<tr>
<td>
{{ aff.name }}
</td>
<td>
{{ aff.quality }}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</code></pre>
<p>My <code>script-jquery</code>:</p>
<pre><code>$(document).ready(function() {
$("#lista_aff").tablesorter();
});
</code></pre>
<p>Edit: </p>
<p>A last question:</p>
<p>I download the file and decompressed in <code>static/js</code>, then I write in the head of my template:</p>
<pre><code><link href="{% static "js/tablesorter-master/css/theme-blue.css" %}" />
<link rel="stylesheet" href="{% static "css/dashboard.css" %}">
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
<script src="{% static "js/tablesorter-master/js/jquery.tablesorter.js" %}">
</code></pre>
<p>To make them work I had to change the name of the themes from <code>theme.#</code> to <code>theme-#</code> and add in my <code>script-jquery.js</code>:</p>
<pre><code>theme : 'blue',
</code></pre>
<p>Just 'blue', not 'theme-blue'.
It works but maybe I'm doing something wrong.</p>
| 1 | 2016-07-27T17:41:25Z | 38,646,781 | <p>Using the fork suggested in the comment I solved my problems in this way:</p>
<p>My <code>script-jquery</code>:</p>
<pre><code>$.tablesorter.addParser({
id: 'quality',
is: function(s) {
// return false so this parser is not auto detected
return false;
},
format: function(s) {
// format your data for normalization
return s.toLowerCase().replace(/great/,0).replace(/good/,1
).replace(/mediocre/,2).replace(/low/,3);
},
// set type, either numeric or text
type: 'numeric'
});
$(document).ready(function() {
    $("#lista_aff").tablesorter({
theme : 'blue',
sortReset : true,
headers: { 1: { sorter: 'quality' } }
});
});
</code></pre>
| 1 | 2016-07-28T21:08:33Z | [
"jquery",
"python",
"tablesorter"
] |
Reference Link List Length Python? | 38,619,822 | <p><strong>EDIT:</strong> The terminology I was looking for is called Cycle Detection. Thanks to @dhke for pointing that out in the comments.</p>
<p>I'm trying to figure out a better way to process a list of indexes and what its length is if a list has a loop in its reference. I have a function that works but it passes the next index value and counter. I've been trying to figure out a way to do it by just passing the list into the function. It always starts as index 0.</p>
<p>Given a list, each node in the list references the index of some other node. I'm trying to get the length of the linked list not the number of nodes in the list.</p>
<pre><code># This list would have a length of 4, index 0->1->3->6->0
four_links_list = [1,3,4,6,0,4,0]
two_links_list = [3,2,1,0]
def my_ideal_func(list):
# Some better way to iterate over the list and count
def my_func(list, index, counter):
# We're just starting out
if index == 0 and counter == 0:
counter += 1
return my_func(list, list[index], counter)
# Keep going through the list as long as we're not looping back around
elif index != 0:
counter += 1
return my_func(list, list[index], counter)
# Stop once we hit a node with an index reference of 0
else:
return counter
</code></pre>
| 0 | 2016-07-27T17:42:20Z | 38,619,938 | <p>If you don't want extra data structures:</p>
<pre><code>def tortoise_and_hare(l):
tort = 0
hare = 0
count = 0
while tort != hare or count == 0:
count += 1
if l[tort] == 0:
return count
tort = l[tort]
hare = l[hare]
hare = l[hare]
return -1
>>> tortoise_and_hare([1,3,4,6,0,4,0])
4
>>> tortoise_and_hare([3,2,1,0])
2
>>> tortoise_and_hare([1,2,3,1,2,1,2,1])
-1
</code></pre>
| 1 | 2016-07-27T17:48:55Z | [
"python",
"list"
] |
Reference Link List Length Python? | 38,619,822 | <p><strong>EDIT:</strong> The terminology I was looking for is called Cycle Detection. Thanks to @dhke for pointing that out in the comments.</p>
<p>I'm trying to figure out a better way to process a list of indexes and what its length is if a list has a loop in its reference. I have a function that works but it passes the next index value and counter. I've been trying to figure out a way to do it by just passing the list into the function. It always starts as index 0.</p>
<p>Given a list, each node in the list references the index of some other node. I'm trying to get the length of the linked list not the number of nodes in the list.</p>
<pre><code># This list would have a length of 4, index 0->1->3->6->0
four_links_list = [1,3,4,6,0,4,0]
two_links_list = [3,2,1,0]
def my_ideal_func(list):
# Some better way to iterate over the list and count
def my_func(list, index, counter):
# We're just starting out
if index == 0 and counter == 0:
counter += 1
return my_func(list, list[index], counter)
# Keep going through the list as long as we're not looping back around
elif index != 0:
counter += 1
return my_func(list, list[index], counter)
# Stop once we hit a node with an index reference of 0
else:
return counter
</code></pre>
| 0 | 2016-07-27T17:42:20Z | 38,619,978 | <p>You can use a set to keep track of all nodes you've visited (sets have very fast membership tests). And there is absolutely no need for recursion here, a loop will do nicely:</p>
<pre><code>def my_ideal_func(list):
visited_nodes= set()
index= 0
length= 0
while True:
node= list[index]
if node in visited_nodes:
return length
visited_nodes.add(node)
length+= 1
index= list[index]
</code></pre>
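<p>A hedged variant of the same visited-set idea that also reports -1 when the walk falls into a cycle that never returns to a 0 reference (the case the tortoise-and-hare answer above handles). The function name is made up for illustration:</p>

```python
def walk_length(links):
    """Count hops from index 0 until a 0 reference is hit; -1 on a stuck cycle."""
    seen = set()
    index = 0
    count = 0
    while index not in seen:
        seen.add(index)
        count += 1
        if links[index] == 0:
            return count
        index = links[index]
    return -1  # revisited an index without ever reaching a 0 reference

print(walk_length([1, 3, 4, 6, 0, 4, 0]))  # 4
print(walk_length([3, 2, 1, 0]))           # 2
print(walk_length([1, 2, 3, 1, 2, 1, 2]))  # -1
```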
| 1 | 2016-07-27T17:51:09Z | [
"python",
"list"
] |
Reference Link List Length Python? | 38,619,822 | <p><strong>EDIT:</strong> The terminology I was looking for is called Cycle Detection. Thanks to @dhke for pointing that out in the comments.</p>
<p>I'm trying to figure out a better way to process a list of indexes and what its length is if a list has a loop in its reference. I have a function that works but it passes the next index value and counter. I've been trying to figure out a way to do it by just passing the list into the function. It always starts as index 0.</p>
<p>Given a list, each node in the list references the index of some other node. I'm trying to get the length of the linked list not the number of nodes in the list.</p>
<pre><code># This list would have a length of 4, index 0->1->3->6->0
four_links_list = [1,3,4,6,0,4,0]
two_links_list = [3,2,1,0]
def my_ideal_func(list):
# Some better way to iterate over the list and count
def my_func(list, index, counter):
# We're just starting out
if index == 0 and counter == 0:
counter += 1
return my_func(list, list[index], counter)
# Keep going through the list as long as we're not looping back around
elif index != 0:
counter += 1
return my_func(list, list[index], counter)
# Stop once we hit a node with an index reference of 0
else:
return counter
</code></pre>
| 0 | 2016-07-27T17:42:20Z | 38,620,006 | <p>There's no need for recursion:</p>
<pre><code>def link_len(l):
cnt, idx = 0, 0
while not cnt or idx:
cnt = cnt + 1
idx = l[idx]
return cnt
</code></pre>
<p>This assumes the list loops back to 0.</p>
| 1 | 2016-07-27T17:53:12Z | [
"python",
"list"
] |
Can I somehow query all the existing tables in peewee / postgres? | 38,619,833 | <p>I am writing a basic gui for a program which uses Peewee. In the gui, I would like to show all the tables which exist in my database.</p>
<p>Is there any way to get the names of all existing tables, lets say in a list?</p>
| 0 | 2016-07-27T17:42:59Z | 38,620,529 | <p>To get a list of the tables in your schema, make sure that you have established your connection and cursor and try the following:</p>
<pre><code>cursor.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='public'")
myables = cursor.fetchall()
mytables = [x[0] for x in mytables]
</code></pre>
<p>I hope this helps.</p>
| 0 | 2016-07-27T18:24:17Z | [
"python",
"peewee"
] |
Can I somehow query all the existing tables in peewee / postgres? | 38,619,833 | <p>I am writing a basic gui for a program which uses Peewee. In the gui, I would like to show all the tables which exist in my database.</p>
<p>Is there any way to get the names of all existing tables, lets say in a list?</p>
| 0 | 2016-07-27T17:42:59Z | 38,714,979 | <p>Peewee has the ability to introspect Postgres, MySQL and SQLite for the following types of schema information:</p>
<ul>
<li>Table names</li>
<li>Columns (name, data type, null?, primary key?, table)</li>
<li>Primary keys (column(s))</li>
<li>Foreign keys (column, dest table, dest column, table)</li>
<li>Indexes (name, sql*, columns, unique?, table)</li>
</ul>
<p>You can get this metadata using the following methods on the <code>Database</code> class:</p>
<ul>
<li><a href="http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=get_tables#Database.get_tables" rel="nofollow">Database.get_tables()</a></li>
<li><a href="http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=get_columns#Database.get_columns" rel="nofollow">Database.get_columns()</a></li>
<li><a href="http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=get_indexes#Database.get_indexes" rel="nofollow">Database.get_indexes()</a></li>
<li><a href="http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=get_primary_keys#Database.get_primary_keys" rel="nofollow">Database.get_primary_keys()</a></li>
<li><a href="http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=get_foreign_keys#Database.get_foreign_keys" rel="nofollow">Database.get_foreign_keys()</a></li>
</ul>
<p>So, instead of using a cursor and writing some SQL yourself, just do:</p>
<pre><code>db = PostgresqlDatabase('my_db')
tables = db.get_tables()
</code></pre>
<p>For even more craziness, check out the <a href="http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection" rel="nofollow">reflection</a> module, which can actually generate Peewee model classes from an existing database schema.</p>
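<p>For comparison, here is roughly what <code>get_tables()</code> boils down to on SQLite, using only the standard library (a sketch of the underlying query, not peewee's actual implementation):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE pet (id INTEGER PRIMARY KEY, owner_id INTEGER)")

# SQLite keeps its schema catalog in the sqlite_master table.
cur = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
tables = [row[0] for row in cur.fetchall()]
print(tables)  # ['person', 'pet']
```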
| 1 | 2016-08-02T08:05:31Z | [
"python",
"peewee"
] |
Finding a gps location at a certain time given two points? | 38,619,835 | <p>If I have two known locations and a known speed, how can I calculate the current position at distance d (in km)?</p>
<p>For example, given:</p>
<p><strong>Two gps locations in ES4236:</strong></p>
<pre><code>37.783333, -122.416667 # San Francisco
32.715, -117.1625 # San Diego
</code></pre>
<p>Traveling at <strong>1km/min</strong> in a straight line (ignoring altitude)</p>
<p><strong>How can I find the gps coordinate at a certain distance?</strong> A similar <a href="https://stackoverflow.com/questions/24427828">SO question</a> uses VincentyDistance in <code>geopy</code> to calculate the next point based on bearing and distance. </p>
<p>I guess, more specifically:</p>
<ul>
<li><p>How can I calculate the bearing between two gps points using <code>geopy</code>?</p></li>
<li><p>Using VincentyDistance to get the next gps point by bearing and distance, how do I know if I have arrived at my destination, or if I should keep going? It doesn't need to be exactly on the destination to be considered being arrived. Maybe any point with a radius of .5 km of the destination is considered 'arrived'.</p></li>
</ul>
<p>ie,</p>
<pre><code>import geopy
POS1 = (37.783333, -122.416667) # origin
POS2 = (32.715, -117.1625) # dest
def get_current_position(d):
# use geopy to calculate bearing between POS1 and POS2
# then use VincentyDistance to get next coord
return gps_coord_at_distance_d
# If current position is within .5 km of destination, consider it 'arrived'
def has_arrived(curr_pos):
return True/False
d = 50 # 50 km
print get_current_position(d)
print has_arrived(get_current_position(d))
</code></pre>
| 0 | 2016-07-27T17:43:07Z | 38,636,156 | <p>Ok, figured I'd come back to this question and give it my best shot given that it hasn't seen any other solutions. Unfortunately I can't test code right now, but I believe there is a solution to your problem using both geopy and geographiclib. Here goes.</p>
<p>From the terminal (possibly with sudo)</p>
<pre><code>pip install geographiclib
pip install geopy
</code></pre>
<p>Now with Python</p>
<h1>Get Current Position</h1>
<pre><code>from geographiclib.geodesic import Geodesic
import geopy.distance
# Get the forward azimuth ('azi1'), which is the initial bearing
bearing = Geodesic.WGS84.Inverse(37.783333, -122.416667, 32.715, -117.1625)['azi1']
# Now we use geopy to calculate the distance over time
dist = geopy.distance.VincentyDistance(kilometers = 1)
san_fran = geopy.Point(37.783333, -122.416667)
print dist.destination(point=san_fran, bearing=bearing)
</code></pre>
<h1>Has Arrived</h1>
<pre><code>def has_arrived(curr_pos):
    return geopy.distance.vincenty(curr_pos, (32.715, -117.1625)).kilometers < .5
</code></pre>
<p>Like I said, I unfortunately can't test this, but I believe this is correct. It's possible there will be some unit differences with the bearing calculation: <a href="http://geographiclib.sourceforge.net/html/python/geodesics.html" rel="nofollow">it calculates bearing off of North as seen here</a>. Sorry if this isn't exactly correct, but since this question hasn't received a response, I figured I may as well throw in what I know.</p>
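<p>If neither library is available, both the distance and the arrival test can be roughly checked with a dependency-free haversine sketch (this assumes a spherical Earth, so expect a few kilometres of error versus Vincenty):</p>

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

san_fran = (37.783333, -122.416667)
san_diego = (32.715, -117.1625)

def has_arrived(curr_pos, dest=san_diego):
    return haversine_km(curr_pos, dest) < 0.5

print(haversine_km(san_fran, san_diego))  # roughly 738 km
print(has_arrived(san_diego))             # True
print(has_arrived(san_fran))              # False
```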
| 0 | 2016-07-28T12:18:42Z | [
"python",
"python-2.7",
"gps",
"geolocation",
"coordinate-systems"
] |
Problems parsing a web page in python | 38,619,887 | <p>I would like to parse a web page in order to retrieve some information about it (my exact problem is to retrieve all the items in this list : <a href="http://www.computerhope.com/vdef.htm" rel="nofollow">http://www.computerhope.com/vdef.htm</a>).</p>
<p>However, I can't figure out how to do it.</p>
<p>A lot of tutorials on the internet start with this (simplified) :
<code>html5lib.parse(urlopen("http://www.computerhope.com/vdef.htm"))</code></p>
<p>But after that, none of the tutorials explain how I can browse the document and go the html part I am looking for.</p>
<p>Some other tutorials explain how to do it with <code>CSSSelector</code> but again, all the tutorials don't start with a web page but with a string instead (e.g. here : <a href="http://lxml.de/cssselect.html" rel="nofollow">http://lxml.de/cssselect.html</a>).</p>
<p>So I tried to create a tree with the web page using this :
<code>fromstring(urlopen("http://www.computerhope.com/vdef.htm").read())</code>
but I got this error :
<code>lxml.etree.XMLSyntaxError: Specification mandate value for attribute itemscope, line 3, column 28</code>. This error is due to the fact that there is an attribute that is not specified (e.g. <code><input attribute></input></code>) but as I don't control the webpage, I can't go around it.</p>
<p>So here are a few questions that could solve my problems :</p>
<ul>
<li>How can I browse a tree ?</li>
<li>Is there a way to make the parser less strict ?</li>
</ul>
<p>Thank you !</p>
| -1 | 2016-07-27T17:45:51Z | 38,619,966 | <p>Try using beautiful soup, it has some excellent features and makes parsing in Python extremely easy.</p>
<p>Check of their documentation at <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">https://www.crummy.com/software/BeautifulSoup/bs4/doc/</a></p>
<p>EDIT:</p>
<p>As @mzjn pointed out, I did not include the code sample in the answer (that is the reason for the downvote), because I thought the OP should figure it out himself. I think I can help him out, so here is the code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
page = requests.get('http://www.computerhope.com/vdef.htm')
soup = BeautifulSoup(page.text, 'html.parser')
tables = soup.findChildren('table')
for i in (tables[0].findAll('a')):
print(i.text)
</code></pre>
<p>It prints out all the items in the list; I hope the OP will make adjustments accordingly.</p>
<p>At least now I am hoping my answer will get upvoted.</p>
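<p>If installing Beautiful Soup is not an option, the stdlib's <code>html.parser</code> is also forgiving about malformed markup such as bare <code>itemscope</code> attributes. A minimal sketch that collects link texts — the HTML snippet here is made up for illustration, not fetched from computerhope.com:</p>

```python
from html.parser import HTMLParser

class LinkTextCollector(HTMLParser):
    """Collects the text inside every <a> tag; tolerant of sloppy HTML."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.links.append(data.strip())

# Note the unquoted attribute value and bare 'itemscope' that lxml choked on.
snippet = '<div itemscope><a href=/jargon/a.htm>Apple</a> <a>Binary</a></div>'
parser = LinkTextCollector()
parser.feed(snippet)
print(parser.links)  # ['Apple', 'Binary']
```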
| -1 | 2016-07-27T17:50:29Z | [
"python",
"html",
"lxml",
"html5lib"
] |
How to print a list of tuples without commas in Python? | 38,619,902 | <p>I have two lists in python.</p>
<pre><code>a = [1, 2, 4, 8, 16]
b = [0, 1, 2, 3, 4]
</code></pre>
<p>and a third list <code>c</code> that is the <code>zip</code> of them.</p>
<pre><code>c = zip(a, b)
</code></pre>
<p>or simply I have a list of tuples like this:</p>
<pre><code>c = [(1, 0), (2, 1), (4, 2), (8, 3), (16, 4)]
</code></pre>
<p>I would like to print the list <code>c</code> without the commas after the parentheses. Is there a way to do this in Python?</p>
<p>I would like to print the list <code>c</code> like this:</p>
<pre><code>[(1, 0) (2, 1) (4, 2) (8, 3) (16, 4)]
</code></pre>
| 0 | 2016-07-27T17:46:59Z | 38,619,956 | <pre><code>print('[%s]' % ' '.join(map(str, c)))
</code></pre>
<p>Prints:</p>
<pre><code>[(1, 0) (2, 1) (4, 2) (8, 3) (16, 4)]
</code></pre>
<p>for your inputs.</p>
<p>You can basically take advantage of the fact that you're using the natural string representation of the tuples and just join with a space.</p>
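<p>The same idea works with an explicit per-tuple format if you ever need control over the rendering (a generic sketch, not specific to the question's data):</p>

```python
c = [(1, 0), (2, 1), (4, 2), (8, 3), (16, 4)]

# str(tuple) already renders "(a, b)", so joining on a space is enough;
# an explicit format string gives the same result while letting you
# tweak separators or padding later.
result = '[{}]'.format(' '.join('({}, {})'.format(a, b) for a, b in c))
print(result)  # [(1, 0) (2, 1) (4, 2) (8, 3) (16, 4)]
```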
| 3 | 2016-07-27T17:49:48Z | [
"python",
"list"
] |
How to print a list of tuples without commas in Python? | 38,619,902 | <p>I have two lists in python.</p>
<pre><code>a = [1, 2, 4, 8, 16]
b = [0, 1, 2, 3, 4]
</code></pre>
<p>and a third list <code>c</code> that is the <code>zip</code> of them.</p>
<pre><code>c = zip(a, b)
</code></pre>
<p>or simply I have a list of tuples like this:</p>
<pre><code>c = [(1, 0), (2, 1), (4, 2), (8, 3), (16, 4)]
</code></pre>
<p>I would like to print the list <code>c</code> without the commas after the parentheses. Is there a way to do this in Python?</p>
<p>I would like to print the list <code>c</code> like this:</p>
<pre><code>[(1, 0) (2, 1) (4, 2) (8, 3) (16, 4)]
</code></pre>
| 0 | 2016-07-27T17:46:59Z | 38,619,975 | <p>Would it be okay to create the final result as a string?</p>
<pre><code>c_no_comma = ''
for element in c:
c_no_comma += str(element) + ' '
c_no_comma = '[' + c_no_comma.rstrip() + ']'
</code></pre>
| 0 | 2016-07-27T17:50:50Z | [
"python",
"list"
] |
How to print a list of tuples without commas in Python? | 38,619,902 | <p>I have two lists in python.</p>
<pre><code>a = [1, 2, 4, 8, 16]
b = [0, 1, 2, 3, 4]
</code></pre>
<p>and a third list <code>c</code> that is the <code>zip</code> of them.</p>
<pre><code>c = zip(a, b)
</code></pre>
<p>or simply I have a list of tuples like this:</p>
<pre><code>c = [(1, 0), (2, 1), (4, 2), (8, 3), (16, 4)]
</code></pre>
<p>I would like to print the list <code>c</code> without the commas after the parentheses. Is there a way to do this in Python?</p>
<p>I would like to print the list <code>c</code> like this:</p>
<pre><code>[(1, 0) (2, 1) (4, 2) (8, 3) (16, 4)]
</code></pre>
| 0 | 2016-07-27T17:46:59Z | 38,620,061 | <pre><code>print('[' + ' '.join([str(tup) for tup in c]) + ']')
</code></pre>
<p>Using a list comprehension to create a list of the tuples in string form. Those are then joined and the square brackets are added to make it look as you want it.</p>
| 3 | 2016-07-27T17:56:59Z | [
"python",
"list"
] |
Sklearn-Pandas DataFrameMapper: mapper.fit_transform gives ValueError: bad input shape (8, 2) | 38,619,904 | <p>I was able to replicate the example given in the <a href="https://github.com/paulgb/sklearn-pandas" rel="nofollow">Github</a> repo. However, when I tried it on my own data, I got the ValueError.</p>
<p>Below is a dummy data that, which gives the same error as my real data.</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, StandardScaler, MinMaxScaler
data = pd.DataFrame({'pet':['cat', 'dog', 'dog', 'fish', 'cat', 'dog','cat','fish'], 'children': [4., 6, 3, 3, 2, 3, 5, 4], 'salary': [90, 24, 44, 27, 32, 59, 36, 27], 'feat4': ['linear', 'circle', 'linear', 'linear', 'linear', 'circle', 'circle', 'linear']})
mapper = DataFrameMapper([
(['pet', 'feat4'], LabelEncoder()),
(['children', 'salary'], [StandardScaler(),
MinMaxScaler()])
])
np.round(mapper.fit_transform(data.copy()),2)
</code></pre>
<p>Below is the error</p>
<blockquote>
<hr>
<p>ValueError Traceback (most recent call last)
in ()
----> 1 np.round(mapper.fit_transform(data.copy()),2)</p>
<p>C:\Users\E245713\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\base.py in fit_transform(self, X, y, **fit_params)
453 if y is None:
454 # fit method of arity 1 (unsupervised transformation)
--> 455 return self.fit(X, **fit_params).transform(X)
456 else:
457 # fit method of arity 2 (supervised transformation)</p>
<p>C:\Users\E245713\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn_pandas\dataframe_mapper.py in fit(self, X, y)
95 for columns, transformers in self.features:
96 if transformers is not None:
---> 97 transformers.fit(self._get_col_subset(X, columns))
98 return self
99 </p>
<p>C:\Users\E245713\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\preprocessing\label.py in fit(self, y)
106 self : returns an instance of self.
107 """
--> 108 y = column_or_1d(y, warn=True)
109 _check_numpy_unicode_bug(y)
110 self.classes_ = np.unique(y)</p>
<p>C:\Users\E245713\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\utils\validation.py in column_or_1d(y, warn)
549 return np.ravel(y)
550
--> 551 raise ValueError("bad input shape {0}".format(shape))
552
553 </p>
<p>ValueError: bad input shape (8, 2)</p>
</blockquote>
<p>Can anyone help?</p>
<p>thanks</p>
| 1 | 2016-07-27T17:47:07Z | 38,623,218 | <p>You should only submit multiple arrays to a transform if it indeed takes multiple inputs (e.g. sklearn.decomposition.PCA(1) in the documentation). In your case the error ultimately comes from this line:</p>
<pre><code>(['pet', 'feat4'], LabelEncoder()),
</code></pre>
<p>Even this does not work:</p>
<pre><code>(['pet', 'feat4'], [LabelEncoder(), LabelEncoder()]),
</code></pre>
<p>You instead have to do something like this:</p>
<pre><code>mapper_good = DataFrameMapper([
(['pet'], LabelEncoder()),
(['feat4'], LabelEncoder()),
(['children'], StandardScaler()),
(['salary'], MinMaxScaler())
])
np.round(mapper_good.fit_transform(data.copy()),2)
</code></pre>
| 3 | 2016-07-27T21:08:48Z | [
"python",
"pandas",
"sklearn-pandas"
] |
Find duplicate values in list of tuples in Python | 38,619,907 | <p>How do I find duplicate values in the following list of tuples?</p>
<pre><code>[(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
</code></pre>
<p>I want to get a list like:</p>
<pre><code>[4081, 4082, 4086, 4090]
</code></pre>
<p>I have tried using <code>itemgetter</code> then group by option but didn't work.</p>
<p>How can one do this?</p>
| 0 | 2016-07-27T17:47:11Z | 38,620,034 | <p>Haven't tested this.... (edit: yup, it works)</p>
<pre><code>l = [(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
dup = []
for i, t1 in enumerate(l):
for t2 in l[i+1:]:
if t1[0]==t2[0]:
dup.extend([t1[1], t2[1]])
print dup
</code></pre>
| 0 | 2016-07-27T17:55:09Z | [
"python",
"list",
"tuples"
] |
Find duplicate values in list of tuples in Python | 38,619,907 | <p>How do I find duplicate values in the following list of tuples?</p>
<pre><code>[(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
</code></pre>
<p>I want to get a list like:</p>
<pre><code>[4081, 4082, 4086, 4090]
</code></pre>
<p>I have tried using <code>itemgetter</code> then group by option but didn't work.</p>
<p>How can one do this?</p>
| 0 | 2016-07-27T17:47:11Z | 38,620,038 | <p>Use an ordered dictionary with first items as its keys and list of second items as values (for duplicates which created using <code>dict.setdefalt()</code>) then pick up those that have a length more than 1: </p>
<pre><code>>>> from itertools import chain
>>> from collections import OrderedDict
>>> d = OrderedDict()
>>> for i, j in lst:
... d.setdefault(i,[]).append(j)
...
>>>
>>> list(chain.from_iterable([j for i, j in d.items() if len(j)>1]))
[4081, 4082, 4086, 4090]
</code></pre>
| 2 | 2016-07-27T17:55:16Z | [
"python",
"list",
"tuples"
] |
Find duplicate values in list of tuples in Python | 38,619,907 | <p>How do I find duplicate values in the following list of tuples?</p>
<pre><code>[(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
</code></pre>
<p>I want to get a list like:</p>
<pre><code>[4081, 4082, 4086, 4090]
</code></pre>
<p>I have tried using <code>itemgetter</code> then group by option but didn't work.</p>
<p>How can one do this?</p>
| 0 | 2016-07-27T17:47:11Z | 38,620,227 | <p>As an alternative, if you want to use <code>groupby</code>, here is a way to do it:</p>
<pre><code>In [1]: from itertools import groupby
In [2]: ts = [(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
In [3]: dups = []
In [4]: for _, g in groupby(ts, lambda x: x[0]):
...: grouped = list(g)
...: if len(grouped) > 1:
...: dups.extend([dup[1] for dup in grouped])
...:
In [5]: print(dups)
[4081, 4082, 4086, 4090]
</code></pre>
<p>You use <code>groupby</code> to group from the first element of the tuple, and add the duplicate value into the list from the tuple.</p>
| 1 | 2016-07-27T18:07:17Z | [
"python",
"list",
"tuples"
] |
Find duplicate values in list of tuples in Python | 38,619,907 | <p>How do I find duplicate values in the following list of tuples?</p>
<pre><code>[(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
</code></pre>
<p>I want to get a list like:</p>
<pre><code>[4081, 4082, 4086, 4090]
</code></pre>
<p>I have tried using <code>itemgetter</code> then group by option but didn't work.</p>
<p>How can one do this?</p>
| 0 | 2016-07-27T17:47:11Z | 38,621,510 | <p>Yet another approach (without any imports):</p>
<pre><code>In [896]: lot = [(1622, 4081), (1622, 4082), (1624, 4083), (1626, 4085), (1650, 4086), (1650, 4090)]
In [897]: d = dict()
In [898]: for key, value in lot:
...: d[key] = d.get(key, []) + [value]
...:
...:
In [899]: d
Out[899]: {1622: [4081, 4082], 1624: [4083], 1626: [4085], 1650: [4086, 4090]}
In [900]: [d[key] for key in d if len(d[key]) > 1]
Out[900]: [[4086, 4090], [4081, 4082]]
In [901]: sorted([num for lst in [d[key] for key in d if len(d[key]) > 1] for num in lst])
Out[901]: [4081, 4082, 4086, 4090]
</code></pre>
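<p>The same bookkeeping reads a little cleaner with <code>collections.defaultdict</code>, and flattening the surviving groups gives exactly the list the question asks for:</p>

```python
from collections import defaultdict

lot = [(1622, 4081), (1622, 4082), (1624, 4083),
       (1626, 4085), (1650, 4086), (1650, 4090)]

# Group second elements by their first element.
groups = defaultdict(list)
for key, value in lot:
    groups[key].append(value)

# Keep only groups with more than one member, then flatten and sort.
dups = sorted(v for values in groups.values() if len(values) > 1
              for v in values)
print(dups)  # [4081, 4082, 4086, 4090]
```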
| 0 | 2016-07-27T19:22:33Z | [
"python",
"list",
"tuples"
] |
Testing Post Request Using ModelFormSet in Django | 38,619,999 | <p>I'm new to django and testing, so I'm not sure if there is a more simple solution to this question.</p>
<p>I'm creating an assessment app with a rubric that the user can edit, submit and update. Each rubric has a preset number of row models that are connected to the rubric model via a foreign key relationship. The user should be able to update multiple row_choice fields in multiple row models and post the row_choice fields to the database. </p>
<p>To portray this in a template, I decided to use a ModelFormSet and iterate over the ModelFormSet in rubric.html. This works OK, but whenever I try to test this layout using TestCase, I receive the error
['ManagementForm data is missing or has been tampered with']. I understand this error because the test does not pass a post request to the view using rubric.html (where ManagementForm is located), but the application works in the browser because the django template renders ManagementForm as html which has no problem in the view. </p>
<p>Can you test a ModelFormSet in django using TestCase, or do you <em>have</em> to use LiveServerTestCase and Selenium? Is there a way to get the example test to pass and still test a post request (while using ModelFormSet)? Any help is greatly appreciated.</p>
<p>forms.py</p>
<pre><code>class RowForm(ModelForm):
class Meta:
model = Row
fields = ['row_choice']
class RubricForm(ModelForm):
class Meta:
model = Rubric
fields = ['name']
RowFormSet = modelformset_factory(Row, fields=('row_choice',), extra=0)
</code></pre>
<p>an example of a failing test:</p>
<pre><code>def test_rubric_page_can_take_post_request(self):
self.add_two_classes_to_semester_add_two_students_to_class_add_one_row()
request = HttpRequest()
request.method = "POST"
response = rubric_page(request, "EG5000", "12345678")
self.assertEqual(response.status_code, 302)
</code></pre>
<p>and the Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\python33\assessmenttoolstaging\source\rubricapp\tests\tests.py", line 240, in test_rubric_page_can_take_post_request
response = rubric_page(request, "EG5000", "12345678")
File "C:\python33\assessmenttoolstaging\source\rubricapp\views.py", line 52, in rubric_page
RowFormSetWeb.clean()
File "C:\Python33\lib\site-packages\django\forms\models.py", line 645, in clean
self.validate_unique()
File "C:\Python33\lib\site-packages\django\forms\models.py", line 651, in validate_unique
forms_to_delete = self.deleted_forms
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 205, in deleted_forms
if not self.is_valid() or not self.can_delete:
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 304, in is_valid
self.errors
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 278, in errors
self.full_clean()
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 325, in full_clean
for i in range(0, self.total_form_count()):
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 115, in total_form_count
return min(self.management_form.cleaned_data[TOTAL_FORM_COUNT], self.absolute_max)
File "C:\Python33\lib\site-packages\django\forms\formsets.py", line 97, in management_form
code='missing_management_form',
django.core.exceptions.ValidationError: ['ManagementForm data is missing or has been tampered with']
</code></pre>
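<p>This error means the POST body never contained the formset's bookkeeping fields, which the template normally renders via <code>{{ RowFormSetWeb.management_form }}</code>. A test that bypasses the template has to supply them itself. Below is a sketch of such a payload for a two-row formset — the field names follow Django's default <code>form-&lt;i&gt;-&lt;field&gt;</code> prefix convention, while the ids and row choices are made-up values:</p>

```python
# Hypothetical POST payload for a ModelFormSet with two Row forms.
# The four form-* keys at the top are the ManagementForm fields
# Django checks for before validating anything else.
post_data = {
    'form-TOTAL_FORMS': '2',
    'form-INITIAL_FORMS': '2',
    'form-MIN_NUM_FORMS': '0',
    'form-MAX_NUM_FORMS': '1000',
    'form-0-id': '1',
    'form-0-row_choice': 'Excellent',
    'form-1-id': '2',
    'form-1-row_choice': 'Proficient',
}

# In a test this would go through the Django test client rather than a
# bare HttpRequest, e.g. (URL is illustrative):
#   self.client.post('/EG5000/12345678/', post_data)
print(sorted(k for k in post_data if '-id' not in k and 'choice' not in k))
```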
<p>rubric_page view</p>
<pre><code>def rubric_page(request, edclass, studentname):
edClassSpaceAdded = re.sub('([A-Z]+)', r'\1 ', edclass)
enrollmentObj = Enrollment.objects.get(edclass__name=edClassSpaceAdded, student__lnumber=studentname)
rubricForClass = enrollmentObj.keyrubric.get()
rows = Row.objects.filter(rubric=rubricForClass)
student = Student.objects.get(lnumber=studentname)
if request.method == 'POST':
#TestCase cannot test this section of the view
RowFormSetWeb = RowFormSet(request.POST)
RowFormSetWeb.clean()
if RowFormSetWeb.is_valid():
savedFormset = RowFormSetWeb.save(commit=False)
for i in savedFormset:
i.rubric = rubricForClass
RowFormSetWeb.save()
return redirect('/'+ edclass + '/')
else:
return render(request, 'rubric.html', {'studentlnumber': student.lnumber,'studentname': student.lastname + ", " + student.firstname, 'RowFormSetWeb':RowFormSetWeb, 'rows':rows, 'edclass':edclass})
else:
RowFormSetWeb = RowFormSet(queryset=Row.objects.filter(rubric=rubricForClass))
return render(request, 'rubric.html', {'studentlnumber': student.lnumber,'studentname': student.lastname + ", " + student.firstname, 'RowFormSetWeb':RowFormSetWeb, 'rows':rows, 'edclass':edclass})
</code></pre>
<p>form section of rubric.html </p>
<pre><code> <h3 id="rubricheader">TODO Pull model into view</h3>
<form method="post" action= {% url 'rubricpage' edclass=edclass studentname=studentlnumber %}>
<table border="1">
<!-- TODO fix this so that it pulls from forms.py -->
<tr>
<th></th>
<th>Excellent</th>
<th>Proficient</th>
<th>Sub-par</th>
<th>Abysmal</th>
</tr>
{{ RowFormSetWeb.management_form }}
{% for form in RowFormSetWeb %}
{{ form.id }}
<tr>
<td>{{ form.row_choice }}</td><td>{{ form.excellenttext }}</td><td>{{ form.proficienttext }}</td><td>{{ form.satisfactorytext }}<td>{{ form.unsatisfactorytext }}</td>
</tr>
{{ RowFormSetWeb.errors }}
{% endfor %}
</table>
<input name="submitbutton" type="submit" name="submit" value="Submit" id="rubricsubmit">
{% csrf_token %}
</form>
{% endblock %}
</code></pre>
<p>models.py</p>
<p>from django.db import models</p>
<pre><code>class Student(models.Model):
firstname = models.TextField(default="")
lastname = models.TextField(default="")
lnumber = models.TextField(default="")
def __str__(self):
return self.lnumber
#TODO add models
class EdClasses(models.Model):
name = models.TextField(default='')
students = models.ManyToManyField(Student, through="Enrollment")
def __str__(self):
return self.name
class Semester(models.Model):
text = models.TextField(default='201530')
classes = models.ManyToManyField(EdClasses)
def __str__(self):
return self.text
class Rubric(models.Model):
name = models.TextField(default="Basic Rubric")
def __str__(self):
return self.name
class Row(models.Model):
CHOICES = (
(None, 'Your string for display'),
('1','Excellent'),
('2','Proficient'),
('3','Awful'),
('4','The worst ever'),
)
rubric = models.ForeignKey(Rubric)
row_choice = models.CharField(max_length=20,choices=CHOICES, default="None", blank=True)
excellenttext = models.TextField(default="")
proficienttext = models.TextField(default="")
satisfactorytext = models.TextField(default="")
unsatisfactorytext = models.TextField(default="")
def __str__(self):
return self.row_choice
class Enrollment(models.Model):
student = models.ForeignKey(Student)
edclass = models.ForeignKey(EdClasses)
grade = models.TextField(default='')
keyrubric = models.ManyToManyField(Rubric)
</code></pre>
<p>How the form is rendered in the browser:</p>
<pre><code><form action="/201530/EG5000/21743148/" method="post">
<table border="1">
<!-- TODO fix this so that it pulls from forms.py -->
<tr>
<th></th>
<th>Excellent</th>
<th>Proficient</th>
<th>Sub-par</th>
<th>Abysmal</th>
</tr>
<input id="id_form-TOTAL_FORMS" name="form-TOTAL_FORMS" type="hidden" value="2"/><input id="id_form-INITIAL_FORMS" name="form-INITIAL_FORMS" type="hidden" value="2"/><input id="id_form-MIN_NUM_FORMS" name="form-MIN_NUM_FORMS" type="hidden" value="0"/><input id="id_form-MAX_NUM_FORMS" name="form-MAX_NUM_FORMS" type="hidden" value="1000"/>
<input id="id_form-0-id" name="form-0-id" type="hidden" value="3"/>
<tr>
<td><select id="id_form-0-row_choice" name="form-0-row_choice">
<option value="">Your string for display</option>
<option value="1">Excellent</option>
<option value="2">Proficient</option>
<option value="3">Awful</option>
<option value="4">The worst ever</option>
</select></td>
<td>THE BEST!</td>
<td>THE SECOND BEST!</td>
<td>THE THIRD BEST!</td>
<td>YOURE LAST</td>
</tr>
[]
<input id="id_form-1-id" name="form-1-id" type="hidden" value="4"/>
<tr>
<td><select id="id_form-1-row_choice" name="form-1-row_choice">
<option value="">Your string for display</option>
<option value="1">Excellent</option>
<option value="2">Proficient</option>
<option value="3">Awful</option>
<option value="4">The worst ever</option>
</select></td>
<td>THE GREATEST!</td>
<td>THE SECOND BEST!</td>
<td>THE THIRD BEST!</td>
<td>YOURE LAST</td>
</tr>
[]
</table>
<input id="rubricsubmit" name="submit" type="submit" value="Submit">
<input name="csrfmiddlewaretoken" type="hidden" value="0OeU2n0v8ooXHBxdUfi26xxqMIdrA50L"/>
</input></form>
</code></pre>
| 0 | 2016-07-27T17:52:40Z | 38,620,214 | <p>One approach I've used in the past, albeit not a particularly nice one, is to use the client to get the rendered form, then use something like BeautifulSoup to parse out all the form data from that and update where necessary before posting back. That way you will get all the hidden and prepopulated fields, so you can be sure your tests behave the same way as a user would.</p>
| 0 | 2016-07-27T18:06:31Z | [
"python",
"django",
"django-forms",
"django-templates"
] |
Python How to concat and retrieve actual defined data? | 38,620,035 | <p>I've been searching but I just can't find anything that describes my problem. I'm just learning Python, so I might not even know how to phrase the question properly.</p>
<p>I'm trying to randomize a selection of defined variables, but I cannot figure out how to retrieve those variables. For example:</p>
<pre><code>import random
user1 = "usernamehere1"
userkey1 = "3097fds09aj4023jr30mf2ag2"
user2 = "usernamehere2"
userkey2 = "09asfh34907fsenk32498fgg9"
user3 = "usernamehere3"
userkey3 = "234kn34bnero8wn34lnkjwi34"
numbers = ["1", "2", "3"]
user_number = random.choice(numbers)
user = "user" + user_number
wif = "userkey" + user_number
print(user)
print(wif)
</code></pre>
<p>Instead of getting: (say if it selects "2" as the random number):</p>
<ul>
<li>usernamehere2</li>
<li>09asfh34907fsenk32498fgg9</li>
</ul>
<p>I just get:</p>
<ul>
<li>user2</li>
<li>userkey2</li>
</ul>
<p>Any guesses as to what I'm doing wrong?</p>
| 0 | 2016-07-27T17:55:09Z | 38,620,142 | <p>Take a look at this post: <a href="http://stackoverflow.com/questions/19122345/to-convert-string-to-variable-name-in-python">To convert string to variable name in python</a>. You could do what you want by using <code>exec</code> to turn strings into variable names, but this is not safe and definitely not recommended. As that post explains, you should use a dictionary instead, such as <code>users["user" + user_number]</code>.</p>
<pre><code>import random
user1 = "usernamehere1"
userkey1 = "3097fds09aj4023jr30mf2ag2"
user2 = "usernamehere2"
userkey2 = "09asfh34907fsenk32498fgg9"
user3 = "usernamehere3"
userkey3 = "234kn34bnero8wn34lnkjwi34"
users = {"user1": (user1, userkey1),
         "user2": (user2, userkey2),
         "user3": (user3, userkey3)}

numbers = ["1", "2", "3"]
user_number = random.choice(numbers)

name, key = users["user" + user_number]
print(name)
print(key)
</code></pre>
| 1 | 2016-07-27T18:01:36Z | [
"python"
] |
Python How to concat and retrieve actual defined data? | 38,620,035 | <p>I've been searching but I just can't find anything that describes my problem. I'm just learning Python, so I might not even know how to phrase the question properly.</p>
<p>I'm trying to randomize a selection of defined variables, but I cannot figure out how to retrieve those variables. For example:</p>
<pre><code>import random
user1 = "usernamehere1"
userkey1 = "3097fds09aj4023jr30mf2ag2"
user2 = "usernamehere2"
userkey2 = "09asfh34907fsenk32498fgg9"
user3 = "usernamehere3"
userkey3 = "234kn34bnero8wn34lnkjwi34"
numbers = ["1", "2", "3"]
user_number = random.choice(numbers)
user = "user" + user_number
wif = "userkey" + user_number
print(user)
print(wif)
</code></pre>
<p>Instead of getting: (say if it selects "2" as the random number):</p>
<ul>
<li>usernamehere2</li>
<li>09asfh34907fsenk32498fgg9</li>
</ul>
<p>I just get:</p>
<ul>
<li>user2</li>
<li>userkey2</li>
</ul>
<p>Any guesses as to what I'm doing wrong?</p>
| 0 | 2016-07-27T17:55:09Z | 38,620,168 | <p>The comments are correct; I just wanted to share a way to implement them. You can think of a dictionary as a look-up table where you assign key-->value pairs. There is a lot of info <a href="http://www.tutorialspoint.com/python/python_dictionary.htm" rel="nofollow">here</a>.</p>
<pre><code>import random
users = {"1":"usernamehere1",
"2":"usernamehere2",
"3":"usernamehere3"}
keys = {"1":"3097fds09aj4023jr30mf2ag2",
"2":"09asfh34907fsenk32498fgg9",
"3":"234kn34bnero8wn34lnkjwi34"}
numbers = ["1", "2", "3"]
user_number = random.choice(numbers)
user = users[user_number]
wif = keys[user_number]
print(user)
print(wif)
</code></pre>
| 0 | 2016-07-27T18:03:13Z | [
"python"
] |
Python How to concat and retrieve actual defined data? | 38,620,035 | <p>I've been searching but I just can't find anything that describes my problem. I'm just learning Python, so I might not even know how to phrase the question properly.</p>
<p>I'm trying to randomize a selection of defined variables, but I cannot figure out how to retrieve those variables. For example:</p>
<pre><code>import random
user1 = "usernamehere1"
userkey1 = "3097fds09aj4023jr30mf2ag2"
user2 = "usernamehere2"
userkey2 = "09asfh34907fsenk32498fgg9"
user3 = "usernamehere3"
userkey3 = "234kn34bnero8wn34lnkjwi34"
numbers = ["1", "2", "3"]
user_number = random.choice(numbers)
user = "user" + user_number
wif = "userkey" + user_number
print(user)
print(wif)
</code></pre>
<p>Instead of getting: (say if it selects "2" as the random number):</p>
<ul>
<li>usernamehere2</li>
<li>09asfh34907fsenk32498fgg9</li>
</ul>
<p>I just get:</p>
<ul>
<li>user2</li>
<li>userkey2</li>
</ul>
<p>Any guesses as to what I'm doing wrong?</p>
| 0 | 2016-07-27T17:55:09Z | 38,620,183 | <pre><code>import random
users = [
{
'user':"usernamehere1",
'userkey':"3097fds09aj4023jr30mf2ag2"
},
{
'user':"usernamehere2",
'userkey':"09asfh34907fsenk32498fgg9"
},
{
'user':"usernamehere3",
'userkey':"234kn34bnero8wn34lnkjwi34"
}
]
user_number = random.randrange(len(users))  # random index into the list
print(users[user_number]['user'])
print(users[user_number]['userkey'])
</code></pre>
| 0 | 2016-07-27T18:04:15Z | [
"python"
] |
Python How to concat and retrieve actual defined data? | 38,620,035 | <p>I've been searching but I just can't find anything that describes my problem. I'm just learning Python, so I might not even know how to phrase the question properly.</p>
<p>I'm trying to randomize a selection of defined variables, but I cannot figure out how to retrieve those variables. For example:</p>
<pre><code>import random
user1 = "usernamehere1"
userkey1 = "3097fds09aj4023jr30mf2ag2"
user2 = "usernamehere2"
userkey2 = "09asfh34907fsenk32498fgg9"
user3 = "usernamehere3"
userkey3 = "234kn34bnero8wn34lnkjwi34"
numbers = ["1", "2", "3"]
user_number = random.choice(numbers)
user = "user" + user_number
wif = "userkey" + user_number
print(user)
print(wif)
</code></pre>
<p>Instead of getting: (say if it selects "2" as the random number):</p>
<ul>
<li>usernamehere2</li>
<li>09asfh34907fsenk32498fgg9</li>
</ul>
<p>I just get:</p>
<ul>
<li>user2</li>
<li>userkey2</li>
</ul>
<p>Any guesses as to what I'm doing wrong?</p>
| 0 | 2016-07-27T17:55:09Z | 38,620,433 | <p>You can use this with just one dictionary:</p>
<pre><code>import random
users = {
"usernamehere1": "3097fds09aj4023jr30mf2ag2",
"usernamehere2": "09asfh34907fsenk32498fgg9",
"usernamehere3": "234kn34bnero8wn34lnkjwi34"
}
user = random.choice(list(users))
print(user)
print(users[user])
</code></pre>
| 0 | 2016-07-27T18:18:44Z | [
"python"
] |
Removing documents in Gensim | 38,620,124 | <p>I'm using Gensim for an NLP task, and currently I have a corpus which includes empty documents. I don't want to rerun my code, although that is an option; I would just like to remove the documents that don't have any content. The documents are already saved as a TF-IDF corpus, and I was wondering if there is a way to remove the documents that are empty. I can figure out which documents are empty, but the corpus file is an iterator rather than a data structure such as a list. Thanks,</p>
<p>Cameron</p>
| 0 | 2016-07-27T18:00:35Z | 38,621,889 | <p>You might try converting the corpus to a numpy matrix, like so:</p>
<pre><code>numpy_matrix = gensim.matutils.corpus2dense(corpus, num_terms=number_of_corpus_features)
</code></pre>
<p>Then remove the appropriate columns (those with all zero entries). Then convert back to a gensim corpus to continue:</p>
<pre><code>corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
</code></pre>
<p>If you plan on building any more corpora in your current context, it might be a good idea to modify the corpus creation process so you don't have to do this every time, but I'm sure you've thought of that.</p>
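The column-removal step in the middle can be sketched with plain NumPy boolean indexing (the toy matrix below is a stand-in for the real term-document matrix; as the answer notes, `corpus2dense` puts documents in columns):

```python
import numpy as np

# Toy dense matrix: rows are terms, columns are documents.
# Column 1 is an empty document (all zeros).
numpy_matrix = np.array([[1.0, 0.0, 2.0],
                         [0.0, 0.0, 3.0]])

# Keep only the columns that have at least one non-zero entry.
numpy_matrix = numpy_matrix[:, numpy_matrix.any(axis=0)]
print(numpy_matrix.shape)  # (2, 2)
```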
| 1 | 2016-07-27T19:44:51Z | [
"python",
"python-2.7",
"nlp",
"gensim"
] |
Skimage Region Adjacency Graph (RAG) from quickshift segmentation | 38,620,129 | <p>I'm trying to create a Region Adjacency Graph from after segmenting an image using the tools in the <code>Skimage</code> package. Using the examples in the documentation I can segment an image using SLIC and create the RAG successfully.</p>
<pre class="lang-py prettyprint-override"><code>from skimage import data
from skimage import segmentation
from skimage.future import graph
import matplotlib.pyplot as plt
#Load Image
img = data.coffee()
#Segment image
labels = segmentation.slic(img, compactness=30, n_segments=800)
#Create RAG
g = graph.rag_mean_color(img, labels)
#Draw RAG
gplt = graph.draw_rag(labels, g, img)
plt.imshow(gplt)
</code></pre>
<p><a href="http://i.stack.imgur.com/Kffnm.png" rel="nofollow"><img src="http://i.stack.imgur.com/Kffnm.png" alt="Successful RAG"></a></p>
<p>However, if I use either <code>segmentation.quickshift</code> or <code>segmentation.felzenszwalb</code> to segment the image and then create the RAG, I get an error at <code>draw_rag()</code>.</p>
<pre class="lang-py prettyprint-override"><code>labels = segmentation.quickshift(img, kernel_size=5, max_dist=5, ratio=0.5)
g = graph.rag_mean_color(img, labels)
gplt = graph.draw_rag(labels, g, img)
labels = segmentation.felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
g = graph.rag_mean_color(img, labels)
gplt = graph.draw_rag(labels, g, img)
</code></pre>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 3032, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-34-c0784622a6c7>", line 1, in <module>
gplt = graph.draw_rag(labels, g, img)
File "C:\Anaconda\lib\site-packages\skimage\future\graph\rag.py", line 429, in draw_rag
out[circle] = node_color
IndexError: index 600 is out of bounds for axis 1 with size 600
</code></pre>
<p>The documentation seems to suggest that RAG methods should be compatible with segments from any of these methods, so I'm not sure if I'm doing something wrong, there's a bug, or RAG can only be used with the SLIC segmentation method. Any suggestions?</p>
| 1 | 2016-07-27T18:00:48Z | 38,623,398 | <p>Seems that this was an issue in <code>Skimage 0.11.2</code> but is fixed in version <code>0.12.3</code>.</p>
| 0 | 2016-07-27T21:21:18Z | [
"python",
"python-2.7",
"classification",
"image-segmentation",
"skimage"
] |
How to import a function from python file by Boost.Python | 38,620,134 | <p>I am totally new to Boost.Python.
I have reviewed many recommendations for using Boost.Python with Python, but I still find it hard to understand and have not found a solution.</p>
<p>What I want is to import a function or class directly from a Python source file.</p>
<p>Example File:
Main.cpp
MyPythonClass.py</p>
<p>Let's say there is a "Dog" class in "MyPythonClass.py" with a "bark()" function. How do I call it and pass arguments from C++? </p>
<p>I have no idea what I should do!
Please help me!</p>
| 1 | 2016-07-27T18:01:09Z | 38,620,679 | <p>Boost python is used to call cplusplus functions from a python source. Pretty much like the Perl xs module.</p>
<p>If you have a function say bark() in main.cpp, you can use boost python to convert this main.cpp into a python callable module.</p>
<p>Then from python script(assuming link output file is main.so):</p>
<pre><code>import main
main.bark()
</code></pre>
| 0 | 2016-07-27T18:32:53Z | [
"python",
"c++",
"boost-python"
] |
How to import a function from python file by Boost.Python | 38,620,134 | <p>I am totally new to Boost.Python.
I have reviewed many recommendations for using Boost.Python with Python, but I still find it hard to understand and have not found a solution.</p>
<p>What I want is to import a function or class directly from a Python source file.</p>
<p>Example File:
Main.cpp
MyPythonClass.py</p>
<p>Let's say there is a "Dog" class in "MyPythonClass.py" with a "bark()" function. How do I call it and pass arguments from C++? </p>
<p>I have no idea what I should do!
Please help me!</p>
| 1 | 2016-07-27T18:01:09Z | 38,702,099 | <p>When one needs to call Python from C++, and C++ owns the main function, then one must <em>embed</em> the Python interpreter within the C++ program. The Boost.Python API is not a complete wrapper around the Python/C API, so one may find the need to directly invoke parts of the Python/C API. Nevertheless, Boost.Python's API can make interoperability easier. Consider reading the official Boost.Python <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/tutorial/tutorial/embedding.html" rel="nofollow">embedding tutorial</a> for more information.</p>
<hr>
<p>Here is a basic skeleton for a C++ program that embeds Python:</p>
<pre class="lang-cpp prettyprint-override"><code>int main()
{
// Initialize Python.
Py_Initialize();
namespace python = boost::python;
try
{
... Boost.Python calls ...
}
catch (const python::error_already_set&)
{
PyErr_Print();
return 1;
}
// Do not call Py_Finalize() with Boost.Python.
}
</code></pre>
<p>When embedding Python, it may be necessary to augment the <a href="https://docs.python.org/2/tutorial/modules.html#the-module-search-path" rel="nofollow">module search path</a> via <a href="https://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH" rel="nofollow"><code>PYTHONPATH</code></a> so that modules can be imported from custom locations.</p>
<pre class="lang-cpp prettyprint-override"><code>// Allow Python to load modules from the current directory.
setenv("PYTHONPATH", ".", 1);
// Initialize Python.
Py_Initialize();
</code></pre>
<p>Often times, the Boost.Python API provides a way to write C++ code in a Python-ish manner. The following example <a href="http://coliru.stacked-crooked.com/a/e926ffe1ee46fd5a" rel="nofollow">demonstrates</a> embedding a Python interpreter in C++, and having C++ import a <code>MyPythonClass</code> Python module from disk, instantiate an instance of <code>MyPythonClass.Dog</code>, and then invoking <code>bark()</code> on the <code>Dog</code> instance:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <boost/python.hpp>
#include <cstdlib> // setenv
int main()
{
// Allow Python to load modules from the current directory.
setenv("PYTHONPATH", ".", 1);
// Initialize Python.
Py_Initialize();
namespace python = boost::python;
try
{
// >>> import MyPythonClass
python::object my_python_class_module = python::import("MyPythonClass");
// >>> dog = MyPythonClass.Dog()
python::object dog = my_python_class_module.attr("Dog")();
// >>> dog.bark("woof");
dog.attr("bark")("woof");
}
catch (const python::error_already_set&)
{
PyErr_Print();
return 1;
}
// Do not call Py_Finalize() with Boost.Python.
}
</code></pre>
<p>Given a <code>MyPythonClass</code> module that contains:</p>
<pre class="lang-python prettyprint-override"><code>class Dog():
def bark(self, message):
print "The dog barks: {}".format(message)
</code></pre>
<p>The above program outputs:</p>
<pre class="lang-none prettyprint-override"><code>The dog barks: woof
</code></pre>
| 0 | 2016-08-01T14:59:23Z | [
"python",
"c++",
"boost-python"
] |
Querying a list to a manytomany field in django returning more results than expected | 38,620,247 | <p>I wrote the following filter based off the answer provided here: <a href="http://stackoverflow.com/questions/8618068/django-filter-queryset-in-for-every-item-in-list/8637972#8637972">Django filter queryset __in for *every* item in list</a></p>
<pre><code>Distinct_Alert.objects.filter(entities__in=relevant_entities, alert_type=alert_type).annotate(num_entities=Count('entities')).filter(
num_entities=len(relevant_entities))
</code></pre>
<p>The issue I'm running into is that it seems to be filtering inappropriately: there are times when it matches sublists. I was using get() to fetch the distinct alert, but I noticed that some calls error out because they return multiple matches. Here's the culprit: </p>
<pre><code>[2016-07-27 18:02:23,473: WARNING/Worker-4] [<Entity: DOGE>, <Entity: 8.8.8.8>]
[2016-07-27 18:02:23,474: WARNING/Worker-4] [<Entity: potato>, <Entity: DOGE>, <Entity: 8.8.8.8>]
[2016-07-27 18:02:23,475: WARNING/Worker-4] [<Entity: desktop_potato>, <Entity: DOGE>, <Entity: 8.8.8.8>]
</code></pre>
<p>My entities list should contain only </p>
<pre><code>[<Entity: DOGE>, <Entity: 8.8.8.8>]
</code></pre>
<p>but it's somehow matching on the other two. Any help would be appreciated</p>
<p>The temporary hack that I came up with is this:</p>
<pre><code>for alert in distinct_alert_query.all():
if alert.entities.all().count() == len(relevant_entities) and all([entity in relevant_entities for entity in alert.entities.all()]):
distinct_alert = alert
break
</code></pre>
<p>where distinct_alert_query refers to the really long query mentioned above. The issue with this is that if the query's matching on Distinct_Alerts with a set of entities larger than relevant_entities it will break :( </p>
<p>models:</p>
<pre><code>class Distinct_Alert(models.Model):
#alert_type = models.ForeignKey(Alert_Type, on_delete=models.CASCADE) for the sake of this problem and the filter, this isn't really needed.
entities = models.ManyToManyField(to='Entity', through='Entity_To_Alert_Map')
class Entity(models.Model):
label = models.CharField(max_length=700, blank=False)
#entity_type = models.ForeignKey(Entity_Type_Label) not necessary for this problem
related_entities = models.ManyToManyField('self')
class Entity_To_Alert_Map(models.Model):
entity = models.ForeignKey(Entity, on_delete=models.CASCADE)
distinct_alert = models.ForeignKey(Distinct_Alert, on_delete=models.CASCADE)
entity_alert_relationship_label = models.ForeignKey(Entity_Alert_Relationship_Label, on_delete=models.CASCADE)
class Meta:
unique_together = ('entity', 'distinct_alert', 'entity_alert_relationship_label')
</code></pre>
| 1 | 2016-07-27T18:08:17Z | 38,621,726 | <p>Try this:</p>
<pre><code>from django.db.models import IntegerField, Case, When, F
Distinct_Alert.objects.filter(
alert_type=alert_type
).annotate(
num_entities=Count('entities'),
num_relevant_entities=Count(
Case(When(entities__in=relevant_entities, then=1),
default=None,
output_field=IntegerField()),
),
).filter(
num_entities=F('num_relevant_entities'),
num_relevant_entities=len(relevant_entities),
)
</code></pre>
<p>Your query:</p>
<pre><code>Distinct_Alert.objects.filter(
entities__in=relevant_entities,
alert_type=alert_type
).annotate(
num_entities=Count('entities')
).filter(
num_entities=len(relevant_entities)
)
</code></pre>
<p><a href="https://docs.djangoproject.com/en/1.9/topics/db/aggregation/#order-of-annotate-and-filter-clauses" rel="nofollow">Order of <code>annotate()</code> and <code>filter()</code> matters</a>.</p>
| 1 | 2016-07-27T19:35:56Z | [
"python",
"django"
] |
Smooth-filtering image border to create circular image (fourier preparation) | 38,620,288 | <p>I learned how to do it yet couldn't remember now, and am failing to get something useful by searching. I have an image, and would like to use this image to create a larger image of size 3x3 by simply stacking them. The border had to be smoothed so that right edge of the image translates seamlessly to the left edge of the image. </p>
<p><a href="http://i.stack.imgur.com/un1vY.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/un1vY.jpg" alt="enter image description here"></a></p>
<p>Applying convolutional filter should be able to do it as it deems the image as circular, but how exactly should we do it? We could speak Matlab (or any other language you are comfortable speaking/typing). </p>
<p>EDIT 1:
I prefer only applying the filter to the border, while preserving the original image as much as possible.</p>
<p>EDIT 2:
I tried a gaussian filter. While it blurs the whole image, the edges became more salient compared to the blurry middle.
<code>imshow(repmat(imfilter(imread('un1vY.jpg'), fspecial('gaussian',64,8), 'circular'), [3 3]))</code></p>
<p><a href="http://i.stack.imgur.com/FqISl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/FqISl.jpg" alt="enter image description here"></a></p>
| 0 | 2016-07-27T18:10:31Z | 38,622,716 | <p>You can try the following solution:<br>
Leave some overlapping area between two attached images.<br>
Do linear interpolation between two images in the overlapped area.<br>
Linear interpolation: h*A + (1-h)*B when h goes from 0 to 1.</p>
<p>Illustration of h (replicated to create an image) for attaching images horizontally:<br>
<a href="http://i.stack.imgur.com/4T0Yw.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/4T0Yw.jpg" alt="enter image description here"></a></p>
<p>I took 100 pixels wide overlap.<br>
The following code attaches two images horizontally: </p>
<pre><code>I = imread('un1vY.jpg');
I = double(I)/255; %Convert uint8 to double
h = linspace(0,1,100); %Create ramp from 0 to 1 of 100 elements.
im_w = size(I, 2); %Image width
im_h = size(I, 1); %Image height
Hy = repmat(h, [size(I, 1), 1, 3]); %Replicate h to fit image height.
J = zeros(im_h, im_w*2-100, 3);
J(:, 1:im_w-100, :) = I(:, 1:im_w-100, :); %Fill pixels from the left to overlap.
J(:, im_w+1:end, :) = I(:, 101:end, :); %Fill pixels from the right of overlap.
%Fill overlap with linear intepolation between right side of left image and left side of right image.
J(:, im_w-99:im_w, :) = I(:, end-99:end, :).*(1-Hy) + I(:, 1:100, :).*Hy;
J = uint8(J*255); %Convert back to uint8
</code></pre>
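For reference, the same overlap-and-cross-fade idea can be sketched in NumPy (this is an assumed translation of the MATLAB above, not code from the answer; the 100-pixel overlap is kept as a parameter):

```python
import numpy as np

def blend_horizontal(left, right, overlap=100):
    """Attach two equal-height float images side by side, linearly
    cross-fading over `overlap` columns (h*A + (1-h)*B as above)."""
    h = np.linspace(0.0, 1.0, overlap)            # ramp 0 -> 1 over the overlap
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((left.shape[0], out_w, left.shape[2]))
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]   # left, minus overlap
    out[:, left.shape[1]:] = right[:, overlap:]             # right, minus overlap
    out[:, left.shape[1] - overlap:left.shape[1]] = (       # cross-fade region
        left[:, -overlap:] * (1 - h)[None, :, None]
        + right[:, :overlap] * h[None, :, None])
    return out

# Demo with flat-colored stand-ins for real images.
out = blend_horizontal(np.ones((2, 150, 3)), np.zeros((2, 150, 3)))
print(out.shape)  # (2, 200, 3)
```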
<p>This is not the perfect solution, and not a complete one.<br>
Sorry, but I leave you (or another SO user) to finish the work.</p>
<p>Result:<br>
<a href="http://i.stack.imgur.com/EA4wd.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/EA4wd.jpg" alt="enter image description here"></a></p>
| 0 | 2016-07-27T20:35:24Z | [
"python",
"matlab",
"image-processing",
"filtering",
"convolution"
] |
Smooth-filtering image border to create circular image (fourier preparation) | 38,620,288 | <p>I learned how to do this once but can't remember now, and searching hasn't turned up anything useful. I have an image, and I would like to use it to create a larger 3x3 image by simply tiling it. The border has to be smoothed so that the right edge of the image translates seamlessly to the left edge. </p>
<p><a href="http://i.stack.imgur.com/un1vY.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/un1vY.jpg" alt="enter image description here"></a></p>
<p>Applying a convolutional filter should be able to do it, since such a filter can treat the image as circular, but how exactly should we do it? MATLAB is fine (or any other language you are comfortable with). </p>
<p>EDIT 1:
I would prefer to apply the filter only to the border, preserving the original image as much as possible.</p>
<p>EDIT 2:
I tried a Gaussian filter. While it blurs the whole image, the edges became more salient compared to the blurry middle.
<code>imshow(repmat(imfilter(imread('un1vY.jpg'), fspecial('gaussian',64,8), 'circular'), [3 3]))</code></p>
<p><a href="http://i.stack.imgur.com/FqISl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/FqISl.jpg" alt="enter image description here"></a></p>
| 0 | 2016-07-27T18:10:31Z | 38,623,608 | <p>Here is a full 3x3 sample: </p>
<pre><code>function Test()
close all
I = imread('un1vY.jpg');
I = double(I)/255; %Convert uint8 to double
J = HorizFuse(I);
Jtag = cat(3, J(:,:,1)', J(:,:,2)', J(:,:,3)'); %Transpose across 2'nd dimension.
K = HorizFuse(Jtag);
K = cat(3, K(:,:,1)', K(:,:,2)', K(:,:,3)'); %Transpose back
K = uint8(K*255); %Convert back to uint8
figure;imshow(K);
imwrite(K, 'K.jpg');
end
function K = HorizFuse(I)
h = linspace(0,1,100); %Create ramp from 0 to 1 of 100 elements.
im_w = size(I, 2); %Image width
im_h = size(I, 1); %Image height
Hy = repmat(h, [size(I, 1), 1, 3]); %Replicate h to fit image height.
J = zeros(im_h, im_w*2-100, 3);
J(:, 1:im_w-100, :) = I(:, 1:im_w-100, :); %Fill pixels from the left to overlap.
J(:, im_w+1:end, :) = I(:, 101:end, :); %Fill pixels from the right of overlap.
%Fill overlap with linear intepolation between right side of left image and left side of right image.
J(:, im_w-99:im_w, :) = I(:, end-99:end, :).*(1-Hy) + I(:, 1:100, :).*Hy;
K = zeros(im_h, im_w*3-100*2, 3);
K(1:size(J,1), 1:size(J,2), :) = J;
K(1:size(J,1), end-(im_w+100)+1:end, :) = J(1:size(J,1), end-(im_w+100)+1:end, :);
end
</code></pre>
<p>Result:<br>
<a href="http://i.stack.imgur.com/H6irH.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/H6irH.jpg" alt="enter image description here"></a></p>
| 1 | 2016-07-27T21:36:46Z | [
"python",
"matlab",
"image-processing",
"filtering",
"convolution"
] |
Placing scraped data into a list | 38,620,330 | <p>I want to place the data I have scraped from my website and place it into an array for further use.</p>
<pre><code>response = requests.get(url)
html = response.content
# Taking content from above and searching through it find certain elements with certain attributes
soup = BeautifulSoup(html, "html.parser")
# Looking at table with id=mlb
table = soup.select_one("#mlb").find_next("table")
team =[]
# Look at each row in table find all rows with class name team
for tr in table.find_all("tr",class_="team"):
# For each column in table row find all columns with
for td in tr.find_all("td",["name"]):
team.append(td)
#Need to place all items with tag "name" in an array called team.
</code></pre>
<p>Essentially, I need to take all the data extracted from the "mlb" table, from rows with class "team", in columns with class "name". I then want to place each name in a list called team.</p>
<p>Any help would be greatly appreciated.</p>
| -2 | 2016-07-27T18:13:12Z | 38,620,901 | <p>I used td.text.strip() in the code where it says team.append(td) and got my desired response.</p>
<p>So it looks like this:</p>
<pre><code>team.append(td.text.strip())
</code></pre>
<p>I believe this worked because the data I was trying to append was nested in other tags, so <code>team.append(td)</code> was appending the whole tag rather than its text.</p>
<p>Thanks!</p>
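A tiny illustration of the difference, using hypothetical cell markup like the table rows being scraped:

```python
from bs4 import BeautifulSoup

# Hypothetical table cell: the visible text is nested inside a link.
html = '<td class="name"><a href="#">Yankees</a></td>'
td = BeautifulSoup(html, "html.parser").td

# Appending the tag itself keeps all the nested markup...
print(td)                # the whole <td ...> element
# ...while .text.strip() pulls out just the visible text.
print(td.text.strip())   # Yankees
```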
| 0 | 2016-07-27T18:46:32Z | [
"python",
"arrays",
"parsing",
"web-scraping",
"beautifulsoup"
] |
Calling an object method that was returned from a method as a method variable | 38,620,351 | <p>I have a list of things, I'll use files as an example. Each type of file needs to be processed in a different manner. I created a class, "MyFile" and rather than maintaining a giant if:elif:else: structure I created a dictionary with the file type as the key. This is stored as a class variable. </p>
<pre><code>class MyFile(object):
def process_xlsx(self):
#process file stuff for Excel
pass
def process_docx(self):
#process file stuff for Word
pass
def get_filetype(self):
try:
return self.FileTypes[file_extension]
except KeyError:
return None #filetype not handled... yet
FileTypes = {
"XLSX": ("Excel File", process_xlsx),
"DOCX": ("Word Document", process_docx)
}
</code></pre>
<p>Within a different class/object/module I instantiate the MyFile class and start stepping through the files. The MyFile objects are created properly.</p>
<pre><code>f = MyFile("full_file_path_and_name")
file_type = f.get_filetype()
</code></pre>
<p>file_type has a tuple with the key and a function object. Here is an example:
xls_0386.xlsx - ('Excel File', &lt;function process_xlsx at 0x...&gt;)
doc_0386.docx - ('Word File', &lt;function process_docx at 0x...&gt;)</p>
<p><strong>Question: How do I call the function that was returned from f.get_filetype()?</strong></p>
<pre><code>file_type[1]()
</code></pre>
<p>Returns: process_xlsx() missing 1 required positional argument: 'self'</p>
<pre><code>file_type[1]
</code></pre>
<p>Creates no errors, but never actually makes the call (breakpoints never reached within the process_???? methods)</p>
<p>So again, What is the syntax to call an object method in this manner?</p>
<p>I know this is a bit convoluted to explain with my specifics, and for that I apologize.</p>
<p>Thanks in advance.</p>
| 1 | 2016-07-27T18:14:08Z | 38,620,442 | <p>You call it with the first argument being what you want self to be. For example:</p>
<pre><code>f = MyFile("full_file_path_and_name")
file_type = f.get_filetype()
file_type[1](f)
</code></pre>
<p>This way, you can also use any other file with the function. If you want to only call the specific file with <code>file_type[1]</code>, have a function that is partially filled in, so <code>self</code> is already an argument. You can do this with <a href="https://docs.python.org/2/library/functools.html#functools.partial" rel="nofollow">functools.partial</a></p>
<pre><code>import functools

class MyFile(object):
    # ...
    def get_filetype(self):
        def partial(func):
            return functools.partial(func, self)
        # Reference the functions through the class so they resolve at call time
        return {"XLSX": ("Excel File", partial(MyFile.process_xlsx)),
                "DOCX": ("Word Document", partial(MyFile.process_docx))
                }.get(self.file_extension)
# Then you just do
f = MyFile("full_file_path_and_name")
file_type = f.get_filetype()
file_type[1]()
</code></pre>
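A minimal, self-contained sketch of both calling styles (the class, file name, and return value here are invented for illustration):

```python
import functools

class MyFile(object):
    def __init__(self, name):
        self.name = name

    def process_xlsx(self):
        return "processing %s as xlsx" % self.name

# A class-level table stores the *plain* function, as in the question.
file_types = {"XLSX": ("Excel File", MyFile.process_xlsx)}

f = MyFile("report.xlsx")
label, func = file_types["XLSX"]

print(func(f))                       # explicit self
print(functools.partial(func, f)())  # self pre-bound, same result
```

Both calls produce the same string, because `functools.partial` simply fixes the first positional argument in advance.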
| 0 | 2016-07-27T18:19:25Z | [
"python",
"python-3.x"
] |
Calling an object method that was returned from a method as a method variable | 38,620,351 | <p>I have a list of things, I'll use files as an example. Each type of file needs to be processed in a different manner. I created a class, "MyFile" and rather than maintaining a giant if:elif:else: structure I created a dictionary with the file type as the key. This is stored as a class variable. </p>
<pre><code>class MyFile(object):
def process_xlsx(self):
#process file stuff for Excel
pass
def process_docx(self):
#process file stuff for Word
pass
def get_filetype(self):
try:
return self.FileTypes[file_extension]
except KeyError:
return None #filetype not handled... yet
FileTypes = {
"XLSX": ("Excel File", process_xlsx),
"DOCX": ("Word Document", process_docx)
}
</code></pre>
<p>Within a different class/object/module I instantiate the MyFile class and start stepping through the files. The MyFile objects are created properly.</p>
<pre><code>f = MyFile("full_file_path_and_name")
file_type = f.get_filetype()
</code></pre>
<p>file_type has a tuple with the key and a function object. Here is an example:
xls_0386.xlsx - ('Excel File', &lt;function process_xlsx at 0x...&gt;)
doc_0386.docx - ('Word File', &lt;function process_docx at 0x...&gt;)</p>
<p><strong>Question: How do I call the function that was returned from f.get_filetype()?</strong></p>
<pre><code>file_type[1]()
</code></pre>
<p>Returns: process_xlsx() missing 1 required positional argument: 'self'</p>
<pre><code>file_type[1]
</code></pre>
<p>Creates no errors, but never actually makes the call (breakpoints never reached within the process_???? methods)</p>
<p>So again, What is the syntax to call an object method in this manner?</p>
<p>I know this is a bit convoluted to explain with my specifics, and for that I apologize.</p>
<p>Thanks in advance.</p>
| 1 | 2016-07-27T18:14:08Z | 38,620,447 | <p>Because those functions are declared inside a class, they have the argument <code>self</code> representing the object from that class that's calling the function. When you try to call that function separately, outside of the context of a <code>MyFile</code> object, there is no <code>self</code> variable anymore associated with the function call. That's why you get the error:</p>
<blockquote>
<p>missing 1 required positional argument: 'self'</p>
</blockquote>
<p>Just give it the instance of <code>MyFile</code> that you already created to use as the <code>self</code> variable:</p>
<pre><code>file_type[1](f)
</code></pre>
| 0 | 2016-07-27T18:19:39Z | [
"python",
"python-3.x"
] |
Calling an object method that was returned from a method as a method variable | 38,620,351 | <p>I have a list of things, I'll use files as an example. Each type of file needs to be processed in a different manner. I created a class, "MyFile" and rather than maintaining a giant if:elif:else: structure I created a dictionary with the file type as the key. This is stored as a class variable. </p>
<pre><code>class MyFile(object):
def process_xlsx(self):
#process file stuff for Excel
pass
def process_docx(self):
#process file stuff for Word
pass
def get_filetype(self):
try:
return self.FileTypes[file_extension]
except KeyError:
return None #filetype not handled... yet
FileTypes = {
"XLSX": ("Excel File", process_xlsx),
"DOCX": ("Word Document", process_docx)
}
</code></pre>
<p>Within a different class/object/module I instantiate the MyFile class and start stepping through the files. The MyFile objects are created properly.</p>
<pre><code>f = MyFile("full_file_path_and_name")
file_type = f.get_filetype()
</code></pre>
<p>file_type has a tuple with the key and a function object. Here is an example:
xls_0386.xlsx - ('Excel File', &lt;function process_xlsx at 0x...&gt;)
doc_0386.docx - ('Word File', &lt;function process_docx at 0x...&gt;)</p>
<p><strong>Question: How do I call the function that was returned from f.get_filetype()?</strong></p>
<pre><code>file_type[1]()
</code></pre>
<p>Returns: process_xlsx() missing 1 required positional argument: 'self'</p>
<pre><code>file_type[1]
</code></pre>
<p>Creates no errors, but never actually makes the call (breakpoints never reached within the process_???? methods)</p>
<p>So again, What is the syntax to call an object method in this manner?</p>
<p>I know this is a bit convoluted to explain with my specifics, and for that I apologize.</p>
<p>Thanks in advance.</p>
| 1 | 2016-07-27T18:14:08Z | 38,620,675 | <p>When you create a <code>class</code>, the methods you define in it are <em>just functions</em>. Only when you retrieve their names from an <em>instance</em> of the class, will they be bound to that instance, producing bound methods. This is done via the <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">descriptor protocol</a>.</p>
<p>So when you create your <code>FileTypes</code> dictionary:</p>
<pre><code>FileTypes = {
"XLSX": ("Excel File", process_xlsx),
"DOCX": ("Word Document", process_docx)
}
</code></pre>
<p>those are plain functions.</p>
<p>You have three options, basically:</p>
<ol>
<li><p>Don't create the dictionary at class definition time. Create it when you create an instance, so you can store <em>bound methods</em> in it:</p>
<pre><code>class MyFile(object):
def __init__(self):
self.FileTypes = {
"XLSX": ("Excel File", self.process_xlsx),
"DOCX": ("Word Document", self.process_docx)
}
</code></pre>
<p>Because this looks up the methods on <code>self</code>, they are bound.</p></li>
<li><p>Bind the method 'manually' when you look them up in <code>get_filetype()</code>:</p>
<pre><code>def get_filetype(self):
try:
return self.FileTypes[file_extension].__get__(self) # binding!
except KeyError:
return None #filetype not handled... yet
</code></pre></li>
<li><p>Return a <a href="https://docs.python.org/3/library/functools.html#functools.partial" rel="nofollow"><code>functools.partial()</code> object</a> with <code>self</code> as a positional argument instead of just the function:</p>
<pre><code>from functools import partial
def get_filetype(self):
try:
return partial(self.FileTypes[file_extension], self) # also a kind of binding
except KeyError:
return None #filetype not handled... yet
</code></pre></li>
</ol>
<p>All three approaches result in an object being returned to the caller that, when called, will pass in the right instance of <code>MyFile()</code> to the chosen function.</p>
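For instance, option 2 in miniature (the class and names are invented; the key point is that a plain function's <code>__get__</code> produces a bound method):

```python
class Widget(object):
    def __init__(self, label):
        self.label = label

    def describe(self):
        return "widget: %s" % self.label

table = {"DESC": Widget.describe}   # plain function stored in a dict

w = Widget("gear")
bound = table["DESC"].__get__(w)    # the same binding attribute access performs
print(bound())                      # -> widget: gear
```

Calling `bound()` now behaves exactly like `w.describe()`, because `__get__` performed the same binding step the attribute lookup would have.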
| 1 | 2016-07-27T18:32:36Z | [
"python",
"python-3.x"
] |
Python Qualtrics Data | 38,620,374 | <p>So, I am pulling data from the Qualtrics v3 API, and would like to pull the data every night. How could I pull all of the data one night and come back the next night and pull all of the new data. There is a parameter that the survey responds with called "lastModified" which is the last modified date.</p>
<p>Here is an example call:</p>
<pre><code>import urllib.request  # default module for Python 3.X
url = 'https://yourdatacenterid.qualtrics.com/API/v3/surveys'
header = {'X-API-TOKEN': ''}
req = urllib.request.Request(url,None,header) #generating the request object
handler = urllib.request.urlopen(req) #running the request object
print(handler.status) #print status code
print(handler.reason)
</code></pre>
<p>Here is an example of the JSON:</p>
<pre><code>{
"result": {
"elements": [
{
"id": "SV_0D54a3emdOh7bBH",
"name": "Imported Survey",
"ownerId": "UR_8CywXqaSNzzu1Bb",
"lastModified": "2013-10-22T20:12:33Z",
"isActive": true
},
...
],
    "nextPage": "https://yourdatacenterid.qualtrics.com/API/v3/surveys?offset=10"
},
"meta": {
"httpStatus": "200 - OK"
}
}
</code></pre>
 | 0 | 2016-07-27T18:15:27Z | 38,621,548 | <p>I think you want to use responseexports instead of surveys. Save the last response id retrieved each day. Then, you can use the lastResponseId parameter to specify where to start pulling the new data.</p>
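The nightly bookkeeping itself is straightforward; here is an illustrative sketch (the field names are made up, not the real Qualtrics payload) of keeping the last seen response id and pulling only what came after it:

```python
def new_responses(responses, last_seen_id):
    """responses: list of dicts ordered oldest -> newest.
    Return only the entries after last_seen_id (everything if it is unknown)."""
    ids = [r["responseId"] for r in responses]
    start = ids.index(last_seen_id) + 1 if last_seen_id in ids else 0
    return responses[start:]

night1 = [{"responseId": "R_1"}, {"responseId": "R_2"}]
night2 = night1 + [{"responseId": "R_3"}]

last_seen = night1[-1]["responseId"]      # saved after the first night's pull
print(new_responses(night2, last_seen))   # -> [{'responseId': 'R_3'}]
```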
| 0 | 2016-07-27T19:25:01Z | [
"python",
"qualtrics"
] |
What is this line of Theano doing? | 38,620,387 | <p>I don't understand whether we are iterating over y, and what is being done with the values in y. Are they part of T.log? Are they added, multiplied, or somehow combined with p_y_given_x? </p>
<pre><code>result = -T.mean(T.log(p_y_given_x)[T.arange(y1.shape[0]), y1])
print ("result1", result.eval())
print("_________________________")
print("y ", y2)
print("y.shape[0] ", y2.shape[0])
temp = (y2.shape[0], y2)
print("y.shape[0], y", temp)
temp2 = [T.arange(2), y2]
print("T.arange(y rows)", T.arange(2).eval())
print("[t.arange(2), y] [[0, 1], [1, 2]]")
print("T.log(p_y_given_x) ", (T.log(p_y_given_x)).eval())
print(-T.mean(T.log(p_y_given_x)).eval())
print("#########################")
result1 1.022485096286888
_________________________
y <TensorType(int64, matrix)>
y.shape[0] Subtensor{int64}.0
y.shape[0], y (Subtensor{int64}.0, <TensorType(int64, matrix)>)
T.arange(y rows) [0 1]
[t.arange(2), y] [[0, 1], [1, 2]]
T.log(p_y_given_x) [[-1.11190143 -0.91190143 -1.31190143]
[-1.13306876 -1.03306876 -1.13306876]]
1.10581842962
#########################
</code></pre>
| 0 | 2016-07-27T18:16:12Z | 38,620,882 | <p>I don't have enough reputation to comment so I'll post this as an answer.</p>
<p>Quoting verbatim from <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">here</a></p>
<pre><code>y.shape[0] is (symbolically) the number of rows in y, i.e.,
the number of examples (call it n) in the minibatch.
T.arange(y.shape[0]) is a symbolic vector which will contain [0,1,2,... n-1].
T.log(self.p_y_given_x) is a matrix of Log-Probabilities (call it LP)
with one row per example and one column per class.
LP[T.arange(y.shape[0]),y] is a vector v containing
[LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ..., LP[n-1,y[n-1]]],
and T.mean(LP[T.arange(y.shape[0]),y]) is the mean (across minibatch
examples) of the elements in v, i.e., the mean log-likelihood across
the minibatch.
</code></pre>
<p>The values in y are the labels for the examples in a minibatch. For example, let a minibatch of three examples have the <code>y</code> (label) vector as <code>[0,6,9]</code> (considering the handwritten digits example).
So, the <code>[LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ..., LP[n-1,y[n-1]]]</code> will be <code>LP[0,0], LP[1,6], LP[2,9]</code>.
Now, why are we interested in these numbers?
That is because you need these numbers to compute the likelihood, which is defined as the average of the log probabilities for the examples in the minibatch. For example <code>LP[0,0]</code> contains the log probability that the first example belongs to class 0. You want this number to be as high as possible, since that is the truth. The mean is then taken to find the average of these numbers. And the negative sign is because the loss is the negative of the likelihood. Does this help?</p>
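The same selection is easy to reproduce in plain numpy (a sketch with made-up probabilities, not Theano code):

```python
import numpy as np

p_y_given_x = np.array([[0.2, 0.5, 0.3],
                        [0.1, 0.1, 0.8]])  # one row of class probabilities per example
y = np.array([1, 2])                       # correct class of each example

LP = np.log(p_y_given_x)
picked = LP[np.arange(y.shape[0]), y]      # LP[0, y[0]] and LP[1, y[1]]
nll = -picked.mean()                       # negative mean log-likelihood
print(nll)
```

`picked` holds exactly the two log-probabilities assigned to the true classes, and `nll` is the quantity the Theano expression computes.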
| 1 | 2016-07-27T18:44:55Z | [
"python",
"neural-network",
"theano"
] |
Shape of numpy array comparison with empty list | 38,620,429 | <p>I have some problems understanding how python/numpy is casting array shapes when comparing to an empty list - which as far as I understand - is an implicit (element wise) comparison with False.</p>
<p>In the following example the shape decreases by one in the last dimension, if it is not greater than 1.</p>
<pre><code>z = N.zeros((2,2,1))
z == []
>> array([], shape=(2, 2, 0), dtype=bool)
z2 = N.zeros((2,2,2))
z2 ==[]
>> False
</code></pre>
<p>If, however, I compare with False directly, I get the expected output. </p>
<pre><code>z = N.zeros((2,2,1))
(z == False).shape
>> (2, 2, 1)
z2 = N.zeros((2,2,2))
(z2 == False).shape
>> (2, 2, 2)
</code></pre>
| 0 | 2016-07-27T18:18:23Z | 38,620,665 | <p>This is ordinary broadcasting at work. When you do</p>
<pre><code>z = N.zeros((2,2,1))
z == []
</code></pre>
<p><code>[]</code> is interpreted as an array of shape <code>(0,)</code>, and then the shapes are broadcast against each other:</p>
<pre><code>(2, 2, 1)
vs (0,)
</code></pre>
<p>Since <code>(0,)</code> is shorter than <code>(2, 2, 1)</code>, it gets expanded, as if the array were copied repeatedly:</p>
<pre><code> (2, 2, 1)
vs (2, 2, 0)
</code></pre>
<p>and since there's a 1 in the first shape and the other shape doesn't have a 1 there, the first shape gets "expanded" as if it were copied <em>zero</em> times:</p>
<pre><code> (2, 2, 0)
vs (2, 2, 0)
</code></pre>
<p>The comparison thus results in an array of booleans with shape <code>(2, 2, 0)</code>.</p>
<hr>
<p>When <code>z</code> has shape <code>(2, 2, 2)</code>:</p>
<pre><code>z2 = N.zeros((2,2,2))
z2 ==[]
</code></pre>
<p>broadcasting fails, since a length-2 axis and a length-0 axis can't be broadcast against each other. NumPy reports that it doesn't know how to perform the comparison:</p>
<pre><code>>>> numpy.zeros([2, 2, 2]).__eq__([])
NotImplemented
</code></pre>
<p>The list doesn't know how either, so Python falls back on the default comparison by identity, and gets a result of <code>False</code>.</p>
<hr>
<p>When you compare against <code>False</code>:</p>
<pre><code>z = N.zeros((2,2,1))
(z == False).shape
</code></pre>
<p><code>False</code> gets interpreted as an array of shape <strong><code>()</code></strong> - an empty shape! That gets broadcast out to shape <code>(2, 2, 1)</code>, as if copied out to an array full of <code>False</code>s, so the result has the same shape as <code>z</code>.</p>
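Both shapes are quick to verify:

```python
import numpy as np

z = np.zeros((2, 2, 1))

# [] is seen as shape (0,): (2, 2, 1) vs (0,) broadcasts to (2, 2, 0)
print((z == []).shape)      # -> (2, 2, 0)

# False is seen as shape (): () vs (2, 2, 1) broadcasts to (2, 2, 1)
print((z == False).shape)   # -> (2, 2, 1)
```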
| 2 | 2016-07-27T18:32:07Z | [
"python",
"arrays",
"list",
"numpy"
] |
Updating sqlite3 dababase from tkinter frame | 38,620,457 | <p>I am trying to update my sqlite3 database with this function but I can't seem the pass the parameters correctly. Person is picked up by a selection that is made in the tkinter frame manually.</p>
<pre><code>def updateContact():
person = select.get(ACTIVE)
conn.execute("UPDATE Table set NAME =? WHERE NAME = ?",(nameVar.get(), (person,)))
conn.execute("UPDATE Table set PHONE =? WHERE NAME = ?",(phoneVar.get(), (person,)))
</code></pre>
| 0 | 2016-07-27T18:20:02Z | 38,620,555 | <p>Just pass parameters in a <em>single non-nested tuple</em>:</p>
<pre><code>conn.execute("UPDATE Table set NAME = ? WHERE NAME = ?", (nameVar.get(), person))
</code></pre>
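A self-contained demonstration against an in-memory database (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts VALUES (?, ?)", ("Ann", "555-0100"))

# One flat tuple supplies both placeholders, left to right.
conn.execute("UPDATE contacts SET name = ? WHERE name = ?", ("Anna", "Ann"))

print(conn.execute("SELECT name FROM contacts").fetchone()[0])  # -> Anna
```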
| 0 | 2016-07-27T18:25:55Z | [
"python",
"tkinter",
"sqlite3",
"labels"
] |
json.dumps \u escaped unicode to utf8 | 38,620,471 | <p>I came from this <a href="http://stackoverflow.com/questions/18337407/saving-utf-8-texts-in-json-dumps-as-utf8-not-as-u-escape-sequence">old discussion</a>, but the solution didn't help much as my original data was encoded differently:</p>
<p>My original data was already encoded in unicode, I need to output as UTF-8</p>
<p><code>data={"content":u"\u4f60\u597d"}</code></p>
<p>When I try to convert to utf:</p>
<p><code>json.dumps(data, indent=1, ensure_ascii=False).encode("utf8")</code></p>
<p>the output I get is
<code>"content": "ä½ å¥½"</code> and the expected output should be
<code>"content": "你好"</code></p>
<p>I tried without <code>ensure_ascii=False</code> and the output stays in the escaped form <code>"content": "\u4f60\u597d"</code></p>
<p>How can I convert the previously \u escaped json to UTF-8?</p>
| -1 | 2016-07-27T18:20:54Z | 38,620,562 | <p>You <em>have</em> UTF-8 JSON data:</p>
<pre><code>>>> import json
>>> data = {'content': u'\u4f60\u597d'}
>>> json.dumps(data, indent=1, ensure_ascii=False)
u'{\n "content": "\u4f60\u597d"\n}'
>>> json.dumps(data, indent=1, ensure_ascii=False).encode('utf8')
'{\n "content": "\xe4\xbd\xa0\xe5\xa5\xbd"\n}'
>>> print json.dumps(data, indent=1, ensure_ascii=False).encode('utf8')
{
 "content": "你好"
}
</code></pre>
<p>My terminal just <em>happens</em> to be configured to handle UTF-8, so printing the UTF-8 bytes to my terminal produced the desired output.</p>
<p>However, if your terminal is <em>not</em> set up for such output, it is your <em>terminal</em> that then shows 'wrong' characters:</p>
<pre><code>>>> print json.dumps(data, indent=1, ensure_ascii=False).encode('utf8').decode('latin1')
{
 "content": "ä½ å¥½"
}
</code></pre>
<p>Note how I <em>decoded</em> the data to Latin-1 to deliberately mis-read the UTF-8 bytes.</p>
<p>This isn't a Python problem; this is a problem with how you are handling the UTF-8 bytes in whatever tool you used to read these bytes.</p>
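The mechanics are easy to reproduce with just the two characters from the question:

```python
s = u'\u4f60\u597d'
utf8 = s.encode('utf8')          # b'\xe4\xbd\xa0\xe5\xa5\xbd'

print(utf8.decode('utf8'))       # the correct reading
print(utf8.decode('latin1'))     # mojibake: every byte becomes its own character
```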
| 3 | 2016-07-27T18:26:09Z | [
"python",
"unicode",
"encoding",
"utf-8"
] |
Python regex split into characters except if followed by parentheses | 38,620,637 | <p>I have a string like <code>"F(230,24)F[f(22)_(23);(2)%[+(45)FF]]"</code>, where each character except for parentheses and what they enclose represents a kind of instruction. A character can be followed by an optional list of arguments specified in optional parentheses.</p>
<p>Such a string I would like to split into
<code>['F(230,24)', 'F', '[', 'f(22)', '_(23)', ';(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code>; however, at the moment I only get <code>['F(230,24)', 'F', '[', 'f(22)_(23);(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code> (a substring was not split correctly).</p>
<p>Currently I am using <code>list(filter(None, re.split(r'([A-Za-z\[\]\+\-\^\&\\\/%_;~](?!\())', string)))</code>, which is just a mess of characters and a negative lookahead for <code>(</code>. <code>list(filter(None, <list>))</code> is used to remove empty strings from the result.</p>
<p>I am aware that this is likely caused by Python's <code>re.split</code> having been designed not to split on a zero length match, <a href="http://stackoverflow.com/questions/2713060/why-doesnt-pythons-re-split-split-on-zero-length-matches">as discussed here</a>.
However I was wondering what would be a good solution? Is there a better way than <code>re.findall</code>?</p>
<p>Thank you.</p>
<p>EDIT: Unfortunately I am not allowed to use custom packages like the <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><code>regex</code> module</a></p>
| 3 | 2016-07-27T18:30:01Z | 38,620,713 | <blockquote>
<p>I am aware that this is likely caused by Python's re.split having been designed not to split on a zero length match</p>
</blockquote>
<p>You can use the <code>VERSION1</code> flag of the <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><code>regex</code> module</a>. Taking <a href="http://stackoverflow.com/q/2713060/771848">that example</a> from the thread you've linked - see how <code>split()</code> produces zero-width matches as well:</p>
<pre><code>>>> import regex as re
>>> re.split(r"\s+|\b", "Split along words, preserve punctuation!", flags=re.V1)
['', 'Split', 'along', 'words', ',', 'preserve', 'punctuation', '!']
</code></pre>
| 2 | 2016-07-27T18:34:57Z | [
"python",
"regex",
"string",
"split"
] |
Python regex split into characters except if followed by parentheses | 38,620,637 | <p>I have a string like <code>"F(230,24)F[f(22)_(23);(2)%[+(45)FF]]"</code>, where each character except for parentheses and what they enclose represents a kind of instruction. A character can be followed by an optional list of arguments specified in optional parentheses.</p>
<p>Such a string I would like to split into
<code>['F(230,24)', 'F', '[', 'f(22)', '_(23)', ';(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code>; however, at the moment I only get <code>['F(230,24)', 'F', '[', 'f(22)_(23);(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code> (a substring was not split correctly).</p>
<p>Currently I am using <code>list(filter(None, re.split(r'([A-Za-z\[\]\+\-\^\&\\\/%_;~](?!\())', string)))</code>, which is just a mess of characters and a negative lookahead for <code>(</code>. <code>list(filter(None, <list>))</code> is used to remove empty strings from the result.</p>
<p>I am aware that this is likely caused by Python's <code>re.split</code> having been designed not to split on a zero length match, <a href="http://stackoverflow.com/questions/2713060/why-doesnt-pythons-re-split-split-on-zero-length-matches">as discussed here</a>.
However I was wondering what would be a good solution? Is there a better way than <code>re.findall</code>?</p>
<p>Thank you.</p>
<p>EDIT: Unfortunately I am not allowed to use custom packages like the <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><code>regex</code> module</a></p>
| 3 | 2016-07-27T18:30:01Z | 38,620,749 | <p>You can use <code>re.findall</code> to find out all single character optionally followed by a pair of parenthesis:</p>
<pre><code>import re
s = "F(230,24)F[f(22)_(23);(2)%[+(45)FF]]"
re.findall("[^()](?:\([^()]*\))?", s)
['F(230,24)',
'F',
'[',
'f(22)',
'_(23)',
';(2)',
'%',
'[',
'+(45)',
'F',
'F',
']',
']']
</code></pre>
<ul>
<li><code>[^()]</code> match a single character except for parenthesis;</li>
<li><code>(?:\([^()]*\))?</code> denotes a non-capture group(<code>?:</code>) enclosed by a pair of parenthesis and use <code>?</code> to make the group optional;</li>
</ul>
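A useful sanity check (my addition, not part of the answer) is that the tokens, concatenated, rebuild the original string; otherwise some character was silently skipped:

```python
import re

s = "F(230,24)F[f(22)_(23);(2)%[+(45)FF]]"
tokens = re.findall(r"[^()](?:\([^()]*\))?", s)

# The tokens must cover the whole input, in order.
assert "".join(tokens) == s
print(tokens[:4])   # -> ['F(230,24)', 'F', '[', 'f(22)']
```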
| 2 | 2016-07-27T18:36:36Z | [
"python",
"regex",
"string",
"split"
] |
Python regex split into characters except if followed by parentheses | 38,620,637 | <p>I have a string like <code>"F(230,24)F[f(22)_(23);(2)%[+(45)FF]]"</code>, where each character except for parentheses and what they enclose represents a kind of instruction. A character can be followed by an optional list of arguments specified in optional parentheses.</p>
<p>Such a string I would like to split into
<code>['F(230,24)', 'F', '[', 'f(22)', '_(23)', ';(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code>; however, at the moment I only get <code>['F(230,24)', 'F', '[', 'f(22)_(23);(2)', '%', '[', '+(45)', 'F', 'F', ']', ']']</code> (a substring was not split correctly).</p>
<p>Currently I am using <code>list(filter(None, re.split(r'([A-Za-z\[\]\+\-\^\&\\\/%_;~](?!\())', string)))</code>, which is just a mess of characters and a negative lookahead for <code>(</code>. <code>list(filter(None, <list>))</code> is used to remove empty strings from the result.</p>
<p>I am aware that this is likely caused by Python's <code>re.split</code> having been designed not to split on a zero length match, <a href="http://stackoverflow.com/questions/2713060/why-doesnt-pythons-re-split-split-on-zero-length-matches">as discussed here</a>.
However I was wondering what would be a good solution? Is there a better way than <code>re.findall</code>?</p>
<p>Thank you.</p>
<p>EDIT: Unfortunately I am not allowed to use custom packages like the <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><code>regex</code> module</a></p>
| 3 | 2016-07-27T18:30:01Z | 38,621,966 | <p>Another solution. This time the pattern recognize strings with the structure <strong>SYMBOL[(NUMBER[,NUMBER...])]</strong>. The function <code>parse_it</code> returns <em>True</em> and the tokens if the string match with the regular expression and <em>False</em> and empty if don't match.</p>
<pre><code>import re
def parse_it(string):
'''
Input: String to parse
Output: True|False, Tokens|empty_string
'''
pattern = re.compile('[A-Za-z\[\]\+\-\^\&\\\/%_;~](?:\(\d+(?:,\d+)*\))?')
tokens = pattern.findall(string)
if ''.join(tokens) == string:
res = (True, tokens)
else:
res = (False, '')
return res
good_string = 'F(230,24)F[f(22)_(23);(2)%[+(45)FF]]'
bad_string = 'F(2a30,24)F[f(22)_(23);(2)%[+(45)FF]]' # There is an 'a' in a bad place.
print(parse_it(good_string))
print(parse_it(bad_string))
</code></pre>
<p><strong>Output:</strong></p>
<blockquote>
<p>(True, ['F(230,24)', 'F', '[', 'f(22)', '_(23)', ';(2)', '%', '[',
'+(45)', 'F', 'F', ']', ']'])<br>(False, '')</p>
</blockquote>
| 1 | 2016-07-27T19:49:27Z | [
"python",
"regex",
"string",
"split"
] |
SQLAlchemy - Postgres: using with_entities for querying JSON element | 38,620,657 | <p>I am trying to query for a field in JSON column (Postgres):</p>
<pre><code>class MyTable(Base):
__tablename__ = 'my_table'
data = Column(JSONB)
</code></pre>
<p>Query:</p>
<pre><code>my_query = session.query(MyTable).limit(10).with_entities(MyTable.data['rule']).all()
</code></pre>
<p>I get no error, but the result is empty.</p>
<p>Even if I try with <code>astext</code>, same empty result:</p>
<pre><code>my_query = session.query(MyTable).limit(10).with_entities(MyTable.data['rule'].astext).all()
</code></pre>
<p>Can I use with_entities in this case? What would be the work around?
Thanks.</p>
| 0 | 2016-07-27T18:31:34Z | 38,622,690 | <p>The addition of <code>label()</code> method solved the issue for me:</p>
<pre><code>my_query = session.query(MyTable).limit(10). \
with_entities(MyTable.data['rule'].label('rule')).all()
</code></pre>
| 0 | 2016-07-27T20:34:27Z | [
"python",
"json",
"postgresql",
"sqlalchemy"
] |
A fast numpy way to find index in array where cumulative sum becomes greater? | 38,620,741 | <p>Basically, the <em>logic</em> of my problem is:</p>
<pre><code>running_sum = my_array.cumsum()
greater_than_threshold = running_sum > threshold
index = greater_than_threshold.searchsorted(True)
</code></pre>
<p>That is: Find the first index for which the cumulative sum of entries in <code>my_array</code> is above a threshold.</p>
<p>Now the problem is: I know that <code>my_array</code> will be large, but that the condition will be met fairly early. Of course that means I could just do a simple <code>while</code> loop to manually figure out when the cumulative sum is larger than the threshold, but I am wondering if there's a numpythonic way, i.e., a way to test for some condition without having the entire array evaluated.</p>
| 1 | 2016-07-27T18:36:13Z | 38,621,125 | <p>EDIT: This method is slower than using NumPy's <code>searchsorted</code> and <code>cumsum</code>, see user2357112's comments and <code>timeit</code> test.</p>
<p><code>cumsum</code> will calculate cumulative sums for the entire array. Instead, just iterate over the array yourself:</p>
<pre><code>running_sum = 0
for index, entry in enumerate(my_array.flat):
    running_sum += entry
    if running_sum > threshold:
        break
else:
    index = -1  # the sum never exceeded the threshold
</code></pre>
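If the pure-Python loop turns out to be too slow, one middle ground (my own sketch, not part of the answer above) is to cumsum in fixed-size chunks, so only a prefix of the array is ever evaluated:

```python
import numpy as np

def first_index_over(arr, threshold, chunk=1024):
    total = 0.0
    flat = arr.ravel()
    for start in range(0, flat.size, chunk):
        cs = np.cumsum(flat[start:start + chunk]) + total
        hits = np.nonzero(cs > threshold)[0]
        if hits.size:
            return start + hits[0]
        total = cs[-1]
    return -1  # threshold never exceeded

print(first_index_over(np.array([1.0, 2.0, 3.0, 4.0]), 3))  # -> 2
```

With a suitable chunk size this keeps the work vectorized while still stopping early.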
| 0 | 2016-07-27T18:58:48Z | [
"python",
"arrays",
"numpy"
] |
Count iterations in while loop | 38,620,745 | <p>Is there a way in Python to automatically add an iteration counter to a while loop?</p>
<p>I'd like to remove the lines <code>count = 0</code> and <code>count += 1</code> from the following code snippet but still be able to count the number of iterations and test against the boolean <code>elapsed < timeout</code>:</p>
<pre><code>import time
timeout = 60
start = time.time()
count = 0
while (time.time() - start) < timeout:
print 'Iteration Count: {0}'.format(count)
count += 1
time.sleep(1)
</code></pre>
| 5 | 2016-07-27T18:36:25Z | 38,620,976 | <p>The cleanest way is probably to convert this to an infinite <code>for</code> loop and move the loop test to the start of the body:</p>
<pre><code>import itertools
for i in itertools.count():
if time.time() - start >= timeout:
break
...
</code></pre>
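The same pattern with a deterministic stand-in for the clock, so it runs instantly:

```python
import itertools

fake_times = iter(range(10))   # stands in for successive time.time() readings
start = 0
timeout = 4

for i in itertools.count():
    if next(fake_times) - start >= timeout:
        break

print("iterations before timeout:", i)   # -> 4
```

`i` carries the iteration count for free, with no manual `count += 1`.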
| 6 | 2016-07-27T18:50:51Z | [
"python",
"python-2.7",
"loops"
] |
Count iterations in while loop | 38,620,745 | <p>Is there a way in Python to automatically add an iteration counter to a while loop?</p>
<p>I'd like to remove the lines <code>count = 0</code> and <code>count += 1</code> from the following code snippet but still be able to count the number of iterations and test against the boolean <code>elapsed < timeout</code>:</p>
<pre><code>import time
timeout = 60
start = time.time()
count = 0
while (time.time() - start) < timeout:
print 'Iteration Count: {0}'.format(count)
count += 1
time.sleep(1)
</code></pre>
| 5 | 2016-07-27T18:36:25Z | 38,621,012 | <p>You could instead move the while loop to a generator and use <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a>:</p>
<pre><code>import time
def iterate_until_timeout(timeout):
start = time.time()
while time.time() - start < timeout:
yield None
for i, _ in enumerate(iterate_until_timeout(10)):
    print "Iteration Count: {0}".format(i)
time.sleep(1)
</code></pre>
| 3 | 2016-07-27T18:52:45Z | [
"python",
"python-2.7",
"loops"
] |
Django passing JSON data to static getJSON/Javascript | 38,620,816 | <p>I am trying to grab data from my models.py and serialize it into a JSON object within my views.py. </p>
<p><strong>Models.py:</strong></p>
<pre><code>class Platform(models.Model):
platformtype = models.CharField(max_length=25)
</code></pre>
<p><strong>Views.py:</strong></p>
<pre><code>def startpage(request):
return render_to_response('Main.html');
def index(request):
platforms_as_json = serializers.serialize('json', Platform.objects.all())
return HttpResponse(platforms_as_json, content_type='json')
</code></pre>
<p>After doing this I want to pass this object into my static javascript file which is using getJSON to populate my drop down list for my template(Main.html). </p>
<p><strong>JavaScript:</strong></p>
<pre><code>$.getJSON("{{platforms_as_json}}", function (data) {
$.each(data, function (index, item) {
$('#platformList').append(
$('<option></option>').val(item).html(item.platformtype)
);
});
});
</code></pre>
<p>I have looked at many other threads within SO, but all of them are for those using embedded JS within their template and/or not using getJSON. As of right now, data is not being displayed in the list when I run my Django development server. What am I doing wrong? Thank you.</p>
<p><strong>UPDATE:</strong> </p>
<pre><code> <!DOCTYPE html>
<html>
<head>
{% load static from staticfiles %}
<script type = 'text/javascript' >
var platformsjson = "({% autoescape off %}{{platforms_as_json}}{% endautoescape %})";
</script>
</head>
<body>
<select id = "platformList"></select>
<ul id = "root"></ul>
<div id = "root"></div>
<script src = "{% static 'admin/js/platformddown_script.js' %}"></script>
</body>
</html>
</code></pre>
<p><strong>platformddown_script.js:</strong></p>
<pre><code>$.each(platformsjson, function (index, item) {
$('#platformList').append(
$('<option></option>').val(item.platformtype).html(item.platformtype)
)
});
</code></pre>
<p>After this update it still doesn't work.</p>
| 1 | 2016-07-27T18:40:54Z | 38,621,448 | <p>Main html render + json data</p>
<pre><code>import json
from django.shortcuts import render

def startpage(request):
    platforms = Platform.objects.select_related().values('platformtype')
    return render(request, 'Main.html', {'platforms_as_json': json.dumps(list(platforms))})
</code></pre>
<p>in template </p>
<pre><code>{{ platforms_as_json }}
</code></pre>
<p>html and js</p>
<pre><code><select id="platformList"></select>
<script>
$.each({% autoescape off %}{{platforms_as_json}}{% endautoescape %}, function (index, item) {
    $('#platformList').append(
        $('<option></option>').val(item.platformtype).html(item.platformtype)
    )
});
</script>
</code></pre>
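<p>The <code>json.dumps</code> step in the view above can be sanity-checked without a running Django app. A minimal sketch, where the list stands in for the <code>.values('platformtype')</code> queryset (the platform names are invented):</p>

```python
import json

# stand-in for Platform.objects.values('platformtype'); the names are made up
platforms = [{'platformtype': 'PC'}, {'platformtype': 'Xbox'}]

platforms_as_json = json.dumps(platforms)
print(platforms_as_json)  # [{"platformtype": "PC"}, {"platformtype": "Xbox"}]
```

<p>On the client side, <code>$.each</code> then iterates the parsed array exactly as in the template snippet above.</p>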
<p>Old example
<a href="https://gist.github.com/leotop/014a38bd97407a6380f2526f11d17977" rel="nofollow">https://gist.github.com/leotop/014a38bd97407a6380f2526f11d17977</a></p>
| 1 | 2016-07-27T19:19:04Z | [
"javascript",
"jquery",
"python",
"mysql",
"django"
] |
matplotlib/python - How to draw a plot like this? mean ± 3*standard deviation | 38,620,828 | <p>I want a picture like this:
<a href="http://i.stack.imgur.com/d3ddl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/d3ddl.jpg" alt="enter image description here"></a></p>
<p>The upper and lower limit of x axis is given and is much larger/smaller than the given data.</p>
<p>All the plots I find use only ± 1*standard deviation.</p>
<p>Also I am not sure how to fix the x axis like this.</p>
<p>My data is a python list of floats.</p>
<p>Right now, I only have three points, but I would like to have a line between them and vertical lines at the points.
The x axis is also not correct. </p>
<pre><code>plt.figure()
x = []
for item in circ_list:
    x.append(float(item))
mean = np.mean(x)
std = np.std(x)
target_upper_bound = mean + 3 * std
target_lower_bound = mean - 3 * std
total_intermid = wear_limit[1] - wear_limit[0]
ten_percent_intermid = total_intermid / 10.0
plt.gca().axes.get_yaxis().set_visible(False)
plt.xlim([wear_limit[0], wear_limit[1]])
plt.plot(x, np.zeros_like(x), 'x')
plt.plot(np.array([mean, mean + 2 * std, mean - 2 * std]),
         np.zeros_like(np.array([mean, mean + 2 * std, mean - 2 * std])), '|')
for i in x:
    plt.annotate(str(i), xy=(i, 0))
plt.annotate('mean', xy=(mean, 0))
plt.annotate('mean+3*std', xy=(target_upper_bound, 0))
plt.annotate('mean-3*std', xy=(target_lower_bound, 0))
plt.show()
</code></pre>
| 1 | 2016-07-27T18:41:32Z | 38,621,678 | <p>Not sure I understand exactly what you are trying to do. Check out this example and see if it does something similar to what you want.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#create random data
data=np.random.random(100)
std=np.std(data)
#display the individual data points
plt.scatter(data,np.zeros(data.shape[0]))
#use errorbar function to create symmetric horizontal error bar
#xerr provides error bar length
#fmt specifies plot icon for mean value
#ms= marker size
#mew = marker thickness
#capthick = thickness of error bar end caps
#capsize = size of those caps
plt.errorbar(np.mean(data),0,xerr=3*std,fmt='|', ms=30,mew=2,capthick=2,capsize=10)
#set x axis limits
plt.xlim([-10,10])
</code></pre>
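<p>If you just need the numbers, the interval drawn by <code>xerr=3*std</code> can be computed without matplotlib. A minimal sketch with made-up data, using only the standard library (<code>statistics.pstdev</code> is the population standard deviation, which matches <code>np.std</code>'s default <code>ddof=0</code>):</p>

```python
import statistics

data = [0.1, 1.0, 10.0, 2.5]  # sample values, made up
mean = statistics.fmean(data)
std = statistics.pstdev(data)  # population std, same default as np.std

# the endpoints of the mean +/- 3*std error bar
lower, upper = mean - 3 * std, mean + 3 * std
print(lower, mean, upper)
```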
| 1 | 2016-07-27T19:32:42Z | [
"python",
"matplotlib",
"plot"
] |
Flask Git Enoent Spawn Error | 38,620,842 | <p>I'm trying to deploy a flask application to Heroku. I'm following the steps on the website to deploy my app using the heroku toolbelt. I'm able to execute the command</p>
<pre><code>git init
</code></pre>
<p>but when I try to use</p>
<pre><code>heroku git:remote -a bcamarketplace
</code></pre>
<p>I get the following error:</p>
<pre><code> ! ENOENT: spawn git ENOENT
</code></pre>
<p>There is no further description and I'm lost on how to resolve this issue. Anyone have some suggestions?</p>
| 0 | 2016-07-27T18:42:30Z | 38,621,697 | <p>It might be that you haven't added a git remote for the app. Try running this:</p>
<pre><code>git remote add heroku git@heroku.com:your-app-name-here.git
</code></pre>
<p>Then proceed as you were. </p>
| 0 | 2016-07-27T19:34:05Z | [
"python",
"git",
"heroku",
"flask"
] |
Converting a pandas dataframe with a string type, three level MultiIndex into numeric type objects | 38,620,850 | <p>I have a pandas dataframe with a three level MultiIndex that looks like this:</p>
<pre><code>gene TIMP2 VEGFA VIM
2 TGFb 0.1 0.035655 0.876214 -0.158406
1 0.087623 1.049764 0.039158
10 0.054119 0.887348 -0.052608
24 TGFb 0.1 0.148470 0.565379 0.157153
1 0.233250 0.540806 0.206030
10 0.378658 0.861429 0.132580
48 TGFb 0.1 -0.203006 0.359409 -0.144209
1 -0.068495 0.845802 -0.093910
10 -0.105295 0.676591 -0.166819
6 TGFb 0.1 0.060129 1.766071 0.097548
1 0.075760 1.656494 0.026664
10 -0.029685 1.284003 -0.008032
NaN NaN 2.000000 12.000000 0.000000
</code></pre>
<p>The only problem is that because of the way I've built the MultiIndex (which is in-bedded into larger code so its difficult to paste here), the numbers in the index are strings. How do I convert the outer level to integer and the inner level to float? It sounds trivial but I'm having a lot of difficulty finding the solution. Thanks</p>
| 2 | 2016-07-27T18:43:04Z | 38,621,068 | <p>This should do it for you:</p>
<pre><code>idx = df.index
df.index = idx.set_levels(idx.levels[0].astype(int), level=0) \
              .set_levels(idx.levels[2].astype(float), level=2)
</code></pre>
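<p>For a quick check of the conversion, here is the same idea on a small made-up frame whose outer and inner levels are stored as strings (the index values only mirror the shape of the question's data):</p>

```python
import pandas as pd

# made-up MultiIndex mirroring the question: outer hour (str), treatment, inner dose (str)
idx = pd.MultiIndex.from_tuples(
    [('2', 'TGFb', '0.1'), ('2', 'TGFb', '1'), ('24', 'TGFb', '10')],
    names=['hour', 'treatment', 'dose'])
df = pd.DataFrame({'TIMP2': [0.03, 0.08, 0.14]}, index=idx)

idx = df.index
df.index = idx.set_levels(idx.levels[0].astype(int), level=0) \
              .set_levels(idx.levels[2].astype(float), level=2)

print(df.index.levels[0].dtype, df.index.levels[2].dtype)
```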
| 1 | 2016-07-27T18:55:22Z | [
"python",
"pandas",
"multi-index"
] |
Fix "UnicodeDecodeError: can't decode byte" | 38,620,892 | <p>Well, I have written some code in Python 2. Usually I use Python 3 and never had this issue, but it's not working here. Please help; I can't post the code, it's the policy. I hope you understand, as a programmer.</p>
<p>Hope anyone can help.</p>
| -5 | 2016-07-27T18:46:05Z | 38,621,028 | <pre><code>import sys
reload(sys)
sys.setdefaultencoding('utf8')
</code></pre>
<p>It's just a guess, as there is no code, but if your code used to work in Python 3 and not in Python 2, this was the only problem I could think of. Try this and see if it helps. Otherwise it's very difficult to diagnose the issue.</p>
<p><strong>NOTE:</strong> I may help you in private too, but judging by the bytes error in the question title, this code will usually solve the issue; just place it at the top, after your imports.</p>
| -2 | 2016-07-27T18:53:32Z | [
"python",
"python-3.x"
] |
Using command line args from different files in python | 38,620,899 | <p>I recently discovered (Much to my surprised) you can call command line args in files other than the one that is explicitly called when you enter it. </p>
<p>So, you can run <code>python file1.py abc</code> in command line, and use sys.argv[1] to get the string 'abc' from within file2.py or file3.py.</p>
<p>I still feel like this shouldn't work, but I'm glad it does, since it saved me a lot of trouble.</p>
<p>But now I'd really appreciate an answer as to why/how this works. I had assumed that sys.argv[1] would be local to each file.</p>
| 0 | 2016-07-27T18:46:22Z | 38,621,007 | <p>As for the how/why, <code>sys</code> is only imported once (when python starts up). When <code>sys</code> is imported, it's <code>argv</code> member gets populated with the commandline arguements. Subsequent <code>import</code> statements return the same <code>sys</code> module object so no matter where you <code>import sys</code> from, you'll always get the same object and therefore <code>sys.argv</code> will always be the same list no matter where you reference it in your application.</p>
<hr>
<p>Whether you <em>should</em> be doing commandline parsing in more than one place is a different question. Generally, my answer would be "NO" unless you are only hacking together a script to work for the next 2 or 3 days. Anything that you expect to last should do all it's parsing up front (probably with a robust argument parser like <code>argparse</code>) and pass the data necessary for the various functions/classes to them from it's entry point.</p>
| 2 | 2016-07-27T18:52:29Z | [
"python",
"command-line-arguments"
] |
Django extract queryset from ManyToMany with through field | 38,620,984 | <p>Say we have those models:</p>
<pre><code>class A(models.Model):
    field = models.ManyToManyField(B, through="C")

class B(models.Model):
    value = models.CharField()

class C(models.Model):
    a = models.ForeignKey(A)
    b = models.ForeignKey(B)
    order = models.IntegerField()
</code></pre>
<p>Is there an option to extract queryset of B's, but taking into consideration order field?</p>
<p>Doing a <code>a.c_set.all()</code> returns queryset for C class (but it's ordered).</p>
<p>Doing a <code>a.fields.all()</code> works, but the queryset is unordered.</p>
<p>I need a queryset for initializing the formset.</p>
<p>I hope it's understandable - it's quite late and i can't think clearly already... I'll try to clear it out if anyone has any questions.</p>
<p>Thanks in advance</p>
| 0 | 2016-07-27T18:51:20Z | 38,621,259 | <p>Use the <code>C</code> model reverse relations to do the order, e.g.</p>
<pre><code>a.fields.order_by('c__order')
</code></pre>
| 0 | 2016-07-27T19:07:20Z | [
"python",
"django"
] |
Django extract queryset from ManyToMany with through field | 38,620,984 | <p>Say we have those models:</p>
<pre><code>class A(models.Model):
    field = models.ManyToManyField(B, through="C")

class B(models.Model):
    value = models.CharField()

class C(models.Model):
    a = models.ForeignKey(A)
    b = models.ForeignKey(B)
    order = models.IntegerField()
</code></pre>
<p>Is there an option to extract queryset of B's, but taking into consideration order field?</p>
<p>Doing a <code>a.c_set.all()</code> returns queryset for C class (but it's ordered).</p>
<p>Doing a <code>a.fields.all()</code> works, but the queryset is unordered.</p>
<p>I need a queryset for initializing the formset.</p>
<p>I hope it's understandable - it's quite late and i can't think clearly already... I'll try to clear it out if anyone has any questions.</p>
<p>Thanks in advance</p>
| 0 | 2016-07-27T18:51:20Z | 38,621,291 | <p>If you put a an <code>ordering</code> on model <code>C</code>, all queryset on <code>C</code> would obey that order:</p>
<pre><code>class C(models.Model):
    class Meta:
        ordering = ('order', )
</code></pre>
<p>Now if you want <code>B</code> objects related to <code>A</code>, you could sort the <code>B</code>s based on <code>C</code>'s ordering:</p>
<pre><code>b_results = a.fields.order_by('c')
</code></pre>
<p>Or if the <code>order_by('c')</code> is not clear enough, you could change your model to be:</p>
<pre><code>class C(models.Model):
    a = models.ForeignKey(A, related_name='a_relationship')
    b = models.ForeignKey(B)
    order = models.IntegerField()

    class Meta:
        ordering = ('order', )
</code></pre>
<p>Then you could do:</p>
<pre><code>b_results = a.fields.order_by('a_relationship')
</code></pre>
| 1 | 2016-07-27T19:09:42Z | [
"python",
"django"
] |
Sphinx fails to document flask project due to import current_app | 38,621,256 | <p>I have set up Sphinx to document my flask project, however, I encounter this error:</p>
<pre><code>[$]>>> make html
sphinx-build -b html -d build/doctrees -W -v source build/html
Running Sphinx v1.4.5
loading pickled environment... not yet created
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 5 source files that are out of date
updating environment: 5 added, 0 changed, 0 removed
reading sources... [ 20%] index
reading sources... [ 40%] modules
reading sources... [ 60%] quizApp
Traceback (most recent call last):
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/viewcode.py", line 28, in _get_full_modname
return get_full_modname(modname, attribute)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/util/__init__.py", line 300, in get_full_modname
__import__(modname)
TypeError: __import__() argument 1 must be string, not None
viewcode can't import None, failed with error "__import__() argument 1 must be string, not None"
Traceback (most recent call last):
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/viewcode.py", line 28, in _get_full_modname
return get_full_modname(modname, attribute)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/util/__init__.py", line 300, in get_full_modname
__import__(modname)
TypeError: __import__() argument 1 must be string, not None
viewcode can't import None, failed with error "__import__() argument 1 must be string, not None"
Didn't find ParticipantExperiment.activities in quizApp.models
Didn't find Question.explantion in quizApp.models
Didn't find User.name in quizApp.models
Didn't find User.authenticated in quizApp.models
reading sources... [ 80%] quizApp.forms
reading sources... [100%] quizApp.views
Traceback (most recent call last):
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/cmdline.py", line 244, in main
app.build(opts.force_all, filenames)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/application.py", line 297, in build
self.builder.build_update()
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 251, in build_update
'out of date' % len(to_build))
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 265, in build
self.doctreedir, self.app))
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/environment.py", line 569, in update
self._read_serial(docnames, app)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/environment.py", line 589, in _read_serial
self.read_doc(docname, app)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/environment.py", line 742, in read_doc
pub.publish()
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/core.py", line 217, in publish
self.settings)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/io.py", line 49, in read
self.parse()
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/readers/__init__.py", line 78, in parse
self.parser.parse(self.input, document)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/__init__.py", line 172, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 170, in run
input_source=document['source'])
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2726, in underline
self.section(title, source, style, lineno - 1, messages)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section
self.new_subsection(title, lineno, messages)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection
node=section_node, match_titles=True)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2726, in underline
self.section(title, source, style, lineno - 1, messages)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section
self.new_subsection(title, lineno, messages)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection
node=section_node, match_titles=True)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2299, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2311, in explicit_construct
return method(self, expmatch)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2054, in directive
directive_class, match, type_name, option_presets)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2103, in run_directive
result = directive_instance.run()
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 1613, in run
documenter.generate(more_content=self.content)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 963, in generate
self.document_members(all_members)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 855, in document_members
if cls.can_document_member(member, mname, isattr, self)]
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 1458, in can_document_member
isdatadesc = isdescriptor(member) and not \
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/util/inspect.py", line 101, in isdescriptor
if hasattr(safe_getattr(x, item, None), '__call__'):
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/sphinx/util/inspect.py", line 113, in safe_getattr
if name in obj.__dict__:
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
Exception occurred:
File "/home/alyosha/.virtualenvs/quizApp-new/lib/python2.7/site-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
The full traceback has been saved in /tmp/sphinx-err-iI83eY.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [Makefile:53: html] Error 1
</code></pre>
<p>After some digging, I found that the issue is because of this line:</p>
<p><a href="https://github.com/PlasmaSheep/sphinx-error/blob/master/app/issue.py" rel="nofollow">https://github.com/PlasmaSheep/sphinx-error/blob/master/app/issue.py</a></p>
<p>View the full minimum example here:</p>
<p><a href="https://github.com/PlasmaSheep/sphinx-error" rel="nofollow">https://github.com/PlasmaSheep/sphinx-error</a></p>
| 1 | 2016-07-27T19:07:11Z | 38,669,424 | <p>The issue is actually with sphinx. Sphinx 1.4.5 contains a bug that causes this behavior. This can be fixed by installing sphinx from git. Hopefully they will push a new version out to pypi soon.</p>
<p>edit: Sphinx 1.4.4 also works fine.</p>
| 3 | 2016-07-30T01:08:44Z | [
"python",
"flask",
"python-sphinx"
] |
Dropzone doesn't redirect after upload to Flask app | 38,621,273 | <p>I'm writing a resource to upload files to a Flask app using Dropzone. After files are uploaded the app should redirect to the hello world page. This is not happening and the app is stuck on the view that uploaded the files. I'm using jQuery 3.1.0 and Dropzone from master.</p>
<pre><code>from flask import (Flask, request, flash, redirect, url_for,
                   render_template)
from validator import Validator

ALLOWED_EXTENSIONS = set(['csv', 'xlsx', 'xls'])

def allowed_file(filename):
    return (filename != '') and ('.' in filename) and \
        (filename.split('.')[-1] in ALLOWED_EXTENSIONS)

def create_app():
    app = Flask(__name__)
    app.secret_key = 'super secret key'
    return app

app = create_app()

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/world')
def hello_world():
    return render_template('hello_world.html')

@app.route('/upload', methods=['POST'])
def upload():
    # check that a file with valid name was uploaded
    if 'file' not in request.files:
        flash('No file part')
        return redirect(request.url)
    file = request.files['file']
    if not allowed_file(file.filename):
        flash('No selected file')
        return redirect(request.url)
    # import ipdb; ipdb.set_trace()
    validator = Validator(file)
    validated = validator.validate()
    if validated:
        flash('Success')
    else:
        flash('Invalid file')
    return redirect(url_for('hello_world'))

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<pre class="lang-html prettyprint-override"><code>{% extends "base.html" %}
{% block head %}
<link href="/static/css/dropzone.css" rel="stylesheet">
<script src="/static/js/dropzone.js"></script>
{% endblock %}
{% block body %}
<main>
<section>
<div id="dropzone">
<form action="upload" method="post" class="dropzone dz-clickable" id="demo-upload" multiple>
<div class="dz-message">
Drop files here or click to upload.
</div>
</form>
</div>
</section>
</main>
{% endblock %}
</code></pre>
| 0 | 2016-07-27T19:08:44Z | 38,621,506 | <p>I'm not too familiar with dropzone, but I will give you an example from one of my flask applications that uses file uploading. I'm just using the standard HTML upload form. Hopefully from here you should be able to get an idea of what's going on.</p>
<p>Note, i'm not using a template for my file uploading.</p>
<pre><code>def index():
    return """<center><body bgcolor="#FACC2E">
    <font face="verdana" color="black">
    <title>TDX Report</title>
    <form action="/upload" method=post enctype=multipart/form-data>
    <p><input type=file name=file>
    <input type=submit value=Upload>
    </form></center></body>"""

# here is my function that deals with the file that was just uploaded
@app.route('/upload', methods = ['GET', 'POST'])
def upload():
    if request.method == 'POST':
        f = request.files['file']
        f.save(f.filename)
        # process is the function that i'm sending the file to, which in this case is a .xlsx file
        return process(f.filename)
</code></pre>
<p>This line is where i'm setting the route path for post file upload:</p>
<pre><code><form action="/upload" method=post enctype=multipart/form-data>
</code></pre>
<p>Your issue could be that this line:
<code><form action="upload" method="post" class="dropzone dz-clickable" id="demo-upload" multiple></code> is missing the <code>/</code> before <code>upload</code>.</p>
| 0 | 2016-07-27T19:22:24Z | [
"javascript",
"python",
"flask",
"dropzone.js"
] |
Dropzone doesn't redirect after upload to Flask app | 38,621,273 | <p>I'm writing a resource to upload files to a Flask app using Dropzone. After files are uploaded the app should redirect to the hello world page. This is not happening and the app is stuck on the view that uploaded the files. I'm using jQuery 3.1.0 and Dropzone from master.</p>
<pre><code>from flask import (Flask, request, flash, redirect, url_for,
                   render_template)
from validator import Validator

ALLOWED_EXTENSIONS = set(['csv', 'xlsx', 'xls'])

def allowed_file(filename):
    return (filename != '') and ('.' in filename) and \
        (filename.split('.')[-1] in ALLOWED_EXTENSIONS)

def create_app():
    app = Flask(__name__)
    app.secret_key = 'super secret key'
    return app

app = create_app()

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/world')
def hello_world():
    return render_template('hello_world.html')

@app.route('/upload', methods=['POST'])
def upload():
    # check that a file with valid name was uploaded
    if 'file' not in request.files:
        flash('No file part')
        return redirect(request.url)
    file = request.files['file']
    if not allowed_file(file.filename):
        flash('No selected file')
        return redirect(request.url)
    # import ipdb; ipdb.set_trace()
    validator = Validator(file)
    validated = validator.validate()
    if validated:
        flash('Success')
    else:
        flash('Invalid file')
    return redirect(url_for('hello_world'))

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<pre class="lang-html prettyprint-override"><code>{% extends "base.html" %}
{% block head %}
<link href="/static/css/dropzone.css" rel="stylesheet">
<script src="/static/js/dropzone.js"></script>
{% endblock %}
{% block body %}
<main>
<section>
<div id="dropzone">
<form action="upload" method="post" class="dropzone dz-clickable" id="demo-upload" multiple>
<div class="dz-message">
Drop files here or click to upload.
</div>
</form>
</div>
</section>
</main>
{% endblock %}
</code></pre>
| 0 | 2016-07-27T19:08:44Z | 39,529,288 | <p>I had a similar problem in my Flask application and I solved it with the following jQuery function:</p>
<pre><code>Dropzone.options.myDropzone = {
    autoProcessQueue: false,
    init: function() {
        var submitButton = document.querySelector("#upload-button");
        myDropzone = this;
        submitButton.addEventListener("click", function() {
            myDropzone.processQueue();
        });
        this.on("sending", function() {
            $("#myDropzone").submit()
        });
    }
};
</code></pre>
<p>Parameter "sending" is called just before file is send so I can submit my dropzone form. With this all redirects in my flask app works fine.</p>
<p>Piece of my html code for clarity:</p>
<pre><code><form action="/" method="POST" class="dropzone" id="myDropzone" enctype="multipart/form-data">
</form>
</code></pre>
| 0 | 2016-09-16T10:31:33Z | [
"javascript",
"python",
"flask",
"dropzone.js"
] |
How to Stop OneNote API From Returning ID for Deleted Pages | 38,621,322 | <p>I place a call using the OneNote REST API to return a list of all the pages in a section. This works successfully. However, some of the pages it returns should no longer exist! Yet I can see their information, IDs, etc., even though they have previously been deleted. But if I try to delete them again using REST, I get the error: </p>
<pre><code>ERROR (deleteFromURL): <Response [404]>
{
"error":{
"code":"20102","message":"The specified resource ID does not exist.","@api.url":"http://aka.ms/onenote-errors#C20102"
}
}
</code></pre>
<p>How come OneNote keeps returning pages that no longer exist (even after many days), and how do I prevent it from doing so?</p>
| 0 | 2016-07-27T19:11:26Z | 38,622,608 | <p>I assume these pages do appear as deleted in your notebook if you open OneNote.
Can you try adding this header to your GET ~/pages request?</p>
<pre><code>FavorDataRecency: true
</code></pre>
<p>This will bypass our index and go directly to your pages. It will take longer but should be consistent - do you see your pages when you do that?</p>
<p>Additionally, to better investigate this on our end, can you provide us with:</p>
<ul>
<li>The value of the X-CorrelationId header of your API request to GET pages (the one without the FavorDataRecency header)</li>
<li>One of the ids of your deleted pages</li>
</ul>
| 0 | 2016-07-27T20:28:10Z | [
"python",
"delete-file",
"onenote",
"http-delete",
"onenote-api"
] |
Python function design for arithmetic operations | 38,621,338 | <p>Purpose:<br>
<i>"Write a Python function that takes 3 numbers as arguments. Your
function should repeatedly subtract the second argument from the first
argument until the value is less than zero. Your function should then
print out this (negative) value."</i></p>
<p>My suggested solution (What I have so far):</p>
<pre><code>def subtraction(a, b, c):
    firstnum = a
    if firstnum > 0:
        firstnum = (a-b)
    if firstnum < 0:
        return firstnum
</code></pre>
<p>Problem: <br/>When I try it and the result is returned at the end, it comes up blank. Any suggestions for what I am missing?</p>
| 0 | 2016-07-27T19:12:49Z | 38,621,489 | <pre><code>def subtraction(a,b,c):
# why does c exist?
return a % b - b
</code></pre>
<p>The smallest positive number achieved by repeated subtraction (repeated subtraction is division) is the modulus of the two numbers. To find the next number, just subtract once more.</p>
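<p>To check that the closed form agrees with literal repeated subtraction, the two versions can be run side by side (a sketch; the unused third argument is kept only to match the assignment's signature):</p>

```python
def subtraction_loop(a, b, c=None):
    # literal repeated subtraction until the value is negative
    while a >= 0:
        a -= b
    return a

def subtraction_mod(a, b, c=None):
    # the remainder is the last non-negative value; one more step goes below zero
    return a % b - b

print(subtraction_loop(17, 5), subtraction_mod(17, 5))  # -3 -3
```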
<p>Unless you <em>have</em> to use a loop, but this seems like a homework question and I'm not sure if just giving the answer is the way to go about learning.</p>
| 3 | 2016-07-27T19:21:38Z | [
"python"
] |
Python function design for arithmetic operations | 38,621,338 | <p>Purpose:<br>
<i>"Write a Python function that takes 3 numbers as arguments. Your
function should repeatedly subtract the second argument from the first
argument until the value is less than zero. Your function should then
print out this (negative) value."</i></p>
<p>My suggested solution (What I have so far):</p>
<pre><code>def subtraction(a, b, c):
    firstnum = a
    if firstnum > 0:
        firstnum = (a-b)
    if firstnum < 0:
        return firstnum
</code></pre>
<p>Problem: <br/>When I try it and the result is returned at the end, it comes up blank. Any suggestions for what I am missing?</p>
| 0 | 2016-07-27T19:12:49Z | 38,621,558 | <p>You're forgetting to loop! As a result, you're not subtracting <code>b</code> from <code>a</code> until <code>a</code> is less than zero. I suggest using a while loop like this,</p>
<pre><code>def subtraction(a, b, c):
    firstnum = a
    while firstnum >= 0:
        firstnum -= b
    return firstnum
</code></pre>
<p>Let me explain what was wrong before. Without the loop, your function would only subtract <code>b</code> from <code>a</code> once. Then it would check if <code>firstnum</code> was greater than zero. If AND only if <code>firstnum</code> was less than zero would it be returned. My guess is that <code>firstnum</code> would not be returned because it wouldn't be less than zero after one subtraction of <code>b</code>. With this loop, there will be a guarantee that <code>firstnum</code> will be returned as a negative value because the loop will keep subtracting <code>b</code> from <code>firstnum</code> until <code>firstnum</code> is less than zero.</p>
| 2 | 2016-07-27T19:25:44Z | [
"python"
] |