title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
How to get SSE Last Event ID with CGI + Python | 38,511,222 | <p>I am using Server Sent Events (SSE) to push data to the front end. I was able to implement most of it just by reading the specification <a href="https://www.w3.org/TR/2011/WD-eventsource-20111020/" rel="nofollow">https://www.w3.org/TR/2011/WD-eventsource-20111020/</a>. The only part I'm stuck on is what happens when the SSE connection reconnects after a faulty connection.</p>
<p>The documentation says that when an SSE connection reconnects it adds the Last-Event-ID to the Header data. I have no idea how to get this from my script.</p>
<p>I am using lighttpd as my web server, cgi to run my scripts, and python 2.7 as my scripting language.</p>
<p>I attempted to read os.environ, but it does not contain anything related to the SSE last received ID.</p>
<p>Does anyone know how I can get the last event id received?</p>
| 0 | 2016-07-21T17:57:15Z | 38,600,374 | <p>I ended up figuring this out, so hopefully this will be useful for anyone in the future. The lastEventId is handled silently by the protocol itself. If the connection drops temporarily and then reconnects, the lastEventId is used: all events that were missed are queued up, and only the ones greater than the lastEventId are sent. However, if the connection permanently drops, or drops for longer than x minutes (browser dependent), then the SSE connection is terminated. A lastEventId will not be sent because it will be considered a new connection.</p>
<p>I was able to get around this by appending the lastEventId to the GET request of the SSE. So...</p>
<pre><code>/var/www/SSE.py?lastEventId=100
</code></pre>
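<p>A minimal sketch of the server side of this workaround (the function name is made up for illustration). On a normal in-session reconnect, browsers resend the <code>Last-Event-ID</code> header, which CGI exposes through <code>os.environ</code> as <code>HTTP_LAST_EVENT_ID</code>; the query-string parameter covers the fresh-connection case described above:</p>

```python
def get_last_event_id(environ):
    """Return the client's last event id as a string, or None.

    Checks the Last-Event-ID header first (resent on automatic SSE
    reconnects), then falls back to the lastEventId query parameter
    appended manually when opening a fresh connection.
    """
    header_value = environ.get("HTTP_LAST_EVENT_ID")
    if header_value:
        return header_value
    for pair in environ.get("QUERY_STRING", "").split("&"):
        if pair.startswith("lastEventId="):
            return pair.split("=", 1)[1]
    return None
```

<p>In the CGI script this would be called as <code>get_last_event_id(os.environ)</code>.</p>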
| 0 | 2016-07-26T21:42:04Z | [
"python",
"python-2.7",
"lighttpd"
] |
Constructing a method name | 38,511,227 | <p>How do I construct a method name to use with an instantiated class? I'm trying to run a method in a class 'jsonmaker' where the method corresponds to the datatype specified in the filein string.</p>
<pre><code>for filein in filein_list:
    datatype = filein[(filein.find('_')):-8]
    method_name = pjoin(datatype + 'populate')
    instantiated_class.method_name(arg1, arg2, arg3)
</code></pre>
<p>When I try the above code I get the error message </p>
<pre><code>'AttributeError: 'jsonmaker' object has no attribute 'method_name''
</code></pre>
<p>There is in fact a method in jsonmaker that matches pjoin(datatype + 'populate') so how do I tell the class to recognize that? Sorry if I'm not explaining this well. </p>
| 1 | 2016-07-21T17:57:36Z | 38,511,258 | <p>You can't reference an attribute of a class instance by putting a variable directly after the dot. Not even when the variable references a string which is the same as the name of the attribute.</p>
<p>You could instead use <code>getattr</code> to get the <code>method</code> from the string and then call it with those parameters:</p>
<pre><code>getattr(instantiated_class, method_name)(arg1, arg2, arg3)
</code></pre>
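<p>A small self-contained sketch (the class and method names here are made up for illustration, standing in for the <code>jsonmaker</code> class in the question):</p>

```python
class JsonMaker(object):
    def claimspopulate(self, a, b, c):
        return ("claims", a, b, c)

instantiated_class = JsonMaker()
method_name = "claims" + "populate"  # built from the filename, as in the question

# getattr looks the attribute up by its string name; calling the
# result invokes the bound method.
result = getattr(instantiated_class, method_name)(1, 2, 3)

# A third argument to getattr returns a default instead of raising
# AttributeError when the method does not exist:
missing = getattr(instantiated_class, "nosuchpopulate", None)
```
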
| 2 | 2016-07-21T17:59:14Z | [
"python",
"class",
"methods"
] |
Change the color of text within a pandas dataframe html table python using styles and css | 38,511,373 | <p>I have a pandas dataframe:</p>
<pre><code>arrays = [['Midland','Midland','Hereford','Hereford','Hobbs','Hobbs','Childress','Childress','Reese','Reese',
'San Angelo','San Angelo'],['WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.DataFrame(np.random.randn(12,4),index=arrays,columns=['00 UTC','06 UTC','12 UTC','18 UTC'])
df
</code></pre>
<p>The table that prints from this looks like this:<a href="http://i.stack.imgur.com/FbsI6.png" rel="nofollow"><img src="http://i.stack.imgur.com/FbsI6.png" alt="enter image description here"></a></p>
<p>I would like to be able to format this table better. Specifically, I would like to color all of the values in the 'MOS' rows a certain color and color the left two index/header columns as well as the top header row a different background color than the rest of the cells with values in them. Any ideas how I can do this?</p>
| 6 | 2016-07-21T18:06:19Z | 38,511,805 | <p>This takes a few steps:</p>
<p>First import <code>HTML</code> and <code>re</code></p>
<pre><code>from IPython.display import HTML
import re
</code></pre>
<p>You can get at the <code>html</code> pandas puts out via the <code>to_html</code> method.</p>
<pre><code>df_html = df.to_html()
</code></pre>
<p>Next we are going to generate a random identifier for the html table and style we are going to create.</p>
<pre><code>random_id = 'id%d' % np.random.choice(np.arange(1000000))
</code></pre>
<p>Because we are going to insert some style, we need to be careful to specify that this style will only be for our table. Now let's insert this into the <code>df_html</code></p>
<pre><code>df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html)
</code></pre>
<p>And create a style tag. This is really up to you. I just added some hover effect.</p>
<pre><code>style = """
<style>
table#{random_id} tr:hover {{background-color: #f5f5f5}}
</style>
""".format(random_id=random_id)
</code></pre>
<p>Finally, display it</p>
<pre><code>HTML(style + df_html)
</code></pre>
<h3>Function all in one.</h3>
<pre><code>def HTML_with_style(df, style=None, random_id=None):
    from IPython.display import HTML
    import numpy as np
    import re

    df_html = df.to_html()

    if random_id is None:
        random_id = 'id%d' % np.random.choice(np.arange(1000000))

    if style is None:
        style = """
        <style>
        table#{random_id} {{color: blue}}
        </style>
        """.format(random_id=random_id)
    else:
        new_style = []
        s = re.sub(r'</?style>', '', style).strip()
        for line in s.split('\n'):
            line = line.strip()
            if not re.match(r'^table', line):
                line = re.sub(r'^', 'table ', line)
            new_style.append(line)
        new_style = ['<style>'] + new_style + ['</style>']
        style = re.sub(r'table(#\S+)?', 'table#%s' % random_id, '\n'.join(new_style))

    df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html)

    return HTML(style + df_html)
</code></pre>
<p>Use it like this:</p>
<pre><code>HTML_with_style(df.head())
</code></pre>
<p><a href="http://i.stack.imgur.com/iSiyr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iSiyr.png" alt="enter image description here"></a></p>
<pre><code>HTML_with_style(df.head(), '<style>table {color: red}</style>')
</code></pre>
<p><a href="http://i.stack.imgur.com/LfFrq.png" rel="nofollow"><img src="http://i.stack.imgur.com/LfFrq.png" alt="enter image description here"></a></p>
<pre><code>style = """
<style>
tr:nth-child(even) {color: green;}
tr:nth-child(odd) {color: aqua;}
</style>
"""
HTML_with_style(df.head(), style)
</code></pre>
<p><a href="http://i.stack.imgur.com/a95aW.png" rel="nofollow"><img src="http://i.stack.imgur.com/a95aW.png" alt="enter image description here"></a></p>
<p>Learn CSS and go nuts!</p>
| 10 | 2016-07-21T18:28:57Z | [
"python",
"html",
"css",
"pandas",
"dataframe"
] |
Change the color of text within a pandas dataframe html table python using styles and css | 38,511,373 | <p>I have a pandas dataframe:</p>
<pre><code>arrays = [['Midland','Midland','Hereford','Hereford','Hobbs','Hobbs','Childress','Childress','Reese','Reese',
'San Angelo','San Angelo'],['WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.DataFrame(np.random.randn(12,4),index=arrays,columns=['00 UTC','06 UTC','12 UTC','18 UTC'])
df
</code></pre>
<p>The table that prints from this looks like this:<a href="http://i.stack.imgur.com/FbsI6.png" rel="nofollow"><img src="http://i.stack.imgur.com/FbsI6.png" alt="enter image description here"></a></p>
<p>I would like to be able to format this table better. Specifically, I would like to color all of the values in the 'MOS' rows a certain color and color the left two index/header columns as well as the top header row a different background color than the rest of the cells with values in them. Any ideas how I can do this?</p>
| 6 | 2016-07-21T18:06:19Z | 39,347,921 | <p>Using pandas' new styling functionality:</p>
<pre><code>import numpy as np
import pandas as pd
arrays = [['Midland','Midland','Hereford','Hereford','Hobbs','Hobbs','Childress','Childress','Reese','Reese',
'San Angelo','San Angelo'],['WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.DataFrame(np.random.randn(12,4),index=arrays,columns=['00 UTC','06 UTC','12 UTC','18 UTC'])
def highlight_MOS(s):
    is_mos = s.index.get_level_values(1)=='MOS'
    return ['color: darkorange' if v else 'color: darkblue' for v in is_mos]
s = df.style.apply(highlight_MOS)
s
</code></pre>
<p><a href="http://i.stack.imgur.com/NzqKH.png" rel="nofollow"><img src="http://i.stack.imgur.com/NzqKH.png" alt="enter image description here"></a></p>
| 1 | 2016-09-06T11:25:30Z | [
"python",
"html",
"css",
"pandas",
"dataframe"
] |
Extending a template thats already extended in Django | 38,511,433 | <p>I'm trying to figure out if there is a way to extend a partial view into a view that already extends base.html.</p>
<p>Here is an example of what I'm trying to do:</p>
<p><strong>my-template.html</strong></p>
<pre><code>{% extends 'base.html '%}
<div class="row">
<div class="col-xs-12">
<ul class="nav nav-tabs">
<li role="presentation" class="active"><a href="#">Tab1</a></li>
<li role="presentation"><a href="#">Tab2</a></li>
</ul>
</div>
</div>
<div>
{% block tab_content %}
{% endblock %}
</div>
</code></pre>
<p><strong>partial1.html</strong></p>
<pre><code>{% extends 'my-template.html' %}
{% block tab_content %}
<h1>I'm partial 1</h1>
{% endblock %}
</code></pre>
<p>The my-template.html view has a url that is constructed like so:</p>
<pre><code>url(r'^my-template/(?P<id>[0-9]+)/$', views.my_template_view, name='my-template')
</code></pre>
<p>in addition a context dict is passed into the my_template_view providing the id for the url.</p>
<p>I would like the for the user to click on a tab and for its corresponding partial to be rendered with a url like so:</p>
<pre><code>url(r'^my-template/(?P<id>[0-9]+)/tab1/$', views.tab1_view, name='tab1-view')
</code></pre>
<p>but right now I'm getting a NoReverseMatch at /my-template/97/tab1/ which I'm assuming means that my tab1_view doesn't have access to the same context as the my_template_view and thus can't get the id to build the reverse of my url.</p>
<pre><code>In template /partial1.html, error at line 0
Reverse for 'tab1_view' with arguments '('',)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['/my-template/(?P<id>[0-9]+)/tab1/$']
</code></pre>
<p>So, is there a way for me, at the very least, to pass along the context or the id so this works, or am i going about this in the entirely wrong way?</p>
| 1 | 2016-07-21T18:09:03Z | 38,512,524 | <p>The typical way to solve this is by using the <code>include</code> template tag, not by extending with a new template.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#include" rel="nofollow">Here is the Django doc describing this.</a></p>
<p>You can even use a variable to define a dynamic template name that will be included based on logic in your view.</p>
<p>A little more clarification here:</p>
<p>You can also have the URL route direct to the same view and have the "tab" optionally passed in as a second parameter as so:</p>
<pre><code>url(r'^my-template/(?P<id>[0-9]+)/(?P<tab_name>\w+)/$', views.my_template_view, name='my-template')
url(r'^my-template/(?P<id>[0-9]+)/$', views.my_template_view, name='my-template')
</code></pre>
<p>And your view would look something like:</p>
<pre><code>def my_template_view(request, id, tab_name=None):
    if not tab_name:
        tab_name = "tab1"

    if tab_name == "tab1":
        partial = "tab1.django.html"
    elif tab_name == "tab2":
        partial = "tab2.django.html"

    return render(request, "my-template.html", { 'partial': partial })
</code></pre>
<p>And on your template you would have:</p>
<pre><code>{% include partial %}
</code></pre>
<p>Because the included template will have the same context, you will have access to any variables that were available in the original context as well.</p>
| 2 | 2016-07-21T19:09:21Z | [
"python",
"django"
] |
Python: download files from google drive using url | 38,511,444 | <p>I am trying to download files from google drive and all I have is the drive's url.</p>
<p>I have read about the Google API, which talks about a drive_service and MedioIO and also requires some credentials (mainly a JSON file/OAuth). But I am unable to get any idea of how it works. </p>
<p>Also, tried urllib2 urlretrieve, but my case is to get files from drive. Tried 'wget' too but no use.</p>
<p>Tried pydrive library. It has good upload functions to drive but no download options.</p>
<p>Any help will be appreciated.
Thanks.</p>
| 0 | 2016-07-21T18:09:36Z | 38,516,081 | <p><code>PyDrive</code> allows you to download a file with the function <code>GetContentFile()</code>. You can find the function's documentation <a href="https://googledrive.github.io/PyDrive/docs/_build/html/filemanagement.html#download-file-content" rel="nofollow">here</a>.</p>
<p>See example below:</p>
<pre><code># Initialize GoogleDriveFile instance with file id.
file_obj = drive.CreateFile({'id': '<your file ID here>'})
file_obj.GetContentFile('cats.png') # Download file as 'cats.png'.
</code></pre>
<p>This code assumes that you have an authenticated <code>drive</code> object, the docs on this can be found <a href="https://googledrive.github.io/PyDrive/docs/_build/html/oauth.html#authentication-in-two-lines" rel="nofollow">here</a> and <a href="https://googledrive.github.io/PyDrive/docs/_build/html/filemanagement.html#upload-a-new-file" rel="nofollow">here</a>.</p>
<p>In the general case this is done like so:</p>
<pre><code>from pydrive.auth import GoogleAuth
gauth = GoogleAuth()
# Create local webserver which automatically handles authentication.
gauth.LocalWebserverAuth()
# Create GoogleDrive instance with authenticated GoogleAuth instance.
drive = GoogleDrive(gauth)
</code></pre>
<p>Info on silent authentication on a server can be found <a href="https://googledrive.github.io/PyDrive/docs/_build/html/oauth.html#automatic-and-custom-authentication-with-settings-yaml" rel="nofollow">here</a> and involves writing a <code>settings.yaml</code> (example: <a href="https://googledrive.github.io/PyDrive/docs/_build/html/oauth.html#sample-settings-yaml" rel="nofollow">here</a>) in which you save the authentication details.</p>
| 0 | 2016-07-21T23:47:23Z | [
"python",
"download",
"google-drive-sdk",
"urllib2",
"pydrive"
] |
Python: download files from google drive using url | 38,511,444 | <p>I am trying to download files from google drive and all I have is the drive's url.</p>
<p>I have read about the Google API, which talks about a drive_service and MedioIO and also requires some credentials (mainly a JSON file/OAuth). But I am unable to get any idea of how it works. </p>
<p>Also, tried urllib2 urlretrieve, but my case is to get files from drive. Tried 'wget' too but no use.</p>
<p>Tried pydrive library. It has good upload functions to drive but no download options.</p>
<p>Any help will be appreciated.
Thanks.</p>
| 0 | 2016-07-21T18:09:36Z | 39,225,272 | <p>If by "drive's url" you mean the <strong>shareable link</strong> of a file on Google Drive, then the following might help:</p>
<pre><code>import requests
def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"

    session = requests.Session()
    response = session.get(URL, params = { 'id' : id }, stream = True)
    token = get_confirm_token(response)

    if token:
        params = { 'id' : id, 'confirm' : token }
        response = session.get(URL, params = params, stream = True)

    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)

if __name__ == "__main__":
    file_id = 'TAKE ID FROM SHAREABLE LINK'
    destination = 'DESTINATION FILE ON YOUR DISK'
    download_file_from_google_drive(file_id, destination)
</code></pre>
<p>The snippet does not use <em>pydrive</em> or the Google Drive SDK, though. It uses the <a href="http://docs.python-requests.org" rel="nofollow">requests</a> module (which is, in a way, an alternative to <em>urllib2</em>).</p>
<p>When downloading large files from Google Drive, a single GET request is not sufficient. A second one is needed - see <a href="http://stackoverflow.com/a/39225039/6770522">wget/curl large file from google drive</a>.</p>
| 0 | 2016-08-30T10:39:01Z | [
"python",
"download",
"google-drive-sdk",
"urllib2",
"pydrive"
] |
How to modify nested JSON with python | 38,511,586 | <p>I need to update (CRUD) a nested JSON file using Python. I want to be able to call Python functions (to update/delete/create entries) and write the result back to the JSON file.</p>
<p>Here is a <a href="https://www.dropbox.com/s/9j0pi6iiww0i9fi/InputJsonfile.json?dl=0" rel="nofollow">sample file</a>.</p>
<p>I am looking at <a href="https://boltons.readthedocs.io/en/latest/iterutils.html#boltons.iterutils.remap" rel="nofollow">the remap</a> library but not sure if this will work.</p>
<pre><code> {
  "groups": [
    {
      "name": "group1",
      "properties": [
        {
          "name": "Test-Key-String",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value1"
          }
        },
        {
          "name": "Test-Key-Integer",
          "value": {
            "type": "Integer",
            "data": 1000
          }
        }
      ],
      "groups": [
        {
          "name": "group-child",
          "properties": [
            {
              "name": "Test-Key-String",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "value1"
              }
            },
            {
              "name": "Test-Key-Integer",
              "value": {
                "type": "Integer",
                "data": 1000
              }
            }
          ]
        }
      ]
    },
    {
      "name": "group2",
      "properties": [
        {
          "name": "Test-Key2-String",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value2"
          }
        }
      ]
    }
  ]
}
</code></pre>
| 0 | 2016-07-21T18:17:00Z | 38,511,889 | <p>I feel like I'm missing something in your question. In any event, what I understand is that you want to read a json file, edit the data as a python object, then write it back out with the updated data?</p>
<p>Read the json file:</p>
<pre><code>import json
with open("data.json") as f:
    data = json.load(f)
</code></pre>
<p>That creates a dictionary (given the format you've given) that you can manipulate however you want. Assuming you want to write it out:</p>
<pre><code>with open("data.json", "w") as f:
    json.dump(data, f)
</code></pre>
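<p>For the "update" part, once the file is loaded you just walk the nested dictionaries and lists. A sketch (the helper name is made up; it matches the group/property layout of the sample file above):</p>

```python
import json

def set_property(doc, group_name, prop_name, new_data):
    """Set the 'data' field of a named property in a named top-level group.

    Returns True if the property was found and updated, False otherwise.
    """
    for group in doc.get("groups", []):
        if group.get("name") == group_name:
            for prop in group.get("properties", []):
                if prop.get("name") == prop_name:
                    prop["value"]["data"] = new_data
                    return True
    return False

doc = json.loads("""{"groups": [{"name": "group2", "properties": [
    {"name": "Test-Key2-String",
     "value": {"type": "String", "encoding": "utf-8", "data": "value2"}}]}]}""")
set_property(doc, "group2", "Test-Key2-String", "updated")
```

<p>This only looks at top-level groups; handling the nested <code>groups</code> key would need a recursive walk, which is what a tool like boltons' <code>remap</code> automates.</p>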
| 1 | 2016-07-21T18:33:39Z | [
"python",
"json"
] |
Matplotlib: error with "height" in grouped barchart | 38,511,600 | <p>I want to produce a barchart with two categories on the x axis and, for each category, 5 different series. I took inspiration from <a href="http://emptypipes.org/2013/11/09/matplotlib-multicategory-barchart/" rel="nofollow">here</a>, and amended the code as follows:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from pylab import *
%pylab inline
myarray=np.array([['Series1', 'A',45],
['Series2', 'A',47],
['Series3', 'A',48],
['Series4','A',48],
['Series5', 'A',49],
['Series6','B',39],
['Series7','B',37],
['Series8','B',38],
['Series9','B',36],
['Series10','B',38]])
fig1=plt.figure()
ax1=fig1.add_subplot(111)
space=0.25
slots=np.unique(myarray[:,0])
categories=np.unique(myarray[:,1])
n=len(slots)
width = (1 - space) / (len(slots))
for i,cond in enumerate(slots):
    print "cond:", cond
    vals = myarray[myarray[:,0] == cond][:,2]
    pos = [j - (1 - space) / 2. + i * width for j in range(1,len(categories)+1)]
    ax1.bar(pos, vals, width=width,label=cond,color=cm.Accent(float(i)/n))
</code></pre>
<p>I keep getting the same error: <code>ValueError: incompatible sizes: argument 'height' must be length 2 or scalar</code>.</p>
<p>It points at: <code>ax1.bar(pos, vals, width=width,label=cond,color=cm.Accent(float(i)/n))</code>. </p>
<p>I understand the problem is with <code>vals</code> because it should either be a scalar or have length 2, but I don't know how to solve it. The elements of <code>vals</code> are float! </p>
| 1 | 2016-07-21T18:17:29Z | 38,521,130 | <pre><code>import numpy as np
import matplotlib.pyplot as plt
from pylab import *
myarray=np.array([['Series1', 'A',45],
['Series2', 'A',47],
['Series3', 'A',48],
['Series4','A',48],
['Series5', 'A',49],
['Series1','B',39],
['Series2','B',37],
['Series3','B',38],
['Series4','B',36],
['Series5','B',38]])
fig1=plt.figure()
ax1=fig1.add_subplot(111)
space=0.25
slots=np.unique(myarray[:,0])
categories=np.unique(myarray[:,1])
n=len(slots)
width = (1 - space) / (len(slots))
for i,cond in enumerate(slots[::-1]):
    print "cond:", cond
    vals = myarray[myarray[:,0] == cond][:,2]
    pos = [j - (1 - space) / 2. + i * width for j in range(1,len(categories)+1)]
    print(float(i)/n)
    ax1.bar(pos, vals, width=width,label=cond,color=cm.Accent(1-float(i+1)/n))

plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/LXdbJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/LXdbJ.png" alt="enter image description here"></a></p>
| 1 | 2016-07-22T07:55:49Z | [
"python",
"matplotlib"
] |
cannot quit jupyter notebook server running | 38,511,673 | <p>I am using Jupyter Notebook for a project. Since I ssh into a linux cluster at work I use</p>
<pre><code>ssh -Y -L 8000:localhost:8888 user@host
</code></pre>
<p>Then I start the notebook with <code>jupyter notebook --no-browser &</code> so that I can continue using the terminal. Then on my local machine I open to <code>localhost:8000</code> and go about my work.</p>
<p>My problem is that I forgot several times to close the server by foregrounding the process and killing it with <code>Ctrl-C</code>. Instead I just logged out of the ssh session. Now when I run <code>jupyter notebook list</code> I get</p>
<pre><code>Currently running servers:
http://localhost:8934/ :: /export/home/jbalsells
http://localhost:8870/ :: /export/home/jbalsells
http://localhost:8892/ :: /export/home/jbalsells
http://localhost:8891/ :: /export/home/jbalsells
http://localhost:8890/ :: /export/home/jbalsells
http://localhost:8889/ :: /export/home/jbalsells
http://localhost:8888/ :: /export/home/jbalsells
</code></pre>
<p>I obviously do not want all of these servers running on my work's machine, but I do not know how to close them!</p>
<p>When I run ps I get nothing:</p>
<pre><code> PID TTY TIME CMD
12678 pts/13 00:00:00 bash
22584 pts/13 00:00:00 ps
</code></pre>
<p>I have Jupyter 4.1.0 installed.</p>
| 0 | 2016-07-21T18:21:57Z | 38,511,773 | <p>Section 3.3 should be applicable to this.
<a href="http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html" rel="nofollow">http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html</a></p>
<blockquote>
<p>When a notebook is opened, its “computational engine” (called the kernel) is automatically started. Closing the notebook browser tab will not shut down the kernel; instead the kernel will keep running until it is explicitly shut down.</p>
<p>To shut down a kernel, go to the associated notebook and click on menu File -> Close and Halt. Alternatively, the Notebook Dashboard has a tab named Running that shows all the running notebooks (i.e. kernels) and allows shutting them down (by clicking on a Shutdown button).</p>
</blockquote>
| 0 | 2016-07-21T18:26:53Z | [
"python",
"server",
"jupyter",
"jupyter-notebook"
] |
cannot quit jupyter notebook server running | 38,511,673 | <p>I am using Jupyter Notebook for a project. Since I ssh into a linux cluster at work I use</p>
<pre><code>ssh -Y -L 8000:localhost:8888 user@host
</code></pre>
<p>Then I start the notebook with <code>jupyter notebook --no-browser &</code> so that I can continue using the terminal. Then on my local machine I open to <code>localhost:8000</code> and go about my work.</p>
<p>My problem is that I forgot several times to close the server by foregrounding the process and killing it with <code>Ctrl-C</code>. Instead I just logged out of the ssh session. Now when I run <code>jupyter notebook list</code> I get</p>
<pre><code>Currently running servers:
http://localhost:8934/ :: /export/home/jbalsells
http://localhost:8870/ :: /export/home/jbalsells
http://localhost:8892/ :: /export/home/jbalsells
http://localhost:8891/ :: /export/home/jbalsells
http://localhost:8890/ :: /export/home/jbalsells
http://localhost:8889/ :: /export/home/jbalsells
http://localhost:8888/ :: /export/home/jbalsells
</code></pre>
<p>I obviously do not want all of these servers running on my work's machine, but I do not know how to close them!</p>
<p>When I run ps I get nothing:</p>
<pre><code> PID TTY TIME CMD
12678 pts/13 00:00:00 bash
22584 pts/13 00:00:00 ps
</code></pre>
<p>I have Jupyter 4.1.0 installed.</p>
| 0 | 2016-07-21T18:21:57Z | 38,513,158 | <p>So I found a solution.</p>
<p>Since <code>jupyter notebook list</code> tells you which ports the notebook servers are running on I looked for the PIDs using <code>netstat -tulpn</code> I got the information from <a href="http://www.cyberciti.biz/faq/what-process-has-open-linux-port/" rel="nofollow">http://www.cyberciti.biz/faq/what-process-has-open-linux-port/</a></p>
<pre><code>Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State     PID/Program name
tcp        0      0 0.0.0.0:8649     0.0.0.0:*          LISTEN    -
tcp        0      0 0.0.0.0:139      0.0.0.0:*          LISTEN    -
tcp        0      0 0.0.0.0:33483    0.0.0.0:*          LISTEN    -
tcp        0      0 0.0.0.0:5901     0.0.0.0:*          LISTEN    39125/Xvnc
</code></pre>
<p>Without looking too hard I was able to find the ports I knew to look for from <code>jupyter notebook list</code> and the processes running them (you could use <code>grep</code> if it were too hard to find them). Then I killed them with
<code>kill 8337</code> (or whatever number was associated).</p>
| 0 | 2016-07-21T19:49:10Z | [
"python",
"server",
"jupyter",
"jupyter-notebook"
] |
cannot quit jupyter notebook server running | 38,511,673 | <p>I am using Jupyter Notebook for a project. Since I ssh into a linux cluster at work I use</p>
<pre><code>ssh -Y -L 8000:localhost:8888 user@host
</code></pre>
<p>Then I start the notebook with <code>jupyter notebook --no-browser &</code> so that I can continue using the terminal. Then on my local machine I open to <code>localhost:8000</code> and go about my work.</p>
<p>My problem is that I forgot several times to close the server by foregrounding the process and killing it with <code>Ctrl-C</code>. Instead I just logged out of the ssh session. Now when I run <code>jupyter notebook list</code> I get</p>
<pre><code>Currently running servers:
http://localhost:8934/ :: /export/home/jbalsells
http://localhost:8870/ :: /export/home/jbalsells
http://localhost:8892/ :: /export/home/jbalsells
http://localhost:8891/ :: /export/home/jbalsells
http://localhost:8890/ :: /export/home/jbalsells
http://localhost:8889/ :: /export/home/jbalsells
http://localhost:8888/ :: /export/home/jbalsells
</code></pre>
<p>I obviously do not want all of these servers running on my work's machine, but I do not know how to close them!</p>
<p>When I run ps I get nothing:</p>
<pre><code> PID TTY TIME CMD
12678 pts/13 00:00:00 bash
22584 pts/13 00:00:00 ps
</code></pre>
<p>I have Jupyter 4.1.0 installed.</p>
| 0 | 2016-07-21T18:21:57Z | 40,136,334 | <p>I ran into the same issue and followed the solution posted above. Just wanted to clearify the solution a little bit.</p>
<pre><code>netstat -tulpn
</code></pre>
<p>will list all the active connections.</p>
<pre><code>tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 19524/python
</code></pre>
<p>you will need the PID "19524" in this case. you can even use the following to get the PID of the port you are trying to shut down</p>
<pre><code>fuser 8888/tcp
</code></pre>
<p>this will give you 19524 as well.</p>
<pre><code>kill 19524
</code></pre>
<p>will kill the process and free the port.</p>
| 0 | 2016-10-19T15:55:32Z | [
"python",
"server",
"jupyter",
"jupyter-notebook"
] |
OpenStack Error (Neutron network service) | 38,511,699 | <h1>My Environment</h1>
<ul>
<li>CentOS7</li>
<li>OpenStack(Liberty)</li>
</ul>
<h1>Problem</h1>
<p>neutron port-show net-ID</p>
<p>Unable to find port with name 'net-ID'</p>
<p>How do I fix this problem?? Please help</p>
<hr>
<h2>/etc/neutron/plugins/ml2/linuxbridge_agent.ini</h2>
<pre><code>[linux_bridge]
physical_interface_mappings = public:ens6f0
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
</code></pre>
<hr>
<h2>/etc/neutron/plugins/ml2/ml2_conf.ini</h2>
<pre><code>[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = public
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
[ml2_type_geneve]
[securitygroup]
enable_ipset = True
</code></pre>
| 0 | 2016-07-21T18:23:20Z | 39,857,815 | <p>Execute the following command to get the net-ids.</p>
<blockquote>
<p>neutron port-list </p>
</blockquote>
<p>After getting the IDs of the ports attached to the different network devices (router, DHCP, etc.), execute the following command with one of them:</p>
<blockquote>
<p>neutron port-show "port ID you got from the previous command"</p>
</blockquote>
| 0 | 2016-10-04T16:52:36Z | [
"python",
"openstack",
"centos7",
"openstack-neutron"
] |
Regrouping lines of a text file | 38,511,706 | <p>I am using a Python script to generate some Stata commands. The output is a text file. I would like to group lines belonging to a same observation, which is currently not the case, using Python.</p>
<p>A typical line in this file (let's call it file.txt) is of the sort:</p>
<pre><code>[something something] if a == 1 & b == 2 & c == 3 & [other things]
</code></pre>
<p>Where a, b and c are identifying variables. An (a,b,c) triplet uniquely identifies an observation. What I am trying to do is to sort file.txt by grouping all lines related to the same observation together.</p>
<p>For instance, go from:</p>
<pre><code>replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace k = 2 if a == 1 & b == 3 & c == 4 & comments_1 == ""
replace g = "Example" if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
</code></pre>
<p>to:</p>
<pre><code>replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace g = "Example" if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace k = 2 if a == 1 & b == 3 & c == 4 & comments_1 == ""
</code></pre>
<p>The lines 1 and 3 of the input are next to each other in the output because they relate to the same observation (the same a, b, c triplet). This is different from sorting alphabetically, so I cannot use sort().</p>
<p>My plan would be:</p>
<blockquote>
<p>Create an empty dictionary dict[tuple[int]:set[str]]</p>
<p>Read each line of the text file. For each line, get the triplet by searching for the characters after 'a == ' and before ' b ==' and so forth. </p>
<p>If the triplet is in the dictionary, add the line as a string in the set to which the triplet points to. If not, create the entry and add the string.</p>
<p>For each string in the set of each entry, write in a file the strings.</p>
</blockquote>
<p>This I believe would sort the file.</p>
<p>Would that work? Is there a better way to do it?</p>
<p>Thanks!</p>
| 0 | 2016-07-21T18:23:41Z | 38,512,203 | <p>Sounds good to me. You could use a regex to extract the observations. For example, assuming that observations are made up of positive integers you could use:</p>
<pre><code>import re
line = 'replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "test"'
m = re.search(r'a == (\d+) & b == (\d+) & c == (\d+)', line)
observation = tuple(map(int, m.groups()))
print(observation)
</code></pre>
<p>This prints the tuple <code>(1, 2, 3)</code>.</p>
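Carrying this through to the poster's full plan, here is a hedged sketch of the complete regroup step. The sample lines are stand-ins for the real file contents, and a list is used instead of the set suggested in the question so the original line order within each group is preserved:

```python
import re
from collections import OrderedDict  # keeps groups in first-seen order (Python 2.7+)

lines = [
    'replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "x"',
    'replace k = 2 if a == 1 & b == 3 & c == 4 & comments_1 == ""',
    'replace g = "E" if a == 1 & b == 2 & c == 3 & comments_1 == "x"',
]

groups = OrderedDict()
for line in lines:
    m = re.search(r'a == (\d+) & b == (\d+) & c == (\d+)', line)
    # lines that don't match the pattern all share the key None
    key = tuple(map(int, m.groups())) if m else None
    groups.setdefault(key, []).append(line)

# flatten the groups back into a single list of lines
regrouped = [line for bucket in groups.values() for line in bucket]
```

After this, `regrouped` holds lines 1 and 3 (both keyed by `(1, 2, 3)`) next to each other, followed by line 2, which is exactly the ordering asked for; writing `regrouped` out with `"\n".join(...)` finishes the job.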
| 0 | 2016-07-21T18:51:16Z | [
"python",
"sorting"
] |
Regrouping lines of a text file | 38,511,706 | <p>I am using a Python script to generate some Stata commands. The output is a text file. I would like to group lines belonging to a same observation, which is currently not the case, using Python.</p>
<p>A typical line in this file (let's call it file.txt) is of the sort:</p>
<pre><code>[something something] if a == 1 & b == 2 & c == 3 & [other things]
</code></pre>
<p>Where a, b and c are identifying variables. An (a, b, c) triplet uniquely identifies an observation. What I am trying to do is to sort file.txt by grouping all lines related to the same observation together.</p>
<p>For instance, go from:</p>
<pre><code>replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace k = 2 if a == 1 & b == 3 & c == 4 & comments_1 == ""
replace g = "Example" if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
</code></pre>
<p>to:</p>
<pre><code>replace k = 1 if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace g = "Example" if a == 1 & b == 2 & c == 3 & comments_1 == "I wish I was better at Python"
replace k = 2 if a == 1 & b == 3 & c == 4 & comments_1 == ""
</code></pre>
<p>The lines 1 and 3 of the input are next to each other in the output because they relate to the same observation (the same a, b, c triplet). This is different from sorting alphabetically, so I cannot use sort().</p>
<p>My plan would be:</p>
<blockquote>
<p>Create an empty dictionary dict[tuple[int]:set[str]]</p>
<p>Read each line of the text file. For each line, get the triplet by searching for the characters after 'a == ' and before ' b ==' and so forth. </p>
<p>If the triplet is in the dictionary, add the line as a string in the set to which the triplet points to. If not, create the entry and add the string.</p>
<p>For each string in the set of each entry, write in a file the strings.</p>
</blockquote>
<p>This I believe would sort the file.</p>
<p>Would that work? Is there a better way to do it?</p>
<p>Thanks!</p>
| 0 | 2016-07-21T18:23:41Z | 38,512,795 | <p>That's a good approach, but since you want to retain all lines, I wouldn't bother grouping lines with the same triple: Just make a list of all lines and sort them with their value triple as the sort key.</p>
<pre><code>import re

def getvalues(line):
    """Extract a value triple from a line that matches the pattern"""
    m = re.search(r"if a == (\d+) & b == (\d+) & c == (\d+) &", line)
    if m:
        return tuple(int(v) for v in m.groups())
    else:
        return line  # Lines that don't match the pattern are sorted normally

with open("file.txt") as fp:
    lines = fp.readlines()
lines.sort(key=getvalues)
</code></pre>
<p>The above assumes that all lines have identical variable names, white space, etc. If not, you'll need to elaborate your regexp.</p>
| 0 | 2016-07-21T19:27:41Z | [
"python",
"sorting"
] |
Python while loop Syntax Error | 38,511,856 | <p>I am learning Python and while working on a simple while loop I get a syntax error but cannot figure out why. Below is my code and the error I get</p>
<pre><code>products = ['Product 1', 'Product 2', 'Product 3']
quote_items = []
quote = input("What services are you interesting in? (Press X to quit)")
while (quote.upper() != 'X'):
product_found = products.get(quote)
if product_found:
quote_items.append(quote)
else:
print("No such product")
quote = input("Anything Else?")
print(quote_items)
</code></pre>
<p>I am using NetBeans 8.1 to run these. Below is the error I see after I type in Product 1:</p>
<pre><code>What servese are you interesting in? (Press X to quit)Product 1
Traceback (most recent call last):
File "\\NetBeansProjects\\while_loop.py", line 3, in <module>
quote = input("What services are you interesting in? (Press X to quit)")
File "<string>", line 1
Product 1
SyntaxError: no viable alternative at input '1'
</code></pre>
| 1 | 2016-07-21T18:31:40Z | 38,511,929 | <p>Use <code>raw_input</code> instead of <code>input</code>. Python evaluates <code>input</code> as pure python code.</p>
<pre><code>quote = raw_input("What services are you interesting in? (Press X to quit)")
</code></pre>
| 1 | 2016-07-21T18:36:18Z | [
"python",
"while-loop"
] |
Python while loop Syntax Error | 38,511,856 | <p>I am learning Python and while working on a simple while loop I get a syntax error but cannot figure out why. Below is my code and the error I get</p>
<pre><code>products = ['Product 1', 'Product 2', 'Product 3']
quote_items = []
quote = input("What services are you interesting in? (Press X to quit)")
while (quote.upper() != 'X'):
product_found = products.get(quote)
if product_found:
quote_items.append(quote)
else:
print("No such product")
quote = input("Anything Else?")
print(quote_items)
</code></pre>
<p>I am using NetBeans 8.1 to run these. Below is the error I see after I type in Product 1:</p>
<pre><code>What servese are you interesting in? (Press X to quit)Product 1
Traceback (most recent call last):
File "\\NetBeansProjects\\while_loop.py", line 3, in <module>
quote = input("What services are you interesting in? (Press X to quit)")
File "<string>", line 1
Product 1
SyntaxError: no viable alternative at input '1'
</code></pre>
| 1 | 2016-07-21T18:31:40Z | 38,512,098 | <p>in <em>Python 3</em></p>
<pre><code>products = ['Product 1', 'Product 2', 'Product 3']
quote_items = []
quote = input("What services are you interesting in? (Press X to quit)")
while (quote.upper() != 'X'):
product_found = quote in products
if product_found:
quote_items.append(quote)
else:
print("No such product")
quote = input("Anything Else?")
print(quote_items)
</code></pre>
<p>in <em>Python 2</em></p>
<pre><code>products = ['Product 1', 'Product 2', 'Product 3']
quote_items = []
quote = raw_input("What services are you interesting in? (Press X to quit)")
while (quote.upper() != 'X'):
product_found = quote in products
if product_found:
quote_items.append(quote)
else:
print "No such product"
quote = raw_input("Anything Else?")
print quote_items
</code></pre>
<p>This is because lists don't have a <code>.get()</code> method; you can use <code>value in list</code> instead, which returns <code>True</code> or <code>False</code>.</p>
| 4 | 2016-07-21T18:45:56Z | [
"python",
"while-loop"
] |
Add custom language for localization in Django app | 38,511,925 | <p><a href="http://stackoverflow.com/questions/19267886/adding-a-custom-language-to-django">adding a custom language to django</a></p>
<p>I checked this question and did all the steps mentioned in the accepted answer. After doing all of that, when I go to <code>/kjv/</code> it redirects to <code>/en/kjv/</code>.</p>
<p>Project structure:</p>
<p><code>MyProject
---------locale
-------------kjv
-----------------LC_MESSAGES
--------------------django.mo
--------------------django.po
---------myproject
--------------settings.py
---------app
---------manage.py</code></p>
<p>Can someone help me fix this?</p>
<p>settings.py </p>
<pre><code>...
import django.conf.locale
gettext = lambda s: s
EXTRA_LANG_INFO = {
'kjv': {
'bidi': False,
'code': u'kjv',
'name': u'Kjvx',
'name_local': u'Kjvx'
},
}
# Add custom languages not provided by Django
LANG_INFO = dict(django.conf.locale.LANG_INFO.items() + EXTRA_LANG_INFO.items())
django.conf.locale.LANG_INFO = LANG_INFO
LANGUAGES = (
('hr', gettext('hr')),
('en', gettext('en')),
('de', gettext('de')),
('fr', gettext('fr')),
('kjv', gettext('kjv')),
)
...
</code></pre>
<p>Django-1.6.5 and all urls are wrapped in <code>i18n_patterns</code>.</p>
| 0 | 2016-07-21T18:35:55Z | 38,528,364 | <p>If anyone comes across a similar issue, don't forget to update/add LOCALE_PATHS in settings.py.</p>
<p>e.g</p>
<pre><code>PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
LOCALE_PATHS = (
os.path.join(PROJECT_PATH, '../locale'),
)
</code></pre>
<p>You can point them anywhere, as long as the paths exist and have a valid locale structure.</p>
| 0 | 2016-07-22T13:59:15Z | [
"python",
"django",
"web-applications",
"localization",
"internationalization"
] |
Python crashing with openGl2 | 38,511,963 | <p>When I try to execute a function I wrote using mayavi, python will crash and give me a message:</p>
<blockquote>
<p>Error: In
D:\Build\VTK-7.0.0\Rendering\OpenGL2\vtkOpenGLRenderWindow.cxx, line
545 vtkWin32OpenGLRenderWIndow(00002533B0F8): Gl version 2.1 with
gpu_shader4 extension is not supported by your graphics card (rest is
cut off before it terminates)</p>
</blockquote>
<p>I have VTK 7.0.0, python 3.5.
What can I do to work around this and get my graphs to load?</p>
| -2 | 2016-07-21T18:37:57Z | 38,528,694 | <p>The issue was resolved by updating the video card drivers and moving the work environment off the Remote Desktop connection that was previously being used.</p>
| 0 | 2016-07-22T14:14:50Z | [
"python",
"opengl",
"vtk",
"mayavi"
] |
pyodbc: How to test whether it's possible to establish connection with SQL server without freezing up | 38,511,979 | <p>I am writing an app with <code>wxPython</code> that incorporates <code>pyodbc</code> to access SQL Server. A user must first establish a VPN connection before they can establish a connection with the SQL server. In cases where a user forgets to establish a VPN connection or is simply not authorized to access a particular server, the app will freeze for up to 60+ seconds before it produces an error message. Often, users will get impatient and force-close the app before the error message pops up. </p>
<p><strong>I wonder if there is a way to test whether it's possible to connect to the server without freezing up.</strong> I thought about using <code>timeout</code>, but it seems that <code>timeout</code> can be used only after I establish a connection</p>
<p>A sample connection string I use is below:</p>
<pre><code>connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True')
</code></pre>
| 1 | 2016-07-21T18:38:47Z | 38,512,759 | <p>See <a href="https://code.google.com/archive/p/pyodbc/wikis/Connection.wiki" rel="nofollow">https://code.google.com/archive/p/pyodbc/wikis/Connection.wiki</a> under <code>timeout</code></p>
<blockquote>
<p>Note: This attribute only affects queries. To set the timeout for the
actual connection process, use the timeout keyword of the
pyodbc.connect function.</p>
</blockquote>
<p>So change your connection string to:</p>
<pre><code>connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True', timeout=3)
</code></pre>
<p>should work</p>
| 1 | 2016-07-21T19:25:09Z | [
"python",
"sql-server",
"pyodbc"
] |
pyodbc: How to test whether it's possible to establish connection with SQL server without freezing up | 38,511,979 | <p>I am writing an app with <code>wxPython</code> that incorporates <code>pyodbc</code> to access SQL Server. A user must first establish a VPN connection before they can establish a connection with the SQL server. In cases where a user forgets to establish a VPN connection or is simply not authorized to access a particular server, the app will freeze for up to 60+ seconds before it produces an error message. Often, users will get impatient and force-close the app before the error message pops up. </p>
<p><strong>I wonder if there is a way to test whether it's possible to connect to the server without freezing up.</strong> I thought about using <code>timeout</code>, but it seems that <code>timeout</code> can be used only after I establish a connection</p>
<p>A sample connection string I use is below:</p>
<pre><code>connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True')
</code></pre>
| 1 | 2016-07-21T18:38:47Z | 38,516,742 | <blockquote>
<p>took a while before it threw an error message about server not existing or access being denied</p>
</blockquote>
<p>Your comment conflates two very different kinds of errors:</p>
<ul>
<li><p><em>server not existing</em> is a network error. Either the name has no address, or the address is unreachable. No connection can be made. </p></li>
<li><p><em>access being denied</em> is a response from the server. For the server to respond, a connection must exist. This is not to be confused with <em>connection refused</em> (ECONNREFUSED), which means the remote is not accepting connections on the port. </p></li>
</ul>
<p>SQL Server uses TCP/IP. You can use <a href="https://docs.python.org/3.5/howto/sockets.html" rel="nofollow">standard network functions</a> to determine if the network hostname of the machine running SQL Server can be found, and if the IP address is reachable. One advantage to using them to "pre-test" the connection is that any error you'll get will be much more specific than the typical <em>there was a problem connecting to the server</em>. </p>
<p>Note that not all delay-inducing errors can be avoided. For example, if the DNS server is not responding, the resolver will typically wait 30 seconds before giving up. If an IP address is valid, but there's no machine with that address, attempting a connection will take a long time to fail. There's no way for the client to know there's no such machine; it could just be taking a long time to get a response. </p>
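To illustrate the pre-test idea, here is a small sketch using only the standard library. It only checks DNS resolution plus a TCP handshake — no authentication happens — and 1433 is assumed as SQL Server's default TCP port (named instances may listen elsewhere):

```python
import socket

def server_reachable(host, port=1433, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout.

    This only tests network reachability (DNS lookup + TCP handshake);
    it does not authenticate against SQL Server.
    """
    try:
        # covers name-resolution failures (gaierror), refused connections,
        # and timeouts, all of which are subclasses of socket.error
        sock = socket.create_connection((host, port), timeout=timeout)
    except socket.error:
        return False
    else:
        sock.close()
        return True
```

Calling this before `pyodbc.connect` lets the app show a specific "check your VPN" message within a few seconds instead of letting the GUI appear frozen.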
| 1 | 2016-07-22T01:15:02Z | [
"python",
"sql-server",
"pyodbc"
] |
change rolename and static ip in azure python update_role | 38,512,129 | <p>The Azure Python API update_role doesn't provide a way to update the role name or set a static IP. Are there any alternative ways to do these operations?</p>
| 0 | 2016-07-21T18:47:04Z | 38,522,309 | <p>The <code>update_role</code> function of <code>azure.servicemanagement.servicemanagementservice</code> for Python wraps the REST API <a href="https://msdn.microsoft.com/en-us/library/azure/jj157187.aspx" rel="nofollow"><code>Update Role</code></a>. The role name is not an attribute of the request body, so the role name cannot be updated, but you can set a static IP for the role via the <a href="https://github.com/Azure/azure-sdk-for-python/blob/71ecaace918e50e2a3cb7cc1c19f8a8ab4336909/azure-servicemanagement-legacy/azure/servicemanagement/servicemanagementservice.py#L1485" rel="nofollow"><code>network_config</code></a> parameter, which maps to the <code><StaticVirtualNetworkIPAddress>ip-address</StaticVirtualNetworkIPAddress></code> tag in the request body of the REST API.</p>
<p>However, I found a <a href="https://michaelcollier.wordpress.com/2013/10/25/setting-a-webworker-role-name/" rel="nofollow">blog post</a> by a Microsoft developer which introduces setting a Web/Worker Role name by modifying the <a href="https://msdn.microsoft.com/en-us/library/azure/jj156212.aspx" rel="nofollow">Role schema</a>.</p>
| 0 | 2016-07-22T08:59:04Z | [
"python",
"azure"
] |
How to color-code 2D scatter plot based on a 3rd column | 38,512,187 | <p>I have a scatter plot with the x and y axes showing distance and temperature, respectively. The data was collected over multiple days, and I want to color-code the plot to see which data was collected on which date (e.g., 20160703). How can I add color based on the "date" column?</p>
| -2 | 2016-07-21T18:50:29Z | 38,512,661 | <p>I assume you're using xlsx, as that is the newest Excel file format. </p>
<p>For xlsx you are going to want to use openpyxl.</p>
<p>Documentation: <a href="https://openpyxl.readthedocs.io/en/default/" rel="nofollow">https://openpyxl.readthedocs.io/en/default/</a></p>
<p>Here is the specific documentation for color: <a href="http://openpyxl.readthedocs.io/en/default/_modules/openpyxl/styles/colors.html" rel="nofollow">http://openpyxl.readthedocs.io/en/default/_modules/openpyxl/styles/colors.html</a></p>
<p>If I understand correctly, here is roughly what you want (this colors the header cells in the spreadsheet):</p>
<pre><code>import openpyxl
from openpyxl.styles import Font
from openpyxl.styles.colors import COLOR_INDEX

wb = openpyxl.Workbook()
ws = wb.active
column_names = ['Date']
for x, name in enumerate(column_names):
    cell = ws.cell(row=1, column=x + 1, value=name)
    # COLOR_INDEX is openpyxl's built-in list of indexed colors
    cell.font = Font(color=COLOR_INDEX[x])
wb.save('filename.xlsx')
</code></pre>
| 0 | 2016-07-21T19:18:51Z | [
"python"
] |
Python Pandas - Segmentation Fault after renaming columns? | 38,512,205 | <p>So after I create a dataframe in pandas, I have a function that capitalizes the headers. But when I try to access the dataframe information after capitalizing, I get a segmentation fault error. If I try to access it before applying the function, I don't have any problems. What could I be doing wrong?</p>
<pre><code>reader = pd.read_csv(inFile)
def capitalize_headers(df):
for i in range(len(list(df.columns.values))):
df.columns.values[i] = (df.columns.values[i]).upper()
capitalize_headers(reader)
print reader['ColumnName']
</code></pre>
| 2 | 2016-07-21T18:51:19Z | 38,512,252 | <p>If you uppercase all of the column names then accessing a column that has lowercase characters will throw an error.</p>
<p>Specifically, the line</p>
<pre><code>df.columns.values[i] = (df.columns.values[i]).upper()
</code></pre>
<p>converts <code>'columnname'</code> to <code>'COLUMNNAME'</code>. Column access in Pandas is case sensitive, so you would now access that column with <code>df['COLUMNNAME']</code>.</p>
<p>Also, here is a more efficient/pythonic way of doing this using <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#method-summary" rel="nofollow">Pandas <code>str</code> methods</a>.</p>
<pre><code>df.columns = df.columns.str.capitalize()
</code></pre>
| 2 | 2016-07-21T18:53:39Z | [
"python",
"function",
"pandas"
] |
Starting wxPython and extending another class | 38,512,227 | <p>I am switching from tkinter to wxPython and I am confused on inheritance when using template wxPython scripts similar to the wxexample class below. Given my three scripts below (mainClass.py, wxexample.py, callbacks.py), how do I:</p>
<p>1) properly start the wxPython window from the mainClass; </p>
<p>2) have the Example class extend the callbacks class below.</p>
<p>mainClass.py:</p>
<pre><code>from time import sleep
import callbacks as clb
import wxexample
class mainClass(clb.callbacks): #, wxexample.Example):
def main(self):
#Here start the wxPython UI in wxexample!
...
#while 1: Edited
# sleep(0.5)
if __name__ == "__main__":
instance = mainClass()
instance.main()
</code></pre>
<p>wxexample.py:</p>
<pre><code>import wx
class Example(wx.Frame):
def __init__(self, *args, **kw):
super(Example, self).__init__(*args, **kw)
self.InitUI()
def InitUI(self):
pnl = wx.Panel(self)
btn=wx.Button(pnl, label='Button', pos=(20, 30))
#Here I would like to call callbacks.mycallback as self.mycallback:
btn.Bind(wx.EVT_BUTTON, self.mycallback)
self.Show(True)
</code></pre>
<p>callbacks.py:</p>
<pre><code>class callbacks():
def mycallback(self, e): #EDITED
print("callbacks.mycallback")
</code></pre>
<p><strong>SOLVED</strong>: I went back to fundamentals and found this solution. I was confused because in my real implementation mainClass was extending wxexample.Example for other reasons, which throws an error (Cannot create a consistent method resolution order (MRO) for bases Example, Callbacks)</p>
<pre><code>import callbacks as clb
import wxexample
class mainClass(clb.Callbacks): #, wxexample.Example):
def main(self):
wxexample.main()
if __name__ == "__main__":
instance = mainClass()
instance.main()
</code></pre>
<p>wxexample.py:</p>
<pre><code>import wx
import callbacks as clb
class Example(wx.Frame, clb.Callbacks):
def __init__(self, *args, **kw):
super(Example, self).__init__(*args, **kw)
self.InitUI()
def InitUI(self):
pnl = wx.Panel(self)
btn=wx.Button(pnl, label='Button', pos=(20, 30))
#Here I would like to call callbacks.mycallback as self.mycallback:
btn.Bind(wx.EVT_BUTTON, self.mycallback)
self.Show(True)
def main():
ex = wx.App()
Example(None)
ex.MainLoop()
if __name__ == '__main__':
main()
</code></pre>
| 0 | 2016-07-21T18:52:34Z | 38,514,156 | <p>All wxPython applications require the following, at a minimum:</p>
<ol>
<li>An instance of <code>wx.App</code> or a subclass derived from <code>wx.App</code></li>
<li>An instance of a top-level UI element, such as a <code>wx.Frame</code>, <code>wx.Dialog</code> or a derived class, which has been shown</li>
<li>An event loop, almost always implemented by calling the application object's <code>MainLoop</code> method.</li>
</ol>
<p>In light of that list, I worry about the while loop you show in your <code>main</code> method, as not using the main event loop, or blocking control from returning to it, will lead to more problems than you want to deal with when just learning the toolkit. You could replace that while loop with something like the following and that would put you on the right track:</p>
<pre><code>app = wx.App()
frame = wxexample.Example(None, title="Example Frame")
app.MainLoop()
</code></pre>
<p>You'll also need to give <code>mycallback</code> an extra parameter as event handlers are always passed an event object, even if they do not need it.</p>
<p>If you haven't already, I recommend reading the tutorial at this site: <a href="http://zetcode.com/wxpython/" rel="nofollow">http://zetcode.com/wxpython/</a></p>
| 1 | 2016-07-21T20:51:20Z | [
"python",
"inheritance",
"wxpython"
] |
Ansible - How maintain a dynamic list of data | 38,512,260 | <p>I'm building a system to deploy an entire environment in AWS. However, in the case of failure, I want to tear down everything that's already been built. Since I was planning to deploy multiple, different environments, I figured it would be better to just keep a running list of what I've made in AWS up to that point.</p>
<p>So I want a way to store just a simple array of the names of each component, appending the names as each part is spun up in turn, so that at the error stage, I can just terminate everything, but this is proving to be quite head-scratching for me.</p>
<p>As it stands, my code looks something like this:</p>
<p><strong>top-level-playbook</strong></p>
<pre><code>- hosts: localhost
connection: local
roles:
- { role: make_ec2, when: "ansible_failed_task is undefined" }
--Fails Here--
- { role: make_ec2, when: "ansible_failed_task is undefined" }
- { role: make_ec2, when: "ansible_failed_task is undefined" }
post_tasks:
- name: "do a teardown"
*iterate through list and tear down environment*
when: ansible_failed_task is defined
</code></pre>
<p><strong>make_ec2</strong> (This is in a block/rescue)</p>
<pre><code>---
- name: "spin up EC2
--all the variables you need to spin up the EC2"
register: EC2
- name: "Append List"
- set_fact:
ec2_list: "{{ ec2_list | default | -Append ec2.string.value- }}"
</code></pre>
<p>In addition, if there are any better ways to do the tear down, please let me know as well!</p>
| 0 | 2016-07-21T18:54:36Z | 38,512,808 | <blockquote>
<p>So I want a way to store just a simple array of the names of each
component, appending the names as each part is spun up in turn....</p>
</blockquote>
<p>You can create a temp file in <code>pre_task</code> and write data into it, so if something failed, you know what to delete. Alternatively you can replace file with DB, but the mechanics would still be the same.</p>
<blockquote>
<p>if there are any better ways to do the tear down, please let me know
as well!</p>
</blockquote>
<p>I personally wouldn't reinvent the wheel and would just use <code>CloudFormation</code>, which is a template for your AWS resources. You can use libraries like <code>troposphere</code> to help you manage the JSON structure. In other words, if something fails, CloudFormation will roll it back and clean up all the resources.</p>
| 0 | 2016-07-21T19:28:55Z | [
"python",
"amazon-web-services",
"ansible",
"jinja2"
] |
How to recognise urls starting with anchor(#) in urls.py file in django? | 38,512,300 | <p>I have started building my application in angularJS and django, and after creating a login page, I am trying to redirect my application to a new url after a successful login. I am using the <code>$location</code> variable to redirect my page. Here is my code:</p>
<pre><code>$scope.login = function() {
$http({
method: 'POST',
data: {
username: $scope.username,
password: $scope.password
},
url: '/pos/login_authentication/'
}).then(function successCallback(response) {
user = response.data
console.log(response.data)
if (user.is_active) {
$location.url("dashboard")
}
}, function errorCallback(response) {
console.log('errorCallback')
});
}
</code></pre>
<p>My initial url was <code>http://localhost:8000/pos/</code>, and after hitting the log in button, the above function calls, and I am redirected to <code>http://localhost:8000/pos/#/dashboard</code>. But I am unable to catch this url in my regex pattern in <code>urls.py</code> file:</p>
<p>My project <code>urls.py</code> file:</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
urlpatterns = [
url(r'^pos/', include('pos.urls')),
url(r'^admin/', admin.site.urls),
]
</code></pre>
<p>And my <code>pos</code> application's <code>urls.py</code> file:</p>
<pre><code>urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^login_authentication/$', views.login_authentication, name='login_authentication'),
url(r'^#/dashboard/$', views.dashboard, name='dashboard')
]
</code></pre>
<p>Using this, I am getting the same login page on visiting this <code>http://localhost:8000/pos/#/dashboard</code> link. This means that in my <code>urls.py</code> file of my <code>pos</code> application, it is mapping my <code>http://localhost:8000/pos/#/dashboard</code> to first object of <code>urlpatterns</code>:<code>url(r'^$', views.index, name='index')</code>. How do I make python differentiate between both the links?</p>
| 1 | 2016-07-21T18:56:40Z | 38,512,413 | <p>You have some major misunderstanding about and anchor in url. The anchor is called officially <a href="https://en.wikipedia.org/wiki/Fragment_identifier" rel="nofollow"><code>Fragment identifier</code></a>, it's not part of the main url, so if you have <code>#</code> when you visit an url like <code>http://localhost:8000/pos/#/dashboard</code>, your browser would treat the remaining <code>#/dashboard</code> as the anchor in page that <code>http://localhost:8000/pos/</code> renders. You shouldn't be even using it in your <code>urls.py</code> definition. Please read the link above more carefully about the usage of an anchor.</p>
| 1 | 2016-07-21T19:02:47Z | [
"python",
"angularjs",
"django"
] |
How to recognise urls starting with anchor(#) in urls.py file in django? | 38,512,300 | <p>I have started building my application in angularJS and django, and after creating a login page, I am trying to redirect my application to a new url after a successful login. I am using the <code>$location</code> variable to redirect my page. Here is my code:</p>
<pre><code>$scope.login = function() {
$http({
method: 'POST',
data: {
username: $scope.username,
password: $scope.password
},
url: '/pos/login_authentication/'
}).then(function successCallback(response) {
user = response.data
console.log(response.data)
if (user.is_active) {
$location.url("dashboard")
}
}, function errorCallback(response) {
console.log('errorCallback')
});
}
</code></pre>
<p>My initial url was <code>http://localhost:8000/pos/</code>, and after hitting the log in button, the above function calls, and I am redirected to <code>http://localhost:8000/pos/#/dashboard</code>. But I am unable to catch this url in my regex pattern in <code>urls.py</code> file:</p>
<p>My project <code>urls.py</code> file:</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
urlpatterns = [
url(r'^pos/', include('pos.urls')),
url(r'^admin/', admin.site.urls),
]
</code></pre>
<p>And my <code>pos</code> application's <code>urls.py</code> file:</p>
<pre><code>urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^login_authentication/$', views.login_authentication, name='login_authentication'),
url(r'^#/dashboard/$', views.dashboard, name='dashboard')
]
</code></pre>
<p>Using this, I am getting the same login page on visiting this <code>http://localhost:8000/pos/#/dashboard</code> link. This means that in my <code>urls.py</code> file of my <code>pos</code> application, it is mapping my <code>http://localhost:8000/pos/#/dashboard</code> to first object of <code>urlpatterns</code>:<code>url(r'^$', views.index, name='index')</code>. How do I make python differentiate between both the links?</p>
| 1 | 2016-07-21T18:56:40Z | 38,537,851 | <p>Using help from this <a href="http://stackoverflow.com/a/27941966/5159284">answer</a>, I figured a good redirection method through angular which doesn't append any anchor tag using $window:</p>
<pre><code>$scope.login = function() {
$http({
method: 'POST',
data: {
username: $scope.username,
password: $scope.password
},
url: '/pos/login_authentication/'
}).then(function successCallback(response) {
user = response.data
console.log(response.data)
if (user.is_active) {
$window.location.href = '/pos/dashboard';
}
}, function errorCallback(response) {
console.log('errorCallback')
});
}
</code></pre>
| 0 | 2016-07-23T03:27:56Z | [
"python",
"angularjs",
"django"
] |
Python: Locally assigning values to a global list | 38,512,312 | <p>I am currently in the process of programming a text-based adventure in Python as a learning exercise. I want "help" to be a global command, stored as values in a list, that can be called at (essentially) any time. As the player enters a new room, or the help options change, I reset the help_commands list with the new values. However, for some reason I cannot get the values in <code>help_commands</code> to update <em>inside</em> a function.</p>
<p>I asked a similar question before (<a href="http://stackoverflow.com/questions/38510214/python-typeerror-list-object-is-not-callable-on-global-variable">Python: TypeError: 'list' object is not callable on global variable</a>) and was suggested an object might be the way for me to go. </p>
<p>I'm somewhat new to Python and objects are one of my weaker aspects, so could I possibly get an example from someone?</p>
<pre><code>player = {
"name": "",
"gender": "",
"race": "",
"class": "",
"HP": 10,
}
# global help_commands
help_commands = ["Save", "Quit", "Other"]
def help():
sub_help = ' | '.join(help_commands)
return "The following commands are avalible: " + sub_help
def help_test():
print help()
help_commands = ["Exit [direction], Open [object], Talk to [Person], Use [Item]"]
print help()
print "Before we go any further, I'd like to know a little more about you."
print "What is your name, young adventurer?"
player_name = raw_input(">> ").lower()
if player_name == "help":
help()
else:
player['name'] = player_name
print "It is nice to meet you, ", player['name'] + "."
help_test()
</code></pre>
| 0 | 2016-07-21T18:57:08Z | 38,512,463 | <p>It's creating a local variable with the same name as the global variable. All in all, it's not a very great way to program - you might want to have a global help object, instead... or not. In any event, in your help_test() function, start with:</p>
<pre><code>global help_commands
</code></pre>
<p>And see if that does what you want.</p>
<p>This happens in objects, too.... there's a difference between:</p>
<pre><code>foo = "something"
</code></pre>
<p>and</p>
<pre><code>self.foo = "something"
</code></pre>
<p>inside a member function. If you're coming from something like visual basic, concepts like these will royally screw you up (luckily I went the other way, and just shake my head at VB).</p>
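A minimal sketch of the difference (the names here are illustrative):

```python
help_commands = ["Save", "Quit"]

def rebind_locally():
    # no global statement: this creates a NEW local name,
    # and the module-level list is untouched
    help_commands = ["Exit"]

def rebind_globally():
    global help_commands  # note: a statement, not a function call
    help_commands = ["Exit"]

rebind_locally()
print(help_commands)   # ['Save', 'Quit']
rebind_globally()
print(help_commands)   # ['Exit']
```

Note that <code>global</code> is only needed for rebinding the name; in-place mutation such as <code>help_commands.append(...)</code> or <code>help_commands[:] = [...]</code> works from inside a function without it.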
| 0 | 2016-07-21T19:05:26Z | [
"python",
"python-2.7",
"object"
] |
How to Concat a DataFrame with String to a DataFrame with Unicode and normalize datatype | 38,512,367 | <p>I'm having issues when concatenating two dataframes with different types of strings in Python2. One has normal Py2 strings, the other a unicode string. The concatenation works, but the types inside the numpy arrays internally remain the same (by design I'm sure).</p>
<pre><code>import pandas as pd
from pandas import DataFrame, MultiIndex
from datetime import datetime as dt
df = DataFrame(data={'data': ['A', 'BBB', 'CC']},
index=MultiIndex.from_tuples([(dt(2016, 1, 1), 2),
(dt(2016, 1, 1), 3),
(dt(2016, 1, 2), 2)],
names=['date', 'id']))
df2 = DataFrame(data={'data': [u'AAAAAAA']},
index=MultiIndex.from_tuples([(dt(2016, 1, 2), 4)],
names=['date', 'id']))
df3 = pd.concat([df, df2])
</code></pre>
<p>output:</p>
<pre><code>>>> df.data.values
array(['A', 'BBB', 'CC'], dtype=object)
>>> df2.data.values
array([u'AAAAAAA'], dtype=object)
>>> df3.data.values
array(['A', 'BBB', 'CC', u'AAAAAAA'], dtype=object)
</code></pre>
<p>As you can see, the array is now 'mixed', it has strings and unicode. Is there a way to force it to typecast to one or the other? If not, is there an easy way to check if one side is unicode or not, and convert that column to str or unicode? </p>
<p>(I care because pd.lib.infer_dtype will mark the dtype of this numpy array as "mixed" and I need it to be marked as either 'string' or 'unicode' to differentiate it from other objects that can also be stored in Pandas/Numpy Arrays)</p>
| 2 | 2016-07-21T19:00:13Z | 38,512,547 | <p>Pandas has an astype method but it returns a series. This will work. </p>
<pre><code>df2_copy = pd.DataFrame(df2.data.astype(str))
df2_copy.data.values
array(['AAAAAAA'], dtype=object)
</code></pre>
| 3 | 2016-07-21T19:11:11Z | [
"python",
"python-2.7",
"numpy",
"pandas",
"unicode"
] |
How to Concat a DataFrame with String to a DataFrame with Unicode and normalize datatype | 38,512,367 | <p>I'm having issues when concatenating two dataframes with different types of strings in Python2. One has normal Py2 strings, the other a unicode string. The concatenation works, but the types inside the numpy arrays internally remain the same (by design I'm sure).</p>
<pre><code>import pandas as pd
from pandas import DataFrame, MultiIndex
from datetime import datetime as dt
df = DataFrame(data={'data': ['A', 'BBB', 'CC']},
index=MultiIndex.from_tuples([(dt(2016, 1, 1), 2),
(dt(2016, 1, 1), 3),
(dt(2016, 1, 2), 2)],
names=['date', 'id']))
df2 = DataFrame(data={'data': [u'AAAAAAA']},
index=MultiIndex.from_tuples([(dt(2016, 1, 2), 4)],
names=['date', 'id']))
df3 = pd.concat([df, df2])
</code></pre>
<p>output:</p>
<pre><code>>>> df.data.values
array(['A', 'BBB', 'CC'], dtype=object)
>>> df2.data.values
array([u'AAAAAAA'], dtype=object)
>>> df3.data.values
array(['A', 'BBB', 'CC', u'AAAAAAA'], dtype=object)
</code></pre>
<p>As you can see, the array is now 'mixed', it has strings and unicode. Is there a way to force it to typecast to one or the other? If not, is there an easy way to check if one side is unicode or not, and convert that column to str or unicode? </p>
<p>(I care because pd.lib.infer_dtype will mark the dtype of this numpy array as "mixed" and I need it to be marked as either 'string' or 'unicode' to differentiate it from other objects that can also be stored in Pandas/Numpy Arrays)</p>
| 2 | 2016-07-21T19:00:13Z | 38,512,554 | <p>Use <code>applymap</code> and <code>encode</code></p>
<pre><code>df3.applymap(lambda s: s.encode('utf8'))
</code></pre>
<p><a href="http://i.stack.imgur.com/1DaBu.png" rel="nofollow"><img src="http://i.stack.imgur.com/1DaBu.png" alt="enter image description here"></a></p>
<pre><code>df3.applymap(lambda s: s.encode('utf8')).data.values
array(['A', 'BBB', 'CC', 'AAAAAAA'], dtype=object)
</code></pre>
| 2 | 2016-07-21T19:11:47Z | [
"python",
"python-2.7",
"numpy",
"pandas",
"unicode"
] |
Kivy infinite scroll | 38,512,411 | <p>I want to create infinite scroll for my app. It's my code:</p>
<pre><code>from kivy.app import App
from kivy.uix.scrollview import ScrollView
from kivy.uix.gridlayout import GridLayout
from kivy.uix.image import AsyncImage
IMAGES_URLS = ['https://upload.wikimedia.org/wikipedia/commons/c/c3/Jordan_by_Lipofsky_16577.jpg' for _ in range(5)]
def upload_images(widget):
layout = widget.children[0]
layout_childrens = len(layout.children)
for url in IMAGES_URLS:
img = AsyncImage(source=url, size_hint_y=None, height=240)
layout.add_widget(img)
widget.scroll_y = 100 - (100 * layout_childrens / (layout_childrens + len(IMAGES_URLS)))
class InfinityScrollView(ScrollView):
def on_scroll_move(self, touch):
if self.scroll_y < 0:
upload_images(self)
return super(InfinityScrollView, self).on_scroll_move(touch)
class InfiniteScrollApp(App):
def build(self):
layout = GridLayout(cols=1, spacing=10, size_hint_y=None)
layout.bind(minimum_height=layout.setter('height'))
for url in IMAGES_URLS:
img = AsyncImage(source=url, size_hint_y=None,
height=240)
layout.add_widget(img)
root = InfinityScrollView(size_hint=(None, None), size=(400, 400),
pos_hint={'center_x': .5, 'center_y': .5})
root.add_widget(layout)
return root
if __name__ == '__main__':
InfiniteScrollApp().run()
</code></pre>
<p>I overrode the <code>on_scroll_move</code> method; when the scroll reaches the bottom it calls the <code>upload_images</code> method, which adds new images.</p>
<p>It works fine, but I have a problem: the scroll position stays at the bottom after the images load, and I want to move it to the first of the newly loaded images.</p>
<p>I tried to set the correct value on <code>scroll_y</code> but it doesn't work; maybe I also need to call some method or change other variables. Any advice?</p>
| 2 | 2016-07-21T19:02:43Z | 38,552,444 | <p>I found a solution: I needed to override 2 variables (<code>scroll_y</code> and <code>effect_y</code>). This is the <a href="https://github.com/kivy/kivy/issues/2038#issuecomment-46158938" rel="nofollow">github</a> issue where I found the solution. Here is my fixed code.</p>
<pre><code>def upload_images(self):
layout = self.children[0]
layout_childrens = len(layout.children)
for url in IMAGES_URLS:
img = AsyncImage(source=url, size_hint_y=None, height=240)
layout.add_widget(img)
bar_position = layout_childrens / (layout_childrens + len(IMAGES_URLS))
self.scroll_y = 100 - 100 * bar_position
self.effect_y.value = self.effect_y.min - self.effect_y.min * bar_position
</code></pre>
| 1 | 2016-07-24T13:16:22Z | [
"android",
"python",
"kivy",
"infinite-scroll"
] |
How to insert breakpoint in django template? | 38,512,458 | <p>How I can insert <code>pdb.set_trace()</code> in django template? Or maybe run some another debug inside template.</p>
| 0 | 2016-07-21T19:05:08Z | 38,515,446 | <p>PyCharm Professional Edition supports graphical debugging of Django templates. More information about how to do that is here:</p>
<p><a href="https://www.jetbrains.com/help/pycharm/2016.1/debugging-django-templates.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/debugging-django-templates.html</a></p>
<p>PyCharm's debugger is very, very good. It is just about the best Python IDE available.</p>
<p>Disclaimer: I am a satisfied customer, but have no other vested interest.</p>
| 1 | 2016-07-21T22:34:47Z | [
"python",
"django",
"django-templates"
] |
Highlight specific points in matplotlib scatterplot | 38,512,485 | <p>I have a CSV with 12 columns of data. <a href="http://i.stack.imgur.com/7FGZE.png" rel="nofollow">I'm focusing on these 4 columns</a> </p>
<p>Right now I've plotted "Pass def" and "Rush def". I want to be able to highlight specific points on the scatter plot. For example, I want to highlight 1995 DAL point on the plot and change that point to a color of yellow. </p>
<p>I've started with a for loop but I'm not sure where to go. Any help would be great.</p>
<p>Here is my code: </p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import csv
import random
df = pd.read_csv('teamdef.csv')
x = df["Pass Def."]
y = df["Rush Def."]
z = df["Season"]
points = []
for point in df["Season"]:
if point == 2015.0:
print(point)
plt.figure(figsize=(19,10))
plt.scatter(x,y,facecolors='black',alpha=.55, s=100)
plt.xlim(-.6,.55)
plt.ylim(-.4,.25)
plt.xlabel("Pass DVOA")
plt.ylabel("Rush DVOA")
plt.title("Pass v. Rush DVOA")
plt.show()
</code></pre>
| 2 | 2016-07-21T19:07:12Z | 38,512,830 | <p>You can layer multiple scatters, so the easiest way is probably </p>
<p><code>
plt.scatter(x,y,facecolors='black',alpha=.55, s=100)
plt.scatter(x[z == 1995], y[z == 1995], color="yellow")
</code></p>
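To highlight one specific team-season (e.g. 1995 DAL), a boolean mask over the DataFrame does the filtering. A runnable sketch with made-up stand-in data for `teamdef.csv` (the column values here are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Tiny stand-in for teamdef.csv -- the numbers are made up.
df = pd.DataFrame({
    "Season":    [1995.0, 1995.0, 2015.0],
    "Team":      ["DAL", "SF", "DAL"],
    "Pass Def.": [-0.30, 0.10, 0.05],
    "Rush Def.": [-0.10, 0.05, -0.20],
})

mask = (df["Season"] == 1995.0) & (df["Team"] == "DAL")

plt.scatter(df["Pass Def."], df["Rush Def."], facecolors="black", alpha=.55, s=100)
plt.scatter(df.loc[mask, "Pass Def."], df.loc[mask, "Rush Def."],
            color="yellow", s=160, zorder=3)   # drawn on top of the first layer
plt.savefig("highlight.png")
print(int(mask.sum()))  # 1 point highlighted
```

The same mask idea extends to any combination of columns (season, team, etc.).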
| 3 | 2016-07-21T19:30:26Z | [
"python",
"pandas",
"matplotlib"
] |
Get values (point, vector, array, etc.) from `xr.Dataset` in Xarray ? (Python 3) | 38,512,507 | <p>I can't figure out how to actually pull the data out of a <code>xr.Dataset</code> object. I can't figure out how to access individual values. How can I pull the values (point values, vectors, arrays, etc.) out of the Datasets like I can with the DataArrays? </p>
<pre><code>np.random.seed(0)
DA_data = xr.DataArray(np.random.random((3,2,10,100)), dims=["targets","accuracy","metrics","attributes"], name="Data")
DA_data.coords["attributes"] = ["attr_%d"%_ for _ in range(100)]
# DA_data.coords
# Coordinates:
# * targets (targets) int64 0 1 2
# * accuracy (accuracy) int64 0 1
# * metrics (metrics) int64 0 1 2 3 4 5 6 7 8 9
# * attributes (attributes) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ...
# Indexing DataArray
#DA_data.sel(targets=0, accuracy=0, metrics=0, attributes="attr_5").values
#array(0.6458941130666561)
float(DA_data.sel(targets=0, accuracy=0, metrics=0, attributes="attr_5").values)
#0.6458941130666561
# Indexing Dataset
DS_data = DA_data.to_dataset()
# DS_data
# <xarray.Dataset>
# Dimensions: (accuracy: 2, attributes: 100, metrics: 10, targets: 3)
# Coordinates:
# * targets (targets) int64 0 1 2
# * accuracy (accuracy) int64 0 1
# * metrics (metrics) int64 0 1 2 3 4 5 6 7 8 9
# * attributes (attributes) <U7 'attr_0' 'attr_1' 'attr_2' 'attr_3' ...
# Data variables:
# Data (targets, accuracy, metrics, attributes) float64 0.5488 ...
DS_data.sel(targets=0, accuracy=0, metrics=0, attributes="attr_5").values
# <bound method Mapping.values of <xarray.Dataset>
# Dimensions: ()
# Coordinates:
# targets int64 0
# accuracy int64 0
# metrics int64 0
# attributes <U7 'attr_5'
# Data variables:
# Data float64 0.6459>
float(DS_data.sel(targets=0, accuracy=0, metrics=0, attributes="attr_5").values)
# ---------------------------------------------------------------------------
# TypeError Traceback (most recent call last)
# <ipython-input-408-e0c88e8541d8> in <module>()
# 38 # Data variables:
# 39 # Data float64 0.6459>
# ---> 40 float(DS_data.sel(targets=0, accuracy=0, metrics=0, attributes="attr_5").values)
# TypeError: float() argument must be a string or a number, not 'method'
</code></pre>
| 1 | 2016-07-21T19:08:35Z | 38,512,697 | <p>It's a little confusing, but <code>.values</code> works differently on <code>Dataset</code> and <code>DataArray</code>:</p>
<ul>
<li><code>DataArray.values</code> returns a NumPy array. This behavior is consistent with pandas.</li>
<li><code>Dataset.values()</code> returns a list (Python 2) or ValuesView (Python 3) of the DataArray objects that constitute the Dataset. This behavior is consistent with <code>Dataset</code> satisfying Python's <code>Mapping</code> interface.</li>
</ul>
<p>To pull values out of a <code>Dataset</code>, you need to pull out a <code>DataArray</code> via the dataset's dictionary-like interface, e.g., <code>DS_data['Data']</code> or <code>list(DS_data.values())[0]</code> — after which <code>float(DS_data['Data'].sel(targets=0, accuracy=0, metrics=0, attributes="attr_5"))</code> works just as it did on the original DataArray. You can't directly convert a <code>Dataset</code> into a float or NumPy array, no more than you could with a Python dict.</p>
| 2 | 2016-07-21T19:20:59Z | [
"python",
"arrays",
"multidimensional-array",
"dataset",
"python-xarray"
] |
Nested function calls and missing input parameter, python | 38,512,596 | <p>I'm trying out some Text Classification tutorials <a href="https://github.com/abromberg/sentiment_analysis_python/blob/master/sentiment_analysis.py" rel="nofollow">here</a>:</p>
<p>I don't understand the function calls in line 59 -- 65: </p>
<pre><code>#creates a feature selection mechanism that uses all words
def make_full_dict(words):
return dict([(word, True) for word in words])
#tries using all words as the feature selection mechanism
print 'using all words as features'
evaluate_features(make_full_dict)
</code></pre>
<p>Shouldn't <code>make_full_dict</code> be called with a string input value for <code>words</code>? </p>
| 0 | 2016-07-21T19:14:37Z | 38,512,638 | <p>Without further context, it is a bit difficult to give a complete answer to your question. It seems that the <code>evaluate_features</code> method takes a function as parameter; in that case, you don't need to call the function which was passed in as a parameter. Only <code>evaluate_features</code> should do that. If you call the function, then the return value of the function is what <code>evaluate_features</code> will get, rather than the function itself</p>
<p>If you want to see what that function is doing, add some print statements in the <code>make_full_dict</code> method which will help you see what words were passed to it</p>
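A tiny sketch of the difference between passing a function object and passing its result (the names here are hypothetical):

```python
def shout(text):
    return text.upper()

def apply_twice(func):
    # 'func' is the function object itself; apply_twice decides when to call it
    return func(func("hi"))

result = apply_twice(shout)   # pass the function, do NOT call it here
print(result)                 # HI
```

Writing `apply_twice(shout())` instead would call `shout` first (and here fail, since it needs an argument) and pass along its *return value* rather than the function — the same distinction as `evaluate_features(make_full_dict)` in the tutorial.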
| 1 | 2016-07-21T19:17:47Z | [
"python",
"function-calls",
"function-call"
] |
python unicode errors convert to printed values | 38,512,629 | <p>If I have some unicode like this:</p>
<pre><code>'\x00B\x007\x003\x007\x00-\x002\x00,\x001\x00P\x00W\x000\x000\x009\x00,\x00N\x00O\x00N\x00E\x00,\x00C\x00,\x005\x00,\x00J\x00,\x00J\x00,\x002\x009\x00,\x00G\x00A\x00R\x00Y\x00,\x00 \x00W\x00I\x00L\x00L\x00I\x00A\x00M\x00S\x00,\x00 \x00P\x00A\x00R\x00E\x00N\x00T\x00I\x00,\x00 \x00F\x00I\x00N\x00N\x00E\x00Y\x00 \x00&\x00 \x00L\x00E\x00W\x00I\x00S\x00,\x00U\x00S\x00,\x001\x00\r\x00'
</code></pre>
<p>and it's read in from a CSV as a string, and I'd like to convert it to a human-readable form. It works when I print it, but I can't figure out how to save it to a variable in human-readable form. What is the best approach?</p>
| -2 | 2016-07-21T19:16:45Z | 38,512,741 | <p>You don't have Unicode. Not <em>yet</em>. You have a series of bytes, and those bytes use the UTF-16 <em>encoding</em>. You need to decode those bytes first:</p>
<pre><code>data.decode('utf-16-be')
</code></pre>
<p>Printing it works only because your console ignores most of the big-endian pair of each UTF-16 codeunit.</p>
<p>Your data is missing a <a href="https://en.wikipedia.org/wiki/Byte_order_mark" rel="nofollow">Byte order mark</a>, so I had to use the <code>utf-16-be</code>, or <em>big endian</em> variant of UTF-16, on the assumption that you cut the data at the right byte. It could also be <em>little</em> endian if you didn't.</p>
<p>As it is I had to remove the last <code>\x00</code> null byte to make it decode; you pasted an odd, rather than an even number of bytes, as you cut one UTF-16 code unit (each 2 bytes) in half:</p>
<pre><code>>>> s = '\x00B\x007\x003\x007\x00-\x002\x00,\x001\x00P\x00W\x000\x000\x009\x00,\x00N\x00O\x00N\x00E\x00,\x00C\x00,\x005\x00,\x00J\x00,\x00J\x00,\x002\x009\x00,\x00G\x00A\x00R\x00Y\x00,\x00 \x00W\x00I\x00L\x00L\x00I\x00A\x00M\x00S\x00,\x00 \x00P\x00A\x00R\x00E\x00N\x00T\x00I\x00,\x00 \x00F\x00I\x00N\x00N\x00E\x00Y\x00 \x00&\x00 \x00L\x00E\x00W\x00I\x00S\x00,\x00U\x00S\x00,\x001\x00\r\x00'
>>> s[:-1].decode('utf-16-be')
u'B737-2,1PW009,NONE,C,5,J,J,29,GARY, WILLIAMS, PARENTI, FINNEY & LEWIS,US,1\r'
</code></pre>
<p>The file you read this from <em>probably</em> contains the BOM as the first two bytes. If so, just tell whatever you use to read this data to use <code>utf-16</code> as the codec, and it'll figure out the right variant from those first bytes.</p>
<p>If you are using Python 2 you'd want to study the <a href="https://docs.python.org/2/library/csv.html#examples" rel="nofollow"><em>Examples</em> section</a> of the <code>csv</code> module for code that can re-code your data in a form suitable for that module; if you include the <code>UnicodeReader</code> from that section you'd use it like this:</p>
<pre><code>with open(yourdatafile) as inputfile:
reader = UnicodeReader(inputfile, encoding='utf-16')
for row in reader:
# row is now a list with unicode strings
</code></pre>
<p>Demo:</p>
<pre><code>>>> from StringIO import StringIO
>>> import codecs
>>> f = StringIO(codecs.BOM_UTF16_BE + s[:-1])
>>> r = UnicodeReader(f, encoding='utf-16')
>>> next(r)
[u'B737-2', u'1PW009', u'NONE', u'C', u'5', u'J', u'J', u'29', u'GARY', u' WILLIAMS', u' PARENTI', u' FINNEY & LEWIS', u'US', u'1']
</code></pre>
<p>If you are using Python 3, simply set the <code>encoding</code> parameter to the <code>open()</code> function to <code>utf-16</code> and use the <code>csv</code> module as-is.</p>
| 6 | 2016-07-21T19:24:12Z | [
"python",
"unicode"
] |
Is there a way to run a python script and still be able to use my Ubuntu server? | 38,512,644 | <p>I have a server I lease from Digital Ocean. I access it using Putty. I want to run my python script in the background so that I can still do other things on the machine. What is a terminal command I can use to have it run in the background so that I can still use the machine?</p>
<p>Random information you might need:</p>
<p>-Using Ubuntu 14.04</p>
<p>-Python3.4 script</p>
<p>-My favorite dessert is cheesecake (buy me cheesecake)</p>
<p>Thanks for the help!</p>
| -1 | 2016-07-21T19:18:04Z | 38,512,752 | <p>It's more of a unix/linux question than python, but you can do it in a couple of different ways:</p>
<pre><code>% python myprogram.py &
</code></pre>
<p>The "&" says to run it in the background.</p>
<p>If you forgot, and just ran it, then you can type "^z" (control-Z) to suspend your program, then type:</p>
<pre><code>% bg
</code></pre>
<p>To start it running again in the background. And, just for fun, "fg" would put it back in the foreground. You can have many processes running in the background - if you're doing more than one, you can use "%n" to explicitly say which one (i.e. "bg" and "bg %1" would both work, if you only had one background job), and for more fun, if you have multiple jobs running in the background:</p>
<pre><code>% jobs
</code></pre>
<p>Will list them for you.</p>
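One caveat the above doesn't mention: a job started with `&` can still be killed when the SSH session closes (the shell sends it a hangup signal). `nohup` avoids that. A self-contained sketch — the `sh -c 'sleep 2; echo done'` is just a stand-in for `python3 yourscript.py`:

```shell
# nohup makes the job ignore the hangup signal sent when you log out;
# output is redirected because a nohup'd job has no terminal to print to.
nohup sh -c 'sleep 2; echo done' > job.log 2>&1 &
echo "started background job with PID $!"

wait          # only for this demo; in real use you would just log out
cat job.log   # contains: done
```

After logging back in, `tail -f job.log` shows the script's output as it runs.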
| 1 | 2016-07-21T19:24:43Z | [
"python",
"python-3.x",
"ubuntu",
"terminal",
"ubuntu-14.04"
] |
Python Issues swapping variables into a list | 38,512,739 | <p>This is what I'm assigned to do:</p>
<ol>
<li>Start by filling a list with 10 random numbers.</li>
<li>Show the list to the user.</li>
<li>Ask the user to pick two numbers between 1 and 10.</li>
<li>Swap the elements in the list that are in the two list locations the user picked in #3.</li>
<li>Check to see if the list is in order from smallest to largest.</li>
<li>Repeat steps 3 to 5 until done.</li>
<li>Thank the user for sorting the list for you.</li>
</ol>
<p>I've gotten to the part of assigning the user's input to a temporary box in the list, but I get the error</p>
<blockquote>
<p>TypeError: 'type' object is not subscriptable</p>
</blockquote>
<p>and now I'm stuck. I've searched around YouTube and everywhere online and can't find anything to assist me.
Here's my code:</p>
<pre><code>numbers = [4,2,5,5,6,4,7,6,9,5]
print("Heres your current list", numbers)
print("Pick a location between 1 and 10")
num = int(input())
if num <= 10 and num >= 1:
print("Please pick another location between 1 and 10")
num1 = int(input())
temp1 = list[num-1]
temp2 = list[num1-1]
list[num-1] = temp2
list[num1-1] = temp1
print(list)
</code></pre>
| -2 | 2016-07-21T19:24:07Z | 38,512,821 | <p>The name of your list is numbers and not list. Once you make the change, your code will work.</p>
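For reference, once the name clash is fixed, the two temporaries can also be replaced by Python's tuple-unpacking swap — a sketch using the question's data, with two hard-coded picks standing in for the `int(input())` calls:

```python
numbers = [4, 2, 5, 5, 6, 4, 7, 6, 9, 5]
num, num1 = 1, 5          # stand-ins for the user's two 1-based picks

# swap both elements in one statement, no temporaries needed
numbers[num - 1], numbers[num1 - 1] = numbers[num1 - 1], numbers[num - 1]
print(numbers)            # [6, 2, 5, 5, 4, 4, 7, 6, 9, 5]
```

The right-hand tuple is built before either assignment happens, which is what makes the swap safe.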
| 0 | 2016-07-21T19:30:06Z | [
"python",
"python-3.x"
] |
Python Issues swapping variables into a list | 38,512,739 | <p>This is what I'm assigned to do:</p>
<ol>
<li>Start by filling a list with 10 random numbers.</li>
<li>Show the list to the user.</li>
<li>Ask the user to pick two numbers between 1 and 10.</li>
<li>Swap the elements in the list that are in the two list locations the user picked in #3.</li>
<li>Check to see if the list is in order from smallest to largest.</li>
<li>Repeat steps 3 to 5 until done.</li>
<li>Thank the user for sorting the list for you.</li>
</ol>
<p>I've gotten to the part of assigning the user's input to a temporary box in the list, but I get the error</p>
<blockquote>
<p>TypeError: 'type' object is not subscriptable</p>
</blockquote>
<p>and now I'm stuck. I've searched around YouTube and everywhere online and can't find anything to assist me.
Here's my code:</p>
<pre><code>numbers = [4,2,5,5,6,4,7,6,9,5]
print("Heres your current list", numbers)
print("Pick a location between 1 and 10")
num = int(input())
if num <= 10 and num >= 1:
print("Please pick another location between 1 and 10")
num1 = int(input())
temp1 = list[num-1]
temp2 = list[num1-1]
list[num-1] = temp2
list[num1-1] = temp1
print(list)
</code></pre>
| -2 | 2016-07-21T19:24:07Z | 38,512,949 | <pre><code>numbers = [4,2,5,5,6,4,7,6,9,5]
print("Heres your current list", numbers)
print("Pick a location between 1 and 10")
num = int(input())
if num <= 10 and num >= 1:
print("Please pick another location between 1 and 10")
num1 = int(input())
temp1 = numbers[num-1]
temp2 = numbers[num1-1]
numbers[num-1] = temp2
numbers[num1-1] = temp1
print(numbers)
</code></pre>
| 0 | 2016-07-21T19:37:00Z | [
"python",
"python-3.x"
] |
How to fix size exceeds expected in python 3 | 38,512,813 | <p>I am trying to loop over Excel files with python <code>pandas</code>. First I save them to CSV, then I open them again, slice them, and save them again. But I get an error: </p>
<pre><code>"Workbook: size exceeds expected 10752 bytes; corrupt?"
</code></pre>
<p>I am relatively new to python.</p>
| -1 | 2016-07-21T19:29:30Z | 38,512,876 | <p>I think you may have a cell that has more than 255 characters in it. </p>
<p>See this article about data and file size limitations: <a href="http://kb.tableau.com/articles/knowledgebase/jet-data-file-size-limitations" rel="nofollow">http://kb.tableau.com/articles/knowledgebase/jet-data-file-size-limitations</a></p>
| 0 | 2016-07-21T19:33:03Z | [
"python",
"pandas"
] |
Python socket test recv() data | 38,512,817 | <p>I have a problem with my python script using socket. I want to test whether the client uses the correct file, rather than another tool like telnet. The server:</p>
<pre><code>import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
while 1:
conn, addr = s.accept()
data = conn.recv(1024)
if data == 'test':
print 'ok'
else:
print '!'
conn.close()
</code></pre>
<p>The client:</p>
<pre><code>import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('test')
</code></pre>
<p>The client sends 'test' to the server to verify that it's the correct file.
But in the case where the client sends nothing (if the client uses another way to connect), I can't test whether <code>conn.recv(1024)</code> equals 'test', because the script freezes; I have to wait for the client to stop before the server unfreezes.
Thank you in advance.</p>
| 1 | 2016-07-21T19:29:45Z | 38,520,949 | <p>You can use the <code>select</code> function to limit the time your server waits for a new client connection or for incoming data from a client:</p>
<pre><code>import socket
import select
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
while 1:
    # wait up to 60 seconds for a client to connect
    newClient,_,_ = select.select([s], [], [], 60)
    if not (newClient):
        # no client wanted to connect
        print 'No new client in last 60 seconds!'
        break
else:
# new connection
print 'New client!'
conn, addr = s.accept()
    # wait up to 60 seconds for the client to send data
readable,_,_ = select.select([conn], [], [], 60)
if (readable):
data = conn.recv(1024)
if data == 'test':
print 'ok'
else:
            print 'client did not send test'
else:
        print 'client sent nothing'
# close connection
conn.close()
</code></pre>
<p>see <a href="https://docs.python.org/2/library/select.html" rel="nofollow"><code>select</code></a>.</p>
| 0 | 2016-07-22T07:45:16Z | [
"python",
"sockets"
] |
Chain two multiprocessing scripts - python | 38,512,855 | <p>I have two scripts, they are both multiprocessing utilized scripts.</p>
<p><code>build.py</code> reads from a db and spits out a text file. Parallel jobs are launched to do this.</p>
<p><code>push.py</code> inserts/updates this text file to a persistent DB. Again, this is multiprocessing too.</p>
<p>Currently I have two separate crontab commands to do this. I want <code>build.py</code> to launch <code>push.py</code> then terminate itself, how can I do this?</p>
| 0 | 2016-07-21T19:31:58Z | 38,513,306 | <p>You can just use <a class='doc-link' href="http://stackoverflow.com/documentation/python/1393/subprocess-library#t=201607212000283295138"><code>subprocess</code></a></p>
<p>In <code>build.py</code></p>
<pre><code>import subprocess
def main():
# Do multiprocessing code, wait for all processes to finish
...
# Launch push.py and exit
subprocess.Popen(['python', '/path/to/push.py'])
if __name__ == '__main__':
main()
</code></pre>
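A small robustness tweak worth considering (assumption: push.py should run under the same interpreter as build.py): cron jobs often run with a minimal `PATH`, so `sys.executable` is safer than the bare string `'python'`. A runnable sketch, with a `-c` one-liner standing in for the real push.py:

```python
import subprocess
import sys

# sys.executable is the absolute path of the interpreter running this script,
# so the child uses the same Python regardless of cron's PATH.
proc = subprocess.Popen([sys.executable, "-c", "print('push.py stand-in')"])
proc.wait()                 # the real build.py would just exit here instead
print(proc.returncode)      # 0
```

Since `Popen` returns immediately, build.py can exit right after launching and push.py keeps running on its own.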
| 1 | 2016-07-21T19:59:07Z | [
"python",
"multiprocessing"
] |
Can AxesGrid be used to plot two imshows (overlay) with two separate colorbars? | 38,512,870 | <p>I am using AxesGrid in matplotlib to create a plot that overlays two imshow plots, with a separate colourbar side by side for each image colourmap. While I can see in <a href="http://stackoverflow.com/questions/22128166/two-different-color-colormaps-in-the-same-imshow-matplotlib">this question/answer</a> that using pyplot.colorbar() automatically plots the second colour bar next to the first, this doesn't seem to work with AxesGrid.</p>
<pre><code> import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure(1, facecolor='white',figsize=(10,7.5))
grid = AxesGrid(fig, 111,
nrows_ncols=(1, 1),
axes_pad=(0.45, 0.15),
label_mode="1",
share_all=True,
cbar_location="right",
cbar_mode="each",
cbar_size="7%",
cbar_pad="2%",
)
im = grid[0].imshow(my_image, my_cmap)
cbar = grid.cbar_axes[0].colorbar(im)
im2 = grid[0].imshow(my_image_overlay, my_cmap2, alpha=0.5)
cbar2 = grid.cbar_axes[0].colorbar(im2)
plt.show()
</code></pre>
<p>However, this just shows the second colourbar. (Presumably overlaying the first one). I tried overriding the padding in cbar2 with:</p>
<pre><code> cbar2 = grid.cbar_axes[0].colorbar(im2, pad=0.5)
</code></pre>
<p>but this results in an error with "got an unexpected keyword argument 'pad'"</p>
<p>Is there a way to offset the second colourbar?</p>
| 0 | 2016-07-21T19:32:45Z | 38,518,065 | <p>I think you may need to make two axes for the colorbars and use them:</p>
<pre><code>from mpl_toolkits.axes_grid1 import make_axes_locatable
# create an axes on the right side of ax. The width of cax will be 5%
# of ax and the padding between cax and ax will be fixed at 0.05 inch.
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
</code></pre>
<p>'cause you're currently using the same allocated space for both.</p>
| 0 | 2016-07-22T04:17:01Z | [
"python",
"matplotlib",
"colorbar"
] |
I have text file in which each row with single word. convert it into line using python | 38,512,906 | <p>I have a text file with a single word in each row.</p>
<p>I want to convert it into a single list using Python.</p>
<p>Input:
<code>file.txt</code></p>
<pre><code>word1 in row1
word2 in row2
word3 in row3
word4 in row4
</code></pre>
<p>Expected Output:
<code>['word1.Z', 'word2.Z', 'word3.Z', 'word4.Z']</code></p>
| -1 | 2016-07-21T19:34:58Z | 38,513,006 | <p>This should get you started:</p>
<pre><code>filename = 'test.txt'
words = open(filename, 'r').readlines()
</code></pre>
<p><code>readlines()</code> creates a list where each element is one line of the file, including its trailing newline, so your <code>words</code> list will look like <code>['word1\n', 'word2\n', 'word3\n', 'word4\n']</code>; call <code>.strip()</code> on each element to get the bare words.</p>
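A runnable sketch of the full round trip the question asks for, building the sample input file first so the example is self-contained:

```python
with open("file.txt", "w") as f:              # create the sample input
    f.write("word1\nword2\nword3\nword4\n")

with open("file.txt") as f:
    words = [line.strip() + ".Z" for line in f]

print(words)   # ['word1.Z', 'word2.Z', 'word3.Z', 'word4.Z']
```

Iterating over the file object directly avoids holding every raw line in memory the way `readlines()` does.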
| 0 | 2016-07-21T19:40:13Z | [
"python"
] |
I have text file in which each row with single word. convert it into line using python | 38,512,906 | <p>I have a text file with a single word in each row.</p>
<p>I want to convert it into a single list using Python.</p>
<p>Input:
<code>file.txt</code></p>
<pre><code>word1 in row1
word2 in row2
word3 in row3
word4 in row4
</code></pre>
<p>Expected Output:
<code>['word1.Z', 'word2.Z', 'word3.Z', 'word4.Z']</code></p>
| -1 | 2016-07-21T19:34:58Z | 38,513,029 | <pre><code>['%s.Z'%s for s in open('myfile.txt').read().split()]
</code></pre>
<p>or, to avoid reading the whole file into memory at once if <code>myfile.txt</code> is big, iterate line by line:</p>
<pre><code>['%s.Z' % s for line in open('myfile.txt') for s in line.split()]
</code></pre>
| 3 | 2016-07-21T19:41:53Z | [
"python"
] |
I have text file in which each row with single word. convert it into line using python | 38,512,906 | <p>I have a text file with a single word in each row.</p>
<p>I want to convert it into a single list using Python.</p>
<p>Input:
<code>file.txt</code></p>
<pre><code>word1 in row1
word2 in row2
word3 in row3
word4 in row4
</code></pre>
<p>Expected Output:
<code>['word1.Z', 'word2.Z', 'word3.Z', 'word4.Z']</code></p>
| -1 | 2016-07-21T19:34:58Z | 38,513,073 | <p>You can use this piece of code:</p>
<p>It will read each line in <code>test.txt</code> and save it in a list called 'words'</p>
<p>Then it will loop through each word, remove <code>"\n"</code> and add <code>".Z"</code> to them.</p>
<pre><code>filename = 'test.txt'
words = open(filename, 'r').readlines()
new_words = []
for word in words:
new_word = word.strip("\n") + ".Z"
new_words.append(new_word)
new_words
</code></pre>
<p>Output:</p>
<pre><code>['test.Z','this.Z','text.Z','is.Z','written.Z','in.Z','many.Z','lines.Z']
</code></pre>
| 0 | 2016-07-21T19:44:08Z | [
"python"
] |
timing issue between data sending and receiving code | 38,512,996 | <p>I'm pretty new to Python and am trying to do something a bit tricky. So essentially I'm trying to send data over light; just a small amount of text. So I have a line of code to encode the ASCII to binary, and that works fine and leaves me with a list with the binary as strings like </p>
<pre><code>n=['0','b','1','0','1']
</code></pre>
<p>and so on. I have one raspberry <code>pi</code> set up to send with the code below, and one to receive with the code further down. It all seems to work but I think the timing is off between the two and the list at the received end is shifted and sometimes has random 0's where there shouldn't be. (I think it's reading faster than sending). Is there any way to fix this that you can easily see? The for loops both start at the same time via pushbutton.</p>
<p>Sending:</p>
<pre><code>for x in range(2, 130):
    if myList[x] != '1':
        led.off()
        sleep(.5)
        led.off()
    elif myList[x] == '1':
        led.on()
        sleep(.5)
        led.off()
</code></pre>
<p>Receiving:</p>
<pre><code>for x in range(2, 130):
    if gpio.input(14) == True:
        myList[x] = '1'
        sleep(.5)
    elif gpio.input(14) == False:
        myList[x] = '0'
        sleep(.5)
</code></pre>
<p>The <code>gpio.input(14)</code> is connected to a <code>photodiode</code> that is receiving the signals from an led. I'm assuming the code for receiving runs faster than the sending code and why timing is off but I don't know how to fix it.</p>
| 1 | 2016-07-21T19:39:50Z | 38,546,884 | <p>I imagine the problem is that writing to a list may take slightly more or less time to complete as an action than setting a GPIO pin. So each loop cycle it gets more and more out of sync until it's not reading the right things. What I would do is add another LED and another photodiode, on the opposite raspberry pi than the first ones are on, to output a "received" signal. So your code would look something like this:</p>
<p>Sending:</p>
<pre><code>for x in range(2, 130):
    if myList[x] != '1':
        led.off()
        sleep(.5)
        led.off()
    elif myList[x] == '1':
        led.on()
        sleep(.5)
        led.off()
    while GPIO.input(receivepin) == False:  # wait until the 'receive' pin reads a value
        sleep(0.1)
    # here you might want to also add another
    # pulse to let the receiver know that another piece of data is about to be sent
</code></pre>
<p>Receiving:</p>
<pre><code>for x in range(2, 130):
    if gpio.input(14) == True:
        myList[x] = '1'
    elif gpio.input(14) == False:
        myList[x] = '0'
    led.on()
    # wait until the sender puts out a pulse to signal the next piece of data
</code></pre>
<p>This may not be the best way to do it, but you get the idea. Basically you want to eliminate the time delays and replace them with bits of code that wait until a certain parameter is met.</p>
| 0 | 2016-07-23T22:15:52Z | [
"python",
"raspberry-pi",
"transmission"
] |
python: Plotting data from multiple netCDF files | 38,513,027 | <p>long story short, I am plotting climate data from a netCDF file. My only problem, I have to plot data from tens of these files, each having over a hundred data points. Luckily, they are all identically formatted, and their names are in rising order (for example: file1.nc, file2.nc...). This is my code (unfinished as I have to change the markers and colors of the markers):</p>
<p>Anyways, I want to plot more than that one file (about 20 to begin with). Is there a way to do that? Also, if you guys have an idea how to set up the colorbar based on the variable 'data', that would be great.</p>
<p><strong>Thanks!</strong></p>
| 0 | 2016-07-21T19:41:41Z | 38,513,534 | <p>Make an empty list:</p>
<pre><code>data =[]
</code></pre>
<p>Make a list of filenames : </p>
<pre><code>flist=["001.dat","002.dat",...]
</code></pre>
<p>then iterate through that list: </p>
<pre><code>for fn in flist:
data.append( netcdf_file(fn,'r'))
</code></pre>
<p>Now you can refer to your data sets like:</p>
<pre><code>data[0]
data[1]
</code></pre>
<p>etc.</p>
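<p>If the filenames really do follow a rising numeric pattern, the list doesn't need to be typed out by hand. A small sketch (the zero-padded <code>"%03d.dat"</code> pattern is an assumption; adjust it to your actual names, e.g. <code>"file%d.nc"</code>):</p>

```python
# Build the filename list programmatically. The "%03d.dat" pattern is
# illustrative -- substitute whatever your files are actually called.
flist = ["%03d.dat" % i for i in range(1, 21)]   # 001.dat ... 020.dat
```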
| 0 | 2016-07-21T20:14:09Z | [
"python",
"matplotlib",
"plot",
"matplotlib-basemap",
"colorbar"
] |
python: Plotting data from multiple netCDF files | 38,513,027 | <p>long story short, I am plotting climate data from a netCDF file. My only problem, I have to plot data from tens of these files, each having over a hundred data points. Luckily, they are all identically formatted, and their names are in rising order (for example: file1.nc, file2.nc...). This is my code (unfinished as I have to change the markers and colors of the markers):</p>
<p>Anyways, I want to plot more than that one file (about 20 to begin with). Is there a way to do that? Also, if you guys have an idea how to set up the colorbar based on the variable 'data', that would be great.</p>
<p><strong>Thanks!</strong></p>
| 0 | 2016-07-21T19:41:41Z | 38,517,404 | <p>at the least, <code>plt.savefig("some unique name")</code> means you can generate them in a loop without having to save the plots/close them individually.</p>
<p>I also suggest getting comfortable with the <a href="http://matplotlib.org/faq/usage_faq.html#figure" rel="nofollow">object oriented interface</a>:</p>
<pre><code>fig = plt.figure()
ax = fig.add_subplot(1,1,1)
map = Basemap(projection='robin',lon_0=0,resolution='l',ax=ax)
#keep all your code
cs = map.scatter(x,y,data)
fig.savefig("{}".format(some unique identifier))
</code></pre>
<p>Eta: And you can find all the files using glob if they're in the same folder:</p>
<pre><code>import glob
filelist = glob.glob('/Users/epsuser/Dropbox/Argo/Data/*.nc')
for fl in filelist:
    ncfile = netcdf_file(fl, 'r')
#the rest of your reading code
fig = plt.figure()
#etc...
</code></pre>
| 0 | 2016-07-22T02:53:08Z | [
"python",
"matplotlib",
"plot",
"matplotlib-basemap",
"colorbar"
] |
direct access to msgpack'd element N from python | 38,513,105 | <p>I have a block of msgpack'd data created as shown below:</p>
<pre><code>#!/usr/bin/env python
from io import BytesIO
import msgpack
packer = msgpack.Packer()
buf = BytesIO()
buf.write(packer.pack("foo"))
buf.write(packer.pack("bar"))
buf.write(packer.pack("baz"))
</code></pre>
<p>Later in my app (or a different app) I need to unpack the first two elements but want access to the third element STILL packed. The only way I have found to do that so far is to repack this third element as shown below, which is rather inefficient.</p>
<pre><code>buf.seek(0)
unpacker = msgpack.Unpacker(buf)
item1 = unpacker.unpack()
item2 = unpacker.unpack()
item3 = unpacker.unpack()
packed_item3 = msgpack.pack(item3)
</code></pre>
<p>This gets me where I want, but I would prefer to access this last item directly so I can pass it on to where it needs to go already packed.</p>
| 0 | 2016-07-21T19:45:23Z | 38,696,555 | <p>Since your packs will not be of constant size after msgpacking, you can use an identifiable set of bytes as a separator between your packs.
When you need direct access to the Nth pack, still in its packed state, you iterate over your byte array; the Nth pack will lie after the (N-1)th separator.
This has O(n) complexity, though, since it needs iteration over the whole bytearray up to your required pack.
A string example with "####" as the separator would look like:</p>
<pre><code>"pack1####pack2####pack3####pack4####pack5...."
</code></pre>
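<p>A minimal sketch of that lookup on plain bytes (no msgpack needed to see the idea). One caveat worth stating: msgpack output is arbitrary binary, so a textual separator like <code>####</code> can collide with real packed data; length-prefix framing is the collision-proof alternative.</p>

```python
SEP = b"####"  # illustrative separator; it must never occur inside a pack

def nth_pack(stream, n):
    # O(n): we scan past the first n separators to reach pack number n.
    return stream.split(SEP)[n]

stream = SEP.join([b"pack1", b"pack2", b"pack3"])
```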
| 0 | 2016-08-01T10:29:28Z | [
"python",
"msgpack"
] |
direct access to msgpack'd element N from python | 38,513,105 | <p>I have a block of msgpack'd data created as shown below:</p>
<pre><code>#!/usr/bin/env python
from io import BytesIO
import msgpack
packer = msgpack.Packer()
buf = BytesIO()
buf.write(packer.pack("foo"))
buf.write(packer.pack("bar"))
buf.write(packer.pack("baz"))
</code></pre>
<p>Later in my app (or a different app) I need to unpack the first two elements but want access to the third element STILL packed. The only way I have found to do that so far is to repack this third element as shown below, which is rather inefficient.</p>
<pre><code>buf.seek(0)
unpacker = msgpack.Unpacker(buf)
item1 = unpacker.unpack()
item2 = unpacker.unpack()
item3 = unpacker.unpack()
packed_item3 = msgpack.pack(item3)
</code></pre>
<p>This gets me where I want, but I would prefer to access this last item directly so I can pass it on to where it needs to go already packed.</p>
| 0 | 2016-07-21T19:45:23Z | 38,726,237 | <p>See <a href="http://pythonhosted.org/msgpack-python/api.html#msgpack.Unpacker.skip" rel="nofollow">http://pythonhosted.org/msgpack-python/api.html#msgpack.Unpacker.skip</a></p>
<pre class="lang-py prettyprint-override"><code>packed_item3 = None
def callback(b):
global packed_item3
packed_item3 = b
unpacker.skip(write_bytes=callback)
</code></pre>
<p>But the <code>write_bytes</code> option will be deprecated, and other msgpack implementations don't have such an API.</p>
<p>More common way is double-packing.</p>
<pre><code>buf.write(msgpack.packb(msgpack.packb(item3)))
</code></pre>
<p>In this way, you can get <code>packed_item3</code> without an unpack-and-repack round trip,
and this approach can be used with other msgpack implementations.</p>
<p>For example, fluentd uses such way to achieve high throughput.</p>
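<p>The double-packing trick can be demonstrated with the stdlib <code>json</code> module standing in for msgpack (in case msgpack isn't installed here): encode the item twice, and a single decode of the outer layer hands back the inner payload still encoded, ready to be forwarded as-is.</p>

```python
import json

item3 = {"key": "value"}
# Encode twice at write time...
double_packed = json.dumps(json.dumps(item3))
# ...then one decode later yields the *still encoded* inner payload.
still_packed = json.loads(double_packed)
```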
| 0 | 2016-08-02T16:47:22Z | [
"python",
"msgpack"
] |
How to return a fail value in python | 38,513,225 | <p>I'm looking for a good convention for returning a failure value from a function. I typically like to return None or False, but in my case the function's purpose is to read an IO, which could be bool (True/False), int or float.</p>
<p>So in this case I can't return False. I've tried using None, but I don't know if this is the best choice, because if I don't check the return value when I call the function, a None output might be mistaken for a False output.</p>
<p>I was thinking of having a definitions file with string tokens, but having to parse the strings seems inefficient.</p>
<p>Are there built-in objects available? What's the convention?</p>
| -2 | 2016-07-21T19:53:02Z | 38,513,269 | <p>You should raise an exception if the function fails. It is not good practice to have to check if your return value is invalid per the <a href="https://docs.python.org/3/glossary.html#term-eafp" rel="nofollow">EAFP</a> principle of Python.</p>
<blockquote>
<p>Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.</p>
</blockquote>
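<p>A hedged sketch of what that looks like for the question's IO-reading case (<code>source</code> here is a plain dict standing in for the real IO layer): raise on failure, so falsy-but-valid readings like <code>0.0</code> or <code>False</code> can never be mistaken for an error sentinel.</p>

```python
def read_io(source):
    # Return the reading (bool, int or float); raise instead of
    # returning None/False, which are themselves valid readings.
    try:
        return source["value"]
    except KeyError:
        raise IOError("could not read IO point")

# Caller, EAFP style: just try it and handle the failure explicitly.
try:
    value = read_io({"value": 0.0})   # 0.0 is falsy but perfectly valid
except IOError:
    value = None
```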
| 0 | 2016-07-21T19:56:43Z | [
"python",
"function",
"exception",
"exception-handling"
] |
Can I add permissions to Django media files? | 38,513,295 | <p>I want to build an app and let users see some videos only if they have permissions or have paid for that video. I am using Django and I want to add nginx and gunicorn to serve media files.
I am not sure how, once the user has the URL of the video, I can block them from seeing it if their payment has expired or they don't have the permissions. For now I let Django serve the videos: I override the serve method, and if the user doesn't have access to the video I return 404.</p>
| 0 | 2016-07-21T19:58:23Z | 38,513,586 | <p>You need to implement the so-called 'X-Sendfile feature'. Let's say your paid-for files will be served from location <code>/protected/</code> - you need to add to nginx's config:</p>
<pre><code>location /protected/ {
internal;
root /some/path;
}
</code></pre>
<p>then when you want to serve your user a file named <code>mycoolflix.mp4</code> your app needs to add header <code>X-Accel-Redirect: /protected/mycoolflix.mp4</code> and the file <code>/some/path/protected/mycoolflix.mp4</code> will be served to the user. More information in the nginx documentation <a href="https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/" rel="nofollow">here</a> and <a href="https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/" rel="nofollow">here</a>.
Serving files from your views is not a good idea - it makes one of your Django processes busy until the download is complete, preventing it from serving other requests.</p>
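<p>The view side can stay tiny. Here is a framework-agnostic sketch of the logic (plain dicts stand in for Django's request/response objects; the permission check and paths are placeholders):</p>

```python
def serve_video(user_has_paid, filename):
    # Deny outright if the payment/permission check fails.
    if not user_has_paid:
        return {"status": 403, "headers": {}, "body": "payment required"}
    # Otherwise hand the transfer off to nginx: it sees this header,
    # serves /some/path/protected/<filename> itself, and the Python
    # worker is freed immediately.
    return {"status": 200,
            "headers": {"X-Accel-Redirect": "/protected/" + filename},
            "body": ""}
```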
| 0 | 2016-07-21T20:17:40Z | [
"python",
"django",
"nginx",
"gunicorn"
] |
Getting CERTIFICATE_VERIFY_FAILED. How to pass pem file to tinys3 ? | 38,513,332 | <p>I am trying to upload to AWS S3. My Python program is using version 2.7.12</p>
<pre><code>import tinys3
S3_ACCESS_KEY=''
S3_SECRET_KEY=''
conn = tinys3.Connection(S3_ACCESS_KEY,S3_SECRET_KEY,tls=True)
f = open('D:\\poc\\dicomimage','rb')
conn.upload('D:\\poc\\sampleimage',f,'development/system')
</code></pre>
<p>But I am getting the error below:
<code>requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)</code></p>
<p>I cannot set tls=False.
I tried the things below but got the same error all the time:</p>
<ol>
<li>added cert=cert_path, pip.ini</li>
<li>Also executed - pip uninstall -y certifi && pip install certifi==2015.04.28, as mentioned in various posts.</li>
</ol>
<p>How can I pass my pem file to tinys3, or is there any setting to fix the issue?</p>
<p>PS: I am a full time java developer, fortunately/unfortunately this is my first python program. So, please explain how things are working here.</p>
| 0 | 2016-07-21T20:00:23Z | 38,513,833 | <p>You should specify the endpoint of your bucket</p>
<pre><code>conn = tinys3.Connection(S3_ACCESS_KEY,S3_SECRET_KEY,tls=True,endpoint='s3-us-east-1.amazonaws.com')
</code></pre>
<p>Change your endpoint to match the endpoint of your bucket; check the <a href="http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region" rel="nofollow">AWS list of region endpoints</a>.</p>
| 0 | 2016-07-21T20:31:32Z | [
"python",
"ssl",
"amazon-s3"
] |
Is it possible to see tensorboard over ssh? | 38,513,333 | <p>I am running tensorflow code remotely on an ssh server (e.g., <code>ssh -X account@server.address</code>).</p>
<p>On the remote server, it says <code>You can navigate to http://0.0.0.0:6006</code>.</p>
<p>In this case, how can I check TensorBoard? How can I navigate to the address of a remote machine?
I tried to search, but there seems to be no useful information.</p>
| 0 | 2016-07-21T20:00:27Z | 38,513,424 | <p><code>0.0.0.0</code> is the wildcard address. Thus, you can use <strong>any</strong> address for the purpose unless the system's firewall is implementig something more restrictive.</p>
<p>That said, let's assume that it <em>is</em> implementing firewall-based restrictions (if it weren't, you could just access <a href="http://server.address:6006/" rel="nofollow">http://server.address:6006/</a> -- but so could anyone else). In that case:</p>
<pre><code>ssh -L 16006:127.0.0.1:6006 account@server.address
</code></pre>
<p>...and then refer to <a href="http://127.0.0.1:16006/" rel="nofollow">http://127.0.0.1:16006/</a> in a local browser.</p>
| 3 | 2016-07-21T20:06:33Z | [
"python",
"tensorflow",
"tensorboard"
] |
Pycharm will not recognize opencv3 | 38,513,429 | <p>So, after a painful day, I have finally gotten OpenCV to be recognized in Python 3:</p>
<pre><code>Python 3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.1.0'
</code></pre>
<p>but when I do this in Pycharm the result is:</p>
<pre><code> /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 "/Users/saminahbab/Documents/directory/Image Recognition /pictures/searcher.py"
Traceback (most recent call last):
File "/Users/saminahbab/Documents/directory/Image Recognition /pictures/searcher.py", line 3, in <module>
import cv2
ImportError: No module named 'cv2'
</code></pre>
<p>which could be for a number of reasons, i have tried to do all sorts of symlinks within reason, trying pyenv among others
now I know that these are different python builds but I wouldnt know to unify them so as to get cv2 working on pycharm and also keep all of my other packages that I will be using in conjunction. anyone with any advice?</p>
| 2 | 2016-07-21T20:06:51Z | 39,848,891 | <p>An easy way to get PyCharm working with OpenCV 3 is as follows:</p>
<ol>
<li>Install Anaconda to a directory that doesn't require admin access from <a href="https://www.continuum.io/downloads" rel="nofollow">https://www.continuum.io/downloads</a></li>
<li><p>Create a virtual environment (optional):</p>
<p><code>conda create -n <yourEnvName> python=<yourPython3Version> anaconda</code></p>
<p><code>source activate <yourEnvName></code> (source is not required if you are using the anaconda prompt in Windows)</p></li>
<li><p>Install OpenCV 3: <code>conda install -n <yourEnvName> -c https://conda.anaconda.org/menpo opencv3</code></p></li>
<li><p>Setup PyCharm: Open PyCharm --> File --> Settings --> Project --> Project Interpreter --> Click on the config wheel, select "Add Local".
Add <code><yourAnacondaDir>\envs\<yourEnvName>\python.exe</code> and wait till PyCharm is done indexing</p></li>
</ol>
<p>Finally, create a new python file and check if opencv 3 has been setup properly by typing:</p>
<pre><code>import cv2
print(cv2.__version__)
</code></pre>
| 0 | 2016-10-04T09:35:14Z | [
"python",
"opencv",
"packages"
] |
Inconsistency when trying to ignore SIGINT | 38,513,495 | <p>I am of the understanding that when you set a signal handler, all child processes inherit said handler by default.</p>
<p>Thus, the following code runs as expected:</p>
<pre><code>import subprocess, signal
signal.signal( signal.SIGINT, signal.SIG_IGN ) # use the ignore handler
subprocess.check_call( [ "sleep", "10" ] )
</code></pre>
<p>I.e. regardless of how much I press Ctrl-C, the script doesn't terminate until after the 10 seconds has elapsed.</p>
<p>However, if I switch the call to a <code>git clone xxxxxx</code>, it seems I <strong>am</strong> able to interrupt the script.</p>
<p>I don't understand why there is a difference in behaviour... Any ideas?</p>
<p>Thanks!</p>
| 0 | 2016-07-21T20:11:44Z | 38,515,101 | <blockquote>
<p>I am of the understanding that when you set a signal handler, all child processes inherit said handler by default.</p>
</blockquote>
<p>As phrased, this is not quite right, though your example is fine.</p>
<blockquote>
<pre><code>signal.signal( signal.SIGINT, signal.SIG_IGN ) # use the ignore handler
</code></pre>
</blockquote>
<p>Technically, <code>SIG_IGN</code> is not a handler at all, but rather a special value telling the kernel that the signal should be discarded at the time it is sent. A <em>handler</em> is a user-supplied function; behind the scenes, when you install a handler, the kernel becomes set up to deliver the signal to the user code.<sup>1</sup></p>
<p>The key difference here is that the <em>user code</em> version requires that said user code continue to exist, but when one process runs another, using fork+exec or spawn or whatever (all nicely hidden by the Python <code>subprocess.Popen</code> interface), the new process has thrown away, or even never had, the user code from the original process. This means any user-code-based handler no longer exists in the new process (whether it is <code>sleep</code> or <code>git</code> or anything else), and therefore the signal disposition must be restored to the default.</p>
<p>When using <code>SIG_IGN</code>, however, the disposition is "discard signal immediately", which needs no user-code action. Hence fork+exec or spawn (again hidden behind <code>subprocess.Popen</code>) does not force the signal disposition to be reset.</p>
<p>As <a href="http://stackoverflow.com/questions/38513495/inconsistency-when-trying-to-ignore-sigint#comment64424665_38513495">Barmar commented</a>, however, any process can change the disposition of signals at its own will. Clearly Git is setting its own new disposition for <code>SIGINT</code>.</p>
<p>Programmers <em>should</em>, at least in theory, write this sort of code with a bit of boilerplate. In Python-ese it would translate as:</p>
<pre><code>with signal.hold([signal.SIGINT]):
previous = signal.signal(signal.SIGINT, handler)
if previous == signal.SIG_IGN:
# Parent process told us NOT to catch SIGINT,
# so we should leave it ignored.
signal.signal(signal.SIGINT, signal.SIG_IGN)
</code></pre>
<p>This uses a pair of functions Python fails to expose (probably because not all systems actually implemented it, although it is now standard POSIX), wrapped into a <code>with</code> context manager. If we simply set the signal's disposition first, then check what it <em>was</em> and restore it if needed, we open a race window during which we've <em>changed</em> the disposition. To fix the race, we can use the POSIX <code>sigprocmask</code> function to temporarily <em>block</em> the signal (defer it in the kernel for some period) while we fiddle around with the disposition. Once we're sure we have the correct disposition, we unblock the signal. If any were delivered during that period, they get disposed-of at the unblock point.</p>
<p>None of this helps much since it requires a fix to be made to the other program(s) (to check the initial disposition of signals when they install their own handlers). However, there <em>are</em> several ways to work around it. The simplest is to use the signal blocking technique, because the blocking mask is <em>also</em> inherited, along with any "ignore" dispositionâand most programs don't bother fussing with the blocking mask, or if they do, use a correct bit of boilerplate (the one we've hidden here behind the non-existent Python <code>with signal.hold(...)</code> trick):</p>
<ul>
<li>call <code>sigprocmask</code> to block the signal while retrieving the current mask at entry</li>
<li>call <code>sigprocmask</code> to <em>restore the saved mask</em> (not just explicitly unblock) at exit.</li>
</ul>
<p>Unfortunately, this requires calling the POSIX <code>sigprocmask</code> function, which is not exposed even in Python 3.4. Python 3.4 <em>does</em> expose <code>pthread_sigmask</code> and that may (depending on your kernel) suffice. It's not clear whether it's worth coding up, though.</p>
<p>Another (even more complex) method of dealing with this is to make your Python program do POSIX-style job control, like most shells. It can then decide which process group should receive tty-generated signals such as <code>SIGINT</code>.</p>
<hr>
<p><sup>1</sup>Technically, the kernel to user signal delivery goes through something called a <em>trampoline</em>. There are several different traditional mechanisms for this, and it is a fairly big ball of hair to make sure it all works correctly.</p>
| 2 | 2016-07-21T22:03:53Z | [
"python",
"git",
"subprocess",
"signals",
"sigint"
] |
No improvements using multiprocessing | 38,513,640 | <p>I tested the performance of <code>map</code>, <code>mp.dummy.Pool.map</code> and <code>mp.Pool.map</code></p>
<pre><code>import itertools
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
# wrapper function
def wrap(args): return args[0](*args[1:])
# make data arrays
x = np.random.rand(30, 100000)
y = np.random.rand(30, 100000)
# map
%timeit -n10 map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
# mp.dummy.Pool.map
for i in range(2, 16, 2):
print 'Thread Pool ', i, ' : ',
t = ThreadPool(i)
%timeit -n10 t.map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
t.close()
t.join()
# mp.Pool.map
for i in range(2, 16, 2):
print 'Process Pool ', i, ' : ',
    p = Pool(i)
%timeit -n10 p.map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
p.close()
p.join()
</code></pre>
<p>The outputs</p>
<pre><code> # in this case, one CPU core usage reaches 100%
10 loops, best of 3: 3.16 ms per loop
# in this case, all CPU core usages reach ~80%
Thread Pool 2 : 10 loops, best of 3: 4.03 ms per loop
Thread Pool 4 : 10 loops, best of 3: 3.3 ms per loop
Thread Pool 6 : 10 loops, best of 3: 3.16 ms per loop
Thread Pool 8 : 10 loops, best of 3: 4.48 ms per loop
Thread Pool 10 : 10 loops, best of 3: 4.19 ms per loop
Thread Pool 12 : 10 loops, best of 3: 4.03 ms per loop
Thread Pool 14 : 10 loops, best of 3: 4.61 ms per loop
# in this case, all CPU core usages reach 80-100%
Process Pool 2 : 10 loops, best of 3: 71.7 ms per loop
Process Pool 4 : 10 loops, best of 3: 128 ms per loop
Process Pool 6 : 10 loops, best of 3: 165 ms per loop
Process Pool 8 : 10 loops, best of 3: 145 ms per loop
Process Pool 10 : 10 loops, best of 3: 259 ms per loop
Process Pool 12 : 10 loops, best of 3: 176 ms per loop
Process Pool 14 : 10 loops, best of 3: 176 ms per loop
</code></pre>
<ul>
<li><p>Multi-threading doesn't increase speed. That's acceptable, due to the Lock (the GIL).</p></li>
<li><p>Multi-processing slows things down a lot, which is surprising. I have eight 3.78 GHz CPUs, each with 4 cores.</p></li>
</ul>
<p>If I increase the shape of <code>x</code> and <code>y</code> to <code>(300, 10000)</code>, i.e. 10 times larger, similar results can be seen.</p>
<p>But for small arrays such as <code>(20, 1000)</code>,</p>
<pre><code> 10 loops, best of 3: 28.9 µs per loop
Thread Pool 2 : 10 loops, best of 3: 429 µs per loop
Thread Pool 4 : 10 loops, best of 3: 632 µs per loop
...
Process Pool 2 : 10 loops, best of 3: 525 µs per loop
Process Pool 4 : 10 loops, best of 3: 660 µs per loop
...
</code></pre>
<ul>
<li>multi-processing and multi-threading have similar performance.</li>
<li>the single process is much faster. (due to overheads of multi-processing and multi-threading?)</li>
</ul>
<h2>Anyhow, even when executing such a simple function, it's really unexpected that multiprocessing performs so badly. How can that happen?</h2>
<p>As suggested by @TrevorMerrifield, I modified the code to avoid passing big arrays to <code>wrap</code>.</p>
<pre><code>from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
n = 30
m = 1000
# make data in wrap
def wrap(i):
x = np.random.rand(m)
y = np.random.rand(m)
return np.correlate(x, y)
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
print 'Thread Pool %2d : '%(4),
t = ThreadPool(4)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(4),
p = Pool(4)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()
</code></pre>
<p>outputs</p>
<pre><code>Single process :10 loops, best of 3: 688 µs per loop
---
Thread Pool 4 : 10 loops, best of 3: 1.67 ms per loop
---
Process Pool 4 : 10 loops, best of 3: 854 µs per loop
</code></pre>
<ul>
<li>No improvements.</li>
</ul>
<p>I tried another way, passing an index to <code>wrap</code> to get data from the global arrays <code>x</code> and <code>y</code>.</p>
<pre><code>from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
# make data arrays
n = 30
m = 10000
x = np.random.rand(n, m)
y = np.random.rand(n, m)
def wrap(i): return np.correlate(x[i], y[i])
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
print 'Thread Pool %2d : '%(4),
t = ThreadPool(4)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(4),
p = Pool(4)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()
</code></pre>
<p>outputs</p>
<pre><code>Single process :10 loops, best of 3: 133 µs per loop
---
Thread Pool 4 : 10 loops, best of 3: 2.23 ms per loop
---
Process Pool 4 : 10 loops, best of 3: 10.4 ms per loop
</code></pre>
<ul>
<li>That's bad.....</li>
</ul>
<p>I tried another simple example (different <code>wrap</code>).</p>
<pre><code>from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
# make data arrays
n = 30
m = 10000
# No big arrays passed to wrap
def wrap(i): return sum(range(i, i+m))
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
i = 4
print 'Thread Pool %2d : '%(i),
t = ThreadPool(i)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(i),
p = Pool(i)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()
</code></pre>
<p>The timings:</p>
<pre><code> 10 loops, best of 3: 4.28 ms per loop
---
Thread Pool 4 : 10 loops, best of 3: 5.8 ms per loop
---
Process Pool 4 : 10 loops, best of 3: 2.06 ms per loop
</code></pre>
<ul>
<li>Now <code>multiprocessing</code> is faster.</li>
</ul>
<p>But if <code>m</code> is made 10 times larger (i.e. <code>100000</code>),</p>
<pre><code> Single process :10 loops, best of 3: 48.2 ms per loop
---
Thread Pool 4 : 10 loops, best of 3: 61.4 ms per loop
---
Process Pool 4 : 10 loops, best of 3: 43.3 ms per loop
</code></pre>
<ul>
<li>Again, no improvements.</li>
</ul>
| 0 | 2016-07-21T20:20:21Z | 38,515,922 | <p>You are mapping <code>wrap</code> to <code>(a, b, c)</code>, where <code>a</code> is a function and <code>b</code> and <code>c</code> are 100K element vectors. All of this data is pickled when it is sent to the chosen process in the pool, then unpickled when it reaches it. This is to ensure that processes have mutually exclusive access to data.</p>
<p>Your problem is that the pickling process is more expensive than the correlation. As a guideline you want to minimize the amount of information that is sent between processes, and maximize the amount of work each process does, while still being spread across the number of cores on the system.</p>
<p>How to do that depends on the actual problem you're trying to solve. By tweaking your toy example so that your vectors were a bit bigger (1 million elements) and randomly generated in the <code>wrap</code> function, I could get a 2X speedup over single core, by using a process pool with 4 elements. The code looks like this: </p>
<pre><code>def wrap(a):
x = np.random.rand(1000000)
y = np.random.rand(1000000)
return np.correlate(x, y)
p = Pool(4)
p.map(wrap, range(30))
</code></pre>
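<p>The cost being described is easy to see with the stdlib <code>pickle</code> module alone (a rough illustration, not a benchmark): everything handed to <code>Pool.map</code> is serialised into a pipe and deserialised on the other side, so the IPC payload, and the time spent on it, grows with the size of the arguments.</p>

```python
import pickle

# What the fast version ships per task: one small integer.
small_payload = pickle.dumps(7)
# What the slow version ships per task: a 100k-element argument.
big_payload = pickle.dumps(list(range(100000)))
```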
| 2 | 2016-07-21T23:24:52Z | [
"python",
"multithreading",
"multiprocessing",
"threadpool",
"python-multiprocessing"
] |
What is an efficient way to return previous & next values of a looping list of objects in python? | 38,513,644 | <p>I have a list of python objects:</p>
<pre><code>fruits = [ 'apple', 'orange', 'banana', 'grape', 'cherry' ]
</code></pre>
<p>I currently have a <code>for</code> loop in a class method that returns "prev_fruit" and "next_fruit" objects for a given object:</p>
<pre><code>def get_prev_next(self, fruit_list):
prev_fruit = next_fruit = None
fruit_list_length = len(fruit_list)
for idx, fruit in enumerate(fruit_list):
if fruit == self:
if idx > 0:
prev_fruit = fruit_list[idx-1]
if idx < (fruit_list_length-1):
next_fruit = fruit_list[idx+1]
return prev_fruit, next_fruit
</code></pre>
<p>This works although there are probably more efficient ways of doing it (which I'm happy to learn about).</p>
<p>I now want the list to optionally be "looping" (previous for first index is last and next index for last is first).</p>
<pre><code>def get_prev_next(self, fruit_list, looping=False):
...
</code></pre>
<p>What is an efficient way to do this on lists of objects with 1-10000 values?</p>
<blockquote>
<p>"efficient" is necessarily not "most efficient" as code legibility and portability is a factor - I don't want to relearn the approach six months from now</p>
</blockquote>
| 0 | 2016-07-21T20:20:45Z | 38,513,729 | <p>If the input is a list, then you're kind of stuck searching the list for the index of the fruit that you have and then getting the previous and next ones from that ...</p>
<pre><code>def forgiving_getitem(lst, ix, default=None):
if 0 <= ix < len(lst):
return lst[ix]
else:
return default
ix = fruit_list.index(self)
previous = forgiving_getitem(fruit_list, ix - 1)
next = forgiving_getitem(fruit_list, ix + 1)
</code></pre>
<p>Unfortunately, this has O(N) runtime complexity. If you're going to do this a lot with the same fruit_list, it might be better to come up with a different datastructure (e.g. a doubly linked list) where each node carries around the fruit and the previous and next fruits. In this case, getting the previous and next is O(1) and you only need to spend O(N) time constructing the nodes in the first place, however, there could be significant other code refactoring involved to work with nodes instead of fruits directly...</p>
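<p>For the looping variant the question asks about, modular arithmetic keeps this just as short. A sketch, with the non-looping branch falling back to the same bounds-checked behaviour as above:</p>

```python
def get_prev_next(lst, item, looping=False):
    ix = lst.index(item)   # O(N); raises ValueError if item is absent
    n = len(lst)
    if looping:
        # The % makes both neighbours wrap around the ends of the list.
        return lst[(ix - 1) % n], lst[(ix + 1) % n]
    prev_item = lst[ix - 1] if ix > 0 else None
    next_item = lst[ix + 1] if ix < n - 1 else None
    return prev_item, next_item

fruits = ['apple', 'orange', 'banana', 'grape', 'cherry']
```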
| 0 | 2016-07-21T20:25:17Z | [
"python",
"loops"
] |
Translate integers in a numpy array to a contiguous range 0...n | 38,513,646 | <p>I would like to translate arbitrary integers in a numpy array to a contiguous range 0...n, like this:</p>
<pre><code>source: [2 3 4 5 4 3]
translating [2 3 4 5] -> [0 1 2 3]
target: [0 1 2 3 2 1]
</code></pre>
<p>There must be a better way to than the following:</p>
<pre><code>import numpy as np
"translate arbitrary integers in the source array to contiguous range 0...n"
def translate_ids(source, source_ids, target_ids):
target = source.copy()
for i in range(len(source_ids)):
x = source_ids[i]
x_i = source == x
target[x_i] = target_ids[i]
return target
#
source = np.array([ 2, 3, 4, 5, 4, 3 ])
source_ids = np.unique(source)
target_ids = np.arange(len(source_ids))
target = translate_ids(source, source_ids, target_ids)
print "source:", source
print "translating", source_ids, '->', target_ids
print "target:", target
</code></pre>
<p>What is it?</p>
| 0 | 2016-07-21T20:20:57Z | 38,513,770 | <p>IIUC you can simply use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a>'s optional argument <code>return_inverse</code>, like so -</p>
<pre><code>np.unique(source,return_inverse=True)[1]
</code></pre>
<p>Sample run -</p>
<pre><code>In [44]: source
Out[44]: array([2, 3, 4, 5, 4, 3])
In [45]: np.unique(source,return_inverse=True)[1]
Out[45]: array([0, 1, 2, 3, 2, 1])
</code></pre>
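<p>For readers without NumPy at hand, the same relabelling can be spelled out in pure Python, which also makes explicit what <code>return_inverse</code> is computing: sorted unique values get ids <code>0..n-1</code>, and every source element is replaced by its id.</p>

```python
source = [2, 3, 4, 5, 4, 3]
# Map each distinct value to its rank among the sorted unique values...
ids = {v: i for i, v in enumerate(sorted(set(source)))}
# ...then translate the whole array through that mapping.
target = [ids[v] for v in source]
```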
| 3 | 2016-07-21T20:27:37Z | [
"python",
"arrays",
"numpy"
] |
Multi-indexing - accessing the last time in every day | 38,513,649 | <p>New to multiindexing in Pandas. I have data that looks like this</p>
<pre><code>Date Time value
2014-01-14 12:00:04 .424
12:01:12 .342
12:01:19 .341
...
12:05:49 .23
2014-05-12 ...
1:02:42 .23
....
</code></pre>
<p>For now, I want to access the last time for every single date and store the value in some array. I've made a multiindex like this</p>
<pre><code>df= pd.read_csv("df.csv",index_col=0)
df.index = pd.to_datetime(df.index,infer_datetime_format=True)
df.index = pd.MultiIndex.from_arrays([df.index.date,df.index.time],names=['Date','Time'])
df= df[~df.index.duplicated(keep='first')]
dates = df.index.get_level_values(0)
</code></pre>
<p>So I have dates saved as an array. I want to iterate through the dates but can't either get the syntax right or am accessing the values incorrectly. I've tried a for loop but can't get it to run (<code>for date in dates</code>) and can't do direct access either (<code>df.loc[dates[i]]</code> or something like that). Also the number of time variables in each date varies. Is there any way to fix this?</p>
| 4 | 2016-07-21T20:21:18Z | 38,513,946 | <p>This sounds like a <code>groupby/max</code> operation. More specifically, you want to group by the <code>Date</code> and aggregate the <code>Time</code>s by taking the <code>max</code>. Since aggregation can only be done over <em>column</em> values, we'll need to change the <code>Time</code> index level into a column (by using <code>reset_index</code>):</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Date': ['2014-01-14', '2014-01-14', '2014-01-14', '2014-01-14', '2014-05-12', '2014-05-12'], 'Time': ['12:00:04', '12:01:12', '12:01:19', '12:05:49', '01:01:59', '01:02:42'], 'value': [0.42399999999999999, 0.34200000000000003, 0.34100000000000003, 0.23000000000000001, 0.0, 0.23000000000000001]})
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index(['Date', 'Time'])
df = df.reset_index('Time', drop=False)
max_times = df.groupby(level=0)['Time'].max()
print(max_times)
</code></pre>
<p>yields</p>
<pre><code>Date
2014-01-14 12:05:49
2014-05-12 1:02:42
Name: Time, dtype: object
</code></pre>
<hr>
<p>If you wish <strong>to select the entire row</strong>, then you could use <code>idxmax</code> -- but there is a caveat. <code>idxmax</code> returns index labels. Therefore, the index must be <em>unique</em> for the labels to signify unique rows. Since the <code>Date</code> level is not by itself unique, to use <code>idxmax</code> we'll need to <code>reset_index</code> completely (to make an index of unique integers):</p>
<pre><code>df = pd.DataFrame({'Date': ['2014-01-14', '2014-01-14', '2014-01-14', '2014-01-14', '2014-05-12', '2014-05-12'], 'Time': ['12:00:04', '12:01:12', '12:01:19', '12:05:49', '01:01:59', '1:02:42'], 'value': [0.42399999999999999, 0.34200000000000003, 0.34100000000000003, 0.23000000000000001, 0.0, 0.23000000000000001]})
df['Date'] = pd.to_datetime(df['Date'])
df['Time'] = pd.to_timedelta(df['Time'])
df = df.set_index(['Date', 'Time'])
df = df.reset_index()
idx = df.groupby(['Date'])['Time'].idxmax()
print(df.loc[idx])
</code></pre>
<p>yields</p>
<pre><code> Date Time value
3 2014-01-14 12:05:49 0.23
5 2014-05-12 01:02:42 0.23
</code></pre>
<hr>
<p>I don't see a good way to do this while keeping the MultiIndex.
It is easier to perform the <code>groupby</code> operation before setting the MultiIndex.
Moreover, it is probably preferable to <strong>preserve the datetimes as one value</strong> instead of splitting it into two parts. Note that given a datetime/period-like Series, the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#dt-accessor" rel="nofollow"><code>.dt</code> accessor</a> gives you easy access to the <code>date</code> and the <code>time</code> as needed. Thus you can group by the <code>Date</code> without making a <code>Date</code> column:</p>
<pre><code>df = pd.DataFrame({'DateTime': ['2014-01-14 12:00:04', '2014-01-14 12:01:12', '2014-01-14 12:01:19', '2014-01-14 12:05:49', '2014-05-12 01:01:59', '2014-05-12 01:02:42'], 'value': [0.42399999999999999, 0.34200000000000003, 0.34100000000000003, 0.23000000000000001, 0.0, 0.23000000000000001]})
df['DateTime'] = pd.to_datetime(df['DateTime'])
# df = pd.read_csv('df.csv', parse_dates=[0])
idx = df.groupby(df['DateTime'].dt.date)['DateTime'].idxmax()
result = df.loc[idx]
print(result)
</code></pre>
<p>yields</p>
<pre><code> DateTime value
3 2014-01-14 12:05:49 0.23
5 2014-05-12 01:02:42 0.23
</code></pre>
| 3 | 2016-07-21T20:38:03Z | [
"python",
"datetime",
"pandas",
"indexing"
] |
Jupyter reveal based slideshow | 38,513,660 | <p>I want to create a presentation such as <a href="http://www.slideviper.oquanta.info/tutorial/slideshow_tutorial_slides.html#/" rel="nofollow">this one</a>, starting from a simple Jupyter notebook.
What I have looks like:</p>
<p><a href="http://i.stack.imgur.com/Gy7WW.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gy7WW.png" alt="enter image description here"></a></p>
<p>Given that, I'd expect to get a slideshow with two slides corresponding to the two cells.
Then from the command line I execute:</p>
<pre><code>jupyter nbconvert --to slides mynotebook.ipynb --post serve
</code></pre>
<p>What I get is a static html page that seems to group both of my cells together.</p>
<p><a href="http://i.stack.imgur.com/mAVMU.png" rel="nofollow"><img src="http://i.stack.imgur.com/mAVMU.png" alt="enter image description here"></a></p>
<p>How do I get the type of one slide per cell effect of the linked presentation?</p>
| 0 | 2016-07-21T20:21:54Z | 38,664,675 | <p>You may need to include a reference to the reveal.js package in your nbconvert command:</p>
<pre><code>jupyter nbconvert --to slides --reveal-prefix="http://cdn.jsdelivr.net/reveal.js/2.5.0 mynotebook.ipynb --post serve
</code></pre>
<p>Or, for instructions on using a local version of reveal.js, see <a href="http://www.damian.oquanta.info/posts/using-a-local-revealjs-library-with-your-ipython-slides.html" rel="nofollow">this blog post by Damian Avila</a></p>
| 0 | 2016-07-29T17:45:30Z | [
"python",
"jupyter-notebook",
"reveal.js"
] |
Get default value if key has falsy value in dict | 38,513,683 | <p>I am working in python, and was using <code>dict</code> in my code.</p>
<p>I have a case where I always need a <code>default</code> value if the given <code>key</code> does not exist, or if the <code>key</code> exists but has a <code>falsy</code> value.</p>
<p>for example</p>
<pre><code>x = {'a': 'test', 'b': False, 'c': None, 'd': ''}
print x.get('a', [])
test
print x.get('b', []) # Need [] as False is falsy value in python
False
print x.get('e', []) # This will work fine, because `e` is not valid key
[]
print x.get('c', [])
None
print x.get('c', []) or [] # This gives output which I want
</code></pre>
<p>Instead of check <code>Falsy</code> value in <code>or</code> operation, is there any pythonic way to get my default value?</p>
| 0 | 2016-07-21T20:22:46Z | 38,513,904 | <p>Using <code>or</code> to <em>return</em> your default value is Pythonic. I'm not sure you will get a more <em>readable</em> workaround.</p>
<p>About using <code>or</code> in the <a href="https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not" rel="nofollow">docs</a>:</p>
<blockquote>
<p>This is a <strong>short-circuit</strong> operator, so it only evaluates the second
argument if the first one is False.</p>
</blockquote>
<p>You must also consider that the value must have been accessed first before it can then be evaluated as <em>Falsy</em> or <em>Truthy</em>.</p>
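<p>A quick sketch of the pattern applied to the dict from the question, showing how the short-circuit swaps a stored falsy value for the default:</p>

```python
x = {'a': 'test', 'b': False, 'c': None, 'd': ''}

print(x.get('a', []) or [])   # prints: test  (truthy value passes through)
print(x.get('b', []) or [])   # prints: []    (stored False is swapped for the default)
print(x.get('e', []) or [])   # prints: []    (missing key already got the default)
```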
| 2 | 2016-07-21T20:34:59Z | [
"python",
"dictionary",
"default"
] |
Get default value if key has falsy value in dict | 38,513,683 | <p>I am working in python, and was using <code>dict</code> in my code.</p>
<p>I have a case where I always need a <code>default</code> value if the given <code>key</code> does not exist, or if the <code>key</code> exists but has a <code>falsy</code> value.</p>
<p>for example</p>
<pre><code>x = {'a': 'test', 'b': False, 'c': None, 'd': ''}
print x.get('a', [])
test
print x.get('b', []) # Need [] as False is falsy value in python
False
print x.get('e', []) # This will work fine, because `e` is not valid key
[]
print x.get('c', [])
None
print x.get('c', []) or [] # This gives output which I want
</code></pre>
<p>Instead of check <code>Falsy</code> value in <code>or</code> operation, is there any pythonic way to get my default value?</p>
| 0 | 2016-07-21T20:22:46Z | 38,514,211 | <p>Here is an ugly hack:</p>
<pre><code>from collections import defaultdict
x = {'a': 'test', 'b': False, 'c': None, 'd': ''}
d = defaultdict(lambda : [], dict((k, v) if v is not None else (k, []) for k, v in x.items()))
print(d['a'])
# test
print(d['b'])
# False
print(d['e'])
# []
print(d['c'])
# []
</code></pre>
| 0 | 2016-07-21T20:56:10Z | [
"python",
"dictionary",
"default"
] |
How to sort queryset by | 38,513,686 | <p>There are two related classes in my code:</p>
<pre><code>class TreeNode(MPTTModel):
...
@property
def last_payment(self):
return self.annuities.last()
class FilterPayment(models.Model):
class Meta:
verbose_name = 'filter payment'
verbose_name_plural = 'filter payments'
expected_date = models.DateField(verbose_name='expected date')
fact_date = models.DateField(verbose_name='actual date', null=True, blank=True)
total = models.IntegerField(verbose_name='amount')
client = models.ForeignKey(TreeNode, related_name='annuities', verbose_name='client')
</code></pre>
<p>How to filter <code>TreeNode.objects.all()</code> by <code>last_payment__expected_date</code> if <code>last_payment</code> is property?</p>
| 0 | 2016-07-21T20:22:54Z | 38,513,866 | <p>You cannot use a property to filter on Django Querysets. You can, though, do the following which will give you the desired result:</p>
<pre><code>import datetime
TreeNode.objects.filter(annuities__expected_date = datetime.datetime(YYYY, MM, DD))
</code></pre>
| 0 | 2016-07-21T20:33:04Z | [
"python",
"django"
] |
Unable to wait while iterating through webdriver dropdown | 38,513,692 | <p>I need assistance with web driver using python. I am trying to iterate through a dropdown list using a wait. A selected dropdown item automatically updates the page, so a WebDriverWait is needed to give the page time to update. I need to iterate through a dropdown of 10 items and wait between each item. Any examples on how this is done? </p>
| 0 | 2016-07-21T20:23:11Z | 38,515,680 | <p>General pattern of what you want to do could be: </p>
<ol>
<li>Create a list of pairs <code>item to select, condition to wait for after selection</code> </li>
<li>Loop on this list</li>
</ol>
<p>I am not very much of a python developer, so take the code below as pseudo-code with python flavour:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select
# First item in tuple is the value of select,
# Second is the value to wait for after selecting the item
verify_list = [('item1', '//xpath1'),('item2', '//xpath2'),...]
wait = WebDriverWait(driver, 10)
for item,condition in verify_list:
# Get select itself (since page refreshes, you may need to do it after each select)
select = Select(driver.find_element_by_xpath("xpath"))
# Select the value (for example by visible text)
select.select_by_visible_text(item)
# Wait for condition. For example presence of the element
# try/catch can be added here, if you want to run all options,
# even if one of them fails.
wait.until(EC.presence_of_element_located((By.XPATH, condition)))
</code></pre>
| 0 | 2016-07-21T22:59:28Z | [
"python",
"selenium"
] |
Is the continue statement necessary in a Python while loop? | 38,513,718 | <p>I'm confused about the use of the <code>continue</code> statement in a <code>while</code> loop.</p>
<p>In this <a href="http://stackoverflow.com/a/23294659/1391441">highly upvoted answer</a>, <code>continue</code> is used inside a <code>while</code> loop to indicate that the execution should continue (obviously). Its <a href="https://docs.python.org/3/reference/simple_stmts.html#continue" rel="nofollow">definition</a> also mentions its use in a <code>while</code> loop:</p>
<blockquote>
<p>continue may only occur syntactically nested in a for or while loop</p>
</blockquote>
<p>But in <a href="http://stackoverflow.com/q/8420705/1391441">this (also highly upvoted) question</a> about the use of <code>continue</code>, all examples are given using a <code>for</code> loop.</p>
<p>It would also appear, given the tests I've run, that it is completely unnecessary. This code:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
continue
else:
break
</code></pre>
<p>works just as well as this one:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
else:
break
</code></pre>
<p>What am I missing?</p>
| 1 | 2016-07-21T20:24:34Z | 38,513,752 | <p><code>continue</code> just means skip to the next iteration of the loop. The behavior here is the same because nothing further happens after the <code>continue</code> statement anyways.</p>
<p>The docs you quoted are just saying that you can <em>only</em> use <code>continue</code> inside of a loop structure - outside, it's meaningless.</p>
| 2 | 2016-07-21T20:26:45Z | [
"python"
] |
Is the continue statement necessary in a Python while loop? | 38,513,718 | <p>I'm confused about the use of the <code>continue</code> statement in a <code>while</code> loop.</p>
<p>In this <a href="http://stackoverflow.com/a/23294659/1391441">highly upvoted answer</a>, <code>continue</code> is used inside a <code>while</code> loop to indicate that the execution should continue (obviously). Its <a href="https://docs.python.org/3/reference/simple_stmts.html#continue" rel="nofollow">definition</a> also mentions its use in a <code>while</code> loop:</p>
<blockquote>
<p>continue may only occur syntactically nested in a for or while loop</p>
</blockquote>
<p>But in <a href="http://stackoverflow.com/q/8420705/1391441">this (also highly upvoted) question</a> about the use of <code>continue</code>, all examples are given using a <code>for</code> loop.</p>
<p>It would also appear, given the tests I've run, that it is completely unnecessary. This code:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
continue
else:
break
</code></pre>
<p>works just as well as this one:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
else:
break
</code></pre>
<p>What am I missing?</p>
| 1 | 2016-07-21T20:24:34Z | 38,513,811 | <p>Here's a really simple example where <code>continue</code> actually does something measureable:</p>
<pre><code>animals = ['dog', 'cat', 'pig', 'horse', 'cow']
while animals:
a = animals.pop()
if a == 'dog':
continue
elif a == 'horse':
break
print(a)
</code></pre>
<p>You'll notice that if you run this, you won't see <code>dog</code> printed. That's because when python sees <code>continue</code>, it skips the rest of the while suite and starts over from the top.</p>
<p>You won't see <code>'horse'</code> or <code>'cow'</code> either because when <code>'horse'</code> is seen, we encounter the break which exits the <code>while</code> suite entirely.</p>
<p>With all that said, I'll just say that over 90%<sup>1</sup> of loops <em>won't</em> need a <code>continue</code> statement.</p>
<p><sup><sup>1</sup>This is complete guess, I don't have any <em>real</em> data to support this claim :)</sup></p>
| 2 | 2016-07-21T20:30:31Z | [
"python"
] |
Is the continue statement necessary in a Python while loop? | 38,513,718 | <p>I'm confused about the use of the <code>continue</code> statement in a <code>while</code> loop.</p>
<p>In this <a href="http://stackoverflow.com/a/23294659/1391441">highly upvoted answer</a>, <code>continue</code> is used inside a <code>while</code> loop to indicate that the execution should continue (obviously). Its <a href="https://docs.python.org/3/reference/simple_stmts.html#continue" rel="nofollow">definition</a> also mentions its use in a <code>while</code> loop:</p>
<blockquote>
<p>continue may only occur syntactically nested in a for or while loop</p>
</blockquote>
<p>But in <a href="http://stackoverflow.com/q/8420705/1391441">this (also highly upvoted) question</a> about the use of <code>continue</code>, all examples are given using a <code>for</code> loop.</p>
<p>It would also appear, given the tests I've run, that it is completely unnecessary. This code:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
continue
else:
break
</code></pre>
<p>works just as well as this one:</p>
<pre><code>while True:
data = raw_input("Enter string in all caps: ")
if not data.isupper():
print("Try again.")
else:
break
</code></pre>
<p>What am I missing?</p>
| 1 | 2016-07-21T20:24:34Z | 38,513,944 | <p><code>continue</code> is only necessary if you want to jump to the next iteration of a loop without doing the rest of the loop. It has no effect if it's the last statement to be run.</p>
<p><code>break</code> exits the loop altogether.</p>
<p>An example:</p>
<pre><code>items = [1, 2, 3, 4, 5]
print('before loop')
for item in items:
if item == 5:
break
if item < 3:
continue
print(item)
print('after loop')
</code></pre>
<p>result:</p>
<pre><code>before loop
3
4
after loop
</code></pre>
| 1 | 2016-07-21T20:37:48Z | [
"python"
] |
Django JSON field. 'module' object has no attribute 'JSONField' | 38,513,827 | <p>I am learning Django and got frustrated creating a json field in a model. I tried to create a json field in my model but got the error: 'module' object has no attribute 'JSONField'. Here is my class in models.py:</p>
<pre><code>class Question(models.Model):
question_text = models.JSONField(max_length=200)
pub_date = models.DateTimeField('date published')
</code></pre>
<p>I am using django 1.9.8 and postgresql 9.2.13.
I need the table created in the postgresql db to have a column with JSON type. How can I do that in the model class?
Thank you!</p>
| -2 | 2016-07-21T20:31:14Z | 38,513,871 | <p>There's no <code>JSONField</code> in <code>models</code> module, you need to:</p>
<pre><code>from django.contrib.postgres.fields import JSONField
class Question(models.Model):
question_text = JSONField()
</code></pre>
<p>Django doc about <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/postgres/fields/#jsonfield" rel="nofollow">JSONField</a>.</p>
| 1 | 2016-07-21T20:33:36Z | [
"python",
"django"
] |
Django JSON field. 'module' object has no attribute 'JSONField' | 38,513,827 | <p>I am learning Django and got frustrated creating a json field in a model. I tried to create a json field in my model but got the error: 'module' object has no attribute 'JSONField'. Here is my class in models.py:</p>
<pre><code>class Question(models.Model):
question_text = models.JSONField(max_length=200)
pub_date = models.DateTimeField('date published')
</code></pre>
<p>I am using django 1.9.8 and postgresql 9.2.13.
I need the table created in the postgresql db to have a column with JSON type. How can I do that in the model class?
Thank you!</p>
| -2 | 2016-07-21T20:31:14Z | 38,513,931 | <p>There's no <code>JSONField</code> in models. But there's a handy <code>jsonfield</code> package available to use <code>JSONField</code> in Django models. To install the package, do:</p>
<pre><code>pip install jsonfield
</code></pre>
<p>Once installed, do:</p>
<pre><code>from jsonfield import JSONField
from django.db import models
class Question(models.Model):
question_text = JSONField(max_length=200)
pub_date = models.DateTimeField('date published')
</code></pre>
| 1 | 2016-07-21T20:36:49Z | [
"python",
"django"
] |
C++ DLL returning a pointer called from Python | 38,513,929 | <p>I am trying to access a C++ dll from python (I am new to Python). I overcame many calling convention issues and finally got it to run without any compile/linking errors. However, when I print the returned array from the C++ dll in python it shows random uninitialized values. It looks like the values were not correctly returned.</p>
<p>My C++ code looks like this.</p>
<pre><code>double DLL_EXPORT __cdecl *function1(int arg1, int arg2, double arg3[],int arg4,double arg5,double arg6,double arg7, double arg8)
{
double *Areas = new double[7];
....Calculations
return Areas;
}
</code></pre>
<p>My python code looks as follows:</p>
<pre><code>import ctypes
CalcDll = ctypes.CDLL("CalcRoutines.dll")
arg3_py=(ctypes.c_double*15)(1.926,1.0383,0.00008,0.00102435,0.0101,0.0,0.002,0.0254,102,1,0.001046153,0.001046153,0.001046153,0.001046153,20)
dummy = ctypes.c_double(0.0)
CalcDll.function1.restype = ctypes.c_double*7
Areas = CalcDll.function1(ctypes.c_int(1),ctypes.c_int(6),arg3_py,ctypes.c_int(0),dummy,dummy,dummy,dummy)
for num in Areas:
print("\t", num)
</code></pre>
<p>The output of the print statement is as below:</p>
<pre><code> 2.4768722583947873e-306
3.252195577561737e+202
2.559357001198207e-306
5e-324
2.560791130833573e-306
3e-323
2.5621383435212475e-306
</code></pre>
<p>Any suggestion on what I am doing wrong is greatly appreciated.</p>
| 1 | 2016-07-21T20:36:31Z | 38,515,576 | <p>Instead of </p>
<pre><code>CalcDll.function1.restype = ctypes.c_double * 7
</code></pre>
<p>there should be </p>
<pre><code>CalcDll.function1.restype = ctypes.POINTER(ctypes.c_double)
</code></pre>
<p>and then</p>
<pre><code>Areas = CalcDll.function1(ctypes.c_int(1), ctypes.c_int(6), arg3_py,
ctypes.c_int(0), dummy, dummy, dummy, dummy)
for i in range(7):
print("\t", Areas[i])
</code></pre>
<p>I'm not sure what ctypes does in the case of 'ctypes.c_double * 7', whether it tries to extract seven doubles from the stack or something else.</p>
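<p>As a rough illustration (no DLL required), a <code>POINTER(c_double)</code> result supports indexing and slicing, which is how the seven values can be read back; the buffer below is just a stand-in for the array the DLL would allocate:</p>

```python
import ctypes

# stand-in for the buffer the DLL would allocate with `new double[7]`
buf = (ctypes.c_double * 7)(*range(7))
areas = ctypes.cast(buf, ctypes.POINTER(ctypes.c_double))
print(areas[:7])       # slicing a ctypes pointer returns a plain Python list
```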
<p>Tested with</p>
<pre><code>double * function1(int arg1, int arg2, double arg3[],
int arg4, double arg5, double arg6,
double arg7, double arg8)
{
double * areas = malloc(sizeof(double) * 7);
int i;
for(i=0; i<7; i++) {
areas[i] = i;
}
return areas;
}
</code></pre>
<p>the values in the array are printed correctly with <code>restype = ctypes.POINTER(ctypes.c_double)</code></p>
| 1 | 2016-07-21T22:47:59Z | [
"python",
"c++",
"pointers",
"dll",
"ctypes"
] |
Python : calling __new__ method of superclass | 38,513,986 | <p>I have two python classes defined like follows : </p>
<pre><code>class A(object) :
def __init__(self, param) :
print('A.__init__ called')
self.param = param
def __new__(cls, param) :
print('A.__new__ called')
x = object.__new__(A)
x._initialize() # initialization code
return x
class B(A) :
def __init__(self, different_param) :
print('B.__init__ called')
def __new__(cls, different_param) :
print('B.__new__ called')
# call __new__ of class A, passing "param" as parameter
# and get initialized instance of class B
# something like below
b = object.__new__(B)
param = cls.convert_param(different_param)
return super(B, cls).__new__(b, param) # I am expecting something
# similar to this
@staticmethod
def convert_param(param) :
return param
</code></pre>
<p><code>class B</code> is a subclass of <code>class A</code>. The difference between the two classes is that the parameters passed to <code>class B</code> are in a different format as compared to those expected by <code>class A</code>. So, the <code>convert_param</code> method of <code>classB</code> is called to convert the parameters to be compatible with the <code>__new__</code> method of <code>class A</code>. </p>
<p>I am stuck at the part where I wish to call the <code>__new__</code> method of <code>class A</code> from the <code>__new__</code> method of <code>class B</code>, since there is a lot of initialisation that takes place in there, and then get back the initialised instance of <code>class B</code>. </p>
<p>I am having a difficult time figuring this out. Please help.</p>
| 0 | 2016-07-21T20:40:16Z | 38,514,090 | <p><code>convert_param</code> should either be a <code>staticmethod</code> or a <code>classmethod</code> and you don't want to be calling <code>object.__new__</code> from <code>B</code> (otherwise, you're essentially trying to create two new instances of <code>B</code> instead of just one). If <code>convert_param</code> is a <code>staticmethod</code> or a <code>classmethod</code>, then you can do the parameter conversion <em>before</em> you have an instance (e.g. before <code>__new__</code> has been called on the superclass):</p>
<pre><code>class B(A):
@staticmethod
def convert_param(params):
# do magic
return params
def __new__(cls, params):
params = cls.convert_param(params)
return super(B, cls).__new__(cls, params)
</code></pre>
<p>Additionally, you'll need to change <code>A</code>'s <code>__new__</code> slightly to not hard-code the type of the instance returned from <code>object.__new__</code>:</p>
<pre><code>class A(object) :
def __new__(cls, param) :
print('A.__new__ called')
x = super(A, cls).__new__(cls)
x._initialize() # initialization code
return x
</code></pre>
| 2 | 2016-07-21T20:46:27Z | [
"python",
"inheritance",
"override",
"subclass"
] |
how to pass own dictionary to sub-template | 38,514,001 | <p>using <code>bottlepy</code> with the <code>simple template engine</code> I wonder how I could pass the entire dictionary that was <em>passed</em> to the template on to its sub-templates.</p>
<p>e.g. in my <code>main.py</code> i have:</p>
<pre><code>@bottle.route('/')
@bottle.view('main')
def index():
"""main page"""
return {"name": "main", "foo": 12, "flag": True}
</code></pre>
<p>and i want to pass on <strong>all</strong> the values in the dictionary from my <code>main.tpl</code> to a <code>sub.tpl</code></p>
<pre><code>$ cat sub.tpl
<h1>Hello, {{name}}</h1>
$ cat main.tpl
% include('subtemplate', name=name, foo=foo, flag=flag)
</code></pre>
<p>enumerating each key (as in the above example) is of course neither very scalable nor flexible.</p>
<p>so: is there a way to pass on the entire environment?</p>
<p>something like</p>
<pre><code>$ cat main.tpl
% include('subtemplate', *env)
</code></pre>
| 0 | 2016-07-21T20:41:01Z | 38,514,098 | <p>Just a thought, off the top of me head. (I.e., untested.)</p>
<pre><code>@bottle.route('/')
@bottle.view('main')
def index():
"""main page"""
env = {"name": "main", "foo": 12, "flag": True} # same vars as before
env["env"] = env # add a reference to the entire dict, for passing deeper into subtemplates
return env
</code></pre>
<p>And then:</p>
<pre><code>% include('subtemplate', env=env)
</code></pre>
<hr>
<p><strong>EDIT</strong></p>
<p>Thanks to @Kwartz for suggesting the following improvement.</p>
<p>A cleaner method would be, simply:</p>
<pre><code>% include('subtemplate', **env)
</code></pre>
<p>Have not tried it, but if <code>**locals()</code> works (h/t to @Lukas Graf for trying it and confirming), then it's reasonable to expect <code>**env</code> to work as well.</p>
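<p>The <code>**</code> unpacking that <code>include()</code> relies on can be sanity-checked in plain Python; <code>fake_include</code> below is just a hypothetical stand-in for the template's <code>include()</code>:</p>

```python
def fake_include(**kwargs):   # hypothetical stand-in for the template's include()
    return kwargs

env = {"name": "main", "foo": 12, "flag": True}
print(fake_include(**env) == env)   # prints: True
```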
| 2 | 2016-07-21T20:47:36Z | [
"python",
"templates",
"template-engine",
"bottle"
] |
Save a csr_matrix and a numpy array in one file | 38,514,151 | <p>I need to save a large sparse csr_matrix and a numpy array to be able to read them back later. Let X be the sparse csr_matrix and Y be the numpy array.</p>
<p>Currently I take the following slightly insane route.</p>
<pre><code>from scipy.sparse import csr_matrix
import numpy as np
def save_sparse_csr(filename,array):
np.savez(filename,data = array.data ,indices=array.indices,
indptr =array.indptr, shape=array.shape )
def load_sparse_csr(filename):
loader = np.load(filename)
return csr_matrix(( loader['data'], loader['indices'], loader['indptr']),
shape = loader['shape'])
save_sparse_csr("file1", X)
np.save("file2", Y)
</code></pre>
<p>Then when I want to read them in it is:</p>
<pre><code>X = load_sparse_csr("file1.npz")
Y = np.load("file2.npy")
</code></pre>
<p>Two questions:</p>
<ol>
<li>Is there a better way to save a csr_matrix than this?</li>
<li>Can I save both X and Y to the same file somehow? I seems crazy to have to make two files for this.</li>
</ol>
| 0 | 2016-07-21T20:51:12Z | 38,517,842 | <p>So you are saving the 3 array attributes of the <code>csr</code> along with its shape. And that is sufficient to recreate the array, right?</p>
<p>What's wrong with that? Even if you find a function that saves the <code>csr</code> for you, I bet it is doing the same thing - saving those same arrays.</p>
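<p>For the second question, a sketch (names are illustrative): <code>np.savez</code> accepts arbitrary keyword arrays, so the csr components and the dense array can share one <code>.npz</code> file:</p>

```python
import os
import tempfile

import numpy as np
from scipy.sparse import csr_matrix

X = csr_matrix(np.eye(3))          # illustrative sparse matrix
Y = np.arange(5)                   # illustrative dense array

path = os.path.join(tempfile.gettempdir(), 'combined.npz')
np.savez(path, data=X.data, indices=X.indices, indptr=X.indptr,
         shape=X.shape, Y=Y)       # everything lives in one .npz archive

loader = np.load(path)
X2 = csr_matrix((loader['data'], loader['indices'], loader['indptr']),
                shape=loader['shape'])
Y2 = loader['Y']
```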
<p>The normal way in Python to save a class is to <code>pickle</code> it. But the class has to create the appropriate pickle method. <code>numpy</code> does that (essentially its <code>save</code> function). But as far as I know <code>scipy.sparse</code> has not provided that.</p>
<p>Since <code>scipy.sparse</code> has its roots in the MATLAB sparse code (and C/Fortran code developed for linear algebra problems), it can load/save using the <code>loadmat/savemat</code> functions. I'd have to double check, but I think they work with <code>csc</code>, the default MATLAB sparse format.</p>
<p>There are one or two other <code>sparse.io</code> modules that handle sparse matrices, but I haven't worked with those. These are formats for sharing sparse arrays among different packages working on the same problems (for example PDEs or finite elements). More than likely those formats will use a <code>coo</code>-compatible layout (data, rows, cols), either as 3 arrays, a csv of 3 columns, or a 2d array.</p>
<p>Mentioning <code>coo</code> format raises another possibility. Make a structured array with <code>data, row, col</code> fields, and use <code>np.save</code> or even <code>np.savetxt</code>. I don't think it's any faster or cleaner than <code>csr</code> direct. But it does put all the data in one array (though <code>shape</code> might still need a separate entry).</p>
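<p>A sketch of that structured-array idea (illustrative only; <code>shape</code> still has to be stored separately):</p>

```python
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix

M = csr_matrix(np.eye(4)).tocoo()  # illustrative matrix, in coo form
rec = np.empty(M.nnz, dtype=[('data', M.data.dtype),
                             ('row', M.row.dtype),
                             ('col', M.col.dtype)])
rec['data'], rec['row'], rec['col'] = M.data, M.row, M.col
# np.save(..., rec) would store this single structured array;
# rebuilding still needs the shape kept somewhere:
M2 = coo_matrix((rec['data'], (rec['row'], rec['col'])), shape=M.shape).tocsr()
```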
<p>You might also be able to pickle the <code>dok</code> format, since it is a <code>dict</code> subclass.</p>
| 0 | 2016-07-22T03:48:11Z | [
"python",
"numpy",
"scipy"
] |
Can you read first line from file with open(fname, 'a+')? | 38,514,177 | <p>I want to be able to open a file, append some text to the end, and then read only the first line. I know exactly how long the first line of the file is, and the file is large enough that I don't want to read it into memory all at once. I've tried using:</p>
<pre><code>with open('./output files/log.txt', 'a+') as f:
f.write('This is example text')
content = f.readline()
print(content)
</code></pre>
<p>but the print statement is blank. When I try using <code>open('./output files/log.txt')</code> or <code>open('./output files/log.txt', 'r+')</code> instead of <code>open('./output files/log.txt', 'a+')</code> this works, so I know it has to do with the <code>'a+'</code> argument. My problem is that I have to append to the file. How can I append to the file and still get the first line without using something like </p>
<pre><code>with open('./output files/log.txt', 'a+') as f_1:
f_1.write('This is example text')
with open('./output files/log.txt') as f_2:
content = f_2.readline()
print(content)
</code></pre>
| 3 | 2016-07-21T20:53:45Z | 38,514,309 | <p>You need to go back to the start of the file using <code>seek(0)</code>, like so:</p>
<pre><code>with open('./output files/log.txt', 'a+') as f_1:
f_1.write('This is example text')
f_1.seek(0)
print(f_1.readline())
</code></pre>
| 1 | 2016-07-21T21:02:47Z | [
"python",
"file",
"python-3.x"
] |
Can you read first line from file with open(fname, 'a+')? | 38,514,177 | <p>I want to be able to open a file, append some text to the end, and then read only the first line. I know exactly how long the first line of the file is, and the file is large enough that I don't want to read it into memory all at once. I've tried using:</p>
<pre><code>with open('./output files/log.txt', 'a+') as f:
f.write('This is example text')
content = f.readline()
print(content)
</code></pre>
<p>but the print statement is blank. When I try using <code>open('./output files/log.txt')</code> or <code>open('./output files/log.txt', 'r+')</code> instead of <code>open('./output files/log.txt', 'a+')</code> this works, so I know it has to do with the <code>'a+'</code> argument. My problem is that I have to append to the file. How can I append to the file and still get the first line without using something like </p>
<pre><code>with open('./output files/log.txt', 'a+') as f_1:
f_1.write('This is example text')
with open('./output files/log.txt') as f_2:
content = f_2.readline()
print(content)
</code></pre>
| 3 | 2016-07-21T20:53:45Z | 38,514,676 | <p>When you open a file with the append flag <code>a</code>, it moves the file descriptor's pointer to the end of the file, so that the <code>write</code> call will add to the end of the file. </p>
<p>The <code>readline()</code> function reads from the current pointer of the file until the next <code>'\n'</code> character it reads. So when you open a file with append, and then call <code>readline</code>, it will try to read a line starting from the end of the file. This is why your <code>print</code> call is coming up blank.</p>
<p>You can see this in action by looking at where the <code>file</code> object is currently pointing, using the <code>tell()</code> function. </p>
<p>To read the first line, you'd have to make sure the file's pointer is back at the beginning of the file, which you can do using the <a href="http://www.tutorialspoint.com/python/file_seek.htm" rel="nofollow"><code>seek</code></a> function. <code>seek</code> <a href="http://stackoverflow.com/a/11696554/2487336">takes two arguments</a>: <code>offset</code> and <code>from_what</code>. If you omit the second argument, <code>offset</code> is taken from the beginning of the file. So to jump to the beginning of the file, do: <code>seek(0)</code>. </p>
<p>If you want to jump back to the end of the file, you can include the <code>from_what</code> option. <code>from_what=2</code> means take the offset from the end of the file. So to jump to the end: <code>seek(0, 2)</code>.</p>
<p><br/></p>
<p><strong>Demonstration of file pointers when opened in append mode:</strong></p>
<p>Example using a text file that looks like this:</p>
<pre><code>the first line of the file
and the last line
</code></pre>
<p>Code:</p>
<pre><code>with open('example.txt', 'a+') as fd:
print fd.tell() # at end of file
fd.write('example line\n')
print fd.tell() # at new end of the file after writing
# jump to the beginning of the file:
fd.seek(0)
print fd.readline()
# jump back to the end of the file
fd.seek(0, 2)
fd.write('went back to the end')
</code></pre>
<p>console output:</p>
<pre><code>45
57
the first line of the file
</code></pre>
<p>new contents of <code>example.txt</code>:</p>
<pre><code>the first line of the file
and the last line
example line
went back to the end
</code></pre>
<p><br/></p>
<p><strong>Edit: added jumping back to end of file</strong></p>
| 2 | 2016-07-21T21:28:40Z | [
"python",
"file",
"python-3.x"
] |
Finding important features in a random forest is very slow | 38,514,249 | <p>I have a set of feature vectors associated with binary class labels,
each of which has about 40,000 features. I train a RandomForest classifier using <code>RandomForestClassifier</code> from <code>sklearn</code> which takes about 10 minutes. I would however like to see which are the most important features.</p>
<p>I tried simply printing out <code>clf.feature_importances_</code> but this takes
about 1 second per feature making about 40,000 seconds overall (approx 12 hours). This
is much much longer than the time needed to train the classifier in
the first place!</p>
<p>Is there a more efficient way to find out which features are most important?</p>
<p>Here is an example of what I mean:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50)
clf = clf.fit(X, Y)
for i in xrange(len(clf.feature_importances_)):
print clf.feature_importances_[i]
</code></pre>
| 3 | 2016-07-21T20:58:26Z | 38,514,640 | <p>All you need to do is to store the results of <code>clf.feature_importances_</code> in an array and then use that to print out results. Like:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50)
clf = clf.fit(X, Y)
featureImportance = clf.feature_importances_
for i in xrange(len(featureImportance)):
print featureImportance[i]
</code></pre>
<p>The way you are handling it right now is recalculating the array every single time.</p>
| 1 | 2016-07-21T21:26:01Z | [
"python",
"scikit-learn"
] |
Finding important features in a random forest is very slow | 38,514,249 | <p>I have a set of feature vectors associated with binary class labels,
each of which has about 40,000 features. I train a RandomForest classifier using <code>RandomForestClassifier</code> from <code>sklearn</code> which takes about 10 minutes. I would however like to see which are the most important features.</p>
<p>I tried simply printing out <code>clf.feature_importances_</code> but this takes
about 1 second per feature making about 40,000 seconds overall (approx 12 hours). This
is much much longer than the time needed to train the classifier in
the first place!</p>
<p>Is there a more efficient way to find out which features are most important?</p>
<p>Here is an example of what I mean:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50)
clf = clf.fit(X, Y)
for i in xrange(len(clf.feature_importances_)):
print clf.feature_importances_[i]
</code></pre>
 | 3 | 2016-07-21T20:58:26Z | 38,522,809 | <p>I'm going to suggest a small variation, which should solve the problem automatically, because it obtains <code>feature_importances_</code> just once:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50)
clf = clf.fit(X, Y)
for feature_importance in clf.feature_importances_:
print feature_importance
</code></pre>
<p>If you need the loop index <code>i</code> elsewhere in your loop, just use <code>enumerate</code>:</p>
<pre><code>for i, feature_importance in enumerate(clf.feature_importances_):
print feature_importance
</code></pre>
<p>This is also more Pythonic than using </p>
<pre><code>for i in xrange(len(<some-array>)):
<some-array>[i]
</code></pre>
<hr>
<p>I think it would have been better if somehow the <code>RandomForestClassifier</code> keeps track of its state behind the scenes. If the state changes (e.g., <code>n_estimators</code> is changed, or another parameter), it should recompute <code>feature_importances_</code> (on the fly, as it does now). Otherwise, it should just return the current, cached, feature importances.<br>
That is, however, more complicated behind the scenes. </p>
<p>Perhaps the simplest would have been to change the property into an actual method: <code>calc_feature_importances()</code>.<br>
Then again, I didn't put the effort into creating <code>RandomForestClassifier</code>, so I can't really complain.</p>
| 1 | 2016-07-22T09:23:10Z | [
"python",
"scikit-learn"
] |
Can authentication via JS be faked? (Using 3rd party Authentication) | 38,514,277 | <p>I am looking to override the authentication for my Django backend with <a href="https://docs.fabric.io/web/fabric/overview.html" rel="nofollow">Twitter Fabric's Digits</a>. Digits allows you to sign in without a password, it is cellphone mobile authentication. </p>
<p>The trick is, they provide an embed widget for your frontend (JS). This widget allows you to send requests and returns whether the user is authenticated or not. </p>
<p>Currently I have two ideas for integrating this with Django.</p>
<ol>
<li>Embed the script, wait for a response, and send the response to the backend. Let the backend parse the script.</li>
<li>Figure out the endpoints and ping them from the backend, essentially rewrite Fabric's Digits JS functions in Python.</li>
</ol>
<p>I'd really like to do idea 1 but am unsure whether this is secure enough. Can the response of the request be spoofed? Are there vulnerabilities to option 1?</p>
 | 1 | 2016-07-21T21:00:30Z | 38,514,461 | <p>Option #1 alone isn't enough: you do need to send the response to the server, but you must also verify it there. You don't need to do #2.</p>
<p>If you just went with your first option and didn't do any server-side validation of the response, they could easily mock the response that you would've forwarded to the backend. Remember (ignoring firewalls) the user can send anything they want to your server backend bypassing all client-side validation.</p>
<p>What you need to do is verify that the response your server receives from the frontend, is valid, by using Digits API from your backend. <a href="https://docs.fabric.io/web/digits/embeddable.html" rel="nofollow">See the documentation</a>:</p>
<blockquote>
<p>From your web server, over SSL, you can use this response to securely request the userID, phone number, and oAuth tokens of the Digits user. With this approach, there is no need to configure OAuth signing, or configure and host a callback url for Digits.</p>
<p>As additional security measures, you will want to on your webhost:</p>
<ul>
<li>Validate the oauth_consumer_key header value matches your oauth consumer key, to ensure the user is logging into your site</li>
<li>Verify the X-Auth-Service-Provider header, by parsing the uri and asserting the domain is api.twitter.com or www.digits.com, to ensure you call Twitter.</li>
<li><strong>Validate the response from the verify_credentials call to ensure the user is successfully logged in</strong></li>
</ul>
</blockquote>
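<p>As a rough sketch of the second check quoted above (Python 3; the endpoint path in the example is illustrative, not taken from the Digits docs), note that parsing the URI, rather than substring-matching it, is what prevents spoofed hosts:</p>

```python
from urllib.parse import urlparse

# Hosts listed in the verification steps quoted above.
TRUSTED_HOSTS = {"api.twitter.com", "www.digits.com"}

def provider_is_trusted(x_auth_service_provider):
    """Parse the X-Auth-Service-Provider header value and check its host."""
    return urlparse(x_auth_service_provider).hostname in TRUSTED_HOSTS

# A naive substring check would wrongly accept the second URL.
print(provider_is_trusted("https://api.twitter.com/1.1/sdk/account.json"))  # True
print(provider_is_trusted("https://evil.example.com/api.twitter.com/"))     # False
```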
| 2 | 2016-07-21T21:13:46Z | [
"javascript",
"python",
"django",
"authentication",
"twitter-fabric"
] |
Is it possible to have multiple tags in xpath, python | 38,514,311 | <p>I want to extract text from either an a tag or a p tag and wondering whether I can do them both in the same <code>XPATH</code>. </p>
<p>XPATH would look like this: </p>
<pre><code>'//*[contains(@id, "profile")]/div/div/div/div/a|h4/a'
</code></pre>
<p>where '|' means a tag or h4 tag</p>
 | 0 | 2016-07-21T21:02:54Z | 38,515,015 | <p>Use the <code>self</code> axis:</p>
<pre class="lang-none prettyprint-override"><code>//*[contains(@id, "profile")]/div/div/div/div/*[self::a or self::h4]/a
</code></pre>
<p>And if you want <code>.../div/a</code> or <code>.../div/h4/a</code>, use a union of two XPath expressions:</p>
<pre class="lang-none prettyprint-override"><code>//*[contains(@id, "profile")]/div/div/div/div/a | //*[contains(@id, "profile")]/div/div/div/div/h4/a
</code></pre>
| 1 | 2016-07-21T21:56:23Z | [
"python",
"xpath"
] |
What am I doing wrong here? Try and Except in Python | 38,514,317 | <p>Please read my code for better understanding of my question. I'm creating a to do list in python. In the while loop where there's try and except, I want to set the user input type as string. And if the user types in an integer I want to print out the message in the "except" block. But it doesn't execute the ValueError if the I type in an integer when I run the code. </p>
<p>Here's the code:</p>
<pre><code>to_do_list = []
print("""
Hello! Welcome to your notes app.
Type 'SHOW' to show your list so far
Type 'DONE' when you've finished your to do list
""")
#let user show their list
def show_list():
print("Here is your list so far: {}. Continue adding below!".format(", ".join(to_do_list)))
#append new items to the list
def add_to_list(user_input):
to_do_list.append(user_input)
print("Added {} to the list. {} items so far".format(user_input.upper(), len(to_do_list)))
#display the list
def display_list():
print("Here's your list: {}".format(to_do_list))
print("Enter items to your list below")
while True:
#HERE'S WHERE THE PROBLEM IS!
#check if input is valid
try:
user_input = str(input(">"))
except ValueError:
print("Strings only!")
else:
#if user wants to show list
if user_input == "SHOW":
show_list()
continue
#if user wants to end the list
elif user_input == "DONE":
new_input = input("Are you sure you want to quit? y/n ")
if new_input == "y":
break
else:
continue
#append items to the list
add_to_list(user_input)
display_list()
</code></pre>
| 0 | 2016-07-21T21:03:26Z | 38,514,382 | <p>Two problems with your assumptions:</p>
<ol>
<li>Calling <code>str</code> on an integer will not raise a <code>ValueError</code> because every integer can be represented as a string.</li>
<li>Everything coming back from <code>input</code> (on Python 3 anyway, which it looks like you're using) is <em>already</em> a string. Casting a string to a string will <em>definitely</em> not throw an error.</li>
</ol>
<p>You might want to use <a href="https://docs.python.org/3/library/stdtypes.html#str.isdigit" rel="nofollow"><code>isdigit</code></a> if you want to throw out all-numeric input.</p>
<hr>
<p>There seems to be some confusion in the comments over the word 'all-numeric'. I mean a string that is entirely composed of numbers, which was my interpretation of the OP not wanting "integers" on his to-do list. If you want to throw out some broader class of stringified numbers (signed integers, floats, scientific notation), <code>isdigit</code> is not the method for you. :)</p>
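<p>For instance, a minimal sketch of that check (the <code>validate</code> helper and its messages are made up for illustration):</p>

```python
def validate(user_input):
    # str.isdigit() is True only for a non-empty, all-digit string,
    # so this rejects purely numeric input but keeps mixed text.
    if user_input.isdigit():
        return "Strings only!"
    return "Added: " + user_input

print(validate("42"))        # rejected: all digits
print(validate("buy milk"))  # accepted
print(validate("3 eggs"))    # accepted: the space makes isdigit() False
```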
| 2 | 2016-07-21T21:08:15Z | [
"python"
] |
What am I doing wrong here? Try and Except in Python | 38,514,317 | <p>Please read my code for better understanding of my question. I'm creating a to do list in python. In the while loop where there's try and except, I want to set the user input type as string. And if the user types in an integer I want to print out the message in the "except" block. But it doesn't execute the ValueError if the I type in an integer when I run the code. </p>
<p>Here's the code:</p>
<pre><code>to_do_list = []
print("""
Hello! Welcome to your notes app.
Type 'SHOW' to show your list so far
Type 'DONE' when you've finished your to do list
""")
#let user show their list
def show_list():
print("Here is your list so far: {}. Continue adding below!".format(", ".join(to_do_list)))
#append new items to the list
def add_to_list(user_input):
to_do_list.append(user_input)
print("Added {} to the list. {} items so far".format(user_input.upper(), len(to_do_list)))
#display the list
def display_list():
print("Here's your list: {}".format(to_do_list))
print("Enter items to your list below")
while True:
#HERE'S WHERE THE PROBLEM IS!
#check if input is valid
try:
user_input = str(input(">"))
except ValueError:
print("Strings only!")
else:
#if user wants to show list
if user_input == "SHOW":
show_list()
continue
#if user wants to end the list
elif user_input == "DONE":
new_input = input("Are you sure you want to quit? y/n ")
if new_input == "y":
break
else:
continue
#append items to the list
add_to_list(user_input)
display_list()
</code></pre>
| 0 | 2016-07-21T21:03:26Z | 38,514,384 | <p><code>input</code> returns a string. See <a href="https://docs.python.org/3.5/library/functions.html#input" rel="nofollow">the docs</a> for the <code>input</code> function. Casting the result of this function to a string won't do anything.</p>
<p>You could use <a href="https://docs.python.org/3/library/stdtypes.html#str.isdecimal" rel="nofollow"><code>isdecimal</code></a> to check if the string is a numeric.</p>
<pre><code>if user_input.isdecimal():
print("Strings only!")
</code></pre>
<p>This would fit in nicely with your existing <code>else</code> clause.</p>
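<p>Note that Python 3 strings have three related predicates (<code>isdecimal</code>, <code>isdigit</code>, <code>isnumeric</code>). For plain ASCII digits they agree; they only diverge on Unicode characters such as superscripts or Roman numerals. A quick comparison:</p>

```python
# "\u00b2" is superscript two, "\u216b" is the Roman numeral twelve.
samples = ["123", "\u00b23", "\u216b", "1.5", ""]

for s in samples:
    print(repr(s), s.isdecimal(), s.isdigit(), s.isnumeric())
```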
| 3 | 2016-07-21T21:08:42Z | [
"python"
] |
What am I doing wrong here? Try and Except in Python | 38,514,317 | <p>Please read my code for better understanding of my question. I'm creating a to do list in python. In the while loop where there's try and except, I want to set the user input type as string. And if the user types in an integer I want to print out the message in the "except" block. But it doesn't execute the ValueError if the I type in an integer when I run the code. </p>
<p>Here's the code:</p>
<pre><code>to_do_list = []
print("""
Hello! Welcome to your notes app.
Type 'SHOW' to show your list so far
Type 'DONE' when you'v finished your to do list
""")
#let user show their list
def show_list():
print("Here is your list so far: {}. Continue adding below!".format(", ".join(to_do_list)))
#append new items to the list
def add_to_list(user_input):
to_do_list.append(user_input)
print("Added {} to the list. {} items so far".format(user_input.upper(), len(to_do_list)))
#display the list
def display_list():
print("Here's your list: {}".format(to_do_list))
print("Enter items to your list below")
while True:
#HERE'S WHERE THE PROBLEM IS!
#check if input is valid
try:
user_input = str(input(">"))
except ValueError:
print("Strings only!")
else:
#if user wants to show list
if user_input == "SHOW":
show_list()
continue
#if user wants to end the list
elif user_input == "DONE":
new_input = input("Are you sure you want to quit? y/n ")
if new_input == "y":
break
else:
continue
#append items to the list
add_to_list(user_input)
display_list()
</code></pre>
| 0 | 2016-07-21T21:03:26Z | 38,514,502 | <p>In Python, <code>input</code> always returns a string. For example:</p>
<pre><code>>>> input('>')
>4
'4'
</code></pre>
<p>So <code>str</code> won't throw a ValueError in this case--it's already a string.</p>
<p>If you really want to check and make sure the user didn't enter just numbers you probably want to check to see if your input is all digits, and then error out.</p>
| 0 | 2016-07-21T21:16:46Z | [
"python"
] |
pony.orm sort by newest Entity in relationship | 38,514,429 | <p>Let's say I have these tables mapped with <code>pony.orm</code>:</p>
<pre><code>class Category(db.Entity):
threads = Set("Thread")
class Thread(db.Entity):
category = Required("Category")
posts = Set("Post")
class Post(db.Entity):
thread = Required("Thread")
timestamp = Required(datetime)
</code></pre>
<p>Now I want to get all threads of a certain category ordered by there newest post:</p>
<p>With this line I get the <strong>ID</strong> of the newest post, but I want the object.</p>
<pre><code>query = select((max(p.id), p.thread) for p in Post if p.thread.category.id == SOME_ID)
.order_by(lambda post_id, thread: -post_id)
</code></pre>
<p>Of course I could <code>[(Post[i], thread) for i, thread in query]</code> or <code>select(p for p in Post if p.id in [i for i,_ in query])</code></p>
<p>But this creates additional sql statements. So my question is: How can I get the newest posts of all threads in a certain category sorted by the timestamp of that post with a single sql statement.</p>
<p>I wouldn't mind using <code>db.execute(sql)</code> if it can't be done with the ORM.</p>
| 1 | 2016-07-21T21:11:23Z | 38,515,885 | <p>Try this:</p>
<pre><code>select((p, t) for t in Thread for p in t.posts
if p.id == max(p2.id for p2 in t.posts)
).order_by(lambda p, t: desc(p.id))
</code></pre>
| 1 | 2016-07-21T23:20:43Z | [
"python",
"ponyorm"
] |
Python - Populate dictionary from nested dictionary comprehension | 38,514,485 | <p>I want to populate a dictionary by iterating over two other dictionaries. I have a working example and I would like to know if there is a way to do it with a dictionary comprehension (mainly for performance reasons) or make it more Pythonic. First of all, here is the code:</p>
<pre><code>def get_replacement_map(dict_A, dict_B, min_sim):
replacement_map = {} # the dictionary i want to populate
for key_A, value_A in dict_A.items():
best_replacement = ()
best_similarity = 0
for key_B, value_B in dict_B.items():
if key_B[0] != key_A[0]:
# similarity(x,y) may return None so in that case assign sim = 0
sim = similarity(value_A[0], value_B[0]) or 0
if sim > best_similarity and sim > min_sim:
best_replacement = key_B
best_similarity = sim
if sim > 0.9: # no need to keep looking, this is good enough!
break
if best_replacement:
            replacement_map[key_A] = best_replacement
return replacement_map
</code></pre>
<p>It does a simple thing. It calculates the similarity between the elements of two dictionaries and for each element finds the best possible replacement (if the similarity is above the min_sim threshold). The purpose is to build a dictionary of replacements.</p>
<p>I am new to Python so I am pretty sure that this is not the Pythonic way to implement this. I have seen big improvements in performance by using comprehensions instead of for loops, so I was curious whether this code can also be done using nested dictionary comprehensions and whether that makes sense to do.</p>
<p>If it is not a good idea to do it using comprehensions, are there any improvements I can make?</p>
| 0 | 2016-07-21T21:15:38Z | 38,514,587 | <p>This is a complicated enough replacement schema that if you were to contain it all in a one-liner, it would be very difficult to read. Maintaining the structure and spacing relevant to making the flow understandable is the more pythonic way to solve this.</p>
<p>As for performance gains, you likely won't see any as discussed in <a href="http://stackoverflow.com/questions/22108488/are-list-comprehensions-and-functional-functions-faster-than-for-loops" title="Are list-comprehensions and functional functions faster than âfor loopsâ?">this</a> question.</p>
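<p>If you still want to condense the inner loop, <code>max()</code> with a <code>key</code> function is the closest idiomatic equivalent. Note it cannot express the early <code>break</code> at <code>sim &gt; 0.9</code>, so it may do more work. A sketch, with a stand-in <code>similarity</code>:</p>

```python
def similarity(a, b):
    # Stand-in for the real similarity function; may return None.
    return 1.0 if a == b else 0.0

def best_replacement_for(key_A, value_A, dict_B, min_sim):
    # Candidates are the keys of dict_B whose first element differs,
    # as in the original if-condition.
    candidates = [k for k in dict_B if k[0] != key_A[0]]
    if not candidates:
        return None
    score = lambda k: similarity(value_A[0], dict_B[k][0]) or 0
    best = max(candidates, key=score)
    return best if score(best) > min_sim else None
```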
| 0 | 2016-07-21T21:22:49Z | [
"python",
"python-3.x",
"dictionary",
"dictionary-comprehension"
] |
Find shortest path from one geometry to other on shapely | 38,514,607 | <p>Given two lines line_a and line_b, how do I find the pair of points that represents the smaller route from line_a to line_b?</p>
| 0 | 2016-07-21T21:23:52Z | 38,997,756 | <p>Unfortunately, there is no shapely operation to do this.
I was thinking about this problem when I was asked to extend the solution in the
question <a href="http://stackoverflow.com/questions/33311616/find-coordinate-of-closest-point-on-polygon-shapely/33324058#33324058">find-coordinate-of-closest-point-on-polygon-shapely</a> to handle two polygons. The answer depends on a geometry theorem
that is intuitively true (although a formal proof requires some calculus
and is a little too long to write here without LaTeX).
The theorem states:
the minimum distance between two line strings that do not cross each
other (a line string being a concatenated sequence of segments)
is always achieved at one of the end points of the line strings.</p>
<p>With this in mind, the problem is reduced to computing the minimum distance from each end point of one LineString to the other LineString.</p>
<p>Another way of seeing this problem is to reduce it to computing the minimum distance between each pair of segments, and to notice that the distance between two segments that do not intersect is achieved either between two end points, or between an end point and the projection of that end point onto the other segment.</p>
<p>The code has been optimized a little to avoid redundant computation, but
perhaps you can get a more elegant version. Or if you are familiar with <code>numpy</code>, you can probably get a shorter version that uses <code>numpy</code> vector distance and dot product.</p>
<pre><code>from shapely.geometry import LineString, Point
import math
def get_min_distance_pair_points(l1, l2):
"""Returns the minimum distance between two shapely LineStrings.
It also returns two points of the lines that have the minimum distance.
It assumes the lines do not intersect.
l2 interior point case:
>>> l1=LineString([(0,0), (1,1), (1,0)])
>>> l2=LineString([(0,1), (1,1.5)])
>>> get_min_distance_pair_points(l1, l2)
((1.0, 1.0), (0.8, 1.4), 0.4472135954999578)
change l2 slope to see the point changes accordingly:
>>> l2=LineString([(0,1), (1,2)])
>>> get_min_distance_pair_points(l1, l2)
((1.0, 1.0), (0.5, 1.5), 0.7071067811865476)
l1 interior point case:
>>> l2=LineString([(0.3,.1), (0.6,.1)])
>>> get_min_distance_pair_points(l1, l2)
((0.2, 0.2), (0.3, 0.1), 0.1414213562373095)
Both edges case:
>>> l2=LineString([(5,0), (6,3)])
>>> get_min_distance_pair_points(l1, l2)
((1.0, 0.0), (5.0, 0.0), 4.0)
Parallels case:
>>> l2=LineString([(0,5), (5,0)])
>>> get_min_distance_pair_points(l1, l2)
((1.0, 1.0), (2.5, 2.5), 2.1213203435596424)
Catch intersection with the assertion:
>>> l2=LineString([(0,1), (1,0.8)])
>>> get_min_distance_pair_points(l1, l2)
Traceback (most recent call last):
...
assert( not l1.intersects(l2))
AssertionError
"""
def distance(a, b):
return math.sqrt( (a[0]-b[0])**2 + (a[1]-b[1])**2 )
def get_proj_distance(apoint, segment):
'''
        Checks if the orthogonal projection of the point is inside the segment.
If True, it returns the projected point and the distance, otherwise
returns None.
'''
a = [float(i) for i in apoint]
b, c = segment
b = [float(i) for i in b]
c = [float(i) for i in c]
# t = <a-b, c-b>/|c-b|**2
        # because p(a) = t*(c-b)+b is the orthogonal projection of vector a
# over the rectline that includes the points b and c.
t = (a[0]-b[0])*(c[0]-b[0]) + (a[1]-b[1])*(c[1]-b[1])
t = t / ( (c[0]-b[0])**2 + (c[1]-b[1])**2 )
        # Only if 0 < t < 1 is the projection in the interior of
        # segment b-c, and it is the point that minimizes the distance
        # (by Pythagoras' theorem).
if 0 < t < 1:
pcoords = (t*(c[0]-b[0])+b[0], t*(c[1]-b[1])+b[1])
dmin = distance(a, pcoords)
return pcoords, dmin
return None
def get_edge_dict(line_points, line_segments):
result = {}
for i in xrange(len(line_points)):
p = line_points[i]
for j in xrange(len(line_segments)):
s = line_segments[j]
r = get_proj_distance(p, s)
if r:
result[ (i, j) ] = r
return result
def get_min_data(d_dict):
dm = None
pm = None
kmin = None
for k in d_dict:
p, d = d_dict[k]
if dm == None or d < dm:
kmin = k
pm = p
dm = d
return kmin, pm, dm
assert( not l1.intersects(l2))
l1p = list(l1.coords)
l2p = list(l2.coords)
l1s = zip(l1p, l1p[1:])
l2s = zip(l2p, l2p[1:])
edge1_dict = get_edge_dict(l1p, l2s)
edge2_dict = get_edge_dict(l2p, l1s)
base_dict = {}
for i in xrange(len(l1p)):
p1 = l1p[i]
for j in xrange(len(l2p)):
if not (i, j) in edge1_dict and not (j, i) in edge1_dict:
p2 = l2p[j]
base_dict[(i, j)] = ( p2, distance(p1, p2) )
edge1 = get_min_data(edge1_dict)
edge2 = get_min_data(edge2_dict)
base = get_min_data(base_dict)
dmin = min([d for _, _, d in [edge1, edge2, base] if d != None])
if dmin == edge1[2]:
kmin, pm, dm = edge1
i, j = kmin
return l1p[i], pm, dm
elif dmin == edge2[2]:
kmin, pm, dm = edge2
i, j = kmin
return pm, l2p[i], dm
elif dmin == base[2]:
kmin, pm, dm = base
i, j = kmin
return l1p[i], pm, dm
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 0 | 2016-08-17T13:16:00Z | [
"python",
"shapely"
] |
IPython Notebook with remote ipyparallel Controller | 38,514,648 | <p>I'm currently trying to setup a remote cluster on a group of servers I own using the ipyparallel library. I figured that if I share the $IPYTHONDIR between all ipcontrollers, ipengines and notebook that everything would just connect and work, but this is not the case for my current setup.</p>
<p>What I'm attempting to accomplish is such that a ipcontroller and ipengines are sitting on my cluster waiting for a jupyter notebook to connect to the controller and use it for it's cluster computing resources.</p>
<p>Currently I cannot get my notebook to connect to my controller even though all ports are open, the servers are directly accessible, and the IPYTHONDIR is shared.</p>
<p>When I open my notebook and go to the clusters tab I see my parallel profile, but it's not started. Which is odd because the ipcontroller and ipengines are already started and waiting for a connection from the notebook.</p>
<p>This boils down to:</p>
<ul>
<li>Is it possible to run a notebook on a different server than the ipcontroller?</li>
<li>If the above is possible, why can I not get the notebook to connect to the cluster, and instead when I click start on the profile it simply makes a local cluster.</li>
</ul>
<p>Thanks! </p>
| 0 | 2016-07-21T21:26:40Z | 40,042,575 | <p>Yes this is possible if the notebook kernel is running on the same server as the ipcontroller. The notebook itself can be displayed from any browser. I use that functionality regularly.</p>
<p>The way I have done it is to have an ipython profile available on the server. In my case it's a Windows server and the profiles are set up under <code>c:\users\<user>\.ipython\</code>. In this case the profile folder is called <code>profile_my32bitcluster</code> and when I am creating the client, I specify the profile to use:</p>
<pre><code>from ipyparallel import Client
rc = Client(profile='my32bitcluster')
dview = rc[:]
# Test it by pushing out a dataframe across some engines, modifying it
# and returning the modified dataframes...
df = pd.DataFrame(data={'x':[1,2,3,4,5], 'y':[1,4,9,16,25]})
dview.push({'df':df})
def myfunc(x):
import sys
import os
import pandas as pd
global df
df['z'] = df['x'] * x
return df
results = dview.map_sync(myfunc, [2,3,4])
</code></pre>
<p>I hope that helps.</p>
| 0 | 2016-10-14T11:54:37Z | [
"python",
"ipython",
"ipython-parallel"
] |
How to use the output from OneHotEncoder in sklearn? | 38,514,682 | <p>I have a Pandas Dataframe with 2 categorical variables, and ID variable and a target variable (for classification). I managed to convert the categorical values with <code>OneHotEncoder</code>. This results in a sparse matrix. </p>
<pre><code>ohe = OneHotEncoder()
# First I remapped the string values in the categorical variables to integers as OneHotEncoder needs integers as input
... remapping code ...
ohe.fit(df[['col_a', 'col_b']])
ohe.transform(df[['col_a', 'col_b']])
</code></pre>
<p>But I have no clue how I can use this sparse matrix in a DecisionTreeClassifier? Especially when I want to add some other non-categorical variables in my dataframe later on. Thanks!</p>
<p><strong>EDIT</strong>
In reply to the comment of miraculixx: I also tried the DataFrameMapper in sklearn-pandas</p>
<pre><code>mapper = DataFrameMapper([
('id_col', None),
('target_col', None),
(['col_a'], OneHotEncoder()),
(['col_b'], OneHotEncoder())
])
t = mapper.fit_transform(df)
</code></pre>
<p>But then I get this error: </p>
<blockquote>
<p>TypeError: no supported conversion for types : (dtype('O'),
dtype('int64'), dtype('float64'), dtype('float64')).</p>
</blockquote>
| 2 | 2016-07-21T21:28:58Z | 38,519,416 | <p>I see you are already using Pandas, so why not using its <code>get_dummies</code> function?</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['rick','young'],['phil','old'],['john','teenager']],columns=['name','age-group'])
</code></pre>
<p>result</p>
<pre><code> name age-group
0 rick young
1 phil old
2 john teenager
</code></pre>
<p>now you encode with get_dummies</p>
<pre><code>pd.get_dummies(df)
</code></pre>
<p>result</p>
<pre><code>name_john name_phil name_rick age-group_old age-group_teenager \
0 0 0 1 0 0
1 0 1 0 1 0
2 1 0 0 0 1
age-group_young
0 1
1 0
2 0
</code></pre>
<p>And you can actually use the new Pandas DataFrame in your Sklearn's DecisionTreeClassifier.</p>
| 1 | 2016-07-22T06:16:37Z | [
"python",
"pandas",
"scikit-learn",
"classification",
"one-hot-encoding"
] |
How to use the output from OneHotEncoder in sklearn? | 38,514,682 | <p>I have a Pandas Dataframe with 2 categorical variables, and ID variable and a target variable (for classification). I managed to convert the categorical values with <code>OneHotEncoder</code>. This results in a sparse matrix. </p>
<pre><code>ohe = OneHotEncoder()
# First I remapped the string values in the categorical variables to integers as OneHotEncoder needs integers as input
... remapping code ...
ohe.fit(df[['col_a', 'col_b']])
ohe.transform(df[['col_a', 'col_b']])
</code></pre>
<p>But I have no clue how I can use this sparse matrix in a DecisionTreeClassifier? Especially when I want to add some other non-categorical variables in my dataframe later on. Thanks!</p>
<p><strong>EDIT</strong>
In reply to the comment of miraculixx: I also tried the DataFrameMapper in sklearn-pandas</p>
<pre><code>mapper = DataFrameMapper([
('id_col', None),
('target_col', None),
(['col_a'], OneHotEncoder()),
(['col_b'], OneHotEncoder())
])
t = mapper.fit_transform(df)
</code></pre>
<p>But then I get this error: </p>
<blockquote>
<p>TypeError: no supported conversion for types : (dtype('O'),
dtype('int64'), dtype('float64'), dtype('float64')).</p>
</blockquote>
| 2 | 2016-07-21T21:28:58Z | 38,519,889 | <p>Look at this example from scikit-learn:
<a href="http://scikit-learn.org/stable/auto_examples/ensemble/plot_feature_transformation.html#example-ensemble-plot-feature-transformation-py" rel="nofollow">http://scikit-learn.org/stable/auto_examples/ensemble/plot_feature_transformation.html#example-ensemble-plot-feature-transformation-py</a></p>
<p>The problem is that you are not passing the sparse matrices to <code>xx.fit()</code>; you are fitting on the original data.</p>
| 0 | 2016-07-22T06:46:55Z | [
"python",
"pandas",
"scikit-learn",
"classification",
"one-hot-encoding"
] |
isinstance() returns false when the fully-qualified object class differs from the qualified class | 38,514,730 | <p>When a 3rd party library method uses <code>isinstance()</code> to compare an object with a class, it returns <code>False</code> because it compares the fully qualified class name of the object with a qualified class name that starts "higher" up. </p>
<p>E.g.: <code>isinstance()</code> finds that the object class and classname differ:</p>
<p>Expected:</p>
<blockquote>
<p>'network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage'</p>
</blockquote>
<p>Found:</p>
<blockquote>
<p>'<strong>backend</strong>.network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage' </p>
</blockquote>
<p>and returns <code>False</code> given the code snippet:</p>
<pre><code>if not isinstance(value, self.type):
    raise ValidationError('Expected type %s for field %s, '
                          'found %s (type %s)' %
                          (self.type, name, value, type(value)))
</code></pre>
<p>Is there a way to change the fully qualified name of a class (at least temporarily)?</p>
| 0 | 2016-07-21T21:33:03Z | 38,514,785 | <p>For it to return True, the first argument to <code>isinstance</code> must be an <em>instance</em> of the second. Providing the same class for both arguments results in a False:</p>
<pre><code>>>> isinstance( int, int)
False
>>> isinstance( int(1), int)
True
</code></pre>
<p>Here, <code>int</code> is a class and a class is not an instance of a class.</p>
<p><code>int(1)</code>, by contrast, is an integer (an instance of <code>int</code>). Consequently, the second example returns True.</p>
<h3>Name qualification</h3>
<p>Let's compare <code>isinstance</code> when applied to qualified and unqualified names:</p>
<pre><code>>>> import numpy
>>> from numpy import bool
>>> isinstance( numpy.bool, bool)
False
>>> isinstance( bool, numpy.bool)
False
>>> isinstance( bool(1), numpy.bool)
True
>>> isinstance( numpy.bool(1), bool)
True
</code></pre>
<p>Name qualification does not affect the result. </p>
| 5 | 2016-07-21T21:36:58Z | [
"python",
"python-2.7"
] |
isinstance() returns false when the fully-qualified object class differs from the qualified class | 38,514,730 | <p>When a 3rd party library method uses <code>isinstance()</code> to compare an object with a class, it returns <code>False</code> because it compares the fully qualified class name of the object with a qualified class name that starts "higher" up. </p>
<p>E.g.: <code>isinstance()</code> finds that the object class and classname differ:</p>
<p>Expected:</p>
<blockquote>
<p>'network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage'</p>
</blockquote>
<p>Found:</p>
<blockquote>
<p>'<strong>backend</strong>.network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage' </p>
</blockquote>
<p>and returns <code>False</code> given the code snippet:</p>
<pre><code>if not isinstance(value, self.type):
    raise ValidationError('Expected type %s for field %s, '
                          'found %s (type %s)' %
                          (self.type, name, value, type(value)))
</code></pre>
<p>Is there a way to change the fully qualified name of a class (at least temporarily)?</p>
| 0 | 2016-07-21T21:33:03Z | 38,516,793 | <p>As far as Python is concerned, the classes <code>network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage</code> and <code>backend.network.mhistory.service.mhistory_messages.MHistoryActivityViewMessage</code> are not the same. That's true even if they have exactly the same definition because they were read from the same file!</p>
<p>Your bug isn't that <code>isinstance</code> is returning the "wrong" answer, it's that you're able to access those two classes (and possibly others as well) by two different names.</p>
<p>There are likely two different problems leading to the bug. First, you've probably got some code somewhere that is messing around with <code>sys.path</code>. That isn't inherently bad, but it's causing you problems by making the contents of your <code>backend</code> package available two different ways, first directly (e.g. <code>import network</code>) and via <code>backend</code> (<code>from backend import network</code>). You don't want this.</p>
<p>The second part of your bug (which may have been the motivating factor leading to the first part), is that you're actually <em>using</em> both ways of accessing those objects. You only need one, and should thus fix the parts that import the package the wrong way.</p>
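<p>The effect is easy to reproduce. In this sketch (the package and class names only mirror the question; the layout is built in a temp directory purely for illustration), one source file ends up importable under two names, and Python treats the resulting classes as unrelated:</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the situation: one source file reachable under two names.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "backend", "network")
os.makedirs(pkg)
open(os.path.join(root, "backend", "__init__.py"), "w").close()
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "messages.py"), "w") as f:
    f.write("class Message(object):\n    pass\n")

# Both the project root *and* the backend directory end up on sys.path
sys.path.insert(0, root)
sys.path.insert(0, os.path.join(root, "backend"))
importlib.invalidate_caches()

from backend.network import messages as qualified
from network import messages as unqualified

# Same file on disk, but two distinct module objects -> two distinct classes
print(qualified.Message is unqualified.Message)              # False
print(isinstance(unqualified.Message(), qualified.Message))  # False
```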
| 2 | 2016-07-22T01:22:58Z | [
"python",
"python-2.7"
] |
Is this a bad use of django class based views? | 38,514,784 | <p>Sometimes I have a hard time seeing if I am doing something the right way or not. Here is how I am using class based views in a project of mine. </p>
<pre><code>class View(View):
    def get(self, request):
        if request.GET.get('something'):
            ...do something
        elif request.GET.get('bar'):
            ...do something

    def post(self, request):
        if request.POST.get('foo'):
            ...do something
        elif request.POST.get('bar'):
            ...do something
</code></pre>
<p>Is this django-like?</p>
<p>I have a lot of these in one view, and I came on a situation where the post may be getting nothing in return so I was unsure about how to catch it. What should I do in this situation?</p>
| 1 | 2016-07-21T21:36:52Z | 38,514,835 | <p>When you use POST to send a form, you don't need to validate every single field in your view, you can do this in your form class. Check <a href="https://docs.djangoproject.com/es/1.9/ref/forms/validation/#form-and-field-validation" rel="nofollow">docs</a>.</p>
| 2 | 2016-07-21T21:41:05Z | [
"python",
"django"
] |
Is this a bad use of django class based views? | 38,514,784 | <p>Sometimes I have a hard time seeing if I am doing something the right way or not. Here is how I am using class based views in a project of mine. </p>
<pre><code>class View(View):
    def get(self, request):
        if request.GET.get('something'):
            ...do something
        elif request.GET.get('bar'):
            ...do something

    def post(self, request):
        if request.POST.get('foo'):
            ...do something
        elif request.POST.get('bar'):
            ...do something
</code></pre>
<p>Is this django-like?</p>
<p>I have a lot of these in one view, and I came upon a situation where the POST may receive nothing in return, so I was unsure how to catch it. What should I do in this situation?</p>
| 1 | 2016-07-21T21:36:52Z | 38,514,892 | <p>To handle POST data, you should rather use a <a href="https://docs.djangoproject.com/en/stable/ref/class-based-views/generic-editing/#formview" rel="nofollow"><code>FormView</code></a> or even a "model edit" view such as <a href="https://docs.djangoproject.com/en/stable/ref/class-based-views/generic-editing/#createview" rel="nofollow"><code>CreateView</code></a> or <a href="https://docs.djangoproject.com/en/stable/ref/class-based-views/generic-editing/#updateview" rel="nofollow"><code>UpdateView</code></a>. You can see <a class='doc-link' href="http://stackoverflow.com/documentation/django/1220/class-based-views/3998/form-and-object-creation#t=201607212142207374618">class-based form views examples here</a>.</p>
| 1 | 2016-07-21T21:46:22Z | [
"python",
"django"
] |
python Timedelta overflow | 38,514,856 | <p>I am trying to return a timedelta but when time_value is too large it overflows and gives an error. I can use a check to see if time_value is too large but I would prefer a wrapper that handles the error and returns a default. I have included the code for what I'm doing right now. Is there a version of timedelta or datetime that will do this for me?</p>
<pre><code>def time_format(time_value):
    try:
        if time_value is None:
            return 0
        elif time_value > 0:
            return (timedelta(seconds=-time_value))
    except OverflowError:
        return 0
</code></pre>
| 1 | 2016-07-21T21:43:22Z | 38,514,955 | <p>You could use <a href="https://docs.python.org/2/library/datetime.html#datetime.timedelta.min" rel="nofollow"><code>datetime.timedelta.min</code></a> and <a href="https://docs.python.org/2/library/datetime.html#datetime.timedelta.max" rel="nofollow"><code>datetime.timedelta.max</code></a>. Note that these two are not symmetric about 0.</p>
<p>Then your code becomes</p>
<pre><code>time_offset = 0
if timedelta.min.total_seconds() <= -time_value <= timedelta.max.total_seconds():
    time_offset = timedelta(seconds=-time_value)
</code></pre>
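<p>If you would rather catch the overflow than range-check, the same idea can be wrapped up as below. (<code>safe_timedelta</code> is a made-up helper name, not part of the standard library; the fallback default is an assumption of this sketch.)</p>

```python
from datetime import timedelta

def safe_timedelta(seconds, default=timedelta(0)):
    # Fall back to `default` when the value cannot fit in a timedelta
    try:
        return timedelta(seconds=seconds)
    except OverflowError:
        return default

print(safe_timedelta(90))      # 0:01:30
print(safe_timedelta(10**18))  # overflows, so: 0:00:00
```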
| 0 | 2016-07-21T21:51:17Z | [
"python",
"python-2.7",
"integer-overflow",
"timedelta"
] |
String to Integer error on file name | 38,514,874 | <p>I'm trying to add 59 to the 3 digits in a specific position in all the files in my folder, but it gives this error:</p>
<p><strong>ValueError: invalid literal for int() with base 10: ''</strong></p>
<p>I have checked with prints, and is indeed a 3 char string, containing only digits(by the looks)</p>
<p>Code :</p>
<pre><code>import os

def left(s, amount):
    return s[:amount]

def right(s, amount):
    return s[-amount:]

def mid(s, offset, amount):
    return s[offset:offset+amount]

for filename in os.listdir("V:\HD_RASTER\CTA2-GUA3"):
    s = mid(filename, 21, 3)
    print("Chars : " + len(s) + " String : " + s)
    s = int(s) + 59
    s = string(s)
    os.rename(filename,left(filename,21) + s + mid(filename,24,len(filename))
</code></pre>
<p>Folder screenshot of file names :</p>
<p><a href="http://i.stack.imgur.com/Dri9V.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Dri9V.jpg" alt="enter image description here"></a></p>
| 1 | 2016-07-21T21:44:46Z | 38,515,391 | <p>Your code is very fragile, and functions like <code>left</code>, <code>mid</code>, and <code>right</code> suggest you are more used to another language.</p>
<p>Among other things, this only works if your current directory contains the files, because <code>listdir</code> only returns the file name, not it's path. So <code>os.rename</code> will fail.</p>
<p>Try making it a little more flexible and bulletproof.</p>
<pre><code>import glob
import os

FPATH = r"V:\HD_RASTER\CTA2-GUA3"
FILE_PREFIX = 'TRANS_leilao-004-14_0'

FULL_PREFIX = os.path.join(FPATH, FILE_PREFIX)
PREFIX_LEN = len(FULL_PREFIX)

files = glob.glob(FULL_PREFIX + r"???.*")

for old_file in files:
    n = old_file[PREFIX_LEN:PREFIX_LEN + 3]
    try:
        new_n = int(n) + 59
    except ValueError:
        print "Failed to parse filename: " + old_file
        continue

    # keep the three-digit zero padding, e.g. 004 -> 063
    new_file = old_file[:PREFIX_LEN] + "%03d" % new_n + old_file[PREFIX_LEN + 3:]
    try:
        os.rename(old_file, new_file)
    except OSError:
        print "failed to rename " + old_file
</code></pre>
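<p>If the digit position ever shifts, a regex is sturdier than fixed offsets. This is a hypothetical helper (the name <code>bump_name</code> and the sample filename, including the <code>.tif</code> extension, are made up to match the screenshot's pattern); it targets the 3-digit run just before the extension and preserves zero-padding:</p>

```python
import re

def bump_name(name, delta=59):
    # Add `delta` to the 3-digit run sitting directly before the extension
    m = re.search(r"(\d{3})(?=\.[^.]+$)", name)
    if not m:
        return name  # no 3-digit group before the extension: leave untouched
    return name[:m.start()] + "%03d" % (int(m.group(1)) + delta) + name[m.end():]

print(bump_name("TRANS_leilao-004-14_0123.tif"))  # TRANS_leilao-004-14_0182.tif
```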
| 1 | 2016-07-21T22:29:31Z | [
"python",
"python-2.7",
"operating-system"
] |
convert txt file to xls in python and add new columns | 38,514,894 | <p>I have a text file like this content :</p>
<pre><code>28179
49172
40180
36228
29337
</code></pre>
<p>I want to convert text file to excel output <strong>".xls"</strong> that <strong>has two columns</strong> as follow :</p>
<pre><code>ID ---- Code
1 ---- 28179
2 ---- 49172
3 ---- 40180
4 ---- 36228
5 ---- 29337
</code></pre>
<p>How can i do this with python?</p>
<p>Thanks!</p>
| 0 | 2016-07-21T21:46:26Z | 38,515,266 | <p>First import the <a href="https://pypi.python.org/pypi/xlwt" rel="nofollow">xlwt library</a> and read the file and save it in an array:</p>
<pre><code>import xlwt

data = []
with open("data.txt") as f:
    for line in f:
        data.append(line.strip())  # strip the trailing newline
</code></pre>
<p>Then copy that array and write the new column to an Excel Spreadsheet:</p>
<pre><code>wb = xlwt.Workbook()
sheet = wb.add_sheet("New Sheet")
sheet.write(0, 0, "ID")
sheet.write(0, 1, "Code")
for row_index, value in enumerate(data):
    sheet.write(row_index + 1, 0, row_index + 1)  # ID column
    sheet.write(row_index + 1, 1, value)          # Code column
wb.save("newSheet.xls")
</code></pre>
<p>This is a modified version of the code provided in <a href="http://stackoverflow.com/questions/21316568/python-text-file-strings-into-columns-in-spreadsheet">this question</a></p>
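<p>If strict <code>.xls</code> output is not a hard requirement, the same two-column table can be produced with only the standard library as CSV, which Excel opens directly. A sketch (writing to an in-memory buffer here just to show the output; with a real file you would use <code>open("output.csv", "w", newline="")</code>):</p>

```python
import csv
import io

codes = ["28179", "49172", "40180", "36228", "29337"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["ID", "Code"])          # header row
for i, code in enumerate(codes, start=1):
    writer.writerow([i, code])           # sequential ID plus the code

print(buf.getvalue())
```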
| 0 | 2016-07-21T22:18:31Z | [
"python",
"excel",
"text",
"xls"
] |
convert txt file to xls in python and add new columns | 38,514,894 | <p>I have a text file like this content :</p>
<pre><code>28179
49172
40180
36228
29337
</code></pre>
<p>I want to convert text file to excel output <strong>".xls"</strong> that <strong>has two columns</strong> as follow :</p>
<pre><code>ID ---- Code
1 ---- 28179
2 ---- 49172
3 ---- 40180
4 ---- 36228
5 ---- 29337
</code></pre>
<p>How can i do this with python?</p>
<p>Thanks!</p>
| 0 | 2016-07-21T21:46:26Z | 39,353,126 | <p>suppose you have the data saved in '<strong>data.txt</strong>' and if you could install,</p>
<pre><code>pip install pyexcel-io
pip install pyexcel-xls
</code></pre>
<p>you can use the following code:</p>
<pre><code>from pyexcel_io import save_data

SYMBOL = '----'

def data_gen(text_file):
    yield ['ID', SYMBOL, 'Code']  # first row
    with open(text_file, "r") as input_file:
        for row_index, element in enumerate(input_file, 1):
            yield row_index, SYMBOL, element.strip()  # the rest

save_data("output.xls", {'data': data_gen("data.txt")})
</code></pre>
<p>If your data file exceeds 65,536 rows, you need to install</p>
<pre><code>pip install pyexcel-xlsx
</code></pre>
<p>and update the last line of code as :</p>
<pre><code>...
save_data("output.xlsx", {'data': data_gen("data.txt")})
</code></pre>
| 0 | 2016-09-06T15:51:25Z | [
"python",
"excel",
"text",
"xls"
] |
what is the order of the dictionary in python | 38,514,915 | <p>I have a question about the order of the dictionary in Python. I am using Python 2.7.</p>
<pre><code>array = {"dad": "nse", "cat": "error", "bob": "das", "nurse": "hello"}
for key in array:
    print key
</code></pre>
<p>Why does the result show</p>
<pre><code>dad
bob
nurse
cat
</code></pre>
<p>NOT</p>
<pre><code>dad
cat
bob
nurse
</code></pre>
| -4 | 2016-07-21T21:47:58Z | 38,514,943 | <p>In the standard Python implementation, CPython, dictionaries have no guaranteed order, <a href="https://docs.python.org/2/library/stdtypes.html#dict.items" rel="nofollow">per the docs</a>.</p>
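<p>If you need iteration to follow insertion order, <code>collections.OrderedDict</code> (available in 2.7) guarantees it. A sketch reusing the question's data:</p>

```python
from collections import OrderedDict

# Same data, but built with OrderedDict so iteration follows insertion order
array = OrderedDict([("dad", "nse"), ("cat", "error"),
                     ("bob", "das"), ("nurse", "hello")])
for key in array:
    print(key)  # dad, cat, bob, nurse
```

<p>(In CPython 3.7+ plain <code>dict</code> also preserves insertion order, but code targeting 2.7 cannot rely on that.)</p>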
| 1 | 2016-07-21T21:49:55Z | [
"python",
"python-2.7",
"dictionary"
] |