title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to concatenate two numpy ndarrays without using concatenate | 38,705,094 | <p>I am writing code which uses Numba to JIT-compile my Python code.
The function takes two arrays of the same length as input, randomly selects a slicing point, and returns a tuple with two Frankenstein arrays formed from parts of the two input arrays.
Numba, however, does not yet support the numpy.concatenate function (I don't know if it ever will). As I am unwilling to drop NumPy, does anyone know a performant solution for concatenating two NumPy arrays without the concatenate function?</p>
<pre><code>def randomSlice(str1, str2):
    lenstr = len(str1)
    rnd = np.random.randint(1, lenstr)
    return (np.concatenate((str1[:rnd], str2[rnd:])), np.concatenate((str2[:rnd], str1[rnd:])))
</code></pre>
| 2 | 2016-08-01T17:48:06Z | 38,705,440 | <p>This might work for you:</p>
<pre><code>import numpy as np
import numba as nb
@nb.jit(nopython=True)
def randomSlice_nb(str1, str2):
    lenstr = len(str1)
    rnd = np.random.randint(1, lenstr)
    out1 = np.empty_like(str1)
    out2 = np.empty_like(str1)
    out1[:rnd] = str1[:rnd]
    out1[rnd:] = str2[rnd:]
    out2[:rnd] = str2[:rnd]
    out2[rnd:] = str1[rnd:]
    return (out1, out2)
</code></pre>
<p>On my machine, using Numba 0.27 and timing via the <code>timeit</code> module to make sure I'm not counting the JIT time in the stats (or you could run it once and then time subsequent calls), the Numba version gives a small but non-negligible performance increase on various sizes of input arrays of ints or floats. If the arrays have a dtype of something like <code>|S1</code>, then Numba is significantly slower. The Numba team has spent very little time optimizing non-numeric use cases, so this isn't terribly surprising. I'm a little unclear about the exact form of your input arrays <code>str1</code> and <code>str2</code>, so I can't guarantee that the code will work for your specific use case.</p>
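<p>For reference, the preallocation trick above produces exactly the same result as <code>np.concatenate</code>. A quick pure-NumPy sanity check (the explicit <code>rnd</code> parameter is my addition so the comparison is deterministic; it is not part of the original function):</p>

```python
import numpy as np

def random_slice_prealloc(str1, str2, rnd):
    # Same crossover as the jitted version, but with the slice point
    # passed in explicitly so the output can be checked deterministically.
    out1 = np.empty_like(str1)
    out2 = np.empty_like(str1)
    out1[:rnd] = str1[:rnd]
    out1[rnd:] = str2[rnd:]
    out2[:rnd] = str2[:rnd]
    out2[rnd:] = str1[rnd:]
    return out1, out2

a = np.arange(10)
b = np.arange(10, 20)
o1, o2 = random_slice_prealloc(a, b, 4)
# Matches the np.concatenate-based original for the same slice point.
assert np.array_equal(o1, np.concatenate((a[:4], b[4:])))
assert np.array_equal(o2, np.concatenate((b[:4], a[4:])))
```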
| 1 | 2016-08-01T18:11:23Z | [
"python",
"numpy",
"numba"
] |
using MultiSelect widget to hide and show lines in bokeh | 38,705,123 | <p>I'm working with four sets of data, each of which has several time series. I'm using Bokeh to plot all of them together; the result looks like this:</p>
<p><a href="http://i.stack.imgur.com/u0Azi.png" rel="nofollow">multiline graph bokeh with widget </a></p>
<pre><code>from bokeh.plotting import figure, output_file, show
from bokeh.palettes import RdYlGn4
from bokeh.models import CustomJS, ColumnDataSource, MultiSelect
from bokeh.layouts import row, widgetbox
output_file("graph.html")
p = figure(plot_width=1000, plot_height=400, x_axis_type="datetime", title="title")
cadena=range(4)
for i, comp in enumerate(cadena):
    ts = [t for t in data_plu_price.columns if int(t) in df.T[df.C==comp].values]
    n_lines = len(data[ts].columns)
    p.multi_line(xs=[data[ts].index.values]*n_lines, ys=[data[t].values for t in ts], line_color=RdYlGn4[i], legend=str(i))
p.title.align = "center"
p.title.text_font_size = "20px"
p.xaxis.axis_label = 'date'
p.yaxis.axis_label = 'price'
callback = CustomJS("""Some Code""")
multi_select = MultiSelect(title="Select:", value=cadena,
                           options=[(str(i), str(i)) for i in range(4)])
layout = row(p,widgetbox(multi_select))
show(layout)
</code></pre>
<p>The problem is that it looks really messy, so I wanted to use the MultiSelect widget to show/hide all four groups of multi-lines. What kind of code do I need to use in the creation of the <code>multi_line</code> and in the callback object to make this interaction work?</p>
<p>Any guidance?</p>
<p>Thanks in advance.</p>
| 1 | 2016-08-01T17:50:17Z | 38,706,613 | <p>Support for doing exactly that (using a MultiSelect widget to hide/show lines) was just added in version 0.12.1 in this PR: <a href="https://github.com/bokeh/bokeh/pull/4868" rel="nofollow">https://github.com/bokeh/bokeh/pull/4868</a></p>
<p>There's an example here (copied below): <a href="https://github.com/bokeh/bokeh/blob/master/examples/plotting/file/line_on_off.py" rel="nofollow">https://github.com/bokeh/bokeh/blob/master/examples/plotting/file/line_on_off.py</a></p>
<pre><code>""" Example demonstrating turning lines on and off - with JS only
"""
import numpy as np
from bokeh.io import output_file, show
from bokeh.layouts import row
from bokeh.palettes import Viridis3
from bokeh.plotting import figure
from bokeh.models import CheckboxGroup, CustomJS
output_file("line_on_off.html", title="line_on_off.py example")
code = """
if (0 in checkbox.active) {
    l0.visible = true
} else {
    l0.visible = false
}
if (1 in checkbox.active) {
    l1.visible = true
} else {
    l1.visible = false
}
if (2 in checkbox.active) {
    l2.visible = true
} else {
    l2.visible = false
}
"""
p = figure()
props = dict(line_width=4, line_alpha=0.7)
x = np.linspace(0, 4 * np.pi, 100)
l0 = p.line(x, np.sin(x), color=Viridis3[0], legend="Line 0", **props)
l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend="Line 1", **props)
l2 = p.line(x, np.tan(x), color=Viridis3[2], legend="Line 2", **props)
callback = CustomJS(code=code, args={})
checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"], active=[0, 1, 2], callback=callback, width=100)
callback.args = dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox)
layout = row(checkbox, p)
show(layout)
</code></pre>
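<p>Adapting this example to the question's <code>MultiSelect</code> widget mostly means generating the same kind of visibility-toggling JavaScript for each of the four line groups. A sketch of building that <code>CustomJS</code> code string in Python (the renderer names <code>r0</code>..<code>r3</code> and the <code>multi_select</code> variable name are assumptions for illustration — they must match the keys you pass in <code>args</code>):</p>

```python
def build_toggle_js(n_groups, widget_name="multi_select"):
    # MultiSelect.value is a list of selected option strings, so each
    # renderer r<i> is shown only when str(i) is among the selections.
    lines = []
    for i in range(n_groups):
        lines.append("r%d.visible = %s.value.indexOf('%d') >= 0;"
                     % (i, widget_name, i))
    return "\n".join(lines)

code = build_toggle_js(4)
# Then: CustomJS(code=code, args=dict(r0=r0, r1=r1, r2=r2, r3=r3,
#                                     multi_select=multi_select))
```

<p>Note that each <code>multi_line</code> call returns a single renderer, so keeping the four renderers produced by the loop (one per group) is enough to toggle whole groups at once.</p>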
| 0 | 2016-08-01T19:24:25Z | [
"python",
"python-2.7",
"widget",
"bokeh"
] |
Django using custom SQL instead of models to return JSON object to template | 38,705,147 | <p>I currently am able to retrieve JSON data/object from my models.py and MySQL database and send that JSON object to my template. How would I be able to use custom SQL to retrieve data from MySQL then make it into a JSON object to send to my template. I basically don't want to use models.py at all. Here is what I have in my views.py when I am using models.py:</p>
<pre><code>def startpage(request):
    platforms = Platform.objects.select_related().values('platformtype')
    return render(request, 'HTML1.html', {'platforms_as_json': json.dumps(list(platforms))})
</code></pre>
<p>This is what I have so far:</p>
<pre><code>def my_custom_sql(self):
    cursor = connection.cursor()
    cursor.execute("SELECT platformtype FROM Platform", [self.Platform])
    row = cursor.fetchone()
    return row
</code></pre>
<p>How would I be able to do the same thing except without using models.py and using custom SQL queries within my views? Thank you</p>
<p>UPDATE:</p>
<pre><code>def startpage(request):
    platforms = my_custom_sql()
    return render(request, 'Html1.html', {'platforms_as_json': json.dumps(list(platforms))})

def my_custom_sql():
    cursor = connection.cursor()
    cursor.execute("SELECT HWPlatformName FROM hwplatform", None)
    rows = cursor.fetchall()
    return rows
</code></pre>
<p>Now I am able to get the data onto my template, but I don't believe it is in the correct JSON format.</p>
| 1 | 2016-08-01T17:51:21Z | 38,705,372 | <p>If you want instances of the Model, you're looking for the <a href="https://docs.djangoproject.com/en/1.9/topics/db/sql/" rel="nofollow"><code>raw</code></a> method on the objects property.</p>
<pre><code>platforms = Platform.objects.raw('SELECT * FROM Platform')
</code></pre>
<p>If you're just looking for content from the server, then you can just return the value from the SQL query:</p>
<pre><code>platforms = my_custom_sql() # Or call my_custom_sql statically.
</code></pre>
<p>If you're looking for delayed population, you can turn <code>my_custom_sql</code> into a generator by using <code>yield</code> to emit rows instead of returning them all at once.</p>
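<p>Regarding the "not correct JSON format" update in the question: <code>cursor.fetchall()</code> returns tuples, so <code>json.dumps(list(rows))</code> yields nested lists rather than JSON objects. The usual fix is to map each row to a dict keyed by column name via <code>cursor.description</code>. A sketch, demonstrated here against an in-memory <code>sqlite3</code> connection purely for illustration — in a view you would use Django's <code>connection.cursor()</code> instead:</p>

```python
import json
import sqlite3

def dictfetchall(cursor):
    # Turn each row into {column_name: value} using cursor.description.
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE platform (platformtype TEXT)")
cur.executemany("INSERT INTO platform VALUES (?)", [("web",), ("mobile",)])
cur.execute("SELECT platformtype FROM platform")
print(json.dumps(dictfetchall(cur)))
# → [{"platformtype": "web"}, {"platformtype": "mobile"}]
```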
| 1 | 2016-08-01T18:06:46Z | [
"python",
"mysql",
"json",
"django"
] |
redis not working in my python django app | 38,705,235 | <p>I first followed the tutorial on the heroku site. I did this</p>
<pre><code>pip install rq
</code></pre>
<p>then in a worker.py file</p>
<pre><code>import os
import redis
from rq import Worker, Queue, Connection
listen = ['high', 'default', 'low']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)
if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
</code></pre>
<p>and then</p>
<pre><code>python worker.py
</code></pre>
<p>and I got the following error</p>
<pre><code> Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 439, in connect
sock = self._connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 494, in _connect
raise err
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 482, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 572, in execute_command
connection.send_command(*args)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 442, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 439, in connect
sock = self._connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 494, in _connect
raise err
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 482, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "worker.py", line 15, in <module>
worker.work()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/rq/worker.py", line 423, in work
self.register_birth()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/rq/worker.py", line 242, in register_birth
if self.connection.exists(self.key) and \
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 855, in exists
return self.execute_command('EXISTS', name)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 578, in execute_command
connection.send_command(*args)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 442, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused.
</code></pre>
<p>I then went to Google and found the package index example, which I also followed:</p>
<pre><code>>>> import redis
>>> r = redis.StrictRedis(host='localhost', port=6379, db=0)
>>> r.set('foo', 'bar')
</code></pre>
<p>hit enter and got the following message</p>
<pre><code> Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 439, in connect
sock = self._connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 494, in _connect
raise err
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 482, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 572, in execute_command
connection.send_command(*args)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 442, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 439, in connect
sock = self._connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 494, in _connect
raise err
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 482, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 1072, in set
return self.execute_command('SET', *pieces)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/client.py", line 578, in execute_command
connection.send_command(*args)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/redis/connection.py", line 442, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused.
</code></pre>
<p>I have done no more and no less than what these tutorials ask. How can I make this work?</p>
| 0 | 2016-08-01T17:57:29Z | 38,710,498 | <p>You need to run the redis server. Type redis-server on you console to start the server(Mac OSX).</p>
<pre><code>$redis-server
</code></pre>
<p>Remember that the worker needs a broker (Redis) in order to communicate with your app.</p>
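<p>Before starting the worker, it can help to confirm that something is actually listening on the Redis port. A small stdlib-only check (no redis client required):</p>

```python
import socket

def port_open(host, port, timeout=1.0):
    # True if a TCP connection to host:port succeeds, False otherwise.
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

if not port_open("localhost", 6379):
    print("redis-server does not appear to be running on localhost:6379")
```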
| 0 | 2016-08-02T01:33:31Z | [
"python",
"django",
"heroku",
"redis"
] |
How to give sns.clustermap a precomputed distance matrix? | 38,705,359 | <p>Usually when I do dendrograms and heatmaps, I use a distance matrix and do a bunch of <code>SciPy</code> stuff. I want to try out <code>Seaborn</code> but <code>Seaborn</code> wants my data in rectangular form (rows=samples, cols=attributes, not a distance matrix)? </p>
<p>I essentially want to use <code>seaborn</code> as the backend to compute my dendrogram and tack it on to my heatmap. Is this possible? If not, can this be a feature in the future?</p>
<p><strong>Maybe there are parameters I can adjust so it can take a distance matrix instead of a rectangular matrix?</strong></p>
<p>Here's the usage:</p>
<pre><code>seaborn.clustermap(data, pivot_kws=None, method='average', metric='euclidean',
                   z_score=None, standard_scale=None, figsize=None, cbar_kws=None, row_cluster=True,
                   col_cluster=True, row_linkage=None, col_linkage=None, row_colors=None,
                   col_colors=None, mask=None, **kwargs)
</code></pre>
<p>My code below:</p>
<pre><code>from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
DF = pd.DataFrame(X, index = ["iris_%d" % (i) for i in range(X.shape[0])], columns = iris.feature_names)
</code></pre>
<p><a href="http://i.stack.imgur.com/U1Jpe.png"><img src="http://i.stack.imgur.com/U1Jpe.png" alt="enter image description here"></a></p>
<p>I don't think my method below is correct, because I'm giving it a precomputed distance matrix and NOT the rectangular data matrix it asks for. There are no examples of how to use a correlation/distance matrix with <code>clustermap</code>, but there is one at <a href="https://stanford.edu/~mwaskom/software/seaborn/examples/network_correlations.html">https://stanford.edu/~mwaskom/software/seaborn/examples/network_correlations.html</a>; however, its ordering is not clustered with the plain <code>sns.heatmap</code> function.</p>
<pre><code>DF_corr = DF.T.corr()
DF_dism = 1 - DF_corr
sns.clustermap(DF_dism)
</code></pre>
<p><a href="http://i.stack.imgur.com/xHlZR.png"><img src="http://i.stack.imgur.com/xHlZR.png" alt="enter image description here"></a></p>
| 8 | 2016-08-01T18:05:36Z | 38,858,404 | <p>You can pass the precomputed distance matrix as linkage to <code>clustermap()</code>:</p>
<pre><code>import pandas as pd, seaborn as sns
import scipy.spatial as sp, scipy.cluster.hierarchy as hc
from sklearn.datasets import load_iris
sns.set(font="monospace")
iris = load_iris()
X, y = iris.data, iris.target
DF = pd.DataFrame(X, index = ["iris_%d" % (i) for i in range(X.shape[0])], columns = iris.feature_names)
DF_corr = DF.T.corr()
DF_dism = 1 - DF_corr # distance matrix
linkage = hc.linkage(sp.distance.squareform(DF_dism), method='average')
sns.clustermap(DF_dism, row_linkage=linkage, col_linkage=linkage)
</code></pre>
<p>For <code>clustermap(distance_matrix)</code> (i.e., without linkage passed), the linkage is calculated internally based on pairwise distances of the rows and columns in the distance matrix (see note below for full details) instead of using the elements of the distance matrix directly (the correct solution). As a result, the output is somewhat different from the one in the question:
<a href="http://i.stack.imgur.com/g3Qqo.png" rel="nofollow"><img src="http://i.stack.imgur.com/g3Qqo.png" alt="clustermap"></a></p>
<p>Note: if no <code>row_linkage</code> is passed to <code>clustermap()</code>, the row linkage is determined internally by considering each row a "point" (observation) and calculating the pairwise distances between the points, so the row dendrogram reflects row similarity. Analogously for <code>col_linkage</code>, where each column is considered a point. This explanation should likely be added to the <a href="https://web.stanford.edu/~mwaskom/software/seaborn/generated/seaborn.clustermap.html" rel="nofollow">docs</a>. Here is the docs' first example, modified to make the internal linkage calculation explicit:</p>
<pre><code>import seaborn as sns; sns.set()
import scipy.spatial as sp, scipy.cluster.hierarchy as hc
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
row_linkage, col_linkage = (hc.linkage(sp.distance.pdist(x), method='average')
                            for x in (flights.values, flights.values.T))
g = sns.clustermap(flights, row_linkage=row_linkage, col_linkage=col_linkage)
# note: this produces the same plot as "sns.clustermap(flights)", where
# clustermap() calculates the row and column linkages internally
</code></pre>
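<p>The one subtle step in the snippets above is <code>sp.distance.squareform()</code>, which converts the square distance matrix into the condensed (strict upper triangle) vector that <code>hc.linkage()</code> expects. A minimal NumPy illustration of what that conversion does:</p>

```python
import numpy as np

# A tiny symmetric distance matrix with a zero diagonal.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])

# Condensed form: the strict upper triangle, read row by row --
# this is what scipy.spatial.distance.squareform(D) returns and
# the format hc.linkage() consumes directly.
iu = np.triu_indices_from(D, k=1)
condensed = D[iu]
assert np.array_equal(condensed, np.array([1.0, 2.0, 3.0]))
```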
| 1 | 2016-08-09T18:55:33Z | [
"python",
"matplotlib",
"heatmap",
"seaborn",
"hierarchical-clustering"
] |
Django sorting by date(day) | 38,705,451 | <p>I want to sort models by day first and then by score, meaning I'd like to see the highest-scoring Articles in each day. </p>
<pre><code>class Article(models.Model):
    date_modified = models.DateTimeField(blank=True, null=True)
    score = models.DecimalField(max_digits=5, decimal_places=3, blank=True, null=True)
</code></pre>
<p>This answer <a href="http://stackoverflow.com/questions/23599642/django-datetime-field-query-order-by-time-hour">Django Datetime Field Query - Order by Time/Hour</a> suggests that I use <code>'__day'</code> with my <code>date_modified</code> as in:</p>
<pre><code>Article.objects.filter().order_by('-date_modified__day', '-score')
FieldError: Cannot resolve keyword 'day' into field. Join on 'date_modified' not permitted.
</code></pre>
<p>However I get the same error as in the post, so I'm not even sure it should work this way.</p>
<p>I found other answers <a href="http://stackoverflow.com/questions/3652975/django-order-by-date-in-datetime-extract-date-from-datetime">django order by date in datetime / extract date from datetime</a> using <code>.extra</code>: </p>
<pre><code>Article.objects.filter().extra(select={"day_mod": "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>This works for filtering with no other conditions, but if I apply a condition on the filter such as a category:</p>
<pre><code>Article.objects.filter(category = 'Art').extra(select={'day_mod': "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>I get this error:</p>
<pre><code>File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/core/formatters.py", line 699, in __call__
printer.pretty(obj)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 383, in pretty
return _default_pprint(obj, self, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 503, in _default_pprint
_repr_pprint(obj, p, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 694, in _repr_pprint
output = repr(obj)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 234, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/utils.py", line 83, in execute
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/sqlite3/operations.py", line 146, in last_executed_query
return sql % params
TypeError: %d format: a number is required, not unicode
</code></pre>
<p>Don't really know what's going on here, help would be appreciated.</p>
| 1 | 2016-08-01T18:11:55Z | 38,705,818 | <p>I guess you should use the standar date ordering, without extra method. Format processing is a template responsability.</p>
<pre><code>Article.objects.filter(category='Art').order_by('-date_modified', '-score')
</code></pre>
<p>Then in you template, you can show the date in the format you want. I leave you an example, see the api docs for <a href="https://docs.djangoproject.com/es/1.9/ref/templates/builtins/#date" rel="nofollow">more options</a>.</p>
<pre><code>{{ date_modified|date:"M/d"|lower }}
</code></pre>
<p>Another example (maybe more suitable for your needs):</p>
<pre><code>{{ date_modified|date:"D d M Y" }} {{ date_modified|time:"H:i" }}
</code></pre>
| 1 | 2016-08-01T18:36:39Z | [
"python",
"django",
"sorting"
] |
Django sorting by date(day) | 38,705,451 | <p>I want to sort models by day first and then by score, meaning I'd like to see the highest-scoring Articles in each day. </p>
<pre><code>class Article(models.Model):
    date_modified = models.DateTimeField(blank=True, null=True)
    score = models.DecimalField(max_digits=5, decimal_places=3, blank=True, null=True)
</code></pre>
<p>This answer <a href="http://stackoverflow.com/questions/23599642/django-datetime-field-query-order-by-time-hour">Django Datetime Field Query - Order by Time/Hour</a> suggests that I use <code>'__day'</code> with my <code>date_modified</code> as in:</p>
<pre><code>Article.objects.filter().order_by('-date_modified__day', '-score')
FieldError: Cannot resolve keyword 'day' into field. Join on 'date_modified' not permitted.
</code></pre>
<p>However I get the same error as in the post, so I'm not even sure it should work this way.</p>
<p>I found other answers <a href="http://stackoverflow.com/questions/3652975/django-order-by-date-in-datetime-extract-date-from-datetime">django order by date in datetime / extract date from datetime</a> using <code>.extra</code>: </p>
<pre><code>Article.objects.filter().extra(select={"day_mod": "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>This works for filtering with no other conditions, but if I apply a condition on the filter such as a category:</p>
<pre><code>Article.objects.filter(category = 'Art').extra(select={'day_mod': "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>I get this error:</p>
<pre><code>File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/core/formatters.py", line 699, in __call__
printer.pretty(obj)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 383, in pretty
return _default_pprint(obj, self, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 503, in _default_pprint
_repr_pprint(obj, p, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 694, in _repr_pprint
output = repr(obj)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 234, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/utils.py", line 83, in execute
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/sqlite3/operations.py", line 146, in last_executed_query
return sql % params
TypeError: %d format: a number is required, not unicode
</code></pre>
<p>Don't really know what's going on here, help would be appreciated.</p>
| 1 | 2016-08-01T18:11:55Z | 38,707,048 | <p>If you're using Django >= 1.8 you can use a <a href="https://docs.djangoproject.com/en/1.9/ref/models/expressions/#func-expressions" rel="nofollow"><code>Func</code></a> expression. The problem you're experiencing is that the <code>%</code> notation is passed directly to the database adapter, which is trying to replace the <code>%</code> with the relevant parameters.</p>
<p>You can use it like this:</p>
<pre><code>from django.db.models import F, Func, Value

Article.objects.annotate(
    day_mod=Func(Value('%d'), F('date_modified'),
                 function='strftime')
).order_by('-day_mod', '-score')
</code></pre>
<p>This should (theoretically, I haven't tested it) end up with a SQL query like this:</p>
<pre><code>SELECT
    ...
    strftime('%d', "article"."date_modified") AS "day_mod"
FROM "article"
...
ORDER BY "day_mod" DESC, "score" DESC
</code></pre>
<p>However, I suspect you'll need to add the year and the month to the <code>strftime</code>, otherwise you'll end up with the articles at the beginning of a month being buried by older articles that happened at the end of previous months.</p>
<p>It should also be noted that the <code>strftime</code> function is not supported by MySQL or PostgreSQL. As far as I can tell it's only supported by SQLite, which shouldn't be used in production.</p>
<p>Unfortunately there doesn't seem to be a standard for datetime formatting in SQL. MySQL seems to use <code>DATE_FORMAT(date, format)</code> and PostgreSQL uses <code>to_char(date, format)</code>.</p>
<p>If you want to support both SQLite and another DB, you can <a href="https://docs.djangoproject.com/en/1.9/ref/models/expressions/#writing-your-own-query-expressions" rel="nofollow">write a custom expression</a> with the relevant <code>as_sqlite</code> and <code>as_<db></code> methods that will format your datetimes, but it might be a bit difficult, as not all the DBs use the same formatting strings.</p>
<p>Your best bet is probably to just cast the datetime to date, i.e. <code>CAST (date_modified AS DATE)</code>. This ought to work in most flavours of SQL. Simplest way I can come up with is:</p>
<pre><code>from django.db.models import DateField, F, Func

class CastToDate(Func):
    # Func supplies the template/expressions plumbing.
    template = 'CAST( %(expressions)s AS DATE )'

    def __init__(self, expression, output_field=None, **extra):
        super(CastToDate, self).__init__(
            expression, output_field=output_field or DateField(), **extra)

Article.objects.annotate(
    day_mod=CastToDate(F('date_modified'))).order_by('-day_mod', '-score')
</code></pre>
| 1 | 2016-08-01T19:52:45Z | [
"python",
"django",
"sorting"
] |
Django sorting by date(day) | 38,705,451 | <p>I want to sort models by day first and then by score, meaning I'd like to see the highest-scoring Articles in each day. </p>
<pre><code>class Article(models.Model):
    date_modified = models.DateTimeField(blank=True, null=True)
    score = models.DecimalField(max_digits=5, decimal_places=3, blank=True, null=True)
</code></pre>
<p>This answer <a href="http://stackoverflow.com/questions/23599642/django-datetime-field-query-order-by-time-hour">Django Datetime Field Query - Order by Time/Hour</a> suggests that I use <code>'__day'</code> with my <code>date_modified</code> as in:</p>
<pre><code>Article.objects.filter().order_by('-date_modified__day', '-score')
FieldError: Cannot resolve keyword 'day' into field. Join on 'date_modified' not permitted.
</code></pre>
<p>However I get the same error as in the post, so I'm not even sure it should work this way.</p>
<p>I found other answers <a href="http://stackoverflow.com/questions/3652975/django-order-by-date-in-datetime-extract-date-from-datetime">django order by date in datetime / extract date from datetime</a> using <code>.extra</code>: </p>
<pre><code>Article.objects.filter().extra(select={"day_mod": "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>This works for filtering with no other conditions, but if I apply a condition on the filter such as a category:</p>
<pre><code>Article.objects.filter(category = 'Art').extra(select={'day_mod': "strftime('%d', date_modified)"}, order_by=['-day_mod', '-score'])
</code></pre>
<p>I get this error:</p>
<pre><code>File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/core/formatters.py", line 699, in __call__
printer.pretty(obj)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 383, in pretty
return _default_pprint(obj, self, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 503, in _default_pprint
_repr_pprint(obj, p, cycle)
File "/home/mykolas/anaconda2/lib/python2.7/site-packages/IPython/lib/pretty.py", line 694, in _repr_pprint
output = repr(obj)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 234, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/utils.py", line 83, in execute
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
File "/home/mykolas/lenv/lib/python2.7/site-packages/django/db/backends/sqlite3/operations.py", line 146, in last_executed_query
return sql % params
TypeError: %d format: a number is required, not unicode
</code></pre>
<p>Don't really know what's going on here, help would be appreciated.</p>
| 1 | 2016-08-01T18:11:55Z | 38,709,728 | <pre><code>from django.db.models import DateTimeField
from django.db.models.functions import Trunc
Article.objects.order_by(
Trunc('date_modified', 'date', output_field=DateTimeField()).desc(),
'-score')
</code></pre>
<ul>
<li><a href="https://docs.djangoproject.com/en/1.10/ref/models/database-functions/#trunc" rel="nofollow"><code>Trunc()</code></a> (Django 1.10)</li>
<li><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#order-by" rel="nofollow"><code>order_by()</code></a></li>
</ul>
| 1 | 2016-08-01T23:49:11Z | [
"python",
"django",
"sorting"
] |
how to update to a specific version of pyserial 2.6 | 38,705,458 | <p>I want to update to pyserial 2.6. I normally install using pip install pyserial; is there a pip command to install a specific version?</p>
| 0 | 2016-08-01T18:12:31Z | 38,707,186 | <pre><code>pip install pyserial==2.6
</code></pre>
<p>or you can also use >= and <=</p>
<p>Also, try <code>pip help install</code> to get more help on pip installation options</p>
| 0 | 2016-08-01T20:02:22Z | [
"python",
"pip",
"pyserial"
] |
Regex to match only part of certain line | 38,705,502 | <p>I have some config file from which I need to extract only some values. For example, I have this:</p>
<pre><code>PART
{
title = Some Title
description = Some description here. // this 2 params are needed
tags = qwe rty // don't need this param
...
}
</code></pre>
<p>I need to extract the value of a certain param, for example <code>description</code>'s value. How do I do this in Python3 with regex?</p>
| -1 | 2016-08-01T18:16:18Z | 38,705,605 | <p>This is a pretty simple regex: you just need a positive lookbehind, and optionally something to remove the comments (do this by appending <code>?(//)?</code> to the regex).</p>
<pre><code>r"(?<=description = ).*"
</code></pre>
<p><a href="https://regex101.com/r/bL4dQ9/1" rel="nofollow">Regex101 demo</a></p>
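<p>For instance, a minimal sketch of how that lookbehind could be applied with Python's <code>re</code> module (the sample line comes from the question; the comment-stripping second pass is an addition, not part of the original answer):</p>

```python
import re

line = "    description = Some description here. // this 2 params are needed"

# The lookbehind keeps only what follows "description = "; a second pass
# trims the trailing "//" comment, which the lookbehind alone still captures.
match = re.search(r"(?<=description = ).*", line)
value = re.sub(r"\s*//.*$", "", match.group(0))
print(value)  # Some description here.
```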
| -1 | 2016-08-01T18:23:01Z | [
"python",
"regex",
"python-3.x"
] |
Regex to match only part of certain line | 38,705,502 | <p>I have some config file from which I need to extract only some values. For example, I have this:</p>
<pre><code>PART
{
title = Some Title
description = Some description here. // this 2 params are needed
tags = qwe rty // don't need this param
...
}
</code></pre>
<p>I need to extract the value of a certain param, for example <code>description</code>'s value. How do I do this in Python3 with regex?</p>
| -1 | 2016-08-01T18:16:18Z | 38,705,715 | <p>The better approach would be to use an established configuration file system. Python has built-in support for INI-like files in the <a href="https://docs.python.org/3/library/configparser.html" rel="nofollow"><code>configparser</code></a> module.</p>
<p>However, if you just <em>desperately</em> need to get the string of text in that file after the <code>description</code>, you could do this:</p>
<pre><code>def get_value_for_key(key, file):
with open(file) as f:
lines = f.readlines()
for line in lines:
line = line.lstrip()
if line.startswith(key + " ="):
return line.split("=", 1)[1].lstrip()
</code></pre>
<p>You can use it with a call like: <code>get_value_for_key("description", "myfile.txt")</code>. The method will return <code>None</code> if nothing is found. It is assumed that your file will be formatted where there is a space and the equals sign after the key name, e.g. <code>key = value</code>.</p>
<p>This avoids regular expressions altogether and preserves any whitespace on the right side of the value. (If that's not important to you, you can use <code>strip</code> instead of <code>lstrip</code>.)</p>
<p>Why avoid regular expressions? They're expensive and really not ideal for this scenario. Use simple string matching. This avoids importing a module and simplifies your code. But really I'd say to convert to a supported configuration file format.</p>
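<p>A quick way to sanity-check the helper above (a sketch; it writes the question's sample to a temporary file first, and restates the function so it runs standalone):</p>

```python
import os
import tempfile

def get_value_for_key(key, file):
    # Same helper as above, restated so this sketch runs on its own.
    with open(file) as f:
        lines = f.readlines()
    for line in lines:
        line = line.lstrip()
        if line.startswith(key + " ="):
            return line.split("=", 1)[1].lstrip()

sample = "PART\n{\n    title = Some Title\n    description = Some description here.\n}\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(sample)
    path = f.name

value = get_value_for_key("description", path)
print(value)  # Some description here. (the line's trailing newline is kept)
os.remove(path)
```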
| 0 | 2016-08-01T18:30:50Z | [
"python",
"regex",
"python-3.x"
] |
Regex to match only part of certain line | 38,705,502 | <p>I have some config file from which I need to extract only some values. For example, I have this:</p>
<pre><code>PART
{
title = Some Title
description = Some description here. // this 2 params are needed
tags = qwe rty // don't need this param
...
}
</code></pre>
<p>I need to extract the value of a certain param, for example <code>description</code>'s value. How do I do this in Python3 with regex?</p>
| -1 | 2016-08-01T18:16:18Z | 38,705,796 | <p>Here is the regex, assuming that the file text is in <code>txt</code>:</p>
<pre><code>import re
m = re.search(r'^\s*description\s*=\s*(.*?)(?=(//)|$)', txt, re.M)
print(m.group(1))
</code></pre>
<p>Let me explain.
<code>^</code> matches at beginning of line.
Then <code>\s*</code> means zero or more spaces (or tabs)
<code>description</code> is your anchor for finding the value part.
After that we expect <code>=</code> sign with optional spaces before or after by denoting <code>\s*=\s*</code>.
Then we capture everything after the <code>=</code> and optional spaces, by denoting <code>(.*?)</code>. This expression is captured by parenthesis. Inside the parenthesis we say match anything (the dot) as many times as you can find (the asterisk) in a non greedy manner (the question mark), that is, stop as soon as the following expression is matched.</p>
<p>The following expression is a lookahead expression, starting with <code>(?=</code> which matches the thing right after the <code>(?=</code>.
And that thing is actually two options, separated by the vertical bar <code>|</code>.</p>
<p>The first option, to the left of the bar says <code>//</code> (in parenthesis to make it atomic unit for the vertical bar choice operation), that is, the start of the comment, which, I suppose, you don't want to capture.
The second option is <code>$</code>, meaning the end of the line, which will be reached if there is no comment <code>//</code> on the line.
So we look for everything we can after the first <code>=</code> sign, until either we meet a <code>//</code> pattern, or we meet the end of the line. This is the essence of the <code>(?=(//)|$)</code> part.</p>
<p>We also need the <code>re.M</code> flag, to tell the regex engine that we want <code>^</code> and <code>$</code> match the start and end of lines, respectively. Without the flag they match the start and end of the entire string, which isn't what we want in this case.</p>
| 1 | 2016-08-01T18:35:44Z | [
"python",
"regex",
"python-3.x"
] |
Beginning Python: Find greatest number from function that gives input list of positive integers | 38,705,555 | <ol>
<li><p>New to Python, I am having trouble getting the function to return the greatest number; for some reason it returns a smaller one instead.
The quiz I am using gives this code as the final solution. I think it is wrong; any help appreciated. </p>
<pre><code># Define a procedure, greatest,
# that takes as input a list
# of positive numbers, and
# returns the greatest number
# in that list. If the input
# list is empty, the output
# should be 0.
def greatest(list_of_numbers):
big = 0
for i in list_of_numbers:
if i > big:
big = i
return big
print greatest([4,23,1])
#>>> 23 I can't get 23 It returns 4 for some reason.
print greatest([])
#>>> 0
</code></pre>
<p>For some reason it gives me 4 instead of 23 as the greatest. </p></li>
</ol>
| 0 | 2016-08-01T18:19:54Z | 38,705,576 | <p>You are returning on the first iteration. Move your return out one level:</p>
<pre><code>def greatest(list_of_numbers):
big = 0
for i in list_of_numbers:
if i > big:
big = i
return big
</code></pre>
<p>However this is entirely unnecessary as Python has this built in:</p>
<pre><code>def greatest(list_of_numbers):
return max(list_of_numbers)
</code></pre>
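<p>One caveat worth adding (an editorial note, not part of the original answer): <code>max()</code> raises a <code>ValueError</code> on an empty list, while the exercise asks for 0 in that case. A small guard covers it:</p>

```python
def greatest(list_of_numbers):
    # max() raises ValueError on an empty sequence; the exercise wants 0 instead.
    # (Python 3.4+ also allows max(list_of_numbers, default=0).)
    return max(list_of_numbers) if list_of_numbers else 0

print(greatest([4, 23, 1]))  # 23
print(greatest([]))          # 0
```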
| 3 | 2016-08-01T18:21:33Z | [
"python"
] |
Split a unicode string into components containing numbers and letters | 38,705,622 | <p>I'd like to split the string <code>u'123K</code> into <code>123</code> and <code>K</code>. I've tried <code>re.match("u'123K", "\d+")</code> to match the number and <code>re.match("u'123K", "K")</code> to match the letter but they don't work. What is a Pythonic way to do this?</p>
| 0 | 2016-08-01T18:24:40Z | 38,705,649 | <p>Use <code>re.findall()</code> to find all numbers and characters:</p>
<pre><code>>>> s = u'123K'
>>> re.findall(r'\d+|[a-zA-Z]+', s) # or use r'\d+|\D+' as mentioned in comment in order to match all numbers and non-numbers.
['123', 'K']
</code></pre>
<p>If you are just dealing with this string, or if you only want to split the string at the last character, you can simply use indexing:</p>
<pre><code>num, character = s[:-1], s[-1:]
</code></pre>
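<p>If you'd rather get both pieces in a single pass, a capturing-group variant works too (a sketch that assumes one run of digits followed by one run of letters):</p>

```python
import re

s = u"123K"
m = re.match(r"(\d+)([A-Za-z]+)\Z", s)  # digits group, then letters group
number, suffix = m.groups()
print(number, suffix)  # 123 K
```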
| 2 | 2016-08-01T18:26:50Z | [
"python",
"regex"
] |
Split a unicode string into components containing numbers and letters | 38,705,622 | <p>I'd like to split the string <code>u'123K</code> into <code>123</code> and <code>K</code>. I've tried <code>re.match("u'123K", "\d+")</code> to match the number and <code>re.match("u'123K", "K")</code> to match the letter but they don't work. What is a Pythonic way to do this?</p>
| 0 | 2016-08-01T18:24:40Z | 38,705,819 | <p>You can also use <a href="https://docs.python.org/3.4/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a> method, grouping digits:</p>
<pre><code>>>> import itertools as it
>>> for _,v in it.groupby(s, key=str.isdigit):
print(''.join(v))
123
K
</code></pre>
| 0 | 2016-08-01T18:36:40Z | [
"python",
"regex"
] |
In advanced collections module in python 2.7. What is the difference between Counter(dict(c.items)) and dict(c) | 38,705,698 | <pre><code>//This is the code
sen = 'How many times does each word show up in the sentence word word shows up up shows'
words = sen.split()
c = Counter(words)
dict(c)
Counter(dict(c.items()))
</code></pre>
<p>//Output</p>
<pre><code>//output of dict(c)
{'How': 1,
'does': 1,
'each': 1,
'in': 1,
'many': 1,
'sentence': 1,
'show': 1,
'shows': 2,
'the': 1,
'times': 1,
'up': 3,
'word': 3}
//output of Counter(dict(c.items()))
Counter({'How': 1,
'does': 1,
'each': 1,
'in': 1,
'many': 1,
'sentence': 1,
'show': 1,
'shows': 2,
'the': 1,
'times': 1,
'up': 3,
'word': 3})
</code></pre>
| -5 | 2016-08-01T18:29:33Z | 38,705,782 | <p>Read the docs about <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow">Counters</a>. They're generally used for tallying only, but they provide other operations that aren't available with a vanilla <code>dict</code>. </p>
| 0 | 2016-08-01T18:35:14Z | [
"python"
] |
In python, how do I scan a text file with one long row and separate the items into different columns? | 38,705,701 | <p>I have a text file that looks like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9 "Distance 2: Distance XY" 3 6 8 10 5 "Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>It is all on one big line like this. My question is how do I take this and separate the distance measurements so that the lines look something more like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9
"Distance 2: Distance XY" 3 6 8 10 5
"Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>I want to do this to make a dictionary for each distance measurement.</p>
| 3 | 2016-08-01T18:29:53Z | 38,705,784 | <p>You can use <code>re.split</code> to split the string with regular expressions:</p>
<pre><code>import re
s = '\"Distance 1: Distance XY\" 1 2 4 5 9 \"Distance 2: Distance XY\" 3 6 8 10 5 \"Distance 3: Distance XY\" 88 45 36 12 4'
re.split(r'(?<=\d)\s+(?=\")', s)
# ['"Distance 1: Distance XY" 1 2 4 5 9',
# '"Distance 2: Distance XY" 3 6 8 10 5',
# '"Distance 3: Distance XY" 88 45 36 12 4']
</code></pre>
<p><code>(?<=\d)\s+(?=\")</code> constrains the delimiter to be the space between a digit and a quote.</p>
<p>If the quotes in the text file are smart quotes, replace <code>\"</code> in the regex with the smart quote character (<em>option + [</em> on a Mac; <a href="http://practicaltypography.com/straight-and-curly-quotes.html" rel="nofollow">check here for Windows</a>):</p>
<pre><code>with open("test.txt", 'r') as f:
for line in f:
        print(re.split(r'(?<=\d)\s+(?=")', line.rstrip("\n")))
# ['"Distance 1: Distance XY" 1 2 4 5 9', '"Distance 2: Distance XY" 3 6 8 10 5', '"Distance 3: Distance XY" 88 45 36 12 4']
</code></pre>
<p>Or use the unicode escape for the left smart quotation mark, <code>\u201C</code>:</p>
<pre><code>with open("test.csv", 'r') as f:
for line in f:
print(re.split(r'(?<=\d)\s+(?=\u201C)', line.rstrip("\n")))
# ['"Distance 1: Distance XY" 1 2 4 5 9', '"Distance 2: Distance XY" 3 6 8 10 5', '"Distance 3: Distance XY" 88 45 36 12 4']
</code></pre>
| 5 | 2016-08-01T18:35:21Z | [
"python"
] |
In python, how do I scan a text file with one long row and separate the items into different columns? | 38,705,701 | <p>I have a text file that looks like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9 "Distance 2: Distance XY" 3 6 8 10 5 "Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>It is all on one big line like this. My question is how do I take this and separate the distance measurements so that the lines look something more like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9
"Distance 2: Distance XY" 3 6 8 10 5
"Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>I want to do this to make a dictionary for each distance measurement.</p>
| 3 | 2016-08-01T18:29:53Z | 38,705,928 | <p>A perhaps less elegant solution than Psidom's, assuming the lines have the same format every time:</p>
<pre><code>with open("input.txt", 'r') as file:
line = file.read()
line = line.split()
count = 0
output = open("output.txt", 'w')
for i in line:
output.write(i)
output.write(" ")
count+=1
if count == 9:
output.write("\n")
count = 0
output.close()
</code></pre>
| 1 | 2016-08-01T18:42:47Z | [
"python"
] |
In python, how do I scan a text file with one long row and separate the items into different columns? | 38,705,701 | <p>I have a text file that looks like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9 "Distance 2: Distance XY" 3 6 8 10 5 "Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>It is all on one big line like this. My question is how do I take this and separate the distance measurements so that the lines look something more like this:</p>
<pre><code>"Distance 1: Distance XY" 1 2 4 5 9
"Distance 2: Distance XY" 3 6 8 10 5
"Distance 3: Distance XY" 88 45 36 12 4
</code></pre>
<p>I want to do this to make a dictionary for each distance measurement.</p>
| 3 | 2016-08-01T18:29:53Z | 38,706,140 | <p>An attempt to improve on Andrew's fine answer.</p>
<pre><code>with open("input.txt", 'r') as file:
output = open("output.txt", 'w')
for line in file:
line = line.split()
relevant_line = line[0:9]
relevant_line_as_string = " ".join(relevant_line)
output.write(relevant_line_as_string + '\n')
output.close()
</code></pre>
<p>You don't need to close the file if you're using 'with' :)</p>
<pre><code>~ ❯❯❯ touch input
~ ❯❯❯ vim input
~ ❯❯❯ touch script.py
~ ❯❯❯ vim script.py # script.py has my answer copy pasted there
~ ❯❯❯ touch output
~ ❯❯❯ python script.py
~ ❯❯❯ cat output
"Distance 1: Distance XY" 1 2 4 5 9
# it works!
</code></pre>
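<p>Both answers count tokens by hand; the same grouping can be sketched with list slicing, assuming each record is always 9 whitespace-separated tokens (4 from the quoted header plus 5 numbers, as in the question's sample):</p>

```python
def rows_of(tokens, n=9):
    # Slice the flat token list into fixed-width records of n tokens each.
    return [" ".join(tokens[i:i + n]) for i in range(0, len(tokens), n)]

line = '"D 1: D XY" 1 2 4 5 9 "D 2: D XY" 3 6 8 10 5'  # shortened sample
rows = rows_of(line.split())
print(rows)  # ['"D 1: D XY" 1 2 4 5 9', '"D 2: D XY" 3 6 8 10 5']
```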
| 1 | 2016-08-01T18:56:06Z | [
"python"
] |
Scrape data from a page that requires a login | 38,705,748 | <p>I am new to Python and Web Scraping and I am trying to write a very basic script that will get data from a webpage that can only be accessed after logging in. I have looked at a bunch of different examples but none are fixing the issue. This is what I have so far: </p>
<pre><code>from bs4 import BeautifulSoup
import urllib, urllib2, cookielib
username = 'name'
password = 'pass'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
opener.open('WebpageWithLoginForm')
resp = opener.open('WebpageIWantToAccess')
soup = BeautifulSoup(resp, 'html.parser')
print soup.prettify()
</code></pre>
<p>As of right now when I print the page it just prints the contents of the page as if I was not logged in. I think the issue has something to do with the way I am setting the cookies but I am really not sure because I do not fully understand what is happening with the cookie processor and its libraries.
Thank you!</p>
<p>Current Code:</p>
<pre><code>import requests
import sys
EMAIL = 'usr'
PASSWORD = 'pass'
URL = 'https://connect.lehigh.edu/app/login'
def main():
# Start a session so we can have persistant cookies
session = requests.session(config={'verbose': sys.stderr})
# This is the form data that the page sends when logging in
login_data = {
'username': EMAIL,
'password': PASSWORD,
'LOGIN': 'login',
}
# Authenticate
r = session.post(URL, data=login_data)
# Try accessing a page that requires you to be logged in
r = session.get('https://lewisweb.cc.lehigh.edu/PROD/bwskfshd.P_CrseSchdDetl')
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-08-01T18:32:36Z | 38,705,775 | <p>You can use the <code>requests</code> module.</p>
<p>Take a look at this answer that I've linked below.</p>
<p><a href="http://stackoverflow.com/a/8316989/6464893">http://stackoverflow.com/a/8316989/6464893</a></p>
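<p>For reference, a standard-library-only sketch of the same idea (Python 3 names; the URLs and form field names are placeholders, not taken from any real site). The step the question's original code misses is actually POSTing the form data:</p>

```python
import http.cookiejar
import urllib.parse
import urllib.request

cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

# Passing encoded bytes as the data argument turns the request into a POST;
# the original code built login_data but never sent it.
login_data = urllib.parse.urlencode({"username": "name", "password": "pass"}).encode()

# opener.open("https://example.com/login", login_data)   # logs in, stores cookies in cj
# resp = opener.open("https://example.com/protected")    # reuses those cookies
```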
| 0 | 2016-08-01T18:34:47Z | [
"python",
"cookies",
"login",
"web-scraping",
"beautifulsoup"
] |
setting qml property from Python? | 38,705,765 | <p>I'm trying to use Python, PySide and a sample QML file to make a simple ui.
How can I set the "value" property available in the QML control from my Python app? As of now, the "SpeedDial" shows up, but I can't figure out how to change its value.</p>
<p><strong>Python file:</strong></p>
<pre><code>import sys
from PySide.QtCore import *
from PySide.QtGui import *
from PySide.QtDeclarative import QDeclarativeView
# Create Qt application and the QDeclarative view
app = QApplication(sys.argv)
view = QDeclarativeView()
# Create an URL to the QML file
url = QUrl('SpeedDial.qml')
# Set the QML file and show
view.setSource(url)
view.show()
# Enter Qt main loop
sys.exit(app.exec_())
</code></pre>
<p><strong>The qml file:</strong></p>
<pre><code>import QtQuick 1.0
Item {
id: root1
property real value : 0
width: 300; height: 300
Image { id: speed_inactive; x: -9; y: 8; opacity: 0.8; z: 3; source: "pics/speed_inactive.png"
}
Image {
id: needle
x: 136; y: 86
clip: true
opacity: root1.opacity
z: 3
smooth: true
source: "pics/needle.png"
transform: Rotation {
id: needleRotation
origin.x: 5; origin.y: 65
angle: Math.min(Math.max(-130, root1.value*2.6 - 130), 133)
Behavior on angle {
SpringAnimation {
spring: 1.4
damping: .15
}
}
}
}
}
</code></pre>
| 1 | 2016-08-01T18:34:07Z | 38,714,550 | <p>You're not supposed to control QML properties directly from Python or C++.</p>
<p>Define a Python class that will represent a car with a property <code>speed</code>. Instantiate it in QML like that:</p>
<pre><code>Car {
id: car
}
</code></pre>
<p>and then use its speed:</p>
<pre><code>angle: car.speed
</code></pre>
| 0 | 2016-08-02T07:43:30Z | [
"python",
"qt",
"qml",
"pyside"
] |
How to approach this multithreading task, problems with logical structure | 38,705,778 | <p>I have two ICs which convert sensor signals connected to a Raspberry Pi:</p>
<ul>
<li>IC1 sends his data every 2 seconds as a serial telegram.</li>
<li>IC2 is an ADC that sends its data after receiving a certain control signal.</li>
</ul>
<p>I have written separate codes that let me extract the measurement values out of the serial telegram as well as request and receive the ADC values and they are both working fine on their own. However, they are blocking my main program and I want to have the sensor values at the same time. Additionally, while waiting between serial telegrams I'd like to constantly sum the ADC's output and when the telegram is received, an average value for the analog data should be calculated.</p>
<p>At the moment, my average value calculation is performed like this:</p>
<pre><code>class ADC():
# [...]
def startAvg(self):
self._recording = 1
interrupt = threading.Thread(target=self.stopAvg())
recording = threading.Thread(target=self._record())
recording.start()
interrupt.start()
return self._avgs
def _record(self):
sum1 = 0
sum2 = 0
counts = 0
while self.recording:
sum1 += ADC[0] # ADC channel 0
sum2 += ADC[1] # ADC channel 1
counts += 1
self._avgs = [ sum1 / counts, sum2 / counts ]
def stopAvg(self):
time.sleep(1)
self._recording = 0
</code></pre>
<p>I created a dummy function that just sleeps (stopAvg) to simulate the arrival of a serial telegram. Later, the averaging should be stopped from outside the class, in my main program. However, even now, _record() is only called <em>after</em> stopAvg() is finished. Where's my mistake? I have read some tutorials about threading, but I don't see how to apply it to my problem, especially the averaging. I know I could simply get my tasks done one after the other, but I want to get the average value so that peaks in my signal are taken care of. Are threads the right solution after all?</p>
<p>I think I just need some good advice on how to structure my threads, since I have never worked with threads before.</p>
| 0 | 2016-08-01T18:34:59Z | 38,709,083 | <p>I found my mistake:</p>
<pre><code>interrupt = threading.Thread(target=self.stopAvg())
recording = threading.Thread(target=self._record())
</code></pre>
<p>The parentheses in the target argument don't belong there: <code>target=self.stopAvg()</code> calls the method immediately and passes its return value to the thread, instead of passing the method itself!</p>
<pre><code>interrupt = threading.Thread(target=self.stopAvg)
recording = threading.Thread(target=self._record)
</code></pre>
<p>...is how it should be.</p>
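<p>A tiny, runnable illustration of the difference (an added example, not from the original post):</p>

```python
import threading

results = []

def record():
    results.append("recorded")

# target=record() would CALL record right here on the main thread and pass its
# return value (None) to Thread; target=record passes the function itself.
t = threading.Thread(target=record)
t.start()
t.join()
print(results)  # ['recorded']
```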
| 0 | 2016-08-01T22:29:30Z | [
"python",
"multithreading",
"python-2.7"
] |
how to use find all method from BS4 to scrape certain strings | 38,705,805 | <pre><code><li class="sre" data-tn-component="asdf-search-result" id="85e08291696a3726" itemscope="" itemtype="http://schema.org/puppies">
<div class="sre-entry">
<div class="sre-side-bar">
</div>
<div class="sre-content">
<div class="clickable_asdf_card" onclick="window.open('/r/85e08291696a3726?sp=0', '_blank')" style="cursor: pointer;" target="_blank">
</code></pre>
<p>I need to grab the string '/r/85e08291696a3726?sp=0' which occurs throughout a page. I'm not sure how to use the soup.find_all method to do this. The strings that I need always occur next to '
<p>This is what I was thinking (below) but obviously I am getting the parameters wrong. How would I format the find_all method to return the '/r/85e08291696a3726?sp=0' strings throughout the page?</p>
<pre><code>for divsec in soup.find_all('div', class_='clickable_asdf_card'):
print('got links')
x=x+1
</code></pre>
<p>I read the documentation for bs4 and I was thinking about using find_all('clickable_asdf_card') to find all occurrences of the string I need but then what? Is there a way to adjust the parameters to return the string I need?</p>
| 3 | 2016-08-01T18:36:16Z | 38,705,856 | <p>Use <code>BeautifulSoup</code>'s <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression" rel="nofollow">built-in regular expression search</a> to find and extract the desired substring from an <code>onclick</code> attribute value:</p>
<pre><code>import re
pattern = re.compile(r"window\.open\('(.*?)', '_blank'\)")
for item in soup.find_all(onclick=pattern):
print(pattern.search(item["onclick"]).group(1))
</code></pre>
<p>If there is just a single element you want to find, use <code>find()</code> instead of <code>find_all()</code>.</p>
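<p>If BeautifulSoup isn't available, the same substring can be pulled out of the raw markup with the <code>re</code> module alone (a rough sketch that trades robustness for fewer dependencies; the markup is the snippet from the question):</p>

```python
import re

html = """<div class="clickable_asdf_card" onclick="window.open('/r/85e08291696a3726?sp=0', '_blank')" style="cursor: pointer;" target="_blank">"""

# Capture everything between window.open(' and ', '_blank').
links = re.findall(r"window\.open\('([^']+)',\s*'_blank'\)", html)
print(links)  # ['/r/85e08291696a3726?sp=0']
```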
| 2 | 2016-08-01T18:38:42Z | [
"python",
"python-3.x",
"web-scraping",
"beautifulsoup",
"bs4"
] |
Merge Sorted Array in leetcode: The in-place modification of list doesn't work | 38,705,954 | <p>The link to the question: <a href="https://leetcode.com/problems/merge-sorted-array/" rel="nofollow">merge sorted array</a></p>
<p>I don't know why my solution doesn't modify the list nums1 when exiting the function <strong>merge</strong>. Here is the code:</p>
<pre><code>def merge(self, nums1, m, nums2, n):
"""
:type nums1: List[int]
:type m: int
:type nums2: List[int]
:type n: int
:rtype: void Do not return anything, modify nums1 in-place instead.
"""
i = 0
j = 0
while i < m and j < n:
if nums1[i] < nums2[j]:
i += 1
else:
nums1 = nums1[:i-1] + [nums2[j]] + nums1[i-1:]
i += 1
j += 1
if i == m:
nums1 = nums1 + nums2
</code></pre>
| 1 | 2016-08-01T18:44:33Z | 38,706,080 | <p>Lists are mutable, so you can create the behavior that you are looking for. What you need to do is to assign new values to specific indices in nums1. When you use slicing you are in fact creating new lists. Use list functions like <code>[].insert()</code>, <code>[].pop()</code>, <code>[].extend()</code> to achieve the functionality you are looking for.</p>
| 0 | 2016-08-01T18:52:47Z | [
"python",
"arrays"
] |
fastest format to load saved graph structure into python-igraph | 38,706,050 | <p>I have a very large network structure which I am working with in igraph. There are many different file formats which igraph Graph objects can write to and then be loaded from. I ran into memory problems when using g.write_picklez, and Graph.Read_Lgl() takes about 5 minutes to finish. I was wondering if anyone had already profiled the numerous file format choices for write and load speed as well as memory footprint. FYI this network has ~5.7m nodes and ~130m edges.</p>
| 0 | 2016-08-01T18:50:55Z | 38,720,955 | <p>If you don't have vertex or edge attributes, your best bet is a simple edge list, i.e. <code>Graph.Read_Edgelist()</code>. The disadvantage is that it assumes that vertex IDs are in the range [0; |V|-1], so you'll need to have an additional file next to it where line <em>i</em> contains the name of the vertex with ID=<em>i</em>.</p>
| 1 | 2016-08-02T12:51:30Z | [
"python",
"profiling",
"igraph"
] |
Creating a Python list or dictionary with a set number of rows, but variable number of columns per row | 38,706,095 | <p>I'm working with images and want to create a list of lists of arrays. For example, a list with 5 rows, where each row has a variable number of images (3x200x100) stored in them ranging from 2 images to 10 images.</p>
<p>I've ruled out numpy and concatenation since they need the matrix to be uniform. I've tried appending one list to another, but that just gives me a long row of them when what I want is another row to be added after the prior row.</p>
<p>I was thinking that either a list or a dictionary would be the way to go since what I'm using to populate my list of lists is a dictionary, but I'm unable to figure out how to structure the lists so that it's correct to my above specifications instead of just a super long list. Is there any way for me to initialize a list so that only the row number gets specified and I can dynamically change the column number per row? If not, is there a different data structure I should be looking at?</p>
| 0 | 2016-08-01T18:54:01Z | 38,706,158 | <p>How about something like this?</p>
<pre><code>row1 = [1,2,3,4,5,6]
row2 = [1,2,3]
matrix = [row1, row2]
row3 = [1,2,4,5,6,7,8,9]
matrix.append(row3)
</code></pre>
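<p>The rows can also be seeded up front and grown independently, which matches the "fixed number of rows, variable columns" requirement (strings stand in for the image arrays here):</p>

```python
# Each "img_*" string is a stand-in for one 3x200x100 image array.
matrix = [[] for _ in range(5)]       # 5 rows, 0 columns each
matrix[0].extend(["img_a", "img_b"])  # row 0 gets 2 images
matrix[3].append("img_c")             # row 3 gets 1 image
print([len(row) for row in matrix])   # [2, 0, 0, 1, 0]
```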
| 0 | 2016-08-01T18:57:01Z | [
"python",
"list",
"numpy"
] |
python imap not calling function for all items in list | 38,706,132 | <p>I am trying to use python3.5 to parallelize CodeML by calling separate instances on different threads. I've gotten everything to work up to a point. If I provide <code>Pool.imap</code> (or <code>Pool.map</code>) with an iterable that contains more variables than cores the program has available, it will only run one variable through each core and then exit. Is there anything I'm doing wrong here?</p>
<pre><code># Call CodeML for all files in a directory.
genes = glob(path + "06_phylipFiles/" + "*.phylip")
l = int(len(genes))
pool = Pool(processes = cpu)
func = partial(runCodeml, ap, usertree, path, completed, ctl, forward)
print("\n\tRunning CodeML with", str(cpu), "threads....\n")
rcml = pool.imap(func, genes, chunksize = int(l/cpu))
pool.close()
pool.join()
</code></pre>
<p>Basically, I need <code>Pool.imap</code> to run through the whole list before exiting. Thank you in advance for any help.</p>
| -1 | 2016-08-01T18:55:45Z | 38,839,518 | <p>I figured out the problem was with another program I was calling. I set that to run in a different step, so it runs on one thread before Pool.imap is called.</p>
| 0 | 2016-08-08T22:36:59Z | [
"python",
"python-multiprocessing"
] |
Pandas Dataframe - Lookup Error | 38,706,196 | <p>I am attempting to lookup a row in a pandas (version 0.14.1) dataframe using a date and stock ticker combination and am receiving a strange error.</p>
<p>My pandas dataframe looks like this:</p>
<pre><code> AAPL IBM GOOG XOM Date
2011-01-10 16:00:00 340.99 143.41 614.21 72.02 2011-01-10
2011-01-11 16:00:00 340.18 143.06 616.01 72.56 2011-01-11
2011-01-12 16:00:00 342.95 144.82 616.87 73.41 2011-01-12
2011-01-13 16:00:00 344.20 144.55 616.69 73.54 2011-01-13
2011-01-14 16:00:00 346.99 145.70 624.18 74.62 2011-01-14
2011-01-18 16:00:00 339.19 146.33 639.63 75.45 2011-01-18
2011-01-19 16:00:00 337.39 151.22 631.75 75.00 2011-01-19
</code></pre>
<p>When I attempt to do a lookup using a date/string combination I receive the following error:</p>
<pre><code>>>> df_data.lookup(date,ticker)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2820, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-2-31ab981e2184>", line 1, in <module>
df_data.lookup(date,ticker)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 2207, in lookup
n = len(row_labels)
TypeError: object of type 'datetime.datetime' has no len()
</code></pre>
<p>From what I can see in the pandas documentation, this should work and my date variable is a regular date time</p>
<pre><code>>>> date
Out[5]: datetime.datetime(2011, 1, 10, 16, 0)
</code></pre>
<p>Am I doing something obviously incorrect?</p>
| 1 | 2016-08-01T18:58:59Z | 38,706,311 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.lookup.html" rel="nofollow"><code>df.lookup</code></a> expects 2 array-likes (instead of scalars) as arguments:</p>
<pre><code>In [25]: df.lookup(row_labels=[DT.datetime(2011,1,10,16,0)], col_labels=['AAPL'])
Out[25]: array([ 340.99])
</code></pre>
<hr>
<p>If you only want to look up one value, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.get_value.html" rel="nofollow"><code>df.get_value</code></a> instead:</p>
<pre><code>In [30]: df.get_value(DT.datetime(2011,1,10,16,0), 'AAPL')
Out[30]: 340.99000000000001
</code></pre>
| 4 | 2016-08-01T19:05:27Z | [
"python",
"pandas"
] |
How can I inline every n objects within a loop in html with flask using jinja | 38,706,259 | <p>Let us say I have 30 numbers 1-30. If I loop through the list in flask and print out the numbers (<p>{{ num }}</p>), It will print out like so:</p>
<p>1</p>
<p>2</p>
<p>3</p>
<p>4</p>
<p>...</p>
<p>What I want to do is to have it print out like this:</p>
<p>1 2 3</p>
<p>4 5 6</p>
<p>7 8 9</p>
<p>so that 3 elements are on the same line and then it moves to the next line. Is there a way to do this using jinja and inline blocks?</p>
| 0 | 2016-08-01T19:02:33Z | 38,706,377 | <p>Maybe something like this?</p>
<pre><code>{% if num % 3 == 0 %}
{{ num }} <br>
{% else %}
{{ num }}
{% endif %}
</code></pre>
<p>Assuming you're using <code>jinja2</code> for templates</p>
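<p>Jinja2 also ships a built-in <code>batch</code> filter that groups a sequence into fixed-size rows, which avoids the modulo bookkeeping entirely. A minimal sketch, assuming the list is passed to the template as <code>nums</code> (a made-up name):</p>

```jinja
{% for row in nums|batch(3) %}
<p>{% for num in row %}{{ num }} {% endfor %}</p>
{% endfor %}
```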
| 1 | 2016-08-01T19:09:52Z | [
"python",
"flask"
] |
pandas.HDFStore: How do I modify "data_columns" for an existing store? I'd like to add an index to a column not in data columns | 38,706,359 | <p>I have created a large (120GB; 1 billion rows) HDF5 file using pandas. After an initial creation of the hdf file, I added to the file like so:</p>
<pre><code>with pd.get_store(path_output) as hdf_output:
for i in range(BIG_LOOP):
df = ...
hdf_output.append('all', df, data_columns=[])
</code></pre>
<p>I purposefully set data_columns=[] in order to avoid indexing during creation time. Now that I have the HDF file I'd like to add indexes to several columns (say, columns_to_index=['A', 'B', 'C'])</p>
<p>Somehow, according to ptdump I do have <code>data_columns:=['A']</code> at the moment, but I don't recall how that happened. (Perhaps the initial df was written with a different parameter; I was successively adding to the hdfstore for several days and I may have changed something.) In any case, though, regardless of how this was created, I'd like to index additional columns.</p>
<p>Simply calling <code>mystore.create_table_index('all', columns=['A', 'B', 'C'], optlevel=9, kind='full')</code> doesn't work, apparently. The first time I ran it it churned for an hour and added 2 GB to the filesize (inspecting metadata shows that the chunksize was increased), but I don't have all 3 indexes (just an index for 'A'). <strong>How can I generate the index for all 3 columns?</strong></p>
<p>I also noticed this line in the ptdump -- it seems disturbing that I have "non_index_axes" for the items I'd like to index: <code>non_index_axes := [(1, ['A', 'B', 'C'])]</code></p>
<p>If it isn't possible to create the index in pandas, I'd appreciate advice on how to do this directly in pytables. (e.g., do I need to first delete any existing indices? and how do I modify the "non_index_axes" and the "data_coumns")</p>
<p><strong>Edit</strong>: Anticipating questions about my use case, here's the big picture of what I'm trying to accomplish:</p>
<ol>
<li><p>Read in 120 GB of data from CSV files. Each file represents one day of financial data and consists of 100,000s of rows, with around a dozen columns per row. I'm just storing every row, sequentially, in the HDF5 file. I'd like this initial phase to run quickly, hence me turning off indexing. Currently I read and parse each CSV file in 6 seconds, and storing into the HDF5 file as above takes just 1.5 seconds.</p></li>
<li><p>Index a handful (not all) of the columns to support a variety of queries, such as getting all items with a given string in column 1 and with a date from column 2 in a certain range.</p></li>
<li><p>As time passes, each day I will parse a new CSV file and add it to the HDF5 file. I expect the indices to continue to be updated. </p></li>
<li><p>(Depending on my access patterns, the order I store rows in (currently, by date) might continue to be the best order for retrieval. I might also end up needing to sort by a different column in most queries, in which case I think I would need to re-sort the table after each CSV file is parsed and appended.)</p></li>
</ol>
<p>Currently I'm stuck on step 2, generating the column indices.</p>
 | 1 | 2016-08-01T19:08:55Z | 38,707,659 | <p>I'd do it a bit differently - <a href="http://stackoverflow.com/a/38472574/5741205">take a look at this small example</a>:</p>
<pre><code>for chunk in ... # reading data in chunks:
# specify `data_columns`, but don't index (`index=False`)
    hdf_output.append('all', chunk, data_columns=cols_to_index, index=False)
# index columns explicitly
hdf_output.create_table_index('all', columns=cols_to_index, optlevel=9, kind='full')
</code></pre>
| 1 | 2016-08-01T20:35:37Z | [
"python",
"pandas",
"hdf5",
"pytables",
"hdfstore"
] |
Python Challenge- #3: it prints random letters instead of 7 | 38,706,413 | <pre><code>import re
b=re.findall('[A-Z]+[a-z]+[A-Z]',k)
for i in b:
print (i)
</code></pre>
<p>I have written this code. The string in k is too long to print here.
I need to find all sets of sub strings where the middle letter is in lower case and three letters exactly on each side is upper case.
I think this code should work. But, its printing random number of letters in each substring .
What is the problem in my code or how can I fix it ?
Please help!</p>
| 0 | 2016-08-01T19:11:57Z | 38,706,604 | <p>Your regexp shouldn't contain <code>+</code>, which matches 1 or more uppercase letters. If you need to get something like <code>AAAbCCC</code> just try:</p>
<pre><code>b=re.findall('[A-Z]{3}[a-z][A-Z]{3}',k)
</code></pre>
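<p>A quick check of the pattern against a made-up sample string (the real <code>k</code> comes from the challenge page):</p>

```python
import re

# hypothetical sample: three uppercase letters around a single lowercase letter
k = "xxABCdEFGhixxDDDkEEElll"
matches = re.findall(r'[A-Z]{3}[a-z][A-Z]{3}', k)
print(matches)   # ['ABCdEFG', 'DDDkEEE']
```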
| 0 | 2016-08-01T19:23:42Z | [
"python"
] |
Python Challenge- #3: it prints random letters instead of 7 | 38,706,413 | <pre><code>import re
b=re.findall('[A-Z]+[a-z]+[A-Z]',k)
for i in b:
print (i)
</code></pre>
<p>I have written this code. The string in k is too long to print here.
I need to find all sets of sub strings where the middle letter is in lower case and three letters exactly on each side is upper case.
I think this code should work. But, its printing random number of letters in each substring .
What is the problem in my code or how can I fix it ?
Please help!</p>
 | 0 | 2016-08-01T19:11:57Z | 38,706,653 | <p>You can specify repetitions of an atom/range/group with <code>{n}</code> where n is an integer.</p>
<p>I'd change your regexp to <code>[A-Z]{3}[a-z]+[A-Z]{3}</code>, or better, if the middle substring must be only one letter, remove the <code>+</code>, because that would match 1 <em>or more</em> lowercase characters: <code>[A-Z]{3}[a-z][A-Z]{3}</code>.</p>
| 0 | 2016-08-01T19:27:00Z | [
"python"
] |
looping through user input with an if condition | 38,706,631 | <p>Hi I want to loop through this input if balance does not match the sum of book balances(pp, bfair, sky, freds wh)</p>
<pre><code>while True:
try:
balance = float(raw_input('Balance:'))
print balance
except ValueError:
print"That's not a number"
continue
else:
break
while True:
try:
bfair_balance = float(raw_input('bfair:'))
print bfair_balance
except ValueError:
print"That's not a number"
continue
else:
break
while True:
try:
wh_balance = float(raw_input('wh:'))
print wh_balance
except ValueError:
print"That's not a number"
continue
else:
break
while True:
try:
freds_balance = float(raw_input('freds:'))
print freds_balance
except ValueError:
print"That's not a number"
continue
else:
break
while True:
try:
sky_balance = float(raw_input('sky:'))
print sky_balance
except ValueError:
print"That's not a number"
continue
else:
break
while True:
try:
pp_balance = float(raw_input('pp:'))
print pp_balance
except ValueError:
print "That's not a number"
continue
else:
break
</code></pre>
<p>Do i put this all in another while loop with the if statements meeting the conditions ? </p>
| 0 | 2016-08-01T19:25:48Z | 38,706,831 | <p>Yes.</p>
<p>And consider using functions to avoid repetitions in your code:</p>
<pre><code>def ask_float(msg):
while True:
try:
x = float(raw_input(msg))
print x
return x
except ValueError:
print "That's not a number"
continue
while True:
balance = ask_float('Balance:')
bfair_balance = ask_float('bfair:')
wh_balance = ask_float('wh:')
freds_balance = ask_float('freds:')
sky_balance = ask_float('sky:')
pp_balance = ask_float('pp:')
balance_sum = pp_balance + bfair_balance + sky_balance + freds_balance + wh_balance
if balance == balance_sum:
# balance is correct -> stop the loop
break
else:
print("put a nice error message here")
</code></pre>
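<p>One caveat with the <code>balance == balance_sum</code> check: comparing floats with <code>==</code> can fail on amounts that look equal on screen, because of binary rounding. A tolerance-based comparison is safer, e.g. with <code>math.isclose</code> (Python 3.5+) or an explicit threshold:</p>

```python
import math

balance = 100.10
parts = [20.02, 20.02, 20.02, 20.02, 20.02]

print(balance == sum(parts))                            # may be False due to rounding
print(math.isclose(balance, sum(parts), rel_tol=1e-9))  # True
```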
| 0 | 2016-08-01T19:39:11Z | [
"python",
"loops",
"input",
"user"
] |
How to ignore path when extracting zip file in python | 38,706,644 | <p>I want to ignore the path of a file stored in a zip.
I use the following:</p>
<pre><code>ZipFile.extract('/ignorepath/filename.txt', '/mygoodpath')
</code></pre>
<p>This will create the following:</p>
<blockquote>
<p>/mygoodpath/ignorepath/filename.txt</p>
</blockquote>
<p>I would prefer </p>
<blockquote>
<p>/mygoodpath/filename.txt</p>
</blockquote>
<p>I am looking at shutil.move as well as ZipFile.open to open and write, though the later would probably have a few edge cases. Best method to handle this?</p>
| 0 | 2016-08-01T19:26:33Z | 38,707,240 | <p>Try using <a href="https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile.open" rel="nofollow">Zipfile.open</a></p>
<pre><code>with ZipFile('spam.zip') as myzip:
with myzip.open('/ignorepath/filename.txt') as infile:
        # ZipFile.open yields bytes, so write the target in binary mode
        with open('/mygoodpath/filename.txt', 'wb') as outfile:
            outfile.write(infile.read())
</code></pre>
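<p>A self-contained sketch of the same idea, building a throwaway zip so it can run anywhere; <code>shutil.copyfileobj</code> streams the member without loading it fully into memory:</p>

```python
import io
import os
import shutil
import tempfile
import zipfile

# build a small in-memory zip with a nested member (illustration only)
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('ignorepath/filename.txt', 'hello')

dest = tempfile.mkdtemp()
member = 'ignorepath/filename.txt'
target = os.path.join(dest, os.path.basename(member))  # drop the path part

with zipfile.ZipFile(buf) as z:
    with z.open(member) as src, open(target, 'wb') as dst:
        shutil.copyfileobj(src, dst)

with open(target) as f:
    print(f.read())   # hello
```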
| 0 | 2016-08-01T20:06:28Z | [
"python",
"path",
"zip"
] |
html5 websockets closing connect when a link is clicked | 38,706,655 | <p>I use the javascript websocket to connect to the websocket server. I use python flask framework to navigate through webpages.</p>
<p>my project is as below:</p>
<ol>
<li>the route "/" renders index.html page. In this page, I create a
websocket connection.</li>
<li>when I receive data from the server, I navigate to different route (for instance: "/page/1")</li>
</ol>
<p>When i click on the href link on my index.html page, i see the websocket is being closed.</p>
<p>I googled out and implemented 2 methods of persistent storage.</p>
<ol>
<li><p>LocalStorage</p></li>
<li><p>Shared Web Workers</p></li>
</ol>
<p>Both of them were not of any use, since, the websockets are being closed when i click on the href link. From this I think that persistent storage of websocket instance is not a solution to my problem (please correct me if i am wrong). Please suggest me the right approach to tackle my problem. Thank you in advance.</p>
<p>I am using the latest version of google chrome (52.0.2743.82)</p>
| 0 | 2016-08-01T19:27:07Z | 38,720,960 | <p>The WebSocket connection only persists as long as the page it was established for is open. Loading another page closes the WebSocket, so storing a reference to the object does not help (what it references no longer exists). You need to establish a new WebSocket connection after each page load.
(For an older look at the problems here, see <a href="http://tavendo.com/blog/post/websocket-persistent-connections/" rel="nofollow">http://tavendo.com/blog/post/websocket-persistent-connections/</a>, and 10.2.3 in the HTML spec <a href="https://html.spec.whatwg.org/multipage/workers.html#shared-workers-introduction" rel="nofollow">https://html.spec.whatwg.org/multipage/workers.html#shared-workers-introduction</a>.)</p>
| 0 | 2016-08-02T12:51:58Z | [
"javascript",
"python",
"html5",
"google-chrome",
"websocket"
] |
Create new folder with timestamp and then move files to new folder | 38,706,664 | <p>I'm having trouble with this ... I want to create a new folder with a time stamped name. Then I want to move a bunch of files into it.</p>
<p>I can't figure it out!</p>
<pre><code>import shutil, os, time
timestr = time.strftime("%Y%m%d")
Sourcepath = r'Z:\\test'
if not os.path.exists(Sourcepath):
os.makedirs(Sourcepath+timestr)
source = os.listdir(Sourcepath)
destinationpath = (Sourcepath+timestr)
for files in source:
if files.endswith('.json'):
shutil.move(os.path.join(source,files),os.path.join(destinationpath,files))
</code></pre>
 | 0 | 2016-08-01T19:27:46Z | 38,706,878 | <p>Does this fix your problem? Notice the indentation of the last line; the existence check now tests the destination folder, and the files are joined with <code>Sourcepath</code> (a string) rather than <code>source</code> (a list):</p>
<pre><code>import shutil, os, time
timestr = time.strftime("%Y%m%d")
Sourcepath = r'Z:\\test'
destinationpath = Sourcepath + timestr
if not os.path.exists(destinationpath):
    os.makedirs(destinationpath)
source = os.listdir(Sourcepath)
for files in source:
    if files.endswith('.json'):
        shutil.move(os.path.join(Sourcepath, files), os.path.join(destinationpath, files))
</code></pre>
| 0 | 2016-08-01T19:41:48Z | [
"python",
"timestamp",
"shutil"
] |
Is it possible to use SQL Server in python without external libs? | 38,706,720 | <p>I'm developing on an environment which I'm not allowed to install anything. It's a monitoring server and I'm making a script to work with logs and etc.</p>
<p>So, I need to connect to a SQL Server with Python 2.7 without any lib like pyodbc installed. Is it possible to make this? I've found nothing I could use to connect to that database.</p>
| -1 | 2016-08-01T19:32:04Z | 38,706,901 | <p>There are certain things you can do to run sql from the command line from python:</p>
<pre><code>import subprocess
x = subprocess.check_output('sqlcmd -Q "SELECT * FROM db.table"')
print x
</code></pre>
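<p>Two hedged notes on the sketch above: passing the command as a list sidesteps shell-quoting problems with the embedded query string, and <code>check_output</code> returns bytes on Python 3. The pattern, demonstrated with a command that exists everywhere Python does (the real call would be shaped like <code>['sqlcmd', '-Q', 'SELECT * FROM db.table']</code>):</p>

```python
import subprocess
import sys

# list-form arguments avoid shell quoting pitfalls
out = subprocess.check_output([sys.executable, '-c', 'print("ok")'])
print(out.decode().strip())   # ok
```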
| 2 | 2016-08-01T19:43:30Z | [
"python",
"sql-server"
] |
Obfuscate Data in .csv from a .txt file | 38,706,754 | <p>I would like to obfuscate words that occur in a column of a .csv file based on a list of data to remove that are in a different .txt file. </p>
<p>Ideally I will be able to ignore the case of my data and then in the .csv file, replace the matching words from the "to remove" file with an <code>'*'</code>. I am not sure what the best method would be to replace the words in the .csv file while also ignoring case. What I have so far isn't working and I am open to solutions. </p>
<p>Example Data file:</p>
<pre><code>This is a line of text in .csv column that I want to remove a word from or data such as 123 from.
</code></pre>
<p>My .txt file will be a list of data to remove:</p>
<pre><code>want
remove
123
</code></pre>
<p>Output should be: </p>
<pre><code>This is a line of text in .csv column that I **** to ****** a word or data such as *** from.
</code></pre>
<p>My code:</p>
<pre><code>import csv
with open('MyFileName.csv' , 'rb') as csvfile, open ('DataToRemove.txt', 'r') as removetxtfile:
reader = csv.reader(csvfile)
reader.next()
for row in reader:
csv_words = row[3].split(" ") #Gets the word for the 4th column in .csv file
for line in removetxtfile:
for wordtoremove in line.split():
if csv_words.lower() == wordtoremove.lower()
csv_words = csv_words.replace(wordtoremove.lower(), '*' * len(csv_words))
</code></pre>
| 0 | 2016-08-01T19:34:17Z | 38,707,475 | <p>I would start by constructing a set of censor words. My input is basically a plain text file of newline separated words. If your text file is different you might need to parse separately. </p>
<p>Other thoughts:</p>
<p>Create a separate censored file output instead of trying to overwrite your input file. That way if you screw up your algorithm you don't lose your data. </p>
<p>You do a <code>.split(" ")</code> on the 4th column, which is only necessary if there are multiple words, space separated, in that column. If that is not the case, you can skip the <code>for w in csv_words</code> loop, which loops over all the words in the 4th column. </p>
<pre><code>import csv
import re
import string
PUNCTUATION_SPLIT_REGEX = re.compile(r'[\s{}]+'.format(re.escape(string.punctuation)))
# construct a set of words to censor
censor_words = set()
with open ('DataToRemove.txt', 'r') as removetxtfile:
for l in removetxtfile:
words = PUNCTUATION_SPLIT_REGEX.split(l)
for w in words:
censor_words.add(w.strip().lower())
with open('MyFileName.csv' , 'rb') as csvfile, open('CensoredFileName.csv', 'wb') as f:
    reader = csv.reader(csvfile)
    writer = csv.writer(f)  # keeps the output a valid csv, quoting included
    # reader.next()
    for row in reader:
        csv_words = row[3].split(' ') #Gets the word for the 4th column in .csv file
        new_column = []
        for w in csv_words:
            if w.lower() in censor_words:
                new_column.append('*'*len(w))
            else:
                new_column.append(w)
        row[3] = ' '.join(new_column)
        writer.writerow(row)
</code></pre>
| 0 | 2016-08-01T20:23:47Z | [
"python",
"csv"
] |
Student - np.random.choice: How to isolate and tally hit frequency within a np.random.choice range | 38,706,796 | <p>Currently learning Python and very new to Numpy & Panda</p>
<p>I have pieced together a random generator with a range. It uses Numpy and I am unable to isolate each individual result to count the iterations within a range within my random's range.</p>
<p>Goal: Count the iterations of "Random >= 1000" and then add 1 to the appropriate cell that correlates to the tally of iterations. Example in very basic sense:</p>
<pre><code>#Random generator begins... these are first four random generations
Randomiteration0 = 175994 (Random >= 1000)
Randomiteration1 = 1199 (Random >= 1000)
Randomiteration2 = 873399 (Random >= 1000)
Randomiteration3 = 322 (Random < 1000)
#used to +1 to the fourth row of column A in CSV
finalIterationTally = 4
#total times random < 1000 throughout entire session. Placed in cell B1
hits = 1
#Rinse and repeat to custom set generations quantity...
</code></pre>
<p>(The logic would then be to +1 to A4 in the spreadsheet. If the iteration tally would have been 7, then +1 to the A7, etc. So essentially, I am measuring the distance and frequency of that distance between each "Hit")</p>
<p>My current code includes a CSV export portion. I do not need to export each individual random result any longer. I only need to export the frequency of each iteration distance between each hit. This is where I am stumped.</p>
<p>Cheers</p>
<pre><code>import pandas as pd
import numpy as np
#set random generation quantity
generations=int(input("How many generations?\n###:"))
#random range and generator
choices = range(1, 100000)
samples = np.random.choice(choices, size=generations)
#create new column in excel
my_break = 1000000
if generations > my_break:
n_empty = my_break - generations % my_break
samples = np.append(samples, [np.nan] * n_empty).reshape((-1, my_break)).T
#export results to CSV
(pd.DataFrame(samples)
.to_csv('eval_test.csv', index=False, header=False))
#left uncommented if wanting to test 10 generations or so
print (samples)
</code></pre>
| 0 | 2016-08-01T19:36:36Z | 38,707,934 | <p>I believe you are mixing up iterations and generations. It sounds like you want 4 iterations for N numbers of generations, but your bottom piece of code does not express the "4" anywhere. If you pull all your variables out to the top of your script it can help you organize better. Panda is great for parsing complicated csvs, but for this case you don't really need it. You probably don't even need numpy. </p>
<pre><code>import numpy as np
THRESHOLD = 1000
CHOICES = 10000
ITERATIONS = 4
GENERATIONS = 100
choices = range(1, CHOICES)
output = np.zeros(ITERATIONS+1)
for _ in range(GENERATIONS):
samples = np.random.choice(choices, size=ITERATIONS)
count = sum([1 for x in samples if x > THRESHOLD])
output[count] += 1
output = map(str, map(int, output.tolist()))
with open('eval_test.csv', 'w') as f:
f.write(",".join(output)+'\n')
</code></pre>
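<p>If the generation count grows large, the per-generation loop can also be vectorized; a sketch of the same tally done entirely in NumPy:</p>

```python
import numpy as np

GENERATIONS, ITERATIONS, THRESHOLD = 100, 4, 1000

# one row of draws per generation
samples = np.random.randint(1, 10000, size=(GENERATIONS, ITERATIONS))
hits = (samples > THRESHOLD).sum(axis=1)              # hits per generation, 0..ITERATIONS
tally = np.bincount(hits, minlength=ITERATIONS + 1)   # how many generations had k hits
print(tally.sum())   # 100 — one count per generation
```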
| 0 | 2016-08-01T20:53:41Z | [
"python",
"pandas",
"numpy",
"random",
"range"
] |
How to Remove a Substring of String in a Dataframe Column? | 38,706,813 | <p>I have this simplified dataframe:</p>
<pre><code>ID, Date
1 8/24/1995
2 8/1/1899 :00
</code></pre>
<p>How can I use the power of pandas to recognize any date in the dataframe which has extra <code>:00</code> and removes it. </p>
<p>Any idea how to solve this problem?</p>
<p>I have tried this syntax but did not help:</p>
<pre><code>df[df["Date"].str.replace(to_replace="\s:00", value="")]
</code></pre>
<p><strong>The Output Should Be Like:</strong></p>
<pre><code>ID, Date
1 8/24/1995
2 8/1/1899
</code></pre>
| 2 | 2016-08-01T19:37:39Z | 38,706,893 | <p>You need to assign the trimmed column back to the original column instead of doing subsetting, and also the <code>str.replace</code> method doesn't seem to have the <code>to_replace</code> and <code>value</code> parameter. It has <code>pat</code> and <code>repl</code> parameter instead:</p>
<pre><code>df["Date"] = df["Date"].str.replace(r"\s:00", "")
df
# ID Date
#0 1 8/24/1995
#1 2 8/1/1899
</code></pre>
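<p>A runnable sketch of the same fix; note that on newer pandas releases <code>str.replace</code> treats the pattern as a literal string by default, so <code>regex=True</code> must be passed explicitly there:</p>

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 2], 'Date': ['8/24/1995', '8/1/1899 :00']})
df['Date'] = df['Date'].str.replace(r'\s:00', '', regex=True)
print(df['Date'].tolist())   # ['8/24/1995', '8/1/1899']
```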
| 3 | 2016-08-01T19:42:49Z | [
"python",
"regex",
"string",
"pandas",
"dataframe"
] |
How to Remove a Substring of String in a Dataframe Column? | 38,706,813 | <p>I have this simplified dataframe:</p>
<pre><code>ID, Date
1 8/24/1995
2 8/1/1899 :00
</code></pre>
<p>How can I use the power of pandas to recognize any date in the dataframe which has extra <code>:00</code> and removes it. </p>
<p>Any idea how to solve this problem?</p>
<p>I have tried this syntax but did not help:</p>
<pre><code>df[df["Date"].str.replace(to_replace="\s:00", value="")]
</code></pre>
<p><strong>The Output Should Be Like:</strong></p>
<pre><code>ID, Date
1 8/24/1995
2 8/1/1899
</code></pre>
| 2 | 2016-08-01T19:37:39Z | 38,706,935 | <p>To apply this to an entire dataframe, I'd <code>stack</code> then <code>unstack</code></p>
<pre><code>df.stack().str.replace(r'\s:00', '').unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/MmbsN.png" rel="nofollow"><img src="http://i.stack.imgur.com/MmbsN.png" alt="enter image description here"></a></p>
<h3>functionalized</h3>
<pre><code>def dfreplace(df, *args, **kwargs):
s = pd.Series(df.values.flatten())
s = s.str.replace(*args, **kwargs)
return pd.DataFrame(s.values.reshape(df.shape), df.index, df.columns)
</code></pre>
<h3>Examples</h3>
<pre><code>df = pd.DataFrame(['8/24/1995', '8/1/1899 :00'], pd.Index([1, 2], name='ID'), ['Date'])
dfreplace(df, '\s:00', '')
</code></pre>
<p><a href="http://i.stack.imgur.com/MmbsN.png" rel="nofollow"><img src="http://i.stack.imgur.com/MmbsN.png" alt="enter image description here"></a></p>
<hr>
<pre><code>rng = range(5)
df2 = pd.concat([pd.concat([df for _ in rng]) for _ in rng], axis=1)
df2
</code></pre>
<p><a href="http://i.stack.imgur.com/iYJWM.png" rel="nofollow"><img src="http://i.stack.imgur.com/iYJWM.png" alt="enter image description here"></a></p>
<pre><code>dfreplace(df2, '\s:00', '')
</code></pre>
<p><a href="http://i.stack.imgur.com/g3ukq.png" rel="nofollow"><img src="http://i.stack.imgur.com/g3ukq.png" alt="enter image description here"></a></p>
| 2 | 2016-08-01T19:45:34Z | [
"python",
"regex",
"string",
"pandas",
"dataframe"
] |
Edit & save dictionary in another python file | 38,706,855 | <p>I have 2 python files, file1.py has only 1 dictionary and I would like to read & write to that dictionary from file2.py. Both files are in same directory.</p>
<p>I'm able to read from it using <strong>import file1</strong> but how do I write to that file.</p>
<p>Snippet:</p>
<p>file1.py (nothing additional in file1, apart from following data)</p>
<pre><code>dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
</code></pre>
<p>file2.py</p>
<pre><code> import file1
import json
print file1.dict1['a'] #this works fine
print file1.dict1['b']
# Now I want to update the value of a & b, something like this:
dict2 = json.loads(data)
file1.dict1['a'] = dict2.['some_int'] #int value
file1.dict1['b'] = dict2.['some_str'] #string value
</code></pre>
<p>The main reason why I'm using dictionary and not text file, is because the new values to be updated come from a json data and converting it to a dictionary is simpler saving me from string parsing each time I want to update the <strong>dict1</strong>.</p>
<p>Problem is, <strong>When I update the value from dict2, I want those value to be written to dict1 in file1</strong></p>
<p>Also, the code runs on a Raspberry Pi and I've SSH into it using Ubuntu machine.</p>
<p>Can someone please help me how to do this?</p>
<p><strong>EDIT:</strong></p>
<ol>
<li>file1.py could be saved in any other format like .json or .txt. It was just my assumption that saving data as a dictionary in separate file would allow easy update.</li>
<li>file1.py has to be a separate file, it is a configuration file so I don't want to merge it to my main file.</li>
<li>The <strong>data</strong> for <strong>dict2</strong> mention above comes from socket connection at</li>
</ol>
<p><code>dict2 = json.loads(data)</code></p>
<ol start="4">
<li>I want to update the *file1** with the data that comes from socket connection.</li>
</ol>
| 0 | 2016-08-01T19:40:18Z | 38,706,923 | <p>You should use the pickle library to save and load the dictionary <a href="https://wiki.python.org/moin/UsingPickle" rel="nofollow">https://wiki.python.org/moin/UsingPickle</a></p>
<p>Here is the basic usage of pickle</p>
<pre><code># Save a dictionary into a pickle file.
import pickle

favorite_color = { "lion": "yellow", "kitty": "red" }

pickle.dump( favorite_color, open( "save.p", "wb" ) )

# Load the dictionary back from the pickle file.
import pickle

favorite_color = pickle.load( open( "save.p", "rb" ) )
# favorite_color is now { "lion": "yellow", "kitty": "red" }
</code></pre>
| 0 | 2016-08-01T19:44:59Z | [
"python",
"file",
"dictionary",
"fileupdate"
] |
Edit & save dictionary in another python file | 38,706,855 | <p>I have 2 python files, file1.py has only 1 dictionary and I would like to read & write to that dictionary from file2.py. Both files are in same directory.</p>
<p>I'm able to read from it using <strong>import file1</strong> but how do I write to that file.</p>
<p>Snippet:</p>
<p>file1.py (nothing additional in file1, apart from following data)</p>
<pre><code>dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
</code></pre>
<p>file2.py</p>
<pre><code> import file1
import json
print file1.dict1['a'] #this works fine
print file1.dict1['b']
# Now I want to update the value of a & b, something like this:
dict2 = json.loads(data)
file1.dict1['a'] = dict2.['some_int'] #int value
file1.dict1['b'] = dict2.['some_str'] #string value
</code></pre>
<p>The main reason why I'm using dictionary and not text file, is because the new values to be updated come from a json data and converting it to a dictionary is simpler saving me from string parsing each time I want to update the <strong>dict1</strong>.</p>
<p>Problem is, <strong>When I update the value from dict2, I want those value to be written to dict1 in file1</strong></p>
<p>Also, the code runs on a Raspberry Pi and I've SSH into it using Ubuntu machine.</p>
<p>Can someone please help me how to do this?</p>
<p><strong>EDIT:</strong></p>
<ol>
<li>file1.py could be saved in any other format like .json or .txt. It was just my assumption that saving data as a dictionary in separate file would allow easy update.</li>
<li>file1.py has to be a separate file, it is a configuration file so I don't want to merge it to my main file.</li>
<li>The <strong>data</strong> for <strong>dict2</strong> mention above comes from socket connection at</li>
</ol>
<p><code>dict2 = json.loads(data)</code></p>
<ol start="4">
<li>I want to update the *file1** with the data that comes from socket connection.</li>
</ol>
| 0 | 2016-08-01T19:40:18Z | 38,706,973 | <p>I think you want to save the data from <code>file1</code> into a separate <code>.json</code> file, then read the <code>.json</code> file in your second file. Here is what you can do:</p>
<p><strong>file1.py</strong></p>
<pre><code>import json
dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
with open("filepath.json", "w+") as f:
json.dump(dict1, f)
</code></pre>
<p>This will dump the dictionary <code>dict1</code> into a <code>json</code> file which is stored at <code>filepath.json</code>. </p>
<p>Then, in your second file:</p>
<p><strong>file2.py</strong></p>
<pre><code>import json
with open("filepath.json") as f:
    dict1 = json.load(f)
# dict1 = {
#     'a' : 1, # value is integer
#     'b' : '5xy', # value is string
#     'c' : '10xy',
#     'd' : '1xy',
#     'e' : 10,
# }
dict1['a'] = dict2['some_int'] #int value
dict1['b'] = dict2['some_str'] #string value
</code></pre>
<p><strong>Note:</strong> This will not change the values in your first file. However, if you need to access the changed values, you can <code>dump</code> your data into another <code>json</code> file, then load that <code>json</code> file again whenever you need the data. </p>
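<p>A self-contained round trip of that pattern, using a temporary directory so it can run anywhere (the file name is only illustrative):</p>

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'filepath.json')
dict1 = {'a': 1, 'b': '5xy'}

with open(path, 'w') as f:        # file1 side: persist the config
    json.dump(dict1, f)

with open(path) as f:             # file2 side: read it back
    loaded = json.load(f)

loaded['a'] = 42                  # update, e.g. from socket data
with open(path, 'w') as f:        # persist the change
    json.dump(loaded, f)

with open(path) as f:
    print(json.load(f)['a'])   # 42
```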
| 0 | 2016-08-01T19:47:51Z | [
"python",
"file",
"dictionary",
"fileupdate"
] |
Edit & save dictionary in another python file | 38,706,855 | <p>I have 2 python files, file1.py has only 1 dictionary and I would like to read & write to that dictionary from file2.py. Both files are in same directory.</p>
<p>I'm able to read from it using <strong>import file1</strong> but how do I write to that file.</p>
<p>Snippet:</p>
<p>file1.py (nothing additional in file1, apart from following data)</p>
<pre><code>dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
</code></pre>
<p>file2.py</p>
<pre><code> import file1
import json
print file1.dict1['a'] #this works fine
print file1.dict1['b']
# Now I want to update the value of a & b, something like this:
dict2 = json.loads(data)
file1.dict1['a'] = dict2.['some_int'] #int value
file1.dict1['b'] = dict2.['some_str'] #string value
</code></pre>
<p>The main reason why I'm using dictionary and not text file, is because the new values to be updated come from a json data and converting it to a dictionary is simpler saving me from string parsing each time I want to update the <strong>dict1</strong>.</p>
<p>Problem is, <strong>When I update the value from dict2, I want those value to be written to dict1 in file1</strong></p>
<p>Also, the code runs on a Raspberry Pi and I've SSH into it using Ubuntu machine.</p>
<p>Can someone please help me how to do this?</p>
<p><strong>EDIT:</strong></p>
<ol>
<li>file1.py could be saved in any other format like .json or .txt. It was just my assumption that saving data as a dictionary in separate file would allow easy update.</li>
<li>file1.py has to be a separate file, it is a configuration file so I don't want to merge it to my main file.</li>
<li>The <strong>data</strong> for <strong>dict2</strong> mention above comes from socket connection at</li>
</ol>
<p><code>dict2 = json.loads(data)</code></p>
<ol start="4">
<li>I want to update the *file1** with the data that comes from socket connection.</li>
</ol>
| 0 | 2016-08-01T19:40:18Z | 38,707,046 | <p>If you are attempting to print the dictionary back to the file, you could use something like...</p>
<pre><code>outFile = open("file1.py","w")
outFile.write("dict1 = %s" % str(dict2))  # file objects have write(), not writeline()
outFile.close()
</code></pre>
<p>You might be better off having a json file, then loading the object from it and writing the object value back to a file. You could then manipulate the json object in memory, and serialize it simply. </p>
<p>Z</p>
| 0 | 2016-08-01T19:52:42Z | [
"python",
"file",
"dictionary",
"fileupdate"
] |
Edit & save dictionary in another python file | 38,706,855 | <p>I have 2 python files, file1.py has only 1 dictionary and I would like to read & write to that dictionary from file2.py. Both files are in same directory.</p>
<p>I'm able to read from it using <strong>import file1</strong> but how do I write to that file.</p>
<p>Snippet:</p>
<p>file1.py (nothing additional in file1, apart from following data)</p>
<pre><code>dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
</code></pre>
<p>file2.py</p>
<pre><code> import file1
import json
print file1.dict1['a'] #this works fine
print file1.dict1['b']
# Now I want to update the value of a & b, something like this:
dict2 = json.loads(data)
file1.dict1['a'] = dict2.['some_int'] #int value
file1.dict1['b'] = dict2.['some_str'] #string value
</code></pre>
<p>The main reason why I'm using dictionary and not text file, is because the new values to be updated come from a json data and converting it to a dictionary is simpler saving me from string parsing each time I want to update the <strong>dict1</strong>.</p>
<p>Problem is, <strong>When I update the value from dict2, I want those value to be written to dict1 in file1</strong></p>
<p>Also, the code runs on a Raspberry Pi and I've SSH into it using Ubuntu machine.</p>
<p>Can someone please help me how to do this?</p>
<p><strong>EDIT:</strong></p>
<ol>
<li>file1.py could be saved in any other format like .json or .txt. It was just my assumption that saving data as a dictionary in separate file would allow easy update.</li>
<li>file1.py has to be a separate file, it is a configuration file so I don't want to merge it to my main file.</li>
<li>The <strong>data</strong> for <strong>dict2</strong> mentioned above comes from a socket connection at:</li>
</ol>
<p><code>dict2 = json.loads(data)</code></p>
<ol start="4">
<li>I want to update <strong>file1</strong> with the data that comes from the socket connection.</li>
</ol>
| 0 | 2016-08-01T19:40:18Z | 38,710,044 | <p>Finally as @Zaren suggested, I used a json file instead of dictionary in python file.</p>
<p>Here's what I did:</p>
<ol>
<li><p>Modified <strong>file1.py</strong> to <strong>file1.json</strong> and store the data with appropriate formatting.</p></li>
<li><p>From <strong>file2.py</strong>, I opened <strong>file1.json</strong> when needed instead of <code>import file1</code> and used <code>json.dump</code> & <code>json.load</code> on <strong>file1.json</strong></p></li>
</ol>
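<p>A minimal sketch of that approach (the file name and example values here are made up for illustration): create the json config once, load it when needed, update it with values from the socket payload, and write it back.</p>

```python
# Hypothetical illustration of the json-based config approach; 'file1.json'
# and the incoming values are assumptions for this sketch.
import json

config = {'a': 1, 'b': '5xy'}
with open('file1.json', 'w') as f:       # create the config file once
    json.dump(config, f)

with open('file1.json') as f:            # later, in file2.py: load it
    config = json.load(f)

incoming = {'some_int': 42, 'some_str': '9xy'}  # stand-in for json.loads(data)
config['a'] = incoming['some_int']
config['b'] = incoming['some_str']

with open('file1.json', 'w') as f:       # persist the updated values
    json.dump(config, f)
```

<p>Unlike rewriting a .py module, this round-trips cleanly, because <code>json.load</code> gives back the same dict structure that <code>json.dump</code> wrote.</p>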
| 0 | 2016-08-02T00:29:51Z | [
"python",
"file",
"dictionary",
"fileupdate"
] |
Difference between get_lines() method of axes and legend | 38,706,857 | <p>In the code</p>
<pre><code>from matplotlib.figure import Figure
fig1 = Figure()
ax1 = fig1.add_subplot(111)
p1 = ax1.plot([1,2,3], label='123')
lg1 = ax1.legend()
</code></pre>
<p><code>lg1.get_lines()[0] == ax1.get_lines()[0]</code> evaluates to false even though they should be referring to the same line. May I know why this is the case?</p>
| 0 | 2016-08-01T19:40:26Z | 38,711,329 | <p>The short answer is that they are different instances of objects in memory. </p>
<pre><code>In [6]: lg1.get_lines()
Out[6]: [<matplotlib.lines.Line2D at 0x10e355828>]
In [7]: ax1.get_lines()
Out[7]: <a list of 1 Line2D objects>
In [8]: list(ax1.get_lines())
Out[8]: [<matplotlib.lines.Line2D at 0x10e342940>]
</code></pre>
<p>Notice that the id values are different; therefore, they are not truly "equal", even though they may "refer" to the same object in the plot.</p>
<pre><code>In [9]: lg1.get_lines()[0]
Out[9]: <matplotlib.lines.Line2D at 0x10e355828>
In [10]: ax1.get_lines()[0]
Out[10]: <matplotlib.lines.Line2D at 0x10e342940>
</code></pre>
<pre><code>In [11]: id(lg1.get_lines()[0])
Out[11]: 4533344296
In [12]: id(ax1.get_lines()[0])
Out[12]: 4533266752
</code></pre>
<p>Or, rather, <code>ax1.get_lines()</code> gives the line that is plotted, and <code>lg1.get_lines()</code> gives the lines actually drawn in the legend box.</p>
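<p>A runnable sketch of this point (assuming matplotlib is installed): the legend builds its own proxy <code>Line2D</code> artists rather than reusing the plotted line objects, so an identity check between them is <code>False</code> even though both are <code>Line2D</code> instances describing the same line.</p>

```python
from matplotlib.figure import Figure

fig1 = Figure()
ax1 = fig1.add_subplot(111)
ax1.plot([1, 2, 3], label='123')
lg1 = ax1.legend()

plot_line = ax1.get_lines()[0]    # the line drawn on the axes
legend_line = lg1.get_lines()[0]  # the proxy line drawn in the legend

print(plot_line is legend_line)              # False: distinct instances
print(type(plot_line) is type(legend_line))  # True: both are Line2D
```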
| 0 | 2016-08-02T03:28:56Z | [
"python",
"matplotlib",
"legend"
] |
search a 2GB WAV file for dropouts using wave module | 38,706,926 | <p>What is the best way to analyze a 2GB WAV file (1 kHz tone) for audio dropouts using the wave module?</p>
| 2 | 2016-08-01T19:45:10Z | 38,707,088 | <p>I think a simple solution to this would be to consider that the frame rate on audio files is pretty high. A sample file on my computer happens to have a framerate of 8,000. That means for every second of audio, I have 8,000 samples. If you have missing audio, I'm sure it will exist across multiple frames within a second, so you can essentially reduce your comparisons as drastically as your standards would allow. If I were you, I would try iterating over every 1,000th sample instead of every single sample in the audio file. That basically means it will examine every 1/8th of a second of audio to see if it's dead. Not as precise, but hopefully it will get the job done.</p>
<pre><code>import wave
file1 = wave.open("testdropout.wav", "r")
file2 = open("silence.log", "w")
for i in range(file1.getnframes()):
frame = file1.readframes(i)
zero = True
for j in range(0, len(frame), 1000):
# check if amplitude is greater than 0
        # ord() converts each raw sample byte to an integer
if ord(frame[j]) > 0:
zero = False
break
if zero:
print >> file2, 'dropout at second %s' % (file1.tell()/file1.getframerate())
file1.close()
file2.close()
</code></pre>
| 1 | 2016-08-01T19:56:25Z | [
"python",
"wav"
] |
search a 2GB WAV file for dropouts using wave module | 38,706,926 | <p>What is the best way to analyze a 2GB WAV file (1 kHz tone) for audio dropouts using the wave module?</p>
| 2 | 2016-08-01T19:45:10Z | 38,707,693 | <p>At the moment, you're reading the entire file into memory, which is not ideal. If you look at the methods available for a "Wave_read" object, one of them is <code>setpos(pos)</code>, which sets the position of the file pointer to <em>pos</em>. If you update this position, you should be able to only keep the frame you want in memory at any given time, preventing errors. Below is a rough outline:</p>
<pre><code>import wave
file1 = wave.open("testdropout.wav", "r")
file2 = open("silence.log", "w")
def scan_frame(frame):
for j in range(len(frame)):
        # check if amplitude is zero (ord() never returns a negative value)
# It makes more sense here to check for the desired case (low amplitude)
# rather than breaking at higher amplitudes
if ord(frame[j]) <= 0:
return True
for i in range(file1.getnframes()):
frame = file1.readframes(1) # only read the frame at the current file position
zero = scan_frame(frame)
if zero:
print >> file2, 'dropout at second %s' % (file1.tell()/file1.getframerate())
pos = file1.tell() # States current file position
file1.setpos(pos + len(frame)) # or pos + 1, or whatever a single unit in a wave
# file is, I'm not entirely sure
file1.close()
file2.close()
</code></pre>
<p>Hope this can help!</p>
| 1 | 2016-08-01T20:37:58Z | [
"python",
"wav"
] |
search a 2GB WAV file for dropouts using wave module | 38,706,926 | <p>What is the best way to analyze a 2GB WAV file (1 kHz tone) for audio dropouts using the wave module?</p>
| 2 | 2016-08-01T19:45:10Z | 38,708,236 | <p>I haven't used the <code>wave</code> module before, but <code>file1.readframes(i)</code> looks like it's reading 1 frame when you're at the first frame, 2 frames when you're at the second frame, 10 frames when you're in the tenth frame, and a 2Gb CD quality file might have a million frames - by the time you're at frame 100,000 reading 100,000 frames ... getting slower each time through the loop as well? </p>
<p>And from my comment, in Python 2 <code>range()</code> generates an in-memory array of the full size first, and <code>xrange()</code> doesn't, but not using range at all helps even more.</p>
<p>And push the looping down into the lower layers with <code>any()</code> to make the code shorter, and possibly faster:</p>
<pre><code>import wave
file1 = wave.open("testdropout.wav", "r")
file2 = open("silence.log", "w")
chunksize = file1.getframerate()
chunk = file1.readframes(chunksize)
while chunk:
if not any(ord(sample) for sample in chunk):
print >> file2, 'dropout at second %s' % (file1.tell()/chunksize)
chunk = file1.readframes(chunksize)
file1.close()
file2.close()
</code></pre>
<p>This should read the file in 1-second chunks.</p>
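<p>For reference, a Python 3 sketch of the same chunked scan. To keep the example self-contained it first writes a tiny one-second all-zero mono file; note that in Python 3 <code>readframes</code> returns <code>bytes</code>, whose elements are already ints, so <code>ord()</code> is unnecessary.</p>

```python
import wave

# generate a minimal test file: 1 second of all-zero 8-bit mono samples
with wave.open("testdropout.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(1)
    out.setframerate(8000)
    out.writeframes(b"\x00" * 8000)

with wave.open("testdropout.wav", "rb") as wav:
    chunksize = wav.getframerate()   # one second of frames per chunk
    second = 0
    chunk = wav.readframes(chunksize)
    while chunk:
        if not any(chunk):           # every byte is zero -> dropout
            print("dropout at second %s" % second)
        second += 1
        chunk = wav.readframes(chunksize)
```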
| 1 | 2016-08-01T21:14:51Z | [
"python",
"wav"
] |
Cannot compile pyethash python package which requires C99 compiler (AFAIU). Error - Cannot open include file: 'alloca.h' | 38,706,927 | <h1>Problem</h1>
<p>When installing pyethash manually or with pip I get the same kind of error:</p>
<blockquote>
<p>fatal error C1083: Cannot open include file: 'alloca.h': No such file
or directory error: command 'C:\Program Files (x86)\Microsoft Visual
Studio 9.0\VC\BIN\amd64\cl.exe' failed with exit status 2</p>
</blockquote>
<h1>Related and tried already:</h1>
<ul>
<li><a href="http://stackoverflow.com/questions/2817869/error-unable-to-find-vcvarsall-bat">error: Unable to find vcvarsall.bat</a> </li>
<li><a href="http://stackoverflow.com/questions/13596407/errors-while-building-installing-c-module-for-python-2-7/21898585#21898585:">Errors while building/installing C module for Python 2.7</a> </li>
</ul>
<h1>Similar problem with no answer:</h1>
<ul>
<li><a href="http://stackoverflow.com/questions/2817869/error-unable-to-find-vcvarsall-bat#comment42289832_26127562">comment42289832_26127562</a> </li>
<li><a href="http://stackoverflow.com/questions/37828090/setup-script-exited-with-error-cl-exe-failed-with-exit-status-2">Setup script exited with error cl.exe' failed with exit status 2</a> </li>
</ul>
<h1>Other facts:</h1>
<ul>
<li>Successfully installed Crypto and scrypt which require C++ compiler. </li>
<li>There are 3 other files mentioned in the pyethash core.c source file headers that are absent from my drive:
<ul>
<li>alloca.h</li>
<li>stdint.h</li>
<li>stdlib.h</li>
</ul></li>
</ul>
<h1>System</h1>
<p>python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:40:30) [MSC v.1500 64 bit (AMD64)], windows 8.1 x64</p>
<h1>...\ethash-master> python setup.py install</h1>
<pre><code>PS C:\pyethereum\ethash-master> python setup.py install
running install
running build
running build_ext
building 'pyethash' extension
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Python2
7\include -IC:\Python27\PC /Tcsrc/python/core.c /Fobuild\temp.win-amd64-2.7\Release\src/python/core.obj -Isrc/ -std=gnu9
9 -Wall
cl : Command line warning D9002 : ignoring unknown option '-std=gnu99'
core.c
c:\program files (x86)\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h(81) : warning C4820: 'Pre
Attribute' : '4' bytes padding added after data member 'Access'
c:\program files (x86)\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h(96) : warning C4820: 'Pre
Attribute' : '4' bytes padding added after data member 'NullTerminated'
c:\program files (x86)\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h(112) : warning C4820: 'Po
stAttribute' : '4' bytes padding added after data member 'Access'
c:\program files (x86)\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h(191) : warning C4820: 'Pr
eRangeAttribute' : '4' bytes padding added after data member 'Deref'
c:\program files (x86)\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h(203) : warning C4820: 'Po
stRangeAttribute' : '4' bytes padding added after data member 'Deref'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(60) : warning C4820: '_finddata32i64_t' : '4' bytes p
adding added after data member 'name'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(64) : warning C4820: '_finddata64i32_t' : '4' bytes p
adding added after data member 'attrib'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(73) : warning C4820: '__finddata64_t' : '4' bytes pad
ding added after data member 'attrib'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(78) : warning C4820: '__finddata64_t' : '4' bytes pad
ding added after data member 'name'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(126) : warning C4820: '_wfinddata64i32_t' : '4' bytes
padding added after data member 'attrib'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(131) : warning C4820: '_wfinddata64i32_t' : '4' bytes
padding added after data member 'name'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\io.h(135) : warning C4820: '_wfinddata64_t' : '4' bytes pa
dding added after data member 'attrib'
C:\Program Files\Microsoft SDKs\Windows\v7.0\include\basetsd.h(114) : warning C4668: '__midl' is not defined as a prepro
cessor macro, replacing with '0' for '#if/#elif'
C:\Program Files\Microsoft SDKs\Windows\v7.0\include\basetsd.h(424) : warning C4668: '_WIN32_WINNT' is not defined as a
preprocessor macro, replacing with '0' for '#if/#elif'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\stdio.h(62) : warning C4820: '_iobuf' : '4' bytes padding
added after data member '_cnt'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\stdio.h(381) : warning C4255: '_get_printf_count_output' :
no function prototype given: converting '()' to '(void)'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\stdlib.h(215) : warning C4255: '_get_purecall_handler' : n
o function prototype given: converting '()' to '(void)'
c:\python27\include\pyport.h(206) : warning C4668: 'SIZEOF_PID_T' is not defined as a preprocessor macro, replacing with
'0' for '#if/#elif'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\math.h(41) : warning C4820: '_exception' : '4' bytes paddi
ng added after data member 'type'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(111) : warning C4820: '_stat32' : '2' bytes pad
ding added after data member 'st_gid'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(127) : warning C4820: 'stat' : '2' bytes paddin
g added after data member 'st_gid'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(143) : warning C4820: '_stat32i64' : '2' bytes
padding added after data member 'st_gid'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(144) : warning C4820: '_stat32i64' : '4' bytes
padding added after data member 'st_rdev'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(148) : warning C4820: '_stat32i64' : '4' bytes
padding added after data member 'st_ctime'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(157) : warning C4820: '_stat64i32' : '2' bytes
padding added after data member 'st_gid'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(171) : warning C4820: '_stat64' : '2' bytes pad
ding added after data member 'st_gid'
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\sys/stat.h(172) : warning C4820: '_stat64' : '4' bytes pad
ding added after data member 'st_rdev'
c:\python27\include\object.h(358) : warning C4820: '_typeobject' : '4' bytes padding added after data member 'tp_flags'
c:\python27\include\object.h(411) : warning C4820: '_typeobject' : '4' bytes padding added after data member 'tp_version
_tag'
c:\python27\include\unicodeobject.h(420) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'h
ash'
c:\python27\include\intobject.h(26) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'ob_iva
l'
c:\python27\include\stringobject.h(49) : warning C4820: '<unnamed-tag>' : '7' bytes padding added after data member 'ob_
sval'
c:\python27\include\bytearrayobject.h(26) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member '
ob_exports'
c:\python27\include\setobject.h(26) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'hash'
c:\python27\include\setobject.h(56) : warning C4820: '_setobject' : '4' bytes padding added after data member 'hash'
c:\python27\include\methodobject.h(42) : warning C4820: 'PyMethodDef' : '4' bytes padding added after data member 'ml_fl
ags'
c:\python27\include\fileobject.h(26) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'f_ski
pnextlf'
c:\python27\include\fileobject.h(33) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'writa
ble'
c:\python27\include\genobject.h(23) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'gi_run
ning'
c:\python27\include\descrobject.h(28) : warning C4820: 'wrapperbase' : '4' bytes padding added after data member 'offset
'
c:\python27\include\descrobject.h(32) : warning C4820: 'wrapperbase' : '4' bytes padding added after data member 'flags'
c:\python27\include\weakrefobject.h(37) : warning C4820: '_PyWeakReference' : '4' bytes padding added after data member
'hash'
c:\python27\include\pystate.h(70) : warning C4820: '_ts' : '4' bytes padding added after data member 'use_tracing'
c:\python27\include\import.h(61) : warning C4820: '_frozen' : '4' bytes padding added after data member 'size'
c:\python27\include\code.h(26) : warning C4820: '<unnamed-tag>' : '4' bytes padding added after data member 'co_firstlin
eno'
src/python/core.c(2) : fatal error C1083: Cannot open include file: 'alloca.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\VC\\BIN\\amd64\\cl.exe' failed with exit status 2
</code></pre>
<h1>Question</h1>
<p>How can I compile this package? (Please help, it is 4th day now!)</p>
 | 0 | 2016-08-01T19:45:16Z | 38,709,510 | <p><code>alloca</code> <strong>alloc</strong>ates <strong>a</strong>utomatic memory and, as Jens Gustedt noted, is not standardised.</p>
<p>MSVCRT declares it in the <a href="https://msdn.microsoft.com/en-us/library/wb1s57t5.aspx" rel="nofollow"><code><malloc.h></code> header</a>. Its implementation on Windows aligns with the common behavior on UNIX systems, so it should work as expected. Other parts of the code might be more tightly coupled to UNIX though and may require a rewrite.</p>
<p>The other two headers are standard C headers and should be in the <code>INCLUDE</code> directory, which your compiler searches automatically.</p>
| 0 | 2016-08-01T23:19:37Z | [
"python",
"c++",
"c",
"compiler-errors",
"c99"
] |
Pandas Dataframe and Converting DateTime Objects | 38,706,932 | <p>I have multiple datetime columns in my dataframe, and when I export them to CSV, I need to convert the datetimes from Month/Day/Year to Month/Year. Is it possible to do this?</p>
<p>I was trying this:</p>
<pre><code>if date_mask == "MMM":
df[name].apply(lambda x: x.strftime('%b %Y'))
else:
df[name].apply(lambda x: x.strftime('%m %Y'))
</code></pre>
<p>When I look at the exported CSV I still see the old datetime values.</p>
<p>Any ideas on how to do this?</p>
<p>Thanks</p>
<p><strong>##########################################################</strong></p>
<p><strong>Solution</strong></p>
<pre><code>def modify_date(x):
try:
if pd.isnull(x) == False:
return x.strftime('%b %Y')
else:
print pd.NaT
except:
return pd.NaT
df = pd.DataFrame.from_records(<some list from database>)
df[name] = df[name].apply(modify_date)
</code></pre>
<p>Thanks for all the help!</p>
| 0 | 2016-08-01T19:45:25Z | 38,717,978 | <p>Solution</p>
<pre><code>def modify_date(x):
try:
if pd.isnull(x) == False:
return x.strftime('%b %Y')
else:
print pd.NaT
except:
return pd.NaT
df = pd.DataFrame.from_records(<some list from database>)
df[name] = df[name].apply(modify_date)
</code></pre>
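<p>For comparison, an equivalent vectorized sketch using the <code>.dt</code> accessor (the column name here is made up for illustration; <code>NaT</code> values pass through <code>dt.strftime</code> without raising):</p>

```python
import pandas as pd

df = pd.DataFrame({'when': pd.to_datetime(['2016-08-01', '2016-12-15', None])})
df['when_str'] = df['when'].dt.strftime('%b %Y')  # NaT rows stay missing
print(df['when_str'].iloc[:2].tolist())  # ['Aug 2016', 'Dec 2016']
```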
<p>Thanks for all the help!</p>
| 0 | 2016-08-02T10:30:17Z | [
"python",
"csv",
"datetime",
"pandas"
] |
Setting up a PyCharm environment for a GTK Hello World on Windows | 38,706,946 | <p>I'm just trying to make a simple GTK Hello World app run in Pycharm.</p>
<ul>
<li>I have installed <a href="https://www.jetbrains.com/pycharm/download/#section=windows" rel="nofollow">PyCharm Community Edition</a> 2016.2. </li>
<li>I have installed any combination of <a href="https://www.continuum.io/downloads" rel="nofollow">Anaconda</a> (Python 2, Python 3, 32 bit, 64 bit). </li>
<li>I have downloaded a <a href="http://pygtk.org/pygtk2tutorial/examples/helloworld.py" rel="nofollow">GTK hello world example</a></li>
</ul>
<p>When I try to run this stuff, I first get the error</p>
<pre><code>C:\Users\[...]\Anaconda3\python.exe C:/Users/[...]/PycharmProjects/HelloTk/hellotk.py
Traceback (most recent call last):
File "C:/Users/[...]/PycharmProjects/HelloTk/hellotk.py", line 3, in <module>
import pygtk
ImportError: No module named 'pygtk'
</code></pre>
<p>Which I tried to resolve by the instructions on SO: <a href="http://stackoverflow.com/questions/19885821/how-do-i-import-modules-in-pycharm">How do I import modules in Pycharm</a>. However, this does not work for the error</p>
<pre><code>Collecting PyGTK
Using cached pygtk-2.24.0.tar.bz2
Complete output from command python setup.py egg_info:
ERROR: Could not import dsextras module: Make sure you have installed pygobject.
</code></pre>
<p>Which brought me to the next step, installing <code>pygobject</code>. At first, this failed because of a missing <code>pkg-config</code>, which I installed according to the instructions on Stack Overflow <a href="http://stackoverflow.com/a/22363820/480982">How to install pkg config in windows?</a>. This seemed to work, but I now get the error</p>
<pre><code>Collecting PyGObject
Using cached pygobject-2.28.3.tar.bz2
Complete output from command python setup.py egg_info:
* glib-2.0.pc could not be found, bindings for glib._glib will not be built.
ERROR: Nothing to do, glib could not be found and is essential.
</code></pre>
<p>Googling more, I found <a href="http://stackoverflow.com/questions/31324430/installing-pygobject-via-pip-in-virtualenv">Installing PygObject via PIP in virtualenv</a>, but the solution is for Linux only.</p>
<p>For the ~2 hours since I installed PyCharm and Anaconda, I've been trying to compile a stupidly simple Hello World program. How do I make it work, and what was I doing wrong?</p>
| 0 | 2016-08-01T19:46:18Z | 39,975,986 | <p>You need to download the latest installer for windows from: <a href="https://sourceforge.net/projects/pygobjectwin32/files/?source=navbar" rel="nofollow">https://sourceforge.net/projects/pygobjectwin32/files/?source=navbar</a></p>
<p>Also, you must make sure you are not running Python 3.5 or newer; the last supported version for GTK seems to be 3.4.x.</p>
<p>It took me a few hours to figure this out, as the installer completes without complaining at all on 3.5 as well; it just doesn't install the package.</p>
<p>Regards,</p>
<p>Hunor</p>
<p>Edit: so, while this sort of works to install, I still can't get quite a few things to work in it :(</p>
| 0 | 2016-10-11T11:13:54Z | [
"python",
"pycharm",
"pygtk"
] |
How to add more than one feature into the same cell using the grid method, Python Tkinter | 38,707,017 | <p>There may already be a question like this, but I can't find it.
I want to have more than one entry or label etc. in the same cell without them overlapping. I hope you know what I mean.
Any ideas?</p>
| 0 | 2016-08-01T19:50:52Z | 38,708,108 | <p>Put as many items as you want in a frame, and then put the frame in the grid cell.</p>
<h2>Example</h2>
<pre><code>import tkinter as tk
root = tk.Tk()
# some random widgets, for illustrative purposes
l0 = tk.Label(root, text="Cell 0,0", borderwidth=1, relief="solid")
l1 = tk.Label(root, text="Cell 0,1", borderwidth=1, relief="solid")
l2 = tk.Label(root, text="Cell 1,0", borderwidth=1, relief="solid")
l3 = tk.Label(root, text="Cell 1,1", borderwidth=1, relief="solid")
l4 = tk.Label(root, text="Cell 1,2", borderwidth=1, relief="solid")
# create a frame for one of the cells, and put
# a label and entry widget in it
f1 = tk.Frame(root, borderwidth=1, relief="solid")
l5 = tk.Label(f1, text="Cell 0,2")
e1 = tk.Entry(f1)
# put the label and entry in the frame:
l5.pack(side="top", fill="both", expand=True)
e1.pack(side="top", fill="x")
# put the widgets in the root
l0.grid(row=0, column=0, padx=2, pady=2, sticky="nsew")
l1.grid(row=0, column=1, padx=2, pady=2, sticky="nsew")
f1.grid(row=0, column=2, padx=2, pady=2, sticky="nsew")
l2.grid(row=1, column=0, padx=2, pady=2, sticky="nsew")
l3.grid(row=1, column=1, padx=2, pady=2, sticky="nsew")
l4.grid(row=1, column=2, padx=2, pady=2, sticky="nsew")
root.mainloop()
</code></pre>
| 0 | 2016-08-01T21:06:09Z | [
"python",
"tkinter",
"grid"
] |
calling functions via grequests | 38,707,023 | <p>I realize there have been many posts on grequests such as <a href="http://stackoverflow.com/questions/9110593/asynchronous-requests-with-python-requests">Asynchronous Requests with Python requests</a>
which describes the basic usage of grequests and how to send hooks via <code>grequests.get()</code>. I pulled this bit of code right from that link.</p>
<pre><code>import grequests
urls = [
'http://python-requests.org',
'http://httpbin.org',
'http://python-guide.org',
'http://kennethreitz.com'
]
# A simple task to do to each response object
def do_something(response):
print ('print_test')
# A list to hold our things to do via async
async_list = []
for u in urls:
action_item = grequests.get(u, hooks = {'response' : do_something})
async_list.append(action_item)
# Do our list of things to do via async
grequests.map(async_list)
</code></pre>
<p>When I run this, however, I get no output:</p>
<pre><code>/$ python test.py
/$
</code></pre>
<p>Since there are 4 links, I would expect the output to be:</p>
<pre><code>print_test
print_test
print_test
print_test
</code></pre>
<p>I have been searching around and haven't been able to find a reason for the lack of output I am amusing that there is a bit of key information that I am missing.</p>
 | 0 | 2016-08-01T19:51:12Z | 38,707,579 | <p>I still need to check the sources, but if you rewrite your hook function as </p>
<pre><code># A simple task to do to each response object
def do_something(response, *args, **kwargs):
print ('print_test')
</code></pre>
<p>it produces output. So it's probably failing to call your original hook (because it passes more arguments than your function accepts) and silently catching the exception, which is why you get no output.</p>
| 1 | 2016-08-01T20:30:27Z | [
"python",
"asynchronous",
"request",
"grequests"
] |
Running executable that takes arguments using Python | 38,707,087 | <p>I am writing a small python script where I open an existing executable (.exe) and I send a string as an argument.</p>
<p>I am using the subprocess.call method and I get the following error:</p>
<pre><code>File "C:\Python34\lib\subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python34\lib\subprocess.py", line 767, in __init__
raise TypeError("bufsize must be an integer")
TypeError: bufsize must be an integer
</code></pre>
<p>My code</p>
<pre><code>import os
import subprocess
x = subprocess.call("C:\\Users\\Desktop\\Program\\Program.exe", y)
</code></pre>
<p>where y is a string I am passing.</p>
<p>I am trying to upgrade an old VB code. The original code calls the executable and passes an argument as shown below. I am trying to replicate this in Python.</p>
<pre><code>Private comm As ExecCmd
Dim cmd As String
Dim app As String
Dim e As New ExecCmd
exec_1= "...\Desktop\Program.exe"
x = "Text" & Variable & " Hello" & Variable2
comm.StartApp exec_1, x 'starts the .exe file with an argument
</code></pre>
| 1 | 2016-08-01T19:56:02Z | 38,707,175 | <p>Put the program and any arguments you want into an array first.</p>
<pre><code>import os
import subprocess
x = subprocess.call(["C:\\Users\\Desktop\\Program\\Program.exe", y])
</code></pre>
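<p>A runnable illustration (using Python itself as a stand-in for the child program): each list element reaches the child as one <code>argv</code> entry, so a string containing spaces is passed through without shell splitting.</p>

```python
import subprocess
import sys

arg = "Text 1 Hello 2"  # stand-in for the string y
ret = subprocess.call(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", arg]
)
print(ret)  # the child's exit status: 0 on success
```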
| 4 | 2016-08-01T20:01:54Z | [
"python",
"exe"
] |
Running executable that takes arguments using Python | 38,707,087 | <p>I am writing a small python script where I open an existing executable (.exe) and I send a string as an argument.</p>
<p>I am using the subprocess.call method and I get the following error:</p>
<pre><code>File "C:\Python34\lib\subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python34\lib\subprocess.py", line 767, in __init__
raise TypeError("bufsize must be an integer")
TypeError: bufsize must be an integer
</code></pre>
<p>My code</p>
<pre><code>import os
import subprocess
x = subprocess.call("C:\\Users\\Desktop\\Program\\Program.exe", y)
</code></pre>
<p>where y is a string I am passing.</p>
<p>I am trying to upgrade an old VB code. The original code calls the executable and passes an argument as shown below. I am trying to replicate this in Python.</p>
<pre><code>Private comm As ExecCmd
Dim cmd As String
Dim app As String
Dim e As New ExecCmd
exec_1= "...\Desktop\Program.exe"
x = "Text" & Variable & " Hello" & Variable2
comm.StartApp exec_1, x 'starts the .exe file with an argument
</code></pre>
| 1 | 2016-08-01T19:56:02Z | 38,707,183 | <p>The command and arguments should be in a list</p>
<pre><code>x = subprocess.call(["C:\\Users\\Desktop\\Program\\Program.exe", y])
</code></pre>
<p><a href="https://docs.python.org/2/library/subprocess.html#using-the-subprocess-module" rel="nofollow">Documentation</a></p>
| 3 | 2016-08-01T20:02:14Z | [
"python",
"exe"
] |
A novice's hangman project: Dealing with duplicate letters | 38,707,200 | <p>First, I realize the following code is probably not very good, so apologies for anything that makes you cringe; I'm just trying to code as much as I can in hopes of getting better.</p>
<p>This is part of a small hangman game project; I'm trying to figure out the best way to deal with duplicate letters in strings.</p>
<p>This is what I got for now:</p>
<pre><code>def checkDupes(word):
global dupeList
global repeatTimesDupes
if repeatTimesDupes != 0:
dupeCount = 0
for i in range(len(word)):
temp = word[i]
print("temp letter is: ", temp)
for j in range(i+1,len(word)):
if word[j] == temp:
if temp not in dupeList:
dupeList.append(word[j])
print("the dupeList contains: ", dupeList)#debug
repeatTimesDupes -= 1
def getLetter(position,buttons,word):
i = 96
index = 0
letter = chr(i)
for button in buttons:
if button != None:
i+=1
if button.collidepoint(position):
print("the position is: ", position)
print(i)
for j in range(len(word)):
print(word[j] , chr(i))
if word[j] == chr(i):
index = j
return chr(i), index
else:
return '?', -1
def checkForLetter(word,letter):
inWord = " "
for i in range(len(word)):
if word[i] == letter:
inWord = True
break
else:
print(len(word))
print (word[i])
inWord = False
return inWord
#========================== Start Loop ===========================================
while done == False:
events = pygame.event.get()
screen.fill(BGCOLOR)
timedelta = clock.tick_busy_loop(60)
timedelta /= 1000 # Convert milliseconds to seconds
for event in events:
if event.type == pygame.QUIT:
done = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
done = True
if event.type == pygame.MOUSEBUTTONUP:
if event.button == MOUSEBUTTONLEFT:
pos = pygame.mouse.get_pos()
for button in buttonsList:
if button.collidepoint(pos):
if button != None:
checkDupes(gameWord)
letter, atIndex = getLetter(pos,buttonsList,gameWord)
letterSelected = True
moveCounter+=1
screen.blit(blackBG,(0,0))
showButtons(letterList)
showLetterSlots(gameWord,screenRect)
setCounters(moveMade,mistakeMade)
if letterSelected:
inGameWord = checkForLetter(gameWord, letter)
if inGameWord:
print(atIndex)
print(letter)
letterRen = wordFonts.render(letter,1,(0,255,0))
renderList[atIndex] = letterRen
print("The render list is: ", renderList)
renCount = 0
for r in lineRectList:
if renderList[renCount] != '?' :
screen.blit(renderList[renCount],((r.centerx-10),430))
if renCount <= len(gameWord):
renCount+=1
#update game screen
clock.tick(60)
pygame.display.update()
#========================== End Loop =============================================
pygame.quit()
</code></pre>
<p>I'm looking for a quick way to deal with duplicates so they are blitted along with their matches. I'm already slowing down my letter blits with all that looping, so I'm not really sure my current <code>checkDupes</code> is the way to go.</p>
<p>If anyone is willing to look at this and give some input, I'd very much appreciate it.</p>
<p>Thanks for your time.</p>
| -3 | 2016-08-01T20:03:20Z | 38,707,644 | <p>Based on what you've described in the comments, it seems reasonable to use a <code>dictionary</code> object in this case. You don't just want to store the letter, you want to store <em>where</em> those letters occur. </p>
<p>Dictionaries have a key and a value. For example:</p>
<p><code>{'jack': 4095, 'jill': 12}</code>
The key is <code>jack</code> and the value is <code>4095</code>.</p>
<p>In this case, we won't be using an <code>int</code> for the value. We'll actually be using an array of ints. </p>
<p>So, your dictionary might look like this:</p>
<p><code>{'o':[1, 5, 6]}</code> where those numbers are the <em>indexes</em> at which the letter was encountered. That will work for an arbitrary number of duplicate letters. Then, in your buttons, you know which ones to 'blit' because they're in the same order as the string.</p>
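<p>A minimal sketch of building that mapping (the helper name is made up): one pass over the secret word collects every index for each letter, so a single correct guess reveals all of its duplicates at once.</p>

```python
def letter_positions(word):
    """Map each letter to the list of indexes where it occurs."""
    positions = {}
    for index, letter in enumerate(word):
        positions.setdefault(letter, []).append(index)
    return positions

print(letter_positions("balloon"))
# {'b': [0], 'a': [1], 'l': [2, 3], 'o': [4, 5], 'n': [6]}
```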
<p>Python dictionary documentation:
<a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow">https://docs.python.org/3/tutorial/datastructures.html</a></p>
<p>FWIW, some refactoring would do you a service here as well.</p>
| 1 | 2016-08-01T20:34:19Z | [
"python",
"pygame"
] |
Pandas DataFrame: Cannot convert string into a float | 38,707,242 | <p>I have a column <code>Column1</code> in a pandas dataframe which is of type <code>str</code>, values which are in the following form: </p>
<pre><code>import pandas as pd
df = pd.read_table("filename.dat")
type(df["Column1"].ix[0]) #outputs 'str'
print(df["Column1"].ix[0])
</code></pre>
<p>which outputs <code>'1/350'</code>. So, this is currently a string. I would like to convert it into a float. </p>
<p>I tried this:</p>
<pre><code>df["Column1"] = df["Column1"].astype('float64', raise_on_error = False)
</code></pre>
<p>But this didn't change the values into floats. </p>
<p>This also failed:</p>
<pre><code>df["Column1"] = df["Column1"].convert_objects(convert_numeric=True)
</code></pre>
<p>And this failed:</p>
<pre><code>df["Column1"] = df["Column1"].apply(pd.to_numeric, args=('coerce',))
</code></pre>
<p>How do I convert all the values of column "Column1" into floats? Could I somehow use regex to remove the parentheses?</p>
<p>EDIT: </p>
<p>The line</p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>works, but only if I use it twice, i.e. </p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>Why would this be? </p>
| 3 | 2016-08-01T20:06:34Z | 38,707,340 | <p>You can do it by applying <code>eval</code> to the column:</p>
<pre><code>data = {'one':['1/20', '2/30']}
df = pd.DataFrame(data)
In [8]: df['one'].apply(eval)
Out[8]:
0 0.050000
1 0.066667
Name: one, dtype: float64
</code></pre>
| 2 | 2016-08-01T20:13:49Z | [
"python",
"string",
"pandas",
"valueconverter"
] |
Pandas DataFrame: Cannot convert string into a float | 38,707,242 | <p>I have a column <code>Column1</code> in a pandas dataframe which is of type <code>str</code>, values which are in the following form: </p>
<pre><code>import pandas as pd
df = pd.read_table("filename.dat")
type(df["Column1"].ix[0]) #outputs 'str'
print(df["Column1"].ix[0])
</code></pre>
<p>which outputs <code>'1/350'</code>. So, this is currently a string. I would like to convert it into a float. </p>
<p>I tried this:</p>
<pre><code>df["Column1"] = df["Column1"].astype('float64', raise_on_error = False)
</code></pre>
<p>But this didn't change the values into floats. </p>
<p>This also failed:</p>
<pre><code>df["Column1"] = df["Column1"].convert_objects(convert_numeric=True)
</code></pre>
<p>And this failed:</p>
<pre><code>df["Column1"] = df["Column1"].apply(pd.to_numeric, args=('coerce',))
</code></pre>
<p>How do I convert all the values of column "Column1" into floats? Could I somehow use regex to remove the parentheses?</p>
<p>EDIT: </p>
<p>The line</p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>works, but only if I use it twice, i.e. </p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>Why would this be? </p>
| 3 | 2016-08-01T20:06:34Z | 38,707,400 | <p>You need to evaluate the expression (e.g. '1/350') in order to get the result, for which you can use Python's <a href="https://docs.python.org/3.5/library/functions.html#eval" rel="nofollow"><code>eval()</code></a> function.</p>
<p>By wrapping Pandas' <code>apply()</code> function around it, you can then execute the <code>eval()</code> function on every value in your column. Example:</p>
<pre><code>df["Column1"].apply(eval)
</code></pre>
<p><em>As you're interpreting literals, you can also use the <a href="https://docs.python.org/3.5/library/ast.html#ast.literal_eval" rel="nofollow"><code>ast.literal_eval</code></a> function as noted in the docs.</em> Update: This won't work, as the use of <code>literal_eval()</code> is still restricted to additions and subtractions (<a href="http://stackoverflow.com/a/20748308/3165737">source</a>).</p>
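<p><em>A quick check (illustrative snippet, not from the linked answer) shows <code>ast.literal_eval</code> rejecting the division expression:</em></p>

```python
import ast

try:
    ast.literal_eval('1/350')
except ValueError as exc:
    # literal_eval only accepts literals (plus +/- on numbers),
    # so a division expression raises ValueError.
    print('rejected:', exc)
```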
<p><em>Remark: as mentioned in other answers and comments on this question, the use of <code>eval()</code> is not without risks, as you're basically executing whatever input is passed in. In other words, if your input contains malicious code, you're giving it a free pass.</em></p>
<p><strong>Alternative option:</strong></p>
<pre><code># Define a custom div function
def div(a,b):
return int(a)/int(b)
# Split each string and pass the values to div
df_floats = df['col1'].apply(lambda x: div(*x.split('/')))
</code></pre>
<p><strong>Second alternative</strong> in case of <em>unclean</em> data:</p>
<p>By using regular expressions, we can remove any non-digits appearing resp. before the numerator and after the denominator.</p>
<pre><code># Define a custom div function (unchanged)
def div(a,b):
return int(a)/int(b)
# We'll import the re module and define a precompiled pattern
import re
regex = re.compile('\D*(\d+)/(\d+)\D*')
df_floats = df['col1'].apply(lambda x: div(*regex.findall(x)[0]))
</code></pre>
<p>We'll lose a bit of performance, but the upside is that even with input like <code>'!erefdfs?^dfsdf1/350dqsd qsd qs d'</code>, we still end up with the value of <code>1/350</code>.</p>
<p><strong>Performance:</strong></p>
<p>When timing both options on a dataframe with 100.000 rows, the second option (using the user defined <code>div</code> function) clearly wins:</p>
<ul>
<li>using <code>eval</code>: 1 loop, best of 3: 1.41 s per loop</li>
<li>using <code>div</code>: 10 loops, best of 3: 159 ms per loop</li>
<li>using <code>re</code>: 1 loop, best of 3: 275 ms per loop</li>
</ul>
| 4 | 2016-08-01T20:19:05Z | [
"python",
"string",
"pandas",
"valueconverter"
] |
Pandas DataFrame: Cannot convert string into a float | 38,707,242 | <p>I have a column <code>Column1</code> in a pandas dataframe which is of type <code>str</code>, values which are in the following form: </p>
<pre><code>import pandas as pd
df = pd.read_table("filename.dat")
type(df["Column1"].ix[0]) #outputs 'str'
print(df["Column1"].ix[0])
</code></pre>
<p>which outputs <code>'1/350'</code>. So, this is currently a string. I would like to convert it into a float. </p>
<p>I tried this:</p>
<pre><code>df["Column1"] = df["Column1"].astype('float64', raise_on_error = False)
</code></pre>
<p>But this didn't change the values into floats. </p>
<p>This also failed:</p>
<pre><code>df["Column1"] = df["Column1"].convert_objects(convert_numeric=True)
</code></pre>
<p>And this failed:</p>
<pre><code>df["Column1"] = df["Column1"].apply(pd.to_numeric, args=('coerce',))
</code></pre>
<p>How do I convert all the values of column "Column1" into floats? Could I somehow use regex to remove the parentheses?</p>
<p>EDIT: </p>
<p>The line</p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>works, but only if I use it twice, i.e. </p>
<pre><code>df["Meth"] = df["Meth"].apply(eval)
df["Meth"] = df["Meth"].apply(eval)
</code></pre>
<p>Why would this be? </p>
| 3 | 2016-08-01T20:06:34Z | 38,707,719 | <p>I hate advocating for the use of <code>eval</code>. I didn't want to spend time on this answer but I was compelled because I don't want you to use <code>eval</code>.</p>
<p>So I wrote this function that works on a <code>pd.Series</code></p>
<pre><code>def do_math_in_string(s):
op_map = {'/': '__div__', '*': '__mul__', '+': '__add__', '-': '__sub__'}
df = s.str.extract(r'(\d+)(\D+)(\d+)', expand=True)
df = df.stack().str.strip().unstack()
df.iloc[:, 0] = pd.to_numeric(df.iloc[:, 0]).astype(float)
df.iloc[:, 2] = pd.to_numeric(df.iloc[:, 2]).astype(float)
def do_op(x):
return getattr(x[0], op_map[x[1]])(x[2])
return df.T.apply(do_op)
</code></pre>
<hr>
<h3>Demonstration</h3>
<pre><code>s = pd.Series(['1/2', '3/4', '4/5'])
do_math_in_string(s)
0 0.50
1 0.75
2 0.80
dtype: float64
</code></pre>
<hr>
<pre><code>do_math_in_string(pd.Series(['1/2', '3/4', '4/5', '6+5', '11-7', '9*10']))
0 0.50
1 0.75
2 0.80
3 11.00
4 4.00
5 90.00
dtype: float64
</code></pre>
<p>Please don't use <code>eval</code>.</p>
| 3 | 2016-08-01T20:39:31Z | [
"python",
"string",
"pandas",
"valueconverter"
] |
GPU Kernel Blocksize/Gridsize without Threads | 38,707,318 | <p>I'm currently programming some numerical methods on a GPU via PyCUDA/CUDA and am writing my own kernels. At some point, I need to estimate the error for at least 1000 coupled ODEs. I don't want to have to copy a couple of vectors with over 1000 entries, so I created a kernel (at the bottom of the post) that is a basic max function. These %(T)s and %(N)s are string substitutions I'm making at runtime, which should be irrelevant for this question (T represents a complex datatype and N represents the number of coupled ODEs).</p>
<p>My question is: there is no need for parallel computation, so I do not use threads. When I call this function in Python, what should I specify as the block size or grid size?</p>
<pre><code> __global__ void get_error(double *max_error,%(T)s error_vec[1][%(N)s])
{
max_error[0]=error_vec[0][0].real();
for(int ii=0;ii<%(N)s;ii=ii+1)
{
if(max_error[0] < error_vec[0][ii].real())
{
max_error[0]=error_vec[0][ii].real();
}
}
return;
}
</code></pre>
| 0 | 2016-08-01T20:12:25Z | 38,708,656 | <p>In a kernel launch, the total number of threads that will be spun up on the GPU is equal to the product of the grid size and block size specified for the launch.</p>
<p>Both of these values must be positive integers, therefore the only possible combination of these is 1,1 to create a launch of a single thread.</p>
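<p>As a sketch (in PyCUDA, block sizes are 3-tuples, so the call would look something like <code>get_error(..., block=(1, 1, 1), grid=(1, 1))</code> — the argument layout here is an assumption about your wrapper), the thread-count arithmetic is just a product:</p>

```python
def total_threads(grid, block):
    # Threads launched = (product of grid dims) * (product of block dims).
    n = 1
    for dim in tuple(grid) + tuple(block):
        n *= dim
    return n

print(total_threads(grid=(1, 1), block=(1, 1, 1)))    # -> 1 (single-thread launch)
print(total_threads(grid=(4, 2), block=(256, 1, 1)))  # -> 2048
```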
<p>CUDA kernels are not required to make any specific reference to the builtin variables (e.g. <code>blockIdx</code>, <code>threadIdx</code> etc.) but normally do so in order to differentiate behavior amongst threads. In the case where you have only one thread being launched, there's no particular reason to use these variables, and its not necessary to do so.</p>
<p>A CUDA kernel launch of only a single thread is not a performant method for getting work done, but there may be specific cases where it is convenient to do so and does not have a significant performance impact on the application as a whole.</p>
<p>It's not obvious to me why your proposed kernel couldn't be recast as a thread-parallel kernel (it appears to be performing a <a href="http://stackoverflow.com/questions/25195874/cuda-using-grid-strided-loop-with-reduction-in-shared-memory">max-finding reduction</a>), but that seems to be separate from the point of your question.</p>
| 1 | 2016-08-01T21:47:28Z | [
"python",
"cuda",
"gpu"
] |
Does PeeWee support interaction with MySQL Views | 38,707,331 | <p>I am trying to access a pre-created MySQL view in the database via peewee, treating it as a table [peewee.Model]; however, I am still prompted with OperationalError 1054: unknown column.</p>
<p>Does peewee support interactions with database views?</p>
| 0 | 2016-08-01T20:13:01Z | 38,714,847 | <p>Peewee has been able to query against views when I've tried it, but while typing up a simple proof-of-concept I ran into two potential gotchas.</p>
<p>First, the code:</p>
<pre><code>from peewee import *
db = SqliteDatabase(':memory:')
class Foo(Model):
name = TextField()
class Meta: database = db
db.create_tables([Foo])
for name in ('huey', 'mickey', 'zaizee'):
Foo.create(name=name)
</code></pre>
<p>OK -- nothing exciting, just loaded three names into a table. Then I made a view that corresponds to the upper-case conversion of the name:</p>
<pre><code>db.execute_sql('CREATE VIEW foo_view AS SELECT UPPER(name) FROM foo')
</code></pre>
<p>I then tried the following, which failed:</p>
<pre><code>class FooView(Foo):
class Meta:
db_table = 'foo_view'
print [fv.name for fv in FooView.select()]
</code></pre>
<p>Then I ran into the first issue.</p>
<p>When I subclassed "Foo", I brought along a primary key column named "id". Since I used a bare <code>select()</code> (<code>FooView.select()</code>), peewee assumed I wanted both the "id" and the "name". Since the view has no "id", I got an error.</p>
<p>I tried again, specifying only the name:</p>
<pre><code>print [fv.name for fv in FooView.select(FooView.name)]
</code></pre>
<p>This also failed.</p>
<p>The reason this second query fails can be found by looking at the cursor description on a bare select:</p>
<pre><code>curs = db.execute_sql('select * from foo_view')
print curs.description[0][0] # Print the first column's name.
# prints UPPER(name)
</code></pre>
<p>SQLite named the view's column "UPPER(name)". To fix this, I redefined the view:</p>
<pre><code>db.execute_sql('CREATE VIEW foo_view AS SELECT UPPER(name) AS name FROM foo')
</code></pre>
<p>Now, when I query the view it works just fine:</p>
<pre><code>print [x.name for x in FooView.select(FooView.name)]
# prints ['HUEY', 'MICKEY', 'ZAIZEE']
</code></pre>
<p>Hope that helps.</p>
| 1 | 2016-08-02T07:59:12Z | [
"python",
"views",
"peewee"
] |
Decorate two functions? | 38,707,393 | <p>Is it possible to make such test cases shorter by using a decorator or something else?</p>
<pre><code> def test_login_invalid_pwd(self):
password = '12345'
response = self._login(pwd=password)
self.assertEqual(status_code, 200)
self.assertEqual(response['resultText'],
'invalid password or login')
self.assertEqual(response['resultCode'], 55)
def test_web_login_invalid_login(self):
login = 'my_1258@'
response = self._login(login=login)
self.assertEqual(status_code, 200)
self.assertEqual(response['resultText'],
'invalid password or login')
self.assertEqual(response['resultCode'], 55)
</code></pre>
| 0 | 2016-08-01T20:18:29Z | 38,707,449 | <p>Yes. Try this:</p>
<pre><code>def helper(self, response):
self.assertEqual(status_code, 200)
self.assertEqual(response['resultText'],
'invalid password or login')
self.assertEqual(response['resultCode'], 55)
def test_login_invalid_pwd(self):
password = '12345'
response = self._login(pwd=password)
self.helper(response)
def test_web_login_invalid_login(self):
login = 'my_1258@'
response = self._login(login=login)
self.helper(response)
</code></pre>
<p>Or, depending on how granular your tests need to be, and assuming that your <code>._login()</code> method uses <code>None</code> as defaults:</p>
<pre><code>def test_login_invalid(self):
for login, pwd in (('my_1258@', None), (None, '12345')):
response = self._login(login=login, pwd=pwd)
self.assertEqual(status_code, 200)
self.assertEqual(response['resultText'],
'invalid password or login')
self.assertEqual(response['resultCode'], 55)
</code></pre>
| 3 | 2016-08-01T20:21:40Z | [
"python",
"python-unittest",
"web-api-testing"
] |
Handling bad URLs with requests | 38,707,394 | <p>Sorry in advance for the beginner question. I'm just learning how to access web data in Python, and I'm having trouble understanding exception handling in the <code>requests</code> package.</p>
<p>So far, when accessing web data using the <code>urllib</code> package, I wrap the <code>urlopen</code> call in a try/except structure to catch bad URLs, like this:</p>
<pre><code>import urllib, sys
url = 'https://httpbinTYPO.org/' # Note the typo in my URL
try: uh=urllib.urlopen(url)
except:
print 'Failed to open url.'
sys.exit()
text = uh.read()
print text
</code></pre>
<p>This is obviously kind of a crude way to do it, as it can mask all kinds of problems other than bad URLs.</p>
<p>From the documentation, I had sort of gathered that you could avoid the try/except structure when using the <code>requests</code> package, like this:</p>
<pre><code>import requests, sys
url = 'https://httpbinTYPO.org/' # Note the typo in my URL
r = requests.get(url)
if r.raise_for_status() is not None:
print 'Failed to open url.'
sys.exit()
text = r.text
print text
</code></pre>
<p>However, this clearly doesn't work (throws an error and a traceback). What's the "right" (i.e., simple, elegant, Pythonic) way to do this?</p>
| 2 | 2016-08-01T20:18:38Z | 38,708,680 | <p>You can specify a kind of exception after the keyword <strong>except</strong>. So to catch just errors that come from bad connections, you can do:</p>
<pre><code>import urllib, sys
url = 'https://httpbinTYPO.org/' # Note the typo in my URL
try: uh=urllib.urlopen(url)
except IOError:
print 'Failed to open url.'
sys.exit()
text = uh.read()
print text
</code></pre>
| 1 | 2016-08-01T21:49:08Z | [
"python",
"python-2.7",
"python-requests"
] |
Handling bad URLs with requests | 38,707,394 | <p>Sorry in advance for the beginner question. I'm just learning how to access web data in Python, and I'm having trouble understanding exception handling in the <code>requests</code> package.</p>
<p>So far, when accessing web data using the <code>urllib</code> package, I wrap the <code>urlopen</code> call in a try/except structure to catch bad URLs, like this:</p>
<pre><code>import urllib, sys
url = 'https://httpbinTYPO.org/' # Note the typo in my URL
try: uh=urllib.urlopen(url)
except:
print 'Failed to open url.'
sys.exit()
text = uh.read()
print text
</code></pre>
<p>This is obviously kind of a crude way to do it, as it can mask all kinds of problems other than bad URLs.</p>
<p>From the documentation, I had sort of gathered that you could avoid the try/except structure when using the <code>requests</code> package, like this:</p>
<pre><code>import requests, sys
url = 'https://httpbinTYPO.org/' # Note the typo in my URL
r = requests.get(url)
if r.raise_for_status() is not None:
print 'Failed to open url.'
sys.exit()
text = r.text
print text
</code></pre>
<p>However, this clearly doesn't work (throws an error and a traceback). What's the "right" (i.e., simple, elegant, Pythonic) way to do this?</p>
| 2 | 2016-08-01T20:18:38Z | 38,715,911 | <p>Try to catch connection error:</p>
<pre><code>from requests.exceptions import ConnectionError
try:
requests.get('https://httpbinTYPO.org/')
except ConnectionError:
print 'Failed to open url.'
</code></pre>
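<p>If you want a single handler that also covers timeouts, invalid URLs and HTTP error statuses, one option (a sketch, not part of the original answer) is to catch <code>requests.exceptions.RequestException</code>, the base class of the library's own errors:</p>

```python
import requests

def fetch(url):
    try:
        r = requests.get(url, timeout=5)
        r.raise_for_status()  # turn 4xx/5xx responses into HTTPError
        return r.text
    except requests.exceptions.RequestException as exc:
        # Base class for ConnectionError, Timeout, HTTPError, ...
        print('Failed to open url:', exc)
        return None
```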
| 1 | 2016-08-02T08:53:26Z | [
"python",
"python-2.7",
"python-requests"
] |
Skipped If Statement/unstripped input data in Python | 38,707,588 | <p>I'm updating a crack the code game I made in Python so that you can play a single-player game against the computer. For some reason, the interpreter either doesn't use the data stripped from the input used to determine player count or skips the if statement that uses the stripped data from the input. Either way, after you input the player number it goes straight to the guessing code with an empty list of correct code characters.</p>
<p>My code for the player count determining and code creation is:</p>
<pre><code>plyrs = 0
correctanswer = []
print('Welcome!')
plyrs = str(input('How many players are there? (minimum 1, maximum 2) '))
if plyrs == 2:
print("I'm gonna ask you for some alphanumerical (number or letter characters to make a code for the other player to guess.")
input('Press enter when you are ready to enter in the code!') #Like all of my games, it has a wait.
i = 0
while i < 4:
correctanswer.append(input('What would you like digit ' + str(i + 1) + " to be? "))
i = i + 1
print("Ok, you've got your code!")
i = 0
while i < 19: #Generates seperator to prevent cheating
print('')
i = i + 0.1
print("Now, it's the other player's turn to guess!")
elif plyrs == 1:
import random
characters = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','1','2','3','4','5','6','7','8','9','0']
i = 0
while i < 4:
correctanswer.append(characters[randint(0,36)])
i = i + 1
print('Time for you to guess!')
print('')
</code></pre>
<p>No other skipping if statement questions apply to this so please help. </p>
| 0 | 2016-08-01T20:31:02Z | 38,707,622 | <p><code>plyrs</code> is a string, and you're comparing it to an int. <code>"2" == 2</code> will always be false. Same with <code>plyrs == 1</code>, this will be false throughout.</p>
| 2 | 2016-08-01T20:33:03Z | [
"python",
"python-3.x",
"if-statement"
] |
filling cells in DataFrame | 38,707,628 | <p>I created the DataFrame and faced a problem:</p>
<pre><code> r value
0 0.8 2.5058
1 0.9 -1.9320
2 1.0 -2.6097
3 1.2 -1.6840
4 1.4 -0.8906
5 0.8 2.6955
6 0.9 -1.9552
7 1.0 -2.6641
8 1.2 -1.7169
9 1.4 -0.9056
... ... ...
</code></pre>
<p>For r from <code>0.8</code> to <code>1.4</code>, I want to assign the value for <code>r = 1.0</code>.
Therefore the desired Dataframe should look like:</p>
<pre><code> r value
0 0.8 -2.6097
1 0.9 -2.6097
2 1.0 -2.6097
3 1.2 -2.6097
4 1.4 -2.6097
5 0.8 -2.6641
6 0.9 -2.6641
7 1.0 -2.6641
8 1.2 -2.6641
9 1.4 -2.6641
... ... ....
</code></pre>
<p>My first idea wast to create the condition: </p>
<pre><code>np.where(data['r']==1.0, data['value'], 1.0)
</code></pre>
<p>but it does not solve my problem. </p>
| 2 | 2016-08-01T20:33:26Z | 38,707,670 | <p>Starting with this: </p>
<pre><code> r value
0 0.8 -2.6097
1 0.9 -2.6097
2 1.0 -2.6097
3 1.2 -2.6097
4 1.4 -2.6097
5 0.8 -2.6641
6 0.9 -2.6641
7 1.0 -2.6641
8 1.2 -2.6641
9 1.4 -2.6641
df3['grp'] = (df3['r'] ==.8).cumsum()
grpd = dict(df3[['grp','value']][df3['r'] == 1].values)
df3["value"] = df3["grp"].map(grpd)
df3 = df3.drop('grp', axis=1)
r value
0 0.8 -2.6097
1 0.9 -2.6097
2 1.0 -2.6097
3 1.2 -2.6097
4 1.4 -2.6097
5 0.8 -2.6641
6 0.9 -2.6641
7 1.0 -2.6641
8 1.2 -2.6641
9 1.4 -2.6641
</code></pre>
| 1 | 2016-08-01T20:36:47Z | [
"python",
"pandas"
] |
filling cells in DataFrame | 38,707,628 | <p>I created the DataFrame and faced a problem:</p>
<pre><code> r value
0 0.8 2.5058
1 0.9 -1.9320
2 1.0 -2.6097
3 1.2 -1.6840
4 1.4 -0.8906
5 0.8 2.6955
6 0.9 -1.9552
7 1.0 -2.6641
8 1.2 -1.7169
9 1.4 -0.9056
... ... ...
</code></pre>
<p>For r from <code>0.8</code> to <code>1.4</code>, I want to assign the value for <code>r = 1.0</code>.
Therefore the desired Dataframe should look like:</p>
<pre><code> r value
0 0.8 -2.6097
1 0.9 -2.6097
2 1.0 -2.6097
3 1.2 -2.6097
4 1.4 -2.6097
5 0.8 -2.6641
6 0.9 -2.6641
7 1.0 -2.6641
8 1.2 -2.6641
9 1.4 -2.6641
... ... ....
</code></pre>
<p>My first idea wast to create the condition: </p>
<pre><code>np.where(data['r']==1.0, data['value'], 1.0)
</code></pre>
<p>but it does not solve my problem. </p>
| 2 | 2016-08-01T20:33:26Z | 38,708,106 | <p>Try this:</p>
<pre><code>def subr(df):
isone = df.r == 1.0
if isone.any():
atone = df.value[isone].iloc[0]
# Improvement suggested by @root
df.loc[df.r.between(0.8, 1.4), 'value'] = atone
# df.loc[(df.r >= .8) & (df.r <= 1.4), 'value'] = atone
return df
df.groupby((df.r < df.r.shift()).cumsum()).apply(subr)
</code></pre>
<p><a href="http://i.stack.imgur.com/fZSak.png" rel="nofollow"><img src="http://i.stack.imgur.com/fZSak.png" alt="enter image description here"></a></p>
| 2 | 2016-08-01T21:06:08Z | [
"python",
"pandas"
] |
Not getting the desired output in python | 38,707,679 | <p><a href="http://i.stack.imgur.com/Uv6bI.png" rel="nofollow">enter image description here</a>I want to reverse a number. If 92 is typed the result should be 29. The code is given below</p>
<pre><code>def intreverse(n) :
a=str(n)
b=a[::-1]
c=int(b)
print (c)
</code></pre>
<p>But I am getting the actual output as:</p>
<ul>
<li>29\n</li>
<li>None\n</li>
</ul>
<p>Why?</p>
| -2 | 2016-08-01T20:37:09Z | 38,707,767 | <p>It looks like you are getting newlines in your text input (possibly two?). I would fix this with</p>
<pre><code> a=str(n).strip()
</code></pre>
<p>Also, you are printing the result; I think you want to return it. Since the function does not return anything, <code>a = intreverse('29')</code> will assign <code>None</code> to <code>a</code>. So you want:</p>
<pre><code>def intreverse(n) :
a=str(n).strip()
b=a[::-1]
c=int(b)
return c
</code></pre>
<p>Or just for the obligatory one-liner:</p>
<pre><code>def intreverse(n):
return int(str(n).strip()[::-1])
</code></pre>
| 1 | 2016-08-01T20:42:03Z | [
"python",
"python-3.x"
] |
Not getting the desired output in python | 38,707,679 | <p><a href="http://i.stack.imgur.com/Uv6bI.png" rel="nofollow">enter image description here</a>I want to reverse a number. If 92 is typed the result should be 29. The code is given below</p>
<pre><code>def intreverse(n) :
a=str(n)
b=a[::-1]
c=int(b)
print (c)
</code></pre>
<p>But I am getting the actual output as:</p>
<ul>
<li>29\n</li>
<li>None\n</li>
</ul>
<p>Why?</p>
| -2 | 2016-08-01T20:37:09Z | 38,715,627 | <p>The course you use probably wants you to write functions with returns. As your function has no return, you get None in the end. And for the newline character <code>\n</code>: As you print your answer, print puts a newline character after what you print. If you delete the print statement and put a return statement instead as suggested, both problems will be solved.</p>
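<p>For instance, a sketch combining the strip-and-reverse approach from the other answer with a <code>return</code>:</p>

```python
def intreverse(n):
    # Returning (instead of printing) means the caller gets a value,
    # so no stray "None" shows up in the output.
    return int(str(n).strip()[::-1])

print(intreverse('92\n'))  # -> 29
print(intreverse(123))     # -> 321
```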
| 2 | 2016-08-02T08:40:43Z | [
"python",
"python-3.x"
] |
Python extract occurrence of a string with regex | 38,707,699 | <p>I need a Python regular expression to extract all the occurrences of a string from a line.</p>
<p>So for example,</p>
<pre><code>line = 'TokenRange(start_token:5835456583056758754, end_token:5867789857766669245, rack:brikbrik0),EndpointDetails(host:192.168.210.183, datacenter:DC1, rack:brikbrikadfdas), EndpointDetails(host:192.168.210.182, datacenter:DC1, rack:brikbrik1adf)])'
</code></pre>
<p>I want to extract all the strings which contain the rack ID. I am not great with regex; I looked at the Python docs but could not find the correct use of <code>re.findall</code> or a similar function.
Can someone help me with the regular expression?
Here is the output I need: <code>['brikbrik0', 'brikbrikadfdas', 'brikbrik1adf']</code></p>
| 1 | 2016-08-01T20:38:15Z | 38,707,724 | <p>You can capture alphanumerics coming after the <code>rack:</code>:</p>
<pre><code>>>> re.findall(r"rack:(\w+)", line)
['brikbrik0', 'brikbrikadfdas', 'brikbrik1adf']
</code></pre>
| 3 | 2016-08-01T20:39:42Z | [
"python",
"regex"
] |
Python extract occurrence of a string with regex | 38,707,699 | <p>I need a Python regular expression to extract all the occurrences of a string from a line.</p>
<p>So for example,</p>
<pre><code>line = 'TokenRange(start_token:5835456583056758754, end_token:5867789857766669245, rack:brikbrik0),EndpointDetails(host:192.168.210.183, datacenter:DC1, rack:brikbrikadfdas), EndpointDetails(host:192.168.210.182, datacenter:DC1, rack:brikbrik1adf)])'
</code></pre>
<p>I want to extract all the strings which contain the rack ID. I am not great with regex; I looked at the Python docs but could not find the correct use of <code>re.findall</code> or a similar function.
Can someone help me with the regular expression?
Here is the output I need: <code>['brikbrik0', 'brikbrikadfdas', 'brikbrik1adf']</code></p>
| 1 | 2016-08-01T20:38:15Z | 38,708,183 | <p>Add a <strong>word boundary</strong> to <code>rack</code>:</p>
<pre><code>\brack:(\w+)
</code></pre>
<p>See <a href="https://regex101.com/r/iW9dX2/1" rel="nofollow"><strong>a demo on regex101.com</strong></a>.<br>
<hr>
In <code>Python</code> (<a href="http://ideone.com/BveiOn" rel="nofollow"><strong>demo on ideone.com</strong></a>):</p>
<pre><code>import re
string = """TokenRange(start_token:5835456583056758754, end_token:5867789857766669245, rack:brikbrik0),EndpointDetails(host:192.168.210.183, datacenter:DC1, rack:brikbrikadfdas), EndpointDetails(host:192.168.210.182, datacenter:DC1, rack:brikbrik1adf)])"""
rx = re.compile(r'\brack:(\w+)')
matches = [match.group(1) for match in rx.finditer(string)]
print(matches)
</code></pre>
| 2 | 2016-08-01T21:10:27Z | [
"python",
"regex"
] |
Can numpy diagonalise a skew-symmetric matrix with real arithmetic? | 38,707,758 | <p>Any skew-symmetric matrix (<strong><em>A^T = -A</em></strong>) can be turned into a Hermitian matrix (<strong><em>iA</em></strong>) and diagonalised with complex numbers. But it is <a href="http://link.springer.com/article/10.1007/BF01436375" rel="nofollow">also possible</a> to bring it into <a href="https://en.wikipedia.org/wiki/Skew-symmetric_matrix#Spectral_theory" rel="nofollow">block-diagonal form with a special orthogonal transformation</a> and find its eigevalues using only real arithmetic. Is this implemented anywhere in numpy?</p>
| 2 | 2016-08-01T20:41:20Z | 39,047,400 | <p>Let's take a look at the <a href="http://www.netlib.org/lapack/explore-html/d9/d28/dgeev_8f_source.html" rel="nofollow"><code>dgeev()</code></a> function of the LAPACK library. This routine computes the eigenvalues of any real double-precision square matrix. Moreover, this routine is right behind the Python function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigvals.html#numpy.linalg.eigvals" rel="nofollow"><code>numpy.linalg.eigvals()</code></a> of the numpy library.</p>
<p>The method used by <code>dgeev()</code> is described in the <a href="http://www.netlib.org/lapack/lug/node50.html" rel="nofollow">documentation of LAPACK</a>. It requires the reduction of the matrix <code>A</code> to its <a href="https://en.wikipedia.org/wiki/Matrix_decomposition#Schur_decomposition" rel="nofollow">real Schur form</a> <code>S</code>.</p>
<p>Any real square matrix <code>A</code> can be expressed as:</p>
<p><code>A=QSQ^t</code></p>
<p>where:</p>
<ul>
<li><code>Q</code> is a real orthogonal matrix: <code>QQ^t=I</code></li>
<li><code>S</code> is a real block upper triangular matrix. The blocks on the diagonal of S are of size 1×1 or 2×2.</li>
</ul>
<p>Indeed, if <code>A</code> is skew-symmetric, this decomposition seems really close to a <a href="https://en.wikipedia.org/wiki/Skew-symmetric_matrix#Spectral_theory" rel="nofollow">block diagonal form obtained by a special orthogonal transformation</a> of <code>A</code>. Moreover, it is easy to see that the Schur form <code>S</code> of the skew-symmetric matrix <code>A</code> is ... skew-symmetric!</p>
<p>Indeed, let's compute the transpose of <code>S</code>:</p>
<pre><code>S^t=(Q^tAQ)^t
S^t=Q^t(Q^tA)^t
S^t=Q^tA^tQ
S^t=Q^t(-A)Q
S^t=-Q^tAQ
S^t=-S
</code></pre>
<p>Hence, if <code>Q</code> is special orthogonal (<code>det(Q)=1</code>), <code>S</code> is a block diagonal form obtained by a special orthogonal transformation. Else, a special orthogonal matrix <code>P</code> can be computed by permuting the first two columns of <code>Q</code> and another Schur form <code>Sd</code> of the matrix <code>A</code> is obtained by changing the sign of <code>S_{12}</code> and <code>S_{21}</code>. Indeed, <code>A=PSdP^t</code>. Then, <code>Sd</code> is a block diagonal form of <code>A</code> obtained by a special orthogonal transformation.</p>
<p>In the end, even if <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigvals.html#numpy.linalg.eigvals" rel="nofollow"><code>numpy.linalg.eigvals()</code></a> applied to a real matrix returns complex numbers, there is little complex computation involved in the process!</p>
<p>If you just want to compute the real Schur form, use the function <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.schur.html" rel="nofollow"><code>scipy.linalg.schur()</code></a> with argument <code>output='real'</code>.</p>
<p>Just a piece of code to check that:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.linalg as la
a=np.random.rand(4,4)
a=a-np.transpose(a)
print "a= "
print a
#eigenvalue
w, v =np.linalg.eig(a)
print "eigenvalue "
print w
print "eigenvector "
print v
# Schur decomposition
#import scipy
#print scipy.version.version
t,z=la.schur(a, output='real', lwork=None, overwrite_a=True, sort=None, check_finite=True)
print "schur form "
print t
print "orthogonal matrix "
print z
</code></pre>
| 1 | 2016-08-19T20:44:11Z | [
"python",
"numpy",
"matrix",
"linear-algebra",
"lapack"
] |
Can numpy diagonalise a skew-symmetric matrix with real arithmetic? | 38,707,758 | <p>Any skew-symmetric matrix (<strong><em>A^T = -A</em></strong>) can be turned into a Hermitian matrix (<strong><em>iA</em></strong>) and diagonalised with complex numbers. But it is <a href="http://link.springer.com/article/10.1007/BF01436375" rel="nofollow">also possible</a> to bring it into <a href="https://en.wikipedia.org/wiki/Skew-symmetric_matrix#Spectral_theory" rel="nofollow">block-diagonal form with a special orthogonal transformation</a> and find its eigevalues using only real arithmetic. Is this implemented anywhere in numpy?</p>
| 2 | 2016-08-01T20:41:20Z | 39,087,904 | <p>Yes, you can do it by sticking a unitary transformation in the middle of the product, hence we get</p>
<blockquote>
<p><strong><em>A = V * U * V^-1 = V * T' * T * U * T' * T * V^{-1}</em></strong>. </p>
</blockquote>
<p>Once you get the idea you can optimize the code by tiling things but let's do it the naive way by forming T explicitly. </p>
<p>If the matrix is even-sized then all blocks are complex conjugates. Otherwise we get a zero as the eigenvalue. The eigenvalues are guaranteed to have zero real parts so the first thing is to clean up the noise and then order such that the zeros are on the upper left corner (arbitrary choice).</p>
<pre><code>n = 5
a = np.random.rand(n,n)
a=a-np.transpose(a)
[u,v] = np.linalg.eig(a)
perm = np.argsort(np.abs(np.imag(u)))
unew = 1j*np.imag(u[perm])
</code></pre>
<p>Obviously, we need to reorder the eigenvector matrix too to keep things equivalent. </p>
<pre><code>vnew = v[:,perm]
</code></pre>
<p>Now so far we did nothing other than reordering the middle eigenvalue matrix in the eigenvalue decomposition. Now we switch from complex form to real block diagonal form.</p>
<p>First we have to know how many zero eigenvalues there are </p>
<pre><code>numblocks = np.flatnonzero(unew).size // 2
num_zeros = n - (2 * numblocks)
</code></pre>
<p>Then we basically form another unitary transformation (complex this time) and stick it in the same way</p>
<pre><code>T = sp.linalg.block_diag(*[1.]*num_zeros,np.kron(1/np.sqrt(2)*np.eye(numblocks),np.array([[1.,1j],[1,-1j]])))
Eigs = np.real(T.conj().T.dot(np.diag(unew).dot(T)))
Evecs = np.real(vnew.dot(T))
</code></pre>
<p>This gives you the new real valued decomposition. So the code all in one place </p>
<pre><code>import numpy as np
import scipy as sp
import scipy.linalg
n = 5
a = np.random.rand(n,n)
a=a-np.transpose(a)
[u,v] = np.linalg.eig(a)
perm = np.argsort(np.abs(np.imag(u)))
unew = 1j*np.imag(u[perm])
vnew = v[:,perm]
numblocks = np.flatnonzero(unew).size // 2
num_zeros = n - (2 * numblocks)
T = sp.linalg.block_diag(*[1.]*num_zeros,np.kron(1/np.sqrt(2)*np.eye(numblocks),np.array([[1.,1j],[1,-1j]])))
Eigs = np.real(T.conj().T.dot(np.diag(unew).dot(T)))
Evecs = np.real(vnew.dot(T))
print(np.allclose(Evecs.dot(Eigs.dot(np.linalg.inv(Evecs))) - a,np.zeros((n,n))))
</code></pre>
<p>gives <code>True</code>. Note that this is the <strong>naive</strong> way of obtaining the real spectral decomposition. There are lots of places where you need to keep track of numerical error accumulation. </p>
<p>Example output </p>
<pre><code>Eigs
Out[379]:
array([[ 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , -0.61882847, 0. , 0. ],
[ 0. , 0.61882847, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , -1.05097581],
[ 0. , 0. , 0. , 1.05097581, 0. ]])
Evecs
Out[380]:
array([[-0.15419078, -0.27710323, -0.39594838, 0.05427001, -0.51566173],
[-0.22985364, 0.0834649 , 0.23147553, -0.085043 , -0.74279915],
[ 0.63465436, 0.49265672, 0. , 0.20226271, -0.38686576],
[-0.02610706, 0.60684296, -0.17832525, 0.23822511, 0.18076858],
[-0.14115513, -0.23511356, 0.08856671, 0.94454277, 0. ]])
</code></pre>
| 0 | 2016-08-22T20:10:15Z | [
"python",
"numpy",
"matrix",
"linear-algebra",
"lapack"
] |
Python Encoding that ignores leading 0s | 38,707,772 | <p>I'm writing code in python 3.5 that uses hashlib to spit out MD5 encryption for each packet once it is given a pcap file and the password. I am traversing through the pcap file using pyshark. Currently, the values it is spitting out are not the same as the MD5 encryptions on the packets in the pcap file. </p>
<p>One of the reasons I have attributed this to is that in the hex representation of the packet, the values are represented with leading 0s. Eg: Protocol number is shown as b'06'. But the value I am updating the hashlib variable with is b'6'. And these two values are not the same for same reason:</p>
<pre><code>>> b'06'==b'6'
False
</code></pre>
<p>The way I am encoding integers is:</p>
<pre><code>(hex(int(value))[2:]).encode()
</code></pre>
<p>I am doing this encoding because otherwise it would result in this error: "TypeError: Unicode-objects must be encoded before hashing"</p>
<p>I was wondering if I could get some help finding a python encoding library that ignores leading 0s or if there was any way to get the inbuilt hex method to ignore the leading 0s.</p>
<p>Thanks!</p>
| 1 | 2016-08-01T20:42:18Z | 38,708,581 | <p>Hashing <code>b'06'</code> and <code>b'6'</code> gives different results because, in this context, '06' and '6' are different.</p>
<p>The <code>b</code> string prefix in Python tells the Python interpreter to convert each character in the string into a byte. Thus, <code>b'06'</code> will be converted into the two bytes <code>0x30 0x36</code>, whereas <code>b'6'</code> will be converted into the single byte <code>0x36</code>. Just as hashing <code>b'a'</code> and <code>b' a'</code> (note the space) produces different results, hashing <code>b'06'</code> and <code>b'6'</code> will similarly produce different results.</p>
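<p>A quick demonstration of both points, plus the usual fix for the leading-zero problem: zero-padded string formatting keeps the <code>0</code> that plain <code>hex()</code> drops (the variable name below is illustrative):</p>

```python
import hashlib

# b'06' is two bytes (0x30 0x36); b'6' is a single byte (0x36),
# so the two MD5 digests differ.
print(hashlib.md5(b'06').hexdigest())
print(hashlib.md5(b'6').hexdigest())

# Zero-padded formatting keeps the leading zero that hex() drops.
value = 6
print(hex(value)[2:])                 # '6'   -- leading zero lost
print(format(value, '02x'))           # '06'  -- padded to two hex digits
print(format(value, '02x').encode())  # b'06' -- ready to pass to md.update()
```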
<hr>
<p>If you don't understand why this happens, I recommend looking up how bytes work, both within Python and more generally - Python's handling of bytes has always been a bit counterintuitive, so don't worry if it seems confusing! It's also important to note that the way Python represents bytes has changed between Python 2 and Python 3, so be sure to check which version of Python any information you find is talking about.</p>
| 0 | 2016-08-01T21:41:15Z | [
"python",
"encryption",
"hex",
"md5",
"encode"
] |
How to clear matplotlib labels in legend? | 38,707,853 | <p>Is there a way to clear matplotlib labels inside a graph's legend? <a href="http://stackoverflow.com/questions/5735208/remove-the-legend-on-a-matplotlib-figure">This post</a> explains how to remove the legend itself, but the labels themselves still remain, and appear again if you plot a new figure. I tried the following code, but it does not work:</p>
<pre><code>handles, labels = ax.get_legend_handles_labels()
labels = []
</code></pre>
<p>EDIT: Here is an example</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca()
ax.scatter([1,2,3], [4,5,6], label = "a")
legend = ax.legend()
plt.show()
legend.remove()
handles, labels = ax.get_legend_handles_labels()
print(labels)
</code></pre>
<p>Output: <code>["a"]</code></p>
| 2 | 2016-08-01T20:48:17Z | 38,708,097 | <p>Use the <code>set_visible()</code> method:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca()
ax.scatter([1,2,3], [4,5,6], label = "a")
legend = ax.legend()
for text in legend.texts:
if (text.get_text() == 'a'): text.set_text('b') # change label text
text.set_visible(False) # disable label
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/JdFUw.png" rel="nofollow"><img src="http://i.stack.imgur.com/JdFUw.png" alt="enter image description here"></a></p>
| 2 | 2016-08-01T21:05:35Z | [
"python",
"matplotlib",
"legend"
] |
"Expected an indented block" error explanation | 38,707,867 | <p>So yes I know that there is an answer on how to fix this but can someone explain to me what the hell it means? Because I don't know where it comes from and I also don't know what indented means in programming (as you can understand dear reader English is not my native tongue).</p>
<p>P.S I found that error from a for-loop I was trying to execute, and the code was similar to this:</p>
<pre><code>img = img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
j='.jpg'
s='somepic'
p=img.save(s+'1'+j)
for i in range(2, 659):
if i==21:
i = i + 1
elif i==36:
i=i+1
elif i==45:
i = i + 1
elif i==51:
i = i + 1
elif i==133:
i = i + 1
elif i==163:
i = i + 1
elif i==263:
i = i + 1
elif i==267:
i = i + 1
elif i==272:
i = i + 1
elif i==299:
i = i + 1
elif i==300:
i = i + 1
elif i==312:
i = i + 1
elif i==313:
i = i + 1
elif i==314:
i = i + 1
elif i==320:
i = i + 1
elif i==323:
i = i + 1
elif i==362:
i = i + 1
elif i==390:
i = i + 1
elif i==432:
i = i + 1
elif i==445:
i = i + 1
elif i==455:
i = i + 1
elif i==459:
i = i + 1
elif i==460:
i = i + 1
elif i==461:
i = i + 1
elif i==477:
i = i + 1
elif i==487:
i = i + 1
elif i==493:
i = i + 1
elif i==496:
i = i + 1
elif i==500:
i = i + 1
elif i==510:
i = i + 1
elif i==519:
i = i + 1
elif i==522:
i = i + 1
elif i==545:
i = i + 1
elif i==547:
i = i + 1
elif i==562:
i = i + 1
elif i==597:
i = i + 1
elif i==599:
i = i + 1
elif i==615:
i = i + 1
elif i==638:
i = i + 1
elif i==654:
i=i+1
else:
p= img + "i".save(s+i+j)
i=i+1
</code></pre>
<p>Which means a for-loop, an if-statement, a couple of elifs (or ORs inside the first if-statement) and then I am closing my if-statement with a save and a step forward.</p>
<p>EDITED: So the code above is what I have written and before that are a bunch of image inputs. But although I managed to fix the code with what you said at the end, I have another error which says ['str' object has no attribute 'save'] but that is a problem for another time.</p>
| -4 | 2016-08-01T20:49:04Z | 38,708,032 | <p>Python accepts any consistent indentation, but the convention (PEP 8) is 4 spaces per level. Would have commented this, but I don't have enough reputation. Here's a link: <a href="http://stackoverflow.com/questions/1125653/python-4-whitespaces-indention-why">Python 4 whitespaces indention. Why?</a></p>
| -1 | 2016-08-01T20:59:52Z | [
"python",
"for-loop",
"image-processing"
] |
"Expected an indented block" error explanation | 38,707,867 | <p>So yes I know that there is an answer on how to fix this but can someone explain to me what the hell it means? Because I don't know where it comes from and I also don't know what indented means in programming (as you can understand dear reader English is not my native tongue).</p>
<p>P.S I found that error from a for-loop I was trying to execute, and the code was similar to this:</p>
<pre><code>img = img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
j='.jpg'
s='somepic'
p=img.save(s+'1'+j)
for i in range(2, 659):
if i==21:
i = i + 1
elif i==36:
i=i+1
elif i==45:
i = i + 1
elif i==51:
i = i + 1
elif i==133:
i = i + 1
elif i==163:
i = i + 1
elif i==263:
i = i + 1
elif i==267:
i = i + 1
elif i==272:
i = i + 1
elif i==299:
i = i + 1
elif i==300:
i = i + 1
elif i==312:
i = i + 1
elif i==313:
i = i + 1
elif i==314:
i = i + 1
elif i==320:
i = i + 1
elif i==323:
i = i + 1
elif i==362:
i = i + 1
elif i==390:
i = i + 1
elif i==432:
i = i + 1
elif i==445:
i = i + 1
elif i==455:
i = i + 1
elif i==459:
i = i + 1
elif i==460:
i = i + 1
elif i==461:
i = i + 1
elif i==477:
i = i + 1
elif i==487:
i = i + 1
elif i==493:
i = i + 1
elif i==496:
i = i + 1
elif i==500:
i = i + 1
elif i==510:
i = i + 1
elif i==519:
i = i + 1
elif i==522:
i = i + 1
elif i==545:
i = i + 1
elif i==547:
i = i + 1
elif i==562:
i = i + 1
elif i==597:
i = i + 1
elif i==599:
i = i + 1
elif i==615:
i = i + 1
elif i==638:
i = i + 1
elif i==654:
i=i+1
else:
p= img + "i".save(s+i+j)
i=i+1
</code></pre>
<p>Which means a for-loop, an if-statement, a couple of elifs (or ORs inside the first if-statement) and then I am closing my if-statement with a save and a step forward.</p>
<p>EDITED: So the code above is what I have written and before that are a bunch of image inputs. But although I managed to fix the code with what you said at the end, I have another error which says ['str' object has no attribute 'save'] but that is a problem for another time.</p>
| -4 | 2016-08-01T20:49:04Z | 38,708,162 | <p>In Python syntax, <code>if</code> statements, loops, and function definitions must be followed by an indented block. You have to put 4 spaces or a tab before each line to indent it. In many other scripting languages, <code>{ }</code> are used to enclose code blocks; without correct indentation, Python doesn't know where a block of code ends.</p>
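<p>A minimal illustration of the rule (toy code, not the question's):</p>

```python
# The body of a for-loop must be an indented block; un-indenting
# a line ends the block.
total = 0
for i in range(3):
    total += i        # indented: runs on every iteration
print(total)          # back at the left margin: runs once, prints 3
```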
| 0 | 2016-08-01T21:09:02Z | [
"python",
"for-loop",
"image-processing"
] |
python xarray select by lat/long and extract point data to dataframe | 38,707,926 | <p>I would like to select all grid cells within a lat/long range, and for each grid cell, export it as a dateframe and then to an csv file (i.e. <code>df.to_csv</code>). My dataset is below. I can use <code>xr.where(...)</code> to mask out grid cells outside my input, but not sure how to loop through remaining grids that were not masked out. Alternatively, I have tried using the <code>xr.sel</code> functions, but they do not seem to accept operators like <code>ds.sel(gridlat_0>45)</code>. <code>xr.sel_points(...)</code> may also work, but I cannot figure out the correct syntax of indexers to use in my case. Thank you for your help in advance.</p>
<pre><code><xarray.Dataset>
Dimensions: (time: 48, xgrid_0: 685, ygrid_0: 485)
Coordinates:
gridlat_0 (ygrid_0, xgrid_0) float32 44.6896 44.6956 44.7015 44.7075 ...
* ygrid_0 (ygrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
* xgrid_0 (xgrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
* time (time) datetime64[ns] 2016-07-28T01:00:00 2016-07-28T02:00:00 ...
gridlon_0 (ygrid_0, xgrid_0) float32 -129.906 -129.879 -129.851 ...
Data variables:
u (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
gridrot_0 (time, ygrid_0, xgrid_0) float32 nan nan nan nan nan nan nan ...
Qli (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
Qsi (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
p (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
rh (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
press (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
t (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
vw_dir (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
</code></pre>
| 0 | 2016-08-01T20:53:13Z | 38,709,078 | <p>The simplest way to do this is probably to loop through every grid point, with something like the following:</p>
<pre><code># (optionally) create a grid dataset so we don't need to pull out all
# the data from the main dataset before looking at each point
grid = ds[['gridlat_0', 'gridlon_0']]
for i in range(ds.coords['xgrid_0'].size):
for j in range(ds.coords['ygrid_0'].size):
sub_grid = grid.isel(xgrid_0=i, ygrid_0=j)
if is_valid(sub_grid.gridlat_0, sub_grid.gridlon_0):
sub_ds = ds.isel(xgrid_0=i, ygrid_0=j)
sub_ds.to_dataframe().to_csv(...)
</code></pre>
<p>Even with a 685x485 grid, this should only take a few seconds to loop through every point.</p>
<p>Pre-filtering with <code>ds = ds.where(..., drop=True)</code> (available in the next xarray release, due out later this week) before hand could make this significantly faster, but you'll still have the issue of possibly not being able to represent the selected grid on orthogonal axes.</p>
<p>A final option, probably the cleanest, is to use <code>stack</code> to convert the dataset into 2D. Then you can use standard selection and groupby operations along the new <code>'space'</code> dimension:</p>
<pre><code>ds_stacked = ds.stack(space=['xgrid_0', 'ygrid_0'])
ds_filtered = ds_stacked.sel(space=(ds_stacked.gridlat_0 > 45))
for _, ds_one_place in ds_filtered.groupby('space'):
ds_one_place.to_dataframe().to_csv(...)
</code></pre>
| 0 | 2016-08-01T22:29:04Z | [
"python",
"python-xarray"
] |
Python Pandas -- how to select minimum amount of columns that contain 1s in all their columns across a set of rows | 38,708,020 | <p>Given a document-term pandas Dataframe. Where each cell is represented by an occurrence matrix.</p>
<pre><code> clover seed sowing stolon
1489 1 0 0 0
1488 1 0 0 0
9677 0 0 1 0
9996 1 0 0 1
0557 0 1 0 0
0564 1 0 0 0
0958 0 1 1 0
1272 1 0 0 0
1965 1 1 1 1
4326 1 1 1 0
4531 1 1 1 0
6026 0 0 1 0
6030 0 1 0 0
</code></pre>
<p>With respect to the first column 'clover' reduce the DataFrame to minimum of 3 rows that contain 1s in all their columns. In the current example clover, seed, sowing contain 1s for 3 rows 1965, 4326, 4531. The results would be:</p>
<pre><code> clover seed sowing stolon
1272 1 0 0 0
1965 1 1 1 1
4326 1 1 1 0
4531 1 1 1 0
</code></pre>
<p>Drop the irrelevant column:</p>
<pre><code> clover seed sowing
1272 1 0 0
1965 1 1 1
4326 1 1 1
4531 1 1 1
</code></pre>
<p>With respect to any number of columns how can I perform this selection process in an efficient way.</p>
| 2 | 2016-08-01T20:59:27Z | 38,708,321 | <p>I'd do it like this:</p>
<pre><code>relevant = ['clover', 'seed', 'sowing']
df[df[relevant].all(1)][relevant]
</code></pre>
<p><a href="http://i.stack.imgur.com/vpFog.png" rel="nofollow"><img src="http://i.stack.imgur.com/vpFog.png" alt="enter image description here"></a></p>
| 1 | 2016-08-01T21:20:37Z | [
"python",
"pandas",
"dataframe",
"selection"
] |
Python Pandas -- how to select minimum amount of columns that contain 1s in all their columns across a set of rows | 38,708,020 | <p>Given a document-term pandas Dataframe. Where each cell is represented by an occurrence matrix.</p>
<pre><code> clover seed sowing stolon
1489 1 0 0 0
1488 1 0 0 0
9677 0 0 1 0
9996 1 0 0 1
0557 0 1 0 0
0564 1 0 0 0
0958 0 1 1 0
1272 1 0 0 0
1965 1 1 1 1
4326 1 1 1 0
4531 1 1 1 0
6026 0 0 1 0
6030 0 1 0 0
</code></pre>
<p>With respect to the first column 'clover' reduce the DataFrame to minimum of 3 rows that contain 1s in all their columns. In the current example clover, seed, sowing contain 1s for 3 rows 1965, 4326, 4531. The results would be:</p>
<pre><code> clover seed sowing stolon
1272 1 0 0 0
1965 1 1 1 1
4326 1 1 1 0
4531 1 1 1 0
</code></pre>
<p>Drop the irrelevant column:</p>
<pre><code> clover seed sowing
1272 1 0 0
1965 1 1 1
4326 1 1 1
4531 1 1 1
</code></pre>
<p>With respect to any number of columns how can I perform this selection process in an efficient way.</p>
| 2 | 2016-08-01T20:59:27Z | 38,709,409 | <p>Another possibility is to use <code>df.sum(axis=1)>=3</code> as a mask. Chain <code>drop</code> to this:</p>
<pre><code>>>> df[df.sum(axis=1)>=3].drop('stolon', axis=1)
clover seed sowing
1965 1 1 1
4326 1 1 1
4531 1 1 1
</code></pre>
<p>To make this more general: for <code>n</code> columns replace <code>3</code> with <code>n</code>. You can drop more than one column by passing in a list e.g. <code>drop(['stolon','seed'])</code></p>
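<p>A self-contained sketch of that generalization, on a small made-up frame mirroring the question's layout:</p>

```python
import pandas as pd

# Toy occurrence matrix with the question's column names.
df = pd.DataFrame({'clover': [1, 1, 1, 0],
                   'seed':   [0, 1, 1, 1],
                   'sowing': [0, 1, 1, 0],
                   'stolon': [0, 1, 0, 0]},
                  index=['1272', '1965', '4326', '6030'])

relevant = ['clover', 'seed', 'sowing']   # any n columns of interest
# Keep rows whose sum over the relevant columns reaches n,
# then drop every column outside that set.
result = df[df[relevant].sum(axis=1) >= len(relevant)][relevant]
print(result)
```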
| 0 | 2016-08-01T23:07:42Z | [
"python",
"pandas",
"dataframe",
"selection"
] |
Retrieving the sender of PyQt4 QLabel mousepressevent | 38,708,039 | <p>I'm programming an interface in PyQt4 and I am using QLabels and making them clickable using the mousepressevent function. I have multiple labels (signals) that have the same mousepressevent slot. Here is the gist of what I'm trying to do.</p>
<pre><code>class Example(QtGui.QWidget):
def __init__(self):
super(Example, self).__init__()
self.initUI()
def initUI(self):
lbl1=QtGui.QLabel(self)
lbl2=QtGui.QLabel(self)
lbl1.mousePressEvent=self.exampleMousePress
lbl2.mousePressEvent=self.exampleMousePress
def exampleMousePress(self,event):
print "The sender is: " + sender().text()
</code></pre>
<p>The problem is that the sender function is not working here. Is there a way to get the event sender in the exampleMousePress function?</p>
<p>Thank you all!</p>
| 0 | 2016-08-01T21:00:19Z | 38,709,722 | <p>You can use <a href="http://doc.qt.io/qt-4.8/qobject.html#installEventFilter" rel="nofollow">event-filtering</a> to do this:</p>
<pre><code>class Example(QtGui.QWidget):
def __init__(self):
super(Example, self).__init__()
self.initUI()
def initUI(self):
lbl1 = QtGui.QLabel(self)
lbl2 = QtGui.QLabel(self)
lbl1.installEventFilter(self)
lbl2.installEventFilter(self)
def eventFilter(self, source, event):
if event.type() == QtCore.QEvent.MouseButtonPress:
print "The sender is:", source.text()
return super(Example, self).eventFilter(source, event)
</code></pre>
| 0 | 2016-08-01T23:47:43Z | [
"python",
"pyqt",
"pyqt4"
] |
Python if/elif conditional | 38,708,056 | <p>Is there a way to execute code in an if/elif structure so it evaluates all the code up to the elif? Here is an example:</p>
<pre><code>i = 1 # this can be anything depending on what to user chooses
x = 5
y = 10
if i == 1:
z = x + y
elif i == 2:
# Here I want to return what would have happened if i == 1, in addition other stuff:
r = x^3 - y
elif i == 3:
# Again, execute all the stuff that would have happened if i == 1 or == 2, an addition to something new:
# execute code for i == 1 and i == 2 as well as:
s = i^3 + y^2
</code></pre>
<p>What I'm attempting to do is to avoid explicitly rewriting <code>z = x + y</code> in <code>elif == 2</code> etc. because for my application there are hundreds of lines of code to be executed (unlike this trivial example). I guess I could wrap these things in a function and call them, but I'm wondering if there is a more concise, pythonic way of doing it. </p>
<p>EDIT: The responses here seem to be focusing on the if/elif part of the code. I think this is my fault as I must not be explaining it clearly. If i == 2, I want to execute all the code for i == 2, in addition to executing the code for i == 1. I understand that I can just put the stuff under i == 1 into the i == 2 conditional, but since it's already there, is there a way to call it without rewriting it? </p>
| 1 | 2016-08-01T21:01:59Z | 38,708,085 | <p>An <code>if</code> block executes when its condition passes. An <code>elif</code> block executes only if none of the prior <code>if</code>/<code>elif</code> conditions passed. So you want separate <code>if</code> statements, not <code>elif</code> statements:</p>
<pre><code> i = 1 # this can be anything depending on what to user chooses
x = 5
y = 10
if i >= 1:
z = x + y
if i >=2:
# Here I want to return what would have happened if i == 1, in addition other stuff:
r = x^3 - y
if i >= 3:
# Again, execute all the stuff that would have happened if i == 1 or == 2, an addition to something new:
# execute code for i == 1 and i == 2 as well as:
s = i^3 + y^2
</code></pre>
| 1 | 2016-08-01T21:04:17Z | [
"python"
] |
Python if/elif conditional | 38,708,056 | <p>Is there a way to execute code in an if/elif structure so it evaluates all the code up to the elif? Here is an example:</p>
<pre><code>i = 1 # this can be anything depending on what to user chooses
x = 5
y = 10
if i == 1:
z = x + y
elif i == 2:
# Here I want to return what would have happened if i == 1, in addition other stuff:
r = x^3 - y
elif i == 3:
# Again, execute all the stuff that would have happened if i == 1 or == 2, an addition to something new:
# execute code for i == 1 and i == 2 as well as:
s = i^3 + y^2
</code></pre>
<p>What I'm attempting to do is to avoid explicitly rewriting <code>z = x + y</code> in <code>elif == 2</code> etc. because for my application there are hundreds of lines of code to be executed (unlike this trivial example). I guess I could wrap these things in a function and call them, but I'm wondering if there is a more concise, pythonic way of doing it. </p>
<p>EDIT: The responses here seem to be focusing on the if/elif part of the code. I think this is my fault as I must not be explaining it clearly. If i == 2, I want to execute all the code for i == 2, in addition to executing the code for i == 1. I understand that I can just put the stuff under i == 1 into the i == 2 conditional, but since it's already there, is there a way to call it without rewriting it? </p>
| 1 | 2016-08-01T21:01:59Z | 38,708,109 | <p>Perhaps something like this:</p>
<pre><code>if i in (1, 2, 3):
z = x + y
if i in (2, 3):
r = x^3 - y
if i == 3:
s = i^3 + y^2
</code></pre>
<p>You can replace the tuples with <code>range(...)</code> when you have many cases.</p>
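<p>For instance, with contiguous case numbers the tuples become ranges (note also that in Python <code>**</code> is exponentiation; the question's <code>^</code> is bitwise XOR):</p>

```python
i = 2                     # stands in for the user's choice
x, y = 5, 10
if i in range(1, 4):      # matches 1, 2, 3
    z = x + y
if i in range(2, 4):      # matches 2, 3
    r = x ** 3 - y        # ** is power; the question's ^ would be XOR
if i == 3:
    s = i ** 3 + y ** 2
print(z, r)               # 15 115
```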
| 1 | 2016-08-01T21:06:14Z | [
"python"
] |
Python if/elif conditional | 38,708,056 | <p>Is there a way to execute code in an if/elif structure so it evaluates all the code up to the elif? Here is an example:</p>
<pre><code>i = 1 # this can be anything depending on what to user chooses
x = 5
y = 10
if i == 1:
z = x + y
elif i == 2:
# Here I want to return what would have happened if i == 1, in addition other stuff:
r = x^3 - y
elif i == 3:
# Again, execute all the stuff that would have happened if i == 1 or == 2, an addition to something new:
# execute code for i == 1 and i == 2 as well as:
s = i^3 + y^2
</code></pre>
<p>What I'm attempting to do is to avoid explicitly rewriting <code>z = x + y</code> in <code>elif == 2</code> etc. because for my application there are hundreds of lines of code to be executed (unlike this trivial example). I guess I could wrap these things in a function and call them, but I'm wondering if there is a more concise, pythonic way of doing it. </p>
<p>EDIT: The responses here seem to be focusing on the if/elif part of the code. I think this is my fault as I must not be explaining it clearly. If i == 2, I want to execute all the code for i == 2, in addition to executing the code for i == 1. I understand that I can just put the stuff under i == 1 into the i == 2 conditional, but since it's already there, is there a way to call it without rewriting it? </p>
| 1 | 2016-08-01T21:01:59Z | 38,708,115 | <p>Try this:</p>
<pre><code>i = 1 # this can be anything depending on what to user chooses
x = 5
y = 10
if i >= 1:
z = x + y
if i >= 2:
# Here I want to return what would have happened if i == 1, in addition other stuff:
r = x^3 - y
if i == 3:
# Again, execute all the stuff that would have happened if i == 1 or == 2, an addition to something new:
# execute code for i == 1 and i == 2 as well as:
s = i^3 + y^2
</code></pre>
| 3 | 2016-08-01T21:06:35Z | [
"python"
] |
Rewriting on a specific line while other lines behave normaly | 38,708,128 | <p>I have a python program with which I have to update the last line of a terminal while on the rest of them I print data as if the last line is non-existent. Here's an example. Suppose we have this (pseudo)code:</p>
<pre><code>1. print("test1")
2. updateLastLine("79 percent")
3. print("test2")
4. updateLastLine("80 percent")
</code></pre>
<p>where on the terminal the data would look like:</p>
<p><em>the first 2 lines of code:</em></p>
<pre><code>test1
79 percent
</code></pre>
<p><em>the next 2 lines of code added:</em></p>
<pre><code>test1
test2
80 percent
</code></pre>
<p>How do I implement such a solution in python? Also, how would I capture input from keyboard from the updating line? It may help to know that the only place I capture input is from the updating line. Is there any library to do this?</p>
<p>A close example would be the <code>sudo apt-get update</code> command which behaves similarly.</p>
| 0 | 2016-08-01T21:07:04Z | 38,708,251 | <p>The percentage part suggests a screen-refresh approach: in an (almost) endless loop, clear the user's screen with</p>
<pre><code>import os
os.system('clear')
</code></pre>
<p>and then display your "test" list, the current percentage and the current user input (if one is being typed).
Refresh the display every time the percentage, the list or the user input is updated.</p>
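<p>A minimal sketch of that loop, reproducing the question's example output (the names below are made up; a real version would rebuild the list and percentage from your program's state):</p>

```python
import os
import time

log_lines = ["test1", "test2"]
progress_line = ""
for pct in (79, 80):
    os.system('clear')                      # wipe the whole screen
    print('\n'.join(log_lines[:pct - 78]))  # one more normal line each pass,
                                            # as in the question's example
    progress_line = "{} percent".format(pct)
    print(progress_line)                    # redraw the updating last line
    time.sleep(0.2)
```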
| 0 | 2016-08-01T21:15:38Z | [
"python",
"terminal"
] |
Rewriting on a specific line while other lines behave normaly | 38,708,128 | <p>I have a python program with which I have to update the last line of a terminal while on the rest of them I print data as if the last line is non-existent. Here's an example. Suppose we have this (pseudo)code:</p>
<pre><code>1. print("test1")
2. updateLastLine("79 percent")
3. print("test2")
4. updateLastLine("80 percent")
</code></pre>
<p>where on the terminal the data would look like:</p>
<p><em>the first 2 lines of code:</em></p>
<pre><code>test1
79 percent
</code></pre>
<p><em>the next 2 lines of code added:</em></p>
<pre><code>test1
test2
80 percent
</code></pre>
<p>How do I implement such a solution in python? Also, how would I capture input from keyboard from the updating line? It may help to know that the only place I capture input is from the updating line. Is there any library to do this?</p>
<p>A close example would be the <code>sudo apt-get update</code> command which behaves similarly.</p>
| 0 | 2016-08-01T21:07:04Z | 38,716,103 | <pre><code>import shutil
class Printer(object):
def __init__(self):
self.last_line = ''
def print(self, line):
(w, h) = shutil.get_terminal_size()
print(' '*(w-1), end='\r')
print(line)
self.update_last_line(self.last_line)
def update_last_line(self, line):
print(line, end=len(self.last_line) * ' ' + '\r')
self.last_line = line
def read_last_line(self, line):
response = input(line)
print('\033[{}C\033[1A'.format(len(line) + len(response)), end = '\r')
return response
if __name__ == '__main__':
p = Printer()
p.print("test1")
p.update_last_line("79 percent")
p.print("test2")
p.update_last_line("80 percent")
response = p.read_last_line("age = ")
p.print(response)
</code></pre>
| 1 | 2016-08-02T09:02:29Z | [
"python",
"terminal"
] |
Spark - Unable to access S3 file if master parameter set | 38,708,269 | <p>I'm having issues accessing a file sitting on AWS S3 when I launch the job specifying --master parameter - either as a parameter in the terminal </p>
<pre><code>spark-submit --master spark://myIP:port
</code></pre>
<p>or in the actual code </p>
<pre><code>conf = SparkConf().setAppName("test").setMaster("spark://myIP:port")
</code></pre>
<p>If I do not add that parameter it works just fine,.... what am I missing there?</p>
<p>BTW I'm using pyspark.</p>
<p>Thanks in advance!</p>
<h1>See error log below:</h1>
<pre><code>16/08/01 21:22:57 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
Traceback (most recent call last):
File "/home/user/s3.py", line 14, in <module>
print(rdd.take(1))
File "/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1310, in take
File "/spark/python/lib/pyspark.zip/pyspark/context.py", line 941, in runJob
File "/spark/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
File "/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.0.128.1): org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.ServiceException: Request Error: java.security.ProviderException: java.security.KeyException
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:478)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy8.retrieveMetadata(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:468)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.open(NativeS3FileSystem.java:611)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:246)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:209)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.jets3t.service.ServiceException: Request Error: java.security.ProviderException: java.security.KeyException
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:623)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:277)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1038)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2250)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2179)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:174)
... 29 more
Caused by: javax.net.ssl.SSLException: java.security.ProviderException: java.security.KeyException
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1916)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1874)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1857)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1378)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:553)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:326)
... 36 more
Caused by: java.security.ProviderException: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:146)
at java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:704)
at sun.security.ssl.ECDHCrypt.<init>(ECDHCrypt.java:78)
at sun.security.ssl.ClientHandshaker.serverKeyExchange(ClientHandshaker.java:717)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
... 48 more
Caused by: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:126)
... 57 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:441)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.ServiceException: Request Error: java.security.ProviderException: java.security.KeyException
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:478)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy8.retrieveMetadata(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:468)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.open(NativeS3FileSystem.java:611)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:246)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:209)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: org.jets3t.service.ServiceException: Request Error: java.security.ProviderException: java.security.KeyException
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:623)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:277)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1038)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2250)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2179)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:174)
... 29 more
Caused by: javax.net.ssl.SSLException: java.security.ProviderException: java.security.KeyException
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1916)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1874)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1857)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1378)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:553)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:326)
... 36 more
Caused by: java.security.ProviderException: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:146)
at java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:704)
at sun.security.ssl.ECDHCrypt.<init>(ECDHCrypt.java:78)
at sun.security.ssl.ClientHandshaker.serverKeyExchange(ClientHandshaker.java:717)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
... 48 more
Caused by: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:126)
... 57 more
</code></pre>
| 0 | 2016-08-01T21:16:58Z | 38,725,609 | <p>Installing Java version 1.8, as required by the documentation, solved the issue.</p>
| 0 | 2016-08-02T16:11:09Z | [
"python",
"apache-spark",
"pyspark",
"rdd"
] |
TKinter - How do you use variables in hex to color widgets? | 38,708,404 | <p>I'm trying to set a frame's background color to a hex value which I've stored in a variable. I did something wrong and it's giving me this error: "unknown color name 'hex_value'." What am I doing wrong?</p>
<pre><code>hex_value = "#f35123"
root = Tk()
top_frame = Frame(root, bg="hex_value")
top_frame.pack(side=TOP, fill=X)
root.mainloop()
</code></pre>
| -1 | 2016-08-01T21:27:34Z | 38,708,435 | <p>I would assume you want the <em>content</em> of the variable?!</p>
<pre><code>hex_value = "#f35123"
root = Tk()
top_frame = Frame(root, bg=hex_value)
top_frame.pack(side=TOP, fill=X)
root.mainloop()
</code></pre>
| 0 | 2016-08-01T21:30:32Z | [
"python",
"tkinter",
"hex"
] |
TKinter - How do you use variables in hex to color widgets? | 38,708,404 | <p>I'm trying to set a frame's background color to a hex value which I've stored in a variable. I did something wrong and it's giving me this error: "unknown color name 'hex_value'." What am I doing wrong?</p>
<pre><code>hex_value = "#f35123"
root = Tk()
top_frame = Frame(root, bg="hex_value")
top_frame.pack(side=TOP, fill=X)
root.mainloop()
</code></pre>
| -1 | 2016-08-01T21:27:34Z | 38,708,441 | <p>You're getting this error because you passed in the <em>string</em> <code>"hex_value"</code> instead of the variable containing the string you want. Remove the <code>"</code> quotes.</p>
| 2 | 2016-08-01T21:30:48Z | [
"python",
"tkinter",
"hex"
] |
Having a constantly changing variable in a python loop | 38,708,426 | <p>I'm trying to write a program that would ask for a student's name and a couple of other numerical values, and assign the students to groups via their numerical value, keeping all groups as close to equal as possible (by taking the next highest value in the list and assigning it to the next group, and so on).</p>
<p>However, I'd need to save their number to some variable, as well as their name, to then print out the group's list.
For this I'd need a variable that changes every time the loop goes through to add another student. I'd also need to sort these numbers, and then somehow call back the names they correspond to after they've been sorted into groups, and I'm not sure how to do any of these. Is there any way for this to be done, or would I have to use another language?</p>
<p>This is the code I have so far:</p>
<pre><code>from easygui import *
times = 0
name = 0
s_yn = ynbox("Would you like to enter a student?")
while s_yn == 1:
msg = "Student's Information"
title = "House Sorting Program"
fieldNames = ["Name", "Grade","Athleticism (1-10)","Intellect (1-10)","Adherance to school rules (1-10)"]
fieldValues = []
fieldValues = multenterbox(msg,title, fieldNames)
times = times + 1
ath = fieldValues[2]
int_ = fieldValues[3]
adh = fieldValues[4]
ath = int(ath)
int_ = int(int_)
adh = int(adh)
total = ath+int_+adh
s_yn = ynbox("Would you like to enter a student?")
</code></pre>
| -1 | 2016-08-01T21:29:52Z | 38,709,353 | <p>I believe it would be nice to create a Student class that holds all variables associated with a student. Then you could add each student to a list which you could sort by the values you want and divide to how many groups you want.</p>
<pre><code>from easygui import *
from operator import attrgetter
class Student(object):
def __init__(self, name, grade, athleticism, intellect, adherance):
self.name = name
self.grade = int(grade)
self.athleticism = int(athleticism)
self.intellect = int(intellect)
self.adherance = int(adherance)
self.total = self.athleticism + self.intellect + self.adherance
def __str__(self): # When converting an instance of this class to a string it'll return the string below.
return "Name: %s, Grade: %s, Athleticism (1-10): %s, Intellect (1-10): %s, Adherance to school rules (1-10): %s"\
% (self.name, self.grade, self.athleticism, self.intellect, self.adherance)
student_group = []
while ynbox("Would you like to enter a student?"): # Returns 'True' or 'False' so it'll loop every time the user press 'yes'.
message = "Student's Information"
title = "House Sorting Program"
field_names = ["Name", "Grade", "Athleticism (1-10)", "Intellect (1-10)", "Adherance to school rules (1-10)"]
field_values = multenterbox(message, title, field_names)
student = Student(*field_values) # Unpack all elements in the list 'field_values' to the initializer.
student_group.append(student) # Add the student to the group 'student_group'.
# When the user has put in all the students we sort our group by 'total' (or any other value you want to sort by).
sorted_group = sorted(student_group, key=attrgetter("total"), reverse=True)
# Just as an example I divided the students into 3 groups based on their total.
best_students = sorted_group[:len(sorted_group) // 3]
average_students = sorted_group[len(sorted_group) // 3:2 * len(sorted_group) // 3]
worst_students = sorted_group[2 * len(sorted_group) // 3::]
</code></pre>
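<p>The sorting and splitting logic above can be exercised without easygui. A minimal, self-contained sketch (the names and totals below are made up for illustration):</p>

```python
from operator import attrgetter

class Student:
    """Bare-bones stand-in holding only what the sort needs."""
    def __init__(self, name, total):
        self.name = name
        self.total = total

group = [Student("ann", 5), Student("bob", 9), Student("cal", 1),
         Student("dee", 7), Student("eli", 3), Student("fay", 8)]

# Sort descending by 'total', then slice into three roughly equal groups
ranked = sorted(group, key=attrgetter("total"), reverse=True)
third = len(ranked) // 3
best = ranked[:third]
average = ranked[third:2 * third]
worst = ranked[2 * third:]

print([s.name for s in best])     # ['bob', 'fay']
print([s.name for s in average])  # ['dee', 'ann']
print([s.name for s in worst])    # ['eli', 'cal']
```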
| 0 | 2016-08-01T23:00:48Z | [
"python",
"variables",
"input"
] |
Can we use SWIG to make python bindings for Qt application? | 38,708,475 | <p>How do we use SWIG to create bindings for a Qt application? Our situation is almost the same as the one in this <a href="http://lists.qt-project.org/pipermail/pyside/2013-January/000957.html" rel="nofollow">post</a>, which says:</p>
<blockquote>
<ul>
<li>We have a big C++/Qt application with a Swig binding of the core.</li>
<li>We wanted to create new UI tools in python which need to use some of
our C++ widgets. So we need a binding of our C++ widgets. As our core
binding is written in Swig (and we are happy with that) we need to bind our
widgets with the same binding tool for compatibility.</li>
</ul>
</blockquote>
<p>It seems they successfully created a binding of Qt in SWIG, but it does not seem easy to wrap Qt using SWIG, because a Qt application with the Q_OBJECT macro generates moc files at precompile time, and these files are used at compile time. I tried this:</p>
<pre><code>>> swig -c++ -python application.i
application.h:46: Error: Syntax error in input(3)
</code></pre>
<p>It always gives an error about line 46, which points to Q_OBJECT.</p>
<p>I also found claims <a href="https://sourceforge.net/p/swig/mailman/message/30818925/" rel="nofollow">here</a> and <a href="https://github.com/swig/swig/issues/88#issuecomment-202054188" rel="nofollow">here</a> that it's impossible to use SWIG to wrap Qt. I am confused about this: can someone shed some light on whether it is feasible, and if it is, give a simple example of using SWIG to wrap Qt? Thanks in advance.</p>
<p><strong>Update</strong> source file: application.h</p>
<pre><code>#ifndef APPLICATION_H_
#define APPLICATION_H_
#include <QApplication>
class frameApplication : public QApplication
{
Q_OBJECT
public:
frameApplication (){};
virtual ~frameApplication();
private slots:
void OnExitApp();
};
#endif // APPLICATION_H_
</code></pre>
<p>application.i</p>
<pre><code>%module application
%{
#include "application.h"
%}
%include "application.h"
</code></pre>
<p>This is a simplified version of application.h; using the above SWIG command, the error message remains the same except for the line number.</p>
| 2 | 2016-08-01T21:33:01Z | 38,747,415 | <blockquote>
<p>Can we use SWIG to make python bindings for Qt application?</p>
</blockquote>
<p><strong>YES</strong>!</p>
<p>The error happens because your header file uses macros that SWIG doesn't know about. A C++ compiler wouldn't know about them either, of course.</p>
<p>SWIG needs to preprocess your input. To do that, it needs to know <em>both</em> the include paths to Qt, and the defines needed for Qt headers to work.</p>
<p>On my particular project using the Widgets package, on an OSX Qt install, SWIG would need the following arguments, for example:</p>
<pre><code>-DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB \
-I/Qt/5.6.0/lib/QtWidgets.framework/Headers \
-I/Qt/5.6.0/lib/QtGui.framework/Headers \
-I/Qt/5.6.0/lib/QtCore.framework/Headers \
-I/Qt/5.6.0/mkspecs/macx-clang
</code></pre>
<p>SWIG goes a long way to keep quiet about missing include files. It tries to do its best at parsing the input even when include files are missing, and makes some assumptions about what the various identifiers are even if they are undefined, but it can't do much when presented with unknown macros that don't merely expand to a single identifier.</p>
<p>The easiest way to get the necessary define (<code>-D</code>) and include path (<code>-I</code>) options is to build your code and get these from the compiler's command line. SWIG <em>purposefully</em> uses the same option syntax as most C/C++ compilers - to make passing that data easier.</p>
<p>It should be said that SWIG knows nothing special about signals nor slots: they are just C++ methods. Any bindings generated by SWIG won't support the use of Qt 5's new <code>connect</code> syntax to link signals and slots. You can still use the Qt 4 syntax where you pass C string method signatures to <code>connect</code>, e.g.:</p>
<pre><code>// with SIGNAL/SLOT macros
connect(obj1, SIGNAL(foo(QString)), obj2, SLOT(bar(QString)));
// without SIGNAL/SLOT macros - as you would call from a SWIG binding
connect(obj1, "2foo(QString)\0", obj2, "1bar(QString)\0");
</code></pre>
<p>The method signature needs to be suffixed by an extra null terminator. Signals are prefixed with <code>'2'</code>, slots are prefixed with <code>'1'</code>, other invokable methods are prefixed with <code>'0'</code>.</p>
<p>You won't be able to define new signals or invokable methods unless you use the private <code>QMetaObjectBuilder</code>; see <a href="https://www.qtdeveloperdays.com/sites/default/files/QtDevDays2014US-DIY-moc.pdf" rel="nofollow">here</a>.</p>
| 2 | 2016-08-03T15:10:19Z | [
"python",
"c++",
"qt",
"swig"
] |
Getting the timezone offset in Python from a timezone ignoring DST | 38,708,572 | <p>What is the correct way to get a UTC offset from a timezone in Python?
I need a function that takes a pytz timezone and returns the timezone offset, ignoring Daylight Saving Time.</p>
<pre><code>import pytz
tz = pytz.timezone('Europe/Madrid')
getOffset(tz) #datetime.timedelta(0, 3600)
</code></pre>
| 0 | 2016-08-01T21:40:31Z | 38,708,944 | <p><code>pytz</code> timezone objects obey <code>tzinfo</code> <a href="https://docs.python.org/3/library/datetime.html#datetime.tzinfo.dst" rel="nofollow">API specification</a> defined in the <code>datetime</code> module. Therefore you can use their <code>.utcoffset()</code> and <code>.dst()</code> methods:</p>
<pre><code>from datetime import datetime

timestamp = datetime(2009, 1, 1)  # any unambiguous timestamp will work here
def getOffset(tz):
return tz.utcoffset(timestamp) - tz.dst(timestamp)
</code></pre>
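<p>To see the idea without pytz installed, here is a hedged sketch using a hand-rolled <code>tzinfo</code> subclass from the standard library (the zone and its offsets are invented for illustration): the total <code>utcoffset()</code> minus the <code>dst()</code> component leaves the standard offset.</p>

```python
from datetime import datetime, timedelta, tzinfo

class FakeSummerZone(tzinfo):
    # Invented zone: standard offset UTC+1, with DST (+1h) in effect
    def utcoffset(self, dt):
        return timedelta(hours=2)  # standard offset plus the DST shift
    def dst(self, dt):
        return timedelta(hours=1)  # the DST component alone
    def tzname(self, dt):
        return "FakeSummerZone"

def get_offset(tz, timestamp=datetime(2009, 7, 1)):
    # Total offset minus DST component = standard (non-DST) offset
    return tz.utcoffset(timestamp) - tz.dst(timestamp)

print(get_offset(FakeSummerZone()))  # 1:00:00
```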
| 2 | 2016-08-01T22:15:30Z | [
"python",
"datetime",
"timezone"
] |
python opencv panorama blacklines | 38,708,596 | <p>I am working on a panorama with Python OpenCV. Can someone show me how to get rid of the black lines in my final images? I am thinking maybe I should first check for the color, i.e. (0, 0, 0), before copying it to the atlas image, but I am not quite sure how to do that.</p>
<pre><code>def warpTwoImages(img1, img2, H):
'''warp img2 to img1 with homograph H'''
h1,w1 = img1.shape[:2]
h2,w2 = img2.shape[:2]
pts1 = np.float32([[0,0],[0,h1],[w1,h1],[w1,0]]).reshape(-1,1,2)
pts2 = np.float32([[0,0],[0,h2],[w2,h2],[w2,0]]).reshape(-1,1,2)
pts2_ = cv2.perspectiveTransform(pts2, H)
pts = np.concatenate((pts1, pts2_), axis=0)
[xmin, ymin] = np.int32(pts.min(axis=0).ravel() - 0.5)
[xmax, ymax] = np.int32(pts.max(axis=0).ravel() + 0.5)
t = [-xmin,-ymin]
Ht = np.array([[1,0,t[0]],[0,1,t[1]],[0,0,1]]) # translate
result = cv2.warpPerspective(img2, Ht.dot(H), (xmax-xmin, ymax-ymin))
result[t[1]:h1+t[1],t[0]:w1+t[0]] = img1
return result
</code></pre>
<p><a href="http://i.stack.imgur.com/kBiHc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/kBiHc.jpg" alt="Click here to see my result"></a></p>
| 0 | 2016-08-01T21:42:32Z | 38,788,954 | <p>This answer depends on the warpPerspective function working with RGBA.
You can try to use the alpha channel of each image.
Before warping, convert each image to RGBA (see the code below), where the alpha channel will be 0 for the black lines and 255 for all other pixels.</p>
<pre><code>import cv2
import numpy as np
# Read img
img = cv2.imread('i.jpg')
# Create mask from all the black lines
mask = np.zeros((img.shape[0],img.shape[1]),np.uint8)
cv2.inRange(img,(0,0,0),(1,1,1),mask)
mask[mask==0]=1
mask[mask==255]=0
mask = mask*255
b_channel, g_channel, r_channel = cv2.split(img)
# Create a new image with 4 channels; the fourth channel, alpha, gives the opacity for each pixel
newImage = cv2.merge((b_channel, g_channel, r_channel, mask))
</code></pre>
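<p>The mask-building step (alpha 0 for pure-black seam pixels, 255 elsewhere) can also be done with plain NumPy, which is handy for checking the logic without OpenCV installed. A sketch on a tiny made-up image:</p>

```python
import numpy as np

# Tiny stand-in image: left column is pure black, right column is not
img = np.array([[[0, 0, 0], [10, 20, 30]],
                [[0, 0, 0], [40, 50, 60]]], dtype=np.uint8)

# Alpha channel: 0 where all three channels are 0, 255 everywhere else
alpha = np.where((img == 0).all(axis=2), 0, 255).astype(np.uint8)
rgba = np.dstack([img, alpha])

print(alpha.tolist())  # [[0, 255], [0, 255]]
print(rgba.shape)      # (2, 2, 4)
```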
| 0 | 2016-08-05T12:11:36Z | [
"python",
"opencv",
"image-processing",
"computer-vision",
"panoramas"
] |
How to calculate percentage of sparsity for a numpy array/matrix? | 38,708,621 | <p>I have the following 10 by 5 numpy array/matrix, which has a number of <code>NaN</code> values:</p>
<pre><code>array([[ 0., 0., 0., 0., 1.],
[ 1., 1., 0., nan, nan],
[ 0., nan, 1., nan, nan],
[ 1., 1., 1., 1., 0.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., nan],
[ nan, nan, 1., 1., 1.],
[ 0., 1., 0., 1., 0.],
[ 1., 0., 1., 0., 0.],
[ 0., 1., 0., 0., 0.]])
</code></pre>
<p>How does one measure exactly how sparse this array is? Is there a simple function in numpy for measuring the percentage of missing values?</p>
| 4 | 2016-08-01T21:44:34Z | 38,709,042 | <pre><code>np.isnan(a).sum()
</code></pre>
<p>gives the number of <code>nan</code> values, in this example 8. </p>
<pre><code>np.prod(a.shape)
</code></pre>
<p>is the number of values, here 50. Their ratio should give the desired value.</p>
<pre><code>In [1081]: np.isnan(a).sum()/np.prod(a.shape)
Out[1081]: 0.16
</code></pre>
<p>You might also find it useful to make a masked array from this</p>
<pre><code>In [1085]: a_ma=np.ma.masked_invalid(a)
In [1086]: print(a_ma)
[[0.0 0.0 0.0 0.0 1.0]
[1.0 1.0 0.0 -- --]
[0.0 -- 1.0 -- --]
[1.0 1.0 1.0 1.0 0.0]
[0.0 0.0 0.0 1.0 0.0]
[0.0 0.0 0.0 0.0 --]
[-- -- 1.0 1.0 1.0]
[0.0 1.0 0.0 1.0 0.0]
[1.0 0.0 1.0 0.0 0.0]
[0.0 1.0 0.0 0.0 0.0]]
</code></pre>
<p>The number of valid values then is:</p>
<pre><code>In [1089]: a_ma.compressed().shape
Out[1089]: (42,)
</code></pre>
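<p>Putting the pieces together as one runnable snippet (the array here is a small stand-in for the 10 by 5 one above):</p>

```python
import numpy as np

a = np.array([[0.0, 1.0, np.nan],
              [np.nan, 0.0, 1.0]])

# Fraction of missing values: NaN count divided by total element count
nan_fraction = np.isnan(a).sum() / a.size  # a.size == np.prod(a.shape)
print(nan_fraction)  # 2 of 6 values are NaN

# Count of valid (non-NaN) values via a masked array
valid = np.ma.masked_invalid(a).compressed().shape[0]
print(valid)  # 4
```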
| 3 | 2016-08-01T22:25:58Z | [
"python",
"arrays",
"numpy",
"matrix",
"sparse-matrix"
] |
How to make combinations nCr from x to y (nCx -nCy) | 38,708,625 | <p>I want to make some combinations of all 160 elements in my list, but I don't want to make all possible combinations or it will never end. I just want some, let's say of sizes 1, 2, 3, and 4.</p>
<p>Instead of doing one by one:</p>
<pre><code>combination = itertools.combinations(lst, 1)
combination = itertools.combinations(lst, 2)
combination = itertools.combinations(lst, 3)
combination = itertools.combinations(lst, 4)
</code></pre>
<p>How can I do all 4???</p>
| 0 | 2016-08-01T21:45:05Z | 38,708,684 | <p>How about this simple <code>for</code> loop:</p>
<pre><code>import itertools

comb = []
for i in range(1, 5):  # (start, end + 1)
    comb.append(itertools.combinations(lst, i))
</code></pre>
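<p>A self-contained run of this kind of loop (using append to grow the list) on a 4-element example list, checking the sizes against the binomial coefficients C(4,1)..C(4,4):</p>

```python
import itertools

lst = ["a", "b", "c", "d"]  # small stand-in for the 160-element list
comb = []
for i in range(1, 5):  # (start, end + 1)
    comb.append(list(itertools.combinations(lst, i)))

print([len(c) for c in comb])  # [4, 6, 4, 1]
```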
| 0 | 2016-08-01T21:49:31Z | [
"python",
"list",
"combinations",
"itertools"
] |
How to make combinations nCr from x to y (nCx -nCy) | 38,708,625 | <p>I want to make some combinations of all 160 elements in my list, but I don't want to make all possible combinations or it will never end. I just want some, let's say of sizes 1, 2, 3, and 4.</p>
<p>Instead of doing one by one:</p>
<pre><code>combination = itertools.combinations(lst, 1)
combination = itertools.combinations(lst, 2)
combination = itertools.combinations(lst, 3)
combination = itertools.combinations(lst, 4)
</code></pre>
<p>How can I do all 4???</p>
| 0 | 2016-08-01T21:45:05Z | 38,708,770 | <p>You can create a single iterator containing all the combinations with <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools.chain.from_iterable</code></a>:</p>
<pre><code>combination = chain.from_iterable(combinations(lst, i) for i in range(1,5))
</code></pre>
<p>Example with shorter input:</p>
<pre><code>>>> list(chain.from_iterable(combinations(range(3), i) for i in range(1,3)))
[(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
</code></pre>
| 0 | 2016-08-01T21:56:56Z | [
"python",
"list",
"combinations",
"itertools"
] |
How do I filter SQLAlchemy results based on a columns' value? | 38,708,645 | <p>I'm trying to serialize results from a SQLAlchemy query. I'm new to the ORM so I'm not sure how to filter a result set after I've retrieved it. The result set looks like this, if I were to flatten the objects:</p>
<blockquote>
<p>A1 B1 V1</p>
<p>A1 B1 V2</p>
<p>A2 B2 V3</p>
</blockquote>
<p>I need to serialize these into a list of objects, one per unique value for A, each with a list of the V values, i.e.:</p>
<p>Object1:</p>
<ul>
<li><p>A: A1</p></li>
<li><p>B: B1</p></li>
<li><p>V: {V1, V2}</p></li>
</ul>
<p>Object2:</p>
<ul>
<li><p>A: A2</p></li>
<li><p>B: B2</p></li>
<li><p>V: {V3}</p></li>
</ul>
<p>Is there a way to iterate through all unique values on a given column, but with the ability to return a list of values from the other columns?</p>
| 0 | 2016-08-01T21:46:44Z | 38,806,913 | <p>Yup just use <code>func.array_agg</code> and <code>group_by</code></p>
<pre><code>from sqlalchemy import func

(session.query(Object.col1, Object.col2, func.array_agg(Object.col3))
    .group_by(Object.col1)
    .group_by(Object.col2)
    .all())
</code></pre>
<p>But this will only work with database back ends that have an equivalent aggregation function; apart from that, you would have to write your own group-by pseudo-function. SQLAlchemy merely translates your Python code into the appropriate SQL; all other logic is left up to the programmer.</p>
<p>The equivalent PostgreSQL:</p>
<pre><code>SELECT col1, col2, array_agg(col3)
FROM objects
GROUP BY col1, col2;
</code></pre>
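<p>The same GROUP BY plus aggregate idea can be tried end to end with the standard library's sqlite3 module; SQLite has no array_agg, so group_concat stands in for it here (the table and values mirror the question):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objects (col1 TEXT, col2 TEXT, col3 TEXT)")
con.executemany("INSERT INTO objects VALUES (?, ?, ?)",
                [("A1", "B1", "V1"), ("A1", "B1", "V2"), ("A2", "B2", "V3")])

# One row per unique (col1, col2) pair, with the col3 values aggregated
rows = con.execute("""SELECT col1, col2, group_concat(col3)
                      FROM objects
                      GROUP BY col1, col2
                      ORDER BY col1""").fetchall()
print(rows)
```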
| 0 | 2016-08-06T17:25:09Z | [
"python",
"orm",
"sqlalchemy"
] |
How do I filter SQLAlchemy results based on a columns' value? | 38,708,645 | <p>I'm trying to serialize results from a SQLAlchemy query. I'm new to the ORM so I'm not sure how to filter a result set after I've retrieved it. The result set looks like this, if I were to flatten the objects:</p>
<blockquote>
<p>A1 B1 V1</p>
<p>A1 B1 V2</p>
<p>A2 B2 V3</p>
</blockquote>
<p>I need to serialize these into a list of objects, one per unique value for A, each with a list of the V values, i.e.:</p>
<p>Object1:</p>
<ul>
<li><p>A: A1</p></li>
<li><p>B: B1</p></li>
<li><p>V: {V1, V2}</p></li>
</ul>
<p>Object2:</p>
<ul>
<li><p>A: A2</p></li>
<li><p>B: B2</p></li>
<li><p>V: {V3}</p></li>
</ul>
<p>Is there a way to iterate through all unique values on a given column, but with the ability to return a list of values from the other columns?</p>
| 0 | 2016-08-01T21:46:44Z | 38,882,242 | <p>Turns out I needed to use association tables and the joinedload() function. The documentation is a bit wonky but I got there after playing with it for a while.</p>
| 0 | 2016-08-10T19:45:24Z | [
"python",
"orm",
"sqlalchemy"
] |
Remove ns0 from XML | 38,708,658 | <p>I have an XML file where I would like to edit certain attributes. I am able to properly edit the attributes but when I write the changes to the file, the tags have a strange "ns0" added onto them. How can I get rid of this? This is what I have tried and have been unsuccessful. I am working in python and using lxml.</p>
<pre><code> import xml.etree.ElementTree as ET
from xml.etree import ElementTree as etree
from lxml import etree, objectify
frag_xml_tree = ET.parse(xml_name)
frag_root = frag_xml_tree.getroot()
for e in frag_root:
for elem in frag_root.iter(e):
elem.attrib[frag_param_name] = update_val
etree.register_namespace("", "http://www.w3.org/2001")
frag_xml_tree.write(xml_name)
</code></pre>
<p>However, when I do this, I only get the error "Invalid tag name u''". I thought this error came up if the XML tags started with digits, but that is not the case with my XML. I am really stuck on how to proceed. Thanks</p>
| 0 | 2016-08-01T21:47:39Z | 38,709,043 | <p>Actually the way to do it seemed to be a combination of two things. </p>
<ol>
<li>The import statement is import xml.etree.ElementTree as ET</li>
<li>ET.register_namespace("", NAMESPACE) is the correct call, where NAMESPACE is the namespace listed in the input XML, i.e. the URL after xmlns.</li>
</ol>
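<p>A small runnable illustration of point 2 (the namespace URI below is a made-up stand-in for the one in your file):</p>

```python
import xml.etree.ElementTree as ET

NS = "http://example.com/ns"   # stand-in for the URI after xmlns= in your XML
ET.register_namespace("", NS)  # register as the default (prefix-less) namespace

root = ET.Element("{%s}root" % NS)
ET.SubElement(root, "{%s}child" % NS).set("attr", "updated")

xml_bytes = ET.tostring(root)
print(xml_bytes)  # no ns0: prefixes, just xmlns="http://example.com/ns"
```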
| 0 | 2016-08-01T22:26:06Z | [
"python",
"xml",
"lxml"
] |