title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Formatted string literals in Python 3.6 with tuples | 38,763,895 | <p>With <code>str.format()</code> I can use tuples for accessing arguments:</p>
<pre><code>>>> '{0}, {1}, {2}'.format('a', 'b', 'c')
</code></pre>
<blockquote>
<p>'a, b, c'</p>
</blockquote>
<p>or </p>
<pre><code>>>> t = ('a', 'b', 'c')
>>> '{0}, {1}, {2}'.format(*t)
</code></pre>
<blockquote>
<p>'a, b, c'</p>
</blockquote>
<p>But with the new formatted string literals prefixed with 'f', how can I use tuples?</p>
 | 0 | 2016-08-04T09:44:02Z | 38,764,165 | <p>Your first <code>str.format()</code> call is a regular method call with 3 arguments; there is <em>no tuple involved there</em>. Your second call uses the <code>*</code> splat call syntax: the <code>str.format()</code> call receives 3 separate individual arguments and doesn't care that they came from a tuple.</p>
<p>Formatted string literals prefixed with <code>f</code> don't use a method call, so you can't use either technique. Each slot in an <code>f'..'</code> string is instead evaluated as a regular Python expression.</p>
<p>You'll have to extract your values from the tuple directly:</p>
<pre><code>f'{t[0]}, {t[1]}, {t[2]}'
</code></pre>
<p>or first expand your tuple into new local variables:</p>
<pre><code>a, b, c = t
f'{a}, {b}, {c}'
</code></pre>
<p>or simply continue to use <code>str.format()</code>. You don't <em>have</em> to use an <code>f'..'</code> formatting string; it is a <em>new, additional feature</em> of the language, not a replacement for <code>str.format()</code>.</p>
<p>From <a href="https://www.python.org/dev/peps/pep-0498/" rel="nofollow">PEP 498 -- <em>Literal String Interpolation</em></a>:</p>
<blockquote>
<p>This PEP does not propose to remove or deprecate any of the existing string formatting mechanisms.</p>
</blockquote>
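<p>And since each slot is evaluated as a full expression, the join can also happen inside the literal itself; a small sketch (Python 3.6+):</p>

```python
t = ('a', 'b', 'c')

# each {...} slot is an ordinary expression, so a method call works too
result = f'{", ".join(t)}'
print(result)
```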
| 1 | 2016-08-04T09:57:44Z | [
"python",
"python-3.x",
"tuples",
"string-formatting",
"python-3.6"
] |
How to save a table in Python? | 38,763,910 | <p>I am trying to get a table with data saved as an SVG, PDF or PNG file. Are there any libraries to do this?</p>
<p>I've tried pygal, but it seems it only supports saving charts.</p>
<p>Edited: This table is just a couple of arrays with data, and I need to build a nice table from them</p>
| 0 | 2016-08-04T09:44:58Z | 38,764,770 | <p>Use tabulate, the documentation can be found <a href="https://pypi.python.org/pypi/tabulate" rel="nofollow">here</a></p>
| 0 | 2016-08-04T10:25:07Z | [
"python",
"pygal"
] |
Sort a list of lists in python | 38,764,161 | <p>This is how my list looks after some CSV parsing:</p>
<pre><code>list=[['1131', '01/06/15', 'PROFI ROM FOOD SRL', '290.7'],
['1131', '', '', ''], ['2024194PJ', '01/08/15',
'SOCIETATEA NATIONALA DE', '2,088.17'], ['2024194PJ', '', 'RADIOCOMUNICATII SA', '']]
</code></pre>
<p>(this is a data sample; the actual list will be bigger)
I will parse that list:</p>
<pre><code>for a in list:
for x in a:
if ....:
anotherlist.append(x)
</code></pre>
<p>I want this output:</p>
<pre><code>anotherlist=[['1131', '01/06/15', 'PROFI ROM FOOD SRL', '290.7'],
['2024194PJ', '01/08/15', 'SOCIETATEA NATIONALA DE RADIOCOMUNICATII
SA', '2,088.17']]
</code></pre>
<p>I want to append the value at index 2 to the previous list and delete that specific list, so this </p>
<pre><code>[['2024194PJ', '01/08/15', 'SOCIETATEA NATIONALA DE',
'2,088.17'], ['2024194PJ', '', 'RADIOCOMUNICATII SA', '']]
</code></pre>
<p>will be this </p>
<pre><code>[['2024194PJ', '01/08/15',
'SOCIETATEA NATIONALA DE RADIOCOMUNICATII SA', '2,088.17']]
</code></pre>
<p>and also get rid of this format </p>
<pre><code>['1131', '', '', '']
</code></pre>
<p>But I don't know how to do that.</p>
| -1 | 2016-08-04T09:57:36Z | 38,764,373 | <p>Assuming the list will always follow the schema you provided:</p>
<pre><code>list2 = []
for i in range(len(list) // 2):
list2 += [[list[i*2][0],
list[i*2][1],
list[i*2][2] + ' ' + list[i*2+1][2],
list[i*2][3]]]
</code></pre>
<p><strong>Explanation:</strong><br>
We start with an empty list (list2):</p>
<pre><code>list2 = []
</code></pre>
<p>A for loop then walks your list two entries at a time:</p>
<pre><code>for i in range(len(list) // 2):
</code></pre>
<p>At each step, it adds a new entry to list2:</p>
<pre><code>list2 +=
</code></pre>
<p>This new entry is a combination of two list elements, list[i*2] and list[i*2+1]:</p>
<pre><code> [[list[i*2][0],
list[i*2][1],
list[i*2][2] + ' ' + list[i*2+1][2],
list[i*2][3]]]
</code></pre>
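<p>Assuming the rows really do come in consecutive pairs, the same pairing can also be written with <code>zip</code> over even/odd slices, which avoids the manual index arithmetic (a sketch):</p>

```python
rows = [['1131', '01/06/15', 'PROFI ROM FOOD SRL', '290.7'],
        ['1131', '', '', ''],
        ['2024194PJ', '01/08/15', 'SOCIETATEA NATIONALA DE', '2,088.17'],
        ['2024194PJ', '', 'RADIOCOMUNICATII SA', '']]

merged = []
for first, second in zip(rows[::2], rows[1::2]):
    # glue the name fragments together and drop the filler row
    name = (first[2] + ' ' + second[2]).strip()
    merged.append([first[0], first[1], name, first[3]])

print(merged)
```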
| 0 | 2016-08-04T10:07:15Z | [
"python",
"list"
] |
Pandas Divide dataframe by index values | 38,764,425 | <p>I am trying to divide all columns in the dataframe by the index (1221 rows, 1000 columns):</p>
<pre><code> 5000058004097 5000058022936 5000058036940 5000058036827 \
91.0 3.667246e+10 3.731947e+12 2.792220e+14 2.691262e+13
94.0 9.869027e+10 1.004314e+13 7.514220e+14 7.242529e+13
96.0 2.536914e+11 2.581673e+13 1.931592e+15 1.861752e+14
...
</code></pre>
<p>Here is the code I have tried...</p>
<pre><code>A = SHIGH.divide(SHIGH.index, axis =1)
</code></pre>
<p>and I get this error:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (1221,1000) (1221,)
</code></pre>
<p>I have also tried</p>
<pre><code>A = SHIGH.divide(SHIGH.index.values.tolist(), axis =1)
</code></pre>
<p>and also reindexing and using the column to divide and get the same error. </p>
<p>If someone could please point out my mistake it would be much appreciated. </p>
| 2 | 2016-08-04T10:09:24Z | 38,764,467 | <p>You need to convert the <code>Index</code> object to a <code>Series</code>:</p>
<pre><code>df.div(df.index.to_series(), axis=0)
</code></pre>
<p>Example:</p>
<pre><code>In [118]:
df = pd.DataFrame(np.random.randn(5,3))
df
Out[118]:
0 1 2
0 0.828540 -0.574005 -0.535122
1 -0.126242 2.152599 -1.356933
2 0.289270 -0.663178 -0.374691
3 -0.016866 -0.760110 -1.696402
4 0.130580 -1.043561 0.789491
In [124]:
df.div(df.index.to_series(), axis=0)
Out[124]:
0 1 2
0 inf -inf -inf
1 -0.126242 2.152599 -1.356933
2 0.144635 -0.331589 -0.187345
3 -0.005622 -0.253370 -0.565467
4 0.032645 -0.260890 0.197373
</code></pre>
| 1 | 2016-08-04T10:11:41Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
Pandas Divide dataframe by index values | 38,764,425 | <p>I am trying to divide all columns in the dataframe by the index (1221 rows, 1000 columns):</p>
<pre><code> 5000058004097 5000058022936 5000058036940 5000058036827 \
91.0 3.667246e+10 3.731947e+12 2.792220e+14 2.691262e+13
94.0 9.869027e+10 1.004314e+13 7.514220e+14 7.242529e+13
96.0 2.536914e+11 2.581673e+13 1.931592e+15 1.861752e+14
...
</code></pre>
<p>Here is the code I have tried...</p>
<pre><code>A = SHIGH.divide(SHIGH.index, axis =1)
</code></pre>
<p>and I get this error:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (1221,1000) (1221,)
</code></pre>
<p>I have also tried</p>
<pre><code>A = SHIGH.divide(SHIGH.index.values.tolist(), axis =1)
</code></pre>
<p>and also reindexing and using the column to divide and get the same error. </p>
<p>If someone could please point out my mistake it would be much appreciated. </p>
 | 2 | 2016-08-04T10:09:24Z | 38,764,482 | <p>You need to convert the index with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a> and then divide with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="nofollow"><code>div</code></a>:</p>
<pre><code>print (SHIGH.divide(SHIGH.index.to_series(), axis = 0))
5000058004097 5000058022936 5000058036940 5000058036827
91.0 4.029941e+08 4.101041e+10 3.068374e+12 2.957431e+11
94.0 1.049896e+09 1.068419e+11 7.993851e+12 7.704818e+11
96.0 2.642619e+09 2.689243e+11 2.012075e+13 1.939325e+12
</code></pre>
<p>Timings for both solutions are the same:</p>
<pre><code>SHIGH = pd.DataFrame({'5000058022936': {96.0: 25816730000000.0, 91.0: 3731947000000.0, 94.0: 10043140000000.0},
'5000058036940': {96.0: 1931592000000000.0, 91.0: 279222000000000.0, 94.0: 751422000000000.0},
'5000058036827': {96.0: 186175200000000.0, 91.0: 26912620000000.0, 94.0: 72425290000000.0},
'5000058004097': {96.0: 253691400000.0, 91.0: 36672460000.0, 94.0: 98690270000.0}})
print (SHIGH)
5000058004097 5000058022936 5000058036827 5000058036940
91.0 3.667246e+10 3.731947e+12 2.691262e+13 2.792220e+14
94.0 9.869027e+10 1.004314e+13 7.242529e+13 7.514220e+14
96.0 2.536914e+11 2.581673e+13 1.861752e+14 1.931592e+15
#[1200 rows x 1000 columns] in sample DataFrame
SHIGH = pd.concat([SHIGH]*400).reset_index(drop=True)
SHIGH = pd.concat([SHIGH]*250, axis=1)
In [212]: %timeit (SHIGH.divide(SHIGH.index.values, axis = 0))
100 loops, best of 3: 14.8 ms per loop
In [213]: %timeit (SHIGH.divide(SHIGH.index.to_series(), axis = 0))
100 loops, best of 3: 14.9 ms per loop
</code></pre>
| 1 | 2016-08-04T10:12:09Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
Pandas Divide dataframe by index values | 38,764,425 | <p>I am trying to divide all columns in the dataframe by the index (1221 rows, 1000 columns):</p>
<pre><code> 5000058004097 5000058022936 5000058036940 5000058036827 \
91.0 3.667246e+10 3.731947e+12 2.792220e+14 2.691262e+13
94.0 9.869027e+10 1.004314e+13 7.514220e+14 7.242529e+13
96.0 2.536914e+11 2.581673e+13 1.931592e+15 1.861752e+14
...
</code></pre>
<p>Here is the code I have tried...</p>
<pre><code>A = SHIGH.divide(SHIGH.index, axis =1)
</code></pre>
<p>and I get this error:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (1221,1000) (1221,)
</code></pre>
<p>I have also tried</p>
<pre><code>A = SHIGH.divide(SHIGH.index.values.tolist(), axis =1)
</code></pre>
<p>and also reindexing and using the column to divide and get the same error. </p>
<p>If someone could please point out my mistake it would be much appreciated. </p>
| 2 | 2016-08-04T10:09:24Z | 38,764,537 | <p>Another way of doing this is </p>
<pre><code>df.div(df.index.values, axis=0)
</code></pre>
<p>Example:</p>
<pre><code>In [7]: df = pd.DataFrame({'a': range(5), 'b': range(1, 6), 'c': range(2, 7)}).set_index('a')
In [8]: df.divide(df.index.values, axis=0)
Out[8]:
b c
a
0 inf inf
1 2.000000 3.000000
2 1.500000 2.000000
3 1.333333 1.666667
4 1.250000 1.500000
</code></pre>
| 1 | 2016-08-04T10:14:32Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
Initialize multiple lists in python | 38,764,493 | <pre><code>a = 2
b = 3
c = 4
x = y = z = [0 for i in xrange(a*b*c)]
</code></pre>
<p>Is there a way in which x, y, z can be initialized <strong>in one line</strong> (because I don't want to multiply a, b and c for each list initialization) as separate lists of 0s? In the above, if x is updated, then y and z also get updated simultaneously with the same changes.</p>
| 0 | 2016-08-04T10:12:35Z | 38,764,558 | <p>Just use another comprehension and unpack it:</p>
<pre><code>x, y, z = [[0 for i in xrange(a*b*c)] for _ in xrange(3)]
</code></pre>
<p>Note that <code>[0 for i in xrange(a*b*c)]</code> is equivalent to the simpler <code>[0] * (a*b*c)</code>.</p>
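<p>A quick sketch (in Python 3 spelling, i.e. <code>range</code> instead of <code>xrange</code>) confirming that the unpacked lists are independent, unlike the chained <code>x = y = z = ...</code> assignment:</p>

```python
a, b, c = 2, 3, 4
x, y, z = [[0] * (a * b * c) for _ in range(3)]

x[0] = 99  # mutate only x

print(y[0], z[0])  # y and z are unaffected: they are separate lists
```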
| 3 | 2016-08-04T10:15:20Z | [
"python",
"initialization",
"generator-expression"
] |
Initialize multiple lists in python | 38,764,493 | <pre><code>a = 2
b = 3
c = 4
x = y = z = [0 for i in xrange(a*b*c)]
</code></pre>
<p>Is there a way in which x, y, z can be initialized <strong>in one line</strong> (because I don't want to multiply a, b and c for each list initialization) as separate lists of 0s? In the above, if x is updated, then y and z also get updated simultaneously with the same changes.</p>
| 0 | 2016-08-04T10:12:35Z | 38,764,707 | <p>Looking at your stated intention rather than at the 'one line' requirement:</p>
<pre><code>a = 2
b = 3
c = 4
x = [0 for i in xrange(a*b*c)]
y = x[:]
z = x[:]
</code></pre>
<p>Not sure the optimizer is clever enough to avoid repeated multiplication at:</p>
<pre><code>x, y, z = [[0 for i in xrange(a*b*c)] for _ in xrange(3)]
</code></pre>
<p>Suppose a, b, and c were properties, so reading them would have side effects. How could the optimizer know this in a dynamically typed language?</p>
| 0 | 2016-08-04T10:21:38Z | [
"python",
"initialization",
"generator-expression"
] |
Is there a break function in python (for PyCharm or other IDE)? | 38,764,742 | <p>There is a <code>__debugbreak</code> function in C++.</p>
<p>I need a similar function that breaks at run time, with the possibility to resume, in my Python code (with the PyCharm IDE).</p>
 | 1 | 2016-08-04T10:23:35Z | 38,765,075 | <p>There are the pdb (and ipdb) modules, which provide interactive debuggers.</p>
<p>You can use </p>
<pre><code>import pdb; pdb.set_trace()
</code></pre>
<p>to insert a breakpoint wherever you want.</p>
<p>I'm not sure how these will work with PyCharm (for which you should just be able to click to add a breakpoint anyway) but the literal answer to your question is "yes".</p>
<p>Using <code>ipdb</code> from the command line is a very easy way to debug your Python code.</p>
<p>See <a href="https://docs.python.org/2.7/library/pdb.html" rel="nofollow">https://docs.python.org/2.7/library/pdb.html</a></p>
| 2 | 2016-08-04T10:40:10Z | [
"python",
"debugging",
"pycharm",
"breakpoints",
"break"
] |
Groupby with conditions in pandas | 38,764,766 | <p>I have a <code>pd.DataFrame</code> that looks like this:</p>
<pre><code>In [30]: df
Out[30]:
DATES UID A
0 2014-01-01 1 False
1 2014-01-02 2 False
2 2014-01-03 3 True
3 2014-01-04 4 True
4 2014-01-05 5 False
5 2014-01-06 6 True
6 2014-01-07 1 False
7 2014-01-08 2 False
8 2014-01-09 3 False
9 2014-01-10 2 False
10 2014-01-11 3 False
11 2014-01-12 4 False
12 2014-01-13 5 False
13 2014-01-14 3 False
14 2014-01-15 1 False
</code></pre>
<p>and I would like to find a way to:</p>
<ol>
<li>Order by DATES ASC</li>
<li>Group by UID</li>
<li>Filter out all UID's where the first entry (per UID) has 'A' == False</li>
</ol>
<p>The desired output would look like this:</p>
<pre><code>In [30]: df
Out[30]:
DATES UID A
0 2014-01-03 3 True
1 2014-01-04 4 True
2 2014-01-06 6 True
3 2014-01-09 3 False
4 2014-01-11 3 False
5 2014-01-12 4 False
6 2014-01-14 3 False
</code></pre>
<p>Any ideas very much appreciated, thanks!</p>
 | 0 | 2016-08-04T10:25:03Z | 38,764,963 | <p>It looks like you need to first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow"><code>filter</code></a>:</p>
<pre><code>df.sort_values(by='DATES', inplace=True)
df = df.groupby('UID', sort=False).filter(lambda x: x.A.iloc[0] == True)
print (df)
DATES UID A
2 2014-01-03 3 True
3 2014-01-04 4 True
5 2014-01-06 6 True
8 2014-01-09 3 False
10 2014-01-11 3 False
11 2014-01-12 4 False
13 2014-01-14 3 False
</code></pre>
| 1 | 2016-08-04T10:34:40Z | [
"python",
"pandas"
] |
Python: Writing a script that prints filenames of zero-length files and a script that counts all images in a web page | 38,764,891 | <p>I am currently learning Python, and could really use help from experienced coders to get started on solving this assignment:</p>
<ol>
<li><p>Using os.walk, write a script that will print the filenames of zero length files. It should also print the count of zero length files.</p></li>
<li><p>Write a script that will list and count all of the images in a given HTML web page/file. You can assume that:</p>
<pre><code> Each image file is enclosed with the tag <img and ends with >
The HTML page/file is syntactically correct
</code></pre></li>
</ol>
<p>Any input is much appreciated!</p>
 | -3 | 2016-08-04T10:31:01Z | 38,767,689 | <p>You can use BeautifulSoup to easily count the number of images on the page. All you would need to do is find all of the <code>img</code> tags and take the length of the result.</p>
<pre><code>import urllib.request
from bs4 import BeautifulSoup
url = 'whatever the website is'
r = urllib.request.urlopen(url).read()
soup = BeautifulSoup(r, 'html.parser')
num_images = len(soup.find_all('img'))
print(num_images)
</code></pre>
<p>This code hasn't been compiled. I don't think it's entirely accurate, but it should give you more than enough of an idea on how to do it.</p>
<p>Better yet would be to take a look at this SO post, specifically the answer that I've linked, which has an implementation using regex: <a href="http://stackoverflow.com/a/17395503/6464893">http://stackoverflow.com/a/17395503/6464893</a></p>
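<p>The first part of the question (zero-length files) doesn't need any third-party package; a sketch with <code>os.walk</code>:</p>

```python
import os

def report_empty_files(root):
    """Print the path of every zero-length file under root, then the count."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) == 0:
                print(path)
                count += 1
    print(count)
    return count
```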
| 0 | 2016-08-04T12:43:53Z | [
"python",
"python-3.x",
"os.walk"
] |
Open all files in a folder starting with a certain letter | 38,764,902 | <p>I would like to open all Excel files in a folder that start with a certain string. For example, let's say that I want all files that start with 'hello'. From the following list:
1)hello1.xls
2)hello2.xls
3)other2.xls
4)hello3.xls
5)other3.xls</p>
<p>I would like to open files 1, 2, 4. I would like to open each file, process it and then open the next file. So the workflow should look something like:</p>
<pre><code> for i in files:
if string=='hello'
pd.read_xls(i)
do things
</code></pre>
<p>Thanks in advance.</p>
| -3 | 2016-08-04T10:31:40Z | 38,765,002 | <p>Assuming all files are in your current working directory, you can use <code>glob</code> like this:</p>
<pre><code>import glob
file_names = glob.glob("hello*")
for file_name in file_names:
with open(file_name) as f:
for line in f:
# do things
</code></pre>
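<p>For Excel files specifically, the same <code>glob</code> pattern can feed <code>pandas.read_excel</code>; a sketch (the folder and prefix are illustrative):</p>

```python
import glob
import os

def matching_files(folder, prefix):
    """Return sorted paths in folder whose names start with prefix."""
    return sorted(glob.glob(os.path.join(folder, prefix + '*.xls')))

# for path in matching_files('/some/folder', 'hello'):
#     df = pd.read_excel(path)   # then "do things" with each df
```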
| 1 | 2016-08-04T10:36:32Z | [
"python",
"pandas"
] |
Python JSON variables not being accepted | 38,764,938 | <p>I'm trying to save the variables from a JSON string to be used as settings for a different function; sadly, they are not being accepted the way I expected. Here's what's cooking:</p>
<p>The JSON string comes through MQTT as so:</p>
<pre><code>def on_message(client, userdata, msg):
data = json.loads(msg.payload)
camera = picamera.PiCamera()
camera.resolution = (2592, 1944)
camera.sharpness = data['sharpness']
camera.contrast = data['contrast']
</code></pre>
<p>However, when it gets the message, it errors out:</p>
<pre><code>  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 2392, in _set_sharpness
    "Invalid sharpness value: %d (valid range -100..100)" % value)
TypeError: %d format: a number is required, not unicode
</code></pre>
<p>Any idea why? I don't really understand the complaint about <code>%d</code>, because when I print the data:</p>
<pre><code>print data['sharpness']
>>> 50
</code></pre>
<p>It comes out as a number...</p>
<p>Any help is really appreciated!!</p>
| -1 | 2016-08-04T10:33:27Z | 38,765,260 | <p>Wrap your values with <code>int</code> as these are in <code>unicode</code> format</p>
<pre><code>def on_message(client, userdata, msg):
data = json.loads(msg.payload)
camera = picamera.PiCamera()
camera.resolution = (2592, 1944)
camera.sharpness = int(data['sharpness'])
camera.contrast = int(data['contrast'])
</code></pre>
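<p>The root cause is visible if you compare what <code>json.loads</code> returns for a JSON number versus a JSON string; <code>print</code> shows <code>50</code> either way, which is why it looked like a number (a sketch):</p>

```python
import json

as_number = json.loads('{"sharpness": 50}')    # JSON number -> int
as_string = json.loads('{"sharpness": "50"}')  # JSON string -> unicode/str

print(as_number['sharpness'], type(as_number['sharpness']))
print(as_string['sharpness'], type(as_string['sharpness']))
```

<p>So the MQTT publisher is probably quoting the values; either fix the publisher to send bare numbers, or convert on the receiving side.</p>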
| 0 | 2016-08-04T10:48:17Z | [
"python",
"json"
] |
Makemigrations after splitting models not detecting changes | 38,764,997 | <p>I am developing a django app in virtual environment. Django version 1.9.7.</p>
<p>I split <code>models.py</code> into multiple files and kept them in a folder with an <code>__init__.py</code> file.</p>
<pre><code>|-- Models
| |-- __init__.py
| |-- PersonModel.py
| `-- VehicleModel.py
</code></pre>
<p>The content of the <code>__init__.py</code> file is:</p>
<pre><code>from .VehicleModel import *
from .PersonModel import *
</code></pre>
<p>Inside VehicleModel.py file, I created a model. </p>
<pre><code>from django.db import models
class Vehicle(models.Model):
    regNo = models.CharField(max_length=16, null=False, blank=False, primary_key=True)
    chassisNo = models.CharField(max_length=64)
    engineNo = models.CharField(max_length=64)
    manufacturer = models.CharField(max_length=128)
    product = models.CharField(max_length=128)
</code></pre>
<p>Now when I am running python <code>manage.py makemigrations MyAppName</code> it says <code>No changes detected in app 'MyAppName'</code></p>
<p>I did initial migrations and default tables are created in DB.</p>
<p>Also I don't have anything in migrations folder except init file.</p>
 | 0 | 2016-08-04T10:36:22Z | 38,766,878 | <p>Just import all of your models into <code>models.py</code>, as Django looks for models in <code>models.py</code> when migrating a schema.</p>
| 0 | 2016-08-04T12:07:08Z | [
"python",
"django"
] |
command line python script run on a file in a different directory | 38,765,061 | <p>I have a script.py in /Users/admin/Desktop and I want to run this script on a file that is in /Users/admin/Desktop/folder/file.txt, without changing the current directory.</p>
<p>Question: what is the most efficient way to do that on the command line? I am using the following commands and the results are not as expected.</p>
<pre><code>$ python script.py --script; /Users/admin/Desktop/file.txt
raise StopIteration('because of missing file (file.txt)')
StopIteration: because of missing file (file.txt)
</code></pre>
| 0 | 2016-08-04T10:39:33Z | 38,765,328 | <ol>
<li>Remove the semicolon because that will prematurely terminate the command.</li>
<li>Pass the correct path to the file to your program. You say it is <code>/Users/admin/Desktop/folder/file.txt</code>, however, your command is using <code>/Users/admin/Desktop/file.txt</code> (it's missing <code>folder</code>)</li>
</ol>
<p>So the command should (probably) be:</p>
<pre><code>$ python script.py --script /Users/admin/Desktop/folder/file.txt
</code></pre>
<p>If that doesn't work you will need to edit your question to show your code.</p>
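<p>If script.py doesn't already accept the path robustly, a small <code>argparse</code> front end is a common way to wire it up (a sketch; assuming script.py can be modified):</p>

```python
import argparse

def parse_cli(argv=None):
    """Parse the command line; argv defaults to sys.argv[1:]."""
    parser = argparse.ArgumentParser()
    parser.add_argument('path', help='file to process')
    return parser.parse_args(argv)

# invoked as: python script.py /Users/admin/Desktop/folder/file.txt
args = parse_cli(['/Users/admin/Desktop/folder/file.txt'])
print(args.path)
```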
| 1 | 2016-08-04T10:51:38Z | [
"python",
"command-line"
] |
List of dictionaries in jinja | 38,765,097 | <p>I want to iterate through a list of dictionaries using jinja.</p>
<p>here is my list :</p>
<pre><code>[{'product': 'EC2', 'cost': 3.5145240400000013}, {'product': 'ElastiCache', 'cost': 1.632000000000001}, {'product': 'Elasticsearch', 'cost': 4.423768260000001}, {'product': 'RDS', 'cost': 1.632000000000001}]
</code></pre>
<p>My template :</p>
<pre><code>{% for dict_item in products %}
{% for product, cost in dict_item.items() %}
<h1>Product: {{ product }}</h1>
<h2>Cost: {{ cost }}</h2>
{% endfor %}
{% endfor %}
</code></pre>
<p>And finally the output :</p>
<pre><code><h1>Product: product</h1>
<h2>Cost: EC2</h2>
<h1>Product: cost</h1>
<h2>Cost: 3.51452404</h2>
<h1>Product: product</h1>
<h2>Cost: ElastiCache</h2>
<h1>Product: cost</h1>
<h2>Cost: 1.632</h2>
<h1>Product: product</h1>
<h2>Cost: Elasticsearch</h2>
<h1>Product: cost</h1>
<h2>Cost: 4.42376826</h2>
<h1>Product: product</h1>
<h2>Cost: RDS</h2>
<h1>Product: cost</h1>
<h2>Cost: 1.632</h2>
</code></pre>
<p>As you can see, there is something wrong with that output. All the data are mixed up and I don't understand why.</p>
<p>I just want something like :</p>
<pre><code><h1>Product: EC2</h1>
<h2>Cost: 3.5145240400000013</h2>
</code></pre>
 | -1 | 2016-08-04T10:41:10Z | 38,765,161 | <p>Try this:</p>
<pre><code>{% for dict_item in products %}
<h1>Product: {{ dict_item['product'] }}</h1>
<h2>Cost: {{ dict_item['cost'] }}</h2>
{% endfor %}
</code></pre>
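<p>The original output interleaves labels and values because <code>dict.items()</code> yields <em>(key, value)</em> pairs, so the inner loop's <code>product</code> variable was actually bound to the key; a quick demonstration:</p>

```python
item = {'product': 'EC2', 'cost': 3.5145240400000013}

# each iteration yields a (key, value) pair, not two separate fields
for key, value in item.items():
    print(key, '->', value)
```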
| 0 | 2016-08-04T10:44:16Z | [
"python",
"dictionary",
"jinja2"
] |
Python: How can I access IDs separated by | (pipe) in an argument | 38,765,150 | <p>I have searched both here and using google. The | (pipe) symbol is the bitwise OR operator, but I can't find anything specific to my problem. Here is an <a href="https://developers.maxon.net/docs/Cinema4DPythonSDK/html/modules/c4d.gui/GeDialog/index.html#GeDialog.AddButton" rel="nofollow">example</a> from the Cinema4D Python SDK and has an argument <em>flags</em> where you can set multiple IDs separated by the | symbol.</p>
<p>What is this specifically and how do I access the IDs in the function below?</p>
<pre><code>ID_OK = 100
ID_CANCEL = 101
def Func(flags):
print flags
return
Func(ID_OK|ID_CANCEL)
..
>> 101
</code></pre>
<p>Thank you.</p>
| 0 | 2016-08-04T10:43:49Z | 38,765,309 | <p>The key here is to "space out" the values of the flags in such a way that each combination of <code>|</code> with arbitrary number of flags produces a unique value. That way you can tell each flags are meant to be used simply by the value of the argument.</p>
<p>Even though the following article talks about C# you can still get the idea:
<a href="http://www.alanzucconi.com/2015/07/26/enum-flags-and-bitwise-operators/" rel="nofollow">http://www.alanzucconi.com/2015/07/26/enum-flags-and-bitwise-operators/</a></p>
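<p>In Python this "spacing out" means one bit per flag (powers of two). It also explains the <code>101</code> in the question: with <code>ID_OK = 100</code> and <code>ID_CANCEL = 101</code> the bit patterns overlap, so <code>100 | 101</code> collapses to <code>101</code> and the two flags can no longer be told apart. A sketch with non-overlapping bits:</p>

```python
ID_OK = 1 << 0      # 0b01
ID_CANCEL = 1 << 1  # 0b10

flags = ID_OK | ID_CANCEL  # 0b11

# each flag can be recovered independently with &
print(bool(flags & ID_OK))      # True
print(bool(flags & ID_CANCEL))  # True

# the original values overlap bitwise, so information is lost
print(100 | 101)  # 101
```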
| 0 | 2016-08-04T10:50:27Z | [
"python",
"function",
"arguments",
"python-2.x",
"flags"
] |
Python: How can I access IDs separated by | (pipe) in an argument | 38,765,150 | <p>I have searched both here and using google. The | (pipe) symbol is the bitwise OR operator, but I can't find anything specific to my problem. Here is an <a href="https://developers.maxon.net/docs/Cinema4DPythonSDK/html/modules/c4d.gui/GeDialog/index.html#GeDialog.AddButton" rel="nofollow">example</a> from the Cinema4D Python SDK and has an argument <em>flags</em> where you can set multiple IDs separated by the | symbol.</p>
<p>What is this specifically and how do I access the IDs in the function below?</p>
<pre><code>ID_OK = 100
ID_CANCEL = 101
def Func(flags):
print flags
return
Func(ID_OK|ID_CANCEL)
..
>> 101
</code></pre>
<p>Thank you.</p>
| 0 | 2016-08-04T10:43:49Z | 38,765,508 | <p>Flags like this are sometimes called "bit-masks". If you define them as hexadecimal you will find them much easier to use. </p>
<p>You can use the bitwise <code>&</code> operator to determine whether a flag is set. For example:</p>
<pre><code>ID_OK = 0x01
ID_CANCEL = 0x10
def Func(flags):
print "0x%02x" % (flags)
if flags & ID_OK:
print "ID_OK"
if flags & ID_CANCEL:
print "ID_CANCEL"
print
return
Func(ID_OK)
Func(ID_CANCEL)
Func(ID_OK|ID_CANCEL)
</code></pre>
<p>Gives:</p>
<pre><code>0x01
ID_OK
0x10
ID_CANCEL
0x11
ID_OK
ID_CANCEL
</code></pre>
<p>Flags are usually larger than this. If you have a small number of flags then it is much simpler if you can reserve one nybble for each flag.</p>
| 0 | 2016-08-04T10:59:50Z | [
"python",
"function",
"arguments",
"python-2.x",
"flags"
] |
UTF-8 issue in Datastax Cassandra | 38,765,242 | <p>I am using Datastax Cassandra version 4.8, with Solr for search. In my table I have Chinese characters as well as English characters.
Table retrieval works smoothly for English-character searches, but when I try to search by Chinese characters it either gives me 0 rows or the error below.</p>
<pre><code>cqlsh:tradebees_dev> select title,isbn,id,author from tradebees_dev.yf_product_books where solr_query = 'author:[è±ä¸½]æ±èå¨
·å纳森';
'ascii' codec can't encode characters in position 106-107: ordinal not in range(128)
</code></pre>
<p>Please suggest how to correct this. Is there any configuration setting I have to enable?</p>
| 1 | 2016-08-04T10:47:40Z | 38,765,615 | <p>It looks like the author column is configured to use an <code>ascii</code> data type. I'm not familiar with Datastax Cassandra, so I couldn't tell you how to change it, but I think if you change it to datatype <code>varchar</code> it should work.</p>
<p>Sorry for the kinda generic answer, but hope it helps anyway.</p>
| 0 | 2016-08-04T11:05:02Z | [
"php",
"python",
"solr",
"cassandra",
"datastax"
] |
UTF-8 issue in Datastax Cassandra | 38,765,242 | <p>I am using Datastax Cassandra version 4.8, with Solr for search. In my table I have Chinese characters as well as English characters.
Table retrieval works smoothly for English-character searches, but when I try to search by Chinese characters it either gives me 0 rows or the error below.</p>
<pre><code>cqlsh:tradebees_dev> select title,isbn,id,author from tradebees_dev.yf_product_books where solr_query = 'author:[è±ä¸½]æ±èå¨
·å纳森';
'ascii' codec can't encode characters in position 106-107: ordinal not in range(128)
</code></pre>
<p>Please suggest how to correct this. Is there any configuration setting I have to enable?</p>
| 1 | 2016-08-04T10:47:40Z | 38,789,546 | <p>It looks like you have run into <a href="https://issues.apache.org/jira/browse/CASSANDRA-10875" rel="nofollow">CASSANDRA-10875</a>. This was fixed in 2.0.13. You don't say which version of 4.8 you are running but I would suggest updating to the latest version of 4.8 where this issue should be fixed.</p>
| 0 | 2016-08-05T12:42:54Z | [
"php",
"python",
"solr",
"cassandra",
"datastax"
] |
pandas DataFrame set value fails | 38,765,480 | <p>I've been following the documentation, but it keeps throwing this error:</p>
<pre><code>row_names = ["ab_" + str(x) for x in range(4)]
col_names = ["n_" + str(x) for x in range(5)]
df = pd.DataFrame(index=row_names, columns=col_names)
df = df.fillna(0) # with 0s rather than NaNs
# Load the images
print df.loc['ab_3','n_1']
df.set_value['ab_3','n_1', '1']
print df
</code></pre>
<p>Error:</p>
<pre><code>TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre>
| 1 | 2016-08-04T10:58:47Z | 38,765,506 | <p>Unlike <code>loc</code>, <code>set_value</code> is a method, so it needs to be called.</p>
<p>But note that <code>df.set_value(['ab_3','n_1', '1'])</code> won't do what you expect either; the three values must be passed as separate arguments, not as a list.
See its <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_value.html" rel="nofollow">docs</a>.</p>
| 1 | 2016-08-04T10:59:46Z | [
"python",
"pandas"
] |
pandas DataFrame set value fails | 38,765,480 | <p>I've been following the documentation, but it keeps throwing this error:</p>
<pre><code>row_names = ["ab_" + str(x) for x in range(4)]
col_names = ["n_" + str(x) for x in range(5)]
df = pd.DataFrame(index=row_names, columns=col_names)
df = df.fillna(0) # with 0s rather than NaNs
# Load the images
print df.loc['ab_3','n_1']
df.set_value['ab_3','n_1', '1']
print df
</code></pre>
<p>Error:</p>
<pre><code>TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre>
 | 1 | 2016-08-04T10:58:47Z | 38,765,515 | <p>You're using the wrong type of brackets; you want <code>()</code>, not <code>[]</code>:</p>
<pre><code>df.set_value('ab_3','n_1', '1')
n_0 n_1 n_2 n_3 n_4
ab_0 0 0 0 0 0
ab_1 0 0 0 0 0
ab_2 0 0 0 0 0
ab_3 0 1 0 0 0
</code></pre>
| 2 | 2016-08-04T11:00:21Z | [
"python",
"pandas"
] |
Boolean Indexing with Pandas isn't working for me | 38,765,585 | <p>I'm getting some strange behaviour in pandas when using Boolean indexing, and I don't understand what's going wrong.</p>
<p>With a DataFrame <code>data</code> that contains a column <code>RSTAR</code> of <code>Float</code> values, among others, I'm getting the following when I try to do boolean indexing:</p>
<pre><code>rejection_list = list( data[ (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)) ].loc[:,'NAME'] )
</code></pre>
<p>Gives me an error: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<p>The following on the other hand:</p>
<pre><code>booll = (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR))
rejection_list2 = list(data[booll].loc[:,'NAME'])
</code></pre>
<p>Works fine. As far as I can tell, these two expressions should do the exact same thing. So why does the bottom one work, but not the top one?</p>
<hr>
<p>UPDATE: Still don't understand what's going on, I looked further into it and here's what happened:</p>
<p>I tried to slice the <code>data</code> DataFrame so that I could post it on here. So with <code>data = data.loc[:5,:]</code> I get the same exact error. However, with <code>data = data.loc[:5, ['RSTAR', 'NAME']]</code> I get no error and it works as it should.</p>
<p>I'm not sure how to post the entire <code>data</code> array here since it's got lots of columns, but the column names are: </p>
<pre><code>data.columns
Index(['Unnamed: 0', 'NAME', 'RADIUS', 'RUPPER', 'RLOWER', 'UR', 'MASS',
'MASSUPPER', 'MASSLOWER', 'UMASS', 'A', 'AUPPER', 'ALOWER', 'UA',
'RSTAR', 'RSTARUPPER', 'RSTARLOWER', 'URSTAR', 'TEFF', 'TEFFUPPER',
'TEFFLOWER', 'UTEFF', 'ECC', 'LUM', 'RERRMAX', 'LOG_FLUX', 'FLUX'],
dtype='object')
</code></pre>
<p>So I can't see any duplication or anything. I just don't understand what's wrong.</p>
<hr>
<p>UPDATE 2: It got more confusing. So I went into pdb again, like so:</p>
<pre><code>pdb.set_trace() ###
rejection_list = list(data[ (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)) ].loc[:,'NAME'])
</code></pre>
<p>And keeping the same <code>data</code>, I copy and pasted the exact statement above: <code>rejection_list = list(data[ (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)) ].loc[:,'NAME'])</code> and it worked <strong>while in pdb mode</strong>. However, as soon as I click <code>c</code> to continue out of pdb into the next line, the <strong>same</strong> line I just successfully executed in pdb, it gives me the error again. I'm at a complete loss here. Is it something to do with a cache? I opened a new Terminal but it's still giving me the same problem.</p>
<hr>
<p>UPDATE 3: Tried it with isnull() and notnull() and same problem.</p>
<pre><code>booll = (data.RSTAR==0) | (data.RSTAR.isnull())
data[booll]
</code></pre>
<p>works, but the following doesn't:</p>
<pre><code>rejection_list = list(data[ (data.RSTAR == 0) | (data.RSTAR.isnull()) ].loc[:,'NAME'])
</code></pre>
<hr>
<p>UPDATE 4: The opposite works with no problem: <code>data = data[(data.RSTAR != 0) & (data.RSTAR.notnull())]</code>.</p>
<hr>
<p>EDIT: To make it clear, it seems to be the case that when I execute the command by typing it in directly in pdb, it works, for the small and large dataframes. However, when I just let the script run, then it doesn't work for small or large.</p>
| 2 | 2016-08-04T11:03:41Z | 38,765,620 | <p>I think you can use a one-line solution, combining your mask with the column lookup in a single step (the pandas functions <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="nofollow"><code>notnull</code></a>/<code>isnull</code> are also handy for plain NaN checks):</p>
<pre><code>rejection_list = data.ix[(data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)), 'NAME'].tolist()
</code></pre>
<p>or:</p>
<pre><code>rejection_list = data.loc[(data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)), 'NAME'].tolist()
</code></pre>
<p>I tried to reproduce your error, but everything works correctly:</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({'RSTAR':[0,2,-np.inf, np.nan,np.inf],
'NAME':[4,5,6,7,10]})
print (data)
NAME RSTAR
0 4 0.000000
1 5 2.000000
2 6 -inf
3 7 NaN
4 10 inf
rejection_list = list( data[ (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)) ].loc[:,'NAME'])
print (rejection_list)
[4, 6, 7, 10]
booll = (data.RSTAR == 0) | (~ np.isfinite(data.RSTAR))
rejection_list2 = list(data[booll].loc[:,'NAME'])
print (rejection_list2)
[4, 6, 7, 10]
rejection_list3 = data.ix[(data.RSTAR == 0) | (~ np.isfinite(data.RSTAR)), 'NAME'].tolist()
print (rejection_list3)
[4, 6, 7, 10]
</code></pre>
| 1 | 2016-08-04T11:05:21Z | [
"python",
"pandas",
"numpy",
"scipy"
] |
Ansible - "NameError: name 'urllib2' is not defined" | 38,765,586 | <p>Getting the error below while trying to run Ansible (version > 2) with Python 3.5.2.</p>
<p>I have looked into the GitHub issue that terms it resolved, but can't work out what needs to be done: <a href="https://github.com/ansible/ansible/issues/16013" rel="nofollow">https://github.com/ansible/ansible/issues/16013</a></p>
<p>How to resolve this?</p>
<pre><code>virtual@xxxxxxxxxx:~/ansible-spike> ansible all -m ping -vvv
Using /home/virtual/ansible-spike/ansible.cfg as config file
ERROR! Unexpected Exception: name 'urllib2' is not defined
the full traceback was:
Traceback (most recent call last):
File "/home/virtual/.pyenv/versions/3.5.2/bin/ansible", line 92, in <module>
exit_code = cli.run()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/cli/adhoc.py", line 193, in run
result = self._tqm.run(play)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 202, in run
self.load_callbacks()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 171, in load_callbacks
for callback_plugin in callback_loader.all(class_only=True):
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 368, in all
self._module_cache[path] = self._load_module_source(name, path)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 319, in _load_module_source
module = imp.load_source(name, path, module_file)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/callback/hipchat.py", line 32, in <module>
from ansible.module_utils.urls import open_url
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/module_utils/urls.py", line 330, in <module>
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib2, 'HTTPSHandler'):
NameError: name 'urllib2' is not defined
</code></pre>
| 0 | 2016-08-04T11:03:41Z | 38,765,845 | <p>Urllib2 is specific to Python v2.</p>
<p>From the urllib2 documentation at <a href="http://docs.python.org/library/urllib2.html" rel="nofollow">http://docs.python.org/library/urllib2.html</a>:</p>
<blockquote>
<p>The urllib2 module has been split across several modules in Python 3.0
named urllib.request and urllib.error.</p>
</blockquote>
<p>I don't think Ansible is compatible with Python 3 yet.</p>
| 3 | 2016-08-04T11:15:39Z | [
"python",
"linux",
"ansible"
] |
Ansible - "NameError: name 'urllib2' is not defined" | 38,765,586 | <p>Getting the error below while trying to run Ansible (version > 2) with Python 3.5.2.</p>
<p>I have looked into the GitHub issue that terms it resolved, but can't work out what needs to be done: <a href="https://github.com/ansible/ansible/issues/16013" rel="nofollow">https://github.com/ansible/ansible/issues/16013</a></p>
<p>How to resolve this?</p>
<pre><code>virtual@xxxxxxxxxx:~/ansible-spike> ansible all -m ping -vvv
Using /home/virtual/ansible-spike/ansible.cfg as config file
ERROR! Unexpected Exception: name 'urllib2' is not defined
the full traceback was:
Traceback (most recent call last):
File "/home/virtual/.pyenv/versions/3.5.2/bin/ansible", line 92, in <module>
exit_code = cli.run()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/cli/adhoc.py", line 193, in run
result = self._tqm.run(play)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 202, in run
self.load_callbacks()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 171, in load_callbacks
for callback_plugin in callback_loader.all(class_only=True):
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 368, in all
self._module_cache[path] = self._load_module_source(name, path)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 319, in _load_module_source
module = imp.load_source(name, path, module_file)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/callback/hipchat.py", line 32, in <module>
from ansible.module_utils.urls import open_url
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/module_utils/urls.py", line 330, in <module>
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib2, 'HTTPSHandler'):
NameError: name 'urllib2' is not defined
</code></pre>
| 0 | 2016-08-04T11:03:41Z | 38,765,851 | <p>Ansible is currently not able to run under Python 3; that is also stated in the linked GitHub issue.</p>
| 0 | 2016-08-04T11:15:56Z | [
"python",
"linux",
"ansible"
] |
Ansible - "NameError: name 'urllib2' is not defined" | 38,765,586 | <p>Getting the error below while trying to run Ansible (version > 2) with Python 3.5.2.</p>
<p>I have looked into the GitHub issue that terms it resolved, but can't work out what needs to be done: <a href="https://github.com/ansible/ansible/issues/16013" rel="nofollow">https://github.com/ansible/ansible/issues/16013</a></p>
<p>How to resolve this?</p>
<pre><code>virtual@xxxxxxxxxx:~/ansible-spike> ansible all -m ping -vvv
Using /home/virtual/ansible-spike/ansible.cfg as config file
ERROR! Unexpected Exception: name 'urllib2' is not defined
the full traceback was:
Traceback (most recent call last):
File "/home/virtual/.pyenv/versions/3.5.2/bin/ansible", line 92, in <module>
exit_code = cli.run()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/cli/adhoc.py", line 193, in run
result = self._tqm.run(play)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 202, in run
self.load_callbacks()
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/executor/task_queue_manager.py", line 171, in load_callbacks
for callback_plugin in callback_loader.all(class_only=True):
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 368, in all
self._module_cache[path] = self._load_module_source(name, path)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/__init__.py", line 319, in _load_module_source
module = imp.load_source(name, path, module_file)
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/plugins/callback/hipchat.py", line 32, in <module>
from ansible.module_utils.urls import open_url
File "/home/virtual/.pyenv/versions/3.5.2/lib/python3.5/site-packages/ansible/module_utils/urls.py", line 330, in <module>
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib2, 'HTTPSHandler'):
NameError: name 'urllib2' is not defined
</code></pre>
| 0 | 2016-08-04T11:03:41Z | 38,765,858 | <p>The <a href="https://pypi.python.org/pypi/ansible/2.1.1.0" rel="nofollow">Ansible Python API</a> does not support Python 3; the PyPI page lists only Python 2.6 and 2.7.</p>
| 1 | 2016-08-04T11:16:16Z | [
"python",
"linux",
"ansible"
] |
invalid type, must be a string or Tensor [TensorFlow] | 38,765,676 | <p>I have a problem with TensorFlow, the machine-learning library from Google.
When I want to initialize my session, it tells me the argument must be a string or Tensor. I don't spot any mistake.</p>
<pre><code>import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables)
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for i in range(1000):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})
</code></pre>
<p>This is output of the following program in terminal:</p>
<pre><code>(tensorflow) juldou-box@juldou-box:~/tensorflow$ python mnist_e.py
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Traceback (most recent call last):
File "mnist_e.py", line 13, in <module>
sess.run(tf.initialize_all_variables)
File "/home/juldou-box/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/home/juldou-box/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 584, in _run
processed_fetches = self._process_fetches(fetches)
File "/home/juldou-box/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 540, in _process_fetches
% (subfetch, fetch, type(subfetch), str(e)))
TypeError: Fetch argument <function initialize_all_variables at 0x7fe4ca157c80> of <function initialize_all_variables at 0x7fe4ca157c80> has invalid type <type 'function'>, must be a string or Tensor. (Can not convert a function into a Tensor or Operation.)
</code></pre>
| 0 | 2016-08-04T11:08:09Z | 38,765,816 | <p>I think you just missed a pair of parentheses <code>()</code> after <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#initialize_all_variables" rel="nofollow"><code>tf.initialize_all_variables</code></a> ;)</p>
<p>As Python says, it's on line 13; look at</p>
<blockquote>
<p><code>sess.run(tf.initialize_all_variables)</code></p>
</blockquote>
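<p>A pure-Python sketch of the difference (the two functions below are hypothetical stand-ins, not TensorFlow's real API): without parentheses you hand over the function object itself; with <code>()</code> you hand over its return value.</p>

```python
def initialize_all_variables():  # hypothetical stand-in for tf.initialize_all_variables
    return 'init-op'

def run(fetch):                  # hypothetical stand-in for sess.run
    if callable(fetch):
        raise TypeError('Fetch argument %r has invalid type %r' % (fetch, type(fetch)))
    return fetch

try:
    run(initialize_all_variables)        # bare function object -> TypeError
except TypeError as exc:
    print('TypeError:', exc)

print(run(initialize_all_variables()))  # called first -> 'init-op'
```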
| 0 | 2016-08-04T11:14:32Z | [
"python",
"initialization",
"syntax-error",
"tensorflow",
"mnist"
] |
How can I save lots of images after resizing them all into another directory in Python(PIL)? | 38,765,757 | <p>Here is my code:</p>
<pre><code># -*- coding: utf-8 -*-
#!/usr/bin/python
import PIL
from PIL import Image
import os,sys
path = "/home/ozer/Desktop/Yedek/Workspace/"
dirs = os.listdir (path)
def resize():
for item in dirs:
if os.path.isfile(path1+item):
img = Image.open(path1+item)
f,e = os.path.splitext(path1+item)
basewidth = 100
wpercent = (basewidth / float(img.size[0]))
hsize = int((float(img.size[1]) * float(wpercent)))
img = img.resize((basewidth, hsize))
img.save("/home/ozer/Desktop/Scripts/Last/"+"*.jpg","JPEG")
resize()
</code></pre>
<p>If I let this script save the resized images in the folder named "path", it resizes all images and saves them there, but it creates a mess: unresized and resized images end up in one directory. When I try to write a solution like the one above, it only saves one picture in the directory that I show in the last line. Can you help me with this?</p>
| 0 | 2016-08-04T11:11:56Z | 38,766,176 | <p>Try</p>
<pre><code>img.save("/home/ozer/Desktop/Scripts/Last/"+item+".jpg","JPEG")
</code></pre>
<p>or, equivalently</p>
<pre><code>img.save("/home/ozer/Desktop/Scripts/Last/{}.jpg".format(item),"JPEG")
</code></pre>
<p>At present you are trying to create a single output file called <code>*.jpg</code> for each input file.</p>
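<p>One caveat (a small sketch, not tested against the asker's directory layout): <code>item</code> comes from <code>os.listdir()</code> and already ends in an extension, so <code>item + ".jpg"</code> yields names like <code>photo.jpg.jpg</code>. <code>os.path.splitext</code> avoids the doubled extension:</p>

```python
import os

out_dir = '/home/ozer/Desktop/Scripts/Last'
item = 'photo.jpg'                       # example filename from os.listdir()

name, ext = os.path.splitext(item)       # ('photo', '.jpg')
out_path = os.path.join(out_dir, name + '.jpg')
print(out_path)                          # /home/ozer/Desktop/Scripts/Last/photo.jpg
```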
| 1 | 2016-08-04T11:31:27Z | [
"python",
"python-imaging-library"
] |
Pass multiple arguments to listbox | 38,765,877 | <p>I am trying to pass a label to the callback function of my listbox, but somehow I can't manage to do it.</p>
<p>I already tried with lambdas, but without success.</p>
<p>This is my current code-snippet:</p>
<pre><code>program_list.bind('<<ListboxSelect>>', MainController.select_program)
</code></pre>
<p>How can I get something like:</p>
<pre><code>program_list.bind('<<ListboxSelect>>', MainController.select_program(arg1))
</code></pre>
<p>EDIT:
other function:</p>
<pre><code>def select_program(selection, test):
global programs
print test
if not programs:
return
# Tkinter passes an event object to onselect()
w = selection.widget
index = int(w.curselection()[0])
value = w.get(index)
print 'You selected item %d: "%s"' % (index, value)
</code></pre>
| 0 | 2016-08-04T11:16:52Z | 38,765,965 | <p>You can use lambda like this:</p>
<p><strong>CHANGED</strong></p>
<pre><code>from tkinter import *
root = Tk()
def on_select(event, arg):
lb = event.widget
idx = lb.curselection()
item = lb.get(idx)
print('%s, %s' % (item, arg))
lst = Listbox(root)
for i in range(5):
lst.insert(END, 'item ' + str(i))
lst.pack()
lst.bind('<<ListboxSelect>>', lambda event: on_select(event, 'another value'))
root.mainloop()
</code></pre>
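<p>The key point is the lambda wrapper: it accepts the single event argument Tkinter passes and forwards it together with the extra value. A GUI-free sketch of the same closure pattern (the event is just a string here for illustration):</p>

```python
def on_select(event, arg):
    return '{0}, {1}'.format(event, arg)

# what bind() stores: a one-argument callable that closes over the extra value
handler = lambda event: on_select(event, 'another value')

print(handler('item 3'))  # item 3, another value
```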
| 1 | 2016-08-04T11:20:50Z | [
"python",
"python-2.7",
"tkinter",
"listbox"
] |
How to patch globally in pytest? | 38,765,917 | <p>I use pytest quite a bit for my code. Sample code structure looks like this. The entire codebase is <code>python-2.7</code></p>
<pre><code>core/__init__.py
core/utils.py
#feature
core/feature/__init__.py
core/feature/service.py
#tests
core/feature/tests/__init__.py
core/feature/tests/test1.py
core/feature/tests/test2.py
core/feature/tests/test3.py
core/feature/tests/test4.py
core/feature/tests/test10.py
</code></pre>
<p>The <code>service.py</code> looks something like this:</p>
<pre><code>from modules import stuff
from core.utils import Utility
class FeatureManager:
# lots of other methods
def execute(self, *args, **kwargs):
self._execute_step1(*args, **kwargs)
# some more code
self._execute_step2(*args, **kwargs)
utility = Utility()
utility.doThings(args[0], kwargs['variable'])
</code></pre>
<p>All the tests in <code>feature/tests/*</code> end up using <code>core.feature.service.FeatureManager.execute</code> function. However <code>utility.doThings()</code> is not necessary for me to be run while I am running tests. I need it to happen while the production application runs but I do not want it to happen while the tests are being run.</p>
<p>I can do something like this in my <code>core/feature/tests/test1.py</code></p>
<pre><code>from mock import patch
class Test1:
def test_1():
with patch('core.feature.service.Utility') as MockedUtils:
exectute_test_case_1()
</code></pre>
<p>This would work. However I added <code>Utility</code> just now to the code base and I have more than 300 test cases. I would not want to go into each test case and write this <code>with</code> statement.</p>
<p>I could write a <code>conftest.py</code> which sets a os level environment variable based on which the <code>core.feature.service.FeatureManager.execute</code> could decide to not execute the <code>utility.doThings</code> but I do not know if that is a clean solution to this issue. </p>
<p>I would appreciate if someone could help me with global patches to the entire session. I would like to do what I did with the <code>with</code> block above globally for the entire session. Any articles in this matter would be great too.</p>
<p>TLDR: How do I create session wide patches while running pytests?</p>
| 1 | 2016-08-04T11:18:12Z | 38,782,306 | <p>I added a file called <code>core/feature/conftest.py</code> that looks like this</p>
<pre><code>import logging
import mock
import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture(request):
"""
:type request: _pytest.python.SubRequest
:return:
"""
log.info("Patching core.feature.service")
patched = mock.patch('core.feature.service.Utility')
patched.__enter__()
def unpatch():
patched.__exit__()
log.info("Patching complete. Unpatching")
request.addfinalizer(unpatch)
</code></pre>
<p>This is nothing complicated. It is like doing</p>
<pre><code>with mock.patch('core.feature.service.Utility') as patched:
do_things()
</code></pre>
<p>but only in a session-wide manner.</p>
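<p>Note that the patcher object also exposes <code>start()</code>/<code>stop()</code>, which read more clearly than calling <code>__enter__</code>/<code>__exit__</code> by hand. A self-contained sketch using a standard-library function instead of <code>core.feature.service.Utility</code>:</p>

```python
import os
try:
    from unittest import mock  # Python 3
except ImportError:
    import mock                # Python 2 backport, as in the question

patcher = mock.patch('os.getcwd', return_value='/tmp/fake')
patcher.start()                          # same effect as patched.__enter__()
print(os.getcwd())                       # /tmp/fake -- the mock is in place
patcher.stop()                           # same effect as patched.__exit__()
print(os.getcwd() == '/tmp/fake')        # False -- the real function is restored
```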
| 0 | 2016-08-05T06:13:07Z | [
"python",
"tdd",
"py.test",
"python-mock"
] |
Loading numpy arrays into Fortran quickly | 38,765,977 | <p>I am currently working on a Fortran program, which requires a large data file as input. This data file is created using Python, and I am currently saving it in a human readable format using the <code>np.savetxt()</code> function. </p>
<p>However, the size of this file is very large (at least 1.5GB of disk space) and so reading in the file takes a long time. I think it might be easier to save the array of data in a binary format using <code>np.save</code> (or maybe pickle it?), however I have no idea how I would read this file into my Fortran program - is there a simple way to do this?</p>
<p>I realise that an alternative solution to this would be to entirely cut Python out of the picture and create the data array in Fortran, however as I am close to a complete beginner in Fortran I am trying to minimise the amount of things I need it for.</p>
| 2 | 2016-08-04T11:21:24Z | 38,767,907 | <p>In my day to day work, I run a very large simulation switching between fortran and python for computation and visualization purposes. I would suggest using the netcdf libraries in both of them, netcdf is an excellent format for transferring between the two systems and keeps the file size in check. Some good links are provided below</p>
<p>Python : <a href="http://unidata.github.io/netcdf4-python/" rel="nofollow">http://unidata.github.io/netcdf4-python/</a></p>
<p>Fortran : <a href="http://www.unidata.ucar.edu/software/netcdf/examples/programs/" rel="nofollow">http://www.unidata.ucar.edu/software/netcdf/examples/programs/</a></p>
| 3 | 2016-08-04T12:53:17Z | [
"python",
"fortran",
"storage",
"loading"
] |
Loading numpy arrays into Fortran quickly | 38,765,977 | <p>I am currently working on a Fortran program, which requires a large data file as input. This data file is created using Python, and I am currently saving it in a human readable format using the <code>np.savetxt()</code> function. </p>
<p>However, the size of this file is very large (at least 1.5GB of disk space) and so reading in the file takes a long time. I think it might be easier to save the array of data in a binary format using <code>np.save</code> (or maybe pickle it?), however I have no idea how I would read this file into my Fortran program - is there a simple way to do this?</p>
<p>I realise that an alternative solution to this would be to entirely cut Python out of the picture and create the data array in Fortran, however as I am close to a complete beginner in Fortran I am trying to minimise the amount of things I need it for.</p>
| 2 | 2016-08-04T11:21:24Z | 38,777,615 | <p>It depends on your data structures, but if it is just one or a few arrays you don't need any external libraries (I am not impressed with all the hassle of NetCDF).</p>
<pre><code>import numpy as np
a = np.zeros([10,10], order="F")
a.tofile("a.bin")
</code></pre>
<p>and</p>
<pre><code> use iso_fortran_env
 implicit none
 integer :: iu
 real(real64) :: a(10,10)
open(newunit=iu,file="a.bin",access="stream",status="old",action="read")
read(iu) a
close(iu)
end
</code></pre>
<p>and that's all.</p>
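<p>The round trip is easy to check from the Python side with <code>np.fromfile</code> (a small sketch; note the stream has no header, so the reader must know the dtype and shape, and <code>tofile</code> always writes in C order, so a Fortran stream <code>read</code> will see the transposed layout unless you write <code>a.T</code>):</p>

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'a.bin')

a = np.arange(12, dtype=np.float64).reshape(3, 4)
a.tofile(path)                           # raw binary stream, no header, C order

b = np.fromfile(path, dtype=np.float64).reshape(3, 4)
print(np.array_equal(a, b))              # True
```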
| 4 | 2016-08-04T21:23:18Z | [
"python",
"fortran",
"storage",
"loading"
] |
Fit Weibull to distribution with genextreme and weibull_min | 38,765,996 | <p>Using SciPy, I am trying to reproduce the weibull fit from <a href="http://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best">this question</a>. My fit looks good when I use the <code>genextreme</code> function as follows:</p>
<pre><code>import numpy as np
from scipy.stats import genextreme
import matplotlib.pyplot as plt
data=np.array([37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,
38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,
42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,
49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,
45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,
36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,
38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34])
shape, loc, scale = genextreme.fit(data)
plt.hist(data, normed=True, bins=np.linspace(15, 55, 9))
x = np.linspace(data.min(), data.max(), 1000)
y = genextreme.pdf(x, shape, loc, scale)
plt.plot(x, y, 'c', linewidth=3)
</code></pre>
<p>The parameters are: <code>(0.44693977076022462, 38.283622522613214, 7.9180988170857374)</code>. The shape parameter is positive, corresponding to the sign of the shape parameter on the <a href="https://en.wikipedia.org/wiki/Weibull_distribution" rel="nofollow">Weibull wikipedia page</a>, which, as I understand it, is equivalent to a negative shape parameter in R?</p>
<p>So it seems <code>genextreme</code> decides by itself whether the distribution is Gumbel, Frechet or Weibull. Here it has chosen Weibull.</p>
<p>Now I am trying to reproduce a similar fit with the <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.weibull_min.html" rel="nofollow"><code>weibull_min</code></a> function. I have tried the following based on <a href="http://stackoverflow.com/questions/17481672/fitting-a-weibull-distribution-using-scipy">this post</a>, but the parameters look very different to what I got with <code>genextreme</code>:</p>
<pre><code>weibull_min.fit(data, floc=0)
</code></pre>
<p>The parameters now are: <code>(6.4633107529634319, 0, 43.247460728065136)</code></p>
<p>Is the <code>0</code> the shape parameter? Surely it should be positive if the distribution is Weibull?</p>
| 2 | 2016-08-04T11:21:53Z | 38,766,527 | <p>The parameters returned by <code>weibull_min.fit()</code> are <code>(shape, loc, scale)</code>. <code>loc</code> is the location parameter. (All scipy distributions include a location parameter, even those where a location parameter isn't normally used.) The docstring of <code>weibull_min.fit</code> includes this:</p>
<pre><code>Returns
-------
shape, loc, scale : tuple of floats
MLEs for any shape statistics, followed by those for location and
scale.
</code></pre>
<p>You used the argument <code>floc=0</code>, so, as expected, the location parameter returned by <code>fit(data, floc=0)</code> is 0.</p>
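<p>A quick sketch of this (assuming scipy is available): with <code>floc=0</code> the middle element of the returned triple is pinned to exactly 0, and the first element is the (positive) Weibull shape parameter.</p>

```python
import numpy as np
from scipy.stats import weibull_min

# synthetic Weibull data with known shape 2.0 and scale 5.0
data = weibull_min.rvs(2.0, loc=0, scale=5.0, size=2000,
                       random_state=np.random.RandomState(0))

shape, loc, scale = weibull_min.fit(data, floc=0)
print(loc)         # 0 -- fixed by floc, not estimated
print(shape > 0)   # True
```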
| 1 | 2016-08-04T11:48:31Z | [
"python",
"scipy",
"distribution",
"weibull"
] |
Start Idlex instead of Idle with Launcher Symbol | 38,766,023 | <p>I'm running with Ubuntu 16.04 and Python 2.7.12.
I downloaded and installed IdleX and can start the editor with the extensions using the terminal in the folder where the idlex.py file is.</p>
<p>Is it possible to create a shortcut in the launcher, like the standard IDLE's, such that IdleX is loaded by default?</p>
<p>Thank you in advance</p>
| 0 | 2016-08-04T11:23:17Z | 38,766,345 | <p>Make an entry for it in your <code>$PATH</code> variable, then:</p>
<p>Go to System Settings,
click on Details,
click on Default Applications,
and set it there.</p>
<p>OR</p>
<p>Right-click the file, choose Properties,
go to the "Open With" tab,
and set it there, or use a command.</p>
<p>Or use Ubuntu Tweak:</p>
<pre><code>sudo add-apt-repository ppa:tualatrix/ppa
sudo apt-get update
sudo apt-get install ubuntu-tweak
</code></pre>
| 0 | 2016-08-04T11:40:42Z | [
"python",
"ubuntu",
"python-idle"
] |
Having key value separated on python with boto | 38,766,243 | <p>I am trying to get only the values from a dictionary in Python.
I followed this <a href="https://www.mkyong.com/python/python-how-to-loop-a-dictionary/" rel="nofollow">link</a>, but I get a weird error telling me </p>
<p><code>AttributeError: 'list' object has no attribute 'items'</code></p>
<p>But it is not a list; it is a dictionary (I think...).</p>
<p>Here is my code:</p>
<pre><code>volumes = ec2.volumes.filter(
Filters=[{'Name': 'status', 'Values': ['in-use']}])
for volume in volumes:
print(volume.attachments)
for k, v in volume.attachments.items():
print("Code : {0}, Value : {1}".format(k, v))
</code></pre>
<p>As a result of the first print, it shows this:</p>
<pre><code>[{u'AttachTime': datetime.datetime(2016, 8, 2, 14, 54, 27, tzinfo=tzutc()), u'InstanceId': 'i-xxxxxx', u'VolumeId': 'vol-xxxxx', u'State': 'attached', u'DeleteOnTermination': True, u'Device': '/dev/sda1'}]
</code></pre>
<p>Any idea?
Thanks in advance.</p>
| 0 | 2016-08-04T11:34:53Z | 38,766,541 | <p>You have the dictionary nested in a list; you have to iterate through that list to access all the <code>attachments</code> of the volume. Nest the <em>for-loops</em> to cover every attachment of every volume: </p>
<pre><code>for volume in volumes:
for attachment in volume.attachments:
for k, v in attachment.items():
print("Code : {0}, Value : {1}".format(k, v))
</code></pre>
<p>If each volume will only have one attachment, then it is sufficient to index the <em>list of attachments</em> at index 0:</p>
<pre><code>for volume in volumes:
for k, v in volume.attachments[0].items():
# ^
print("Code : {0}, Value : {1}".format(k, v))
</code></pre>
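<p>A standalone sketch with the same shape of data as the question's first <code>print</code> (values shortened):</p>

```python
attachments = [{u'State': 'attached', u'Device': '/dev/sda1', u'VolumeId': 'vol-xxxxx'}]

for attachment in attachments:  # the attribute is a *list* of dicts, hence the extra loop
    for k, v in attachment.items():
        print("Code : {0}, Value : {1}".format(k, v))
```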
| 0 | 2016-08-04T11:49:08Z | [
"python",
"dictionary",
"boto3"
] |
Python binding for MuJoCo physics library using mujoco-py package | 38,766,267 | <p>I want to use MuJoCo (<a href="http://www.mujoco.org/" rel="nofollow">http://www.mujoco.org/</a>), an advanced physics simulator with python bindings (<a href="https://github.com/openai/mujoco-py" rel="nofollow">https://github.com/openai/mujoco-py</a>).</p>
<p>I've got my MuJoCo license file mjkey.text and added the required paths MUJOCO_PY_MJKEY_PATH, MUJOCO_PY_MJPRO_PATH to the environment variables accordingly.</p>
<pre><code>MUJOCO_PY_MJPRO_PATH = C:\Dropbox\PhD\MuJoCo\mjpro131
MUJOCO_PY_MJKEY_PATH = C:\Dropbox\PhD\MuJoCo\mjpro131\bin
</code></pre>
<p>However, a soon as I want to import the libray with the following simple code,</p>
<pre><code>import mujoco_py
</code></pre>
<p>I got the following error message.</p>
<pre><code>C:\Dropbox\Python\Anaconda\python.exe
C:/Dropbox/PhD/Python/X/MujocoHelloWorld/test.py
Traceback (most recent call last):
File "C:/Dropbox/PhD/Python/X/MujocoHelloWorld/test.py", line 1, in <module>
import mujoco_py
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Dropbox\Python\Anaconda\lib\site-packages\mujoco_py-0.5.4-py3.5.egg\mujoco_py\__init__.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Dropbox\Python\Anaconda\lib\site-packages\mujoco_py-0.5.4-py3.5.egg\mujoco_py\mjviewer.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Dropbox\Python\Anaconda\lib\site-packages\mujoco_py-0.5.4-py3.5.egg\mujoco_py\mjcore.py", line 6, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Dropbox\Python\Anaconda\lib\site-packages\mujoco_py-0.5.4-py3.5.egg\mujoco_py\mjlib.py", line 21, in <module>
File "C:\Dropbox\Python\Anaconda\lib\ctypes\__init__.py", line 425, in LoadLibrary
return self._dlltype(name)
File "C:\Dropbox\Python\Anaconda\lib\ctypes\__init__.py", line 347, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
Process finished with exit code 1
</code></pre>
<p>I am using a Python 3.5.1 64-bit on a Windows 7 64-bit with MuJoCo 1.31 64-bit.</p>
<p>I guessed the problem was due to some kind of incompatibility, so I also tried Python 3.5.2 32-bit with MuJoCo 1.31 32-bit. I even tried the mismatched combinations of 32-bit Python with 64-bit MuJoCo and vice versa.</p>
<p>The precompiled example "simulate.exe" that ships with the MuJoCo library works perfectly, so I guess there is no problem with the 64-bit MuJoCo library that I have. (By the way, the 32-bit version of it doesn't run on 64-bit Windows.)</p>
<p>So, the problem probably occurs when loading the C++ library into Python. I debugged and at least checked that the Python code in the mujoco_py library tries to load "mujoco131.lib" (not "mujoco131.dll", though) from the correct path. Then the error occurs and the code fails to run further.</p>
<p>I am open to any kind of comments and suggestions.</p>
<p>Cheers! And have a nice day!</p>
| 4 | 2016-08-04T11:36:34Z | 38,839,741 | <p>Try editing <code>mjlib.py</code>, replacing <code>"bin/mujoco131.lib"</code> with <code>"bin/mujoco131.dll"</code> in the loader.</p>
<p>I also had to explicitly specify <code>platname = "win"</code> in <code>platname_targdir.py</code></p>
| 1 | 2016-08-08T23:01:31Z | [
"python",
"c++",
"dll",
"physics",
"dllimport"
] |
mayavi in python Anaconda | 38,766,293 | <p>I installed <code>mayavi</code> in Anaconda using the command </p>
<pre><code>conda.exe install mayavi
</code></pre>
<p>in Anaconda command prompt. Now when I close Spyder it doesn't open anymore. How do I fix this? I am using Windows.</p>
| 1 | 2016-08-04T11:37:52Z | 39,778,157 | <p>I want to show my solution for that problem :
My OS = Windows 10 - 64-bit
Python 2.7.12 (Anaconda2 4.2.0 64-bit)
Package Mayavi version 4.4.0</p>
<p>After installing the Mayavi package (via Anaconda Navigator Beta -> Environments), Spyder could not be opened again (as mentioned in your question).
Then I tried to launch Spyder from a Command Prompt, and the error output showed that the problem occurred with the pandas package, version 0.18.1.</p>
<p>Solution found: I downgraded the pandas package to version 0.17.1.</p>
<p>Now Spyder works fine and I'm able to execute the examples found in Mayavi's folder.</p>
| 1 | 2016-09-29T19:06:46Z | [
"python",
"anaconda",
"spyder",
"mayavi"
] |
Pandas transforming hbar in line | 38,766,333 | <p>I have a pandas hbar plot, but I would like to have a line instead of the bars (just a line going to the top of the bars).</p>
<p>Is that possible? </p>
<p>I have </p>
<pre><code>b3["R2_foret"].plot(legend=False, kind='barh')
</code></pre>
<p>which gives me</p>
<p><a href="http://i.stack.imgur.com/4WdaZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/4WdaZ.png" alt="enter image description here"></a></p>
<p>I would like not to see all this blue, just a line going from one value to the next.
The difficulty is that the line has to run vertically.</p>
<h1>EDIT</h1>
<p>Following the here below solution by unutbu, with alpha=0</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
b3 = pd.DataFrame({'R2_foret': [0.79381443298969068, 0.7766323024054983, 0.7903780068728522, 0.78006872852233677, 0.79725085910652915, 0.79725085910652915, 0.8041237113402061, 0.7903780068728522, 0.7903780068728522, 0.81443298969072153, 0.80068728522336763, 0.80756013745704458, 0.85567010309278368, 0.8556701030927838, 0.87628865979381465, 0.84536082474226804, 0.86597938144329922, 0.87628865979381454, 0.85910652920962216, 0.85910652920962227, 0.87579774177712344, 0.87628865979381465, 0.84536082474226792, 0.86597938144329922, 0.87628865979381454, 0.88316151202749149, 0.89347079037800703, 0.90378006872852246, 0.90378006872852246, 0.90034364261168398, 0.9106529209621993, 0.90378006872852246, 0.90721649484536093, 0.9101620029455082, 0.9106529209621993, 0.92096219931271495, 0.90721649484536093, 0.92096219931271495, 0.92096219931271495, 0.91408934707903777]})
b3['y'] = np.arange(len(b3))
ax = b3.plot(x='R2_foret', y='y', style=['-'])
b3['R2_foret'].plot(kind='barh', ax=ax, alpha=0)
plt.xlim(0,1)
plt.tick_params(left='off', labelleft='off')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/nOt25.png" rel="nofollow"><img src="http://i.stack.imgur.com/nOt25.png" alt="enter image description here"></a></p>
| 0 | 2016-08-04T11:39:45Z | 38,766,500 | <p>do you simply mean</p>
<p><code>b3["R2_foret"].plot()</code> ? </p>
| 0 | 2016-08-04T11:47:26Z | [
"python",
"pandas",
"matplotlib",
"plot"
] |
Pandas transforming hbar in line | 38,766,333 | <p>I have a pandas hbar plot, but I would like to have a line instead of the bars (just a line going to the top of the bars).</p>
<p>Is that possible? </p>
<p>I have </p>
<pre><code>b3["R2_foret"].plot(legend=False, kind='barh')
</code></pre>
<p>which gives me</p>
<p><a href="http://i.stack.imgur.com/4WdaZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/4WdaZ.png" alt="enter image description here"></a></p>
<p>I would like not to see all this blue, just a line going from one value to the next.
The difficulty is that the line has to run vertically.</p>
<h1>EDIT</h1>
<p>Following the here below solution by unutbu, with alpha=0</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
b3 = pd.DataFrame({'R2_foret': [0.79381443298969068, 0.7766323024054983, 0.7903780068728522, 0.78006872852233677, 0.79725085910652915, 0.79725085910652915, 0.8041237113402061, 0.7903780068728522, 0.7903780068728522, 0.81443298969072153, 0.80068728522336763, 0.80756013745704458, 0.85567010309278368, 0.8556701030927838, 0.87628865979381465, 0.84536082474226804, 0.86597938144329922, 0.87628865979381454, 0.85910652920962216, 0.85910652920962227, 0.87579774177712344, 0.87628865979381465, 0.84536082474226792, 0.86597938144329922, 0.87628865979381454, 0.88316151202749149, 0.89347079037800703, 0.90378006872852246, 0.90378006872852246, 0.90034364261168398, 0.9106529209621993, 0.90378006872852246, 0.90721649484536093, 0.9101620029455082, 0.9106529209621993, 0.92096219931271495, 0.90721649484536093, 0.92096219931271495, 0.92096219931271495, 0.91408934707903777]})
b3['y'] = np.arange(len(b3))
ax = b3.plot(x='R2_foret', y='y', style=['-'])
b3['R2_foret'].plot(kind='barh', ax=ax, alpha=0)
plt.xlim(0,1)
plt.tick_params(left='off', labelleft='off')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/nOt25.png" rel="nofollow"><img src="http://i.stack.imgur.com/nOt25.png" alt="enter image description here"></a></p>
| 0 | 2016-08-04T11:39:45Z | 38,766,751 | <p>Something close:</p>
<pre><code>b3["R2_foret"].plot(legend=False, drawstyle="steps")
</code></pre>
| 0 | 2016-08-04T11:59:51Z | [
"python",
"pandas",
"matplotlib",
"plot"
] |
Pandas transforming hbar in line | 38,766,333 | <p>I have a pandas hbar plot, but I would like to have a line instead of the bars (just a line going to the top of the bars).</p>
<p>Is that possible? </p>
<p>I have </p>
<pre><code>b3["R2_foret"].plot(legend=False, kind='barh')
</code></pre>
<p>which gives me</p>
<p><a href="http://i.stack.imgur.com/4WdaZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/4WdaZ.png" alt="enter image description here"></a></p>
<p>I would like not to see all this blue, just a line going from one value to the next.
The difficulty is that the line has to run vertically.</p>
<h1>EDIT</h1>
<p>Following the here below solution by unutbu, with alpha=0</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
b3 = pd.DataFrame({'R2_foret': [0.79381443298969068, 0.7766323024054983, 0.7903780068728522, 0.78006872852233677, 0.79725085910652915, 0.79725085910652915, 0.8041237113402061, 0.7903780068728522, 0.7903780068728522, 0.81443298969072153, 0.80068728522336763, 0.80756013745704458, 0.85567010309278368, 0.8556701030927838, 0.87628865979381465, 0.84536082474226804, 0.86597938144329922, 0.87628865979381454, 0.85910652920962216, 0.85910652920962227, 0.87579774177712344, 0.87628865979381465, 0.84536082474226792, 0.86597938144329922, 0.87628865979381454, 0.88316151202749149, 0.89347079037800703, 0.90378006872852246, 0.90378006872852246, 0.90034364261168398, 0.9106529209621993, 0.90378006872852246, 0.90721649484536093, 0.9101620029455082, 0.9106529209621993, 0.92096219931271495, 0.90721649484536093, 0.92096219931271495, 0.92096219931271495, 0.91408934707903777]})
b3['y'] = np.arange(len(b3))
ax = b3.plot(x='R2_foret', y='y', style=['-'])
b3['R2_foret'].plot(kind='barh', ax=ax, alpha=0)
plt.xlim(0,1)
plt.tick_params(left='off', labelleft='off')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/nOt25.png" rel="nofollow"><img src="http://i.stack.imgur.com/nOt25.png" alt="enter image description here"></a></p>
| 0 | 2016-08-04T11:39:45Z | 38,767,093 | <p>Assign a new column of <code>y</code> values:</p>
<pre><code>b3['y'] = np.arange(len(b3))
</code></pre>
<p>Then plot <code>y</code> versus <code>R2_foret</code>:</p>
<pre><code> b3.plot(x='R2_foret', y='y', style=['o-'])
</code></pre>
<hr>
<p>For example,</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
b3 = pd.DataFrame({'R2_foret': [0.79381443298969068, 0.7766323024054983, 0.7903780068728522, 0.78006872852233677, 0.79725085910652915, 0.79725085910652915, 0.8041237113402061, 0.7903780068728522, 0.7903780068728522, 0.81443298969072153, 0.80068728522336763, 0.80756013745704458, 0.85567010309278368, 0.8556701030927838, 0.87628865979381465, 0.84536082474226804, 0.86597938144329922, 0.87628865979381454, 0.85910652920962216, 0.85910652920962227, 0.87579774177712344, 0.87628865979381465, 0.84536082474226792, 0.86597938144329922, 0.87628865979381454, 0.88316151202749149, 0.89347079037800703, 0.90378006872852246, 0.90378006872852246, 0.90034364261168398, 0.9106529209621993, 0.90378006872852246, 0.90721649484536093, 0.9101620029455082, 0.9106529209621993, 0.92096219931271495, 0.90721649484536093, 0.92096219931271495, 0.92096219931271495, 0.91408934707903777]})
b3['y'] = np.arange(len(b3))
ax = b3.plot(x='R2_foret', y='y', style=['o-'])
b3['R2_foret'].plot(kind='barh', ax=ax, alpha=0.3)
plt.xlim(0,1)
plt.tick_params(left='off', labelleft='off')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/IJty5.png" rel="nofollow"><img src="http://i.stack.imgur.com/IJty5.png" alt="enter image description here"></a></p>
| 0 | 2016-08-04T12:16:14Z | [
"python",
"pandas",
"matplotlib",
"plot"
] |
How to write Python script like shell script for UNIX? | 38,766,357 | <p>I have many shell scripts loaded in UNIX.
I don't know much about shell scripting, but I do know Python programming.
I have to access multiple servers using Python.
Can anyone guide me on this?</p>
| -2 | 2016-08-04T11:41:10Z | 38,768,126 | <p>Assuming you have SSH access to the server and want to remotely write the script:</p>
<ol>
<li>Connect the SSH session</li>
<li>You have to install a text editor. <em>nano</em> is a very simple one. To install it on Debian based systems use <code>sudo apt-get install nano</code></li>
<li>Now create a file using <code>nano filename.py</code>. If you aren't using <em>nano</em> as your editor create the file with <code>touch filename.py</code> and get into the file with your editor.</li>
<li>Write the script</li>
<li>Add execute permission <code>chmod +x filename.py</code></li>
<li>Run file with <code>python filename.py</code> or <code>python3 filename.py</code></li>
</ol>
<p>NOTE: The script will stop running after you close the SSH session so I recommend using <code>screen</code> and running the script in a screen session.</p>
<p>If you are trying to connect to a server over SSH from within Python, use an SSH library such as <code>paramiko</code>.</p>
| 0 | 2016-08-04T13:02:36Z | [
"python",
"linux",
"shell",
"unix"
] |
Automatically scan RSS feeds and populate WebContent model | 38,766,502 | <p>I am trying to create a Django server application (currently on localhost) that will routinely check the RSS feeds given by the model <code>Blogger</code> (i.e. once every hour), extract data from them, and then provide data for the model <code>WebContent</code>.</p>
<p>So far I have created a data endpoint at <code>http://127.0.0.1:8000/api/blogger/</code> which outputs the following information:</p>
<pre><code>[
{
"id": "c384f191-662f-43f9-a39d-2da737e7cbb8",
"name": "Patricia Bright",
"avatar": "http://127.0.0.1:8000/media/img/1470305802086_IMG_5921.JPG",
"rss_url": "http://patriciabright.co.uk/?feed=rss2",
},
{
"id": "dc70ca6b-94cc-4ba9-a0c8-0d907f7ab020",
"name": "Shirley B. Eniang",
"avatar": "http://127.0.0.1:8000/media/img/1470305797487_photo.jpg",
"rss_url": "http://shirleyswardrobe.com/feed/",
}
]
</code></pre>
<p>Now I would like to loop through the <code>rss_url</code> value above and extract particular information from each RSS feed to provide data for the model <code>WebContent</code>. I want to run this hourly, and a check should be made to see if the data already exists before populating the model <code>WebContent</code> (so I don't get any duplicate requests).</p>
<p>This is what I've done so far in <code>models.py</code>:</p>
<pre><code>from uuid import uuid4
from time import time
from django.db import models
from django.contrib.contenttypes.models import ContentType
import feedparser
def get_upload_avatar_path(instance, filename):
timestamp = int(round(time() * 1000))
path = "img/%s_%s" % (timestamp, filename)
return path
class Blogger(models.Model):
"""
Blogger model
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
name = models.CharField(max_length=255, null=True, default=None)
avatar = models.ImageField(upload_to=get_upload_avatar_path, blank=True, null=True, default=None, max_length=255)
url = models.CharField(max_length=255, null=True, default=None)
rss_url = models.CharField(max_length=255, null=True, default=None)
instagram_url = models.CharField(max_length=255, null=True, default=None)
twitter_url = models.CharField(max_length=255, null=True, default=None)
youtube_url = models.CharField(max_length=255, null=True, default=None)
class Meta:
verbose_name_plural = "Bloggers"
def __str__(self):
return "%s" % (self.name)
def generate_web_content(self):
"""
Scan for blogger RSS feeds and generate web content
:return: None
"""
web_content = WebContent.objects.create(user_profile=self)
self._scan_web_content(web_content)
def _scan_web_content(self, web_content=None):
"""
Scan blogger RSS feeds
:param report: Associated WebContent object
:return: None
"""
urls = Blogger.objects.all()
d = feedparser.parse(urls['rss_url'])
for post in d.entries:
blogger = self
title = post.title.encode('ascii', 'ignore')
url = post.link.encode('ascii', 'ignore')
class WebContent(models.Model):
"""
Model to store blogger web content
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
blogger = models.ForeignKey(Blogger)
title = models.CharField(max_length=255, null=True, default=None)
url = models.CharField(max_length=255, null=True, default=None)
class Meta:
verbose_name_plural = "Web Content"
</code></pre>
<p>I've managed to mock up an implementation in a separate Python file which works well. I guess I am trying to port that into my Django application.</p>
<pre><code>import feedparser
import json
import sys
import os
os.system('cls')
# Import json
with open('bloggers.json') as jsonfile:
j = json.load(jsonfile)
for blogger in j['bloggers']:
print (blogger['name'])
print "---------------------"
d = feedparser.parse(blogger['rssUrl'])
for post in d.entries:
print post.title.encode('ascii', 'ignore') + ": " + post.link.encode('ascii', 'ignore') + "\n"
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-08-04T11:47:27Z | 38,805,055 | <p>There seem to be many problems in your code:</p>
<ol>
<li><p>Within the method <code>generate_web_content</code> you are creating a <code>WebContent</code> object by passing the argument <code>user_profile=self</code> while it should be <code>blogger=self</code>.</p></li>
<li><p>In the method <code>_scan_web_content</code> you've queried all the <code>Blogger</code> objects like:</p>
<pre><code>urls = Blogger.objects.all()
</code></pre>
<p>so <code>urls</code> is a queryset object and you can't access a key like <code>urls['rss_url']</code>; instead you should do</p>
<pre><code>d = feedparser.parse(self.rss_url)
</code></pre></li>
<li><p>Inside the for loop you should add attributes to the <code>WebContent</code> object passed as an argument like:</p>
<pre><code>for post in d.entries:
web_content.blogger = self
web_content.title = post.title.encode('ascii', 'ignore')
web_content.url = post.link.encode('ascii', 'ignore')
web_content.save()
</code></pre>
<p>otherwise this method does not do anything.</p></li>
</ol>
<p>Hope it clarifies!</p>
| 0 | 2016-08-06T14:00:01Z | [
"python",
"json",
"django",
"rss"
] |
Syntax error on db.close() | 38,766,544 | <p>I'm trying to use MySQLdb in a Python script.</p>
<p>Here is a part of the code from the script</p>
<pre><code>cursor = db.cursor()
sql = """INSERT INTO location(`name`, `lat`, `long`, `guid`, `image`, `date`)VALUES(%(name)s, %(lat)s, %(long)s, %(guid)s, %(image)s, %(date)s)"""
try:
cursor.execute(sql)
db.commit()
db.close()
</code></pre>
<p>I'm getting an error on the db.close():</p>
<p>"db.close()
^
SyntaxError: invalid syntax"</p>
<p>So any suggestions here?</p>
| 0 | 2016-08-04T11:49:10Z | 38,766,582 | <p>You cannot use <code>try</code> without <code>except</code>.</p>
<p>The proper way to ignore all errors is this:</p>
<pre><code>try:
cursor.execute(sql)
db.commit()
except:
pass
db.close()
</code></pre>
| 2 | 2016-08-04T11:51:02Z | [
"python",
"mysql-python"
] |
Syntax error on db.close() | 38,766,544 | <p>I'm trying to use MySQLdb in a Python script.</p>
<p>Here is a part of the code from the script</p>
<pre><code>cursor = db.cursor()
sql = """INSERT INTO location(`name`, `lat`, `long`, `guid`, `image`, `date`)VALUES(%(name)s, %(lat)s, %(long)s, %(guid)s, %(image)s, %(date)s)"""
try:
cursor.execute(sql)
db.commit()
db.close()
</code></pre>
<p>I'm getting an error on the db.close():</p>
<p>"db.close()
^
SyntaxError: invalid syntax"</p>
<p>So any suggestions here?</p>
| 0 | 2016-08-04T11:49:10Z | 38,766,594 | <p>The error is with the <code>try</code>: Python is looking for an <code>except</code> clause before the <code>db.close()</code>.</p>
| 1 | 2016-08-04T11:51:38Z | [
"python",
"mysql-python"
] |
Syntax error on db.close() | 38,766,544 | <p>I'm trying to use MySQLdb in a Python script.</p>
<p>Here is a part of the code from the script</p>
<pre><code>cursor = db.cursor()
sql = """INSERT INTO location(`name`, `lat`, `long`, `guid`, `image`, `date`)VALUES(%(name)s, %(lat)s, %(long)s, %(guid)s, %(image)s, %(date)s)"""
try:
cursor.execute(sql)
db.commit()
db.close()
</code></pre>
<p>I'm getting an error on the db.close():</p>
<p>"db.close()
^
SyntaxError: invalid syntax"</p>
<p>So any suggestions here?</p>
| 0 | 2016-08-04T11:49:10Z | 38,766,707 | <p>You would make everyone's life easier if you posted fully working examples.
I added some dummy code to your post to make it run:</p>
<pre><code>class fake:
def commit(self,):pass
def execute(self,sql):pass
def close(self,):pass
db =fake()
cursor=fake()
if 1:
sql = """INSERT INTO location(`name`, `lat`, `long`, `guid`, `image`, `date`)VALUES(%(name)s, %(lat)s, %(long)s, %(guid)s, %(image)s, %(date)s)"""
try:
cursor.execute(sql)
db.commit()
db.close()
</code></pre>
<p>If I run this I get:</p>
<pre><code>$ python3 test.py
File "test.py", line 17
db.close()
^
IndentationError: unexpected unindent
$
</code></pre>
<p>This shows that you are missing the <code>except</code> clause in your example.</p>
<p>This isn't the error you report; perhaps your SyntaxError is in a part of the code you haven't included in your question.</p>
| 0 | 2016-08-04T11:57:59Z | [
"python",
"mysql-python"
] |
Syntax error on db.close() | 38,766,544 | <p>I'm trying to use MySQLdb in a Python script.</p>
<p>Here is a part of the code from the script</p>
<pre><code>cursor = db.cursor()
sql = """INSERT INTO location(`name`, `lat`, `long`, `guid`, `image`, `date`)VALUES(%(name)s, %(lat)s, %(long)s, %(guid)s, %(image)s, %(date)s)"""
try:
cursor.execute(sql)
db.commit()
db.close()
</code></pre>
<p>I'm getting an error on the db.close():</p>
<p>"db.close()
^
SyntaxError: invalid syntax"</p>
<p>So any suggestions here?</p>
| 0 | 2016-08-04T11:49:10Z | 38,767,070 | <p>Ignoring the indentation errors in your code, you need to use either an <code>except</code> clause, a <code>finally</code> clause, or both with your <code>try</code> statement.</p>
<p>With a <code>finally</code> clause you can ensure that the db connection is closed:</p>
<pre><code>try:
cursor.execute(sql)
db.commit()
finally:
db.close()
</code></pre>
<p>In practice it is worthwhile including an <code>except</code> clause so that the exception can be logged:</p>
<pre><code>import traceback
try:
cursor.execute(sql)
db.commit()
except Exception as exc:
traceback.print_exc() # or whatever logging you require
raise # optionally re-raise the exception
finally:
db.close() # _always_ executed
</code></pre>
| 0 | 2016-08-04T12:15:20Z | [
"python",
"mysql-python"
] |
Using a pattern as a dictionary key | 38,766,593 | <p>I'm trying to perform a set of search and replace in a file at once. For that, I'm using a dictionary where the pattern to search is the key, and the replacement text is the key value. I compile all the substitutions in a single pattern and I do the search-replace using the code below:</p>
<pre><code>re_compiled = re.compile("|".join(k for k in sub_dict))
# Pattern replacement inner function
def replacement_function(match_object):
key = str(match_object.group(0))
if key.startswith(r'C:\work\l10n'):
key = key.replace("\\", "\\\\")
key = key[:-1] + '.'
return sub_dict[key]
while 1:
lines = in_f.readlines(100000)
if not lines:
break
for line in lines:
line = re_compiled.sub(replacement_function, line)
out_f.write(line)
</code></pre>
<p>I define the dictionary as follows:</p>
<pre><code>g_sub_dict = {
r'C:\\work\\l10n\\.' : r'/ae/l10n/'
, r'maxwidth="0"' : r'maxwidth="-1"'
, r'></target>' : r'>#####</target>'
}
</code></pre>
<p>I've had a bit of a headache with the first key (which is a Windows path, and uses backslashes) mainly because it is used as a pattern.</p>
<ul>
<li>Dictionary definition: <code>r'C:\\work\\l10n\\.'</code> I escape the backslashes because that string is going to be used as a pattern.</li>
<li>If I print the dictionary: <code>C:\\\\work\\\\l10n\\\\.</code> Backslashes appear double escaped, I understand because I'm defining the string as raw.</li>
<li>If I walk the dictionary and print the keys: <code>C:\\work\\l10n\\.</code> I see exactly what I wrote as raw string. It is a bit confusing that printing the whole dictionary reports a different string than printing a single key, but I guess that has to do with "print dictionary" implementation.</li>
<li>What I read from file: <code>'C:\work\l10n\.'</code> Non escaped backslashes.</li>
<li>What I have to do to use what I read from file as a dictionary key: escape the backslashes, and transform the text to <code>C:\\work\\l10n\\.</code></li>
</ul>
<p>Could this code be simplified somehow? E.g. so that I wouldn't need to escape backslashes by code?</p>
| 2 | 2016-08-04T11:51:38Z | 38,767,914 | <p>You could try something like:</p>
<pre><code>>>> text = r'start C:\work\l10n\. normal data maxwidth="0" something something ></target> end'
>>> # sub_dict format: {'symbolic_group_name': ['pattern', 'replacement']}
...
>>> sub_dict = {'win_path': [r'C:\\work\\l10n\\.', r'/ae/l10n//'],
... 'max_width': [r'maxwidth="0"', r'maxwidth="-1"'],
... 'target': [r'></target>', r'>#####</target>']}
>>> p = re.compile("|".join('(?P<{}>{})'.format(k, v[0]) for k, v in sub_dict.items()))
>>> def replacement_function(match_object):
... for group_name, match_value in match_object.groupdict().items():
... if match_value:
... # based on how the pattern is compiled 1 group will be a match
... # when we find it, we return the replacement text
... return sub_dict[group_name][1]
...
>>> new_text = p.sub(replacement_function, text)
>>> print(new_text)
start /ae/l10n// normal data maxwidth="-1" something something >#####</target> end
>>>
</code></pre>
<p>Using named groups allows you to rely on a simple string for lookup in your replacement dictionary and won't require special handling for <code>\</code>.</p>
<p>EDIT:</p>
<p>About the change in regex pattern: I changed your a|b|c pattern to use named groups. A named capture group has the syntax <code>(?P<name>pattern)</code>. Functionally it is the same as having <code>pattern</code>, but a named group allows you to obtain data from the match object using the group name (e.g. <code>matcher.group('name')</code> vs <code>matcher.group(0)</code>).</p>
<p>The <code>groupdict</code> method returns the named groups from the pattern and the value they matched. Because the pattern is <code>group1|group2|group3</code> only 1 group will actually have a match; the other 2 will have a <code>None</code> value in the dict returned by <code>groupdict</code> (in my words from the example: <code>match_value</code> will be != None only for the group that caused the match).</p>
<p>The benefit is that the group name can be any plain string (preferably something simple and related to the purpose of the pattern) and it will not cause issues with <code>\</code> escaping.</p>
| 2 | 2016-08-04T12:53:38Z | [
"python",
"dictionary",
"pattern-matching",
"backslash"
] |
python zip cycle over multiple lists | 38,766,596 | <p>Say I have these three lists:</p>
<pre><code>aList = [1,2,3,4,5,6]
bList = ['a','b','c','d']
cList = [1,2]
</code></pre>
<p>and I want to iterate over them using zip.</p>
<p>By using cycle with <code>zip</code> as follows:</p>
<pre><code>from itertools import cycle
for a,b,c in zip(aList, cycle(bList), cycle(cList)):
print a,b,c
</code></pre>
<p>I get the result as:</p>
<pre><code>1 a 1
2 b 2
3 c 1
4 d 2
5 a 1
6 b 2
</code></pre>
<p>Though I want my result to be like:</p>
<pre><code>1 a 1
2 b 1
3 c 1
4 d 1
5 a 2
6 b 2
</code></pre>
| 3 | 2016-08-04T11:51:40Z | 38,766,697 | <p>You can use <code>itertools.repeat()</code> to repeat each item of the third list once for every item of the second list:</p>
<pre><code>>>> from itertools import repeat, chain, cycle
>>>
>>> zip(aList,cycle(bList), chain.from_iterable(zip(*repeat(cList, len(bList)))))
[(1, 'a', 1),
(2, 'b', 1),
(3, 'c', 1),
(4, 'd', 1),
(5, 'a', 2),
(6, 'b', 2)]
</code></pre>
| 2 | 2016-08-04T11:57:27Z | [
"python",
"numpy",
"zip",
"cycle",
"itertools"
] |
python zip cycle over multiple lists | 38,766,596 | <p>Say I have these three lists:</p>
<pre><code>aList = [1,2,3,4,5,6]
bList = ['a','b','c','d']
cList = [1,2]
</code></pre>
<p>and I want to iterate over them using zip.</p>
<p>By using cycle with <code>zip</code> as follows:</p>
<pre><code>from itertools import cycle
for a,b,c in zip(aList, cycle(bList), cycle(cList)):
print a,b,c
</code></pre>
<p>I get the result as:</p>
<pre><code>1 a 1
2 b 2
3 c 1
4 d 2
5 a 1
6 b 2
</code></pre>
<p>Though I want my result to be like:</p>
<pre><code>1 a 1
2 b 1
3 c 1
4 d 1
5 a 2
6 b 2
</code></pre>
| 3 | 2016-08-04T11:51:40Z | 38,766,806 | <p>You can apply <a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a> on <code>c</code> and <code>b</code> and then restore their original order in the <code>print</code> statement:</p>
<pre><code>>>> from itertools import product, cycle
>>>
>>> for a, b_c in zip(aList, cycle(product(cList, bList))):
... print a, b_c[1], b_c[0]
...
1 a 1
2 b 1
3 c 1
4 d 1
5 a 2
6 b 2
</code></pre>
| 2 | 2016-08-04T12:02:46Z | [
"python",
"numpy",
"zip",
"cycle",
"itertools"
] |
python progress bar using tqdm not staying on a single line | 38,766,681 | <p>I am trying to run a script that tries to install modules on a centos7 system via puppet management.
I want to implement a progress bar for the installation that happens while the script runs.
I am using the tqdm module to do this.
This is a snapshot of how I have implemented it:</p>
<pre><code>from tqdm import tqdm
for i in tqdm(commands):
res = run_apply(i)
</code></pre>
<p>Here run_apply() is the function that actually handles running and applying the puppet configuration.</p>
<p>So far so good: I get a progress bar, but it keeps moving down the console as execution messages are written to it.
I need the progress bar to stay fixed at the bottom of the console and get updated dynamically, without the run messages interfering with the bar.
I want the execution-related messages on the console to continue as they are, but the progress bar should just stay at the bottom from the start to the end of the execution.</p>
<p>Below is what I am seeing:</p>
<pre><code> File line: 0.00
Package: 0.05
Service: 0.19
File: 0.23
Exec: 0.23
Last run: 1470308227
Config retrieval: 3.90
Total: 4.60
Version:
Config: 1470308220
Puppet: 3.7.3
now here x
result: 2
38%|██████████████████████████████████████                | 5/13 [00:29<00:51, 6.44s/it]about to: profiles::install::download_packages
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include profiles::install::download_packages"
Error: Could not find class profiles::install::download_packages for puppet on node puppet
Error: Could not find class profiles::install::download_packages for puppet on node puppet
now here x
result: 1
46%|██████████████████████████████████████████████        | 6/13 [00:32<00:36, 5.27s/it]about to: profiles::install::install
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include profiles::install::install"
Error: Could not find class profiles::install::install for puppet on node puppet
Error: Could not find class profiles::install::install for puppet on node puppet
now here x
result: 1
54%|██████████████████████████████████████████████████████        | 7/13 [00:34<00:26, 4.45s/it]about to: stx_network
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include stx_network"
Notice: Compiled catalog for puppet in environment production in 0.84 seconds
Notice: /Stage[main]/Stx_network/Tidy[purge unused nics]: Tidying File[/etc/sysconfig/network-scripts/ifcfg-lo]
...
</code></pre>
<p>Please let me know how I can achieve what I want.</p>
| 2 | 2016-08-04T11:56:54Z | 38,895,115 | <p>For the messages to be printed above the progress bar, you need to signal to tqdm that you are printing messages (otherwise neither tqdm nor any other progress bar can know that you are outputting messages besides the progress bar).</p>
<p>To do that, you can print your messages using <code>tqdm.write(msg)</code> instead of <code>print(msg)</code>. If you don't want to modify <code>run_apply()</code> to use <code>tqdm.write(msg)</code> instead of <code>print(msg)</code>, you can <a href="https://stackoverflow.com/questions/36986929/redirect-print-command-in-python-script-through-tqdm-write/37243211#37243211">redirect all standard output through tqdm from the toplevel script as described here</a>.</p>
| 1 | 2016-08-11T11:28:03Z | [
"python",
"progress-bar",
"tqdm"
] |
python progress bar using tqdm not staying on a single line | 38,766,681 | <p>I am trying to run a script that tries to install modules on a centos7 system via puppet management.
I want to implement a progress bar for the installation that happens while the script runs.
I am using the tqdm module to do this.
This is a snapshot of how I have implemented it:</p>
<pre><code>from tqdm import tqdm
for i in tqdm(commands):
res = run_apply(i)
</code></pre>
<p>Here run_apply() is the function that actually handles running and applying the puppet configuration.</p>
<p>So far so good: I get a progress bar, but it keeps moving down the console as execution messages are written to it.
I need the progress bar to stay fixed at the bottom of the console and get updated dynamically, without the run messages interfering with the bar.
I want the execution-related messages on the console to continue as they are, but the progress bar should just stay at the bottom from the start to the end of the execution.</p>
<p>Below is what I am seeing:</p>
<pre><code> File line: 0.00
Package: 0.05
Service: 0.19
File: 0.23
Exec: 0.23
Last run: 1470308227
Config retrieval: 3.90
Total: 4.60
Version:
Config: 1470308220
Puppet: 3.7.3
now here x
result: 2
38%|████████████████████                                | 5/13 [00:29<00:51, 6.44s/it]about to: profiles::install::download_packages
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include profiles::install::download_packages"
Error: Could not find class profiles::install::download_packages for puppet on node puppet
Error: Could not find class profiles::install::download_packages for puppet on node puppet
now here x
result: 1
46%|████████████████████████                            | 6/13 [00:32<00:36, 5.27s/it]about to: profiles::install::install
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include profiles::install::install"
Error: Could not find class profiles::install::install for puppet on node puppet
Error: Could not find class profiles::install::install for puppet on node puppet
now here x
result: 1
54%|█████████████████████████████                       | 7/13 [00:34<00:26, 4.45s/it]about to: stx_network
about to run puppet apply --summarize --detailed-exitcodes --certname puppet -e "include stx_network"
Notice: Compiled catalog for puppet in environment production in 0.84 seconds
Notice: /Stage[main]/Stx_network/Tidy[purge unused nics]: Tidying File[/etc/sysconfig/network-scripts/ifcfg-lo]
...
</code></pre>
<p>Please let me know how I can achieve this.</p>
| 2 | 2016-08-04T11:56:54Z | 38,895,337 | <p>Try using the <code>progressbar</code> module:</p>
<pre><code>import time
import progressbar  # note: the module name is lowercase

progress = progressbar.ProgressBar()
for i in progress(range(30)):
    time.sleep(0.1)
</code></pre>
<p>It will look like this:</p>
<pre><code>43% (13 of 30) |############################                   | Elapsed Time: 0:00:01 ETA: 0:00:01
</code></pre>
| 1 | 2016-08-11T11:39:25Z | [
"python",
"progress-bar",
"tqdm"
] |
How to create a SQLite3 table with column names from a list? | 38,766,859 | <p>I'm storing patients' daily blood pressure data in an SQLite3 table. Each patient corresponds to a row, each date corresponds to a column. How do I initialize this table with the column names being the dates from a Python Pandas series? Perhaps something like:</p>
<pre><code>DATE_LIST = pandas.date_range(start_date, end_date)
cursor.execute('''CREATE TABLE bloodpressure DATE_LIST REAL''')
</code></pre>
<p>The above will create a table with a single column named DATE_LIST, which is not what I want. I want to use the dates in DATE_LIST as column names.</p>
| 0 | 2016-08-04T12:06:05Z | 38,767,351 | <p>You are breaking first normal form (1NF). Consider restructuring the database to follow it.</p>
<p>In simpler terms, each row should be one patient's blood pressure reading on some date.</p>
<pre><code>patient_id | bloodpressure | date
-----------+---------------+-----------
101 | 120/80 | 2016-08-04
101 | 130/84 | 2016-08-03
102 | 150/90 | 2016-08-03
</code></pre>
<p>The SQL statement to create this could look something like</p>
<pre><code>CREATE TABLE bloodpressure (
patient_id integer,
bloodpressure text,
date text,
  primary key(patient_id, date)
)
</code></pre>
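<p>To make this concrete, here is a minimal runnable sketch using Python's built-in <code>sqlite3</code> module (the table layout follows the example above; the sample readings are made up):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bloodpressure ("
    " patient_id    INTEGER,"
    " bloodpressure TEXT,"
    " date          TEXT,"
    " PRIMARY KEY (patient_id, date))"  # one reading per patient per day
)
rows = [(101, '120/80', '2016-08-04'),
        (101, '130/84', '2016-08-03'),
        (102, '150/90', '2016-08-03')]
conn.executemany("INSERT INTO bloodpressure VALUES (?, ?, ?)", rows)

# All readings for one patient, newest first -- no 365-column table needed
readings = conn.execute(
    "SELECT date, bloodpressure FROM bloodpressure"
    " WHERE patient_id = ? ORDER BY date DESC", (101,)).fetchall()
print(readings)  # [('2016-08-04', '120/80'), ('2016-08-03', '130/84')]
```

<p>Adding a new day is then just another <code>INSERT</code>, instead of an <code>ALTER TABLE</code> per date.</p>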
| 0 | 2016-08-04T12:27:24Z | [
"python",
"sqlite3"
] |
How to group data by specific time window, where second time is next day | 38,766,871 | <p>I need to calculate sum of some events between 2015-01-01 and 2015-12-31 made every night between 21:30 and 04:30 next day?</p>
<p>How to made it by using Pandas in a most elegant but possible simple and efficient way?</p>
<p>Example results table should look similar to the following:</p>
<pre><code> count
2015-04-01 38 (events between 2015-03-31 21:30 and 2015-04-01 04:30)
2015-04-02 15 (events between 2015-04-01 21:30 and 2015-04-02 04:30)
2015-04-03 27 (events between 2015-04-02 21:30 and 2015-04-03 04:30)
</code></pre>
<p>Thanks for any help and suggestions.</p>
| 1 | 2016-08-04T12:06:57Z | 38,768,291 | <p>You can use:</p>
<pre><code>df = pd.DataFrame({'a':['2015-04-01 15:00','2015-04-01 23:00','2015-04-01 04:00','2015-04-02 03:00','2015-05-02 16:00','2015-04-03 02:00'],
'b':[2,4,3,1,7,10]})
df['a'] = pd.to_datetime(df.a)
</code></pre>
<pre><code>print (df)
a b
0 2015-04-01 15:00:00 2
1 2015-04-01 23:00:00 4
2 2015-04-01 04:00:00 3
3 2015-04-02 03:00:00 1
4 2015-05-02 16:00:00 7
5 2015-04-03 02:00:00 10
</code></pre>
<p>Create <code>DatetimeIndex</code>:</p>
<pre><code>start = pd.to_datetime('2015-04-01')
d = pd.date_range(start, periods=3)
print (d)
DatetimeIndex(['2015-04-01', '2015-04-02', '2015-04-03'], dtype='datetime64[ns]', freq='D')
</code></pre>
<p>Loop by <code>DatetimeIndex</code>, select all rows by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> and get <code>len</code>:</p>
<pre><code>for dat in d:
date_sum = len(df.ix[(df.a >= dat.date()+pd.offsets.DateOffset(hours=21, minutes=30)) &
(df.a <= dat.date()+pd.offsets.DateOffset(days=1, hours=4, minutes=30)),'b'])
print (date_sum)
print (dat.date())
2
2015-04-01
1
2015-04-02
0
</code></pre>
<p>Create new <code>Series</code> by dict comprehension:</p>
<pre><code>out = { dat.date(): len(df.ix[(df.a >= dat.date() + pd.offsets.DateOffset(hours=21, minutes=30)) & (df.a <= dat.date() + pd.offsets.DateOffset(days=1, hours=4, minutes=30)), 'b']) for dat in d}
s = pd.Series(out)
print (s)
2015-04-01 2
2015-04-02 1
2015-04-03 0
dtype: int64
</code></pre>
| 1 | 2016-08-04T13:09:54Z | [
"python",
"pandas",
"numpy"
] |
How to solve mysql daily analytics that happens when date changes | 38,766,962 | <p>I have two separate programs; one counts the daily view stats and another calculates earning based on the stats.</p>
<p>The Counter runs first, followed by the Earning Calculator a few seconds later.</p>
<p>Earning Calculator works by getting stats from counter table using <code>date(created_at) > date(now())</code>.</p>
<p>The problem I'm facing is that let's say at 23:59:59 Counter added 100 views stats and by the time the Earning Calculator ran it's already the next day.</p>
<p>Since I'm using <code>date(created_at) > date(now())</code>, I will miss out the last 100 views added by the Counter.</p>
<p>One way to solve my problem is to summarise the previous daily report at 00:00:10 every day. But I do not like this.</p>
<p>Is there any other ways to solve this issue?</p>
<p>Thanks.</p>
| 0 | 2016-08-04T12:10:39Z | 38,770,424 | <p>Record on each stats row the date it belongs to, and have the Earning Calculator filter on that stored date instead of deriving the day from <code>now()</code>. A row written at 23:59:59 then still counts toward its own day, no matter when the calculator runs.</p>
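<p>A sketch of the idea in Python (names such as <code>stats_date</code> are hypothetical, not from the question): the Counter stamps each row with the day it belongs to, and the Earning Calculator filters on that stored day rather than on <code>now()</code>:</p>

```python
import datetime

def stamp_views(views, counted_at):
    # Hypothetical helper: the Counter records which day the stats belong to
    return {"views": views, "stats_date": counted_at.date()}

row = stamp_views(100, datetime.datetime(2016, 8, 3, 23, 59, 59))

# The Earning Calculator asks for a specific day instead of date(now())
target_day = datetime.date(2016, 8, 3)
print(row["stats_date"] == target_day)  # True, even if it runs after midnight
```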
| 0 | 2016-08-04T14:38:14Z | [
"python",
"mysql",
"analytics"
] |
Python-social-auth returns admin user | 38,766,964 | <p>I am trying to setup python-social-auth to authenticate users from Vk. I have a standard setup as written in docs with normal pipeline. The problem is that when user tries to log in:</p>
<pre><code>'social.pipeline.social_auth.social_user',
</code></pre>
<p>always returns the admin user. Basically, any user who tries to log in is associated with the admin account. Any ideas why this happens and where to look?</p>
| 0 | 2016-08-04T12:10:40Z | 38,769,185 | <p>OK, I have found the answer. The problem was that there was already a social auth association between the UID and a user recorded in the DB. After removing the wrong association everything worked like a charm. Thanks to all who tried to help!</p>
| 2 | 2016-08-04T13:46:59Z | [
"python",
"django",
"python-social-auth"
] |
How to write multiple try/excepts efficiently | 38,767,148 | <p>I quite often want to try to convert values to int and, if they can't be converted, set them to some default value. For example:</p>
<pre><code>try:
a = int(a)
except:
a = "Blank"
try:
b = int(b)
except:
b = "Blank"
try:
c = int(c)
except:
c = "Blank"
</code></pre>
<p>Can this be written more efficiently in Python rather than having to write out every try and except?</p>
| 0 | 2016-08-04T12:19:21Z | 38,767,233 | <p>I'd simply use a function:</p>
<pre><code>def int_with_default(i):
try:
return int(i)
except ValueError:
return "Blank"
a = int_with_default(a)
b = int_with_default(b)
c = int_with_default(c)
</code></pre>
<p>If necessary, you can always add a second argument that tells what the default value should be if you don't want to use "Blank" every time.</p>
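<p>For instance, a sketch of that variant (the name and default are illustrative):</p>

```python
def to_int(value, default="Blank"):
    # Fall back to `default` whenever the value can't be converted
    try:
        return int(value)
    except (ValueError, TypeError):
        return default

print(to_int("42"))     # 42
print(to_int("oops"))   # Blank
print(to_int(None, 0))  # 0
```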
| 4 | 2016-08-04T12:23:11Z | [
"python"
] |
Python TypeError in Numpy polyfit ufunc did not contain loop with matching signature types | 38,767,154 | <p>This has been asked before, so apologies for asking again; I have followed the suggested solutions provided in <a href="http://stackoverflow.com/questions/36637428/typeerror-ufunc-subtract-did-not-contain-a-loop-with-signature-matching-types?noredirect=1&lq=1">these</a> <a href="http://stackoverflow.com/questions/37751405/ufunc-multiply-did-not-contain-a-loop-with-signature-matching-types-dtypeu3">answers</a> and <a href="http://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types">this</a>, but I can't seem to remove my error.</p>
<p>I've tried changing the object type for the slope list and vals range, but I still get the error. I dug into the polynomial.py script, but I don't understand the line of code listed in the error.</p>
<p>Overall I'm trying to separate an 8-bit grayscale image into individual arrays based on the grayscale values (0-255), generate a line of best fit for each value, then use each of the slopes from the lines of best fit to get an overall line of best fit for the image. </p>
<p>Here's my code:</p>
<pre><code>image = Image.open('subfram-002.tif')
im_array = np.array(image)
#print(im_array)
####
multivec = []
####
# For loop to calculate the line of best fit for each value in the sub-frame array
arrays = [im_array] # Calling our array
vals = list(range(255)) # Range of values in array
slopes = [] # List to append each of the slopes of the line of best fit
skip = False
for i in arrays:
for j in vals:
index = list(zip(*np.where (j == i))) # Creating a list of the position indices for each value in the array
in_array = np.array(index) # Updating list to array, not needed will remove
a = list([int(i[0]) for i in index]) # Getting the first element of each tuple in the index list
b = list([int(i[1]) for i in index]) # Getting the second element of each tuple in the index list
# Add exception for vectors that are not generated due to values in 0-255 not being present
if len(a) == 0:
skip = True
elif len(b) == 0:
skip = True
else:
vec = list((np.poly1d(np.polyfit(a,b,1))).c) # Calculating list of best (1st order polynomial, consider increasing the order?)
slope = float(vec[0]) # Getting the 1st coefficient of the line of best fit, which is the slope
slopes.append(slope) # appending each slope to a list of slopes
print(type(slopes), type(vals))
slopes += ['0'] * (255 - len(slopes)) # Padding slope list in case less then 255 slopes generated
print(len(vals))
# Take in all of the slopes from each vector that is determined for each value
vec1 = (np.poly1d(np.polyfit(slopes,vals,1))).c # Determining the overall line of best fit for the array, using the slope of the vector as the comparator between sub-frames
</code></pre>
<p>Here's my error:</p>
<pre><code><class 'list'> <class 'list'>
255
Traceback (most recent call last):
File "aryslop.py", line 53, in <module>
vec1 = (np.poly1d(np.polyfit(slopes1,vals1,1))).c # Determining the overall line of best fit for the array, using the slope of the vector as the comparator between sub-frames
File "/home/vanoccupanther/anaconda3/lib/python3.5/site-packages/numpy/lib/polynomial.py", line 549, in polyfit
x = NX.asarray(x) + 0.0
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')
</code></pre>
| 0 | 2016-08-04T12:19:36Z | 38,767,755 | <p>I solved this (I hope): I created new lists by iterating through the vals range and the slopes list, turning every element of each into a float. I had already done that inside my for loop, so I should have just done the same for the padding values earlier.</p>
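<p>To illustrate what was going on (a minimal sketch, not the full script): padding the float slopes with the string <code>'0'</code> turns the whole array into a Unicode array, which is exactly the <code>dtype('<U32')</code> in the traceback, and coercing every element to <code>float</code> fixes it:</p>

```python
import numpy as np

slopes = [1.5, 2.0]      # floats computed from polyfit
slopes += ['0'] * 2      # padding with *strings* poisons the dtype
bad = np.asarray(slopes)
print(bad.dtype.kind)    # 'U' -> the "<U32" unicode dtype from the TypeError

fixed = [float(s) for s in slopes]  # coerce everything to float instead
good = np.asarray(fixed)
print(good.dtype.kind)   # 'f' -> polyfit is happy with this
```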
| 1 | 2016-08-04T12:46:36Z | [
"python",
"numpy"
] |
add a label to a plot with seaborn | 38,767,241 | <p>I made a plot using seaborn.
Here is my plot: <a href="http://i.stack.imgur.com/2pDID.png" rel="nofollow"><img src="http://i.stack.imgur.com/2pDID.png" alt="enter image description here"></a>
I would like to add a label for each line.
Can you help me please?</p>
<pre><code>import numpy as np
import matplotlib.pylab as plt
import matplotlib.dates as mdates
from matplotlib import style
import pandas as pd
%pylab inline
import seaborn as sns
sns.set_style('darkgrid')
import io
style.use('ggplot')
from datetime import datetime
import time
fig = plt.figure(figsize=(12, 8), dpi=100)
ax1 = fig.add_subplot(111)
x1 = pd.to_datetime(df_no_missing.TIMESTAMP, format="%h:%m")
y1 = df_no_missing.P_ACT_KW
y3 = df_no_missing.P_SOUSCR
yearFmt = mdates.DateFormatter("%H:%M:%S")
ax1.xaxis.set_major_formatter(yearFmt)
ax2 = ax1.twinx()
ax1.plot(x, y1, 'g-')
ax2.plot(x, y2, 'b-')
ax1.plot(x, y3, 'r-')
ax1.set_xlabel('temps')
ax1.set_ylabel('puissance', color='g')
ax2.set_ylabel('dépassement', color='b')
plt.ylim(plt.ylim()[0], 1.0)
plt.show()
</code></pre>
<p>Thank you in advance
<strong>EDIT</strong></p>
<p>I tried what you suggested:</p>
<pre><code>fig = plt.figure(figsize=(12, 5), dpi=100)
ax1 = fig.add_subplot(111)
x1 = pd.to_datetime(df_no_missing.TIMESTAMP, format="%h:%m")
y1 = df_no_missing.P_ACT_KW
y3 = df_no_missing.P_SOUSCR
yearFmt = mdates.DateFormatter("%H:%M:%S")
ax1.xaxis.set_major_formatter(yearFmt)
ax2 = ax1.twinx()
ax1.plot(x, y1, 'g-', label='label 1')
ax2.plot(x, y2, 'b-', label='label 2')
ax1.plot(x, y3, 'r-', label='label 3')
ax1.plot(0, 0, 'b-', label='label 2')
ax.legend(loc=0) # add legend in top right corner
ax.grid() # show grid lines
ax1.set_xlabel('temps')
ax1.set_ylabel('puissance', color='g')
ax2.set_ylabel('dépassement', color='b')
plt.ylim(plt.ylim()[0], 1.0)
plt.show()
</code></pre>
<p>But I got this error:</p>
<blockquote>
<pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
C:\Users\Demonstrator\Anaconda3\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
    337                 pass
    338             else:
--> 339                 return printer(obj)
    340             # Finally look for special method names
    341             method = _safe_get_formatter_method(obj, self.print_method)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in <lambda>(fig)
    226
    227     if 'png' in formats:
--> 228         png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
    229     if 'retina' in formats or 'png2x' in formats:
    230         png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))

C:\Users\Demonstrator\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
    117
    118     bytes_io = BytesIO()
--> 119     fig.canvas.print_figure(bytes_io, **kw)
    120     data = bytes_io.getvalue()
    121     if fmt == 'svg':

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, **kwargs)
   2178                     orientation=orientation,
   2179                     dryrun=True,
-> 2180                     **kwargs)
   2181                 renderer = self.figure._cachedRenderer
   2182                 bbox_inches = self.figure.get_tightbbox(renderer)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in print_png(self, filename_or_obj, *args, **kwargs)
    525
    526     def print_png(self, filename_or_obj, *args, **kwargs):
--> 527         FigureCanvasAgg.draw(self)
    528         renderer = self.get_renderer()
    529         original_dpi = renderer.dpi

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
    472
    473         try:
--> 474             self.figure.draw(self.renderer)
    475         finally:
    476             RendererAgg.lock.release()

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
     59     def draw_wrapper(artist, renderer, *args, **kwargs):
     60         before(artist, renderer)
---> 61         draw(artist, renderer, *args, **kwargs)
     62         after(artist, renderer)
     63

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
   1157         dsu.sort(key=itemgetter(0))
   1158         for zorder, a, func, args in dsu:
-> 1159             func(*args)
   1160
   1161         renderer.close_group('figure')

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
     59     def draw_wrapper(artist, renderer, *args, **kwargs):
     60         before(artist, renderer)
---> 61         draw(artist, renderer, *args, **kwargs)
     62         after(artist, renderer)
     63

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe)
   2322
   2323         for zorder, a in dsu:
-> 2324             a.draw(renderer)
   2325
   2326         renderer.close_group('axes')

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
     59     def draw_wrapper(artist, renderer, *args, **kwargs):
     60         before(artist, renderer)
---> 61         draw(artist, renderer, *args, **kwargs)
     62         after(artist, renderer)
     63

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\axis.py in draw(self, renderer, *args, **kwargs)
   1104         renderer.open_group(__name__)
   1105
-> 1106         ticks_to_draw = self._update_ticks(renderer)
   1107         ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
   1108                                                                 renderer)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\axis.py in _update_ticks(self, renderer)
    947
    948         interval = self.get_view_interval()
--> 949         tick_tups = [t for t in self.iter_ticks()]
    950         if self._smart_bounds:
    951             # handle inverted limits

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\axis.py in <listcomp>(.0)
    947
    948         interval = self.get_view_interval()
--> 949         tick_tups = [t for t in self.iter_ticks()]
    950         if self._smart_bounds:
    951             # handle inverted limits

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\axis.py in iter_ticks(self)
    890         Iterate through all of the major and minor ticks.
    891         """
--> 892         majorLocs = self.major.locator()
    893         majorTicks = self.get_major_ticks(len(majorLocs))
    894         self.major.formatter.set_locs(majorLocs)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\dates.py in __call__(self)
   1004     def __call__(self):
   1005         'Return the locations of the ticks'
-> 1006         self.refresh()
   1007         return self._locator()
   1008

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\dates.py in refresh(self)
   1024     def refresh(self):
   1025         'Refresh internal information based on current limits.'
-> 1026         dmin, dmax = self.viewlim_to_dt()
   1027         self._locator = self.get_locator(dmin, dmax)
   1028

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\dates.py in viewlim_to_dt(self)
    768             vmin, vmax = vmax, vmin
    769
--> 770         return num2date(vmin, self.tz), num2date(vmax, self.tz)
    771
    772     def _get_unit(self):

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\dates.py in num2date(x, tz)
    417         tz = _get_rc_timezone()
    418     if not cbook.iterable(x):
--> 419         return _from_ordinalf(x, tz)
    420     else:
    421         x = np.asarray(x)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\matplotlib\dates.py in _from_ordinalf(x, tz)
    269
    270     ix = int(x)
--> 271     dt = datetime.datetime.fromordinal(ix).replace(tzinfo=UTC)
    272
    273     remainder = float(x) - ix

ValueError: ordinal must be >= 1
<matplotlib.figure.Figure at 0x1d3b041a400>
</code></pre>
</blockquote>
<pre><code> print (df_no_missing.head())
</code></pre>
<blockquote>
<pre><code> TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE \
145 2015-08-01 23:10:00 248.0 HC 425.0 ST GEREON
146 2015-08-01 23:20:00 244.0 HC 425.0 ST GEREON
147 2015-08-01 23:30:00 243.0 HC 425.0 ST GEREON
148 2015-08-01 23:40:00 238.0 HC 425.0 ST GEREON
149 2015-08-01 23:50:00 234.0 HC 425.0 ST GEREON
TARIF depassement date time
145 TURPE_HTA5 0.0 2015-08-01 23:10:00
146 TURPE_HTA5 0.0 2015-08-01 23:20:00
147 TURPE_HTA5 0.0 2015-08-01 23:30:00
148 TURPE_HTA5 0.0 2015-08-01 23:40:00
149 TURPE_HTA5 0.0 2015-08-01 23:50:00
</code></pre>
</blockquote>
| 2 | 2016-08-04T12:23:23Z | 38,768,867 | <p>You can specify the <code>label</code> names on each subplot axis and then use a single combined <code>legend</code> call to place the legend at the center right of the axes.</p>
<pre><code>fig = sns.plt.figure(figsize=(12, 5), dpi=100)
ax1 = fig.add_subplot(111)
x1 = pd.to_datetime(df_no_missing.TIMESTAMP)
y1 = df_no_missing.P_ACT_KW
y2 = df_no_missing.depassement
y3 = df_no_missing.P_SOUSCR
yearFmt = mdates.DateFormatter("%H:%M:%S")
ax1.xaxis.set_major_formatter(yearFmt)
ax1.plot(x1, y1, 'g-', label='p_act_kw')
ax1.plot(x1, y3, 'r-', label='p_souscr')
ax2 = ax1.twinx()
ax2.plot(x1, y2, 'b-', label='depassement')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax1.legend(h1+h2, l1+l2, loc='center right')
ax1.set_xlabel('temps')
ax1.set_ylabel('puissance', color='g')
ax2.set_ylabel('dépassement', color='b')
sns.plt.ylim(plt.ylim()[0], 1.0)
sns.plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/9mqqj.png" rel="nofollow"><img src="http://i.stack.imgur.com/9mqqj.png" alt="Image"></a></p>
| 0 | 2016-08-04T13:34:18Z | [
"python",
"seaborn"
] |
Django objects.filter with args in loop | 38,767,242 | <p>I am looking for a way to simplify my Django search function (in Python). At the moment I have this:</p>
<pre><code>def search(acronym=None, name=None, reference=None):
queryset = Organization.objects
if acronym:
queryset = queryset.filter(acronym=acronym)
if name:
queryset = queryset.filter(name=name)
if reference:
queryset = queryset.filter(reference=reference)
return queryset
</code></pre>
<p>The problem is that each time I add an argument, I have to add another if + filter. Is there a way to put this in a loop?</p>
<p>For example :</p>
<pre><code> def search(acronym=None, name=None, reference=None):
queryset = Organization.objects
for arg in args :
if arg :
queryset = queryset.filter(arg = arg)
return queryset
</code></pre>
<p>or something like that ?</p>
<p>Thanks in advance</p>
| 0 | 2016-08-04T12:23:23Z | 38,767,396 | <p>Use <code>**kwargs</code> in your function definition and filter out <code>None</code> valued items using a <em>dictionary comprehension</em> before passing and <em>unpacking</em> the named arguments to <code>filter</code>:</p>
<pre><code>def search(**kwargs):
kwargs = {k: v for k, v in kwargs.items() if v}
queryset = Organization.objects.filter(**kwargs)
return queryset
</code></pre>
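<p>For example, only the arguments with a truthy value survive the comprehension, so callers can pass everything and let the helper drop the blanks:</p>

```python
kwargs = {'acronym': 'WHO', 'name': None, 'reference': ''}
cleaned = {k: v for k, v in kwargs.items() if v}
print(cleaned)  # {'acronym': 'WHO'}
```

<p>Note that this also drops legitimate falsy values such as <code>0</code>; use <code>if v is not None</code> if that matters for your fields.</p>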
| 0 | 2016-08-04T12:29:50Z | [
"python",
"django"
] |
Button to add gui element to existing one | 38,767,288 | <p>I have a window with some elements such as Label and ButtonBox packed in a group from the Pmw module. I want to make a button that, when clicked, extends the window and adds another element (for example another Label) at the bottom OR in the existing frame. The problem is that when clicked, nothing happens.</p>
<p>Here is the code with window</p>
<pre><code>def __init__(self,
page,
groupname='myfirsttabdefault',
defaultstructurename='',
defaultchain=''
):
group = Pmw.ScrolledFrame(page,
labelpos='nw',
label_text=groupname)
self.groupname = groupname
self.group = group
group = Pmw.Group(page, tag_text = "Choose input file format")
group.pack(fill='x', expand=1, padx=5, pady=5)
prot_info = tk.Label(group.interior(), text='Single chain')
prot_info.pack(padx=2, pady=2, expand='yes', fill='y')
input_fileformat_buttons = Pmw.ButtonBox(group.interior(), padx=0)
input_fileformat_buttons.add("original file", command=self.orig_button_click)
input_fileformat_buttons.pack(fill='both', expand=1, padx=5, pady=1)
</code></pre>
<p>And here is the code for command orig_button_click</p>
<pre><code>def orig_button_click(self):
protein_info = tk.Label(self.group.interior(), text='something')
protein_info.pack(padx=2, pady=2, expand='yes', fill='y')
</code></pre>
<p><strong>now the question: how to write a button that when clicked will add this protein_info element to existing window?</strong></p>
| 0 | 2016-08-04T12:25:06Z | 38,767,671 | <pre><code>def orig_button_click(self):
protein_info = tk.Label(self.group.interior(), text='something')
protein_info.pack(padx=2, pady=2, expand='yes', fill='y')
    protein_info.bind("<Button-1>", self.Label_Click)  # Bind left mouse click on the new label
def Label_Click(self, event):  # Tkinter passes the event object to bound handlers
    # Do stuff here, e.g. inspect event.widget
    pass
</code></pre>
| 0 | 2016-08-04T12:42:59Z | [
"python",
"user-interface",
"tkinter",
"widget",
"pmw"
] |
Callback to python function from Tkinter Tcl crashes in windows | 38,767,355 | <p>This is not exactly my application, but very similar. I have created this test code to show the problem. Basically I am trying to call tcl proc from python thread. Tcl proc will callback to python function when result is ready. This result will be posted as an event to wx frame. When I run as pure python code, it works fine. When I use tcl proc, the whole app crashes without any info. If I increase wait_time (say 100) then it works fine even with tcl. Is it the high rate of callback a problem or am I missing something else. This app runs on windows by the way.</p>
<pre><code>import wx
from Tkinter import Tcl
from threading import Thread
import wx.lib.newevent
from time import sleep
CountUpdateEvent, EVT_CNT_UPDATE = wx.lib.newevent.NewEvent()
tcl_code = 'proc tcl_worker {name max_count delay_time callback} { while {$max_count>0} {after $delay_time; $callback $name $max_count; incr max_count -1}}'
# Option to use Tcl or not for counter
# When enabled, Tcl will callback to python to upate counter value
use_tcl = True
# Option to create tcl interpreter per thread.
# Test shows single interpreter for all threads will fail.
use_per_thread_tcl = True
count = 5000
wait_time = 1 ;# in milliseconds
class Worker:
def __init__(self,name,ui,tcl):
global use_per_thread_tcl
self.name = name
self.ui = ui
if use_per_thread_tcl:
self.tcl = Tcl()
self.tcl.eval(tcl_code)
else:
self.tcl = tcl
self.target = ui.add_textbox(name)
self.thread = Thread(target=self.run)
self.thread.daemon = True
self.thread.start()
def callback(self, name, val):
evt = CountUpdateEvent(name=self.name, val=val, target=self.target)
wx.PostEvent(self.ui,evt)
def run(self):
global count, wait_time, use_tcl
if use_tcl:
# Register a python function to be called back from tcl
tcl_cmd = self.tcl.register(self.callback)
# Now call tcl proc
self.tcl.call('tcl_worker', self.name, str(count), str(wait_time), tcl_cmd)
else:
# Convert milliseconds to seconds for sleep
py_wait_time = wait_time / 1000
while count > 0:
# Directly call the callback from here
self.callback(self.name, str(count))
count -= 1
sleep(py_wait_time)
class MainWindow(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent, title="Decrement Counter", size=(600, 100))
self._DoLayout()
self.Bind(EVT_CNT_UPDATE, self.on_count_update)
def _DoLayout(self):
self.sizer = wx.BoxSizer(wx.HORIZONTAL)
self.panels = []
self.tbs = []
self.xpos = 0
def add_textbox(self,name):
panel = wx.Panel(self, pos=(self.xpos, 0), size=(60,40))
self.panels.append(panel)
tb = wx.StaticText(panel, label=name)
tb.SetFont(wx.Font(16,wx.MODERN,wx.NORMAL,wx.NORMAL))
self.sizer.Add(panel, 1, wx.EXPAND, 7)
self.tbs.append(tb)
self.xpos = self.xpos + 70
return tb
def on_count_update(self,ev):
ev.target.SetLabel(ev.val)
del ev
if __name__ == '__main__':
app = wx.App(False)
frame = MainWindow(None)
tcl = Tcl()
tcl.eval(tcl_code)
w1 = Worker('A', frame, tcl)
w2 = Worker('B', frame, tcl)
w3 = Worker('C', frame, tcl)
w4 = Worker('D', frame, tcl)
w5 = Worker('E', frame, tcl)
w6 = Worker('F', frame, tcl)
w7 = Worker('G', frame, tcl)
w8 = Worker('H', frame, tcl)
frame.Show()
app.MainLoop()
</code></pre>
| 1 | 2016-08-04T12:27:34Z | 38,767,665 | <p>Each Tcl interpreter object (i.e., the context that knows how to run a Tcl procedure) can only be safely used from the OS thread that creates it. This is because Tcl doesn't use a global interpreter lock like Python, and instead makes extensive use of thread-specific data to reduce the number of locks required internally. (Well-written Tcl code can take advantage of this to scale up very large on suitable hardware.)</p>
<p>Because of this, you <strong><em>must</em></strong> make sure that you only ever run Tcl commands or Tkinter operations from a single thread; that's typically the main thread, but I'm not sure if that's the real requirement for integrating with Python. You can launch worker threads if you want, but they'll be unable to use Tcl or Tkinter (well, not without very special precautions which are more trouble than it's likely worth). Instead, they need to send messages to the main thread for it to handle the interaction with the GUI; there are many different ways to do that.</p>
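<p>One common pattern, sketched here with the standard library only (in the wx app you would drain the queue from a timer, or keep using <code>wx.PostEvent</code> as the question already does):</p>

```python
import threading
import queue  # named `Queue` on Python 2

def worker(name, max_count, out_queue):
    # The worker never touches the GUI or another thread's Tcl interpreter;
    # it only pushes result messages onto a thread-safe queue.
    for value in range(max_count, 0, -1):
        out_queue.put((name, value))
    out_queue.put((name, None))  # sentinel: this worker is finished

messages = queue.Queue()
t = threading.Thread(target=worker, args=("A", 3, messages))
t.start()
t.join()  # a real GUI would poll the queue periodically instead of joining

received = []
while not messages.empty():
    received.append(messages.get())
print(received)  # [('A', 3), ('A', 2), ('A', 1), ('A', None)]
```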
| 0 | 2016-08-04T12:42:46Z | [
"python",
"tkinter",
"wxpython",
"tcl"
] |
Django not detecting model classes when folder named as Models | 38,767,383 | <p>I split my models into multiple files and placed them in a folder. Here is the tree structure. Mind the name of the folder: it is 'Models' with a capital M.</p>
<pre><code>|-- Models
| |-- __init__.py
| |-- PersonModel.py
| `-- VehicleModel.py
</code></pre>
<p>Content of <code>__init__.py</code> file is -</p>
<pre><code>from .VehicleModel import *
from .PersonModel import *
</code></pre>
<p>I created a model class. Now when I am running <code>python manage.py makemigrations MyAppName</code> it says <code>No changes detected in app 'MyAppName'</code></p>
<p>Things worked fine when I renamed the folder from 'Models' to 'models'.</p>
<p>However, I have done the same for views, i.e. split the views into multiple files and placed them in a folder. The tree structure is below.</p>
<pre><code>`-- Views
|-- DashboardView.py
|-- __init__.py
|-- __pycache__
| |-- DashboardView.cpython-34.pyc
| |-- __init__.cpython-34.pyc
| `-- VehicleView.cpython-34.pyc
`-- VehicleView.py
</code></pre>
<p>Here the folder name is 'Views' with a capital V and things work fine; Django does not complain.</p>
<p>I am not able to understand why makemigrations does not detect the model classes when they are placed in a folder named Models.</p>
| 0 | 2016-08-04T12:29:02Z | 38,767,506 | <p>Django doesn't know or care where your views are. You could put them in a folder called "thesearedefinitelynotviews" and it'll still work, as long as you imported them from there.</p>
<p>Models are different. Django needs to be able to find them on startup, so it can do things like set the relationships up correctly. In order for that to happen, they have to be accessible by importing <code>models</code>.</p>
| 3 | 2016-08-04T12:35:52Z | [
"python",
"django"
] |
Iterating on data in two 3D arrays python | 38,767,421 | <p>I'm trying to perform a number of functions to get some results from a set of satellite imagery (in the example case I am performing similarity functions). I first intended to iterate through all the pixels simultaneously, each containing 4 numbers, then calculating a value for each one based off these too numbers then write it to an array e.g scipy.spatial.distance.correlation(pixels_0, pixels_1).</p>
<p>The issue is that when I run this loop I can't get it to save to a 1000x1000 array with a value for each pixel.</p>
<pre><code>array_0 = # some array with dimensions(1000, 1000, 4)
array_1 = # some array with dimensions(1000, 1000, 4)
result_array = []
for rows_0, rows_1 in itertools.izip(array_0, array_1):
for pixels_0, pixels_1 in itertools.izip(rows_0, rows_1):
results = some_function(pixels_0, pixels_1)
print results
>>> # successfully prints desired results
results_array.append(results)
>>> # unsuccessful in creating the desired array
</code></pre>
<p>I am getting the results I want printed down the run window, but I don't know how to put them back into an array which I could manipulate in a similar manner. Are my for loops the issue, or is this a simple issue with appending back to arrays? Any explanation on speeding it up would also be great, as I'm very new to python and programming altogether. </p>
<pre><code>a = np.random.rand(10, 10, 4)
b = np.random.rand(10, 10, 4)
def dotprod(T0, T1):
return np.dot(T0, T1)/(np.linalg.norm(T0)*np.linalg.norm(T1))
results =dotprod(a.flatten(), b.flatten())
results = results.reshape(a.shape)
</code></pre>
<p>This now causes ValueError: total size of new array must be unchanged,
and when printing the first results value I receive only one number. Is this the fault of my own poorly constructed function or in how I am using numpy?</p>
| 0 | 2016-08-04T12:31:03Z | 38,767,610 | <p>Before investing anymore effort into programming it this way, take a look into the numpy package. It will be many times faster!</p>
<p>About your code: shouldn't your results array also be multidimensional? So in your inner (per-row) loop you should be appending to a row, which you then append to your results matrix in your outer loop.</p>
<p>Try it with a small amount of data (e.g. 10 x 10 x 4) to learn from, but after that switch to numpy as soon as you can...</p>
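<p>A sketch of that nested-append structure on a small array, using Python 3's <code>zip</code> in place of <code>itertools.izip</code>; <code>some_function</code> here is a stand-in cosine similarity, since the original function isn't shown:</p>

```python
import numpy as np

array_0 = np.random.rand(10, 10, 4)
array_1 = np.random.rand(10, 10, 4)

def some_function(p0, p1):
    # Stand-in for the real per-pixel function: cosine similarity
    return np.dot(p0, p1) / (np.linalg.norm(p0) * np.linalg.norm(p1))

results_array = []
for rows_0, rows_1 in zip(array_0, array_1):
    row_results = []                        # one row of the result matrix
    for pixels_0, pixels_1 in zip(rows_0, rows_1):
        row_results.append(some_function(pixels_0, pixels_1))
    results_array.append(row_results)       # append the finished row
results_array = np.array(results_array)     # shape (10, 10)
```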
| 0 | 2016-08-04T12:40:42Z | [
"python",
"arrays",
"for-loop",
"multidimensional-array",
"append"
] |
Iterating on data in two 3D arrays python | 38,767,421 | <p>I'm trying to perform a number of functions to get some results from a set of satellite imagery (in the example case I am performing similarity functions). I first intended to iterate through all the pixels simultaneously, each containing 4 numbers, then calculate a value for each one based on these two pixel values and write it to an array, e.g. scipy.spatial.distance.correlation(pixels_0, pixels_1).</p>
<p>The issue is that when I run this loop I can't get it to save to a 1000x1000 array with a value for each pixel.</p>
<pre><code>array_0 = # some array with dimensions(1000, 1000, 4)
array_1 = # some array with dimensions(1000, 1000, 4)
result_array = []
for rows_0, rows_1 in itertools.izip(array_0, array_1):
for pixels_0, pixels_1 in itertools.izip(rows_0, rows_1):
results = some_function(pixels_0, pixels_1)
print results
>>> # successfully prints desired results
results_array.append(results)
>>> # unsuccessful in creating the desired array
</code></pre>
<p>I am getting the results I want printed down the run window, but I don't know how to put them back into an array which I could manipulate in a similar manner. Are my for loops the issue, or is this a simple issue with appending back to arrays? Any explanation on speeding it up would also be great, as I'm very new to python and programming altogether. </p>
<pre><code>a = np.random.rand(10, 10, 4)
b = np.random.rand(10, 10, 4)
def dotprod(T0, T1):
return np.dot(T0, T1)/(np.linalg.norm(T0)*np.linalg.norm(T1))
results =dotprod(a.flatten(), b.flatten())
results = results.reshape(a.shape)
</code></pre>
<p>This now causes ValueError: total size of new array must be unchanged,
and when printing the first results value I receive only one number. Is this the fault of my own poorly constructed function or in how I am using numpy?</p>
| 0 | 2016-08-04T12:31:03Z | 38,767,895 | <p>The best way is to use <code>Numpy</code> for your task. You should think in vectors. And you should write your <code>some_function()</code>to work in a vectorized manner. Here is an example:</p>
<pre><code>array_0 = np.random.rand(1000,1000,4)
array_1 = np.random.rand(1000,1000,4)
results = some_function(array_0.flatten(), array_1.flatten()) ## this will be (1000*1000*4 X 1)
results = results.reshape(array_0.shape) ## reshaping to make it the way you want it.
</code></pre>
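<p>Note that flattening both arrays makes <code>np.dot</code> return a single scalar, which is exactly why the reshape in the question's edit raises <code>ValueError: total size of new array must be unchanged</code>. If the goal is one similarity value per pixel, the computation can instead be vectorized over the last axis (a sketch using <code>einsum</code>, not the answer's original code):</p>

```python
import numpy as np

a = np.random.rand(10, 10, 4)
b = np.random.rand(10, 10, 4)

num = np.einsum('ijk,ijk->ij', a, b)                      # per-pixel dot products
denom = np.linalg.norm(a, axis=2) * np.linalg.norm(b, axis=2)
results = num / denom                                     # shape (10, 10)
```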
| 0 | 2016-08-04T12:52:51Z | [
"python",
"arrays",
"for-loop",
"multidimensional-array",
"append"
] |
Scikit-Learn- How to add an 'unclassified' category? | 38,767,481 | <p>I am using Scikit-Learn to classify texts (in my case tweets) using LinearSVC. Is there a way to classify texts as unclassified when they are a poor fit for any of the categories defined in the training set? For example, if I have categories for sport, politics and cinema and attempt to predict the classification of a tweet about computing, it should remain unclassified. </p>
| 1 | 2016-08-04T12:34:32Z | 38,788,040 | <p>In the supervised learning approach as it is, you cannot add extra category.</p>
<p>Therefore I would use a heuristic. Predict a probability for each category; then, if all 4 (or at least 3) probabilities are approximately equal, you can say that the sample is "unknown".
For this approach LinearSVC, or any other type of Support Vector Classifier, is ill-suited because it does not naturally give you probabilities. Another classifier (Logistic Regression, Bayes, Trees, Forests) would be better</p>
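<p>A minimal sketch of the rejection heuristic itself. With scikit-learn, <code>probs</code> would come from something like <code>clf.predict_proba(X)</code> on a probabilistic classifier; the threshold and class names here are purely illustrative:</p>

```python
def label_with_reject(probs, classes, threshold=0.5):
    """Return the most likely class, or 'unclassified' if nothing is confident."""
    best = max(probs)
    if best < threshold:
        return 'unclassified'            # no category is a clear winner
    return classes[probs.index(best)]

classes = ['sport', 'politics', 'cinema']
print(label_with_reject([0.70, 0.20, 0.10], classes))  # 'sport'
print(label_with_reject([0.36, 0.33, 0.31], classes))  # 'unclassified'
```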
| 1 | 2016-08-05T11:22:32Z | [
"python",
"scikit-learn",
"text-classification"
] |
Could not save product object to database | 38,767,499 | <p>I get an error while trying to post store, product, category and merchant data. The error is <strong>ValueError at /api/stores/create/
save() prohibited to prevent data loss due to unsaved related object 'product'.</strong></p>
<p><strong>The code</strong></p>
<pre><code>class Store(models.Model):
merchant = models.ForeignKey(User)
name_of_legal_entity = models.CharField(max_length=250)
pan_number = models.CharField(max_length=20)
registered_office_address = models.CharField(max_length=200)
name_of_store = models.CharField(max_length=100)
store_off_day = MultiSelectField(choices=DAY, max_length=7, default='Sat')
store_categories = models.ManyToManyField('StoreCategory',blank=True)
class Product(models.Model):
store = models.ForeignKey(Store)
image = models.ForeignKey('ProductImage',blank=True,null=True)
name_of_product = models.CharField(max_length=120)
description = models.TextField(blank=True, null=True)
price = models.DecimalField(decimal_places=2, max_digits=20)
active = models.BooleanField(default=True)
class ProductImage(models.Model):
image = models.ImageField(upload_to='products/images/')
@property
def imageName(self):
return str(os.path.basename(self.image.name))
class StoreCategory(models.Model):
product = models.ForeignKey(Product,null=True, on_delete=models.CASCADE,related_name="store_category")
store_category = models.CharField(choices=STORE_CATEGORIES, default='GROCERY', max_length=10)
</code></pre>
<p><strong>Serializers.py</strong></p>
<pre><code>class ProductSerializers(ModelSerializer):
image = ProductImageSerializer()
class Meta:
model = Product
fields=('id','image','name_of_product','description','price','active',)
class StoreCategorySerializer(ModelSerializer):
product = ProductSerializers()
class Meta:
model = StoreCategory
class StoreCreateSerializer(ModelSerializer):
store_categories = StoreCategorySerializer()
merchant = UserSerializer()
class Meta:
model = Store
fields=("id",
"merchant",
"store_categories",
"name_of_legal_entity",
"pan_number",
"registered_office_address",
"name_of_store",
"store_contact_number",
"store_long",
"store_lat",
"store_start_time",
"store_end_time",
"store_off_day",
)
def create(self,validated_data):
store_categories_data = validated_data.pop('store_categories')
merchant_data = validated_data.pop('merchant')
for merchantKey, merchantVal in merchant_data.items():
try:
merchant,created = User.objects.get_or_create(username=merchantVal)
print('merchant',merchant)
print(type(merchant))
validated_data['merchant']=merchant
store = Store.objects.create(**validated_data)
image = store_categories_data["product"].pop("image")
image_instance = ProductImage(**image)
product = store_categories_data["product"]
product_instance = Product(
image=image_instance,
name_of_product=product['name_of_product'],
description=product['description'],
price=product['price'],
active=product['active']
)
store_category = store_categories_data['store_category']
print('store_category',store_category)
store_category = StoreCategory(product=product_instance, store_category=store_category)
store_category.product.store = store
store_category.save()
return store
except User.DoesNotExist:
raise NotFound('not found')
</code></pre>
| 0 | 2016-08-04T12:35:28Z | 38,767,697 | <p>Use object.save(commit=False) thing.
<a href="https://docs.djangoproject.com/en/1.9/topics/forms/modelforms/#the-save-method" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/forms/modelforms/#the-save-method</a> this documentation will help it. </p>
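<p>As for the specific <code>ValueError</code> in the question: it is raised because unsaved instances are assigned to foreign keys. A hedged, Django-dependent sketch (not runnable standalone) of the save order inside <code>create()</code> that avoids it, using the question's own names; <code>store_category_value</code> stands in for the choice string:</p>

```python
# Save the leaf objects first so each has a primary key before it is
# referenced by a ForeignKey.
image_instance.save()

product_instance = Product(
    store=store,              # store was already created and saved above
    image=image_instance,     # now has a pk
    name_of_product=product['name_of_product'],
    description=product['description'],
    price=product['price'],
    active=product['active'],
)
product_instance.save()       # give the product a pk...

store_category = StoreCategory(
    product=product_instance,     # ...before StoreCategory references it
    store_category=store_category_value,
)
store_category.save()
```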
| 0 | 2016-08-04T12:44:24Z | [
"python",
"django",
"python-3.x",
"django-rest-framework",
"django-1.9"
] |
Find out result size of unpacked archive without unpacking it. Or stop unpacking when a certain size is exceeded | 38,767,584 | <p>I need to determine the result size of an unpacked archive without unpacking it, to prevent huge archives from being stored on my server.
Alternatively, start unpacking and stop once a certain size is exceeded.
I have already tried the lib <code>pyunpack</code>, but it only unpacks archives.
I need to validate these archive extensions:
<code>rar</code>, <code>zip</code>, <code>7z</code>, <code>tar</code>.
Maybe I can do it using some Linux tools, calling them via <code>os.system</code>.</p>
| 0 | 2016-08-04T12:39:28Z | 38,768,586 | <p>I can't give you a native python answer, but, if you need to fall back on <code>os.system</code>, the command-line utilities for handling all four formats have switches which can be used to list the contents of the archive including the size of each file and possibly a total size:</p>
<ul>
<li><code>rar</code>: <code>unrar l FILENAME.rar</code> lists information on each file and the total size.</li>
<li><code>zip</code>: <code>unzip -l FILENAME.zip</code> lists size, timestamp, and name of each file, along with the total size.</li>
<li><code>7z</code>: <code>7z l FILENAME.7z</code> lists the details of each file and the total size.</li>
<li><code>tar</code>: <code>tar -tvf FILENAME.tar</code> or <code>tar -tvzf FILENAME.tgz</code> (or <code>.tar.gz</code>) lists details of each file including file size. No total size is provided, so you'll need to add them up yourself.</li>
</ul>
<p>If you're looking at native python libraries, you can also check for whether they have a "list" or "test" function. Those are the terms used by the command-line tools to describe the switches I mentioned above, so the same names are likely to have been used by the library authors.</p>
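<p>For <code>zip</code> and <code>tar</code> the standard library can report member sizes directly from the archive metadata; for <code>rar</code> and <code>7z</code> you would need third-party packages (e.g. <code>rarfile</code>, <code>py7zr</code>), which expose similar listing APIs. A sketch for the stdlib-supported formats:</p>

```python
import tarfile
import zipfile

def unpacked_size(path):
    """Total uncompressed size in bytes, computed from archive metadata only."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            return sum(info.file_size for info in zf.infolist())
    if tarfile.is_tarfile(path):
        with tarfile.open(path) as tf:
            return sum(member.size for member in tf.getmembers())
    raise ValueError('unsupported archive: %s' % path)
```

<p>If the total exceeds your limit you can reject the upload before extracting anything. Note this trusts the archive's own headers, so a malformed archive can still lie about its sizes.</p>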
| 0 | 2016-08-04T13:23:00Z | [
"python",
"linux",
"archive",
"7zip",
"rar"
] |
What exactly happens on the computer when multiple requests come to the webserver serving a django or pyramid application? | 38,767,616 | <p>I am having a hard time trying to figure out the big picture of how multiple requests are handled by the <code>uwsgi</code> server with a <code>django</code> or <code>pyramid</code> application. </p>
<p><strong>My understanding at the moment is this:</strong>
When multiple <em>http requests</em> are sent to the <em>uwsgi server</em> concurrently, the server creates <em>separate processes or threads</em> (copies of itself) for every request (or assigns the request to them), and <em>every</em> <em>process/thread</em> loads the web application's code (say <strong>django</strong> or <strong>pyramid</strong>) into memory, executes it and returns the <em>response</em>. In between, every copy of the code can access the <em>session</em>, <em>cache</em> or <em>database</em>. There is usually a separate database server, and it can also handle concurrent requests to the database. </p>
<p><strong>So here some questions I am fighting with.</strong> </p>
<ol>
<li>Is my above understanding correct or not? </li>
<li>Are the copies of code interact with each other somehow or are they wholly separated from each other?</li>
<li>What about the session or cache? Are they shared between them or are they local to each copy? </li>
<li>How are they created: by the webserver or by copies of python code? </li>
<li>How are responses returned to the requesters: by each process concurrently or are they put to some kind of queue and sent synchroniously?</li>
</ol>
<p>I have googled these questions and have found very interesting answers on <em>StackOverflow</em> but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of <strong>django</strong> or <strong>pyramid</strong> with uwsgi or whatever webserver.</p>
<p><em>Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:)</em></p>
| 1 | 2016-08-04T12:40:57Z | 38,767,725 | <p>The power and weakness of webservers is that they are in principle stateless. This enables them to be massively parallel. So indeed for each page request a different thread may be spawned. Wether or not this indeed happens depends on the configuration settings of you webserver. There's also a cost to spawning many threads, so if possible threads are reused from a thread pool.</p>
<p>Almost all serious webservers have a page cache. So if the same page is requested multiple times, it can be retrieved from the cache. In addition, browsers do their own caching. A webserver has to be clever about what to cache and what not. Static pages aren't a big problem, although they may be replaced, in which case it is quite confusing to still get the old page served because of the cache.</p>
<p>One way to defeat the cache is by adding (dummy) parameters to the page request.</p>
<p>The statelessness of the web was initially welcomed as a necessity to achieve scalability, where webpages of busy sites could even be served concurrently from different servers at nearby or remote locations.</p>
<p>However the trend is to have stateful apps. State can be maintained on the browser, easing the burden on the server. If it's maintained on the server it requires the server to know 'who's talking'. One way to do this is saving and recognizing cookies (small identifiable bits of data) on the client.</p>
<p>For databases the story is a bit different. As soon as anything gets stored that relates to a particular user, the application is in principle stateful. While there's no conceptual difference between retaining state on disk and in RAM memory, traditionally statefulness was left to the database, which in turn used thread pools and load balancing to do its job efficiently.</p>
<p>With the advent of very large internet shops like amazon and google, mandatory disk access to achieve statefulness created a performance problem. The answer was in-memory databases. While they may be accessed traditionally using e.g. SQL, they offer much more flexibility in the way data is stored conceptually.</p>
<p>A type of database that enjoys growing popularity is the persistent object store. With this database, while the distinction can still be made formally, the boundary between webserver and database is blurred. Both have their data in RAM (but can swap to disk if needed), and both work with objects rather than flat records as in SQL tables. These objects can be interconnected in complex ways.</p>
<p>In short there's an explosion of innovative storage / thread pooling / caching / persistence / redundancy / synchronisation technology, driving what has become popularly known as 'the cloud'.</p>
| 2 | 2016-08-04T12:45:31Z | [
"python",
"django",
"multithreading",
"webserver",
"pyramid"
] |
What exactly happens on the computer when multiple requests come to the webserver serving a django or pyramid application? | 38,767,616 | <p>I am having a hard time trying to figure out the big picture of how multiple requests are handled by the <code>uwsgi</code> server with a <code>django</code> or <code>pyramid</code> application. </p>
<p><strong>My understanding at the moment is this:</strong>
When multiple <em>http requests</em> are sent to the <em>uwsgi server</em> concurrently, the server creates <em>separate processes or threads</em> (copies of itself) for every request (or assigns the request to them), and <em>every</em> <em>process/thread</em> loads the web application's code (say <strong>django</strong> or <strong>pyramid</strong>) into memory, executes it and returns the <em>response</em>. In between, every copy of the code can access the <em>session</em>, <em>cache</em> or <em>database</em>. There is usually a separate database server, and it can also handle concurrent requests to the database. </p>
<p><strong>So here some questions I am fighting with.</strong> </p>
<ol>
<li>Is my above understanding correct or not? </li>
<li>Are the copies of code interact with each other somehow or are they wholly separated from each other?</li>
<li>What about the session or cache? Are they shared between them or are they local to each copy? </li>
<li>How are they created: by the webserver or by copies of python code? </li>
<li>How are responses returned to the requesters: by each process concurrently or are they put to some kind of queue and sent synchroniously?</li>
</ol>
<p>I have googled these questions and have found very interesting answers on <em>StackOverflow</em> but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of <strong>django</strong> or <strong>pyramid</strong> with uwsgi or whatever webserver.</p>
<p><em>Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:)</em></p>
| 1 | 2016-08-04T12:40:57Z | 38,782,897 | <p>There's no magic in pyramid or django that gets you past process boundaries. The answers depend entirely on the particular server you've selected and the settings you've selected. For example, uwsgi has the ability to run multiple threads and multiple processes. If uwsig spins up multiple processes then they will each have their own copies of data which are not shared unless you took the time to create some IPC (this is why you should keep state in a third party like a database instead of in-memory objects which are not shared across processes). Each process initializes a WSGI object (let's call it <code>app</code>) which the server calls via <code>body_iter = app(environ, start_response)</code>. This <code>app</code> object is shared across all of the threads in the process and is invoked concurrently, thus it needs to be threadsafe (usually the structures the <code>app</code> uses are either threadlocal or readonly to deal with this, for example a connection pool to the database).</p>
<p>In general the answers to your questions are that things happen concurrently, and objects may or may not be shared based on your server model but in general you should take anything that you want to be shared and store it somewhere that can handle concurrency properly (a database).</p>
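<p>To make the <code>app(environ, start_response)</code> contract concrete, here is a minimal WSGI callable (a toy sketch, independent of uwsgi, django or pyramid):</p>

```python
def app(environ, start_response):
    # The server may call this from many threads at once, so anything it
    # touches besides its arguments must be thread-safe (or read-only).
    body = ('Hello, you requested %s' % environ.get('PATH_INFO', '/')).encode()
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```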
| 2 | 2016-08-05T06:50:02Z | [
"python",
"django",
"multithreading",
"webserver",
"pyramid"
] |
How can I pack different lists and dicts into a single JSON object in python | 38,767,681 | <p>I have a set of different lists and dictionaries and variables.
Can I pack them all into one JSON object?
How would you go about it?
Call json.dumps() on each of them and then somehow put them together?</p>
<p>(My purpose behind this is to make the data available to my JavaScript program via the npm python shell.)</p>
<p>thank you</p>
| 0 | 2016-08-04T12:43:32Z | 38,767,890 | <p>Just put all your individual lists and dicts into one big dictionary and <code>json.dumps</code> that dictionary. You could use the variable names as keys, or just put them all one after the other in a list.</p>
<pre><code>>>> a_list = [1,2,3]
>>> a_dict = {"foo": 42, "bar": [4,5,6]}
>>> import json
>>> everything = {"a_list": a_list, "a_dict": a_dict}
>>> json.dumps(everything)
'{"a_list": [1, 2, 3], "a_dict": {"foo": 42, "bar": [4, 5, 6]}}'
</code></pre>
| 0 | 2016-08-04T12:52:35Z | [
"python",
"json",
"list",
"dictionary"
] |
Showing dialog in non-main thread | 38,767,706 | <p>I have a non-GUI program that sometimes needs to display a dialog to user.
The problem is that my program runs in an infinite loop and when I show a dialog in this loop the execution of program halts until the dialog is dismissed and this is not wanted because my program loop is a background service that must be responsive all time. So I tried running the dialog showing code in another thread but it doesn't work properly: The dialog is shown only one time/the first time and subsequent calls show nothing.</p>
<p>How can I solve this problem?</p>
<p>This is a sample code for you to test the situation:</p>
<pre><code>import tkinter
import tkinter.messagebox
import threading
import time
def messageBox():
root=tkinter.Tk()
root.withdraw()
tkinter.messagebox.showinfo('dialog', 'test')
root.destroy()
while True:
threading.Thread(target=messageBox).start()
time.sleep(3)
</code></pre>
<p>I use Python 3.3.4 on Windows XP</p>
| 0 | 2016-08-04T12:44:43Z | 38,769,568 | <p>My suggestion is to make your dialog a separate script, and use the subprocess module to display the dialog in a separate process.</p>
| 1 | 2016-08-04T14:02:49Z | [
"python",
"multithreading",
"tkinter",
"modal-dialog"
] |
Issues with displaying time results in Python | 38,767,768 | <p>I have written code to take in a running pace value (min/km), convert it to speed (km/hr), and then, depending on the slope gradient and whether the direction of travel is uphill or downhill, calculate the speed lost or gained (km/hr). The new running speed is then displayed along with the new running pace and the time your route is altered by. </p>
<p>The issue is that when I input a pace such as 3:50 (min/km) with an uphill slope of 1%, the new running pace is 3:60 (min/km). How do I get the script to tick over to 4:00 in this case? Also, if 3:55 (min/km) is input, the new running pace given is 4:5 (min/km) when it should read 4:05 (min/km). How do I edit this?</p>
<p>The script is: </p>
<pre><code> import math
print('Q1')
SurveyPace = input("Running Pace (min/km): \n "). split(":")
SurveyPace = float(SurveyPace[0])*60 + float(SurveyPace[1])
Speed = 3600/SurveyPace
print("Original running speed =","%.2f" % round(Speed,2), 'km/hr')
print("--------------------------------------------------------")
print('Q2')
SlopeDirection = int(input('For Uphill press 1 \nFor Downhill press 2 \n '))
print("--------------------------------------------------------")
print('Q3')
SlopeGradient = float(input('Percentage gradient(without the % symbol)?\n '))
print("--------------------------------------------------------")
print('CALCULATED RESULTS')
print("--------------------------------------------------------")
if SlopeDirection == 1:
Change = - 0.65 * SlopeGradient
if SlopeDirection == 2:
Change = + 0.35 * SlopeGradient
print ('This route alters your speed by \n', Change,'km/hr')
print("--------------------------------------------------------")
AdjustedSpeed = Speed + Change
AdjustedPace = 3600/AdjustedSpeed
PaceSecs = round(AdjustedPace % 60)
PaceMins = math.floor(AdjustedPace/60)
print("New running speed is \n","%.2f" % round(AdjustedSpeed,2), 'km/hr')
print("--------------------------------------------------------")
print("New running pace is \n", str(PaceMins) + ":" + str(PaceSecs), 'min/km')
print("--------------------------------------------------------")
print("This route alters your pace by \n", int(PaceSecs + (PaceMins*60)) - SurveyPace, "sec/km") #Prints the time change incured
print("--------------------------------------------------------")
</code></pre>
<p>Thanks</p>
| -1 | 2016-08-04T12:47:02Z | 38,768,402 | <p>You can do this with the built-in function <a href="https://docs.python.org/2/library/functions.html#divmod" rel="nofollow"><code>divmod</code></a>:</p>
<pre><code># Round the AdjustedPace to seconds
AdjustedPace = round(3600/AdjustedSpeed)
minutes, seconds = divmod(AdjustedPace, 60)
print(minutes)
print(seconds)
</code></pre>
<p>This will lead to:</p>
<pre><code>#Pace = 3:50
#4
#0
#Pace = 3:55
#4
#5
</code></pre>
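<p>The zero-padding the question also asks about (showing 4:05 rather than 4:5) is then a formatting step; the speed value here is just an example:</p>

```python
adjusted_speed = 14.8                                # km/hr, example value
minutes, seconds = divmod(round(3600 / adjusted_speed), 60)
pace = '{}:{:02d}'.format(minutes, seconds)          # pad seconds to 2 digits
print(pace)                                          # 4:03
```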
| 0 | 2016-08-04T13:14:51Z | [
"python",
"python-3.x"
] |
Issues with displaying time results in Python | 38,767,768 | <p>I have written code to take in a running pace value (min/km), convert it to speed (km/hr), and then, depending on the slope gradient and whether the direction of travel is uphill or downhill, calculate the speed lost or gained (km/hr). The new running speed is then displayed along with the new running pace and the time your route is altered by. </p>
<p>The issue is that when I input a pace such as 3:50 (min/km) with an uphill slope of 1%, the new running pace is 3:60 (min/km). How do I get the script to tick over to 4:00 in this case? Also, if 3:55 (min/km) is input, the new running pace given is 4:5 (min/km) when it should read 4:05 (min/km). How do I edit this?</p>
<p>The script is: </p>
<pre><code> import math
print('Q1')
SurveyPace = input("Running Pace (min/km): \n "). split(":")
SurveyPace = float(SurveyPace[0])*60 + float(SurveyPace[1])
Speed = 3600/SurveyPace
print("Original running speed =","%.2f" % round(Speed,2), 'km/hr')
print("--------------------------------------------------------")
print('Q2')
SlopeDirection = int(input('For Uphill press 1 \nFor Downhill press 2 \n '))
print("--------------------------------------------------------")
print('Q3')
SlopeGradient = float(input('Percentage gradient(without the % symbol)?\n '))
print("--------------------------------------------------------")
print('CALCULATED RESULTS')
print("--------------------------------------------------------")
if SlopeDirection == 1:
Change = - 0.65 * SlopeGradient
if SlopeDirection == 2:
Change = + 0.35 * SlopeGradient
print ('This route alters your speed by \n', Change,'km/hr')
print("--------------------------------------------------------")
AdjustedSpeed = Speed + Change
AdjustedPace = 3600/AdjustedSpeed
PaceSecs = round(AdjustedPace % 60)
PaceMins = math.floor(AdjustedPace/60)
print("New running speed is \n","%.2f" % round(AdjustedSpeed,2), 'km/hr')
print("--------------------------------------------------------")
print("New running pace is \n", str(PaceMins) + ":" + str(PaceSecs), 'min/km')
print("--------------------------------------------------------")
print("This route alters your pace by \n", int(PaceSecs + (PaceMins*60)) - SurveyPace, "sec/km") #Prints the time change incured
print("--------------------------------------------------------")
</code></pre>
<p>Thanks</p>
| -1 | 2016-08-04T12:47:02Z | 38,768,845 | <p>I would do this with <code>timedelta</code> objects from datetime:</p>
<pre><code>import datetime
inp = raw_input('Enter your pace in minutes per km (min:km):')
mins, kms = inp.split(':')
time = datetime.timedelta(minutes=int(mins))
</code></pre>
<p>If you enter 60 minutes, for example, will give you:</p>
<pre><code>> time
datetime.timedelta(0, 3600)
</code></pre>
<p>And then you can perform math operations on it and it stays correct:</p>
<pre><code>> time / 2
datetime.timedelta(0, 1800)
</code></pre>
<p>Or if you want minutes just divide it by 60, hours divide it by 3600. You can also add and subtract timedelta objects from each other, or from datetime objects if you want timestamps. Or if your divisor leaves a remainder:</p>
<pre><code>> new = time / 17
> new
datetime.timedelta(0, 3600)
> new.seconds
200
> new.microseconds
764706
</code></pre>
<p>Which you could then use to round if you wanted. It's a good way to make sure your time always stays accurate.</p>
| 0 | 2016-08-04T13:33:42Z | [
"python",
"python-3.x"
] |
Pyplot graphing separate pie charts | 38,767,813 | <p>I want my program to output separate pie charts but when I run it, they end up on top of each other.
Here is the code I'm running: </p>
<pre><code>plt.pie(sizes, labels=labels, autopct = '%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.suptitle(title)
plt.show()
</code></pre>
<p>However I call this functions from another function</p>
<pre><code>for x in y:
if x.isupper() == True:
result = chart(x)
return result
</code></pre>
<p>And then that would call the function above and I would like it to chart separate pie charts but they call end up on top of each other. </p>
| 0 | 2016-08-04T12:48:49Z | 38,768,032 | <p>Can you use subplots? You can add a subplot to the figure by using fig.add_subplot(y,x,n), with y the total plots in the y direction, x the plots in the x direction and n the current plot.</p>
<p>For example, this code:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
# First plot
fig.add_subplot(1,2,1)
plt.axis('equal')
plt.pie(range(0,5))
plt.title("Plot 1")
# Second plot
fig.add_subplot(1,2,2)
plt.axis('equal')
plt.pie(range(0,10))
plt.title("Plot 2")
plt.show()
</code></pre>
<p>Will create 2 different plots on the same figure object, as shown in <a href="http://i.stack.imgur.com/yQlwg.png" rel="nofollow">this image</a>.</p>
| 0 | 2016-08-04T12:58:17Z | [
"python",
"matplotlib",
"pie-chart"
] |
Creating a 2D array based upon a CSV | 38,767,866 | <p>I have a CSV file with millions of lines in the format of the below:</p>
<pre><code>start, finish,count;
101,101,10;
101,103,2;
101,104,8;
102,103,5;
</code></pre>
<p>So we have a start location, an end location and a count of the number of people who make that journey.</p>
<p>What I'd like to do is put this into a 'table-style' matrix with all the start locations running along the top, all the end locations running down the side and in the body of the matrix have a sum of all the counts that sit within that intersect. </p>
<p>So far I have the CSV file cleaned and imported and have the start and end locations stored as vectors, however I'm unsure how to proceed when forming the body of the matrix, can anyone help?</p>
<p>Thank you.</p>
<p>EDIT: I would like it to look as follows:</p>
<pre><code> 101,102;
101,10,0;
103,2,5;
104,8,0;
</code></pre>
 | 2 | 2016-08-04T12:51:43Z | 38,768,488 | <p>You said you have millions of lines, so I don't know whether this will be effective or not, but if you don't run into memory issues a pandas dataframe is the way to go:</p>
<pre><code>import pandas as pd
df = pd.read_csv('inputfile.csv')
df = df.groupby(['start','finish']).agg({'count':sum}).reset_index()
# Create Pivot table
df_out = df.pivot(index='finish',columns = 'start',values='count')
# Write Output
df_out.rename_axis(None).to_csv('output.csv')
</code></pre>
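<p>The desired output in the question shows zeros rather than blanks, so <code>fillna</code> can be chained onto the pivot. A sketch with the question's sample rows (assumes pandas is installed):</p>

```python
import pandas as pd

# The question's sample data, after summing duplicate (start, finish) pairs
df = pd.DataFrame({'start':  [101, 101, 101, 102],
                   'finish': [101, 103, 104, 103],
                   'count':  [10, 2, 8, 5]})

out = (df.pivot(index='finish', columns='start', values='count')
         .fillna(0)            # missing start/finish combinations become 0
         .astype(int))
print(out)
```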
| 0 | 2016-08-04T13:18:43Z | [
"python",
"arrays",
"pandas",
"numpy",
"matrix"
] |
Creating a 2D array based upon a CSV | 38,767,866 | <p>I have a CSV file with millions of lines in the format of the below:</p>
<pre><code>start, finish,count;
101,101,10;
101,103,2;
101,104,8;
102,103,5;
</code></pre>
<p>So we have a start location, an end location and a count of the number of people who make that journey.</p>
<p>What I'd like to do is put this into a 'table-style' matrix with all the start locations running along the top, all the end locations running down the side and in the body of the matrix have a sum of all the counts that sit within that intersect. </p>
<p>So far I have the CSV file cleaned and imported and have the start and end locations stored as vectors, however I'm unsure how to proceed when forming the body of the matrix, can anyone help?</p>
<p>Thank you.</p>
<p>EDIT: I would like it to look as follows:</p>
<pre><code> 101,102;
101,10,0;
103,2,5;
104,8,0;
</code></pre>
| 2 | 2016-08-04T12:51:43Z | 38,768,546 | <p>use <code>set_index</code> and <code>unstack</code></p>
<pre><code>df.set_index(['start', 'finish'])['count'].unstack(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/lCPm4.png" rel="nofollow"><img src="http://i.stack.imgur.com/lCPm4.png" alt="enter image description here"></a></p>
<hr>
<p>To save to csv</p>
<pre><code>print df.set_index(['start', 'finish'])['count'].unstack(0).rename_axis(None) \
.to_csv('myfilename.csv')
,101,102
101,10.0,
103,2.0,5.0
104,8.0,
</code></pre>
| 2 | 2016-08-04T13:21:08Z | [
"python",
"arrays",
"pandas",
"numpy",
"matrix"
] |
Creating a 2D array based upon a CSV | 38,767,866 | <p>I have a CSV file with millions of lines in the format of the below:</p>
<pre><code>start, finish,count;
101,101,10;
101,103,2;
101,104,8;
102,103,5;
</code></pre>
<p>So we have a start location, an end location and a count of the number of people who make that journey.</p>
<p>What I'd like to do is put this into a 'table-style' matrix with all the start locations running along the top, all the end locations running down the side and in the body of the matrix have a sum of all the counts that sit within that intersect. </p>
<p>So far I have the CSV file cleaned and imported and have the start and end locations stored as vectors, however I'm unsure how to proceed when forming the body of the matrix, can anyone help?</p>
<p>Thank you.</p>
<p>EDIT: I would like it to look as follows:</p>
<pre><code> 101,102;
101,10,0;
103,2,5;
104,8,0;
</code></pre>
| 2 | 2016-08-04T12:51:43Z | 38,769,482 | <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>pivot</code></a>:</p>
<pre><code>print (df.pivot(index='finish', columns='start', values='count'))
start 101 102
finish
101 10.0 NaN
103 2.0 5.0
104 8.0 NaN
</code></pre>
<p>If you need to remove the column and index names, use <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>):</p>
<pre><code>print (df.pivot(index='finish', columns='start', values='count')
.rename_axis(None)
.rename_axis(None, axis=1))
101 102
101 10.0 NaN
103 2.0 5.0
104 8.0 NaN
</code></pre>
| 0 | 2016-08-04T13:58:48Z | [
"python",
"arrays",
"pandas",
"numpy",
"matrix"
] |
Recursion Error: Maximum Recursion depth exceeded | 38,768,052 | <pre><code>from __future__ import print_function
import os, codecs, nltk.stem
english_stemmer = nltk.stem.SnowballStemmer('english')
for root, dirs, files in os.walk("/Users/Documents/corpus/source-document/test1"):
for file in files:
if file.endswith(".txt"):
posts = codecs.open(os.path.join(root,file),"r", "utf-8-sig")
from sklearn.feature_extraction.text import CountVectorizer
class StemmedCountVectorizer(CountVectorizer):
def build_analyzer(self):
analyzer = super(StemmedCountVectorizer, self.build_analyzer())
return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))
vectorizer = StemmedCountVectorizer(min_df = 1, stop_words = 'english')
X_train = vectorizer.fit_transform(posts)
num_samples, num_features = X_train.shape
print("#samples: %d, #features: %d" % (num_samples, num_features)) #samples: 5, #features: 25
print(vectorizer.get_feature_names())
</code></pre>
<p>When I run the above code for all the text files contained in the directory, it throws the following error:
RecursionError: maximum recursion depth exceeded.</p>
<p>I tried to resolve the problem with sys.setrecursionlimit, but all in vain. When I provide a large value like 20000, the kernel crash error occurs.</p>
| 0 | 2016-08-04T12:59:34Z | 38,768,326 | <p>Your error is in <code>analyzer = super(StemmedCountVectorizer, self.build_analyzer())</code>: here you are calling the method <code>build_analyzer</code> inside the <code>super</code> call, which causes an infinite recursive loop. Change it to <code>analyzer = super(StemmedCountVectorizer, self).build_analyzer()</code></p>
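The difference is easier to see with sklearn taken out of the picture; this stripped-down base/subclass pair (hypothetical names) mirrors the corrected line:

```python
class Base(object):
    def build_analyzer(self):
        return lambda doc: doc.split()

class Stemmed(Base):
    def build_analyzer(self):
        # Correct: resolve super(Stemmed, self) first, *then* call the method.
        # super(Stemmed, self.build_analyzer()) would recurse forever instead.
        analyzer = super(Stemmed, self).build_analyzer()
        return lambda doc: [w.lower() for w in analyzer(doc)]

tokens = Stemmed().build_analyzer()("Hello World")
```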
| 1 | 2016-08-04T13:11:12Z | [
"python",
"scikit-learn",
"nltk",
"stemming"
] |
Text after <br> dissapears after replacing br tags | 38,768,201 | <p>I'm scraping some data from websites and have encountered a problem using <code>BeautifulSoup</code> (<code>bs4</code>). I need to get the text of some elements, separated by anything (comma, space, etc.) that enables me to split the text in the order it appears. </p>
<p><code>text</code> attribute of <code>bs4.element.Tag</code> gives textual content. The problem is, I am getting the text concatenated, even if there is a <code><br></code> in between. I have no way of differentiating whether <code>OneTwo</code> is one word/sentence or multiple.</p>
<p>I am using <code>find_all</code> to find all <code><br></code> tags and I replace them with comma <code>,</code> so I can split the text by it. However, replacing <code>br</code> tags seems to remove text that follows the <code>br</code> tags.</p>
<p>Here is some code that reproduces the problem:</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup("""
<html>
<head>
</head>
<body>
<div>
One
<br>
Two
<br>
<br>
</div>
</body>
</html>
""".replace(' ', '').replace('\n', ''), "html.parser")
print soup.div.text
# Out: OneTwo
for br in soup.find_all('br'):
br.replace_with(',')
print soup.text.replace('\n', '')
# Out: One,
</code></pre>
<p>What I want it to print is <code>One,Two</code> or <code>One,Two,,</code> or something similar instead. How can I replace the <code>br</code> tags with a character, without removing other text in the process?</p>
| 0 | 2016-08-04T13:05:53Z | 38,768,938 | <p>Well, there probably are many ways to do this, but I wanted a clean solution that will work for real-world, possibly horrible html.</p>
<p>If someone comes looking for a solution to a similar problem, I stumbled upon one neat method, <code>insert</code>, which is exactly what I was looking for.</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup("""
<html>
<head>
</head>
<body>
<div>
One
<br>
Two
<br>
<br>
</div>
</body>
</html>
""".replace(' ', '').replace('\n', ''), "html.parser")
for br in soup.find_all('br'):
br.insert(0, ',')
print soup.text.replace('\n', '')
# Out: One,Two,,
</code></pre>
<hr>
<p><strong>Edit</strong></p>
<p>An even better solution, which Padraic Cunningham suggested, is to simply concatenate the text of the <code>br</code> to the replacement, which retains the original text.</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup("""
<html>
<head>
</head>
<body>
<div>
One
<br>
Two
<br>
<br>
</div>
</body>
</html>
""".replace(' ', '').replace('\n', ''), "html.parser")
for br in soup.find_all('br'):
br.replace_with(',' + br.text)
print soup.text.replace('\n', '')
# Out: One,Two
</code></pre>
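If bs4 is unavailable, a crude regex pass gives the same separators for simple markup like the snippet above; this is a swap to plain string substitution, not a general HTML parser, and the inline markup is hypothetical:

```python
import re

html = "<div>One<br>Two<br><br></div>"
text = re.sub(r"<br\s*/?>", ",", html)   # turn each <br> into a comma
text = re.sub(r"<[^>]+>", "", text)      # strip the remaining tags
```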
| 0 | 2016-08-04T13:37:16Z | [
"python",
"beautifulsoup",
"bs4"
] |
Python Tkinter Tabs and Canvas | 38,768,385 | <p>Objective:
I am trying to create a GUI with a portion of the screen having "tabs" (information displayed can be changed based on selected tab), and another portion constantly displaying the same thing.</p>
<pre><code>import ttk
import Tkinter
def demo():
#root = tk.Tk()
schedGraphics = Tkinter
root = schedGraphics.Tk()
root.title("Testing Bot")
universal_height = 606
canvas = schedGraphics.Canvas(root,width = 900, height = universal_height)
nb = ttk.Notebook(root)
# adding Frames as pages for the ttk.Notebook
# first page, which would get widgets gridded into it
page1 = ttk.Frame(nb,width = 300,height = universal_height)
# second page
page2 = ttk.Frame(nb,width = 300,height = universal_height)
nb.add(page1, text='One')
nb.add(page2, text='Two')
#
nb.grid()
day_label = schedGraphics.Label(page1, text="Day1:")
day_label.pack()
day_label.place(x=0, y=30)
day_label = schedGraphics.Label(page2, text="Day2:")
day_label.pack()
day_label.place(x=0, y=30)
canvas.create_rectangle(50,500,300,600,fill = "red")
canvas.grid()
root.mainloop()
if __name__ == "__main__":
demo()
</code></pre>
<p>Problems:</p>
<ol>
<li><p>In the current configuration the tabs are located in the MIDDLE of the screen not on the left side.</p></li>
<li><p>If I change canvas.grid() to canvas.pack() it doesn't actually open any window?</p></li>
<li><p>The rectangle on canvas does not appear!</p></li>
</ol>
<p>Thank you.</p>
| 1 | 2016-08-04T13:14:03Z | 38,771,025 | <ol>
<li><p>To do this, when gridding your notebook, pass the argument <code>column</code> and choose 0, so that it will be located at the far left, like this:</p>
<p><code>nb.grid(column=0)</code></p></li>
<li><p>That's because you have to choose, for your tkinter app, between <code>.grid()</code> and <code>.pack()</code>: the two are not compatible within the same container. As you used <code>.grid()</code> before, the window won't open and a <code>TclError</code> pops up.</p></li>
<li><p>Your canvas is in fact hidden under the notebook. To fix that, set the <code>row</code> argument when using <code>grid</code> to 0, so that it is at the top, like this:</p>
<p><code>canvas.grid(column=1, row=0)</code></p></li>
</ol>
<p>Final code:</p>
<pre><code>import Tkinter
import ttk
def demo():
#root = tk.Tk()
schedGraphics = Tkinter
root = schedGraphics.Tk()
root.title("Testing Bot")
universal_height = 606
nb = ttk.Notebook(root)
# adding Frames as pages for the ttk.Notebook
# first page, which would get widgets gridded into it
page1 = ttk.Frame(nb, width= 300,height = universal_height)
# second page
page2 = ttk.Frame(nb,width = 300,height = universal_height)
nb.add(page1, text='One')
nb.add(page2, text='Two')
nb.grid(column=0)
day_label = schedGraphics.Label(page1, text="Day1:")
day_label.pack()
day_label.place(x=0, y=30)
day_label = schedGraphics.Label(page2, text="Day2:")
day_label.pack()
day_label.place(x=0, y=30)
canvas = schedGraphics.Canvas(root, width=900, height=universal_height)
canvas.create_rectangle(50, 500, 300, 600, fill="red")
canvas.grid(column=1, row=0)
root.mainloop()
if __name__ == "__main__":
demo()
</code></pre>
<p>I hope this helps!</p>
| 0 | 2016-08-04T15:04:29Z | [
"python",
"tkinter",
"ttk"
] |
Keeping domain of Email but removing TLD | 38,768,446 | <p>I am using python and I want to be able to keep the domain of the email but remove the 'com', or '.co.uk', or 'us', etc</p>
<p>So basically if I have an email, say random@gmail.com. I want to have only @gmail left in string format, but I want to do this for any email. So random@yahoo.com would leave me with @yahoo, or random@aol.uk, would leave me with @aol</p>
<p>so far I have:</p>
<pre><code> domain = re.search("@[\w.]+", val)
domain = domain.group()
</code></pre>
<p>That returns the domain but with the TLD . So @gmail.com, or @aol.co</p>
| 2 | 2016-08-04T13:16:49Z | 38,768,602 | <p>If you do</p>
<pre><code>val = string.split('@')[1].split('.')[0]
</code></pre>
<p>Change 'string' to your email string variable name.</p>
<p>This will take everything after the '@' symbol, then everything up to the first '.'</p>
<p>Using on 'random@gmail.com' gives 'gmail'</p>
<p>If you require the '@' symbol you can add it back with;</p>
<pre><code>full = '@' + val
</code></pre>
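Wrapped into a small helper (a hypothetical name), the same two splits handle all of the question's examples:

```python
def domain_prefix(email):
    # Everything after '@', then everything before the first '.'
    return '@' + email.split('@')[1].split('.')[0]

domain_prefix('random@gmail.com')  # '@gmail'
```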
| 2 | 2016-08-04T13:23:35Z | [
"python",
"pandas",
"string-parsing"
] |
Keeping domain of Email but removing TLD | 38,768,446 | <p>I am using python and I want to be able to keep the domain of the email but remove the 'com', or '.co.uk', or 'us', etc</p>
<p>So basically if I have an email, say random@gmail.com. I want to have only @gmail left in string format, but I want to do this for any email. So random@yahoo.com would leave me with @yahoo, or random@aol.uk, would leave me with @aol</p>
<p>so far I have:</p>
<pre><code> domain = re.search("@[\w.]+", val)
domain = domain.group()
</code></pre>
<p>That returns the domain but with the TLD . So @gmail.com, or @aol.co</p>
| 2 | 2016-08-04T13:16:49Z | 38,768,651 | <p>First split on "@", take the part after "@". Then split on "." and take the first part</p>
<pre><code>email = "this.that@gmail.com.x.y"
'@' + email.split("@")[1].split(".")[0]
'@gmail'
</code></pre>
| 2 | 2016-08-04T13:25:39Z | [
"python",
"pandas",
"string-parsing"
] |
Keeping domain of Email but removing TLD | 38,768,446 | <p>I am using python and I want to be able to keep the domain of the email but remove the 'com', or '.co.uk', or 'us', etc</p>
<p>So basically if I have an email, say random@gmail.com. I want to have only @gmail left in string format, but I want to do this for any email. So random@yahoo.com would leave me with @yahoo, or random@aol.uk, would leave me with @aol</p>
<p>so far I have:</p>
<pre><code> domain = re.search("@[\w.]+", val)
domain = domain.group()
</code></pre>
<p>That returns the domain but with the TLD . So @gmail.com, or @aol.co</p>
| 2 | 2016-08-04T13:16:49Z | 38,768,691 | <p>With pandas <a href="http://pandas.pydata.org/pandas-docs/stable/text.html" rel="nofollow">functions</a> use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a>:</p>
<pre><code>df = pd.DataFrame({'a':['random@yahoo.com','random@aol.uk','random@aol.co.uk']})
print (df)
a
0 random@yahoo.com
1 random@aol.uk
2 random@aol.co.uk
print ('@' + df.a.str.split('@').str[1].str.split('.', 1).str[0] )
0 @yahoo
1 @aol
2 @aol
Name: a, dtype: object
</code></pre>
<p>But faster is use <code>apply</code>, if in column are not <code>NaN</code> values:</p>
<pre><code>df = pd.concat([df]*10000).reset_index(drop=True)
print ('@' + df.a.str.split('@').str[1].str.split('.', 1).str[0] )
print (df.a.apply(lambda x: '@' + x.split('@')[1].split('.')[0]))
In [363]: %timeit ('@' + df.a.str.split('@').str[1].str.split('.', 1).str[0] )
10 loops, best of 3: 79.1 ms per loop
In [364]: %timeit (df.a.apply(lambda x: '@' + x.split('@')[1].split('.')[0]))
10 loops, best of 3: 27.7 ms per loop
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow"><code>extract</code></a> is faster as <code>split</code>, it can be used if <code>NaN</code> values in column:</p>
<pre><code>#not sure with all valid characters in email address
print ( '@' + df.a.str.extract(r"\@([A-Za-z0-9_]+)\.", expand=False))
In [365]: %timeit ( '@' + df.a.str.extract(r"\@([A-Za-z0-9 _]+)\.", expand=False))
10 loops, best of 3: 39.7 ms per loop
</code></pre>
| 1 | 2016-08-04T13:27:09Z | [
"python",
"pandas",
"string-parsing"
] |
Keeping domain of Email but removing TLD | 38,768,446 | <p>I am using python and I want to be able to keep the domain of the email but remove the 'com', or '.co.uk', or 'us', etc</p>
<p>So basically if I have an email, say random@gmail.com. I want to have only @gmail left in string format, but I want to do this for any email. So random@yahoo.com would leave me with @yahoo, or random@aol.uk, would leave me with @aol</p>
<p>so far I have:</p>
<pre><code> domain = re.search("@[\w.]+", val)
domain = domain.group()
</code></pre>
<p>That returns the domain but with the TLD . So @gmail.com, or @aol.co</p>
| 2 | 2016-08-04T13:16:49Z | 38,772,107 | <p>For posterity and completeness, this can also be done via index and slice:</p>
<pre><code>email = 'random@aol.co.uk'
at = email.index('@')
dot = email.index('.', at)
domain = email[at:dot]
</code></pre>
<p>Using <code>split()</code> and <code>re</code> seems like overkill when the goal is to extract a single sub-string.</p>
| 0 | 2016-08-04T15:55:06Z | [
"python",
"pandas",
"string-parsing"
] |
Python: subset of list as equally distributed as possible? | 38,768,474 | <p>I have a range of possible values, for example:</p>
<pre><code>possible_values = range(100)
</code></pre>
<p>I have a list with unsystematic (but unique) numbers within that range, for example:</p>
<pre><code>somelist = [0, 5, 10, 15, 20, 33, 77, 99]
</code></pre>
<p>I want to create a new list of length < len(somelist) including a subset of these values but as equally distributed as possible over the range of possible values. For example:</p>
<pre><code>length_newlist = 2
newlist = some_function(somelist, length_newlist, possible_values)
print(newlist)
</code></pre>
<p>Which would then ideally output something like</p>
<pre><code>[33, 77]
</code></pre>
<p>So I want neither a random sample nor a sample chosen from equally spaced integers. I'd like to have a sample based on a distribution (here a uniform distribution) with regard to an interval of possible values.
Is there a function or an easy way to achieve this?</p>
| 1 | 2016-08-04T13:18:00Z | 38,768,858 | <p>I think you should check the <code>random.sample(population, k)</code> function. It samples the population into a k-length list. </p>
| -2 | 2016-08-04T13:34:07Z | [
"python",
"list"
] |
Python: subset of list as equally distributed as possible? | 38,768,474 | <p>I have a range of possible values, for example:</p>
<pre><code>possible_values = range(100)
</code></pre>
<p>I have a list with unsystematic (but unique) numbers within that range, for example:</p>
<pre><code>somelist = [0, 5, 10, 15, 20, 33, 77, 99]
</code></pre>
<p>I want to create a new list of length < len(somelist) including a subset of these values but as equally distributed as possible over the range of possible values. For example:</p>
<pre><code>length_newlist = 2
newlist = some_function(somelist, length_newlist, possible_values)
print(newlist)
</code></pre>
<p>Which would then ideally output something like</p>
<pre><code>[33, 77]
</code></pre>
<p>So I want neither a random sample nor a sample chosen from equally spaced integers. I'd like to have a sample based on a distribution (here a uniform distribution) with regard to an interval of possible values.
Is there a function or an easy way to achieve this?</p>
| 1 | 2016-08-04T13:18:00Z | 38,769,267 | <p>Suppose your range is 0..N-1, and you want a list of K<=N-1 values. Then define an "ideal" list of K values, which would be your desired distribution over this full range (frankly, I am not sure what that distribution would be, but hopefully you do). Finally, take the closest matches to those values from your randomly chosen longer-than-K sublist to get your properly distributed K-length random sublist.</p>
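A sketch of that idea, assuming the "ideal" list means K evenly spaced targets over the range (the function name and the even-spacing choice are illustrative, not from the question):

```python
def closest_subset(somelist, k, lo, hi):
    step = (hi - lo) / float(k + 1)
    targets = [lo + (i + 1) * step for i in range(k)]
    # For each ideal target, keep the closest available value
    return [min(somelist, key=lambda x: abs(x - t)) for t in targets]

subset = closest_subset([0, 5, 10, 15, 20, 33, 77, 99], 2, 0, 99)
```

On the question's sample list this picks the values nearest the one-third and two-thirds points of the range.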
| 1 | 2016-08-04T13:51:02Z | [
"python",
"list"
] |
Python: subset of list as equally distributed as possible? | 38,768,474 | <p>I have a range of possible values, for example:</p>
<pre><code>possible_values = range(100)
</code></pre>
<p>I have a list with unsystematic (but unique) numbers within that range, for example:</p>
<pre><code>somelist = [0, 5, 10, 15, 20, 33, 77, 99]
</code></pre>
<p>I want to create a new list of length < len(somelist) including a subset of these values but as equally distributed as possible over the range of possible values. For example:</p>
<pre><code>length_newlist = 2
newlist = some_function(somelist, length_newlist, possible_values)
print(newlist)
</code></pre>
<p>Which would then ideally output something like</p>
<pre><code>[33, 77]
</code></pre>
<p>So I want neither a random sample nor a sample chosen from equally spaced integers. I'd like to have a sample based on a distribution (here a uniform distribution) with regard to an interval of possible values.
Is there a function or an easy way to achieve this?</p>
| 1 | 2016-08-04T13:18:00Z | 38,769,633 | <p>What about the closest values of your subset to certain list's pivots? ie:</p>
<pre><code>def some_function(somelist, length_list, possible_values):
a = min(possible_values)
b = max(possible_values)
chunk_size = (b-a)/(length_list+1)
new_list = []
for i in range(1,length_list+1):
index = a+i*chunk_size
new_list.append(min(somelist, key=lambda x:abs(x-index)))
return new_list
possible_values = range(100)
somelist = [0, 5, 10, 15, 20, 33, 77, 99]
length_newlist = 2
newlist = some_function(somelist, length_newlist, possible_values)
print(newlist)
</code></pre>
<p>In any case, I'd also recommend taking a look at <a href="http://docs.scipy.org/doc/numpy/reference/routines.random.html" rel="nofollow">numpy's random sampling</a> functions, which could help you as well.</p>
| 1 | 2016-08-04T14:05:01Z | [
"python",
"list"
] |
How to sync data between a Google Sheet and a Mysql DB? | 38,768,475 | <p>So I have a Google sheet that maintains a lot of data. I also have a MySQL DB with a huge chunk of data. There is a vital piece of information in the Sheet that is also present in the DB. Both need to be in sync. The information always enters the Sheet first. I had a python script with mysql queries to update my database separately. </p>
<p>Now the work flow has changed. Data will enter the sheet and whenever that happens the database has to updated automatically.</p>
<ul>
<li>After some research, I found that using the <code>onEdit</code> function of Google AppScript (I learned from <a href="https://developers.google.com/apps-script/guides/triggers/" rel="nofollow">here</a>.), I could pickup when the file has changed. </li>
<li>The Next step is to fetch the data from relevant cell, which I can do using <a href="https://developers.google.com/apps-script/guides/sheets" rel="nofollow">this</a>.</li>
<li>Now I need to connect to the DB and send some queries. This is where I am stuck. </li>
</ul>
<p><strong>Approach 1:</strong></p>
<p>Have a python web-app running live. Send the data via <a href="https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app" rel="nofollow">UrlFetchApp</a>.This I yet have to try. </p>
<p><strong>Approach 2:</strong></p>
<p>Connect to mySQL remotely through appscript. But I am not sure this is possible after 2-3 hours of reading the docs.</p>
<p>So this is my scenario. Any viable solution you can think of or a better approach?</p>
| -1 | 2016-08-04T13:18:07Z | 38,782,303 | <p>Connect directly to mySQL. You likely missed reading this part <a href="https://developers.google.com/apps-script/guides/jdbc" rel="nofollow">https://developers.google.com/apps-script/guides/jdbc</a></p>
| 1 | 2016-08-05T06:13:04Z | [
"python",
"mysql",
"google-apps-script",
"google-spreadsheet"
] |
AWS Elastic Beanstalk Environment Variables in Python | 38,768,549 | <p>Recently I have been trying to deploy a django webapp to AWS Elastic Beanstalk and everything has been going fine. However part of my app uses that Twitter API so I need to import my API keys. My understanding is that I should use Configuration > Software Configurations > Environment Properties. I set this up inputting my keys but when I checked the site it still failed.</p>
<p>I have been using this to try and import the variables is that correct?</p>
<pre><code>import os
os.enviorn.get('TWITTER_ACCESS_TOKEN')
</code></pre>
<p>I checked to see if the variables were making it to the server and when I ran <code>eb printenv</code> I was shown this:</p>
<pre><code> Environment Variables:
TWITTER_ACCESS_TOKEN = XXXXX
TWITTER_ACCESS_SECRET = XXXX
TWITTER_CONSUMER_SECRET = XXXX
TWITTER_CONSUMER_KEY = XXXXX
</code></pre>
<p>Any help would be greatly appreciated.</p>
| 2 | 2016-08-04T13:21:12Z | 38,769,596 | <p>The attribute you are using doesn't exist: <code>os.enviorn</code> is a misspelling of <code>os.environ</code>. Changing the code to
<code>os.environ.get('TWITTER_ACCESS_TOKEN')</code>, or the same call with any other key among your env vars, should do the trick.</p>
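A quick local sanity check of the corrected call (the token value below is just a placeholder; on Beanstalk the variable is set for you):

```python
import os

os.environ['TWITTER_ACCESS_TOKEN'] = 'XXXXX'  # placeholder for the real key

token = os.environ.get('TWITTER_ACCESS_TOKEN')
missing = os.environ.get('NO_SUCH_KEY', 'fallback')  # .get avoids a KeyError
```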
| 0 | 2016-08-04T14:03:56Z | [
"python",
"django",
"amazon-web-services"
] |
How can I create an abstract syntax tree considering '|'? (Ply / Yacc) | 38,768,585 | <p>Considering the following grammar:</p>
<pre><code>expr : expr '+' term | expr '-' term | term
term : term '*' factor | term '/' factor | factor
factor : '(' expr ')' | identifier | number
</code></pre>
<p>This is my code using ply:</p>
<pre><code>from ply import lex, yacc
tokens = [
"identifier",
"number",
"plus",
"minus",
"mult",
"div"
]
t_ignore = r" \t"
t_identifier = r"^[a-zA-Z]+$"
t_number = r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?"
t_plus = r"\+"
t_minus = r"-"
t_mult = r"\*"
t_div = r"/"
def p_stmt(p):
"""stmt : expr"""
p[0] = ("stmt", p[1])
def p_expr(p):
"""expr : expr plus term
| expr minus term
| term"""
p[0] = ("expr", p[1], p[2]) # Problem here <<<
def p_term(p):
"""term : term mult factor
| term div factor
| factor"""
def p_factor(p):
"""factor : '(' expr ')'
| identifier
| number"""
if __name__ == "__main__":
lex.lex()
yacc.yacc()
data = "32 + 10"
result = yacc.parse(data)
print(result)
</code></pre>
<p>How am I supposed to build an AST with the expression if I can't access the operators? I could separate the functions like p_expr_plus, but in this case, I would eliminate operator precedence. The <a href="http://www.dabeaz.com/ply/ply.html" rel="nofollow">docs</a> are not so helpful, since I'm a beginner and can't solve this problem. The best material I've found on the subject <a href="http://www.dabeaz.com/ply/PLYTalk.pdf" rel="nofollow">is this</a>, but it does not consider the complexity of operator precedence.</p>
<p>EDIT: I can't access p[2] or p[3], since I get an IndexError (it's matching the term only). In the PDF I've linked, they explicitly put the operator inside the tuple, like: ('+', p[1], p[2]), thus evincing my problem considering precedence (I can't separate the functions, the expression is the expression, there should be a way to consider the pipes and access any operator).</p>
| 1 | 2016-08-04T13:22:50Z | 38,768,749 | <p>As far as I can see, in <code>p[0] = ("expr", p[1], p[2])</code>, p[1] would be the left hand expression, p[2] would be the operator, and p[3] (that you aren't using) would be the right hand term. </p>
<p>Just use p[2] to determine the operator, add p[3], since you will need it, and you should be good to go.</p>
<p>Also, you must verify how many items <code>p</code> has, since if the last rule, <code>| term"""</code> is matched, p will only have two items instead of four.</p>
<p>Take a look at a snippet from the <a href="https://github.com/dabeaz/ply/blob/master/example/GardenSnake/GardenSnake.py" rel="nofollow">GardenSnake example:</a></p>
<pre><code>def p_comparison(p):
"""comparison : comparison PLUS comparison
| comparison MINUS comparison
| comparison MULT comparison
| comparison DIV comparison
| comparison LT comparison
| comparison EQ comparison
| comparison GT comparison
| PLUS comparison
| MINUS comparison
| power"""
if len(p) == 4:
p[0] = binary_ops[p[2]]((p[1], p[3]))
elif len(p) == 3:
p[0] = unary_ops[p[1]](p[2])
else:
p[0] = p[1]
</code></pre>
| 1 | 2016-08-04T13:29:37Z | [
"python",
"abstract-syntax-tree",
"yacc",
"ply"
] |
Installing a package in Conda environment, but only works in Python not iPython? | 38,768,620 | <p>I am using an Ubuntu docker image. I've installed Anaconda on it with no issues. I'm now trying to install tensorflow, using the directions on the tensorflow website:</p>
<pre><code>conda create --name tensorflow python=3.5
source activate tensorflow
<tensorflow> conda install -c conda-forge tensorflow
</code></pre>
<p>It installs with no errors. However, when I import in <code>iPython</code>, it tells me there is no module <code>tensorflow</code>. But if I import when in <code>Python</code>, it works fine.</p>
<p>What's going on and how do I fix it?</p>
| 0 | 2016-08-04T13:24:15Z | 38,771,575 | <p>You have to install IPython in the conda environment</p>
<pre><code>source activate tensorflow
conda install ipython
</code></pre>
| 1 | 2016-08-04T15:29:24Z | [
"python",
"docker",
"path",
"tensorflow",
"anaconda"
] |
How to correctly use fminsearch in Python? | 38,768,635 | <p>I'm trying to translate a part of my matlab code in python. Actually I'm looking for how to translate <code>fminsearch</code> and I found it on this website with this example :</p>
<pre><code>import scipy.optimize
banana = lambda x: 100*(x[1]-x[0]**2)**2+(1-x[0])**2
xopt = scipy.optimize.fmin(func=banana, x0=[-1.2,1])
</code></pre>
<p>My first question is how to return also the value of <code>fmin</code> ? </p>
<p>And in my code when I type :</p>
<pre><code>banana = lambda X: diff_norm(X, abst0, ord0);
Xu = scipy.optimize.fmin(func=banana, X)
</code></pre>
<p>Python answered me :</p>
<pre><code>Xu = scipy.optimize.fmin(func=banana, X)
SyntaxError: non-keyword arg after keyword arg
</code></pre>
<p>I don't understand why Python told me that because what i want to do is to minimize the function <code>diff_norm</code> changing the values of <code>X</code>, i precise <code>X</code> is an array of length 10.</p>
<p>Thank you very much for your help !</p>
| 1 | 2016-08-04T13:24:45Z | 38,768,841 | <p>Python told you that because in Python, keyword arguments always follow non-keyword (i.e. positional) arguments (keyword args have a name assigned to them, as in <code>func</code> in the <code>fmin</code> call). Your function call should look like:</p>
<pre><code>Xu = scipy.optimize.fmin(func=banana, x0=X)
</code></pre>
<p>in order to comply with <em><a href="https://docs.python.org/3/reference/expressions.html#calls" rel="nofollow">Python's calling conventions</a></em>. Alternatively, and, according to the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html#scipy-optimize-fmin" rel="nofollow">function definition of <code>fmin</code></a>, you could only supply positional arguments for these two first arguments:</p>
<pre><code>Xu = scipy.optimize.fmin(banana, X)
</code></pre>
<p>this will return the values that minimize the function, so, just call the function providing these arguments:</p>
<pre><code>minval = banana(Xu)
</code></pre>
<p>Alternatively you could call <code>fmin</code> with <code>full_output = True</code> and get a tuple of elements back, the second element of that tuple is the minimum value:</p>
<pre><code>_, minval, *_ = scipy.optimize.fmin(banana, X, full_output=True)
</code></pre>
<p>Now <code>minval</code> contains your full output.</p>
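The calling convention itself can be checked with any plain function, no scipy required (f here is just a hypothetical stand-in for fmin):

```python
def f(func, x0):
    return func, x0

a = f(func=sum, x0=[1, 2])  # fine: all keyword arguments
b = f(sum, x0=[1, 2])       # fine: positional first, keyword after
# f(func=sum, [1, 2])       # SyntaxError: positional argument follows keyword argument
```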
| 0 | 2016-08-04T13:33:26Z | [
"python",
"matlab",
"python-2.7",
"python-3.x"
] |
Find element with certain text using Selenium with Python | 38,768,687 | <p>Currently, I am using this and it works.</p>
<pre><code>self.browser.find_element_by_xpath('//a/h4[text()="item I want"]').click()
</code></pre>
<p>Is there a better way to select by text? I feel like code readability suffers immensely using xpath.</p>
<p>Simple HTML example that can be any number of elements. The purpose is to test a specific wine that is added to the database for a functional test.</p>
<pre><code>{% extends 'wine/base.html' %}
{% block content %}
<section id="wine_content">
<div class="cards">
{% for wine in wines %}
<div class="card">
<a href="/wine/{{ wine.id }}">
<h4>{{ wine.name }}</h4>
<p>{{ wine.vintage }}</p>
<p>{{ wine.description}}</p>
</a>
</div>
{% endfor %}
</div>
</section>
{% endblock %}
</code></pre>
| 2 | 2016-08-04T13:27:01Z | 38,769,262 | <p>If you want to improve the code readability, you should consider following a page object pattern.</p>
<p>A first step would be to define the locator in a variable with an explicit name:</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
</code></pre>
<pre class="lang-py prettyprint-override"><code>item_wine_pinot_noir = (By.XPATH, "//a/h4[text()='%s']" % "Pinneau noir")
browser.find_element(*item_wine_pinot_noir).click()
</code></pre>
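The locator-tuple idea can be exercised without a browser by stubbing the driver (StubDriver and the wine name are hypothetical; a real driver returns a WebElement):

```python
XPATH = "xpath"  # stands in for selenium's By.XPATH constant

def wine_locator(name):
    return (XPATH, "//a/h4[text()='%s']" % name)

class StubDriver(object):
    def find_element(self, by, value):
        return (by, value)  # echo the unpacked locator for inspection

element = StubDriver().find_element(*wine_locator("Pinot noir"))
```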
| 3 | 2016-08-04T13:50:55Z | [
"python",
"django",
"selenium",
"testcase"
] |
Find element with certain text using Selenium with Python | 38,768,687 | <p>Currently, I am using this and it works.</p>
<pre><code>self.browser.find_element_by_xpath('//a/h4[text()="item I want"]').click()
</code></pre>
<p>Is there a better way to select by text? I feel like code readability suffers immensely using xpath.</p>
<p>Simple HTML example that can be any number of elements. The purpose is to test a specific wine that is added to the database for a functional test.</p>
<pre><code>{% extends 'wine/base.html' %}
{% block content %}
<section id="wine_content">
<div class="cards">
{% for wine in wines %}
<div class="card">
<a href="/wine/{{ wine.id }}">
<h4>{{ wine.name }}</h4>
<p>{{ wine.vintage }}</p>
<p>{{ wine.description}}</p>
</a>
</div>
{% endfor %}
</div>
</section>
{% endblock %}
</code></pre>
| 2 | 2016-08-04T13:27:01Z | 38,769,385 | <p>You can try matching on the link text. Note that the <code>&lt;a&gt;</code> element here contains more than just the name (the vintage and description too), so a partial match is needed:</p>
<pre><code>self.browser.find_element_by_partial_link_text('item I want').click()
</code></pre>
| 0 | 2016-08-04T13:55:10Z | [
"python",
"django",
"selenium",
"testcase"
] |
How to append a "label" to a numpy array | 38,768,688 | <p>I have a numpy array created such as:</p>
<pre><code>x = np.array([[1,2,3,4],[5,6,7,8]])
y = np.asarray([x])
</code></pre>
<p>which prints out </p>
<pre><code> x=[[1 2 3 4]
[5 6 7 8]]
y=[[[1 2 3 4]
[5 6 7 8]]]
</code></pre>
<p>What I would like is an array such as </p>
<pre><code>[0 [[1 2 3 4]
[5 6 7 8]]]
</code></pre>
<p>What's the easiest way to go about this?</p>
<p>Thanks!</p>
| 0 | 2016-08-04T13:27:01Z | 38,768,945 | <p>To do what you're asking, just write:</p>
<pre><code>labeledArray = [0, x]
</code></pre>
<p>This way, you will get a standard Python list with 0 as the first element and a NumPy array as the second element.</p>
<p>However, in practice, you are probably trying to label for the purpose of later recall. In that case, I'd recommend you use a dictionary, as it is less confusing to keep track of:</p>
<pre><code>myArrays = {}
myArrays[0] = x
</code></pre>
<p>Which can be used as follows:</p>
<pre><code>>>> myArrays
{0: array([[1, 2, 3, 4],
[5, 6, 7, 8]])}
>>> myArrays[0]
array([[1, 2, 3, 4],
[5, 6, 7, 8]])
</code></pre>
| 1 | 2016-08-04T13:37:40Z | [
"python",
"numpy"
] |
How to check if the input for date is correct? | 38,768,730 | <pre><code>def date(date):
DD, MM, YYYY=date.split(' ')
return datetime.date(int(YYYY),int(MM),int(DD))
while True:
end=input('End Date (DD MM YYYY): ')
end=date(end)
if end[0:1].isdigit() and end[3:4].isdigit() and end[6:9].isdigit() and datetime.datetime.strptime(end, '%d/%m/%Y'):
break
else:
print("Invalid")
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\NPStudent\Downloads\main.py", line 311, in &lt;module&gt;
    if start[0:1].isdigit() and start[3:4].isdigit() and start[6:9].isdigit() and datetime.datetime.strptime(start, '%d/%m/%Y'):
TypeError: 'datetime.date' object is not subscriptable
</code></pre>
| -2 | 2016-08-04T13:29:00Z | 38,976,636 | <p>Not sure on the purpose of the date function, but one potential way of checking the input is:</p>
<pre><code>import datetime
while True:
end = input('End Date (DD MM YYYY): ')
try:
datetime.datetime.strptime(end,'%d %m %Y')
print("Correct")
#date function?
break
except ValueError:
print("Invalid")
</code></pre>
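The same validation can be factored into a small helper, which makes it easy to test in isolation (a sketch; the function name is illustrative):

```python
import datetime

def parse_end_date(text):
    """Return a datetime.date for 'DD MM YYYY' input, or None if invalid."""
    try:
        return datetime.datetime.strptime(text, '%d %m %Y').date()
    except ValueError:
        return None
```

Note that <code>strptime</code> rejects both malformed strings and impossible dates (e.g. 31 February), so no separate digit checks are needed.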
| 0 | 2016-08-16T13:45:27Z | [
"python"
] |
View of a view of a numpy array is a copy? | 38,768,815 | <p>If you change a view of a numpy array, the original array is also altered. This is intended behaviour.</p>
<pre><code>arr = np.array([1,2,3])
mask = np.array([True, False, False])
arr[mask] = 0
arr
# Out: array([0, 2, 3])
</code></pre>
<p>However, if I take a view of such a view, and change that, then the original array is <em>not</em> altered:</p>
<pre><code>arr = np.array([1,2,3])
mask_1 = np.array([True, False, False])
mask_1_arr = arr[mask_1] # Becomes: array([1])
mask_2 = np.array([True])
mask_1_arr[mask_2] = 0
arr
# Out: array([1, 2, 3])
</code></pre>
<p>This implies to me that, when you take a view of a view, you actually get back a copy. Is this correct? Why is this?</p>
<p>The same behaviour occurs if I use numpy arrays of numerical indices instead of a numpy array of boolean values. (E.g. <code>arr[np.array([0])][np.array([0])] = 0</code> doesn't change the first element of <code>arr</code> to 0.)</p>
| 1 | 2016-08-04T13:32:21Z | 38,768,993 | <p>Selection by <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing" rel="nofollow">basic slicing</a> always returns a view. Selection by <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">advanced
indexing</a> always returns a copy. <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing" rel="nofollow">Selection by boolean mask</a> is a form of advanced
indexing. (The other form of advanced indexing is <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing" rel="nofollow">selection by integer array</a>.)</p>
<p>However, <strong>assignment</strong> by advanced indexing affects the original array.</p>
<p>So </p>
<pre><code>mask = np.array([True, False, False])
arr[mask] = 0
</code></pre>
<p>affects <code>arr</code> because it is an assignment. In contrast,</p>
<pre><code>mask_1_arr = arr[mask_1]
</code></pre>
<p>is selection by boolean mask, so <code>mask_1_arr</code> is a copy of part of <code>arr</code>.
Once you have a copy, the jig is up. When Python executes</p>
<pre><code>mask_2 = np.array([True])
mask_1_arr[mask_2] = 0
</code></pre>
<p>the assignment affects <code>mask_1_arr</code>, but since <code>mask_1_arr</code> is a copy,
it has no effect on <code>arr</code>.</p>
<hr>
<pre><code>| | basic slicing | advanced indexing |
|------------+------------------+-------------------|
| selection | view | copy |
| assignment | affects original | affects original |
</code></pre>
<hr>
<p>Under the hood, <code>arr[mask] = something</code> causes Python to call
<code>arr.__setitem__(mask, something)</code>. The <code>ndarray.__setitem__</code> method is
implemented to modify <code>arr</code>. After all, that is the natural thing one should expect
<code>__setitem__</code> to do.</p>
<p>In contrast, as an expression <code>arr[indexer]</code> causes Python to call
<code>arr.__getitem__(indexer)</code>. When <code>indexer</code> is a slice, the regularity of the
elements allows NumPy to return a view (by modifying the strides and offset). When <code>indexer</code>
is an arbitrary boolean mask or arbitrary array of integers, there is in general
no regularity to the elements selected, so there is no way to return a
view. Hence a copy must be returned.</p>
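Both columns of the table can be demonstrated directly (a minimal sketch, assuming NumPy is installed):

```python
import numpy as np

arr = np.arange(6)

view = arr[1:4]        # basic slicing -> view
view[0] = 99           # writes through to arr

cpy = arr[np.array([1, 2])]   # advanced (integer-array) indexing -> copy
cpy[0] = -1                   # does NOT affect arr

# arr is now [0, 99, 2, 3, 4, 5]: writing into the slice view stuck,
# writing into the advanced-indexed copy did not.
```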
| 3 | 2016-08-04T13:39:31Z | [
"python",
"numpy"
] |
Django Foreign Key not recognising primary key in relation | 38,768,825 | <p>I have two models, <code>Article</code> and <code>ArticlePost</code>. <code>ArticlePost</code> references <code>Article</code> as a foreign key and is in a separate application to <code>Article</code>:</p>
<pre><code>====app1/models.py:======
class Article(models.Model):
name = models.CharField(max_length=200, primary_key=True)
====app2/models.py======
class ArticlePost(models.Model):
article = models.ForeignKey(Article, null=False, db_index=True)
created_at = models.DateTimeField(auto_now_add=True)
comment = models.TextField(blank=True)
</code></pre>
<p>I have run python manage makemigrations which gives the following:</p>
<pre><code>operations = [
migrations.CreateModel(
name='ArticlePost',
fields=[
('id', models.AutoField(auto_created=True, verbose_name='ID', serialize=False, primary_key=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('comment', models.TextField(blank=True)),
('article', models.ForeignKey(to='app2.Article')),
],
),
]
</code></pre>
<p>However when I run python manage migrate I get:</p>
<pre><code>django.db.utils.ProgrammingError: there is no unique constraint matching given keys for referenced table "article"
</code></pre>
<p>What is strange is that I have another model in <code>app1</code> which also references article with a foreign key which works perfectly. However in this case it would appear that Django does not know which field is the primary key for <code>Article</code>. The only difference is that <code>ArticlePost</code> is in a different application from <code>Article</code>. I am running Django 1.10. Does anyone have any idea what is causing this and how it might be fixed?</p>
<p>Alternatively if it is just a key issue maybe a solution is to remove the <code>primary_key</code> on Article and use the Django default <code>id</code> instead. In this case, how is best to do this while maintaining the Foreign Key references from other models to Article within app1?</p>
| 0 | 2016-08-04T13:32:38Z | 38,770,725 | <p>Okay so what you tried to do is absolutely correct. But the problem here is with Django.</p>
<p>When you are setting <code>name</code> as <code>primary_key</code> then according to the official documentation -> <a href="https://docs.djangoproject.com/ja/1.9/ref/models/fields/#django.db.models.Field.unique" rel="nofollow">https://docs.djangoproject.com/ja/1.9/ref/models/fields/#django.db.models.Field.unique</a></p>
<blockquote>
<p>primary_key=True implies null=False and unique=True.</p>
</blockquote>
<p>So technically, you do have a unique field in your <code>Article</code> model, but after going through a few other posts on Stack Overflow,</p>
<p>see this as a reference (go through the comments too) -> <a href="http://stackoverflow.com/questions/6039443/primary-key-and-unique-key-in-django">Primary key and unique key in django</a></p>
<p>it seems that there is some issue with <code>primary_key=True</code>.
It seems that Django only considers it as a primary key but not as <code>unique</code>.</p>
<p>So when you use <code>primary_key=True</code> for <code>name</code>, Django <strong>doesn't create</strong> its own unique auto-incrementing ids for objects. Hence you no longer have a unique id for an object in the model <code>Article</code>, which is why you get the error you are getting.</p>
<p>So, simply remove <code>primary_key=True</code>, let Django use its own auto-incrementing unique ids, and you should not get that error.</p>
| 0 | 2016-08-04T14:51:39Z | [
"python",
"django",
"django-models",
"django-postgresql"
] |
Pass Argument to Python Script in Windows Command Prompt | 38,768,920 | <p>I'm trying to pass an argument to python via windows command prompt, but am encountering the following error: </p>
<p><code>[Errno 22] Invalid argument</code></p>
<p><strong>Command Prompt Code:</strong></p>
<p><code>C:\Python27\python.exe "C:\Users\Apples\Documents\ArcGIS\Grapes\Blueberry_Cobbler\recipe.py C:\Users\Apples\Documents\ArcGIS\Grapes\Blueberry_Cobbler\input1.txt"</code></p>
| -1 | 2016-08-04T13:36:35Z | 38,768,976 | <p>You are missing a closing quote after the script path and an opening quote before the argument; quote each path separately:</p>
<pre><code>C:\Python27\python.exe
"C:\Users\Apples\Documents\ArcGIS\Grapes\Blueberry_Cobbler\recipe.py"
"C:\Users\Apples\Documents\ArcGIS\Grapes\Blueberry_Cobbler\input1.txt"
</code></pre>
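With each path quoted separately, the script receives the input file as its own entry in <code>sys.argv</code>. A sketch of how <code>recipe.py</code> might pick it up (the function name is illustrative; the real script's contents are unknown):

```python
import sys

def first_argument(argv):
    # argv[0] is the script path; argv[1] is the first real argument.
    if len(argv) < 2:
        return None
    return argv[1]

if __name__ == '__main__':
    print(first_argument(sys.argv))
```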
| 0 | 2016-08-04T13:38:50Z | [
"python",
"windows",
"parameter-passing",
"command-prompt",
"arcpy"
] |
Animated plot with different color for every data point | 38,768,921 | <p>I want to create an animated plot of a time series, but I want to be able to color every data point differently. While I am running various analysis tasks on the time series data, I want to color each data point according to the region that it belongs to.</p>
<p>I followed this <a href="http://matplotlib.org/examples/animation/animate_decay.html" rel="nofollow">example</a> to get an understanding of how animated plotting works and I also found that <a href="http://stackoverflow.com/questions/36699640/how-to-plot-animated-dots-in-different-colors-with-matplotlib">answer</a> that showcases how color can be incorporated. The problem is that in that approach the whole graph is re-plotted in every iteration, thus changing the color of the whole graph and not the newly plotted data point only.</p>
<p>Can someone show me how the decay example can be altered to assign different color to each data point?</p>
| 1 | 2016-08-04T13:36:40Z | 38,770,550 | <p>You can colour points using <code>scatter</code>, and provided you're not planning on plotting too many points, simply adding new points each time with different colours may be the way to go. A minimal example based on the decay example:</p>
<pre><code>import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def data_gen(t=0):
cnt = 0
while cnt < 1000:
cnt += 1
t += 0.01
yield t, np.sin(2*np.pi*t) * np.exp(-t/10.)
def get_colour(t):
cmap = matplotlib.cm.get_cmap('Spectral')
return cmap(t%1.)
def init():
ax.set_ylim(-1.1, 1.1)
ax.set_xlim(0, 10)
fig, ax = plt.subplots()
ax.grid()
def run(data):
# Get some data and plot
t, y = data
ax.scatter(t, y, c=get_colour(t))
#Update axis
xmin, xmax = ax.get_xlim()
if t >= xmax:
ax.set_xlim(xmin, 2*xmax)
ax.figure.canvas.draw()
ani = animation.FuncAnimation(fig, run, data_gen, blit=False, interval=10,
repeat=False, init_func=init)
plt.show()
</code></pre>
| 0 | 2016-08-04T14:43:47Z | [
"python",
"animation",
"matplotlib",
"plot"
] |