title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Preventing multiple matches in list iteration | 38,663,046 | <p>I am relatively new to python, so I will try my best to explain what I am trying to do. I am trying to iterate through two lists of stars (which both contain record arrays), matching stars between the lists by their coordinates within a tolerance (in this case RA and Dec, which are both indices within the record arrays). However, multiple stars from one list end up matching the same star in the other, because both fall within the atol. Is there a way to prevent this? Here's what I have so far:</p>
<pre><code>from __future__ import print_function
import numpy as np
###importing data###
Astars = list()
for s in ApStars:###this is imported but not shown
Astars.append(s)
wStars = list()
data1 = np.genfromtxt('6819.txt', dtype=None, delimiter=',', names=True)
for star in data1:
wStars.append(star)
###beginning matching stars between the Astars and wStars###
list1 = list()
list2 = list()
for star,s in [(star,s) for star in wStars for s in Astars]:
if np.logical_and(np.isclose(s["RA"],star["RA"], atol=0.000277778)==True ,
np.isclose(s["DEC"],star["DEC"],atol=0.000277778)==True):
if star not in list1:
list1.append(star) #matched wStars
if s not in list2:
list2.append(s) #matched Astars
</code></pre>
<p>I cannot decrease the atol because it goes beyond the instrumental error. What happens is this: There are multiple Wstars that match one Astar. I just want a star for a star, if it is possible.</p>
<p>Any suggestions?</p>
| 6 | 2016-07-29T15:59:48Z | 38,669,732 | <p>I appreciate all of the suggestions you have provided! I did some asking around as well and found a clever way to accomplish what I was searching for. Here is what we came up with:</p>
<pre><code>sharedStarsW = list()
sharedStarsA = list()
for s in Astars:
distance = [((s["RA"] - star["RA"])**2 + (s["DEC"] - star["DEC"])**2)**0.5 for star in wStars]
if np.amin(distance) < 0.00009259266:
sharedStarsW.append(wStars[(np.argmin(distance))])
sharedStarsA.append(s)
</code></pre>
<p>Using a list comprehension, this calculates the distance from each Astar to every wStar and keeps only the closest one, provided it falls within 1/3 of an arcsecond. If an Astar has multiple wStar matches, it appends the wStar at the index giving the shortest distance, along with its Astar, so each star is matched at most once.</p>
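A minimal, self-contained sketch of the same closest-match idea; the coordinates below are made-up toy values, not data from the question:

```python
import numpy as np

# Toy catalogues (RA, Dec in degrees) -- invented for illustration only
a_stars = np.array([[10.0, 20.0], [11.0, 21.0]])
w_stars = np.array([[10.00002, 20.00001],   # very close to Astar 0
                    [10.00005, 20.00004],   # also within tolerance of Astar 0
                    [30.0, 40.0]])          # matches nothing

tol = 0.00009259266   # ~1/3 arcsecond in degrees, as in the code above

matched_w, matched_a = [], []
for i, (ra, dec) in enumerate(a_stars):
    dist = np.hypot(w_stars[:, 0] - ra, w_stars[:, 1] - dec)
    if dist.min() < tol:
        matched_w.append(int(dist.argmin()))  # keep only the single closest wStar
        matched_a.append(i)

print(matched_w, matched_a)   # [0] [0]
```

Even though two wStars fall inside the tolerance of the first Astar, only the closest one (index 0) is kept, so no wStar is matched twice to the same Astar.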
| 0 | 2016-07-30T02:11:29Z | [
"python",
"numpy"
] |
How to rotate image with Matplotlib without border? | 38,663,110 | <p>I have an image with a white background that I would like to rotate. The rotation, however, creates an unwanted border around the image (not the axes, but the grey outline around the pencil below). How can I remove this border?</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy import ndimage

fig,ax = plt.subplots(figsize=(8,6))
pencil = mpimg.imread('Pencil.jpg')
pencil = ndimage.rotate(pencil,105)
ax.imshow(pencil,aspect='equal')
</code></pre>
<p><a href="http://i.stack.imgur.com/KbwoG.png" rel="nofollow"><img src="http://i.stack.imgur.com/KbwoG.png" alt="enter image description here"></a></p>
| 1 | 2016-07-29T16:03:32Z | 38,663,306 | <p><a href="http://stackoverflow.com/questions/9295026/matplotlib-plots-removing-axis-legends-and-white-spaces">Matplotlib plots: removing axis, legends and white spaces</a></p>
<p>Use <code>ax.axis('off')</code> (the <code>axis</code> method belongs to the Axes object, not to <code>mpimg</code>)</p>
<p>See also the matplotlib documentation:</p>
<p><a href="http://matplotlib.org/api/axis_api.html" rel="nofollow">http://matplotlib.org/api/axis_api.html</a></p>
| 3 | 2016-07-29T16:16:06Z | [
"python",
"matplotlib"
] |
Python-Twitter: Can't convert 'bytes' object to string | 38,663,142 | <p>I am trying to encode tweets from unicode to utf-8, but the following error gets logged on the CLI when I execute the file:</p>
<pre><code>File "PI.py", line 21, in analyze
text += s.text.encode('utf-8')
TypeError: Can't convert 'bytes' object to str implicitly
</code></pre>
<p>Here is my code:</p>
<pre><code>text = ""
for s in statuses:
if (s.lang =='en'):
text += s.text.encode('utf-8')
</code></pre>
<p>And here are the packages I am importing:</p>
<pre><code>import sys
import operator
import requests
import json
import twitter
from watson_developer_cloud import PersonalityInsightsV2 as PersonalityInsights
</code></pre>
<p>How can I get the strings (tweet text) converted to the right encoding properly so I can use them? What am I doing wrong?</p>
<p>Thanks in advance.</p>
| 0 | 2016-07-29T16:05:48Z | 38,663,175 | <p>You should initialize your <code>text</code> as bytes by appending a leading <code>b</code>:</p>
<pre><code>text = b""
</code></pre>
<p>This will allow the new bytes object to be concatenated without errors to the existing bytes object <code>text</code></p>
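A tiny Python 3 illustration of the mismatch and the fix; the byte strings here are stand-ins for <code>s.text.encode('utf-8')</code>:

```python
chunks = ['héllo'.encode('utf-8'), 'wörld'.encode('utf-8')]  # stand-ins for tweet texts

text = ''               # str accumulator: str += bytes fails
try:
    for c in chunks:
        text += c
except TypeError as e:
    print(e)            # exact wording varies by Python version

text = b''              # bytes accumulator: concatenation works
for c in chunks:
    text += c
print(text.decode('utf-8'))   # héllowörld
```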
| 1 | 2016-07-29T16:07:49Z | [
"python",
"encoding",
"utf-8",
"python-twitter"
] |
Pivot table error:1 ndim Categorical are not supported at this time | 38,663,150 | <p>My goal is to box-plot the 'score' by 'label', I don't care about "date" and "Cusip". I want to use 'pivot' to reshape the data, so that each Label is in one column and I can boxplot it.</p>
<pre><code> date Cusip Label Score
663182 2015-07-31 00846UAG AAA 138.15
663183 2015-07-31 00846UAH AAA 171.93
663184 2015-07-31 00846UAJ AAA 175.67
663185 2015-07-31 023767AA BB 187.92
663186 2015-07-31 023770AA BB 176.25
t.pivot(index=['date','Cusip'],columns='Label',values='Score')
</code></pre>
<p>Errors shows:</p>
<pre><code>NotImplementedError: > 1 ndim Categorical are not supported at this time
</code></pre>
<p>More details:</p>
<pre><code>C:\Anaconda3\lib\site-packages\pandas\core\categorical.py in __init__(self, values, categories, ordered, name, fastpath, levels)
285 try:
--> 286 codes, categories = factorize(values, sort=True)
287 except TypeError:
C:\Anaconda3\lib\site-packages\pandas\core\algorithms.py in factorize(values, sort, order, na_sentinel, size_hint)
184 uniques = vec_klass()
--> 185 labels = table.get_labels(vals, uniques, 0, na_sentinel, True)
186
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_labels (pandas\hashtable.c:13921)()
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
</code></pre>
| 0 | 2016-07-29T16:06:17Z | 38,675,092 | <p>You really should be using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> as you have got duplicate entries in your <code>date</code> column.</p>
<pre><code>pd.pivot_table(df, values='Score', index=['date', 'Cusip'], columns=['Label']).boxplot()
</code></pre>
<p><img src="http://i.stack.imgur.com/8nx0m.png" alt="alt text" title="Resulting Boxplot"></p>
| 1 | 2016-07-30T14:37:40Z | [
"python",
"pandas",
"pivot"
] |
How to select a specific tag within this html? | 38,663,186 | <p>How would I select all of the titles on this page?</p>
<pre><code>http://bulletin.columbia.edu/columbia-college/departments-instruction/african-american-studies/#coursestext
</code></pre>
<p>for example: I'm trying to get all the lines similar to this:</p>
<pre><code>AFAS C1001 Introduction to African-American Studies. 3 points.
</code></pre>
<p>main_page is iterating through all of the school classes from here so I can grab all of the titles like above:</p>
<pre><code>http://bulletin.columbia.edu/columbia-college/departments-instruction/
for page in main_page:
sub_abbrev = page.find("div", {"class": "courseblock"})
</code></pre>
<p>I have this code but I can't figure out exactly how to select all of the <code>strong</code> tags of the first child.
I'm using the latest Python and Beautiful Soup 4 to web-scrape.
Let me know if there is anything else that is needed.
Thanks</p>
| 1 | 2016-07-29T16:08:39Z | 38,663,230 | <p>Iterate over elements with <code>courseblock</code> class, then, for every course, get the element with <code>courseblocktitle</code> class. Working example using <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow"><code>select()</code> and <code>select_one()</code> methods</a>:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "http://bulletin.columbia.edu/columbia-college/departments-instruction/african-american-studies/#coursestext"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
for course in soup.select(".courseblock"):
title = course.select_one("p.courseblocktitle").get_text(strip=True)
print(title)
</code></pre>
<p>Prints:</p>
<pre><code>AFASÂ C1001Â Introduction to African-American Studies.3 points.
AFASÂ W3030Â African-American Music.3 points.
AFAS C3930 (Section 3) Topics in the Black Experience: Concepts of Race and Racism.4 points.
AFASÂ C3936Â Black Intellectuals Seminar.4 points.
AFASÂ W4031Â Protest Music and Popular Culture.3 points.
AFASÂ W4032Â Image and Identity in Contemporary Advertising.4 points.
AFASÂ W4035Â Criminal Justice and the Carceral State in the 20th Century United States.4 points.
AFAS W4037 (Section 1) Third World Studies.4 points.
AFASÂ W4039Â Afro-Latin America.4 points.
</code></pre>
<hr>
<p>A good follow-up question from @double_j:</p>
<blockquote>
<p>In the OPs example, he has a space between the points. How would you keep that? That's how the data shows on the site, even though it's not really in the source code.</p>
</blockquote>
<p>I thought about using the <code>separator</code> argument of the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text" rel="nofollow"><code>get_text()</code> method</a>, but that would also add an extra space before the last dot. Instead, I would join the <code>strong</code> element texts via <code>str.join()</code>:</p>
<pre><code>for course in soup.select(".courseblock"):
title = " ".join(strong.get_text() for strong in course.select("p.courseblocktitle > strong"))
print(title)
</code></pre>
| 3 | 2016-07-29T16:11:51Z | [
"python",
"html",
"web-scraping",
"beautifulsoup"
] |
Strange results from timedelta with pandas | 38,663,296 | <p>I have a dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame({'date_sent': ['06/11/2015', '', 'Not required', '06/11/2015'],
'date_published': ['06/11/2015', '', '', '23/01/2016']})
</code></pre>
<p>I want to calculate the difference between the two dates in each row, so first I convert the strings to date objects:</p>
<pre><code>df.date_published = pd.to_datetime(df.date_published.str.replace('Not required', ''))
df.date_sent = pd.to_datetime(df.date_sent.str.replace('Not required', ''))
</code></pre>
<p>Then I subtract one from the other:</p>
<pre><code>df['delay'] = df.date_published - df.date_sent
</code></pre>
<p>But this gives me peculiar results - it's not 226 days between 06/11/2015 and 23/01/2016:</p>
<pre><code>df
date_published date_sent delay
0 2015-06-11 2015-06-11 0 days
1 NaT NaT NaT
2 NaT NaT NaT
3 2016-01-23 2015-06-11 226 days
</code></pre>
<p>What am I doing wrong? I'm using pandas v0.18.</p>
| 0 | 2016-07-29T16:15:42Z | 38,663,462 | <p>It is exactly 226 days between those dates as parsed: pandas reads <code>'06/11/2015'</code> month-first (June 11), and June 11, 2015 to January 23, 2016 is 226 days.</p>
| 0 | 2016-07-29T16:25:04Z | [
"python",
"pandas"
] |
Strange results from timedelta with pandas | 38,663,296 | <p>I have a dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame({'date_sent': ['06/11/2015', '', 'Not required', '06/11/2015'],
'date_published': ['06/11/2015', '', '', '23/01/2016']})
</code></pre>
<p>I want to calculate the difference between the two dates in each row, so first I convert the strings to date objects:</p>
<pre><code>df.date_published = pd.to_datetime(df.date_published.str.replace('Not required', ''))
df.date_sent = pd.to_datetime(df.date_sent.str.replace('Not required', ''))
</code></pre>
<p>Then I subtract one from the other:</p>
<pre><code>df['delay'] = df.date_published - df.date_sent
</code></pre>
<p>But this gives me peculiar results - it's not 226 days between 06/11/2015 and 23/01/2016:</p>
<pre><code>df
date_published date_sent delay
0 2015-06-11 2015-06-11 0 days
1 NaT NaT NaT
2 NaT NaT NaT
3 2016-01-23 2015-06-11 226 days
</code></pre>
<p>What am I doing wrong? I'm using pandas v0.18.</p>
| 0 | 2016-07-29T16:15:42Z | 38,665,477 | <p>See if this helps. By default <code>pd.to_datetime</code> parses <code>'06/11/2016'</code> month-first (as June 11th), so pass <code>dayfirst=True</code> or an explicit <code>format</code>:</p>
<pre><code>print pd.to_datetime('06/11/2016', dayfirst=True, format='%d/%m/%Y', errors='ignore')
print pd.to_datetime('06/11/2016', format='%m/%d/%Y', errors='ignore')
2016-11-06 00:00:00
2016-06-11 00:00:00
</code></pre>
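To make the point concrete, here is a sketch (Python 3 syntax) of why the question's subtraction produced 226 days:

```python
import pandas as pd

sent = pd.to_datetime('06/11/2015')        # month-first by default: June 11, 2015
published = pd.to_datetime('23/01/2016')   # 23 can only be a day: January 23, 2016
print((published - sent).days)             # 226

sent_dayfirst = pd.to_datetime('06/11/2015', dayfirst=True)   # November 6, 2015
print((published - sent_dayfirst).days)    # 78
```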
| 0 | 2016-07-29T18:35:56Z | [
"python",
"pandas"
] |
How to setup path and env so correct Python used | 38,663,318 | <p>System A has both Python 2.7 and Python 3.4 installed.
System B has both Python 2.7 and Python 3.5 installed.</p>
<p>I have at top of Python script:</p>
<pre><code>#!/usr/bin/env python3.5
</code></pre>
<p>The reason being that python3 compiler must be used. I want to move it between machines but this will fail now. </p>
| 0 | 2016-07-29T16:16:39Z | 38,663,371 | <p>for me just</p>
<pre><code>#!/usr/bin/env python3
</code></pre>
<p>works fine</p>
| 1 | 2016-07-29T16:19:55Z | [
"python",
"bash",
"unix",
"environment-variables"
] |
How to setup path and env so correct Python used | 38,663,318 | <p>System A has both Python 2.7 and Python 3.4 installed.
System B has both Python 2.7 and Python 3.5 installed.</p>
<p>I have at top of Python script:</p>
<pre><code>#!/usr/bin/env python3.5
</code></pre>
<p>The reason being that python3 compiler must be used. I want to move it between machines but this will fail now. </p>
| 0 | 2016-07-29T16:16:39Z | 38,663,443 | <p>Use Virtualenv to setup your python environment.</p>
| 0 | 2016-07-29T16:24:10Z | [
"python",
"bash",
"unix",
"environment-variables"
] |
How to setup path and env so correct Python used | 38,663,318 | <p>System A has both Python 2.7 and Python 3.4 installed.
System B has both Python 2.7 and Python 3.5 installed.</p>
<p>I have at top of Python script:</p>
<pre><code>#!/usr/bin/env python3.5
</code></pre>
<p>The reason being that python3 compiler must be used. I want to move it between machines but this will fail now. </p>
| 0 | 2016-07-29T16:16:39Z | 38,663,718 | <p>If you are set on using <code>#!/usr/bin/env python3.5</code>, you could create a symbolic link to the python3.4 version (called python3.5) and then reference that in your script. So both environments could use the directive <code>#!/usr/bin/env python3.5</code>. Of course, please add a comment somewhere that this is a symbolic link so people are aware of this environment situation.</p>
<p>In fact, I think the <code>python</code> in <code>#!/usr/bin/env python</code> solution is a symbolic link.</p>
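A sketch of the symlink idea; the paths here are assumptions (a temporary directory standing in for <code>/usr/local/bin</code>), so check yours with <code>command -v</code> first:

```shell
BIN=/tmp/demo-bin                     # stand-in for /usr/local/bin on System A
mkdir -p "$BIN"
# point a "python3.5" name at whatever python3 actually exists on this machine
ln -sf "$(command -v python3)" "$BIN/python3.5"
"$BIN/python3.5" -c 'import sys; print(sys.version.split()[0])'
```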
| 0 | 2016-07-29T16:42:17Z | [
"python",
"bash",
"unix",
"environment-variables"
] |
How can I turn the results from this joined query into a dict based on one of the key values? | 38,663,333 | <p>Sorry, I'm a bit new to all this.</p>
<p>I'm using psycopg2, python, and postgres.</p>
<p>I have two tables - Users and Events. I've set up a many-to-many relationship because an event can have many users and users can join many events. There's an intermediate table called UsersEvents matching User id to Event id. I want to get the list of events and users in a dict similar to this:</p>
<pre><code>[
{
event_id: 1,
event_name : 'event-name',
joined : [
{'username' : 'name'},
{'username' : 'name'},
{'username' : 'name'}
]
},
{
event_id: 2,
event_name : 'event2-name',
joined : [
{'username' : 'name2'},
{'username' : 'name2'}
]
}
]
</code></pre>
<p>I'm running a query like this with psycopg2 to get the list of events and users associated with them:</p>
<pre><code>SELECT events.id, event_name, users.username
FROM events
JOIN usersevents on events_id = events.id
JOIN users on usersevents.users_id = users.id;
</code></pre>
<p>which returns me back something like this:</p>
<pre><code>[
(1, 'event-name', 'name'),
(1, 'event-name', 'name'),
(1, 'event-name', 'name'),
(2, 'event2-name', 'name2'),
(2, 'event2-name', 'name2')
]
</code></pre>
<p>So it's great that it pulls back the information I need but now I have to try and figure out how to merge it into the format above. Is there a better query for what I'm trying to do, or should I just loop through all the results and try merging results with the same event name?</p>
<p>Edit: I understand how to return the results as a dict as pointed out in the comments. I want to take the rows that have duplicate event names and nest the users joining that event into a single dict row for a single event object.</p>
| 1 | 2016-07-29T16:17:29Z | 38,664,575 | <p>You want to build a data structure for each event by name. Create a dict that will map names to event data. For each row, either create or update the data for the event. Finally, the values from the dict is the list of event data.</p>
<pre><code>events = {}
for event_id, event_name, username in results:
    event = events.setdefault(event_id, {'event_id': event_id, 'event_name': event_name, 'joined': []})
    event['joined'].append({'username': username})
data = list(events.values())
</code></pre>
| 0 | 2016-07-29T17:39:09Z | [
"python",
"psycopg2"
] |
How can I force NumPy to append a list as an object? | 38,663,380 | <p>I have an empty <s>list</s> array declared, as such:</p>
<p><code>r = np.array([])</code></p>
<p>And I have an operation that adds an array of values to <code>r</code> on every loop. Say the first loop adds <code>[1,2,3]</code> and the next adds <code>[4,5,6,7]</code>. How can I append to the array <code>r</code> while forcing the arguments to <code>append</code> to be added as <em>objects</em> rather than element-wise?</p>
<p>Meaning, I want this after it is finished:</p>
<p><code>r = np.array([[1, 2, 3], [4, 5, 6, 7]])</code></p>
<p>where I assume it would be required that <code>dtype = object</code>. Or at least that's what I think I want. </p>
<p>If I use</p>
<pre><code>r = np.append(r, [1,2,3])
r = np.append(r, [4,5,6,7])
</code></pre>
<p>I get </p>
<p><code>r = np.array([1., 2., 3., 4., 5., 6., 7.])</code></p>
<p>The only problem is, I need to use this later in a specific way. I need to be able to access the sub-lists individually and do operations on them, but I also, later on, need to be able to access the array as a whole. Normally I would do a <code>np.ravel</code> to get this, but I cannot do that when the sub-arrays don't have agreeing lengths (they won't).</p>
| 1 | 2016-07-29T16:20:23Z | 38,663,521 | <p>You can use a list comprehension when creating the array:</p>
<pre><code>>>> np.array([row for row in ([1, 2, 3], [4, 5, 6, 6])])
array([[1, 2, 3], [4, 5, 6, 6]], dtype=object)
</code></pre>
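A related sketch: if the sublists ever happen to have equal lengths, <code>np.array</code> would build a 2-D numeric array instead, so pre-allocating an object array forces the intended layout:

```python
import numpy as np

rows = [[1, 2, 3], [4, 5, 6, 7]]
r = np.empty(len(rows), dtype=object)   # 1-D container of Python objects
for i, row in enumerate(rows):
    r[i] = row

print(r[0])                             # [1, 2, 3], addressable on its own
flat = [x for row in r for x in row]    # manual "ravel" that tolerates ragged rows
print(flat)                             # [1, 2, 3, 4, 5, 6, 7]
```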
| 2 | 2016-07-29T16:29:06Z | [
"python",
"arrays",
"numpy"
] |
Python encryption and rotation | 38,663,417 | <p>Could you please help with the following code? I am trying to write a function <code>def encrypt(text, rot)</code>, which receives a string and an integer. The function should rotate each letter in the text to the right by the given number of rotations. For example, the final outcome should look like this:</p>
<pre><code>Enter a message:
Hello!
Rotate by:
5
Mjqqt
</code></pre>
<p>This is the code: </p>
<pre><code> def encrypt(text, rot):
alphabet = 'abcdefghijklmnopqrstuvwxyz'
encrypted = ''
for char in text, rot:
if char == ' ':
encrypted = encrypted + ' '
else:
rotated_index = alphabet.index(char) + 5
if rotated_index < 26:
encrypted = encrypted + alphabet[rotated_index]
else:
encrypted = encrypted + alphabet[rotated_index % 26]
return encrypted
print(rot5('abcde'))
</code></pre>
<p>Could you please help with the above code? It should generate the following output: Enter a message:
Hello!
Rotate by:
5
Mjqqt</p>
<hr>
<p>Thank you. I did correct this, but the problem is when I change hello to Hello, World it is giving me an error.</p>
<pre><code>def encrypt(text, rot):
alphabet = 'abcdefghijklmnopqrstuvwxyz'
encrypted = ''
for char in text:
if char == ' ':
encrypted = encrypted + ' '
else:
rotated_index = alphabet.index(char) + rot
if rotated_index < 26:
encrypted = encrypted + alphabet[rotated_index]
else:
encrypted = encrypted + alphabet[rotated_index % 26]
return encrypted
print(encrypt('Hello, World', 5))
</code></pre>
| -1 | 2016-07-29T16:22:48Z | 38,663,862 | <p>You don't have to check whether the index is less than the alphabet length, because the <code>%</code> operator returns the dividend unchanged when the dividend is less than the divisor, and the remainder otherwise.</p>
<pre><code>def rotate(s, n):
from string import ascii_letters as letters
length = len(letters)
for char in s:
index = (letters.find(char) + n) % length
yield letters[index]
''.join(rotate('Hello', 5))
'Mjqqt'
</code></pre>
| 0 | 2016-07-29T16:52:26Z | [
"python",
"python-2.7",
"python-3.x"
] |
Repeat numbers according to pattern numpy | 38,663,470 | <p>I have an array, say <code>p = [2,3,2,4]</code>, and a number, say <code>n = 4</code>. I need to generate an array of ones and zeros according to the pattern p, n-p: for each element u in p, there are u ones followed by n-u zeros. It's very easy to do this using the np.insert operation, but theano doesn't have an insert op. Is it possible to achieve this without using a loop? Also, given multiple such ps and corresponding ns, is it possible to generate the ones-and-zeros patterns without using a loop?
Here's an example with one value of p:</p>
<pre><code>p = [2,3,2,4,1], n=4
n-p = [2,1,2,0,3]
result = [1,1,0,0,1,1,1,0,1,1,0,0,1,1,1,1,1,0,0,0]
</code></pre>
<p>Multiple values of p: in this case all p's will have the same dimension (p is a 2D array)</p>
<pre><code>p = [[2,3,2,4,1],[2,2,3,5,4]], n = [4, 5]
n-p = [[2,1,2,0,3],[3,3,2,0,1]]
result = [[1,1,0,0,1,1,1,0,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0],[1,1,0,0,0,1,1,0,0,0,1,1,1,0,0,1,1,1,1,1,1,1,1,1,0]]
</code></pre>
<p>Please note that I've padded result[0] with 0s at the end to match the dimensions of result[0] and result[1]</p>
| 2 | 2016-07-29T16:25:20Z | 38,663,777 | <pre><code>p = numpy.array([2, 3, 2, 4])
n = 4
result = (p[:, None] > numpy.arange(n)).ravel().astype(int)
</code></pre>
<p>We compare</p>
<pre><code>[[2]
[3]
[2]
[4]]
</code></pre>
<p>to <code>[0 1 2 3]</code> to get an array of booleans, then flatten it and convert it to integers to get the output you want.</p>
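Run end-to-end on the five-element example from the question, this reproduces the expected pattern:

```python
import numpy as np

p = np.array([2, 3, 2, 4, 1])
n = 4
# each row of the comparison is u ones followed by n-u zeros; ravel concatenates them
result = (p[:, None] > np.arange(n)).ravel().astype(int)
print(result.tolist())
# [1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
```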
| 3 | 2016-07-29T16:46:15Z | [
"python",
"numpy"
] |
Python: subprocess.Popen().communicate() prints output of an SSH command to stdout instead of returning the output | 38,663,473 | <p>Here's a copy of my python terminal:</p>
<pre><code>>> import subprocess
>> import shlex
>> host = 'myhost'
>> subprocess.Popen(shlex.split('ssh -o LogLevel=error -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes %s "hostname --short"' % (host))).communicate()
myhost
(None, None)
</code></pre>
<p>I would expect the output to be <code>('myhost', None)</code>. Why isn't the output being stored in the return tuple for communicate()?</p>
| 0 | 2016-07-29T16:25:32Z | 38,663,586 | <p>You need to give the <code>subprocess.Popen</code> call a <code>stdout</code> argument. For example:</p>
<pre><code>>>> subprocess.Popen("ls", stdout=subprocess.PIPE).communicate()
(b'Vagrantfile\n', None)
>>>
</code></pre>
<p>The output of the command then appears where you expect it.</p>
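A stand-in for the ssh call (a local Python subprocess, so it runs anywhere) showing the output landing in the tuple once <code>stdout</code> is piped:

```python
import subprocess
import sys

# launch a trivial child process in place of the ssh command from the question
p = subprocess.Popen([sys.executable, '-c', "print('myhost')"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print(out.strip())   # b'myhost' -- captured, not printed straight to the terminal
```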
| 2 | 2016-07-29T16:33:06Z | [
"python",
"python-2.7",
"subprocess",
"shlex"
] |
Setting values for a numpy ndarray using mask | 38,663,530 | <p>I want to calculate business days between two times, both of which contain null values, following <a href="http://stackoverflow.com/questions/37576552/dealing-with-none-values-when-using-pandas-groupby-and-apply-with-a-function">this question</a> related to calculating business days. I've identified that the way I'm setting values using a mask does not behave as expected. </p>
<p>I'm using python 2.7.11, pandas 0.18.1 and numpy 1.11.0. My slightly modified code:</p>
<pre><code>import datetime
import numpy as np
import pandas as pd
def business_date_diff(start, end):
mask = pd.notnull(start) & pd.notnull(end)
start = start[mask]
end = end[mask]
start = start.values.astype('datetime64[D]')
end = end.values.astype('datetime64[D]')
result = np.empty(len(mask), dtype=float)
result[mask] = np.busday_count(start, end)
result[~mask] = np.nan
return result
</code></pre>
<p>Unfortunately, this doesn't return the expected business day differences (instead I get a number of very near 0 floats). When I check <code>np.busday_count(start, end)</code> the results look correct. </p>
<pre><code>print start[0:5]
print end[0:5]
print np.busday_count(start, end)[0:5]
# ['2016-07-04' '2016-07-04' '2016-07-04' '2016-07-04' '2016-07-04']
# ['2016-07-05' '2016-07-05' '2016-07-05' '2016-07-06' '2016-07-06']
# [1 1 1 2 2]
</code></pre>
<p>But when I check the values for <code>results</code> the results do not make sense:</p>
<pre><code>...
result = np.empty(len(mask), dtype=float)
result[mask] = np.busday_count(start, end)
result[~mask] = np.nan
print result
# [ nan nan 1.43700866e-210 1.45159738e-210
# 1.45159738e-210 1.45159738e-210 1.45159738e-210 1.46618609e-210
# 1.45159738e-210 1.64491834e-210 1.45159738e-210 1.43700866e-210
# 1.43700866e-210 1.43700866e-210 1.43700866e-210 1.45159738e-210
# 1.43700866e-210 1.43700866e-210 1.43700866e-210 1.43700866e-210
</code></pre>
<p>What am I doing wrong?</p>
| 3 | 2016-07-29T16:29:37Z | 38,669,888 | <p>Your problem is that with your version of numpy, you can't use a boolean array as an index to an array. Just use <code>np.where(mask==True)</code> instead of mask and <code>np.where(mask==False)</code> instead of ~mask, and it will work as desired.</p>
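A self-contained sketch of that fix on toy dates (note: <code>np.isnat</code> needs a newer numpy than the asker's 1.11; with pandas the mask would come from <code>pd.notnull</code> as in the question):

```python
import numpy as np

start = np.array(['2016-07-04', 'NaT', '2016-07-04'], dtype='datetime64[D]')
end   = np.array(['2016-07-05', 'NaT', '2016-07-06'], dtype='datetime64[D]')
mask  = ~(np.isnat(start) | np.isnat(end))

result = np.empty(len(mask), dtype=float)
result[np.where(mask)]  = np.busday_count(start[mask], end[mask])
result[np.where(~mask)] = np.nan
print(result.tolist())   # [1.0, nan, 2.0]
```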
| 1 | 2016-07-30T02:44:03Z | [
"python",
"numpy",
"pandas"
] |
reshape dataframe in pandas to layout data horizontally | 38,663,573 | <p>I have the following DataFrame:</p>
<pre><code>data = [['label1', 1234], ['label1', 12345], ['label2', 2345], ['label2', 4567], ['label3', 123], ['label2', 4589]]
pd.DataFrame(data, columns=['label', 'id'])
</code></pre>
<p>outputs:</p>
<pre><code> label id
0 label1 1234
1 label1 12345
2 label2 2345
3 label2 4567
4 label3 123
5 label2 4589
</code></pre>
<p>I would like to reshape the data to the following:</p>
<pre><code> label id1 id2 id3
0 label1 1234 12345 None
1 label2 2345 4567 4589
2 label3 123 None None
</code></pre>
<p>Basically lay out the ids horizontally, and add unique labels to the ids, with each row keyed on label.</p>
<p>I was looking at pivoting operations in pandas, and I can't seem to figure out the exact incantation I need to get the data in the format I need for output.</p>
<p>Any help would be greatly appreciated!</p>
| 0 | 2016-07-29T16:32:07Z | 38,663,683 | <p>Assign a new column to enumerate the ids, then use pivot:</p>
<pre><code>(df.assign(ids='id' + (df.groupby('label').cumcount()+1).astype(str))
.pivot(index='label', columns='ids', values='id'))
Out:
ids id1 id2 id3
label
label1 1234.0 12345.0 NaN
label2 2345.0 4567.0 4589.0
label3 123.0 NaN NaN
</code></pre>
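The same steps as a runnable script with the question's sample data; <code>ids</code> is just the temporary enumeration column:

```python
import pandas as pd

data = [['label1', 1234], ['label1', 12345], ['label2', 2345],
        ['label2', 4567], ['label3', 123], ['label2', 4589]]
df = pd.DataFrame(data, columns=['label', 'id'])

# number the ids within each label, then pivot those numbers into columns
ids = 'id' + (df.groupby('label').cumcount() + 1).astype(str)
wide = df.assign(ids=ids).pivot(index='label', columns='ids', values='id')
print(wide)
```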
| 2 | 2016-07-29T16:40:15Z | [
"python",
"pandas",
"reshape"
] |
How can I serve an HTTP page and a websocket on the same url in tornado | 38,663,666 | <p>I have an API and, depending on the request protocol, I want to either serve the client an HTTP response, or I want to connect a websocket and send the response followed by incremental updates. However, in Tornado I can only specify a single handler for any URL.</p>
| 0 | 2016-07-29T16:38:51Z | 38,663,667 | <p>The difference between a request for an HTTP page and a websocket is the presence of a header. Unfortunately afaik there is no way to tell the tornado router to choose one or the other handler based on a header (other than host).</p>
<p>I can however make a handler that inherits from both my already quite elaborate <code>MyBaseRequestHandler</code> and <code>WebSocketHandler</code>, with some magic. The following code works in Python 3.5 and Tornado 4.3; your mileage may vary on other versions:</p>
<pre><code>class WebSocketHandlerMixin(tornado.websocket.WebSocketHandler):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# since my parent doesn't keep calling the super() constructor,
# I need to do it myself
bases = type(self).__bases__
assert WebSocketHandlerMixin in bases
meindex = bases.index(WebSocketHandlerMixin)
try:
nextparent = bases[meindex + 1]
except IndexError:
raise Exception("WebSocketHandlerMixin should be followed "
"by another parent to make sense")
# undisallow methods --- t.ws.WebSocketHandler disallows methods,
# we need to re-enable these methods
def wrapper(method):
def undisallow(*args2, **kwargs2):
getattr(nextparent, method)(self, *args2, **kwargs2)
return undisallow
for method in ["write", "redirect", "set_header", "set_cookie",
"set_status", "flush", "finish"]:
setattr(self, method, wrapper(method))
nextparent.__init__(self, *args, **kwargs)
async def get(self, *args, **kwargs):
if self.request.headers.get("Upgrade", "").lower() != 'websocket':
return await self.http_get(*args, **kwargs)
# super get is not async
super().get(*args, **kwargs)
class MyDualUseHandler(WebSocketHandlerMixin, MyBaseHandler):
    async def http_get(self):
self.write("My HTTP page")
def open(self, *args, **kwargs):
self.write_message("Hello WS buddy")
def on_message(self, message):
self.write_message("I hear you: %s" % message)
</code></pre>
| 0 | 2016-07-29T16:38:51Z | [
"python",
"websocket",
"tornado"
] |
Python Pandas - Combining Multiple Columns into one Staggered Column | 38,663,711 | <p>How do you combine multiple columns into one staggered column? For example, if I have data:</p>
<pre><code> Column 1 Column 2
0 A E
1 B F
2 C G
3 D H
</code></pre>
<p>And I want it in the form:</p>
<pre><code> Column 1
0 A
1 E
2 B
3 F
4 C
5 G
6 D
7 H
</code></pre>
<p>What is a good, vectorized pythonic way to go about doing this? I could probably do some sort of df.apply() hack but I'm betting there is a better way. The application is putting multiple dimensions of time series data into a single stream for ML applications. </p>
| 2 | 2016-07-29T16:41:45Z | 38,663,768 | <p>First stack the columns and then drop the multiindex:</p>
<pre><code>df.stack().reset_index(drop=True)
Out:
0 A
1 E
2 B
3 F
4 C
5 G
6 D
7 H
dtype: object
</code></pre>
| 5 | 2016-07-29T16:45:14Z | [
"python",
"pandas"
] |
Python Pandas - Combining Multiple Columns into one Staggered Column | 38,663,711 | <p>How do you combine multiple columns into one staggered column? For example, if I have data:</p>
<pre><code> Column 1 Column 2
0 A E
1 B F
2 C G
3 D H
</code></pre>
<p>And I want it in the form:</p>
<pre><code> Column 1
0 A
1 E
2 B
3 F
4 C
5 G
6 D
7 H
</code></pre>
<p>What is a good, vectorized pythonic way to go about doing this? I could probably do some sort of df.apply() hack but I'm betting there is a better way. The application is putting multiple dimensions of time series data into a single stream for ML applications. </p>
| 2 | 2016-07-29T16:41:45Z | 38,663,788 | <p>To get a dataframe:</p>
<pre><code> pd.DataFrame(df.values.reshape(-1, 1), columns=['Column 1'])
</code></pre>
<p><a href="http://i.stack.imgur.com/axhTD.png" rel="nofollow"><img src="http://i.stack.imgur.com/axhTD.png" alt="enter image description here"></a></p>
<p>For a series answering OP question:</p>
<pre><code> pd.Series(df.values.flatten(), name='Column 1')
</code></pre>
<p>For a series timing tests:</p>
<pre><code>pd.Series(get_df(n).values.flatten(), name='Column 1')
</code></pre>
<hr>
<h3>Timing</h3>
<p><strong>code</strong></p>
<pre><code>def get_df(n=1):
df = pd.DataFrame({'Column 2': {0: 'E', 1: 'F', 2: 'G', 3: 'H'},
'Column 1': {0: 'A', 1: 'B', 2: 'C', 3: 'D'}})
return pd.concat([df for _ in range(n)])
</code></pre>
<p><strong>Given Sample</strong></p>
<p><a href="http://i.stack.imgur.com/DzASl.png" rel="nofollow"><img src="http://i.stack.imgur.com/DzASl.png" alt="enter image description here"></a></p>
<p><strong>Given Sample * 10,000</strong></p>
<p><a href="http://i.stack.imgur.com/uBs0J.png" rel="nofollow"><img src="http://i.stack.imgur.com/uBs0J.png" alt="enter image description here"></a></p>
<p><strong>Given Sample * 1,000,000</strong></p>
<p><a href="http://i.stack.imgur.com/Mja0Z.png" rel="nofollow"><img src="http://i.stack.imgur.com/Mja0Z.png" alt="enter image description here"></a></p>
| 4 | 2016-07-29T16:46:52Z | [
"python",
"pandas"
] |
Submitting form using mechanize in python | 38,663,745 | <p>I am trying to submit a form to <a href="http://apps.fas.usda.gov/esrquery/esrq.aspx" rel="nofollow">http://apps.fas.usda.gov/esrquery/esrq.aspx</a> in python, using the following code:</p>
<pre><code>import urllib
from bs4 import BeautifulSoup
import mechanize
import datetime
today = datetime.date.today().strftime("%m/%d/%Y")
url = 'http://apps.fas.usda.gov/esrquery/esrq.aspx'
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
viewstate = soup.find('input', {'id' : '__VIEWSTATE'})['value']
eventval = soup.find('input', {'id' : '__EVENTVALIDATION'})['value']
br = mechanize.Browser(factory=mechanize.RobustFactory())
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.open(url)
# fill form
br.select_form("aspnetForm")
br.form.set_all_readonly(False)
br.form['__EVENTTARGET'] = ''
br.form['__EVENTARGUMENT'] = ''
br.form['__LASTFOCUS'] = ''
br.form['__VIEWSTATE'] = viewstate
br.form['__VIEWSTATEGENERATOR'] = '41AA5B91'
br.form['__EVENTVALIDATION'] = eventval
br.form['ctl00$MainContent$lbCommodity'] = ['401']
br.form['ctl00$MainContent$lbCountry'] = ['0:0']
br.form['ctl00$MainContent$ddlReportFormat'] = ['10']
br.find_control('ctl00$MainContent$cbxSumGrand').items[0].selected = True
br.find_control('ctl00$MainContent$cbxSumKnown').items[0].selected = False
br.form['ctl00$MainContent$rblOutputType'] = ['2']
br.form['ctl00$MainContent$tbStartDate'] = '01/01/1999'
br.form['ctl00$MainContent$ibtnStart'] = ''
br.form['ctl00$MainContent$tbEndDate'] = today
br.form['ctl00$MainContent$ibtnEnd'] = ''
br.form['ctl00$MainContent$rblColumnSelection'] = ['regular']
response = br.submit()
</code></pre>
<p>The response I am getting is essentially just the html code of the site with the form filled out as expected. However, I was expecting an excel file (as I have selected OutputType value of 2)</p>
<p>I think I am missing something on the submission front. Could somebody shed some light on what I am missing?</p>
| 0 | 2016-07-29T16:44:06Z | 38,665,747 | <p>You're close, but there's more you need to do after submitting. In this case, just add:</p>
<pre><code>doc = response.read()
ofile = '<your path>'
with open(ofile, 'w') as f:
f.write(doc)
</code></pre>
<p>I couldn't actually test this on your website at the moment, so I'm just assuming all your settings before this are correct. I only have Python 3 at work, and mechanize only works on 2.x. Regardless, this is generally the way you want to retrieve this sort of output.</p>
| 0 | 2016-07-29T18:53:27Z | [
"python",
"forms",
"submit",
"mechanize",
"submission"
] |
Given multiple two columns sets of a min/max how to return index if a number falls between min/max | 38,663,764 | <p>My input CSV looks like this </p>
<pre><code>Tier | A | | B | | C |
| Min | Max | Min | Max | Min | Max
1 | 0 | .5 | 0 | .25 | 0 | .92
2 |.51 | 1.0 | .26 | .50 | .93 | 1.5
</code></pre>
<p>Given an input dictionary <code>{A: .56, B: .22, C: .99}</code> I want to return <code>{A: 2, B: 1, C: 2}</code>, the tiers corresponding to where the number is within the range.</p>
<p>My problem is that I'm not sure how to read the header into a multi-index, or even if its worth bothering. </p>
<p>Currently what I've tried is zipping the columns every other and then turning those into one column tuples, storing the tuples under every set of min/max per "A B C" set. I'm also thinking about just going down the max column and finding the first tier which the number is under.</p>
<p>But these don't seem like the best way to do this, any tips? </p>
| 1 | 2016-07-29T16:44:58Z | 38,664,879 | <p>Skip the first couple rows. You can use the kwarg <code>header=[0,1]</code> to read the first two rows as a <code>MultiIndex</code> but the missing values in level 0 cause placeholder names to be used (in the columns that don't have an 'A', 'B', or 'C'.</p>
<p>See the <code>read_csv</code> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">docs</a> for more details about the args/kwargs.</p>
<pre><code>df = pd.read_csv('tmp.csv', sep=' *\| *', skiprows=2, index_col=0, header=None)
df.columns = pd.MultiIndex.from_product([('A', 'B', 'C'), ('Min', 'Max')])
def get_indicator(letter, val, df):
m = (df[letter]['Min'] <= val) & (df[letter]['Max'] >= val)
m = m[m]
return None if m.empty else m.index[0]
d = {A: .56, B: .22, C: .99}
res = {k: get_indicator(k, v, df) for (k, v) in d.items()}
</code></pre>
| 2 | 2016-07-29T17:58:50Z | [
"python",
"pandas"
] |
Given multiple two columns sets of a min/max how to return index if a number falls between min/max | 38,663,764 | <p>My input CSV looks like this </p>
<pre><code>Tier | A | | B | | C |
| Min | Max | Min | Max | Min | Max
1 | 0 | .5 | 0 | .25 | 0 | .92
2 |.51 | 1.0 | .26 | .50 | .93 | 1.5
</code></pre>
<p>Given an input dictionary <code>{A: .56, B: .22, C: .99}</code> I want to return <code>{A: 2, B: 1, C: 2}</code>, the tiers corresponding to where the number is within the range.</p>
<p>My problem is that I'm not sure how to read the header into a multi-index, or even if its worth bothering. </p>
<p>Currently what I've tried is zipping the columns every other and then turning those into one column tuples, storing the tuples under every set of min/max per "A B C" set. I'm also thinking about just going down the max column and finding the first tier which the number is under.</p>
<p>But these don't seem like the best way to do this, any tips? </p>
| 1 | 2016-07-29T16:44:58Z | 38,668,405 | <p>With this setup: </p>
<pre><code>arrays = [[0, .5, 0, .25, 0, .92,],[.51,1, .26, .5, .93, 1.5, ]]
col = pd.MultiIndex.from_product([('A', 'B', 'C'), ('Min', 'Max')])
df = pd.DataFrame(arrays, columns=col )
A B C
Min Max Min Max Min Max
0 0.00 0.5 0.00 0.25 0.00 0.92
1 0.51 1.0 0.26 0.50 0.93 1.50
dd = {'A':.56,'B':.22, 'C':.99}
</code></pre>
<p>Try This: </p>
<pre><code>ddOut = {}
for k,v in dd.iteritems():
if v <= df[(k, "Max")][0] : ddOut[k] = 1
elif v >= df[(k, "Max")][0] and v < df[(k, "Max")][1]: ddOut[k] = 2
print ddOut
{'A': 2, 'C': 2, 'B': 1}
</code></pre>
| 0 | 2016-07-29T22:33:25Z | [
"python",
"pandas"
] |
Why is this inter-process communication working? | 38,663,898 | <p>In Python, exchanging objects between processes is well documented: queues, pipes or pools should be used (see <a href="https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes" rel="nofollow">doc</a>). So why is this super simple code working without any of these communication tools?</p>
<pre><code>from multiprocessing import Process
from time import sleep
from random import random
class Child_process(Process):
def __init__(self):
super(Child_process,self).__init__()
self._memory = {'a':1}
def writeInMemory(self,key,value):
self._memory[key]=value
def readFromMemory(self,key):
return self._memory[key]
def run(self):
while True:
sleep(random())
def main():
# start up the child process:
child = Child_process()
child.daemon=True
child.start()
print 'Type Ctrl C to stop'
while True:
print "in sub process a = ", child.readFromMemory('a')
child.writeInMemory('b',random())
print "in sub process b = ", child.readFromMemory('b')
sleep(5*random())
# exiting
child.terminate()
child.join()
if __name__ == '__main__':
main()
</code></pre>
<p>Result is</p>
<pre><code>Type Ctrl C to stop
in sub process a = 1
in sub process b = 0.469400505093
in sub process a = 1
in sub process b = 0.43154478374
in sub process a = 1
in sub process b = 0.519863589476
</code></pre>
| 0 | 2016-07-29T16:54:25Z | 38,664,067 | <p>Because no data is actually being exchanged: both the read and the write happen on the parent process's own copy of the object, so nothing ever reaches the child.
But if you want to send messages between the processes, with good response time and low CPU consumption, you have to use pipes, queues or similar IPC tools.</p>
<p>In your example you use random sleep calls so it is not representative.</p>
| 0 | 2016-07-29T17:05:26Z | [
"python",
"process",
"multiprocessing"
] |
Why is this inter-process communication working? | 38,663,898 | <p>In Python, exchanging objects between processes is well documented: queues, pipes or pools should be used (see <a href="https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes" rel="nofollow">doc</a>). So why is this super simple code working without any of these communication tools?</p>
<pre><code>from multiprocessing import Process
from time import sleep
from random import random
class Child_process(Process):
def __init__(self):
super(Child_process,self).__init__()
self._memory = {'a':1}
def writeInMemory(self,key,value):
self._memory[key]=value
def readFromMemory(self,key):
return self._memory[key]
def run(self):
while True:
sleep(random())
def main():
# start up the child process:
child = Child_process()
child.daemon=True
child.start()
print 'Type Ctrl C to stop'
while True:
print "in sub process a = ", child.readFromMemory('a')
child.writeInMemory('b',random())
print "in sub process b = ", child.readFromMemory('b')
sleep(5*random())
# exiting
child.terminate()
child.join()
if __name__ == '__main__':
main()
</code></pre>
<p>Result is</p>
<pre><code>Type Ctrl C to stop
in sub process a = 1
in sub process b = 0.469400505093
in sub process a = 1
in sub process b = 0.43154478374
in sub process a = 1
in sub process b = 0.519863589476
</code></pre>
| 0 | 2016-07-29T16:54:25Z | 38,664,146 | <p>That's because <strong>no</strong> interprocess communication is happening.</p>
<p>Modify the code to print the value of <code>_memory</code> in the child process:</p>
<pre><code>def run(self):
while True:
print('Memory is', self._memory)
sleep(random())
</code></pre>
<p>And you'll see that the memory inside the subprocess never changes.</p>
<p>A sample output is:</p>
<pre><code>Type Ctrl C to stop
in sub process a = 1
in sub process b = 0.571476878791
('memory', {'a': 1})
('memory', {'a': 1})
('memory', {'a': 1})
('memory', {'a': 1})
in sub process a = 1
in sub process b = 0.0574249557159
('memory', {'a': 1})
('memory', {'a': 1})
</code></pre>
<p>Note how it always prints:</p>
<pre><code>('memory', {'a':1})
</code></pre>
<p>instead of, say:</p>
<pre><code>('memory', {'a': 1, 'b': 0.0574249557159})
</code></pre>
<p>So what is happening is that all your calls are <em>local</em>. You are spawning a subprocess that does nothing, and you aren't performing any communication with it. When the subprocess is spawned it <em>copies</em> the <code>_memory</code> attribute, but that copy is then never modified, and your changes in the main process do not affect the child process.</p>
<hr>
<p>To be dead clear: you are basically wasting a subprocess. You are spawning one without making <em>any</em> use out of it, and after spawning it you are treating your <code>child</code> exactly as any other python object. </p>
| 0 | 2016-07-29T17:11:04Z | [
"python",
"process",
"multiprocessing"
] |
Executing script requiring root access via CGI | 38,663,991 | <p>I have a script <strong>script.py</strong> which I'm able to execute via CGI by navigating to <em>mydomain.com/runscript</em>.</p>
<p>The script however makes a subprocess call to <strong>echo "mysqldump ..." | sudo -i</strong> and <strong>sudo python</strong> using <strong>os.system(COMMAND)</strong>. When I attempt to run the script via the weblink, I get this error in <em>/var/log/apache2/error.log</em>:</p>
<blockquote>
<p>[Fri Jul 29 16:52:42.515223 2016] [cgi:error] [pid 3013] [client ##.###.##.##:#####] AH01215: sudo: no tty present and no askpass program specified</p>
</blockquote>
<p>This is because CGI runs as user <strong>www-data</strong>, which doesn't have sudo permission.</p>
<p>I've tried adding this to my <strong>sudoers</strong> file:</p>
<pre><code>%www-data ALL = (root) NOPASSWD: /var/www/html/scripts/script.py
</code></pre>
<p>however the errors persist. Why are the subprocess calls not receiving root access, and how can I give it to them and ONLY them? Thank you!</p>
| 0 | 2016-07-29T17:00:26Z | 38,981,305 | <p>You can pass the password to the <code>sudo</code> command by piping it to <code>sudo -S</code>, which reads the password from standard input:</p>
<p><code>echo <your password> | sudo -S <your command></code> </p>
| 0 | 2016-08-16T17:45:10Z | [
"python",
"apache",
"cgi",
"root",
"sudoers"
] |
split string list into sublist in python | 38,663,995 | <p>Looking for a way to split the following string list into sublist and print using a for loop.</p>
<pre><code>[[[u'Book1', None, u'Thriller', u'John C', u'07/12/2012'],
[u'Book2', u'1', u'Action', u'Tom B', u'07/12/2012'],
[u'Book3', None, u'Romance', u'Angie P', u'07/12/2012'],
[u'Book4', None, u'Comedy', u'Tracy N', u'07/12/2012'],
[u'Book5', None, u'Drama', u'Kumar P', u'07/12/2012'],
[u'Book6', None, u'Action&Drama', u'Ben J', u'07/12/2012']]]
</code></pre>
<p>Any suggestion please.</p>
| 0 | 2016-07-29T17:00:39Z | 38,664,137 | <p>Your question is a bit vague! If I understood your question correctly, you can do it like this:</p>
<pre><code>a = [[[u'Book1', None, u'Thriller', u'John C', u'07/12/2012'], [u'Book2', u'1', u'Action', u'Tom B', u'07/12/2012'], [u'Book3', None, u'Romance', u'Angie P', u'07/12/2012'], [u'Book4', None, u'Comedy', u'Tracy N', u'07/12/2012'], [u'Book5', None, u'Drama', u'Kumar P', u'07/12/2012'], [u'Book6', None, u'Action&Drama', u'Ben J', u'07/12/2012']]]
for v in a[0]:
print(v)
</code></pre>
| 1 | 2016-07-29T17:10:28Z | [
"python",
"string",
"list",
"loops",
"split"
] |
split string list into sublist in python | 38,663,995 | <p>Looking for a way to split the following string list into sublist and print using a for loop.</p>
<pre><code>[[[u'Book1', None, u'Thriller', u'John C', u'07/12/2012'],
[u'Book2', u'1', u'Action', u'Tom B', u'07/12/2012'],
[u'Book3', None, u'Romance', u'Angie P', u'07/12/2012'],
[u'Book4', None, u'Comedy', u'Tracy N', u'07/12/2012'],
[u'Book5', None, u'Drama', u'Kumar P', u'07/12/2012'],
[u'Book6', None, u'Action&Drama', u'Ben J', u'07/12/2012']]]
</code></pre>
<p>Any suggestion please.</p>
| 0 | 2016-07-29T17:00:39Z | 38,664,142 | <p>Are you looking for this?</p>
<pre><code>def testString():
input = [[
[u'Book1', None, u'Thriller', u'John C', u'07/12/2012'],
[u'Book2', u'1', u'Action', u'Tom B', u'07/12/2012'],
[u'Book3', None, u'Romance', u'Angie P', u'07/12/2012'],
[u'Book4', None, u'Comedy', u'Tracy N', u'07/12/2012'],
[u'Book5', None, u'Drama', u'Kumar P', u'07/12/2012'],
[u'Book6', None, u'Action&Drama', u'Ben J', u'07/12/2012']
]]
for subarray in input[0]:
print (subarray)
</code></pre>
<p>This is the output</p>
<pre><code>['Book1', None, 'Thriller', 'John C', '07/12/2012']
['Book2', '1', 'Action', 'Tom B', '07/12/2012']
['Book3', None, 'Romance', 'Angie P', '07/12/2012']
['Book4', None, 'Comedy', 'Tracy N', '07/12/2012']
['Book5', None, 'Drama', 'Kumar P', '07/12/2012']
['Book6', None, 'Action&Drama', 'Ben J', '07/12/2012']
</code></pre>
| 2 | 2016-07-29T17:10:47Z | [
"python",
"string",
"list",
"loops",
"split"
] |
How do I Put 2 Conditions in an "if" Statement? | 38,664,017 | <p>I am trying to set up a function in a way so that it will only execute if two conditions are met: if the variable is greater than some value and smaller than a another value.</p>
<p>I have two functions over this one that define start_time and end_time, as well as a loop that processes the files. As you can see by my <code>if</code> statement, I am trying to read a file's data within a range of numbers. When I set it as I did, however, I get this error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I don't understand how to solve this, especially because I am using two variables (start_date, end_date: both are given a numeric value on the previous function).</p>
<p><strong>In short, how to I make my desired "if" statement possible?</strong></p>
<p><strong>Edit</strong>: In addition, I want the files that don't meet the criteria to be ignored, and I am not sure if they will be if I don't write an "else" statement.</p>
| -3 | 2016-07-29T17:02:04Z | 38,664,094 | <p>The error is from the fact you are comparing an array (a.k.a. <code>juld</code>) to a number. In short, you need to specify either a specific element for the if statement to check, or use the <code>any()</code> or <code>all()</code> methods as detailed in the error message. You can find the definition of these methods <a href="https://docs.python.org/2/library/functions.html" rel="nofollow">here</a>.</p>
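<p>A small sketch of the difference (the <code>juld</code> values here are made up):</p>

```python
import numpy as np

juld = np.array([5.0, 7.5, 9.0])
start_date, end_date = 4.0, 10.0

# An elementwise comparison produces a boolean array, which is
# ambiguous as an `if` condition and raises the ValueError above:
mask = (start_date < juld) & (juld < end_date)

if mask.all():            # True only if every element is in range
    print("all in range")
if mask.any():            # True if at least one element is in range
    print("some in range")
if juld[0] > start_date:  # or test one specific element
    print("first element above start")
```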
| 1 | 2016-07-29T17:07:22Z | [
"python",
"function",
"loops",
"if-statement",
"python-3.5"
] |
How do I Put 2 Conditions in an "if" Statement? | 38,664,017 | <p>I am trying to set up a function in a way so that it will only execute if two conditions are met: if the variable is greater than some value and smaller than a another value.</p>
<p>I have two functions over this one that define start_time and end_time, as well as a loop that processes the files. As you can see by my <code>if</code> statement, I am trying to read a file's data within a range of numbers. When I set it as I did, however, I get this error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I don't understand how to solve this, especially because I am using two variables (start_date, end_date: both are given a numeric value on the previous function).</p>
<p><strong>In short, how to I make my desired "if" statement possible?</strong></p>
<p><strong>Edit</strong>: In addition, I want the files that don't meet the criteria to be ignored, and I am not sure if they will be if I don't write an "else" statement.</p>
| -3 | 2016-07-29T17:02:04Z | 38,664,167 | <p>I don't really understand what you want to do here, but there are three options. If you only want to enter the block when all values in the array satisfy the condition, use <code>(condition).all()</code>. If you want to enter it when any value satisfies the condition, use <code>(condition).any()</code>. If you want to enter the block once for each value of the array that satisfies the condition, you would do</p>
<pre><code>for x in array:
if(condition):
do stuff
else:
do other stuff
</code></pre>
| 0 | 2016-07-29T17:12:23Z | [
"python",
"function",
"loops",
"if-statement",
"python-3.5"
] |
How do I Put 2 Conditions in an "if" Statement? | 38,664,017 | <p>I am trying to set up a function in a way so that it will only execute if two conditions are met: if the variable is greater than some value and smaller than a another value.</p>
<p>I have two functions over this one that define start_time and end_time, as well as a loop that processes the files. As you can see by my <code>if</code> statement, I am trying to read a file's data within a range of numbers. When I set it as I did, however, I get this error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I don't understand how to solve this, especially because I am using two variables (start_date, end_date: both are given a numeric value on the previous function).</p>
<p><strong>In short, how to I make my desired "if" statement possible?</strong></p>
<p><strong>Edit</strong>: In addition, I want the files that don't meet the criteria to be ignored, and I am not sure if they will be if I don't write an "else" statement.</p>
| -3 | 2016-07-29T17:02:04Z | 38,664,402 | <p>The problem is that you are comparing an array, i.e. <code>juld</code>, to two dates/numbers, <code>start_date</code> and <code>end_date</code>. So you need to reduce the array to a single date/number. I can't see what is in <code>juld</code>, but I suspect that if you just change your code to: </p>
<pre><code>if start_date<juld[0]<end_date:
</code></pre>
<p>then it will work i.e. the following works (returns <code>1</code>):</p>
<pre><code>import datetime
d1 = datetime.date(1996, 4, 1)
d2 = datetime.date(2017, 7, 29)
dt = datetime.date(2016, 7, 29)
x = Read_Data(dt, d1 , d2)
</code></pre>
<p>where: </p>
<pre><code>def Read_Data(date, start_date, end_date):
if start_date<date<end_date:
return 1
</code></pre>
| 0 | 2016-07-29T17:29:18Z | [
"python",
"function",
"loops",
"if-statement",
"python-3.5"
] |
How do I Put 2 Conditions in an "if" Statement? | 38,664,017 | <p>I am trying to set up a function in a way so that it will only execute if two conditions are met: if the variable is greater than some value and smaller than a another value.</p>
<p>I have two functions over this one that define start_time and end_time, as well as a loop that processes the files. As you can see by my <code>if</code> statement, I am trying to read a file's data within a range of numbers. When I set it as I did, however, I get this error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I don't understand how to solve this, especially because I am using two variables (start_date, end_date: both are given a numeric value on the previous function).</p>
<p><strong>In short, how to I make my desired "if" statement possible?</strong></p>
<p><strong>Edit</strong>: In addition, I want the files that don't meet the criteria to be ignored, and I am not sure if they will be if I don't write an "else" statement.</p>
| -3 | 2016-07-29T17:02:04Z | 38,959,290 | <p>First of all, I think you'd wanna look at the min/max value of <code>juld</code>, as it is an array. Try something like:</p>
<pre><code>if max(juld) < start_date or min(juld) > end_date:
</code></pre>
<p>I think that should work!</p>
| 1 | 2016-08-15T16:45:33Z | [
"python",
"function",
"loops",
"if-statement",
"python-3.5"
] |
How do I get PHP to kill www-data processes? | 38,664,033 | <p>I have a Debian server running apache2/PHP. From this website I can execute many python scripts over time. I want to be able to kill these scripts after a certain amount of time so that they don't pile up. The python scripts are owned by www-data.</p>
<p>I have tried killing the python scripts by storing the PID and the time stamp of execution and then looping through all these to find the ones older than a given time.</p>
<pre><code>for($x = 0; $x < $arraylen; $x++) {
if ( round(microtime(true)) - $timearray[$x] > 60){
$command = "kill -9 " . $pidarray[$x];
$killme = exec($command);
}
}
</code></pre>
<p>I grab the PID using this:</p>
<pre><code>$PID = shell_exec('/usr/bin/python2.7 /var/www/worker.py ......');
</code></pre>
<p>I cannot seem to get PHP to kill these processes. However, if I know the PID of the process I can type this into the terminal which works fine</p>
<pre><code>sudo su www-data
kill "PID of the process"
</code></pre>
<p>How do I get PHP to kill one of its own processes?</p>
| 0 | 2016-07-29T17:03:40Z | 38,664,116 | <pre><code>exec("sudo -u <YOURUSER> -S kill $PID");
</code></pre>
| 0 | 2016-07-29T17:08:58Z | [
"php",
"python",
"linux",
"bash"
] |
Scrape heading of a video using selenium python3 | 38,664,035 | <p>I want to scrape video name of this <a href="https://www.youtube.com/watch?v=POk-uOQSJVk" rel="nofollow">link</a> as it is " Insane Woman Goes Crazy On Guy Who Just Wants A Refund". </p>
<p>The code on the web is:</p>
<pre><code><span id="eow-title" class="watch-title" dir="ltr" title="Insane Woman Goes Crazy On Guy Who Just Wants A Refund">
Insane Woman Goes Crazy On Guy Who Just Wants A Refund
</code></pre>
<p>I am doing in this way:</p>
<pre><code>browser = webdriver.Firefox()
browser.get("https://www.youtube.com/watch?v=POk-uOQSJVk")
head = browser.find_elements_by_class_name('watch-title')
print(head.text)
</code></pre>
<p>It is prompting as:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'text'</p>
</blockquote>
<p>Is there anything wrong?</p>
| 0 | 2016-07-29T17:03:42Z | 38,664,087 | <p>First of all, <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.find_elements_by_class_name" rel="nofollow"><code>find_elements_by_class_name()</code> method</a> <em>returns a list of <code>WebElement</code>s, while you need a single one</em>. Also, you need to <a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow">let the page load until the desired element is present</a>:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
browser.get("https://www.youtube.com/watch?v=POk-uOQSJVk")
# wait for the presence of the video title
element = WebDriverWait(browser, 10).until(
EC.presence_of_element_located((By.ID, "eow-title"))
)
print(element.text)
browser.close()
</code></pre>
<p>Prints:</p>
<pre><code>Insane Woman Goes Crazy On Guy Who Just Wants A Refund
</code></pre>
| 0 | 2016-07-29T17:06:49Z | [
"python",
"python-3.x",
"selenium",
"web-crawler"
] |
Difference of type() function in Python 2 and Python 3 | 38,664,068 | <p>When trying the following script in Python 2, </p>
<pre><code>a = 200
print type(a)
</code></pre>
<p>its output is </p>
<pre><code><type 'int'>
</code></pre>
<p>and in Python 3, the script </p>
<pre><code>a = 200
print (type(a))
</code></pre>
<p>its output is, </p>
<pre><code><class 'int'>
</code></pre>
<p>What may be the reason for that?</p>
| 3 | 2016-07-29T17:05:29Z | 38,664,124 | <p><code>type</code> isn't behaving any differently. They just changed things so all classes show up as <code><class ...></code> instead of <code><type ...></code>, regardless of whether the class is a built-in type like <code>int</code> or a class created with the <code>class</code> statement. It's one of the final steps of the elimination of the class/type distinction, a process that began back in 2.2.</p>
| 2 | 2016-07-29T17:09:16Z | [
"python",
"python-2.7",
"python-3.x"
] |
Difference of type() function in Python 2 and Python 3 | 38,664,068 | <p>When trying the following script in Python 2, </p>
<pre><code>a = 200
print type(a)
</code></pre>
<p>its output is </p>
<pre><code><type 'int'>
</code></pre>
<p>and in Python 3, the script </p>
<pre><code>a = 200
print (type(a))
</code></pre>
<p>its output is, </p>
<pre><code><class 'int'>
</code></pre>
<p>What may be the reason for that?</p>
| 3 | 2016-07-29T17:05:29Z | 38,664,249 | <p>In the days of yore, built-in types such as <code>int</code> and <code>dict</code> and <code>list</code> were very different from types built with <code>class</code>. You could not subclass built-in types for example.</p>
<p>Gradually, in successive Python 2.x releases, the differences between <code>class</code> types and builtin types have eroded; the introduction of <em>new-style classes</em> (inheriting from <code>object</code>) in Python 2.2 was one such (major) step. See <a href="https://www.python.org/download/releases/2.2.3/descrintro/"><em>Unifying types and classes in Python 2.2</em></a>.</p>
<p>Removing the use of <code>type</code> in built-in type representations is just the last step in that process. There was no longer any reason to use the name <code>type</code>.</p>
<p>In other words, between Python 2.7 and 3.x this is a <em>cosmetic</em> change, nothing more.</p>
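<p>A quick sketch of both sides of that unification (output shown for Python 3, run as a script):</p>

```python
# Built-in types and user-defined classes now print the same way:
print(type(200))        # <class 'int'>

class MyInt(int):       # subclassing a built-in, impossible before Python 2.2
    pass

print(type(MyInt(3)))   # <class '__main__.MyInt'>
print(isinstance(MyInt(3), int))  # True
```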
| 6 | 2016-07-29T17:18:02Z | [
"python",
"python-2.7",
"python-3.x"
] |
Python jsonrpclib not working after upgrade to Python 3.5.2 | 38,664,102 | <p>I previously had Python 2.7 installed and was making calls like this:</p>
<pre><code>api = jsonrpclib.Server('my host')
api.someFunctionCall()
</code></pre>
<p>I then upgraded to Python 3.5.2 and now when I run the code above, I'm receiving this message:</p>
<pre><code>Traceback (most recent call last):
File "C:\login\login.py", line 1, in <module>
import jsonrpclib
File "C:\Python3.5.2\lib\site-packages\jsonrpclib\__init__.py", line 5, in <module>
from jsonrpclib.jsonrpc import Server, MultiCall, Fault
ImportError: No module named 'xmlrpclib'
</code></pre>
<p>I checked my installation and I do indeed have the xmlrpc lib:</p>
<pre><code>c:\Python3.5.2\Lib\xmlrpc
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-07-29T17:08:02Z | 38,780,308 | <p>Python 3.x has relocated the xmlrpclib module. Per the <a href="https://docs.python.org/2.7/library/xmlrpclib.html" rel="nofollow">Python 2.7 xmlrpclib documentation</a>:</p>
<p>"The xmlrpclib module has been renamed to xmlrpc.client in Python 3. The 2to3 tool will automatically adapt imports when converting your sources to Python 3."</p>
<p>It looks like the author of jsonrpclib has an open issue for Python 3 support, but hasn't responded or taken pull requests in a year. You may want to give the <a href="https://github.com/tcalmant/jsonrpclib" rel="nofollow">jsonrpclib-pelix</a> fork a look for Python 3 support.</p>
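<p>Until the library catches up, a common workaround in your own code is a compatibility import (a sketch; jsonrpclib itself would need the same fix internally to work on Python 3):</p>

```python
try:
    import xmlrpclib                    # Python 2 name
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3: same module, new location

# The familiar names are now usable under either version:
proxy_cls = xmlrpclib.ServerProxy
print(proxy_cls.__name__)  # ServerProxy
```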
| 0 | 2016-08-05T02:45:13Z | [
"python",
"python-3.x",
"xml-rpc",
"json-rpc"
] |
return inverse string selection in python | 38,664,190 | <p>I have a python snippet that returns the contents within two strings using regex.</p>
<pre><code>res = re.search(r'Presets = {(.*)Version = 1,', data, re.DOTALL)
</code></pre>
<p>What I now want to do is return the two strings surrounding this inner part. Keep in mind this is a multiline string. How can I get the bordering strings, the beginning and end part in a two part list would be ideal.</p>
<pre><code>data = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
stuff = {
this = "great",
},
}"""
import re
res = re.search(r'presets = {(.*)version = 1,', data, re.DOTALL)
print res.groups(1)
</code></pre>
<p>In this case I would want to return the beginning string:</p>
<pre><code>data = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
</code></pre>
<p>And the end string:</p>
<pre><code> },
},
stuff = {
this = "great",
},
}"""
</code></pre>
| 0 | 2016-07-29T17:13:42Z | 38,664,291 | <p>Regex is really not a good tool for parsing these strings, but you can use <code>re.split</code> to achieve what you wanted. It can even combine the 2 tasks into one:</p>
<pre><code>begin, middle, end = re.split(r'presets = \{(.*)version = 1,', data,
flags=re.DOTALL)
</code></pre>
<p><code>re.split</code> splits the string at matching positions; ordinarily the separator is not in the resulting list. However, if the regular expression contains capturing groups, then the matching contents of the first group is returned in the place of the delimiter.</p>
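<p>A tiny demonstration of that capturing-group behaviour, on made-up data:</p>

```python
import re

# Without a capturing group the delimiter is dropped:
print(re.split(r'-\d+-', 'a-1-b-2-c'))    # ['a', 'b', 'c']

# With a capturing group its contents are kept between the pieces:
print(re.split(r'-(\d+)-', 'a-1-b-2-c'))  # ['a', '1', 'b', '2', 'c']
```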
| 1 | 2016-07-29T17:21:27Z | [
"python"
] |
UnicodeDecodeError while loading file in python | 38,664,228 | <p>I'm running this:</p>
<pre><code>news_train = load_mlcomp('20news-18828', 'train')
vectorizer = TfidfVectorizer(encoding='latin1')
X_train = vectorizer.fit_transform((open(f, errors='ignore').read()
for f in news_train.filenames))
</code></pre>
<p>but it got UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 39: invalid continuation byte. at open() function.</p>
<p>I checked the news_train.filenames. It is:</p>
<pre><code>array(['/Users/juby/Downloads/mlcomp/379/train/sci.med/12836-58920',
..., '/Users/juby/Downloads/mlcomp/379/train/sci.space/14129-61228'],
dtype='<U74')
</code></pre>
<p>Paths look correct. It may be about dtype or my environment (I'm Mac OSX 10.11), but I can't fix it after I tried many times. Thank you!!! </p>
<p>p.s it's a ML tutorial from <a href="http://scikit-learn.org/stable/auto_examples/text/mlcomp_sparse_document_classification.html#example-text-mlcomp-sparse-document-classification-py" rel="nofollow">http://scikit-learn.org/stable/auto_examples/text/mlcomp_sparse_document_classification.html#example-text-mlcomp-sparse-document-classification-py</a></p>
 | 0 | 2016-07-29T17:16:24Z | 38,666,703 | <p>Well, I found the solution: use</p>
<pre><code>open(f, encoding = "latin1")
</code></pre>
<p>I'm not sure why it only happens on my Mac, though. I'd like to know why.</p>
| 0 | 2016-07-29T20:07:23Z | [
"python",
"character-encoding",
"environment"
] |
UnicodeDecodeError while loading file in python | 38,664,228 | <p>I'm running this:</p>
<pre><code>news_train = load_mlcomp('20news-18828', 'train')
vectorizer = TfidfVectorizer(encoding='latin1')
X_train = vectorizer.fit_transform((open(f, errors='ignore').read()
for f in news_train.filenames))
</code></pre>
<p>but I got <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 39: invalid continuation byte</code> at the <code>open()</code> call.</p>
<p>I checked the news_train.filenames. It is:</p>
<pre><code>array(['/Users/juby/Downloads/mlcomp/379/train/sci.med/12836-58920',
..., '/Users/juby/Downloads/mlcomp/379/train/sci.space/14129-61228'],
dtype='<U74')
</code></pre>
<p>The paths look correct. It may be about the dtype or my environment (I'm on Mac OS X 10.11), but I haven't been able to fix it despite many tries. Thank you!!! </p>
<p>p.s it's a ML tutorial from <a href="http://scikit-learn.org/stable/auto_examples/text/mlcomp_sparse_document_classification.html#example-text-mlcomp-sparse-document-classification-py" rel="nofollow">http://scikit-learn.org/stable/auto_examples/text/mlcomp_sparse_document_classification.html#example-text-mlcomp-sparse-document-classification-py</a></p>
 | 0 | 2016-07-29T17:16:24Z | 38,668,240 | <p>Actually, in Python 3+ the <code>open</code> function opens and reads files in the default mode <code>'r'</code>, which decodes the file content (on most platforms, as UTF-8). Since your files are encoded in latin1, decoding them as UTF-8 can raise a <code>UnicodeDecodeError</code>. The solution is either to open the files in binary mode (<code>'rb'</code>) or to specify the correct encoding (<code>encoding="latin1"</code>).</p>
<pre><code>open(f, 'rb').read() # returns `byte` rather than `str`
# or,
open(f, encoding='latin1').read() # returns latin1 decoded `str`
</code></pre>
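<p>A small self-contained demonstration of the difference, using a throwaway file instead of the 20news data (byte <code>0xE4</code> is <code>ä</code> in Latin-1 but an invalid continuation byte in UTF-8, which is exactly the error in the question):</p>

```python
import os
import tempfile

# Write one Latin-1 encoded byte string to a temporary file.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"M\xe4rchen")          # 0xE4 = 'ä' in Latin-1

# Decoding it as UTF-8 fails, reproducing the question's error.
try:
    with open(path, encoding="utf-8") as f:
        f.read()
    utf8_failed = False
except UnicodeDecodeError:
    utf8_failed = True

# Reading with the correct encoding, or in binary mode, works.
with open(path, encoding="latin1") as f:
    text = f.read()
with open(path, "rb") as f:
    raw = f.read()
os.remove(path)

print(utf8_failed)  # True
print(text)         # Märchen
```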
| 0 | 2016-07-29T22:16:24Z | [
"python",
"character-encoding",
"environment"
] |
Nosetests and Coverage not excluding lines | 38,664,407 | <p>Running into a small problem with some code coverage using nosetests and coverage with a Django web application. I have created a .coveragerc file to exclude a huge amount of code (things like class declarations) but I'm still getting some weird results.</p>
<p>Here is my .coveragerc file:</p>
<pre><code>[run]
omit = ../*migrations*, ../*admin.py
[report]
show_missing = True
exclude_lines =
pragma: no cover
from
= models\.
</code></pre>
<p>This is an example of one of the models.py files:</p>
<pre><code>from django.db import models
class Query(models.Model):
variable1 = models.CharField(max_length=100)
variable2 = models.CharField(max_length=100)
variable3 = models.CharField(max_length=100)
variable4 = models.CharField(max_length=100)
variable5 = models.CharField(max_length=100)
id = models.AutoField(primary_key=True)
def some_function(self):
        self.variable1 = self.variable2 + self.variable3 + self.variable4 + self.variable5
return self.variable1
</code></pre>
<p>So when I run code coverage, the issue I run into is that despite me telling coverage to explicitly exclude anything with the string "= models.", it still says the lines are missing in the report given through the command line. This is making it very hard to determine which lines I'm actually failing to cover in my test cases. Can anyone offer some insight to this?</p>
| 1 | 2016-07-29T17:29:43Z | 38,666,754 | <p>Your <code>.coveragerc</code> file should list things to exclude starting from the root of your directory.</p>
<p>For example:</p>
<pre><code>proj
|-- app1
|   |-- models.py
|   |-- migrations.py
|-- app2
</code></pre>
<p>Then your <code>.coveragerc</code> file should look like:</p>
<pre><code>[run]
omit = app1/migrations.py, app1/admin.py
</code></pre>
<p>or</p>
<pre><code>[run]
omit = proj/*/migrations.py, proj/*/admin.py
</code></pre>
| 0 | 2016-07-29T20:10:37Z | [
"python",
"django",
"coverage.py"
] |
Nosetests and Coverage not excluding lines | 38,664,407 | <p>Running into a small problem with some code coverage using nosetests and coverage with a Django web application. I have created a .coveragerc file to exclude a huge amount of code (things like class declarations) but I'm still getting some weird results.</p>
<p>Here is my .coveragerc file:</p>
<pre><code>[run]
omit = ../*migrations*, ../*admin.py
[report]
show_missing = True
exclude_lines =
pragma: no cover
from
= models\.
</code></pre>
<p>This is an example of one of the models.py files:</p>
<pre><code>from django.db import models
class Query(models.Model):
variable1 = models.CharField(max_length=100)
variable2 = models.CharField(max_length=100)
variable3 = models.CharField(max_length=100)
variable4 = models.CharField(max_length=100)
variable5 = models.CharField(max_length=100)
id = models.AutoField(primary_key=True)
def some_function(self):
        self.variable1 = self.variable2 + self.variable3 + self.variable4 + self.variable5
return self.variable1
</code></pre>
<p>So when I run code coverage, the issue I run into is that despite me telling coverage to explicitly exclude anything with the string "= models.", it still says the lines are missing in the report given through the command line. This is making it very hard to determine which lines I'm actually failing to cover in my test cases. Can anyone offer some insight to this?</p>
 | 1 | 2016-07-29T17:29:43Z | 39,024,710 | <p>Found the solution to my problem. It turns out I don't need to use nosetests at all. I can simply run coverage.py via <code>manage.py test</code> and pass in the test modules. The code coverage worked great and I'm up to 96% coverage :-)</p>
| 0 | 2016-08-18T17:59:32Z | [
"python",
"django",
"coverage.py"
] |
Setting Django WSGI workers with long external API response | 38,664,410 | <p>I'm writing an e-commerce plug-in app in Python/Django that integrates with Shopify stores. Whenever a customer for a store reaches checkout, Shopify sends a request to my app with shopping cart and destination address data, and my app is required to respond with shipping price information. The problem is that I need to make an external API call between them sending me the request and sending them the response, and under moderate load, my WSGI workers get filled very easily.</p>
<p>I'm trying to avoid scaling out unnecessarily. Should I simply increase my number of workers past the recommended <code>cores * 2 + 1</code>? Do I simply monitor CPU load in order to adjust this number? What's the ideal CPU load % I should be looking for? Since I'm also handing short non-blocked requests from the same app, will this cause any problems?</p>
<p>Is Django simply not a good match for this kind of use-case? If so, what is a good match, and what would be the best way to apply it without rewriting my whole app?</p>
<p>EDIT: My WSGI server is Gunicorn</p>
| 1 | 2016-07-29T17:29:57Z | 38,667,337 | <p>There are a couple of things you can do to improve the performance of gunicorn here. Given your design, it's almost certain that your workers are IO-bound. So for a start you could configure them to use multiple threads per worker; the docs suggest 2-4.</p>
<p>However, again because of the IO-bound nature of your site, it seems likely that you'll get even better improvements by using one of the asynchronous worker types. See <a href="http://docs.gunicorn.org/en/stable/design.html#async-workers" rel="nofollow">the design docs</a> for details: I don't think there is much to choose between gevent and eventlet, personally I've had good results from the former.</p>
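<p>As a concrete sketch, both options can be written down in a <code>gunicorn.conf.py</code> loaded with <code>gunicorn -c gunicorn.conf.py myapp:app</code>; the numbers below are illustrative starting points, not tuned recommendations:</p>

```python
# gunicorn.conf.py -- illustrative values only.

workers = 5               # the usual cores * 2 + 1 rule for a 2-core box
threads = 4               # 2-4 threads per worker helps IO-bound views

# Alternatively, switch to an async worker type instead of threads
# (requires `pip install gevent`); `threads` is then ignored:
worker_class = "gevent"
worker_connections = 100  # simultaneous clients handled per worker
```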
| 0 | 2016-07-29T20:55:23Z | [
"python",
"django",
"django-views",
"wsgi",
"web-worker"
] |
Creating a tree/deeply nested dict with lists from an indented text file | 38,664,465 | <p>I want to iterate through a file and put the contents of each line into a deeply nested dict, the structure of which is defined by leading whitespace. This desire is very much like that documented <a href="http://stackoverflow.com/questions/17858404/creating-a-tree-deeply-nested-dict-from-an-indented-text-file-in-python">here</a>. I've solved that but now have the problem of handling the case where repeating keys are overwritten instead of being cast into a list.</p>
<p>Essentially:</p>
<pre><code>a:
    b: c
    d: e
a:
    b: c2
    d: e2
    d: wrench
</code></pre>
<p>is cast into <code>{"a":{"b":"c2","d":"wrench"}}</code> when it should be cast into</p>
<pre><code>{"a":[{"b":"c","d":"e"},{"b":"c2","d":["e2","wrench"]}]}
</code></pre>
<p>A self-contained example:</p>
<pre><code>import json
def jsonify_indented_tree(tree):
    #convert indented text into json
parsedJson= {}
parentStack = [parsedJson]
for i, line in enumerate(tree):
data = get_key_value(line)
if data['key'] in parsedJson.keys(): #if parent key is repeated, then cast value as list entry
# stuff that doesn't work
# if isinstance(parsedJson[data['key']],list):
# parsedJson[data['key']].append(parsedJson[data['key']])
# else:
# parsedJson[data['key']]=[parsedJson[data['key']]]
print('Hey - Make a list now!')
if data['value']: #process child by adding it to its current parent
currentParent = parentStack[-1] #.getLastElement()
currentParent[data['key']] = data['value']
        if i != len(tree)-1:
#determine when to switch to next branch
level_dif = data['level']-get_key_value(tree[i+1])['level'] #peek next line level
if (level_dif > 0):
del parentStack[-level_dif:] #reached leaf, process next branch
else:
#group node, push it as the new parent and keep on processing.
currentParent = parentStack[-1] #.getLastElement()
currentParent[data['key']] = {}
newParent = currentParent[data['key']]
parentStack.append(newParent)
return parsedJson
def get_key_value(line):
key = line.split(":")[0].strip()
value = line.split(":")[1].strip()
level = len(line) - len(line.lstrip())
return {'key':key,'value':value,'level':level}
def pp_json(json_thing, sort=True, indents=4):
if type(json_thing) is str:
print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
else:
print(json.dumps(json_thing, sort_keys=sort, indent=indents))
return None
#nested_string=['a:', '\tb:\t\tc', '\td:\t\te', 'a:', '\tb:\t\tc2', '\td:\t\te2']
#nested_string=['w:','\tgeneral:\t\tcase','a:','\tb:\t\tc','\td:\t\te','a:','\tb:\t\tc2','\td:\t\te2']
nested_string=['a:',
'\tb:\t\tc',
'\td:\t\te',
'a:',
'\tb:\t\tc2',
'\td:\t\te2',
'\td:\t\twrench']
pp_json(jsonify_indented_tree(nested_string))
</code></pre>
| 1 | 2016-07-29T17:32:56Z | 38,778,867 | <p>This approach is (logically) a lot more straightforward (though longer):</p>
<ol>
<li>Track the <code>level</code> and <code>key</code>-<code>value</code> pair of each line in your multi-line string</li>
<li>Store this data in a <code>level</code> keyed dict of lists:
{<code>level1</code>:[<code>dict1</code>,<code>dict2</code>]}</li>
<li>Append only a string representing the key in a <em>key-only</em> line: {<code>level1</code>:[<code>dict1</code>,<code>dict2</code>,<code>"nestKeyA"</code>]}</li>
<li>Since a <em>key-only</em> line means the next line is one level deeper, process that on the next level: {<code>level1</code>:[<code>dict1</code>,<code>dict2</code>,<code>"nestKeyA"</code>],<code>level2</code>:[...]}. The contents of some deeper level <code>level2</code> may itself be just another <em>key-only</em> line (and the next loop will add a new level <code>level3</code> such that it will become {<code>level1</code>:[<code>dict1</code>,<code>dict2</code>,<code>"nestKeyA"</code>],<code>level2</code>:[<code>"nestKeyB"</code>],<code>level3</code>:[...]}) or a new dict <code>dict3</code> such that {<code>level1</code>:[<code>dict1</code>,<code>dict2</code>,<code>"nestKeyA"</code>],<code>level2</code>:[<code>dict3</code>]</li>
<li><p>Steps 1-4 continue until the current line is indented less than the previous one (signifying a return to some prior scope). This is what the data structure looks like on my example per line iteration.</p>
<pre><code>0, {0: []}
1, {0: [{'k': 'sds'}]}
2, {0: [{'k': 'sds'}, 'a']}
3, {0: [{'k': 'sds'}, 'a'], 1: [{'b': 'c'}]}
4, {0: [{'k': 'sds'}, 'a'], 1: [{'b': 'c'}, {'d': 'e'}]}
5, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: []}
6, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}]}
7, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}, {'d': 'e2'}]}
</code></pre>
<p>Then two things need to happen. <strong>1</strong>: the current deepest list of dicts needs to be inspected for duplicate keys, and the values of any duplicated dicts merged into a list - this will be demonstrated in a moment. <strong>2</strong>: as can be seen between iterations 4 and 5, the list of dicts from the deepest level (here <code>1</code>) is combined into one dict... Finally, to demonstrate duplicate handling, observe:</p>
<pre><code>[7b, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, 'a'], 1: [{'b': 'c2'}, {'d': 'e2'}, {'d': 'wrench'}]}]
[7c, {0: [{'k': 'sds'}, {'a': {'d': 'e', 'b': 'c'}}, {'a': {'d': ['wrench', 'e2'], 'b': 'c2'}}], 1: []}]
</code></pre>
<p>where <code>wrench</code> and <code>e2</code> are placed in a list that itself goes into a dict keyed by their original key.</p></li>
<li><p>Repeat Steps 1-5, hoisting deeper scoped dicts up and onto their parent keys until the current line's scope (level) is reached.</p></li>
<li>Handle the termination condition to combine the list of dicts on the zeroth level into a dict.</li>
</ol>
<p>Here's the code:</p>
<pre><code>import json
def get_kvl(line):
key = line.split(":")[0].strip()
value = line.split(":")[1].strip()
level = len(line) - len(line.lstrip())
return {'key':key,'value':value,'level':level}
def pp_json(json_thing, sort=True, indents=4):
if type(json_thing) is str:
print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
else:
print(json.dumps(json_thing, sort_keys=sort, indent=indents))
return None
def jsonify_indented_tree(tree): #convert shitty sgml header into json
level_map= {0:[]}
tree_length=len(tree)-1
for i, line in enumerate(tree):
data = get_kvl(line)
if data['level'] not in level_map.keys():
level_map[data['level']]=[] # initialize
prior_level=get_kvl(tree[i-1])['level']
level_dif = data['level']-prior_level # +: line is deeper, -: shallower, 0:same
if data['value']:
level_map[data['level']].append({data['key']:data['value']})
if not data['value'] or i==tree_length:
if i==tree_length: #end condition
level_dif = -len(list(level_map.keys()))
if level_dif < 0:
for level in reversed(range(prior_level+level_dif+1,prior_level+1)): # (end, start)
#check for duplicate keys in current deepest (child) sibling group,
# merge them into a list, put that list in a dict
key_freq={} #track repeated keys
for n, dictionary in enumerate(level_map[level]):
current_key=list(dictionary.keys())[0]
if current_key in list(key_freq.keys()):
key_freq[current_key][0]+=1
key_freq[current_key][1].append(n)
else:
key_freq[current_key]=[1,[n]]
for k,v in key_freq.items():
if v[0]>1: #key is repeated
duplicates_list=[]
for index in reversed(v[1]): #merge value of key-repeated dicts into list
duplicates_list.append(list(level_map[level].pop(index).values())[0])
level_map[level].append({k:duplicates_list}) #push that list into a dict on the same stack it came from
if i==tree_length and level==0: #end condition
#convert list-of-dict into dict
parsed_nest={k:v for d in level_map[level] for k,v in d.items()}
else:
#push current deepest (child) sibling group onto parent key
key=level_map[level-1].pop() #string
#convert child list-of-dict into dict
level_map[level-1].append({key:{k:v for d in level_map[level] for k,v in d.items()}})
level_map[level]=[] #reset deeper level
level_map[data['level']].append(data['key'])
return parsed_nest
nested_string=['k:\t\tsds', #need a starter key,value pair otherwise this won't work... fortunately I always have one
'a:',
'\tb:\t\tc',
'\td:\t\te',
'a:',
'\tb:\t\tc2',
'\td:\t\te2',
'\td:\t\twrench']
pp_json(jsonify_indented_tree(nested_string))
</code></pre>
| 0 | 2016-08-04T23:23:02Z | [
"python",
"json",
"parsing",
"data-structures",
"nested"
] |
heroku hosting a python flask application error | 38,664,520 | <p>I have recently deployed my Python Flask app to Heroku, but the following error occurs:</p>
<pre><code>2016-07-29T17:32:00.145010+00:00 heroku[web.1]: State changed from crashed to starting
2016-07-29T17:32:11.162187+00:00 heroku[web.1]: Starting process with command `gunicorn myapp:app --log-file=-`
2016-07-29T17:32:13.548294+00:00 heroku[web.1]: Process exited with status 3
2016-07-29T17:32:13.448537+00:00 app[web.1]: [2016-07-29 17:32:13 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-07-29T17:32:13.449154+00:00 app[web.1]: [2016-07-29 17:32:13 +0000] [3] [INFO] Listening at: http://0.0.0.0:57535 (3)
2016-07-29T17:32:13.456988+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
2016-07-29T17:32:13.456990+00:00 app[web.1]: return self.load_wsgiapp()
2016-07-29T17:32:13.456993+00:00 app[web.1]: File "/app/myapp.py", line 71
2016-07-29T17:32:13.456994+00:00 app[web.1]: data[i]={**a[i],**b,**c,**d,**e}
2016-07-29T17:32:13.563973+00:00 heroku[web.1]: State changed from starting to crashed
2016-07-29T17:32:14.847126+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=obscure-plateau-26852.herokuapp.com request_id=820cdbd3-d478-434b-be65-06f959ca2798 fwd="79.167.50.52" dyno= connect= service= status=503 bytes=
2016-07-29T17:32:14.370705+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=obscure-plateau-26852.herokuapp.com request_id=48a7f184-053b-4ba9-9833-a5e27df61867 fwd="79.167.50.52" dyno= connect= service= status=503 bytes=
</code></pre>
<p>I have tried changing the default ports, etc., but the same error occurs.</p>
<p>Any ideas?</p>
 | 0 | 2016-07-29T17:35:57Z | 38,665,275 | <p>Well, the answer is in the traceback. Error code <code>H10</code> means the app crashed when Gunicorn tried to load it onto the dyno. The error is at line 71 of your <code>myapp.py</code> file, the <code>data[i]={**a[i],**b,**c,**d,**e}</code> line. Note that the paths in the log show the dyno is running Python 2.7, and the <code>{**a, **b}</code> dict-unpacking syntax only exists in Python 3.5+, so that line is a syntax error there. Either request a Python 3 runtime via a <code>runtime.txt</code>, or rewrite the merge in a Python 2 compatible way.</p>
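<p>If the app has to stay on Python 2.7, the merge can be written without the <code>{**a, **b}</code> syntax; a sketch with placeholder dicts (not your actual variables):</p>

```python
# Placeholder dicts standing in for a[i], b, c, ... from the failing line.
a_i = {"w": 1}
b = {"x": 2}
c = {"y": 3}

# Python 2 compatible equivalent of {**a_i, **b, **c}:
merged = {}
for d in (a_i, b, c):
    merged.update(d)
```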
| 1 | 2016-07-29T18:24:06Z | [
"python",
"heroku",
"flask",
"hosting"
] |
Parsing / Scraping a table using Python, Urllib from an FTP site | 38,664,529 | <p>I'm trying to parse / scrape some data from an FTP site. Specifically: <a href="ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA_000513335.1_PCAMFM013_20131106" rel="nofollow">ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA_000513335.1_PCAMFM013_20131106</a> </p>
<p>Eventually I would like to be able to have a Python script that will download all the files from this site. But first, I'm trying to learn how to get all the download links using BeautifulSoup4 and urllib2. (Since requests doesn't work with FTP sites?) </p>
<p>I inspected the elements and see they are stored in a table; however, I get an <code>AttributeError</code> for <code>findAll</code> when I call it. </p>
<p>This is what my code looks like right now (still starting. I want to mess around with the table data): </p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
url ="ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA_000513335.1_PCAMFM013_20131106"
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html)
table = soup.find('table')
rows = table.findAll('tr') <== error here
print rows
</code></pre>
<p>Does anyone know what I'm doing wrong here? Am I approaching this incorrectly? Any help would be appreciated. </p>
 | 0 | 2016-07-29T17:36:38Z | 38,664,808 | <p>The HTML table is not sent by the FTP server; the browser generates that HTML itself from the directory listing the FTP server returns. This means you cannot use BeautifulSoup to parse it. Instead, look into <a href="https://docs.python.org/3/library/ftplib.html" rel="nofollow">ftplib</a> for interacting with FTP servers.</p>
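<p>A minimal <code>ftplib</code> sketch of the eventual goal of downloading every file in that directory; the host and path come from the question, while error handling and sub-directories are left out:</p>

```python
import os
from ftplib import FTP

def download_all(host, remote_dir, local_dir="."):
    """Download every file in remote_dir from an anonymous FTP server."""
    ftp = FTP(host)
    ftp.login()                      # anonymous login
    ftp.cwd(remote_dir)
    for name in ftp.nlst():          # names from the directory listing
        with open(os.path.join(local_dir, name), "wb") as fh:
            ftp.retrbinary("RETR " + name, fh.write)
    ftp.quit()

# download_all("ftp.ncbi.nlm.nih.gov",
#              "/genomes/all/GCA_000513335.1_PCAMFM013_20131106")
```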
| 2 | 2016-07-29T17:54:40Z | [
"python",
"parsing",
"ftp",
"web-scraping",
"beautifulsoup"
] |
Python- What colormap scheme should I use for exponential-ish data? | 38,664,560 | <p><strong>The issue</strong></p>
<p>I have a plot I'm trying to make for trends of precipitation rates around the world using gridded data. I can make the plot itself fine, but the color range is giving me issues. I can't figure out how to make the colormap better fit my data, which seems exponential. <strong>I tried a logarithmic range, but it doesn't quite fit the data right.</strong></p>
<p><strong>The code & data range</strong></p>
<p>Here's what my 8,192 data values look like when plotted in order on a simple x-y line plot. Data points are on the x-axis & values are on the y-axis.
<a href="http://i.stack.imgur.com/txdJp.png"><img src="http://i.stack.imgur.com/txdJp.png" alt="enter image description here"></a></p>
<p>Here's what my data looks like plotted with a LogNormal color range. It's too much mint green & orange-red for me.</p>
<pre><code>#Set labels
lonlabels = ['0','45E','90E','135E','180','135W','90W','45W','0']
latlabels = ['90S','60S','30S','Eq.','30N','60N','90N']
#Set cmap properties
norm = colors.LogNorm() #creates logarithmic scale
#Create basemap
fig,ax = plt.subplots(figsize=(15.,10.))
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c')
m.drawcoastlines(linewidth=1)
m.drawcountries(linewidth=1)
m.drawparallels(np.arange(-90,90,30.),linewidth=0.3)
m.drawmeridians(np.arange(-180.,180.,45.),linewidth=0.3)
meshlon,meshlat = np.meshgrid(lon,lat)
x,y = m(meshlon,meshlat)
#Plot variables
trend = m.pcolormesh(x,y,lintrends[:,:,0],cmap='jet', norm=norm, shading='gouraud')
#Set plot properties
#Colorbar
cbar=m.colorbar(trend, size='8%',location='bottom',pad=0.8) #Set colorbar
cbar.set_label(label='Linear Trend (mm/day/decade)',size=25) #Set label
for t in cbar.ax.get_xticklabels():
t.set_fontsize(25) #Set tick label sizes
#Titles & labels
fig.suptitle('Linear Trends of Precipitation (CanESM2)',fontsize=40,x=0.51,y=0.965)
ax.set_title('a) 1979-2014 Minimum Trend',fontsize=35)
ax.set_xticks(np.arange(0,405,45))
ax.set_xticklabels(lonlabels,fontsize=20)
ax.set_ylabel('Latitude',fontsize=25)
ax.set_yticks(np.arange(-90,120,30))
ax.set_yticklabels(latlabels,fontsize=20)
</code></pre>
<p><a href="http://i.stack.imgur.com/6gWj4.png"><img src="http://i.stack.imgur.com/6gWj4.png" alt="enter image description here"></a></p>
<p>And here's what it looks like with a default, unaltered color range. (Same code minus the norm=norm argument.)</p>
<p><a href="http://i.stack.imgur.com/NX8PT.png"><img src="http://i.stack.imgur.com/NX8PT.png" alt="enter image description here"></a></p>
<p><strong>The question</strong></p>
<p>Is there a mathematical scheme I can use to create a colormap that better shows the range of my data? Or do I need to make a custom range?</p>
| 5 | 2016-07-29T17:38:24Z | 39,135,115 | <h1>A hack</h1>
<p>You could try applying a maximum value, i.e. for any value above 2 simply replace it with 2.</p>
<p>Then you would have a single color (the maximum) representing 2+ and the rest of the colors would be spread across your data more evenly.</p>
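<p>Two hedged ways to implement that cap, shown with made-up sample values; the <code>vmin</code>/<code>vmax</code> route avoids mutating the data, because matplotlib clamps out-of-range values to the ends of the colormap:</p>

```python
import numpy as np

lintrends = np.array([0.05, 0.2, 1.5, 8.0, 40.0])  # made-up sample values

# Option 1: clip the data itself so everything above 2 shares one colour.
clipped = np.clip(lintrends, None, 2.0)

# Option 2: keep the data and cap only the colour scale, e.g.
#   m.pcolormesh(x, y, lintrends, cmap='jet', vmin=0, vmax=2,
#                shading='gouraud')
```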
| 0 | 2016-08-25T00:56:10Z | [
"python",
"matplotlib",
"exponential",
"colormap"
] |
How to initialize OpenGL context with PyGame instead of GLUT | 38,664,572 | <p>I'm trying to start with OpenGL, using Python and PyGame.</p>
<p>I'm going to use PyGame instead of GLUT to do all the initializing, windows opening, input handling, etc.</p>
<p>However, my shaders are failing to compile, unless I specify exactly the version of OpenGL and profile. </p>
<p>They <em>do</em> compile with GLUT initialization from the book:</p>
<pre><code>glutInit()
glutInitDisplayMode(GLUT_RGBA)
glutInitWindowSize(400, 400)
# this is what I need
glutInitContextVersion(3, 3)
glutInitContextProfile(GLUT_CORE_PROFILE)
glutCreateWindow("main")
</code></pre>
<p>But, with simple PyGame initialization like this:</p>
<pre><code>pygame.init()
display = (400, 400)
pygame.display.set_mode(display, pygame.DOUBLEBUF|pygame.OPENGL)
</code></pre>
<p>which doesn't specify exact OpenGL version 3.3 and CORE_PROFILE,
the same program would fail when trying to compile shaders:</p>
<blockquote>
<p>RuntimeError: ('Shader compile failure (0): 0:2(10): error: GLSL 3.30
is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES,
and 3.00 ES\n', ['\n #version 330 core\n layout(location = 0) in
vec4 position;\n void main()\n {\n gl_Position =
position;\n }\n '], GL_VERTEX_SHADER)</p>
</blockquote>
<p>My question is: how do I do this initialization with PyGame?</p>
| 2 | 2016-07-29T17:38:59Z | 38,675,169 | <p>I think this: </p>
<blockquote>
<p><a href="https://gist.github.com/MorganBorman/4243336" rel="nofollow">https://gist.github.com/MorganBorman/4243336</a></p>
</blockquote>
<p>might be what you're looking for. It shows how to use vertex shaders and fragment shaders in pygame and PyOpenGL. If you're not using PyOpenGL, then you're going to have to. To install it, just run:</p>
<blockquote>
<p>pip install PyOpenGL</p>
</blockquote>
<p>in your command prompt/terminal</p>
<p>If that does not work, I recommend taking a look at the PyOpenGL installation page for more details: </p>
<blockquote>
<p><a href="http://pyopengl.sourceforge.net/documentation/installation.html" rel="nofollow">http://pyopengl.sourceforge.net/documentation/installation.html</a></p>
</blockquote>
<p>I included a short, boiled-down example of what I think you're trying to do, using some of the code from the link.</p>
<pre><code>import OpenGL.GL as GL
import OpenGL.GL.shaders
import pygame as pg
#-------------not my code, credit to: Morgan Borman--------------#
vertex_shader = """
#version 330
in vec4 position;
void main()
{
gl_Position = position;
}
"""
fragment_shader = """
#version 330
void main()
{
gl_FragColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
"""
#----------------------------------------------------------------#
def main():
pg.init()
#-------------not my code, credit to: Morgan Borman--------------#
GL.glClearColor(0.5, 0.5, 0.5, 1.0)
GL.glEnable(GL.GL_DEPTH_TEST)
shader = OpenGL.GL.shaders.compileProgram(
OpenGL.GL.shaders.compileShader(vertex_shader, GL.GL_VERTEX_SHADER),
OpenGL.GL.shaders.compileShader(fragment_shader, GL.GL_FRAGMENT_SHADER)
)
#----------------------------------------------------------------#
DISPLAY_DIMENSIONS = (640, 480)
display = pg.display.set_mode(DISPLAY_DIMENSIONS, pg.DOUBLEBUF | pg.OPENGL)
clock = pg.time.Clock()
FPS = 60
while True:
clock.tick(FPS)
for e in pg.event.get():
if e.type == pg.QUIT:
return
pg.display.flip()
if __name__ == '__main__':
try:
main()
finally:
pg.quit()
</code></pre>
<p>It does not do anything except load the shaders in pygame, which I believe is your main goal.</p>
| 0 | 2016-07-30T14:46:54Z | [
"python",
"opengl",
"pygame",
"glut",
"pyopengl"
] |
Any way to access methods from individual stages in PySpark PipelineModel? | 38,664,620 | <p>I've created a PipelineModel for doing LDA in Spark 2.0 (via PySpark API):</p>
<pre><code>def create_lda_pipeline(minTokenLength=1, minDF=1, minTF=1, numTopics=10, seed=42, pattern='[\W]+'):
"""
Create a pipeline for running an LDA model on a corpus. This function does not need data and will not actually do
any fitting until invoked by the caller.
Args:
minTokenLength:
minDF: minimum number of documents word is present in corpus
minTF: minimum number of times word is found in a document
numTopics:
seed:
pattern: regular expression to split words
Returns:
pipeline: class pyspark.ml.PipelineModel
"""
reTokenizer = RegexTokenizer(inputCol="text", outputCol="tokens", pattern=pattern, minTokenLength=minTokenLength)
cntVec = CountVectorizer(inputCol=reTokenizer.getOutputCol(), outputCol="vectors", minDF=minDF, minTF=minTF)
lda = LDA(k=numTopics, seed=seed, optimizer="em", featuresCol=cntVec.getOutputCol())
pipeline = Pipeline(stages=[reTokenizer, cntVec, lda])
return pipeline
</code></pre>
<p>I want to calculate the perplexity on a dataset using the trained model with the <code>LDAModel.logPerplexity()</code> method, so I tried running the following:</p>
<pre><code>try:
    training = get_20_newsgroups_data(test_or_train='train')
pipeline = create_lda_pipeline(numTopics=20, minDF=3, minTokenLength=5)
model = pipeline.fit(training) # train model on training data
testing = get_20_newsgroups_data(test_or_train='test')
perplexity = model.logPerplexity(testing)
pprint(perplexity)
</code></pre>
<p>This just results in the following <code>AttributeError</code>:</p>
<pre><code>'PipelineModel' object has no attribute 'logPerplexity'
</code></pre>
<p>I understand why this error happens, since the <code>logPerplexity</code> method belongs to <code>LDAModel</code>, not <code>PipelineModel</code>, but I'm wondering if there is a way to access the method from that stage. </p>
 | 1 | 2016-07-29T17:42:06Z | 38,664,792 | <p>All transformers in the pipeline are stored in the <code>stages</code> property. Extract <code>stages</code>, take the last one, and you're ready to go:</p>
<pre><code>model.stages[-1].logPerplexity(testing)
</code></pre>
| 3 | 2016-07-29T17:54:06Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-mllib"
] |
str object is not callable - calling a function with a function/string as an argument | 38,664,642 | <p>when I execute this code:</p>
<pre><code>clffunc = sys.argv[1]
def fun(clffunc):
error_vector = clffunc()
print error_vector
loss_total = sum(error_vector)
loss_mean = np.mean(error_vector)
print "The mean error is %.2f" % loss_mean
def svm_clf():
    #The classifier object
clf = svm.SVC()
clf.fit(train_features, train_targets)
# Prediction
test_predicted = clf.predict(test_features)
# Analysis and output
return np.absolute(test_predicted-test_targets)
if __name__ == "__main__":
fun(clffunc)
</code></pre>
<p>from the terminal as:</p>
<pre><code>python GraspT.py svm_clf
</code></pre>
<p>I get the following error:</p>
<pre><code> File "/home/iki/griper validating/GraspT.py", line 24, in fun
error_vector = clffunc()
TypeError: 'str' object is not callable
</code></pre>
<p>I couldn't find a solution on the internet. "'str' object is not callable" almost always happens when someone redefines a built-in function or something similar. That is not my case. Here I'm passing a string from the terminal, and this string is then used as a function argument. This argument is supposed to be a function. So I want to choose the function (a classifier method in machine learning) that is going to be executed in the code.</p>
| 0 | 2016-07-29T17:43:50Z | 38,664,775 | <p><code>svm_clf</code> is a string, not a function object. The <em>contents</em> of that string may match the name of a function, but that doesn't make it that function.</p>
<p>You could use a dictionary to map valid names to functions:</p>
<pre><code>functions = {'svm_clf': svm_clf}
if __name__ == "__main__":
    fun(functions[clffunc])
</code></pre>
<p>or you could use the dictionary that <code>globals()</code> returns for that purpose:</p>
<pre><code>if __name__ == "__main__":
fun(globals()[clffunc])
</code></pre>
<p>This is probably alright in a command-line tool, but take into account that this allows the user of the tool to make your script call anything with a global name.</p>
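<p>A runnable sketch of the dictionary approach with a friendlier error for unknown names (the <code>svm_clf</code> below is a stub standing in for the real classifier):</p>

```python
def svm_clf():
    return "ran svm_clf"            # stub instead of the real classifier

functions = {"svm_clf": svm_clf}

# In the real script this would come from the command line:
#   name = sys.argv[1]
name = "svm_clf"
try:
    result = functions[name]()
except KeyError:
    raise SystemExit("Unknown classifier: %s" % name)

print(result)  # ran svm_clf
```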
| 1 | 2016-07-29T17:52:54Z | [
"python",
"python-2.7"
] |
str object is not callable - calling a function with a function/string as an argument | 38,664,642 | <p>when I execute this code:</p>
<pre><code>clffunc = sys.argv[1]
def fun(clffunc):
error_vector = clffunc()
print error_vector
loss_total = sum(error_vector)
loss_mean = np.mean(error_vector)
print "The mean error is %.2f" % loss_mean
def svm_clf():
    #The classifier object
clf = svm.SVC()
clf.fit(train_features, train_targets)
# Prediction
test_predicted = clf.predict(test_features)
# Analysis and output
return np.absolute(test_predicted-test_targets)
if __name__ == "__main__":
fun(clffunc)
</code></pre>
<p>from the terminal as:</p>
<pre><code>python GraspT.py svm_clf
</code></pre>
<p>I get the following error:</p>
<pre><code> File "/home/iki/griper validating/GraspT.py", line 24, in fun
error_vector = clffunc()
TypeError: 'str' object is not callable
</code></pre>
<p>On the internet I couldn't find a solution.
<code>'str' object is not callable</code> almost always happens when someone redefines a built-in function or something similar. That is not my case. Here I'm passing a string from the terminal, and this string is then used as a function argument. The argument is in fact meant to be a function: I want to choose the function (a classifier method in machine learning) that is going to be executed in the code.</p>
| 0 | 2016-07-29T17:43:50Z | 38,665,307 | <p>You could directly access the <code>globals</code> and index by the function's name.</p>
<pre><code>def some_func():
    print 'called some_func!'

def some_other_func():
    print 'called some_other_func!'

globals()['some_func']()
globals()['some_other_func']()
globals()[sys.argv[1]]()
</code></pre>
<p>Also, you could consider using <a href="https://docs.python.org/2/library/functions.html#eval" rel="nofollow" title="eval">eval</a>.</p>
| 0 | 2016-07-29T18:25:46Z | [
"python",
"python-2.7"
] |
str object is not callable - calling a function with a function/string as an argument | 38,664,642 | <p>when I execute this code:</p>
<pre><code>clffunc = sys.argv[1]

def fun(clffunc):
    error_vector = clffunc()
    print error_vector
    loss_total = sum(error_vector)
    loss_mean = np.mean(error_vector)
    print "The mean error is %.2f" % loss_mean

def svm_clf():
    # the classifier object
    clf = svm.SVC()
    clf.fit(train_features, train_targets)
    # prediction
    test_predicted = clf.predict(test_features)
    # analysis and output
    return np.absolute(test_predicted - test_targets)

if __name__ == "__main__":
    fun(clffunc)
</code></pre>
<p>from the terminal as:</p>
<pre><code>python GraspT.py svm_clf
</code></pre>
<p>I get the following error:</p>
<pre><code> File "/home/iki/griper validating/GraspT.py", line 24, in fun
error_vector = clffunc()
TypeError: 'str' object is not callable
</code></pre>
<p>On the internet I couldn't find a solution.
<code>'str' object is not callable</code> almost always happens when someone redefines a built-in function or something similar. That is not my case. Here I'm passing a string from the terminal, and this string is then used as a function argument. The argument is in fact meant to be a function: I want to choose the function (a classifier method in machine learning) that is going to be executed in the code.</p>
| 0 | 2016-07-29T17:43:50Z | 38,665,370 | <pre><code>import sys

def foo(): sys.stdout.write("foo called\n")
def foo1(): sys.stdout.write("foo1 called\n")

if __name__ == "__main__":
    if len(sys.argv) < 2:              # prevent IndexError
        sys.stdout.write("Use commandline argument: foo or foo1\n")
        sys.exit()                     # print usage and exit
    name = sys.argv[1]                 # the requested name, as a string
    m = sys.modules["__main__"]        # the main module object
    if hasattr(m, name):               # check the attribute exists
        a = getattr(m, name)           # look the attribute up by its string name
        if hasattr(a, '__call__'):     # verify it is callable
            a()                        # call it
        else:
            sys.stderr.write("ERROR: '%s' is not callable\n" % name)
            sys.exit(1)
    else:
        sys.stderr.write("ERROR: no attribute named '%s'\n" % name)
        sys.exit(1)
</code></pre>
| 0 | 2016-07-29T18:29:37Z | [
"python",
"python-2.7"
] |
str object is not callable - calling a function with a function/string as an argument | 38,664,642 | <p>when I execute this code:</p>
<pre><code>clffunc = sys.argv[1]

def fun(clffunc):
    error_vector = clffunc()
    print error_vector
    loss_total = sum(error_vector)
    loss_mean = np.mean(error_vector)
    print "The mean error is %.2f" % loss_mean

def svm_clf():
    # the classifier object
    clf = svm.SVC()
    clf.fit(train_features, train_targets)
    # prediction
    test_predicted = clf.predict(test_features)
    # analysis and output
    return np.absolute(test_predicted - test_targets)

if __name__ == "__main__":
    fun(clffunc)
</code></pre>
<p>from the terminal as:</p>
<pre><code>python GraspT.py svm_clf
</code></pre>
<p>I get the following error:</p>
<pre><code> File "/home/iki/griper validating/GraspT.py", line 24, in fun
error_vector = clffunc()
TypeError: 'str' object is not callable
</code></pre>
<p>On the internet I couldn't find a solution.
<code>'str' object is not callable</code> almost always happens when someone redefines a built-in function or something similar. That is not my case. Here I'm passing a string from the terminal, and this string is then used as a function argument. The argument is in fact meant to be a function: I want to choose the function (a classifier method in machine learning) that is going to be executed in the code.</p>
| 0 | 2016-07-29T17:43:50Z | 38,665,525 | <p>The problem is that you don't properly understand what you are trying to accomplish. </p>
<pre><code>clffunc = sys.argv[1]

def fun(clffunc):
    error_vector = clffunc()
</code></pre>
<p>You are trying to call a string. <code>sys.argv[1]</code> returns the second argument as a string, so you are effectively doing <code>"svm_clf"()</code>. The simple solution is to use the <code>eval</code> built-in function, for example <code>eval('%s()' % clffunc)</code>; this expression will definitely do the job :).</p>
<p>Correction that should make it work :</p>
<pre><code>if __name__ == "__main__":
    fun(eval(clffunc))
</code></pre>
| 0 | 2016-07-29T18:39:30Z | [
"python",
"python-2.7"
] |
Different solution in Mathematica and Python | 38,664,651 | <p>This is another "my-results-don't-match-the-wolfram-website" question.</p>
<p>I recently decided to give Python a try (I'm not sure why, to be honest; I'm spending too much of my research time learning something I don't know if I will ever use... curiosity, I suppose). I'm a real beginner in Python, so to start, I tried to solve this equation:</p>
<pre><code>cos(x)*cosh(x)=1
</code></pre>
<p>Using python, I wrote the next code:</p>
<pre><code>from scipy.optimize import fsolve
import numpy as np

func = lambda x: np.cos(x*1.0)*np.cosh(x*1.0) - 1.0
for iii in range(1, 10):
    solution = fsolve(func, iii*1.0)
    print(solution)
</code></pre>
<p>To my surprise, I discovered that the solution is different from the <a href="http://www.wolframalpha.com/input/?i=cos(x)*cosh(x)%3D1" rel="nofollow">Wolfram website</a>.</p>
<p>Basically, my solution has some "residual solutions" I would say, probably because of numerical errors.</p>
<p>I don't know if I'm doing something wrong or forgetting something, but the code looks (in my opinion) good. </p>
<p>Any ideas to fix the code will be appreciated. </p>
<p>Thanks very much to all. Kind regards,</p>
<p>German</p>
<p>============= UPDATE ==========</p>
<p>Interestingly, Matlab gets the correct results. </p>
<pre><code>NM = 5;
Beta_InitialModal = 0;
Beta_FinalModal = 8*NM;
F = @(Beta1) (-1 + cos(Beta1)*cosh(Beta1)); % equation of the cantilever beam
interval = [Beta_InitialModal, Beta_FinalModal];
N = (Beta_FinalModal - Beta_InitialModal)*50;
start_pts = linspace(interval(1), interval(2), N);
found_roots = [];
for iii = 1:numel(start_pts)-1
    if length(found_roots) == NM
        break
    else
        try %#ok<TRYNC>
            found_roots(end+1) = fzero(F, [start_pts(iii), start_pts(iii+1)]); %#ok<SAGROW>
        end
    end
end
display(found_roots);
</code></pre>
<p>Is Python worse than Matlab/Mathematica? I don't think so...</p>
<p>I think maybe it's the input number format? Perhaps <code>fsolve</code> works with lower precision than Matlab? I honestly don't know.</p>
<p>Kind regards,</p>
| 2 | 2016-07-29T17:44:05Z | 38,665,565 | <p>Not sure if you realize, but your equation has infinitely many solutions, which, except for zero, are very close to the zeros of <code>Cos</code> or approximately <code>(n+1/2) Pi</code> for large <code>n</code>. </p>
<p>The wolfram alpha page is a bit misleading in not telling you that.</p>
<p>Here are the first 100 solutions: ( plus <code>0</code> ) using mathematica with extended precision:</p>
<pre><code>x /. FindRoot[ Cos[x] Cosh[x] == 1 , {x, (# + 1/2) Pi },
WorkingPrecision -> 1000] & /@ Range[101]
</code></pre>
<blockquote>
<p>4.7300407448627040260, 7.8532046240958375565, 10.995607838001670907, 14.137165491257464177, 17.278759657399481438, 20.420352245626061091, 23.561944902040455075, 26.703537555508186248, 29.845130209103254267, 32.986722862692819562, 36.128315516282622650, 39.269908169872415463, 42.411500823462208720, 45.553093477052001958, 48.694686130641795196, 51.836278784231588435, 54.977871437821381673, 58.119464091411174912, 61.261056745000968150, 64.402649398590761388, 67.544242052180554627, 70.685834705770347865, 73.827427359360141104, 76.969020012949934342, 80.110612666539727581, 83.252205320129520819, 86.393797973719314058, 89.535390627309107296, 92.676983280898900535, 95.818575934488693773, 98.960168588078487012, 102.10176124166828025, 105.24335389525807349, 108.38494654884786673, 111.52653920243765997, 114.66813185602745320, 117.80972450961724644, 120.95131716320703968, 124.09290981679683292, 127.23450247038662616, 130.37609512397641940, 133.51768777756621263, 136.65928043115600587, 139.80087308474579911, 142.94246573833559235, 146.08405839192538559, 149.22565104551517883, 152.36724369910497207, 155.50883635269476530, 158.65042900628455854, 161.79202165987435178, 164.93361431346414502, 168.07520696705393826, 171.21679962064373150, 174.35839227423352473, 177.49998492782331797, 180.64157758141311121, 183.78317023500290445, 186.92476288859269769, 190.06635554218249093, 193.20794819577228417, 196.34954084936207740, 199.49113350295187064, 202.63272615654166388, 205.77431881013145712, 208.91591146372125036, 212.05750411731104360, 215.19909677090083683, 218.34068942449063007, 221.48228207808042331, 224.62387473167021655, 227.76546738526000979, 230.90706003884980303, 234.04865269243959627, 237.19024534602938950, 240.33183799961918274, 243.47343065320897598, 246.61502330679876922, 249.75661596038856246, 252.89820861397835570, 256.03980126756814893, 259.18139392115794217, 262.32298657474773541, 265.46457922833752865, 268.60617188192732189, 271.74776453551711513, 
274.88935718910690837, 278.03094984269670160, 281.17254249628649484, 284.31413514987628808, 287.45572780346608132, 290.59732045705587456, 293.73891311064566780, 296.88050576423546103, 300.02209841782525427, 303.16369107141504751, 306.30528372500484075, 309.44687637859463399, 312.58846903218442723, 315.73006168577422047, 318.87165433936401370</p>
</blockquote>
<p>Those larger values require extended precision to find..</p>
<pre><code> Plot[ Cos[x] Cosh[x] - 1 , {x , 310, 320 }]
</code></pre>
<p><a href="http://i.stack.imgur.com/hi0H2.png" rel="nofollow"><img src="http://i.stack.imgur.com/hi0H2.png" alt="enter image description here"></a></p>
<p>Note even at high precision mathematica's <code>FindRoot</code> won't return an exact zero if we give an initial guess close to zero</p>
<pre><code>FindRoot[Cos[x] Cosh[x] == 1, {x, 1/100}] (* x -> 0.000148132 *)
FindRoot[Cos[x] Cosh[x] == 1, {x, 1/100}, WorkingPrecision -> 100] (* x -> ~3 10^-15 *)
</code></pre>
<p>of course if the initial guess is zero <code>FindRoot</code> returns 0..</p>
<pre><code>FindRoot[Cos[x] Cosh[x] == 1, {x, 0}] (* x->0. *)
</code></pre>
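<p>For comparison (not part of the original answer), the same bracketing strategy the Matlab script uses also works from Python with <code>scipy.optimize.brentq</code>; a sketch:</p>

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.cos(x) * np.cosh(x) - 1.0

# scan a grid for sign changes, then refine each bracket, like Matlab's fzero
xs = np.linspace(0.1, 20.0, 1000)  # start past 0 to skip the trivial root at x = 0
roots = [brentq(f, a, b) for a, b in zip(xs[:-1], xs[1:]) if f(a) * f(b) < 0]
print(roots)  # approximately [4.7300, 7.8532, 10.9956, 14.1372, 17.2788]
```

Bracketing avoids the "residual solutions" that <code>fsolve</code> returns when an initial guess sits far from a root.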
| 0 | 2016-07-29T18:41:29Z | [
"python",
"python-3.x",
"wolfram-mathematica",
"solver",
"equation-solving"
] |
using dict to connect items | 38,664,781 | <p>I have taken data from a text file, and currently have a list that contains numerous urls, some of which repeat, and unix timestamps (tab delimited). I want to create an output that has each unique url, the number of times the url occurs, and the time of the earliest occurrence. This is what the data look like:</p>
<pre><code>url1 1441076681663
url2 1441076234873
url2 1441123894050
url2 1441432348975
url3 1441659082347
url1 1441450392840
</code></pre>
<p>I would like this to be my output, in a csv file:</p>
<pre><code>url count time
url1 2 1441076681663
url2 3 1441076234873
url3 1 1441659082347
</code></pre>
<p>I was thinking of using a dictionary, but I am not sure how you would replace the time with the earliest occurrence. maybe some sort of for/if loop?</p>
| -1 | 2016-07-29T17:53:30Z | 38,664,859 | <p>Make your url the key of a dictionary, since it will always be unique, and maintain a dictionary something like</p>
<pre><code>Dict = {url1 : [mintime, count]} #to track minimum and count
</code></pre>
<p>or</p>
<pre><code>Dict = {url1 : [time1, time2, time3]} # to track all timestamps;
# I would prefer this one if you don't have a space constraint, as you get more info
</code></pre>
<p>Code for the second data structure:</p>
<pre><code>Dict = {} # empty dictionary
with open("file.txt", "r") as file: # reading file
    for line in file.readlines():
        if len(line) > 0:
            mylist = line.split() # splitting on whitespace (the fields are tab-separated)
            key = mylist[0]
            value = mylist[1]
            if key in Dict:
                Dict[key].append(value) # if url already exists as key
            else:
                Dict[key] = [value]
        else:
            print "No more lines to render"
print Dict
</code></pre>
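<p>For completeness, a sketch of the first structure, <code>{url : [mintime, count]}</code>, using inline sample data instead of the file:</p>

```python
lines = ["url1\t1441076681663", "url2\t1441076234873",
         "url2\t1441123894050", "url1\t1441450392840"]

stats = {}  # {url: [earliest_time, count]}
for line in lines:
    url, ts = line.split()
    ts = int(ts)
    if url in stats:
        stats[url][0] = min(stats[url][0], ts)  # keep the earliest timestamp
        stats[url][1] += 1
    else:
        stats[url] = [ts, 1]

print(stats)  # {'url1': [1441076681663, 2], 'url2': [1441076234873, 2]}
```

This one-pass form answers the original question directly: count and earliest time per url, with no post-processing.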
| 0 | 2016-07-29T17:57:56Z | [
"python",
"csv",
"dictionary",
"count"
] |
using dict to connect items | 38,664,781 | <p>I have taken data from a text file, and currently have a list that contains numerous urls, some of which repeat, and unix timestamps (tab delimited). I want to create an output that has each unique url, the number of times the url occurs, and the time of the earliest occurrence. This is what the data look like:</p>
<pre><code>url1 1441076681663
url2 1441076234873
url2 1441123894050
url2 1441432348975
url3 1441659082347
url1 1441450392840
</code></pre>
<p>I would like this to be my output, in a csv file:</p>
<pre><code>url count time
url1 2 1441076681663
url2 3 1441076234873
url3 1 1441659082347
</code></pre>
<p>I was thinking of using a dictionary, but I am not sure how you would replace the time with the earliest occurrence. maybe some sort of for/if loop?</p>
| -1 | 2016-07-29T17:53:30Z | 38,664,964 | <p>This is an instance where a Counter object might also be useful: <a href="https://docs.python.org/2/library/collections.html" rel="nofollow">https://docs.python.org/2/library/collections.html</a></p>
<p>Here's an implementation (please don't downvote, this works):</p>
<pre><code>from collections import Counter

# Get list of data
my_list = []
my_list.append(('url1', 1441076681663))
my_list.append(('url2', 1441076234873))
my_list.append(('url2', 1441123894050))
my_list.append(('url2', 1441432348975))
my_list.append(('url3', 1441659082347))
my_list.append(('url1', 1441450392840))

# First get the count
my_counter = Counter([pair[0] for pair in my_list])

# Then find the first instance
my_dict = {}
for pair in my_list:
    key = pair[0]
    val = pair[1]
    if (key not in my_dict) or (my_dict[key] > val):
        my_dict[key] = val

print "URL\tCount\tFirst Instance"
for key in my_dict:
    print key, my_counter[key], my_dict[key]
</code></pre>
| -2 | 2016-07-29T18:04:14Z | [
"python",
"csv",
"dictionary",
"count"
] |
using dict to connect items | 38,664,781 | <p>I have taken data from a text file, and currently have a list that contains numerous urls, some of which repeat, and unix timestamps (tab delimited). I want to create an output that has each unique url, the number of times the url occurs, and the time of the earliest occurrence. This is what the data look like:</p>
<pre><code>url1 1441076681663
url2 1441076234873
url2 1441123894050
url2 1441432348975
url3 1441659082347
url1 1441450392840
</code></pre>
<p>I would like this to be my output, in a csv file:</p>
<pre><code>url count time
url1 2 1441076681663
url2 3 1441076234873
url3 1 1441659082347
</code></pre>
<p>I was thinking of using a dictionary, but I am not sure how you would replace the time with the earliest occurrence. maybe some sort of for/if loop?</p>
| -1 | 2016-07-29T17:53:30Z | 38,665,428 | <p>Here's a solution using <code>pandas</code>.</p>
<pre><code>import pandas as pd
df = pd.read_csv('input.txt', names=['url', 'timestamp'],
                 header=None, delim_whitespace=True)
output = df.groupby('url')['timestamp'].agg({'count': 'size', 'time': 'min'})
output.to_csv('output.csv')
</code></pre>
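<p>Note that passing a dict of strings to <code>.agg</code> on a single column was deprecated and later removed in newer pandas releases; a sketch of the same result with named aggregation (pandas &gt;= 0.25), using inline data in place of the file:</p>

```python
import pandas as pd

df = pd.DataFrame({'url': ['url1', 'url2', 'url2', 'url1'],
                   'timestamp': [1441076681663, 1441076234873,
                                 1441123894050, 1441450392840]})
# count = rows per url, time = earliest timestamp per url
output = df.groupby('url').agg(count=('timestamp', 'size'),
                               time=('timestamp', 'min'))
print(output)
```

The resulting frame has the same <code>count</code> and <code>time</code> columns and can be written out with <code>output.to_csv(...)</code> as in the answer.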
| -1 | 2016-07-29T18:32:58Z | [
"python",
"csv",
"dictionary",
"count"
] |
using dict to connect items | 38,664,781 | <p>I have taken data from a text file, and currently have a list that contains numerous urls, some of which repeat, and unix timestamps (tab delimited). I want to create an output that has each unique url, the number of times the url occurs, and the time of the earliest occurrence. This is what the data look like:</p>
<pre><code>url1 1441076681663
url2 1441076234873
url2 1441123894050
url2 1441432348975
url3 1441659082347
url1 1441450392840
</code></pre>
<p>I would like this to be my output, in a csv file:</p>
<pre><code>url count time
url1 2 1441076681663
url2 3 1441076234873
url3 1 1441659082347
</code></pre>
<p>I was thinking of using a dictionary, but I am not sure how you would replace the time with the earliest occurrence. maybe some sort of for/if loop?</p>
| -1 | 2016-07-29T17:53:30Z | 38,665,753 | <p>Here's a solution using only Python standard libraries.</p>
<pre><code>import csv
from collections import defaultdict

d = defaultdict(list)
with open('input.txt', 'r') as f:
    for line in f.readlines():
        url, timestamp = line.split()
        d[url].append(int(timestamp))

with open('output.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['url', 'count', 'time'])
    for url, timestamps in d.items():
        writer.writerow([url, len(timestamps), min(timestamps)])
</code></pre>
| 0 | 2016-07-29T18:53:53Z | [
"python",
"csv",
"dictionary",
"count"
] |
What's the advantage of making your appengine app thread safe? | 38,664,788 | <p>I have an appserver that is getting painfully complicated in that it has to buffer data from incoming requests then push those buffers out, via pubsub, after enough has been received. The buffering isn't the problem, but efficient locking is... hairy, and I'm concerned that it's slowing down my service. I'm considering dropping thread safety in order to remove all the locking, but I'm worried that my app instance count will have to double (or more) to handle the same user load.</p>
<p>My understanding is that a threadsafe app is one where each thread is a billed app instance. In other words, I get billed for two instances by allowing multiple threads to run in a process, with the only advantage being that the threads can share memory and therefore, have a smaller overall footprint.</p>
<p>So to rephrase, does a multithreaded app instance handle multiple simultaneous connections, or is each billed app instance a separate thread - only capable of handling one request at a time? If I remove thread safety, am I going to need to run a larger pool of app instances?</p>
| 1 | 2016-07-29T17:54:01Z | 38,665,116 | <p>Multithreading is not related to billing in any way - you still pay for one instance even if 10 threads are running in parallel on that instance.</p>
| 0 | 2016-07-29T18:13:51Z | [
"python",
"multithreading",
"google-app-engine"
] |
Smartsheet Error Objects - Attribute Error | 38,664,805 | <p>I'm trying to grab all of the cell history of a given sheet and spit it out to a csv. The code works on one sheet but not on another. When I get to the revision section of the code, I get a bunch of attribute errors. The following code is in the guts of a function cycling through the cells of a sheet. </p>
<p>Error reads <code>'Error' object has no attribute 'data'</code> </p>
<p>The weirdest part is that the errors are not found consistently. As in, cycling through the same sheet, different cells will pop the error than when I ran the script last time. I am catching the Attribute Error, but that doesn't really solve the problem. Help? </p>
<pre><code># get the cell history
action = smartsheet.Cells.get_cell_history(
    sheetid,
    row.id,
    columns[c].id,
    # include_all=True
)
try:
    revisions = action.data
except AttributeError as inst:
    print('found Attribute error in this cell:')
    print(inst)
</code></pre>
| 0 | 2016-07-29T17:54:35Z | 38,776,623 | <p>This is likely because the cell has no history. You'll need to do a safety check to verify that the attribute exists before accessing it.</p>
<pre><code>if not hasattr(action, 'data'):
    print('no history found')
</code></pre>
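<p>An equivalent fallback uses <code>getattr</code> with a default (a sketch; the <code>Result</code> class below is just a stand-in for the SDK's response object):</p>

```python
class Result:
    pass  # stand-in for a response object that has no .data attribute

action = Result()
revisions = getattr(action, 'data', [])  # falls back to [] when .data is missing
print(revisions)  # []
```

This lets the rest of the loop treat "no history" the same as an empty history list.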
<p>You may be polling the API too rapidly, which is why sometimes values aren't returned. From the <a href="http://smartsheet-platform.github.io/api-docs/?python#get-cell-history" rel="nofollow">API documentation</a>:</p>
<blockquote>
<p>This is a resource-intensive operation and incurs 10 additional requests against the rate limit.</p>
</blockquote>
<p>Make sure you're handling any returned errors appropriately to see if this is the case.</p>
| 0 | 2016-08-04T20:12:07Z | [
"python",
"smartsheet-api"
] |
is while loop infinite | 38,664,876 | <p>I'm trying to build a list of inputs from several files. I need the list to consist only of the first file with a given basename. So if a, b and c were folders, and there were "C:\a\file1.ext", "C:\b\file1" and "C:\c\file1" and I had a list of names file1, file2 and so on, I would want the script to find file1 in C:\a\file1.ext and then move on to the next name in the list. In some cases file-x.ext may not be in C:\a or C:\c or C:\b. </p>
<p>I'm setting a condition to count the file once its base name is found in the list. Once the count = 1 it exits the while loop, resets the count to 0, and goes to the next name in the list, adding only the first instance of the file name to a new input list. The code I have seems to keep running, so I think I have an infinite inner loop, but I thought setting count to 0 outside of the while loop would prevent this:</p>
<pre><code>count = 0
for name in dbfOnlyLst:
    for file in fileLst:
        while count < 1:
            if os.path.basename(file) == name+".dbf":
                values.add(file)
                count += 1
        count = 0
inList = list(values)
</code></pre>
| -1 | 2016-07-29T17:58:42Z | 38,664,992 | <p>Your while loop will be infinite because it only achieves the exit condition if <code>os.path.basename(file) == name+".dbf"</code> returns <code>True</code>. If it isn't true, then count will never be updated, and the loop will perform the same conditional check over and over again. </p>
<p><code>os.path.basename(file)</code> just returns the filename without the path -- it doesn't continue on to the next file in your list, so there's no reason why performing that check multiple times will do anything different.</p>
<p>So, you don't really need that <code>while</code> loop at all. You're just trying to check if each file in your <code>fileLst</code> object is equal to the filename you're looking for, so just iterate over <code>fileLst</code>. </p>
<p>And since you want to just record the first match of your base filename, you can use the <code>break</code> keyword to exit your inner loop early, as soon as you find a match. This way you won't keep iterating over <code>fileLst</code> and will move on to the next <code>name</code> in <code>dbfOnlyLst</code></p>
<pre><code>for name in dbfOnlyLst:
    for file in fileLst:
        if os.path.basename(file) == name+".dbf":
            values.add(file)
            break # only add the first match

inList = list(values)
</code></pre>
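<p>If <code>fileLst</code> is large, a dictionary keyed by basename avoids rescanning the whole list for every name; a sketch with made-up paths:</p>

```python
import os

file_lst = ["/data/a/file1.dbf", "/data/b/file1.dbf", "/data/a/file2.dbf"]
names = ["file1", "file2", "file3"]  # file3 has no match and is simply skipped

first_by_name = {}
for f in file_lst:
    first_by_name.setdefault(os.path.basename(f), f)  # keeps only the first hit

in_list = [first_by_name[n + ".dbf"] for n in names if n + ".dbf" in first_by_name]
print(in_list)  # ['/data/a/file1.dbf', '/data/a/file2.dbf']
```

One pass over the files builds the lookup, then each name check is a constant-time dictionary probe.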
| 1 | 2016-07-29T18:05:49Z | [
"python",
"while-loop"
] |
is while loop infinite | 38,664,876 | <p>I'm trying to build a list of inputs from several files. I need the list to consist only of the first file with a given basename. So if a, b and c were folders, and there were "C:\a\file1.ext", "C:\b\file1" and "C:\c\file1" and I had a list of names file1, file2 and so on, I would want the script to find file1 in C:\a\file1.ext and then move on to the next name in the list. In some cases file-x.ext may not be in C:\a or C:\c or C:\b. </p>
<p>I'm setting a condition to count the file once its base name is found in the list. Once the count = 1 it exits the while loop, resets the count to 0, and goes to the next name in the list, adding only the first instance of the file name to a new input list. The code I have seems to keep running, so I think I have an infinite inner loop, but I thought setting count to 0 outside of the while loop would prevent this:</p>
<pre><code>count = 0
for name in dbfOnlyLst:
    for file in fileLst:
        while count < 1:
            if os.path.basename(file) == name+".dbf":
                values.add(file)
                count += 1
        count = 0
inList = list(values)
</code></pre>
| -1 | 2016-07-29T17:58:42Z | 38,665,222 | <p>I know it has been mentioned in comments, but I figured I would demonstrate it.
Your loop keeps going until count is no longer less than 1. This only happens when a file is found with ".dbf" in it, because that leads to count += 1. If a file with ".dbf" is not found, the loop will keep running.</p>
<p>For example...</p>
<pre><code>count = 0
x = 12 # my imitation of finding a file with .dbf in it
while count < 1:
    if x == 12:
        print("yes")
        count += 1
... 'yes'
</code></pre>
<p>This is when the loop would end. However, if x didn't equal 12...</p>
<pre><code>count = 0
x = 8
while count < 1:
    if x == 12:
        print("yes")
        count += 1
    else:
        print("no") # will show you are stuck in the loop
... 'no'
... 'no'
... 'no'
... 'no'
# And so on...
</code></pre>
<p>I would recommend what @xgord said about avoiding the while loop. I answered this just so you could see what's going on "behind the scenes". I hope this helped.</p>
| 0 | 2016-07-29T18:20:27Z | [
"python",
"while-loop"
] |
TensorFlow placement algorithm | 38,664,942 | <p>I would like to know when the placement algorithm of TensorFlow (as described in the white paper) gets actually employed. All examples for distributing TensorFlow that I have seen so far seem to specify manually where the nodes should be executed on, using "tf.device()".</p>
| 1 | 2016-07-29T18:02:34Z | 38,666,008 | <p>The dynamic placement algorithm described in Section 3.2.1 of the <a href="http://download.tensorflow.org/paper/whitepaper2015.pdf" rel="nofollow">TensorFlow whitepaper</a> was not included in the open-source release. Instead, the "simple placer" (whose implementation can be found in <a href="https://github.com/tensorflow/tensorflow/blob/4183d0a284b2b682b78937ac814b1daa926be032/tensorflow/core/common_runtime/simple_placer.cc" rel="nofollow"><code>simple_placer.cc</code></a>) is used, but it requires some explicit annotations (via <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/framework.html#device" rel="nofollow"><code>tf.device()</code></a>) to make yield an efficient placement. Higher-level constructs like <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/train.html#replica_device_setter" rel="nofollow"><code>tf.train.replica_device_setter()</code></a> wrap <code>tf.device()</code> to specify common policies such as "shard the variables across parameter servers, and otherwise put all ops on the worker device," and we use this extensively in <a href="https://www.tensorflow.org/versions/r0.9/how_tos/distributed/index.html" rel="nofollow">distributed training</a>.</p>
<p>In practice we have found that a small set of annotations usually yields a more efficient placement than the dynamic placer will determine, but improving the placement algorithm remains an area of active research.</p>
| 3 | 2016-07-29T19:12:04Z | [
"python",
"algorithm",
"tensorflow",
"distributed",
"placement"
] |
Python/Django database username and password? | 38,664,958 | <p>I'm learning python and I want to understand the database section and when setting up for postgresql database.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/settings/#databases" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/settings/#databases</a></p>
<p>Are all the values necessary? </p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'mydatabase',
'USER': 'mydatabaseuser',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
</code></pre>
<p>Specifically <code>USER</code>, <code>PASSWORD</code>, <code>HOST</code>, <code>PORT</code>? Is <code>USER</code> and <code>PASSWORD</code> values that we can create in django settings.py? Or is this the actual <code>USER/PASSWORD</code> of the database? Also, HOST is currently <code>127.0.0.1</code> for localhost, but when deploying to production, do I change this to the domain name (<a href="http://www.example.com" rel="nofollow">http://www.example.com</a>)? And <code>PORT</code>, is it necessary?</p>
| 0 | 2016-07-29T18:03:43Z | 38,665,182 | <blockquote>
<p><strong>Yes!</strong> <strong><em>All of those values are necessary</em></strong>: there is no way you could connect to the database unless they are
specified.</p>
</blockquote>
<p><strong>Yes!</strong> <em><code>user</code></em> and <em><code>password</code></em> are the actual credentials of your PostgreSQL database.</p>
<p><strong><em>Regarding deployment</em></strong>, you should set the correct IP/host where your production database is located. That might be example.com or xxx.xxx.xxx.xxx.</p>
<p>If you are <strong><em>concerned about security</em></strong> (revealing your database credentials in the source), you could put your credentials in a secure config file like <strong>.env</strong> and use <strong><a href="https://github.com/theskumar/python-dotenv" rel="nofollow">this</a></strong> library to work with your config file.</p>
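<p>A sketch of reading the credentials from environment variables instead of hard-coding them (the <code>MYAPP_*</code> variable names are made up for illustration):</p>

```python
import os

# in settings.py: read each credential from the environment, with a local default
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('MYAPP_DB_NAME', 'mydatabase'),
        'USER': os.environ.get('MYAPP_DB_USER', 'mydatabaseuser'),
        'PASSWORD': os.environ.get('MYAPP_DB_PASSWORD', ''),
        'HOST': os.environ.get('MYAPP_DB_HOST', '127.0.0.1'),
        'PORT': os.environ.get('MYAPP_DB_PORT', '5432'),
    }
}
print(DATABASES['default']['HOST'])
```

In production you then export the real values in the server's environment and commit no secrets to the repository.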
| 2 | 2016-07-29T18:17:40Z | [
"python",
"django",
"postgresql"
] |
Python/Django database username and password? | 38,664,958 | <p>I'm learning python and I want to understand the database section and when setting up for postgresql database.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/settings/#databases" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/settings/#databases</a></p>
<p>Are all the values necessary? </p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'mydatabase',
'USER': 'mydatabaseuser',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
</code></pre>
<p>Specifically <code>USER</code>, <code>PASSWORD</code>, <code>HOST</code>, <code>PORT</code>? Is <code>USER</code> and <code>PASSWORD</code> values that we can create in django settings.py? Or is this the actual <code>USER/PASSWORD</code> of the database? Also, HOST is currently <code>127.0.0.1</code> for localhost, but when deploying to production, do I change this to the domain name (<a href="http://www.example.com" rel="nofollow">http://www.example.com</a>)? And <code>PORT</code>, is it necessary?</p>
| 0 | 2016-07-29T18:03:43Z | 38,665,944 | <blockquote>
<p>Specifically USER, PASSWORD, HOST, PORT? Is USER and PASSWORD values
that we can create in django settings.py? Or is this the actual
USER/PASSWORD of the database? Also, HOST is currently 127.0.0.1 for
localhost, but when deploying to production, do I change this to the
domain name (<a href="http://www.example.com" rel="nofollow">http://www.example.com</a>)? And PORT, is it necessary?</p>
</blockquote>
<p>The <code>USER</code> and <code>PASSWORD</code> is what you configure in the database, then you enter it in the file.</p>
<p>The <code>HOST</code> is the IP address or hostname where the server is running. In production, you have to check with your hosting provider for the correct details; it is rare that it is your domain name.</p>
<p>The <code>PORT</code> you only need to adjust if its different than the default port (<code>5432</code>). If it is different, your host will tell you.</p>
<p>Finally, keep in mind that <code>http://www.example.com</code> is not a domain name, this the the complete URL. The domain name is <code>example.com</code>, and the host is <code>www</code>, the fully qualified <strong>hostname</strong> is <code>www.example.com</code>.</p>
| 1 | 2016-07-29T19:07:43Z | [
"python",
"django",
"postgresql"
] |
Python 3(.5.2) Tkinter | 38,665,059 | <p>I need help. I was developing an interface with tkinter. At one point, when I insert a button in the window, it does not work as it should... in fact, it does not work at all. It gives me this error when I press the button:</p>
<pre><code> File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tkinter/__init__.py", line 1550, in __call__
return self.func(*args)
File "/Users/Edoardo/Desktop/Progetti/Programming Projects/Python Projects/Encoder 1.0 alpha.py", line 277, in <lambda>
encode = Button(text="ENCODE", command=lambda: encode())
TypeError: 'Button' object is not callable
</code></pre>
<p>This is the code (some parts are missing cause, before write the entire program, i want make sure that it works):</p>
<pre><code>from tkinter import *
import tkinter.messagebox
# encode = {char : code}
utf8int = {
"!": "\\33",
"\"": "\\34",
"#": "\\35",
"$": "\\36",
"%": "\\37",
"&": "\\38",
"'": "\\39",
"(": "\\40",
")": "\\41",
"*": "\\42",
"+": "\\43",
",": "\\44",
"-": "\\45",
".": "\\46",
"/": "\\47",
"0": "\\48",
"1": "\\49",
"2": "\\50",
"3": "\\51",
"4": "\\52",
"5": "\\53",
"6": "\\54",
"7": "\\55",
"8": "\\56",
"9": "\\57",
":": "\\58",
";": "\\59",
"<": "\\60",
"=": "\\61",
">": "\\62",
"?": "\\63",
"@": "\\64",
"A": "\\65",
"B": "\\66",
"C": "\\67",
"D": "\\68",
"E": "\\69",
"F": "\\70",
"G": "\\71",
"H": "\\72",
"I": "\\73",
"J": "\\74",
"K": "\\75",
"L": "\\76",
"M": "\\77",
"N": "\\78",
"O": "\\79",
"P": "\\80",
"Q": "\\81",
"R": "\\82",
"S": "\\83",
"T": "\\84",
"U": "\\85",
"V": "\\86",
"W": "\\87",
"X": "\\88",
"Y": "\\89",
"Z": "\\90",
"[": "\\91",
"\\": "\\92",
"]": "\\93",
"^": "\\94",
"_": "\\95",
"`": "\\96",
"a": "\\97",
"b": "\\98",
"c": "\\99",
"d": "\\100",
"e": "\\101",
"f": "\\102",
"g": "\\103",
"h": "\\104",
"i": "\\105",
"j": "\\106",
"k": "\\107",
"l": "\\108",
"m": "\\109",
"n": "\\110",
"o": "\\111",
"p": "\\112",
"q": "\\113",
"r": "\\114",
"s": "\\115",
"t": "\\116",
"u": "\\117",
"v": "\\118",
"w": "\\119",
"x": "\\120",
"y": "\\121",
"z": "\\122",
"{": "\\123",
"|": "\\124",
"}": "\\125",
"~": "\\126",
"€": "\\128",
"‚": "\\130",
"ƒ": "\\131",
"„": "\\132",
"…": "\\133",
"†": "\\134",
"‡": "\\135",
"ˆ": "\\136",
"‰": "\\137",
"Š": "\\138",
"‹": "\\139",
"Œ": "\\140",
"Ž": "\\142",
"‘": "\\145",
"’": "\\146",
"“": "\\147",
"”": "\\148",
"•": "\\149",
"–": "\\150",
"—": "\\151",
"˜": "\\152",
"™": "\\153",
"š": "\\154",
"›": "\\155",
"œ": "\\156",
"ž": "\\158",
"Ÿ": "\\159",
"¡": "\\161",
"¢": "\\162",
"£": "\\163",
"¤": "\\164",
"¥": "\\165",
"¦": "\\166",
"§": "\\167",
"¨": "\\168",
"©": "\\169",
"ª": "\\170",
"«": "\\171",
"¬": "\\172",
"®": "\\174",
"¯": "\\175",
"°": "\\176",
"±": "\\177",
"²": "\\178",
"³": "\\179",
"´": "\\180",
"µ": "\\181",
"¶": "\\182",
"·": "\\183",
"¸": "\\184",
"¹": "\\185",
"º": "\\186",
"»": "\\187",
"¼": "\\188",
"½": "\\189",
"¾": "\\190",
"¿": "\\191",
"À": "\\192",
"Á": "\\193",
"Â": "\\194",
"Ã": "\\195",
"Ä": "\\196",
"Å": "\\197",
"Æ": "\\198",
"Ç": "\\199",
"È": "\\200",
"É": "\\201",
"Ê": "\\202",
"Ë": "\\203",
"Ì": "\\204",
"Í": "\\205",
"Î": "\\206",
"Ï": "\\207",
"Ð": "\\208",
"Ñ": "\\209",
"Ò": "\\210",
"Ó": "\\211",
"Ô": "\\212",
"Õ": "\\213",
"Ö": "\\214",
"×": "\\215",
"Ø": "\\216",
"Ù": "\\217",
"Ú": "\\218",
"Û": "\\219",
"Ü": "\\220",
"Ý": "\\221",
"Þ": "\\222",
"ß": "\\223",
"à": "\\224",
"á": "\\225",
"â": "\\226",
"ã": "\\227",
"ä": "\\228",
"å": "\\229",
"æ": "\\230",
"ç": "\\231",
"è": "\\232",
"é": "\\233",
"ê": "\\234",
"ë": "\\235",
"ì": "\\236",
"í": "\\237",
"î": "\\238",
"ï": "\\239",
"ð": "\\240",
"ñ": "\\241",
"ò": "\\242",
"ó": "\\243",
"ô": "\\244",
"õ": "\\245",
"ö": "\\246",
"÷": "\\247",
"ø": "\\248",
"ù": "\\249",
"ú": "\\250",
"û": "\\251",
"ü": "\\252",
"ý": "\\253",
"þ": "\\254",
"ÿ": "\\255",
"Ā": "\\256",
"ā": "\\257",
"Ă": "\\258",
"ă": "\\259",
"Ą": "\\260",
"ą": "\\261",
"Ć": "\\262",
"ć": "\\263",
"Ĉ": "\\264",
"ĉ": "\\265",
"Ċ": "\\266",
"ċ": "\\267",
"Č": "\\268",
"č": "\\269",
"Ď": "\\270",
"ď": "\\271",
"Đ": "\\272",
"đ": "\\273"
}
utf8hex = {
}
Ascii = {
}
binary = {
}
root = Tk()
root.resizable(0, 0)
root.title("Encoder 1.0 (Alpha)")
v = IntVar()
infoLabel = Label(text="This is an encoder, please choose the encode type and enter a message ")
infoLabel.grid(column=1, columnspan=3, row=1)
RB_UTF8_int = Radiobutton(text="UTF-8 (int)", variable=v, value="utf-8 int")
RB_UTF8_int.grid(column=1, row=2)
RB_UTF8_hex = Radiobutton(text="UTF-8 (hex)", variable=v, value="utf-8 hex")
RB_UTF8_hex.grid(column=2, row=2)
RB_Ascii = Radiobutton(text="Ascii", variable=v, value="Ascii")
RB_Ascii.grid(column=1, row=3)
RB_UTF16 = Radiobutton(text="Binary", variable=v, value="binary")
RB_UTF16.grid(column=2, row=3)
message = Entry()
message.grid(column=1, columnspan=4, row=4)
encode = Button(text="ENCODE", command=lambda: encode())
encode.grid(column=1, columnspan=4, row=5)
root.mainloop()
def encode():
errors = 0
decoded_message = []
encoded_message = []
for i in message.get():
decoded_message.append(i)
if message.get() != "":
if v == "utf-8 int":
for i in decoded_message:
for char, code in utf8int:
if char == i:
encoded_message.append(code)
else:
encoded_message.append("(error)")
errors += 1
pass
decoded_message1 = str(decoded_message)
decoded_message2 = decoded_message1.replace("[", "")
decoded_message3 = decoded_message2.replace("]", "")
decoded_message4 = decoded_message3.replace(", ", "")
decoded_message5 = decoded_message4.replace("'", "")
if tkinter.messagebox.askyesno(title="Succesfully Generated", message="Your code is succesfully "
"generated\n"
"with"+str(errors)+"errors.\n"
"This is your code:\n" +
decoded_message5 +
"Copy it on the clipboard?"):
root.clipboard_clear()
root.clipboard_append(decoded_message5)
else:
pass
elif v == "utf-8 hex":
for i in decoded_message:
for char, code in utf8hex:
if char == i:
encoded_message.append(code)
else:
errors += 1
pass
elif v == "Ascii":
for i in decoded_message:
for char, code in Ascii:
if char == i:
encoded_message.append(code)
else:
errors += 1
pass
elif v == "binary":
for i in decoded_message:
for char, code in binary:
if char == i:
encoded_message.append(code)
else:
errors += 1
pass
else:
tkinter.messagebox.showerror(title="FATAL ERROR", message="Please, select an encode type.")
else:
tkinter.messagebox.showerror(title="ERROR 404", message="ERROR 404: message not found!")
</code></pre>
| 0 | 2016-07-29T18:09:58Z | 38,665,155 | <p>You have a definition problem there: your callback definition is never reached, as Tkinter's <code>mainloop</code> is called before it. When the Button is pressed, it tries to call the button object itself, not the <code>encode</code> function defined below.</p>
<p>To get it working as is, simply move your <code>root.mainloop()</code> to the last line of the script.</p>
<p>Besides that, you are using the same name for the Button and for your callback function. That would work (and may have worked in other versions of the program) if you start Tkinter's loop after defining the callback function, and if you don't need access to the Button itself afterwards. You should use different names.</p>
<p>But better than different names: there is an organisation problem in putting your interface definition code at the program's top level instead of inside a function.</p>
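As a side note, the error itself can be reproduced without Tkinter at all, since it is purely a name-rebinding issue. A minimal sketch (the names here are made up for illustration):

```python
# Minimal reproduction of the naming collision, no Tkinter involved:
# once the name `encode` is rebound to a non-function, calling encode()
# raises "'...' object is not callable".
def encode():
    return "encoded"

encode = "I am not a function anymore"  # analogous to: encode = Button(...)

try:
    result = encode()
except TypeError as err:
    result = "TypeError: {}".format(err)

print(result)
```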
| 1 | 2016-07-29T18:16:06Z | [
"python",
"python-3.x",
"button",
"tkinter",
"messagebox"
] |
Calling python script from a Bash script | 38,665,093 | <p>I'm trying to call a python script from a bash script. I get import errors only if I try to run the .py from the bash script. If I run with python myscript.py everything is fine. This is my bash script: </p>
<pre><code>while true; do
python script.py
echo "Restarting...";
sleep 3;
done
</code></pre>
<p>The error I get:</p>
<pre><code>Traceback (most recent call last):
File "script.py", line 39, in <module>
from pokemongo_bot import logger
File "/Users/Paolo/Downloads/folder/t/__init__.py", line 4, in <module>
import googlemaps
ImportError: No module named googlemaps
</code></pre>
| 1 | 2016-07-29T18:11:55Z | 38,665,366 | <p>Your problem is in the script itself; your bash code is OK. If you have no problem running <code>python script.py</code> from bash directly, you should test whether both calls use the same interpreter. Check the shebang line in the Python script (it is the first line of the file, for example <code>#!/usr/bin/env python</code> or <code>#!/usr/bin/python</code>) and compare it to the output of the <code>which python</code> command. If the output is different, try to change or add the shebang line in the file. When you call the file directly in bash with <code>./some_script.py</code>, bash reads the first line, and if it is a shebang line it will execute that specific interpreter for the file. My point is that if two different interpreters are used for the direct call <code>python script.py</code> and the indirect call <code>./script.py</code>, one of them may not have the proper Python modules.</p>
<p>Howto code:</p>
<pre><code>$ which python
/usr/local/bin/python
</code></pre>
<p>So the second line is the path for your interpreter to build a shebang from it write in the first line of your script file this.</p>
<pre><code>#!/usr/local/bin/python
</code></pre>
| 1 | 2016-07-29T18:29:31Z | [
"python",
"bash",
"import"
] |
Calling python script from a Bash script | 38,665,093 | <p>I'm trying to call a python script from a bash script. I get import errors only if I try to run the .py from the bash script. If I run with python myscript.py everything is fine. This is my bash script: </p>
<pre><code>while true; do
python script.py
echo "Restarting...";
sleep 3;
done
</code></pre>
<p>The error I get:</p>
<pre><code>Traceback (most recent call last):
File "script.py", line 39, in <module>
from pokemongo_bot import logger
File "/Users/Paolo/Downloads/folder/t/__init__.py", line 4, in <module>
import googlemaps
ImportError: No module named googlemaps
</code></pre>
| 1 | 2016-07-29T18:11:55Z | 38,665,779 | <p>There is more to this story that isn't in your question.
Your PYTHONPATH variable is getting confused somewhere along the way.<br>
Insert a couple quick test lines:</p>
<p>in bash:</p>
<pre><code>echo $PYTHONPATH
</code></pre>
<p>in your python:</p>
<pre><code>import os
print os.environ["PYTHONPATH"]
</code></pre>
<p>At some point, the path to googlemaps got lost.</p>
| 3 | 2016-07-29T18:55:53Z | [
"python",
"bash",
"import"
] |
Python py2exe Not Including `os` Module | 38,665,237 | <p>I have a python program which imports <code>os</code> so that I can retrieve the application's path (i.e. <code>os.path.dirname(os.path.realpath(__file__))</code>). I have been using py2exe to make this python file into an exe, and I have had no issues until I started to use <code>os</code>. Here is the command window (notice it says <code>1 missing Modules</code>):
<a href="http://i.stack.imgur.com/YIw2Y.png" rel="nofollow"><img src="http://i.stack.imgur.com/YIw2Y.png" alt="enter image description here"></a></p>
<p>When I try to open the <code>.exe</code> that gets created, it closes on me immediately. All the other imports seem to work fine, and they are: <code>win32api, win32con, time, msvcrt, win32gui, re</code>. Again, the <code>.exe</code> stops working properly when I import <code>os</code> but the Python project itself works fine. What can I do to fix this? Thanks.</p>
| 0 | 2016-07-29T18:21:16Z | 38,666,635 | <p>Use cx-Freeze for create a .exe on windows instead of py2exe.</p>
| 0 | 2016-07-29T20:01:58Z | [
"python",
"py2exe"
] |
Python subprocess called from Matlab fails | 38,665,247 | <p>I've hit a problem in trying to join some code from different sources in Matlab and I don't really know how to approach it. Essentially, I have some Python code for a compression algorithm called from the command line, which itself uses subprocess to run and communicate with C++ code compiled to a binary.</p>
<p>The function in Python (which is part of a larger object) looks like this:</p>
<pre><code>def __extractRepeats(self, repeatClass):
process = subprocess.Popen(["./repeats1/repeats11", "-i", "-r"+repeatClass, "-n2", "-psol"],stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
process.stdin.write(' '.join(map(str,self.__concatenatedDAG)))
text_file = ''
while process.poll() is None:
output = process.communicate()[0].rstrip()
text_file += output
process.wait()
repeats=[]
firstLine = False
for line in text_file.splitlines():
if firstLine == False:
firstLine = True
continue
repeats.append(line.rstrip('\n'))
return repeats
</code></pre>
<p>In order to minimise porting issues, I decided to do the integration with Matlab entirely indirectly through the system command, by putting together a script with all of the components and running it by</p>
<pre><code>system('./temp_script')
</code></pre>
<p>where temp_script is executable and looks like this:</p>
<pre><code>cd /home/ben/Documents/MATLAB/StructureDiscovery/+sd/Lexis
python Lexis.py -f i /home/ben/Documents/MATLAB/StructureDiscovery/+sd/Lexis/aabb.txt >> /home/ben/Documents/MATLAB/StructureDiscovery/+sd/Lexis/lexis_results.txt
</code></pre>
<p>Now I'm running this in Ubuntu 16.04, where running the script from terminal works. Running the same script from Matlab, however, gives me the error</p>
<pre><code> Traceback (most recent call last):
File "Lexis.py", line 762, in <module>
g.GLexis(quietLog, rFlag, functionFlag, costWeight)
File "Lexis.py", line 191, in GLexis
(maximumRepeatGainValue, selectedRepeatOccs) = self.__retreiveMaximumGainRepeat(normalRepeatType, CostFunction.EdgeCost)
File "Lexis.py", line 242, in __retreiveMaximumGainRepeat
repeats = self.__extractRepeats(repeatClass)
File "Lexis.py", line 302, in __extractRepeats
process.stdin.write(' '.join(map(str,self.__concatenatedDAG)))
IOError: [Errno 32] Broken pipe
</code></pre>
<p>or the error</p>
<pre><code> File "Lexis.py", line 251, in __retreiveMaximumGainRepeat
idx = map(int,repeatStats[2][1:-1].split(','))[0]
ValueError: invalid literal for int() with base 10: 'ersio'
</code></pre>
<p>and I haven't been able to figure out when I get which one.</p>
<p>the relevant snippet for repeatStats is</p>
<pre><code> repeats = self.__extractRepeats(repeatClass)
for r in repeats: #Extracting maximum repeat
repeatStats = r.split()
idx = map(int,repeatStats[2][1:-1].split(','))[0]
</code></pre>
<p>I don't really know what's different between Matlab calling something via system and calling it directly from terminal, so I don't know what's going wrong. On OSX 10.11, exactly the same code works.</p>
<p>Does anyone know about the inner workings of Matlab's system command and why it might fail to allow Python to call a subprocess?</p>
<p>Any help would be appreciated!</p>
| 1 | 2016-07-29T18:21:58Z | 38,671,699 | <pre><code>repeats = self.__extractRepeats(repeatClass)
for r in repeats: #Extracting maximum repeat
repeatStats = r.split()
idx = map(int,repeatStats[2][1:-1].split(','))[0]
</code></pre>
<p>Assuming your <code>repeatStats[2]</code> is <code>(ersio,1)</code>:</p>
<p>Then <code>repeatStats[2][1:-1]</code> becomes <code>ersio,1</code>.</p>
<p>Then <code>repeatStats[2][1:-1].split(',')</code> results in the list <code>['ersio', '1']</code>.</p>
<p>You are passing that entire list to <code>map</code>:</p>
<pre><code>idx = map(int,repeatStats[2][1:-1].split(','))[0]
# it looks like this: idx = map(int, ['ersio', '1'])[0]
# map applies int() to every item and fails on 'ersio',
# which does not contain any numeric characters.
</code></pre>
<p>Instead please try below:</p>
<pre><code>idx = map(int,repeatStats[2][1:-1].split(',')[1])[0]
# i.e. index into the split list first, so int() is only
# applied to the item that actually contains a number.
</code></pre>
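To see the failure concretely, here is a small sketch using the hypothetical field value <code>(ersio,1)</code> (made up for illustration; the real value comes from the repeats binary):

```python
# Walk through the slicing with a hypothetical value of repeatStats[2]
repeat_stats_field = "(ersio,1)"
inner = repeat_stats_field[1:-1]   # strips surrounding chars -> 'ersio,1'
parts = inner.split(',')           # ['ersio', '1']

# int() on the first field fails, reproducing the error in the question
try:
    [int(p) for p in parts]
except ValueError as err:
    print("ValueError:", err)

idx = int(parts[1])                # converting only the numeric field works
print(idx)
```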
| 0 | 2016-07-30T07:56:28Z | [
"python",
"matlab",
"ubuntu",
"subprocess"
] |
django : LookupError: App ' doesn't have a 'models' model | 38,665,252 | <p><a href="http://i.stack.imgur.com/PejnM.png" rel="nofollow"><img src="http://i.stack.imgur.com/PejnM.png" alt="enter image description here"></a></p>
<p>I'm working through <a href="https://bixly.com/blog/awesome-forms-django-crispy-forms/" rel="nofollow">https://bixly.com/blog/awesome-forms-django-crispy-forms/</a> , trying to set up a bootstrap 3 form using django crispy forms.</p>
<p>in app1/models.py, I have set up my form:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
from django.contrib.auth.models import AbstractUser
from django import forms
class User(AbstractUser):
# Address
contact_name = models.CharField(max_length=50)
contact_address = models.CharField(max_length=50)
contact_email = models.CharField(max_length=50)
contact_phone = models.CharField(max_length=50)
......
</code></pre>
<p>Please note I have not created any db tables yet. I don't need them at this stage. I'm just trying to get the forms working. When I run this I get:</p>
<pre><code>Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x02B63EF0>
Traceback (most recent call last):
File "C:\lib\site-packages\django\utils\autoreload.py", line 222, in wrapper
fn(*args, **kwargs)
File "C:\lib\site-packages\django\core\management\commands\runserver.py", line 105, in inner_run
self.validate(display_num_errors=True)
File "C:\lib\site-packages\django\core\management\base.py", line 362, in validate
return self.check(app_configs=app_configs, display_num_errors=display_num_errors)
File "C:\lib\site-packages\django\core\management\base.py", line 371, in check
all_issues = checks.run_checks(app_configs=app_configs, tags=tags)
File "C:\lib\site-packages\django\core\checks\registry.py", line 59, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\lib\site-packages\django\contrib\auth\checks.py", line 12, in check_user_model
cls = apps.get_model(settings.AUTH_USER_MODEL)
File "C:\lib\site-packages\django\apps\registry.py", line 202, in get_model
return self.get_app_config(app_label).get_model(model_name.lower())
File "C:\lib\site-packages\django\apps\config.py", line 166, in get_model
"App '%s' doesn't have a '%s' model." % (self.label, model_name))
LookupError: App 'app1' doesn't have a 'models' model.
</code></pre>
<p>How can I fix this?</p>
| 0 | 2016-07-29T18:22:26Z | 38,665,300 | <p>The <code>AUTH_USER_MODEL</code> setting should be of the form <code><app name>.<model></code>. Your model name is <code>User</code>, not <code>model</code>, so your setting should be:</p>
<pre><code>AUTH_USER_MODEL = 'app1.User'
</code></pre>
<p>You should also remove the following <code>User</code> import from your <code>models.py</code>. You only have to import <code>AbstractUser</code>.</p>
<pre><code>from django.contrib.auth.models import User
</code></pre>
| 5 | 2016-07-29T18:25:29Z | [
"python",
"django"
] |
Python win32api get "stack" of windows | 38,665,380 | <p>I'm looking for a way to find out what order windows are open on my desktop in order to tell what parts of what windows are visible to the user. </p>
<p>Say, in order, I open up a maximized chrome window, a maximized notepad++ window, and then a command prompt that only covers a small portion of the screen. Is there a way using the win32api (or possibly other library) that can tell me the stack of windows open so I can take the window dimensions and find out what is visible? I already know how to get which window has focus and the top-level window, but I'm looking for more info than that.</p>
<p>In the example I mentioned above, I'd return that the full command prompt is visible but in the places it isn't, the notepad++ window is visible for example. No part of the chrome window would be visible.</p>
| 0 | 2016-07-29T18:30:09Z | 38,867,461 | <p>This does not yet have any logic deciding if windows are overlaid but it does return a dictionary of existing windows with info of their title, visibility, minimization, size and the next window handle.</p>
<pre><code>import win32gui
import win32con
def enum_handler(hwnd, results):
results[hwnd] = {
"title":win32gui.GetWindowText(hwnd),
"visible":win32gui.IsWindowVisible(hwnd),
"minimized":win32gui.IsIconic(hwnd),
"rectangle":win32gui.GetWindowRect(hwnd), #(left, top, right, bottom)
"next":win32gui.GetWindow(hwnd, win32con.GW_HWNDNEXT) # Window handle to below window
}
def get_windows():
enumerated_windows = {}
win32gui.EnumWindows(enum_handler, enumerated_windows)
return enumerated_windows
if __name__ == "__main__":
windows = get_windows()
for window_handle in windows:
if windows[window_handle]["title"] != "":  # equality check, not identity
print "{}, {}, {}, {}".format(windows[window_handle]["minimized"],
windows[window_handle]["rectangle"],
windows[window_handle]["next"],
windows[window_handle]["title"])
</code></pre>
<p>Microsoft MSDN has a good article on z-order info with GetWindow() and GW_HWNDNEXT:
<a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms633515(v=vs.85).aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/windows/desktop/ms633515(v=vs.85).aspx</a></p>
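Since each entry records the handle of the window below it, the stacking order could be reconstructed by following those links. A sketch with fake handles (<code>z_order</code> and the sample dict are made up for illustration; no win32 calls involved):

```python
# Hypothetical sketch: given the dict returned by get_windows(), follow the
# "next" handle from the topmost window downwards to rebuild the z-order.
def z_order(windows, top_handle):
    order = []
    handle = top_handle
    while handle in windows:
        order.append(handle)
        handle = windows[handle]["next"]
    return order

# fake data standing in for real window handles
fake = {
    10: {"title": "cmd", "next": 20},
    20: {"title": "notepad++", "next": 30},
    30: {"title": "chrome", "next": 0},  # 0 terminates the walk
}
print(z_order(fake, 10))
```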
| 0 | 2016-08-10T08:00:10Z | [
"python",
"pywin32"
] |
typeerror exception not working | 38,665,383 | <p>I'm trying to complete an exercise asking me to use the TypeError exception in Python to account for strings when integers are required. The example is simple, I ask the user for two numbers and then add them. I want to use a try block to handle when the user accidentally puts in a string instead of an int.
What I get back is a ValueError traceback saying something about base of 10.</p>
<p>Here's the code:</p>
<pre><code>print ("Give me two numbers, and I'll add them.")
print ("Enter 'q' to quit.")
while True:
try:
num1 = input("\nEnter first number: ")
if num1 == 'q':
break
except TypeError:
print ("Please enter a number not a letter.")
try:
num2 = input("\nEnter second number: ")
if num2 == 'q':
break
except TypeError:
print ("Please enter a number not a letter.")
sum = int(num1) + int(num2)
print ("The sum of your two numbers is: " + str(sum))
</code></pre>
<p>Here's the error message:</p>
<pre><code>Traceback (most recent call last):
File "chapt10 - files and exceptions.py", line 212, in <module>
sum = int(num1) + int(num2)
ValueError: invalid literal for int() with base 10: 'd'
</code></pre>
| 0 | 2016-07-29T18:30:17Z | 38,665,488 | <p>It's great you already have a <code>try/except</code> block, but the problem is the <em>construct</em> is not being correctly used. If a letter is entered instead of a number, that will be a <code>ValueError</code> not a <code>TypeError</code>. That's why you have a <code>ValueError</code> being raised by your code when a letter is entered; that exception class is not being handled.</p>
<p>More importantly, the <code>try</code> block should actually include the <em>operation</em> that is likely to raise the error:</p>
<pre><code>print ("Give me two numbers, and I'll add them.")
print ("Enter 'q' to quit.")
while True:
num1 = input("\nEnter first number: ")
num2 = input("\nEnter second number: ")
if num1 == 'q' or num2 == 'q':
break
try:
num1 = int(num1) # casting to int will likely raise an error
num2 = int(num2)
except ValueError:
print ("One or both of the entries is not a number. Please try again")
continue
total = num1 + num2
print ("The sum of your two numbers is: " + str(total))
</code></pre>
<p>On a side note, using <code>sum</code> as the name of variable is not good idea as <code>sum</code> is a Python builtin.</p>
| 0 | 2016-07-29T18:36:28Z | [
"python",
"exception",
"typeerror"
] |
How to read in an edge list to make a scipy sparse matrix | 38,665,388 | <p>I have a large file where each line has a pair of 8 character strings. Something like:</p>
<pre><code>ab1234gh iu9240gh
</code></pre>
<p>on each line.</p>
<p>This file really represents a graph and each string is a node id. I would like to read in the file and directly make a scipy sparse adjacency matrix. I will then run PCA on this matrix using one of the many tools available in python</p>
<p>Is there a neat way to do this or do I need to first make a graph in RAM and then convert that into a sparse matrix? As the file is large I would like to avoid intermediate steps if possible. </p>
<p>Ultimately I will feed the sparse adjacency matrix into <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD</a> .</p>
| 2 | 2016-07-29T18:30:31Z | 38,667,644 | <p>I think this is a regular task in <code>sklearn</code>, so there must be some tool in the package that does this, or an answer in other SO questions. We need to add the correct tag.</p>
<p>But just working from my knowledge of <code>numpy</code> and <code>sparse</code>, where what I'd do:</p>
<p>Make a sample 2d array - N rows, 2 columns with character values:</p>
<pre><code>In [638]: A=np.array([('a','b'),('b','d'),('a','d'),('b','c'),('d','e')])
In [639]: A
Out[639]:
array([['a', 'b'],
['b', 'd'],
['a', 'd'],
['b', 'c'],
['d', 'e']],
dtype='<U1')
</code></pre>
<p>Use <code>np.unique</code> to identify the unique strings, and as a bonus a map from those strings to the original array. This is the workhorse of the task.</p>
<pre><code>In [640]: k1,k2,k3=np.unique(A,return_inverse=True,return_index=True)
In [641]: k1
Out[641]:
array(['a', 'b', 'c', 'd', 'e'],
dtype='<U1')
In [642]: k2
Out[642]: array([0, 1, 7, 3, 9], dtype=int32)
In [643]: k3
Out[643]: array([0, 1, 1, 3, 0, 3, 1, 2, 3, 4], dtype=int32)
</code></pre>
<p>I can reshape that <code>inverse</code> array to identify the row and col for each entry in <code>A</code>. </p>
<pre><code>In [644]: rows,cols=k3.reshape(A.shape).T
In [645]: rows
Out[645]: array([0, 1, 0, 1, 3], dtype=int32)
In [646]: cols
Out[646]: array([1, 3, 3, 2, 4], dtype=int32)
</code></pre>
<p>with those it is trivial to construct a sparse matrix that has <code>1</code> at each 'intersection`. </p>
<pre><code>In [648]: M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols)))
In [649]: M
Out[649]:
<4x5 sparse matrix of type '<class 'numpy.int32'>'
with 5 stored elements in COOrdinate format>
In [650]: M.A
Out[650]:
array([[0, 1, 0, 1, 0],
[0, 0, 1, 1, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 1]])
</code></pre>
<p>the first row, <code>a</code> has values in the 2nd and 4th col, <code>b</code> and <code>d</code>. and so on.</p>
<p>============================</p>
<p>Originally I had:</p>
<pre><code>In [648]: M=sparse.coo_matrix((np.ones(k1.shape,int),(rows,cols)))
</code></pre>
<p>This is wrong. The <code>data</code> array should match <code>rows</code> and <code>cols</code> in shape. Here it didn't raise an error because <code>k1</code> happens to have the same size. But with a different mix of unique values it could raise an error.</p>
<p>====================</p>
<p>This approach assumes the whole database, <code>A</code>, can be loaded into memory. <code>unique</code> probably requires similar memory usage. Initially a <code>coo</code> matrix might not increase the memory usage, since it will use the arrays provided as parameters. But any calculations and/or conversion to <code>csr</code> or other format will make further copies.</p>
<p>I can imagine getting around memory issues by loading the database in chunks and using some other structure to get the unique values and mapping. You might even be able to construct a <code>coo</code> matrix from chunks. But sooner or later you'll hit memory issues. The scikit code will be making one or more copies of that sparse matrix.</p>
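A hypothetical sketch of that chunked idea (the helper and sample data are made up for illustration): build the id-to-index map incrementally with a plain dict, then assemble one <code>coo</code> matrix at the end.

```python
import numpy as np
from scipy import sparse

def build_adjacency(chunks):
    """Build a sparse adjacency matrix from chunks of (src, dst) id pairs."""
    index = {}            # node id -> row/col index, grown incrementally
    rows, cols = [], []
    for chunk in chunks:  # each chunk is an iterable of (src, dst) pairs
        for a, b in chunk:
            rows.append(index.setdefault(a, len(index)))
            cols.append(index.setdefault(b, len(index)))
    n = len(index)
    data = np.ones(len(rows), dtype=int)
    return sparse.coo_matrix((data, (rows, cols)), shape=(n, n)), index

M, index = build_adjacency([[("a", "b"), ("b", "d")], [("a", "d")]])
print(M.toarray())
```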
| 2 | 2016-07-29T21:18:35Z | [
"python",
"numpy",
"scipy",
"scikit-learn",
"igraph"
] |
How can I associate a dict key to an attribute of an object within a list? | 38,665,404 | <pre><code>class SpreadsheetRow(object):
def __init__(self,Account1):
self.Account1=Account1
self.Account2=0
</code></pre>
<p>I have a while loop that fills a list of objects ,and another loop that fills a dictionary associating Var1:Account2. But, I need to get that dictionary's value into each object, if the key matches the object's Account1.</p>
<p>So basically, I have:</p>
<pre><code>listofSpreadsheetRowObjects=[SpreadsheetRow1, SpreadsheetRow2, SpreadsheetRow3]
dict_var1_to_account2={1234:888, 1991:646, 90802:5443}
</code></pre>
<p>I've tried this:</p>
<pre><code>for k, v in dict_var1_to_account2.iteritems():
if k in listOfSpreadsheetRowObjects:
if self.account1=k:
self.account2=v
</code></pre>
<p>But, it's not working, and I suspect it's my first "if" statement, because listOfSpreadsheetRowObjects is just a list of those objects. How would I access account1 of each object, so I can match them as needed? </p>
<p>Eventually, I should have three objects with the following information:</p>
<pre><code>SpreadsheetRow
self.Account1=Account1
self.Account2=(v from my dictionary, if account1 matches the key in my dictionary)
</code></pre>
| 1 | 2016-07-29T18:31:28Z | 38,665,489 | <p>You can use a generator expression within <code>any()</code> to check if any <code>account1</code> attribute of those objects is equal with <code>k</code>:</p>
<pre><code>if any(k == item.account1 for item in listOfSpreadsheetRows):
</code></pre>
| 0 | 2016-07-29T18:36:30Z | [
"python",
"object",
"dictionary"
] |
How can I associate a dict key to an attribute of an object within a list? | 38,665,404 | <pre><code>class SpreadsheetRow(object):
def __init__(self,Account1):
self.Account1=Account1
self.Account2=0
</code></pre>
<p>I have a while loop that fills a list of objects ,and another loop that fills a dictionary associating Var1:Account2. But, I need to get that dictionary's value into each object, if the key matches the object's Account1.</p>
<p>So basically, I have:</p>
<pre><code>listofSpreadsheetRowObjects=[SpreadsheetRow1, SpreadsheetRow2, SpreadsheetRow3]
dict_var1_to_account2={1234:888, 1991:646, 90802:5443}
</code></pre>
<p>I've tried this:</p>
<pre><code>for k, v in dict_var1_to_account2.iteritems():
if k in listOfSpreadsheetRowObjects:
if self.account1=k:
self.account2=v
</code></pre>
<p>But, it's not working, and I suspect it's my first "if" statement, because listOfSpreadsheetRowObjects is just a list of those objects. How would I access account1 of each object, so I can match them as needed? </p>
<p>Eventually, I should have three objects with the following information:</p>
<pre><code>SpreadsheetRow
self.Account1=Account1
self.Account2=(v from my dictionary, if account1 matches the key in my dictionary)
</code></pre>
| 1 | 2016-07-29T18:31:28Z | 38,665,634 | <p>You can try to use the <a href="https://docs.python.org/3/library/functions.html#next" rel="nofollow"><code>next</code></a> function like this:</p>
<pre><code>next(i for i in listOfSpreadsheetRows if k == i.account1)
</code></pre>
| 0 | 2016-07-29T18:45:45Z | [
"python",
"object",
"dictionary"
] |
How can I associate a dict key to an attribute of an object within a list? | 38,665,404 | <pre><code>class SpreadsheetRow(object):
def __init__(self,Account1):
self.Account1=Account1
self.Account2=0
</code></pre>
<p>I have a while loop that fills a list of objects ,and another loop that fills a dictionary associating Var1:Account2. But, I need to get that dictionary's value into each object, if the key matches the object's Account1.</p>
<p>So basically, I have:</p>
<pre><code>listofSpreadsheetRowObjects=[SpreadsheetRow1, SpreadsheetRow2, SpreadsheetRow3]
dict_var1_to_account2={1234:888, 1991:646, 90802:5443}
</code></pre>
<p>I've tried this:</p>
<pre><code>for k, v in dict_var1_to_account2.iteritems():
if k in listOfSpreadsheetRowObjects:
if self.account1=k:
self.account2=v
</code></pre>
<p>But, it's not working, and I suspect it's my first "if" statement, because listOfSpreadsheetRowObjects is just a list of those objects. How would I access account1 of each object, so I can match them as needed? </p>
<p>Eventually, I should have three objects with the following information:</p>
<pre><code>SpreadsheetRow
self.Account1=Account1
self.Account2=(v from my dictionary, if account1 matches the key in my dictionary)
</code></pre>
| 1 | 2016-07-29T18:31:28Z | 38,666,382 | <p>If you have a dictionary <code>d</code> and want to get the value associated to the key <code>x</code> then you look up that value like this:</p>
<pre><code>v = d[x]
</code></pre>
<p>So if your dictionary is called <code>dict_of_account1_to_account2</code> and the key is <code>self.Account1</code> and you want to set that value to <code>self.Account2</code> then you would do:</p>
<pre><code>self.Account2 = dict_of_account1_to_account2[self.Account1]
</code></pre>
<p>The whole point of using a dictionary is that you don't have to iterate through the entire thing to look things up.</p>
<p>Otherwise if you are doing this initialization of <code>.Account2</code> after creating all the <code>SpreadsheetRow</code> objects then using <code>self</code> doesn't make sense, you would need to iterate through each <code>SpreadsheetRow</code> item and do the assignment for each one, something like this:</p>
<pre><code>for row in listofSpreadsheetRowObjects:
for k, v in dict_of_account1_to_account2.iteritems():
if row.Account1 == k:
row.Account2 = v
</code></pre>
<p>But again, you don't have to iterate over the dictionary to make the assignment, just look up <code>row.Account1</code> from the dict:</p>
<pre><code>for row in listofSpreadsheetRowObjects:
row.Account2 = dict_of_account1_to_account2[row.Account1]
</code></pre>
| 0 | 2016-07-29T19:41:24Z | [
"python",
"object",
"dictionary"
] |
Creating a superclass SunPlanet for Sun and Planets using inheritance. | 38,665,407 | <p>My Class Program:</p>
<pre><code>import turtle
import math
class SunPlanet:
def __init__(self,iname,irad,im):
self.name = iname
self.radius = irad
self.mass = im
def getMass(self):
return self.mass
class Sun(SunPlanet):
def __init__(self, iname, irad, im, itemp):
super().__init__(self,iname,irad,im)
self.temp = itemp
self.x = 0
self.y = 0
self.sturtle = turtle.Turtle()
self.sturtle.shape("circle")
self.sturtle.color("yellow")
# other methods as before
def __str__(self):
return self.name
def getXPos(self):
return self.x
def getYPos(self):
return self.y
class Planet(SunPlanet):
def __init__(self, iname, irad, im, idist, ivx, ivy, ic):
super().__init__(self,iname,irad,im)
self.distance = idist
self.x = idist
self.y = 0
self.velx = ivx
self.vely = ivy
self.color = ic.strip()
self.pturtle = turtle.Turtle()
self.pturtle.color(self.color)
self.pturtle.shape("circle")
self.pturtle.up()
self.pturtle.goto(self.x,self.y)
self.pturtle.down()
#other methods as before
def getXPos(self):
return self.x
def getYPos(self):
return self.y
# animation methods
def moveTo(self, newx, newy):
self.x = newx
self.y = newy
self.pturtle.goto(newx, newy)
def getXVel(self):
return self.velx
def getYVel(self):
return self.vely
def setXVel(self, newvx):
self.velx = newvx
def setYVel(self, newvy):
self.vely = newvy
class SolarSystem:
def __init__(self, width, height):
self.thesun = None
self.planets = []
self.ssturtle = turtle.Turtle()
self.ssturtle.hideturtle()
self.ssscreen = turtle.Screen()
self.ssscreen.setworldcoordinates(-width/2.0,-height/2.0,width/2.0,height/2.0)
def addPlanet(self, aplanet):
self.planets.append(aplanet)
def addSun(self, asun):
self.thesun = asun
def showSun(self):
print(self.thesun)
def showPlanets(self):
for aplanet in self.planets:
print(aplanet)
def freeze(self):
self.ssscreen.exitonclick()
# animation methods
def movePlanets(self):
G = .1
dt = .001
for p in self.planets:
p.moveTo(p.getXPos() + dt * p.getXVel(),
p.getYPos() + dt * p.getYVel())
rx = self.thesun.getXPos() - p.getXPos()
ry = self.thesun.getYPos() - p.getYPos()
r = math.sqrt(rx**2 + ry**2)
accx = G * self.thesun.getMass()*rx/r**3
accy = G * self.thesun.getMass()*ry/r**3
p.setXVel(p.getXVel() + dt * accx)
p.setYVel(p.getYVel() + dt * accy)
</code></pre>
<p>My Main Program:</p>
<pre><code>ssInputStrings = []
inputPath = str(input("Please enter the source location for the solar system files: "))
startDate = datetime.datetime.now()
while True:
endDate = datetime.datetime.now()
delta = endDate - startDate
# if the duration has been met, break out of the loop
if delta.seconds > 10:
break
print(delta.seconds)
# initialize switch
addToCollection = True
# read and store the content of each input file in the collection
for file in os.listdir(inputPath):
print(file)
inputFilePath = inputPath + file
inputFile = open(inputFilePath, 'r')
text = inputFile.read()
inputFile.close()
# get the first word from the input file which identifies the solar system object
firstWordFromInputFile = text.split(",")
# if the solar system object has already been stored in the collection,
# do not store it again
for string in ssInputStrings:
firstWordFromInputString = string.split(",")
if firstWordFromInputFile[0] == firstWordFromInputString[0]:
addToCollection = False
break
else:
addToCollection = True
if addToCollection == True:
ssInputStrings.append(text)
# os.remove(inputFilePath)
# pause the thread for one second (necessary otherwise cpu will spike up)
time.sleep(1)
#----------------------------------------------------------
# Instantiate objects and run simulation
#----------------------------------------------------------
from ClassModule import *
def createSSandAnimate():
ss = SolarSystem(2,2)
so = ""
# sun
for string in ssInputStrings:
if string[0:3] == "SUN":
so = string.split(",")
sun = Sun(str(so[0]), int(so[1]), int(so[2]), int(so[3]))
# sun = Sun("SUN", 5000, 10, 5800)
ss.addSun(sun)
for string in ssInputStrings:
if string[0:7] == "MERCURY":
so = string.split(",")
m = Planet(str(so[0]), float(so[1]), int(so[2]), float(so[3]), int(so[4]), int(so[5]),str(so[6]) )
#m = Planet("MERCURY", 19.5, 1000, .25, 0, 2, "blue")
ss.addPlanet(m)
for string in ssInputStrings:
if string[0:5] == "EARTH":
so = string.split(",")
m= Planet(str(so[0]), float(so[1]), int(so[2]), float(so[3]), int(so[4]), float(so[5]),str(so[6]))
#m = Planet("EARTH", 47.5, 5000, 0.3, 0, 2.0, "green")
ss.addPlanet(m)
for string in ssInputStrings:
if string[0:4] == "MARS":
so = string.split(",")
m=Planet(str(so[0]), int(so[1]), int(so[2]), float(so[3]), int(so[4]), float(so[5]),str(so[6]))
#m = Planet("MARS", 50, 9000, 0.5, 0, 1.63, "red")
ss.addPlanet(m)
for string in ssInputStrings:
if string[0:7] == "JUPITER":
so = string.split(",")
m=Planet(str(so[0]), int(so[1]), int(so[2]), float(so[3]), int(so[4]), int(so[5]),str(so[6]))
#m = Planet("JUPITER", 100, 49000, 0.7, 0, 1, "black")
ss.addPlanet(m)
#ss.showSun()
ss.showPlanets()
numTimePeriods = 2000
for amove in range(numTimePeriods):
ss.movePlanets()
ss.freeze()
createSSandAnimate()
</code></pre>
<p><a href="http://i.stack.imgur.com/TkO8o.jpg" rel="nofollow">This is the link for the error description.</a></p>
<p>Please go through the class code and help me out. I have been trying this for a week, and I am not seeing where I went wrong.</p>
| 0 | 2016-07-29T18:31:42Z | 38,665,539 | <p>Addressing only the error, <a href="https://docs.python.org/2/library/functions.html#super" rel="nofollow"><code>super()</code></a> isn't being called correctly. Change it to:</p>
<pre><code>super(Sun, self).__init__(iname, irad, im)
</code></pre>
<p>And later:</p>
<pre><code>super(Planet, self).__init__(iname, irad, im)
</code></pre>
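<p>If you are on Python 3, the zero-argument form of <code>super()</code> also works — but note that <code>self</code> must not be passed to <code>__init__</code> explicitly, which is what the original code was doing. A minimal sketch:</p>

```python
class SunPlanet(object):
    def __init__(self, iname, irad, im):
        self.name = iname
        self.radius = irad
        self.mass = im

class Sun(SunPlanet):
    def __init__(self, iname, irad, im, itemp):
        # Python 3: no arguments to super(), and no explicit self argument
        super().__init__(iname, irad, im)
        self.temp = itemp

sun = Sun("SUN", 5000, 10, 5800)
```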
<p>If you want others to go over your code, this probably isn't the best place. Consider posting to <a href="http://codereview.stackexchange.com/">codereview.stackexchange.com</a> instead.</p>
| 0 | 2016-07-29T18:40:08Z | [
"python",
"inheritance"
] |
Comparing a string to a list of characters | 38,665,427 | <p>Say I have a list of characters <code>['h','e','l','l','o']</code> and I wanted to see if the list of characters match a string <code>'hello'</code>, how would I do this? The list needs to match the characters exactly. I thought about using something like:</p>
<pre><code>hList = ['h','e','l','l','o']
hStr = "Hello"
running = False
if hList in hStr :
running = True
print("This matches!")
</code></pre>
<p>but this does not work, how would I do something like this?? </p>
| 1 | 2016-07-29T18:32:47Z | 38,665,451 | <p>You want <code>''.join(hList) == hStr</code>.</p>
<p>That turns the list into a string, so it can be easily compared to the other string.</p>
<p>In your case you don't seem to care about case, so you can use a case insensitive compare. See <a href="http://stackoverflow.com/questions/319426/how-do-i-do-a-case-insensitive-string-comparison-in-python">How do I do a case insensitive string comparison in Python?</a> for a discussion of this.</p>
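<p>For example, combining the join with a case-insensitive compare (the values are taken from the question):</p>

```python
hList = ['h', 'e', 'l', 'l', 'o']
hStr = "Hello"

# join the characters into one string, then compare
exact_match = ''.join(hList) == hStr                  # 'hello' != 'Hello'
loose_match = ''.join(hList).lower() == hStr.lower()  # case-insensitive
```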
| 5 | 2016-07-29T18:34:07Z | [
"python",
"string",
"list",
"python-3.x"
] |
Comparing a string to a list of characters | 38,665,427 | <p>Say I have a list of characters <code>['h','e','l','l','o']</code> and I wanted to see if the list of characters match a string <code>'hello'</code>, how would I do this? The list needs to match the characters exactly. I thought about using something like:</p>
<pre><code>hList = ['h','e','l','l','o']
hStr = "Hello"
running = False
if hList in hStr :
running = True
print("This matches!")
</code></pre>
<p>but this does not work, how would I do something like this?? </p>
| 1 | 2016-07-29T18:32:47Z | 38,665,485 | <p>Or, another way is the reverse of what the other answer suggests, create a list out of <code>hStr</code> and compare that:</p>
<pre><code>list(hStr) == hList
</code></pre>
<p>Which simply compares the lists: </p>
<pre><code>list('Hello') == hList
False
list('hello') == hList
True
</code></pre>
| 1 | 2016-07-29T18:36:23Z | [
"python",
"string",
"list",
"python-3.x"
] |
Comparing a string to a list of characters | 38,665,427 | <p>Say I have a list of characters <code>['h','e','l','l','o']</code> and I wanted to see if the list of characters match a string <code>'hello'</code>, how would I do this? The list needs to match the characters exactly. I thought about using something like:</p>
<pre><code>hList = ['h','e','l','l','o']
hStr = "Hello"
running = False
if hList in hStr :
running = True
print("This matches!")
</code></pre>
<p>but this does not work, how would I do something like this?? </p>
| 1 | 2016-07-29T18:32:47Z | 38,665,512 | <p>An alternative solution is to convert the string into a list:</p>
<pre><code>list(hStr) == hList
>>> list("hello")
['h', 'e', 'l', 'l', 'o']
</code></pre>
| 0 | 2016-07-29T18:37:57Z | [
"python",
"string",
"list",
"python-3.x"
] |
Custom logger with time stamp in python | 38,665,440 | <p>I have lots of code on a project with print statements and wanted to make a quick a dirty logger of these print statements and decided to go the custom route. I managed to put together a logger that prints both to the terminal and to a file (with the help of this site), but now I want to add a simple time stamp to each statement and I am running into a weird issue.</p>
<p>Here is my logging class.</p>
<pre><code>class Logger(object):
def __init__(self, stream):
self.terminal = stream
self.log = open("test.log", 'a')
def write(self, message):
self.terminal.flush()
self.terminal.write(self.stamp() + message)
self.log.write(self.stamp() + message)
def stamp(self):
d = datetime.today()
string = d.strftime("[%H:%M:%S] ")
return string
</code></pre>
<p>Notice the stamp method that I then attempt to use in the write method.</p>
<p>When running the following two lines I get an unexpected output:</p>
<pre><code>sys.stdout = Logger(sys.stdout)
print("Hello World!")
</code></pre>
<p>Output:</p>
<pre><code>[11:10:47] Hello World![11:10:47]
</code></pre>
<p>This what the output also looks in the log file, however, I see no reason why the string that I am adding appends to the end. Can someone help me here?</p>
<p><strong>UPDATE</strong>
See answer below. However, for quicker reference the issue is using "print()" in general; replace it with sys.stdout.write after assigning the variable.</p>
<p>Also use "logging" for long-term/larger projects right off the bat.</p>
| 0 | 2016-07-29T18:33:31Z | 38,665,535 | <p>It calls the <code>.write()</code> method of your stream twice because in cpython <code>print</code> calls the stream <code>.write()</code> method twice. The first time is with the object, and the second time it writes a newline character. For example look at <a href="https://hg.python.org/cpython/file/v3.5.2/Lib/pprint.py#l138" rel="nofollow">line 138 in the <code>pprint</code> module in cpython v3.5.2</a></p>
<pre><code>def pprint(self, object):
self._format(object, self._stream, 0, 0, {}, 0)
self._stream.write("\n") # <- write() called again!
</code></pre>
<p>You can test this out:</p>
<pre><code>>>> from my_logger import Logger # my_logger.py has your Logger class
>>> import sys
>>> sys.stdout = Logger(stream=sys.stdout)
>>> sys.stdout.write('hi\n')
[14:05:32] hi
</code></pre>
<p>You can replace <code>print(<blah>)</code> everywhere in your code using <a href="https://www.gnu.org/software/sed/manual/sed.html" rel="nofollow"><code>sed</code></a>.</p>
<pre><code>$ for mymodule in *.py; do
> sed -i -E "s/print\((.+)\)/LOGGER.debug(\1)/" $mymodule
> done
</code></pre>
<p>Check out <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">Python's Logging builtin module</a>. It has pretty comprehensive logging including inclusion of a timestamp in all messages format.</p>
<pre><code>import logging
FORMAT = '%(asctime)-15s %(message)s'
DATEFMT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=FORMAT, datefmt=DATEFMT)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.debug('message: %s', 'message')
</code></pre>
<p>This outputs <code>2016-07-29 11:44:20 message: message</code> to <code>stdout</code>. There are also handlers to send output to files. There is a <a href="https://docs.python.org/2/howto/logging.html#logging-basic-tutorial" rel="nofollow">basic tutorial</a>, an <a href="https://docs.python.org/2/howto/logging.html#logging-advanced-tutorial" rel="nofollow">advanced tutorial</a> and a <a href="https://docs.python.org/2/howto/logging-cookbook.html#logging-cookbook" rel="nofollow">cookbook of common logging recipes</a>.</p>
<p>There is an example of using <a href="https://docs.python.org/2.7/howto/logging-cookbook.html#using-logging-in-multiple-modules" rel="nofollow">simultaneous file and console loggers</a> in the cookbook.</p>
<pre><code>import logging
LOGGER = logging.getLogger(__name__) # get logger named for this module
LOGGER.setLevel(logging.DEBUG) # set logger level to debug
# create formatter
LOG_DATEFMT = '%Y-%m-%d %H:%M:%S'
LOG_FORMAT = ('\n[%(levelname)s/%(name)s:%(lineno)d] %(asctime)s ' +
'(%(processName)s/%(threadName)s)\n> %(message)s')
FORMATTER = logging.Formatter(LOG_FORMAT, datefmt=LOG_DATEFMT)
CH = logging.StreamHandler() # create console handler
CH.setLevel(logging.DEBUG) # set handler level to debug
CH.setFormatter(FORMATTER) # add formatter to ch
LOGGER.addHandler(CH) # add console handler to logger
FH = logging.FileHandler('myapp.log') # create file handler
FH.setLevel(logging.DEBUG) # set handler level to debug
FH.setFormatter(FORMATTER) # add formatter to fh
LOGGER.addHandler(FH) # add file handler to logger
LOGGER.debug('test: %s', 'hi')
</code></pre>
<p>This outputs:</p>
<pre><code>[DEBUG/__main__:22] 2016-07-29 12:20:45 (MainProcess/MainThread)
> test: hi
</code></pre>
<p>to both console and file <code>myapp.log</code> simultaneously.</p>
| 1 | 2016-07-29T18:39:59Z | [
"python",
"python-2.7"
] |
Custom logger with time stamp in python | 38,665,440 | <p>I have lots of code on a project with print statements and wanted to make a quick a dirty logger of these print statements and decided to go the custom route. I managed to put together a logger that prints both to the terminal and to a file (with the help of this site), but now I want to add a simple time stamp to each statement and I am running into a weird issue.</p>
<p>Here is my logging class.</p>
<pre><code>class Logger(object):
def __init__(self, stream):
self.terminal = stream
self.log = open("test.log", 'a')
def write(self, message):
self.terminal.flush()
self.terminal.write(self.stamp() + message)
self.log.write(self.stamp() + message)
def stamp(self):
d = datetime.today()
string = d.strftime("[%H:%M:%S] ")
return string
</code></pre>
<p>Notice the stamp method that I then attempt to use in the write method.</p>
<p>When running the following two lines I get an unexpected output:</p>
<pre><code>sys.stdout = Logger(sys.stdout)
print("Hello World!")
</code></pre>
<p>Output:</p>
<pre><code>[11:10:47] Hello World![11:10:47]
</code></pre>
<p>This what the output also looks in the log file, however, I see no reason why the string that I am adding appends to the end. Can someone help me here?</p>
<p><strong>UPDATE</strong>
See answer below. However, for quicker reference the issue is using "print()" in general; replace it with sys.stdout.write after assigning the variable.</p>
<p>Also use "logging" for long-term/larger projects right off the bat.</p>
| 0 | 2016-07-29T18:33:31Z | 38,665,792 | <p>You probably need to use newline character.</p>
<pre><code>class Logger(object):
def __init__(self, stream):
self.terminal = stream
self.log = open("test.log", 'a')
def write(self, message):
self.terminal.flush()
self.terminal.write(self.stamp() + message + "\n")
self.log.write(self.stamp() + message + "\n")
def stamp(self):
d = datetime.today()
string = d.strftime("[%H:%M:%S] ")
return string
</code></pre>
<p>Anyway, using the built-in logging module will be better.</p>
| 0 | 2016-07-29T18:57:05Z | [
"python",
"python-2.7"
] |
Number path: using recursion in a maze-like situation | 38,665,518 | <p>I'm struggling with an assignment for my CS class. Here are some of the instructions: </p>
<hr>
<p>In this assignment, you will write a program that finds a path through a grid of numbers using a technique called depth-first search.</p>
<p>As input, you will be given a grid of numbers, a start point, an end point, and a target sum. Your task is to find a path that moves orthogonally through the grid, keeping a running total of the numbers along the path, and ending at the end point with the required target sum.</p>
<p>Details:</p>
<p>You can assume that a number grid will be no larger than 10 by 10. An example of a number grid might look like the following:</p>
<pre><code>34 58 12 10 34
 3 91 10 10 41
10 76 10  7 12
10 82 10 81 98
10 10 10  9 17
</code></pre>
<p>The start point will be specified with two numbers, the row number and the column number. Note that when you count rows and columns, you start with zero. Consequently, in the example grid above, we might specify the start point to be row 2, column 0. This would indicate the number 10 directly under the 3.
Similarly, the end point will be specified with a row number and a column number. An end point of row 0, column 3 would point to the 10 in the top row between the 12 and 34.
A target sum is just an integer value that you want your path to sum up to.
If you were given the grid above, with start point (2,0), end point (0,3), and target sum of 100, then you can find a successful path by following the ten 10s in the grid.</p>
<p>Input:</p>
<p>An input file will contain the following:</p>
<p>First line: 7 integers</p>
<ul>
<li>targetValue, the target sum</li>
<li>grid_rows, the number of rows in the grid</li>
<li>grid_cols, the number of columns in the grid</li>
<li>start_row, the row number of the start point</li>
<li>start_col, the column number of the start point</li>
<li>end_row, the row number of the end point</li>
<li>end_col, the column number of the end point</li>
</ul>
<p>All subsequent lines: there will be (grid_rows) additional lines in the input file. Each line will consist of (grid_cols) integers representing the numbers in the grid for that row.</p>
<p>Here are three examples of input files in which your program should successfully find paths:</p>
<p>pathdata1: this file has a path that is not obvious to find.
pathdata2: this has a smaller, rectangular grid.
pathdata3: this has a larger grid with several dead-end paths. This file is an excellent example of how you can design complex mazes and solve them using your program!</p>
<p>Hints:</p>
<p>Your main program should do the following tasks:</p>
<ul>
<li>Open the input file "pathdata.txt".</li>
<li>Read the contents of the first line into variables.</li>
<li>Read in the grid. I recommend representing it as a list of lists.</li>
<li>Define a class "Problem", which has as class variables a grid, a path history, a start row and column, and a sum.</li>
<li>Create an instance of class Problem, assigning appropriate values to its instance variables.</li>
<li>Print out the values of the variables and the grid in a nice format. This will ensure you read everything in correctly and built your data structure the way you wanted it. I strongly recommend you define a nice <code>__str__</code> method for class Problem to use here, to show progress as your program executes, and to help you debug.</li>
<li>Call a function "solve", described below.</li>
</ul>
<p>You should also have a function "solve" which takes a Problem instance as an argument and returns the solution path, if it finds one, or "None" if it doesn't. "solve" should do the following tasks:</p>
<ul>
<li>Test to see if the Problem instance is a goal state, meaning it's currently at the end state and the sum matches the target sum. If so, print an appropriate message and show the path history.</li>
<li>If it's not a goal state, check to see if the sum exceeds the target sum. If so, print a message and return "None".</li>
<li>Try moving right, if doing so is a legal move. Create a new Problem instance with the appropriate start row/column. Set the current grid point to "None" (to mark it as "already visited"). Update the sum and history. Print the new Problem instance, then recursively call "solve" using the instance as an argument. If the recursive call returns a successful path, return the result.</li>
<li>If moving right doesn't work, try moving up, then down, then left.</li>
<li>If none of the attempts succeed, return "None".</li>
</ul>
<p>It would probably be helpful for you to write a function "isValid" that lets you know if a proposed move is a valid one. isValid would take as arguments a current grid, its size, and a proposed row and column position, and returns True if it's a valid position, False if it isn't. The position would be invalid if you're either trying to move outside the grid boundary, or if you were trying to move to a location already visited by your current path (meaning the location has value "None").</p>
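<p>A sketch of what such a helper could look like, assuming the grid is a list of lists and visited cells are set to <code>None</code> as suggested above (the exact name and signature are only one possible choice):</p>

```python
def is_valid(grid, num_rows, num_cols, row, col):
    """Return True if (row, col) is inside the grid and not yet visited."""
    if row < 0 or row >= num_rows or col < 0 or col >= num_cols:
        return False                      # outside the grid boundary
    return grid[row][col] is not None     # None marks an already-visited cell

grid = [[34, 58],
        [3, None]]                        # (1, 1) has already been visited
```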
<hr>
<p>That's the instructions. The problem I'm having is with the backtracking. I tried doing it so if the target sum was exceeded, I reset path history, sum and position to what they were before I exceeded and then to continue the maze. But this just got me in an infinite loop where I'd go back, then go to the spot where I exceeded the sum, then go back, etc, etc. Any tips? </p>
<p>I don't want to cheat so just point me in the right direction if you can. Tips for how to implement the solution in my code are appreciated. </p>
| -2 | 2016-07-29T18:38:33Z | 38,665,696 | <p>What algorithms have you used? I recommend that you research basic graph traversal algorithms. <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" rel="nofollow">Dijkstra's algorithm</a> is a good starting point, and is readily available in <a href="http://code.activestate.com/recipes/119466-dijkstras-algorithm-for-shortest-paths/" rel="nofollow">generic Python code</a>.</p>
<p>The basic idea is that you maintain one list of nodes you've already visited (with the minimum cost to reach each one), and a list of "to-do" nodes to visit ("next", or one step beyond some visited node). Take the top node off the "to-do" list, add it to the visited list (with the cost to get there), and check each node connected to it. If the node is unvisited, add it to the "to-do" list with the current cost; if it has been visited, check the cost of getting there and keep the minimum.</p>
<p>Dead ends will die on their own; when you run out of nodes to visit, see which path was the least cost to reach the destination node.</p>
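<p>A minimal sketch of that bookkeeping on a grid, using a heap as the "to-do" list (illustrative only — the assignment's exact-target-sum search is depth-first, but the visited/to-do structure is the same idea):</p>

```python
import heapq

def min_cost_path(grid, start, end):
    """Dijkstra on a grid: return the cheapest path cost from start to end."""
    rows, cols = len(grid), len(grid[0])
    visited = {}                                  # cell -> best known cost
    todo = [(grid[start[0]][start[1]], start)]    # (cost so far, cell)
    while todo:
        cost, (r, c) = heapq.heappop(todo)
        if (r, c) in visited:
            continue                              # already settled cheaper
        visited[(r, c)] = cost
        if (r, c) == end:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                heapq.heappush(todo, (cost + grid[nr][nc], (nr, nc)))
    return None                                   # no path reaches end

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
```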
| 0 | 2016-07-29T18:50:06Z | [
"python",
"recursion",
"path",
"grid",
"maze"
] |
Matching words and vectors in gensim Word2Vec model | 38,665,556 | <p>I have had the <a href="https://radimrehurek.com/gensim/" rel="nofollow">gensim</a> <a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec" rel="nofollow">Word2Vec</a> implementation compute some word embeddings for me. Everything went quite fantastically as far as I can tell; now I am clustering the word vectors created, hoping to get some semantic groupings. </p>
<p>As a next step, I would like to look at the words (rather than the vectors) contained in each cluster. I.e. if I have the vector of embeddings <code>[x, y, z]</code>, I would like to find out which actual word this vector represents. I can get the words/Vocab items by calling <code>model.vocab</code> and the word vectors through <code>model.syn0</code>. But I could not find a location where these are explicitly matched. </p>
<p>This was more complicated than I expected and I feel I might be missing the obvious way of doing it. Any help is appreciated!</p>
<h3>Problem:</h3>
<p>Match words to embedding vectors created by <code>Word2Vec ()</code> -- how do I do it? </p>
<h3>My approach:</h3>
<p>After creating the model (code below*), I would now like to match the indexes assigned to each word (during the <code>build_vocab()</code> phase) to the vector matrix outputted as <code>model.syn0</code>.
Thus</p>
<pre><code>for i in range (0, newmod.syn0.shape[0]): #iterate over all words in model
print i
    word= [k for k in newmod.vocab if newmod.vocab[k].__dict__['index']==i] #get the word out of the internal dictionary by its index
wordvector= newmod.syn0[i] #get the vector with the corresponding index
print wordvector == newmod[word] #testing: compare result of looking up the word in the model -- this prints True
</code></pre>
<ul>
<li><p>Is there a better way of doing this, e.g. by feeding the vector into the model to match the word?</p></li>
<li><p>Does this even get me correct results?</p></li>
</ul>
<p>*My code to create the word vectors:</p>
<pre><code>model = Word2Vec(size=1000, min_count=5, workers=4, sg=1)
model.build_vocab(sentencefeeder(folderlist)) #sentencefeeder puts out sentences as lists of strings
model.save("newmodel")
</code></pre>
<p>I found <a href="http://stackoverflow.com/questions/35914287/word2vec-how-to-get-words-from-vectors">this question</a> which is similar but has not really been answered. </p>
| 2 | 2016-07-29T18:40:54Z | 38,669,212 | <p>If all you want to do is map a <em>word</em> to a <em>vector</em>, you can simply use the <code>[]</code> operator, e.g. <code>model["hello"]</code> will give you the vector corresponding to hello.</p>
<p>If you need to recover a word from a vector you could loop through your list of vectors and check for a match, as you propose. However, this is inefficient and not pythonic. A convenient solution is to use the <code>similar_by_vector</code> method of the word2vec model, like this:</p>
<pre><code>import gensim
documents = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
model = gensim.models.Word2Vec(documents, min_count=1)
print model.similar_by_vector(model["survey"], topn=1)
</code></pre>
<p>which outputs:</p>
<pre><code>[('survey', 1.0000001192092896)]
</code></pre>
<p>where the number represents the similarity.</p>
<p>However, this method is still inefficient, as it still has to scan all of the word vectors to search for the most similar one. The best solution to your problem is to <strong>find a way to keep track of your vectors</strong> during the clustering process so you don't have to rely on expensive reverse mappings.</p>
| 1 | 2016-07-30T00:25:56Z | [
"python",
"vector",
"machine-learning",
"gensim",
"word2vec"
] |
Matching words and vectors in gensim Word2Vec model | 38,665,556 | <p>I have had the <a href="https://radimrehurek.com/gensim/" rel="nofollow">gensim</a> <a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec" rel="nofollow">Word2Vec</a> implementation compute some word embeddings for me. Everything went quite fantastically as far as I can tell; now I am clustering the word vectors created, hoping to get some semantic groupings. </p>
<p>As a next step, I would like to look at the words (rather than the vectors) contained in each cluster. I.e. if I have the vector of embeddings <code>[x, y, z]</code>, I would like to find out which actual word this vector represents. I can get the words/Vocab items by calling <code>model.vocab</code> and the word vectors through <code>model.syn0</code>. But I could not find a location where these are explicitly matched. </p>
<p>This was more complicated than I expected and I feel I might be missing the obvious way of doing it. Any help is appreciated!</p>
<h3>Problem:</h3>
<p>Match words to embedding vectors created by <code>Word2Vec ()</code> -- how do I do it? </p>
<h3>My approach:</h3>
<p>After creating the model (code below*), I would now like to match the indexes assigned to each word (during the <code>build_vocab()</code> phase) to the vector matrix outputted as <code>model.syn0</code>.
Thus</p>
<pre><code>for i in range (0, newmod.syn0.shape[0]): #iterate over all words in model
print i
word= [k for k in newmod.vocab if newmod.vocab[k].__dict__['index']==i] #get the word out of the internal dicationary by its index
wordvector= newmod.syn0[i] #get the vector with the corresponding index
print wordvector == newmod[word] #testing: compare result of looking up the word in the model -- this prints True
</code></pre>
<ul>
<li><p>Is there a better way of doing this, e.g. by feeding the vector into the model to match the word?</p></li>
<li><p>Does this even get me correct results?</p></li>
</ul>
<p>*My code to create the word vectors:</p>
<pre><code>model = Word2Vec(size=1000, min_count=5, workers=4, sg=1)
model.build_vocab(sentencefeeder(folderlist)) #sentencefeeder puts out sentences as lists of strings
model.save("newmodel")
</code></pre>
<p>I found <a href="http://stackoverflow.com/questions/35914287/word2vec-how-to-get-words-from-vectors">this question</a> which is similar but has not really been answered. </p>
| 2 | 2016-07-29T18:40:54Z | 38,695,747 | <p>As @bpachev mentioned, gensim does have an option of searching by vector, namely <code>similar_by_vector</code>.</p>
<p>It however implements a brute-force linear search, i.e. it computes the cosine similarity between the given vector and the vectors of all words in the vocabulary, and returns the top neighbours. An alternate option, as mentioned in the other <a href="http://stackoverflow.com/a/35979229/2353472">answer</a>, is to use an approximate nearest-neighbour search algorithm like FLANN.</p>
<p>Sharing a gist demonstrating the same:
<a href="https://gist.github.com/kampta/139f710ca91ed5fabaf9e6616d2c762b" rel="nofollow">https://gist.github.com/kampta/139f710ca91ed5fabaf9e6616d2c762b</a></p>
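<p>Both answers point at the same tension: a reverse search over all vectors is expensive, while keeping words and vectors aligned by index makes the mapping free. Here is a tiny stdlib-only sketch of that bookkeeping with made-up toy data (in gensim of this vintage the aligned word list is <code>model.index2word</code>, whose entries line up with the rows of <code>model.syn0</code>):</p>

```python
# Keep words and vectors aligned by index; then a cluster label computed
# for vector row i belongs to words[i], with no reverse lookup needed.
words = ["king", "queen", "apple", "pear"]
vectors = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # toy embeddings
labels = [0, 0, 1, 1]  # pretend output of a clustering step, one per row

assert len(words) == len(vectors) == len(labels)

clusters = {}
for idx, label in enumerate(labels):
    clusters.setdefault(label, []).append(words[idx])  # O(1) index -> word

print(clusters)  # {0: ['king', 'queen'], 1: ['apple', 'pear']}
```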
| 0 | 2016-08-01T09:47:50Z | [
"python",
"vector",
"machine-learning",
"gensim",
"word2vec"
] |
pip install pubnub throws 'gcc' failed error | 38,665,704 | <p>I am trying to install pubnub libraries and I get the error when I do pip install pubnub </p>
<pre><code>Compiling support for Intel AES instructions
building 'Crypto.Hash._MD2' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DLTC_NO_ASM -DHAVE_CPUID_H -Isrc/ -I/usr/include/python2.7 -c src/MD2.c -o build/temp.linux-x86_64-2.7/src/MD2.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/src/MD2.o -L/usr/lib64 -lpython2.7 -o build/lib.linux-x86_64-2.7/Crypto/Hash/_MD2.so
/usr/bin/ld: cannot find -lpython2.7
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
</code></pre>
<p>These are the steps I followed</p>
<pre><code>curl -O https://bootstrap.pypa.io/get-pip.py
sudo python27 get-pip.py
sudo yum install git
git clone https://github.com/pubnub/python && cd python/python
sudo update-alternatives --config python
sudo yum install python-devel
sudo yum install gcc
</code></pre>
<p>Thanks</p>
| 0 | 2016-07-29T18:50:25Z | 38,665,726 | <p>You need to install Python's header files. How you do that will depend on your operating system.</p>
<p>On Debian or Ubuntu, for example, something like</p>
<pre><code>sudo apt-get install python-dev
</code></pre>
<p>should do it.</p>
<p>On Fedora / CentOS / Red Hat, try</p>
<pre><code>sudo yum install python-devel
</code></pre>
| 2 | 2016-07-29T18:51:48Z | [
"python",
"python-2.7",
"python-3.x",
"pip",
"pubnub"
] |
pip install pubnub throws 'gcc' failed error | 38,665,704 | <p>I am trying to install pubnub libraries and I get the error when I do pip install pubnub </p>
<pre><code>Compiling support for Intel AES instructions
building 'Crypto.Hash._MD2' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DLTC_NO_ASM -DHAVE_CPUID_H -Isrc/ -I/usr/include/python2.7 -c src/MD2.c -o build/temp.linux-x86_64-2.7/src/MD2.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/src/MD2.o -L/usr/lib64 -lpython2.7 -o build/lib.linux-x86_64-2.7/Crypto/Hash/_MD2.so
/usr/bin/ld: cannot find -lpython2.7
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
</code></pre>
<p>These are the steps I followed</p>
<pre><code>curl -O https://bootstrap.pypa.io/get-pip.py
sudo python27 get-pip.py
sudo yum install git
git clone https://github.com/pubnub/python && cd python/python
sudo update-alternatives --config python
sudo yum install python-devel
sudo yum install gcc
</code></pre>
<p>Thanks</p>
| 0 | 2016-07-29T18:50:25Z | 38,668,288 | <p>To solve this, I had to follow these steps:</p>
<pre><code>ld -lpython2.7 --verbose
attempt to open /usr/x86_64-amazon-linux/lib64/libpython2.7.so failed
attempt to open /usr/x86_64-amazon-linux/lib64/libpython2.7.a failed
attempt to open /usr/local/lib64/libpython2.7.so failed
attempt to open /usr/local/lib64/libpython2.7.a failed
attempt to open /lib64/libpython2.7.so failed
attempt to open /lib64/libpython2.7.a failed
attempt to open /usr/lib64/libpython2.7.so failed
attempt to open /usr/lib64/libpython2.7.a failed
attempt to open /usr/x86_64-amazon-linux/lib/libpython2.7.so failed
attempt to open /usr/x86_64-amazon-linux/lib/libpython2.7.a failed
attempt to open /usr/lib64/libpython2.7.so failed
attempt to open /usr/lib64/libpython2.7.a failed
attempt to open /usr/local/lib/libpython2.7.so failed
attempt to open /usr/local/lib/libpython2.7.a failed
attempt to open /lib/libpython2.7.so failed
attempt to open /lib/libpython2.7.a failed
attempt to open /usr/lib/libpython2.7.so failed
attempt to open /usr/lib/libpython2.7.a failed
</code></pre>
<p>Check the ldconfig soft link for Python and find out what it's pointing to:</p>
<pre><code>ldconfig -p | grep python2.7
libpython2.7.so.1.0 (libc6,x86-64) => /usr/lib64/libpython2.7.so.1.0
</code></pre>
<p>This shows that it was looking for a wrong softlink and I changed the soft link like this</p>
<pre><code>sudo ln -s /usr/lib64/libpython2.7.so.1.0 /usr/lib64/libpython2.7.so
</code></pre>
<p>and then had to run pip like this</p>
<pre><code>sudo /usr/local/bin/pip install pubnub -- Location of pip installed
</code></pre>
<p>Worked Pretty Good</p>
| 1 | 2016-07-29T22:22:04Z | [
"python",
"python-2.7",
"python-3.x",
"pip",
"pubnub"
] |
Python tkinter password strength checker gui | 38,665,732 | <p>I'm trying to create a password strength checker gui that checks a password for length (over 8 characters), lowercase and uppercase letters, numbers and special characters, and then tells you what level it is: weak, strong, etc. It then creates an md5 hash and displays it, and you can store this hash in a text file. Then you re-enter the password and verify it from the text file (I haven't done any code for this yet).</p>
<p>I've managed to achieve the strength check, hash generation and logging to a file... I think. Although if no password is entered I would like the code to return 'Password cannot be blank', but it doesn't seem to work in a gui. The same code works from within a shell. The code also never returns 'Very Weak' as a strength, even when only 3 characters are used as a password.</p>
<p>Here is my code so far:</p>
<pre><code>from tkinter import *
import hashlib
import os
import re
myGui = Tk()
myGui.geometry('500x400+700+250')
myGui.title('Password Generator')
guiFont = font = dict(family='Courier New, monospaced', size=18, color='#7f7f7f')
#====== Password Entry ==========
eLabel = Label(myGui, text="Please Enter you Password: ", font=guiFont)
eLabel.grid(row=0, column=0)
ePassword = Entry(myGui, show="*")
ePassword.grid(row=0, column=1)
#====== Strength Check =======
def checkPassword():
strength = ['Password can not be Blank', 'Very Weak', 'Weak', 'Medium', 'Strong', 'Very Strong']
score = 1
password = ePassword.get()
if len(password) < 1:
return strength[0]
if len(password) < 4:
return strength[1]
if len(password) >= 8:
score += 1
if re.search("[0-9]", password):
score += 1
if re.search("[a-z]", password) and re.search("[A-Z]", password):
score += 1
if re.search(".", password):
score += 1
passwordStrength.set(strength[score])
passwordStrength = StringVar()
checkStrBtn = Button(myGui, text="Check Strength", command=checkPassword, height=2, width=25, font=guiFont)
checkStrBtn.grid(row=2, column=0)
checkStrLab = Label(myGui, textvariable=passwordStrength)
checkStrLab.grid(row=2, column=1, sticky=W)
#====== Hash the Password ======
def passwordHash():
hash_obj1 = hashlib.md5()
pwmd5 = ePassword.get().encode('utf-8')
hash_obj1.update(pwmd5)
md5pw.set(hash_obj1.hexdigest())
md5pw = StringVar()
hashBtn = Button(myGui, text="Generate Hash", command=passwordHash, height=2, width=25, font=guiFont)
hashBtn.grid(row=3, column=0)
hashLbl = Label(myGui, textvariable=md5pw)
hashLbl.grid(row=3, column=1, sticky=W)
#====== Log the Hash to a file =======
def hashlog():
loghash = md5pw.get()
if os.path.isfile('password_hash_log.txt'):
obj1 = open('password_hash_log.txt', 'a')
obj1.write(loghash)
obj1.write("\n")
obj1.close()
else:
obj2 = open('password_hash_log.txt', 'w')
obj2.write(loghash)
obj2.write("\n")
obj2.close()
btnLog = Button(myGui, text="Log Hash", command=hashlog, height=2, width=25, font=guiFont)
btnLog.grid(row=4, column=0)
#====== Re enter password and check against stored hash ======
lblVerify = Label(myGui, text="Enter Password to Verify: ", font=guiFont)
lblVerify.grid(row=5, column=0, sticky=W)
myGui.mainloop()
</code></pre>
<p>Any help would be much appreciated. Thanks.</p>
| 1 | 2016-07-29T18:52:19Z | 38,665,916 | <p>In <code>checkPassword</code>, the empty-password and under-4-character cases hand their result back with <code>return</code> statements, but nothing receives the function's return value (it runs as a button command), so those two strings never reach the GUI and the label never updates. Set the <code>passwordStrength</code> StringVar in those branches instead of returning the string. I'd suggest something more like:</p>
<pre><code>from tkinter import *
import hashlib
import os
import re
myGui = Tk()
myGui.geometry('500x400+700+250')
myGui.title('Password Generator')
guiFont = font = dict(family='Courier New, monospaced', size=18, color='#7f7f7f')
#====== Password Entry ==========
eLabel = Label(myGui, text="Please Enter you Password: ", font=guiFont)
eLabel.grid(row=0, column=0)
ePassword = Entry(myGui, show="*")
ePassword.grid(row=0, column=1)
#====== Strength Check =======
def checkPassword():
strength = ['Password can not be Blank', 'Very Weak', 'Weak', 'Medium', 'Strong', 'Very Strong']
score = 1
password = ePassword.get()
    print(password, len(password))
if len(password) == 0:
passwordStrength.set(strength[0])
return
if len(password) < 4:
passwordStrength.set(strength[1])
return
if len(password) >= 8:
score += 1
if re.search("[0-9]", password):
score += 1
if re.search("[a-z]", password) and re.search("[A-Z]", password):
score += 1
if re.search(".", password):
score += 1
passwordStrength.set(strength[score])
passwordStrength = StringVar()
checkStrBtn = Button(myGui, text="Check Strength", command=checkPassword, height=2, width=25, font=guiFont)
checkStrBtn.grid(row=2, column=0)
checkStrLab = Label(myGui, textvariable=passwordStrength)
checkStrLab.grid(row=2, column=1, sticky=W)
#====== Hash the Password ======
def passwordHash():
hash_obj1 = hashlib.md5()
pwmd5 = ePassword.get().encode('utf-8')
hash_obj1.update(pwmd5)
md5pw.set(hash_obj1.hexdigest())
md5pw = StringVar()
hashBtn = Button(myGui, text="Generate Hash", command=passwordHash, height=2, width=25, font=guiFont)
hashBtn.grid(row=3, column=0)
hashLbl = Label(myGui, textvariable=md5pw)
hashLbl.grid(row=3, column=1, sticky=W)
#====== Log the Hash to a file =======
def hashlog():
loghash = md5pw.get()
if os.path.isfile('password_hash_log.txt'):
obj1 = open('password_hash_log.txt', 'a')
obj1.write(loghash)
obj1.write("\n")
obj1.close()
else:
obj2 = open('password_hash_log.txt', 'w')
obj2.write(loghash)
obj2.write("\n")
obj2.close()
btnLog = Button(myGui, text="Log Hash", command=hashlog, height=2, width=25, font=guiFont)
btnLog.grid(row=4, column=0)
#====== Re enter password and check against stored hash ======
lblVerify = Label(myGui, text="Enter Password to Verify: ", font=guiFont)
lblVerify.grid(row=5, column=0, sticky=W)
myGui.mainloop()
</code></pre>
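<p>The last part of the question, re-entering a password and checking it against the stored hashes, is not covered above. A minimal stdlib sketch of that verification step (the function and demo file names are my own; md5 is kept only because the question already uses it, and it is not suitable for real password storage):</p>

```python
import hashlib
import os
import tempfile

def verify_password(password, log_path):
    """Return True if the md5 of `password` appears in the hash log file."""
    if not os.path.isfile(log_path):
        return False
    digest = hashlib.md5(password.encode('utf-8')).hexdigest()
    with open(log_path) as f:
        return any(line.strip() == digest for line in f)

# Demonstration against a throwaway log file:
log = os.path.join(tempfile.mkdtemp(), 'password_hash_log.txt')
with open(log, 'w') as f:
    f.write(hashlib.md5(b'hunter2').hexdigest() + '\n')

print(verify_password('hunter2', log))  # True
print(verify_password('wrong', log))    # False
```

<p>In the GUI this would be wired to the "Enter Password to Verify" entry's button command, setting a StringVar with the result.</p>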
| 1 | 2016-07-29T19:05:03Z | [
"python",
"user-interface",
"tkinter"
] |
automatically send an email when a file gets generated | 38,665,759 | <p>I am trying to write a python script that automatically sends an email when a specific file is generated. I think i have the code for sending the email, but im not sure how to monitor a directory looking for a specific file. </p>
<p>a high level example is:</p>
<p>from within directory foo/
when file baz is populated do sendEmail()</p>
| 1 | 2016-07-29T18:54:17Z | 38,665,786 | <pre><code>import os.path
if os.path.isfile(file_path):  # isfile already implies the path exists
send_email()
</code></pre>
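<p>The snippet above checks once at the moment it runs, while the question asks to keep watching a directory until the file appears. A simple polling sketch (names and timeouts are illustrative; an event-driven library such as watchdog would avoid polling altogether):</p>

```python
import os
import tempfile
import time

def wait_for_file(path, timeout=5.0, poll=0.1):
    """Poll until `path` exists as a regular file; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        if os.path.isfile(path):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)

# Demonstration with a throwaway directory standing in for foo/:
scratch = tempfile.mkdtemp()
target = os.path.join(scratch, 'baz')
open(target, 'w').close()

print(wait_for_file(target, timeout=1))                            # True
print(wait_for_file(os.path.join(scratch, 'nope'), timeout=0.2))   # False
```

<p>In the real script you would call <code>send_email()</code> when <code>wait_for_file('foo/baz')</code> returns True.</p>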
| 0 | 2016-07-29T18:56:25Z | [
"python",
"bash",
"scripting"
] |
Neural Network classifier in python | 38,665,776 | <p>Here I developed a neural network classifier to solve the titanic problem.</p>
<pre><code>from sknn.mlp import Classifier, Layer
nn = Classifier(
layers=[
Layer("Maxout", units=100, pieces=2),
Layer("Softmax")],
learning_rate=0.001,
n_iter=25)
nn.fit(X_train, y_train)
</code></pre>
<p>I got this error. I have tried a lot to fix it, but nothing works for me.
Please help me.</p>
<blockquote>
<p>TypeError: <code>__init__()</code> got an unexpected keyword argument 'pieces'</p>
</blockquote>
| -2 | 2016-07-29T18:55:36Z | 38,665,810 | <p>The signature of <a href="http://scikit-neuralnetwork.readthedocs.io/en/latest/module_mlp.html#layer-specifications" rel="nofollow"><code>Layer</code></a> does not define any argument called <code>pieces</code>. To create two layers with the same parameters, you'll have to define the <code>Layer</code> object twice:</p>
<pre><code>layers=[
Layer("Sigmoid", units=100),
Layer("Sigmoid", units=100),
Layer("Softmax", units=1)] # The units parameter is not optional
</code></pre>
<p>Moreover, <code>"Maxout"</code> does not look like a <code>Layer</code> type. Not sure where you found that.</p>
<blockquote>
<p>Specifically, options are <code>Rectifier</code>, <code>Sigmoid</code>, <code>Tanh</code>, and <code>ExpLin</code>
for non-linear layers and <code>Linear</code> or <code>Softmax</code> for output layers</p>
</blockquote>
| 0 | 2016-07-29T18:58:33Z | [
"python",
"csv",
"numpy"
] |
Extracting information from json input in python on the basis of other field's value | 38,665,823 | <pre><code>{
"Steps": [
{
"Status": {
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "ABCD"
},
{
"Status": {
"State": "COMPLETED",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "KLMN"
},
{
"Status": {
"Timeline": {
"CreationDateTime": 1469815629.4289999
},
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "TERMINATE_CLUSTER",
"Name": "XYZ"
}
]
}
</code></pre>
<p>I want to check whether the status of step with name = "KLMN" is completed or not. How can I do that in python.</p>
<blockquote>
<p>python -c 'import json,sys;obj=json.load(sys.stdin);print
obj["Steps"]....'</p>
</blockquote>
<p>How should I complete the expression so that it prints COMPLETED?</p>
| -3 | 2016-07-29T18:59:35Z | 38,665,976 | <p>You can type this:</p>
<pre><code>[step['Status']['State'] for step in data['Steps'] if step['Name']=='KLMN']
</code></pre>
<p>Where <em>data</em> is your data structure. You will get :</p>
<pre><code>['COMPLETED']
</code></pre>
<p>Which is a list with one element.</p>
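<p>If the step name might be absent, indexing <code>[0]</code> into the comprehension's result raises an <code>IndexError</code>. A small sketch using <code>next()</code> with a default avoids that (the JSON is trimmed to the fields that matter here):</p>

```python
import json

doc = json.loads("""{"Steps": [
    {"Status": {"State": "PENDING"},   "Name": "ABCD"},
    {"Status": {"State": "COMPLETED"}, "Name": "KLMN"},
    {"Status": {"State": "PENDING"},   "Name": "XYZ"}
]}""")

def step_state(data, name):
    """Return the state of the named step, or None when no step matches."""
    return next((s["Status"]["State"] for s in data["Steps"]
                 if s["Name"] == name), None)

print(step_state(doc, "KLMN"))   # COMPLETED
print(step_state(doc, "NOPE"))   # None
```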
| 1 | 2016-07-29T19:10:21Z | [
"python",
"json"
] |
Extracting information from json input in python on the basis of other field's value | 38,665,823 | <pre><code>{
"Steps": [
{
"Status": {
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "ABCD"
},
{
"Status": {
"State": "COMPLETED",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "KLMN"
},
{
"Status": {
"Timeline": {
"CreationDateTime": 1469815629.4289999
},
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "TERMINATE_CLUSTER",
"Name": "XYZ"
}
]
}
</code></pre>
<p>I want to check whether the status of step with name = "KLMN" is completed or not. How can I do that in python.</p>
<blockquote>
<p>python -c 'import json,sys;obj=json.load(sys.stdin);print
obj["Steps"]....'</p>
</blockquote>
<p>How should I complete the expression so that it prints COMPLETED?</p>
| -3 | 2016-07-29T18:59:35Z | 38,666,048 | <pre><code>steps = {
"Steps": [
{
"Status": {
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "ABCD"
},
{
"Status": {
"State": "COMPLETED",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "KLMN"
},
{
"Status": {
"Timeline": {
"CreationDateTime": 1469815629.4289999
},
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "TERMINATE_CLUSTER",
"Name": "XYZ"
}
]
}
states = [step["Status"]["State"] for step in steps['Steps'] if step["Name"] == "KLMN"]
if states and states[0] == 'COMPLETED':
    pass  # do something; the `states` guard avoids an IndexError if "KLMN" is absent
</code></pre>
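<p>To finish the <code>python -c</code> flow the question sketched, the same logic can be packaged as a function; here <code>json.loads</code> on a string stands in for the <code>json.load(sys.stdin)</code> call the one-liner would use (sample JSON trimmed to the relevant fields):</p>

```python
import json

def step_completed(json_text, name="KLMN"):
    """Parse JSON text and report whether the named step is COMPLETED."""
    obj = json.loads(json_text)
    states = [s["Status"]["State"] for s in obj["Steps"] if s["Name"] == name]
    return bool(states) and states[0] == "COMPLETED"

sample = '{"Steps": [{"Status": {"State": "COMPLETED"}, "Name": "KLMN"}]}'
print(step_completed(sample))  # True
```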
| 1 | 2016-07-29T19:15:32Z | [
"python",
"json"
] |
Kivy tutorial - pong.py no listeners error - .kv doesn't close properly | 38,665,903 | <p>I am new to GUIs in Python and wanted to try Kivy.
But I already get stuck in the Pong tutorial:
I run the original scripts and get the following errors:</p>
<pre><code>[WARNING ] [Lang ] The file C:\Users\Canopy\Skripts\Pong\pong.kv is loaded multiples times, you might have unwanted behaviors.
[INFO ] [Base ] Start application main loop
[ERROR ] [Base ] No event listeners have been created
[ERROR ] [Base ] Application will leave
</code></pre>
<p>Why are these listeners missing, and why can others (apparently) run the code?</p>
<blockquote>
<p><strong>EDIT:</strong> A restart fixed it for once, but the next time I run the application I
run into the same problem. The .kv doesn't seem to be properly closed.</p>
</blockquote>
<p>I am running Python 2.7 (canopy) on win 7
These are the scripts:
main.py:</p>
<pre><code>from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import NumericProperty, ReferenceListProperty,\
ObjectProperty
from kivy.vector import Vector
from kivy.clock import Clock
class PongPaddle(Widget):
score = NumericProperty(0)
def bounce_ball(self, ball):
if self.collide_widget(ball):
vx, vy = ball.velocity
offset = (ball.center_y - self.center_y) / (self.height / 2)
bounced = Vector(-1 * vx, vy)
vel = bounced * 1.1
ball.velocity = vel.x, vel.y + offset
class PongBall(Widget):
velocity_x = NumericProperty(0)
velocity_y = NumericProperty(0)
velocity = ReferenceListProperty(velocity_x, velocity_y)
def move(self):
self.pos = Vector(*self.velocity) + self.pos
class PongGame(Widget):
ball = ObjectProperty(None)
player1 = ObjectProperty(None)
player2 = ObjectProperty(None)
def serve_ball(self, vel=(4, 0)):
self.ball.center = self.center
self.ball.velocity = vel
def update(self, dt):
self.ball.move()
#bounce of paddles
self.player1.bounce_ball(self.ball)
self.player2.bounce_ball(self.ball)
#bounce ball off bottom or top
if (self.ball.y < self.y) or (self.ball.top > self.top):
self.ball.velocity_y *= -1
#went of to a side to score point?
if self.ball.x < self.x:
self.player2.score += 1
self.serve_ball(vel=(4, 0))
if self.ball.x > self.width:
self.player1.score += 1
self.serve_ball(vel=(-4, 0))
def on_touch_move(self, touch):
if touch.x < self.width / 3:
self.player1.center_y = touch.y
if touch.x > self.width - self.width / 3:
self.player2.center_y = touch.y
class PongApp(App):
def build(self):
game = PongGame()
game.serve_ball()
Clock.schedule_interval(game.update, 1.0 / 60.0)
return game
if __name__ == '__main__':
PongApp().run()
</code></pre>
<p>and pong.kv (the Kivy language file):</p>
<pre><code>#:kivy 1.9.1
<PongBall>:
size: 50, 50
canvas:
Ellipse:
pos: self.pos
size: self.size
<PongPaddle>:
size: 25, 200
canvas:
Rectangle:
pos:self.pos
size:self.size
<PongGame>:
ball: pong_ball
player1: player_left
player2: player_right
canvas:
Rectangle:
pos: self.center_x-5, 0
size: 10, self.height
Label:
font_size: 70
center_x: root.width / 4
top: root.top - 50
text: str(root.player1.score)
Label:
font_size: 70
center_x: root.width * 3 / 4
top: root.top - 50
text: str(root.player2.score)
PongBall:
id: pong_ball
center: self.parent.center
PongPaddle:
id: player_left
x: root.x
center_y: root.center_y
PongPaddle:
id: player_right
x: root.width-self.width
center_y: root.center_y
</code></pre>
| 0 | 2016-07-29T19:04:14Z | 38,666,200 | <p>Ok, the basic IT solution actually worked:
After restarting the PC the error didn't occur anymore.
<strong>EDIT</strong>: After running the application again, I ran into the same problem. A solution is still needed!</p>
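<p>One likely source of the "loaded multiples times" warning: Kivy automatically loads a .kv file named after the App subclass, so <code>PongApp</code> picks up <code>pong.kv</code> on its own, and loading the same file a second time (e.g. with <code>Builder.load_file</code>, or by re-running the app in the same interpreter session, which some IDEs do) triggers it. My rough stdlib approximation of the naming rule, not Kivy's actual code:</p>

```python
def auto_kv_name(app_class_name):
    """Approximate how Kivy derives the auto-loaded .kv filename:
    lowercase the App subclass name and strip a trailing 'app'."""
    name = app_class_name.lower()
    if name.endswith('app'):
        name = name[:-len('app')]
    return name + '.kv'

print(auto_kv_name('PongApp'))  # pong.kv
```

<p>So either rely on the automatic load or call <code>Builder.load_file</code> yourself, but not both.</p>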
| 0 | 2016-07-29T19:28:06Z | [
"python",
"user-interface",
"kivy"
] |
Convert PIL Image into pygame surface image | 38,665,920 | <p>I'm trying to load .png images into a program using PIL.Image, so that I can manipulate them, ready for use as pygame surfaces in sprites. The following code shows how I've tried to convert those PIL images into pygame images:</p>
<pre><code>bytes = someImagefile.tobytes()
new_image = pygame.image.fromstring(bytes, size, "RGB")
</code></pre>
<p>I'm getting : "ValueError: String length does not equal format and resolution size"</p>
<p>Is there a way to do this without saving a new .png copy after I'm done with playing with it?</p>
| 0 | 2016-07-29T19:05:08Z | 38,667,537 | <p>The following code works for me. Python2.7+PIL 2.5+Pygame1.9.2</p>
<pre><code>import Image
import pygame
image = Image.open("SomeImage.png")
mode = image.mode
size = image.size
data = image.tobytes()
py_image = pygame.image.fromstring(data, size, mode)
</code></pre>
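<p>A note on the question's <code>ValueError</code>: <code>pygame.image.fromstring</code> expects exactly <code>width * height * bytes_per_pixel</code> bytes, and for the common raw modes the bytes per pixel equal the number of bands in the mode string (3 for <code>"RGB"</code>, 4 for <code>"RGBA"</code>). Hardcoding <code>"RGB"</code> fails whenever the PIL image is actually another mode (a PNG often opens as <code>"RGBA"</code> or <code>"P"</code>), which is why passing the image's own <code>mode</code>, as above, works. The arithmetic as a sketch:</p>

```python
def expected_length(size, mode):
    """Bytes fromstring expects: width * height * bands-in-mode (common raw modes)."""
    width, height = size
    return width * height * len(mode)

size = (640, 480)
print(expected_length(size, "RGB"))   # 921600
print(expected_length(size, "RGBA"))  # 1228800
```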
| 0 | 2016-07-29T21:10:15Z | [
"python",
"image",
"pygame",
"python-imaging-library"
] |
Using python 2.7 - regex to find all CR character at end of lines | 38,665,931 | <p>I have a folder with about 50,000 text files in it, and I need to see if any of them have lines that end in the CR character only (not CR/LF, or LF) - hex 0x0D.</p>
<p>The following code doesn't return any results, and takes a LONG time to process.</p>
<pre><code>import re
import os
rootDir = 'Z:\Archive\\20160701'
for root, dirs, files in os.walk(rootDir):
print('--\nroot = ' + rootDir)
for filename in rootDir:
file_path = os.path.join(rootDir, filename)
print('Searching file: %s' % filename)
with open(file_path, 'r') as f:
f_content = f.read()
check = re.search('[\x0D$]', f_content, re.MULTILINE)
if check:
print check
gotit = open('U:\Temp3\\foundit.txt', 'a')
gotit.write(file_path + '\n')
gotit.close()
</code></pre>
<p>Thanks in advance for any insight anyone can provide. I know there's at least one file in the folder that has line breaks as the 0x0D character only.</p>
| 0 | 2016-07-29T19:06:45Z | 38,666,716 | <p>This line is wrong:</p>
<pre><code>for filename in rootDir: # rootDir is 'Z:\Archive\\20160701'
</code></pre>
<p>Should be:</p>
<pre><code>for filename in files:
</code></pre>
<p>If all the files are in one folder, as you said, it is easier to use <code>os.listdir</code> You don't need all the power of <code>os.walk</code> that gives you the whole tree under the root dir, including sub dirs, and files.</p>
<p>Now, as for using <code>regex</code> to detect the newline characters, the problem is that when Python opens the file in <code>'r'</code> mode, <code>read</code> or <code>readline</code> change the newlines all to be <code>\n</code>.</p>
<p>One option is to open the file in <code>'rb'</code> mode:</p>
<pre><code>LF = b'\n'
CR = b'\r'
CRLF = b'\r\n'
def sniff(filename):
    newline = None  # stays None if no line ending is found at all
    with open(filename, 'rb') as f:
        content = f.read()
        if CRLF in content:
            newline = 'CRLF'
        elif LF in content:
            newline = 'LF'
        elif CR in content:
            newline = 'CR'
    return newline
</code></pre>
<p>*nix systems have the <code>file</code> command to determine the file type. <code>file</code> can detect the file type based on "magic number", extension, etc. so that determining the type of text file is a very trivial task for <code>file</code></p>
<p>What kept me <em>waddling</em> for a while is when I tested a text file created on a mac using nano. I got <code>\n</code> instead of the expected <code>\r</code>, until I <a href="http://superuser.com/a/439443">found out</a> that MacOS changed to <code>\n</code> in order to be Unix compliant, leaving the <code>\r</code> to legacy text files.</p>
<p>Hope this helps a bit.\n</p>
<p>EOF</p>
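<p>Putting the binary-mode idea to work on the question's actual goal, flagging files that contain a bare CR with no LF after it, here is a sketch using a negative lookahead on bytes (the demo files are throwaway temp files):</p>

```python
import os
import re
import tempfile

def has_bare_cr(path):
    """True if the file contains a CR (0x0D) not immediately followed by LF.
    Binary mode matters: text mode would translate every line ending first."""
    with open(path, 'rb') as f:
        data = f.read()
    return re.search(rb'\r(?!\n)', data) is not None

# Demonstration:
folder = tempfile.mkdtemp()
samples = {'cr.txt': b'line1\rline2',
           'crlf.txt': b'line1\r\nline2',
           'lf.txt': b'line1\nline2'}
for name, body in samples.items():
    with open(os.path.join(folder, name), 'wb') as f:
        f.write(body)

hits = sorted(n for n in os.listdir(folder)
              if has_bare_cr(os.path.join(folder, n)))
print(hits)  # ['cr.txt']
```

<p>For the real 50,000-file folder, loop over <code>os.listdir(rootDir)</code> and append each hit to <code>foundit.txt</code> as in the question.</p>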
| 1 | 2016-07-29T20:08:09Z | [
"python",
"regex"
] |
python __new__ - must return an instance of cls | 38,665,942 | <p><a href="https://docs.python.org/3/reference/datamodel.html#object.__new__" rel="nofollow">According to the docs,</a></p>
<blockquote>
<p>Typical implementations create a new instance of the class by invoking
the superclassâs <code>__new__()</code> method using <code>super(currentclass,
cls).__new__(cls[, ...])</code> with appropriate arguments and then modifying
the newly-created instance as necessary before returning it. <br/>
...</p>
<p>If <code>__new__</code> does not return an instance of cls, then the new
instanceâs <code>__init__()</code> method will not be invoked.</p>
</blockquote>
<p>The <a href="https://docs.python.org/3/reference/datamodel.html#object.__new__" rel="nofollow">simplest implementation of <code>__new__</code>:</a></p>
<pre><code>class MyClass:
def __new__(cls):
RetVal = super(currentclass, cls).__new__(cls)
return RetVal
</code></pre>
<p>How exactly does <code>super(currentclass, cls).__new__(cls[, ...])</code> return an object of type <code>cls</code>?</p>
<p>That statement calls <code>object.__new__(cls)</code> where <code>cls</code> is <code>MyClass</code>.</p>
<p>So how would class <code>object</code> know how to create the type <code>MyClass</code>?</p>
| -3 | 2016-07-29T19:07:39Z | 38,666,185 | <p>Few things, first of all you need to write explicit inheritance from <code>object</code> like this:</p>
<pre><code>class MyClass(object):
</code></pre>
<p>You should read about <a href="http://stackoverflow.com/questions/54867/what-is-the-difference-between-old-style-and-new-style-classes-in-python">old-style and new-style classes</a> in Python (this only matters on Python 2; in Python 3 it does not).</p>
<p>As for your question, I think you missed the point or syntax of the <code>super()</code> function. Again, a small change to your code:</p>
<pre><code>RetVal = super(MyClass, cls).__new__(cls)
</code></pre>
<p>In this way you reference the parent class with <code>super(MyClass, cls)</code> and call one of its methods, here <code>__new__</code> (of course you can use any other method).</p>
<p><strong>Edit:</strong>
After reading your comments, I'll just add that <code>super()</code> doesn't have to take any arguments in Python 3, so maybe it's more trivial for you. It is highly recommended to read more about <code>super()</code> <a href="http://stackoverflow.com/questions/576169/understanding-python-super-with-init-methods">here</a>.</p>
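<p>Putting it together for Python 3, a complete corrected version of the class from the question, showing <code>__new__</code> customizing the instance before <code>__init__</code> runs:</p>

```python
class MyClass:
    def __new__(cls):
        inst = super().__new__(cls)  # zero-argument super(), Python 3
        inst.tagged_in_new = True    # customize before __init__ runs
        return inst

    def __init__(self):
        # Runs because __new__ returned an instance of MyClass.
        self.tagged_in_init = True

obj = MyClass()
print(obj.tagged_in_new, obj.tagged_in_init)  # True True
```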
| 0 | 2016-07-29T19:26:53Z | [
"python",
"super",
"python-internals"
] |
python __new__ - must return an instance of cls | 38,665,942 | <p><a href="https://docs.python.org/3/reference/datamodel.html#object.__new__" rel="nofollow">According to the docs,</a></p>
<blockquote>
<p>Typical implementations create a new instance of the class by invoking
the superclassâs <code>__new__()</code> method using <code>super(currentclass,
cls).__new__(cls[, ...])</code> with appropriate arguments and then modifying
the newly-created instance as necessary before returning it. <br/>
...</p>
<p>If <code>__new__</code> does not return an instance of cls, then the new
instanceâs <code>__init__()</code> method will not be invoked.</p>
</blockquote>
<p>The <a href="https://docs.python.org/3/reference/datamodel.html#object.__new__" rel="nofollow">simplest implementation of <code>__new__</code>:</a></p>
<pre><code>class MyClass:
def __new__(cls):
RetVal = super(currentclass, cls).__new__(cls)
return RetVal
</code></pre>
<p>How exactly does <code>super(currentclass, cls).__new__(cls[, ...])</code> return an object of type <code>cls</code>?</p>
<p>That statement calls <code>object.__new__(cls)</code> where <code>cls</code> is <code>MyClass</code>.</p>
<p>So how would class <code>object</code> know how to create the type <code>MyClass</code>?</p>
| -3 | 2016-07-29T19:07:39Z | 38,666,378 | <p><code>super(MyClass, cls).__new__(cls)</code> first searches the MRO (method resolution order) of the <code>cls</code> object (skipping past <code>MyClass</code> in that sequence), until it finds an object with a <code>__new__</code> attribute.</p>
<p>In your case, that is <code>object.__new__</code>:</p>
<pre><code>>>> class MyClass:
... def __new__(cls):
... RetVal = super(MyClass, cls).__new__(cls)
... return RetVal
...
>>> MyClass.__mro__
(<class '__main__.MyClass'>, <class 'object'>)
>>> hasattr(MyClass.__mro__[1], '__new__')
True
</code></pre>
<p>but if you subclassed <code>MyClass</code> and mixed in another class into the MRO with a <code>__new__</code> method then it <em>could</em> be another method.</p>
<p><code>object.__new__</code> is implemented in C, see the <a href="https://hg.python.org/cpython/file/v3.5.1/Objects/typeobject.c#l3405" rel="nofollow"><code>object_new()</code> function</a>; it contains a hook for abstract base classes to make sure you are not trying to instantiate an abstract class, then delegates to the <code>tp_alloc</code> slot, which usually will be set to <a href="https://hg.python.org/cpython/file/v3.5.1/Objects/typeobject.c#l933" rel="nofollow"><code>PyType_GenericAlloc</code></a>, which adds a new <code>PyObject</code> struct to the heap. It is that struct that represents the instance to the interpreter.</p>
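<p>To see the other half of the quoted docs in action, that <code>__init__</code> is skipped whenever <code>__new__</code> returns something that is not an instance of <code>cls</code>, a short demonstration:</p>

```python
class Passthrough:
    def __new__(cls, value):
        # Return something that is NOT an instance of cls on purpose:
        return value

    def __init__(self, value):
        # Never reached: __new__ returned a foreign object, so Python
        # skips __init__ entirely.
        raise AssertionError('__init__ ran unexpectedly')

obj = Passthrough(42)
print(obj, type(obj).__name__)  # 42 int
```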
| 1 | 2016-07-29T19:41:12Z | [
"python",
"super",
"python-internals"
] |
Matplotlib Basemap example code fails due to MemoryError | 38,665,977 | <p>I'm trying to use the Basemap toolkit from matplotlib to plot data on a map. When I try to run the following code</p>
<pre><code>from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution=None,lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
m.shadedrelief()
plt.show()
</code></pre>
<p>which is copied and pasted directly from <a href="http://matplotlib.org/basemap/users/geography.html" rel="nofollow">example #4 on the basemap tutorial</a></p>
<p>The code fails with this error:</p>
<pre><code>Traceback (most recent call last):
File "basemap_test.py", line 11, in <module>
m.shadedrelief()
File "C:\Python35-32\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 4043, in shadedrelief
return self.warpimage(image='shadedrelief',scale=scale,**kwargs)
File "C:\Python35-32\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 4171, in warpimage
self._bm_rgba = self._bm_rgba.astype(np.float32)/255.
MemoryError
</code></pre>
<p>I am running Python 3.5.1 using matplotlib version 1.5.1 and Basemap version 1.0.8</p>
<p>I have found a couple threads (<a href="http://stackoverflow.com/questions/36885953/keep-transparency-with-basemap-warpimage/36963029#36963029">here</a> and <a href="http://stackoverflow.com/questions/38243506/python-basemap-error-using-shadedrelief-bluemarble-or-etopo-false-longitude-f">here</a>) that deal with similar bugs in <code>mpl_toolkits/basemap/__init__.py</code> that have supposedly been fixed, but none that address this problem.</p>
<p>Any help would be appreciated!</p>
| 1 | 2016-07-29T19:10:26Z | 38,667,615 | <p>Solved the issue by upgrading to 64-bit Python. It seems that even though Basemap publishes a version for 32-bit Python, not all the map functions work in 32-bit, not even the standard examples.</p>
| 0 | 2016-07-29T21:16:33Z | [
"python",
"matplotlib",
"matplotlib-basemap"
] |
Deleting files using wildcards with python glob module | 38,665,986 | <p>Was looking for a method to delete files using a wildcard. Came across <a href="http://stackoverflow.com/questions/5532498/delete-files-with-python-through-os-shell?answertab=oldest#tab-top">this question</a> which helped me out. I thought the accepted answer was easier to understand and that's the method I would prefer. </p>
<p>However, it looks like the answer which utilizes the <a href="https://docs.python.org/3/library/glob.html" rel="nofollow"><code>glob</code></a> module is considerably more popular. What are the possible reasons for this? Is there an advantage that this method has over the accepted answer?</p>
| 0 | 2016-07-29T19:10:58Z | 38,666,266 | <blockquote>
<p>There should be one-- and preferably only one --obvious way to do it.</p>
</blockquote>
<p>The one obvious way to find all path names that match a pattern is the glob module, since that is <a href="https://docs.python.org/3/library/glob.html" rel="nofollow">what it is documented to do</a>.</p>
<p>The accepted answer implements a subset of glob's functionality: it can find all files ending in <code>.txt</code>. It is not wrong, but it is not the One Way.</p>
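<p>For reference, the wildcard deletion the question asks about, via the glob module, as a runnable sketch confined to a throwaway directory so it is safe to try:</p>

```python
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()
for name in ('a.txt', 'b.txt', 'keep.csv'):
    open(os.path.join(workdir, name), 'w').close()

# Delete every file matching the wildcard, and nothing else:
for path in glob.glob(os.path.join(workdir, '*.txt')):
    os.remove(path)

print(sorted(os.listdir(workdir)))  # ['keep.csv']
```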
| 0 | 2016-07-29T19:32:59Z | [
"python",
"glob"
] |