title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Python - Speech Recognition time offsets | 38,642,310 | <p>I am trying to do speech recognition using python. In addition to this, I need to get the times of beginning and end of each word.</p>
<p>I would rather use a free library that can deal with this. I've heard that Sphinx is able to do this but I couldn't find any examples (for python anyway).</p>
<p>I would appreciate any help or suggestions.</p>
| 2 | 2016-07-28T16:47:00Z | 38,647,498 | <p>Something like this:</p>
<pre><code>from os import environ, path
from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *
MODELDIR = "../../../model"
DATADIR = "../../../test/data"
config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us/en-us'))
config.set_string('-lm', path.join(MODELDIR, 'en-us/en-us.lm.bin'))
config.set_string('-dict', path.join(MODELDIR, 'en-us/cmudict-en-us.dict'))
config.set_string('-logfn', '/dev/null')
decoder = Decoder(config)
stream = open(path.join(DATADIR, 'goforward.raw'), 'rb')
in_speech_bf = False
decoder.start_utt()
while True:
buf = stream.read(1024)
if buf:
decoder.process_raw(buf, False, False)
if decoder.get_in_speech() != in_speech_bf:
in_speech_bf = decoder.get_in_speech()
if not in_speech_bf:
decoder.end_utt()
print ('Result:', decoder.hyp().hypstr)
print ([(seg.word, seg.prob, seg.start_frame, seg.end_frame) for seg in decoder.seg()])
decoder.start_utt()
else:
break
decoder.end_utt()
</code></pre>
<p>More examples <a href="https://github.com/cmusphinx/pocketsphinx/blob/master/swig/python/test/" rel="nofollow">here</a>. </p>
| 0 | 2016-07-28T22:04:38Z | [
"python",
"speech-recognition",
"cmusphinx"
] |
Python 3 int() function is not converting input string to integer | 38,642,339 | <p>I am working on a small program that reads data from a CSV file. As part of the program, user input is used to only select data that >= but I get <strong>TypeError: unorderable types: str() >= int()</strong> when I run the code. Looks like the sting is not converting to integer.</p>
<pre><code>def get_csv_data(data_type, num):
import csv
ga_session_data = {}
ga_pageviews_data = {}
with open('files/data.csv', 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
page, sessions, pageviews = row
ga_session_data[page] = int(sessions)
ga_pageviews_data[page] = int(pageviews)
if data_type == 'sessions' and sessions >= int(num):
for page, sessions in ga_session_data.items():
print(page, ' - ', sessions)
elif data_type == 'pageviews' and pageviews >= int(num):
for page, pageviews in ga_pageviews_data.items():
print(page, ' - ', pageviews)
def main():
while(True):
question = input("Are you interested in sessions or pageviews?")
if question == 'sessions':
number = int(input("What range are you interested in?"))
get_csv_data(data_type = 'sessions', num = int(number))
elif question == 'pageviews':
number = input("What range are you interested in?")
get_csv_data(data_type = 'pageviews', num = int(number))
else:
print("Invalid Input. Choose between sessions and pageviews.")
main()
</code></pre>
| 0 | 2016-07-28T16:48:19Z | 38,642,406 | <p><code>int</code> does not <em>cast</em> its parameters to integer in-place. In fact those parameters are immutable.</p>
<p><code>int(sessions)</code> does not <em>exactly</em> do what you think it does. <code>session</code> is not modified, but the return value of that call is an <code>int</code>.</p>
<p>You should assign the returned value to a new/same name:</p>
<pre><code>sessions = int(sessions)
pageviews = int(pageviews)
</code></pre>
<p>The operator <code>>=</code> can now compare the two variables you have, since they are now both integers.</p>
<hr>
<p>You may also want to rewrite that <code>if</code> block like so:</p>
<pre><code>if data_type == 'sessions':
for page, sessions in ga_session_data.items():
if sessions >= int(num):
print(page, ' - ', sessions)
</code></pre>
<p>In this way, you're actually checking the sessions count in the dictionary and not the sessions from the for loop.</p>
| 3 | 2016-07-28T16:52:04Z | [
"python",
"int"
] |
Using +level-colors from ImageMagick in Python with Wand | 38,642,347 | <p>I'm going to use Wand to cut out some parts of an image. The image has a transparent background. But before I cut out the parts, I first want to make some adjustments to the source image (without actually altering the source file).</p>
<p>The adjustments I want to make are:</p>
<ol>
<li>Change the black point to gray and leave the white point white</li>
<li>Scale all color values to the new range of gray and white</li>
<li>Replace the transparant background with 100% black</li>
<li>Transform the image into grayscale</li>
</ol>
<p>I can get the desired result using a simple command with ImageMagick:</p>
<p><code>convert input.png +clone +level-colors gray,white -background black -alpha remove -colorspace Gray output.png</code></p>
<p>But how do I do this using Wand? It seems that there's no way to apply the +level-colors operation from Wand. Also the solution from this question: <a href="https://stackoverflow.com/questions/29495217/is-there-a-level-function-in-wand-py">Is there a -level function in wand-py</a> doesn't apply to my problem, I guess. Because it seems the magick image API doesn't have a level-colors method.</p>
<p>Example result of the effect:</p>
<p>Input:
<a href="http://i.stack.imgur.com/q6Yut.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/q6Yut.jpg" alt="Before"></a>
Output:
<a href="http://i.stack.imgur.com/zStWM.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/zStWM.jpg" alt="After"></a></p>
| 0 | 2016-07-28T16:48:36Z | 38,677,963 | <p>Since your output image is grey anyway, you don't really need <code>+level-colors</code>, you can do the same thing like this:</p>
<pre><code>convert movies.png -channel RGB -colorspace gray +level 50,100% -background black -alpha remove output.png
</code></pre>
<p>Another option may be to use the <code>-fx</code> operator. If you imagine your pixel brightnesses vary between <code>0</code> (black) and <code>1</code> (white), then if you divide all the brightnesses by 2, they will vary between <code>0</code> and <code>0.5</code>. Then if you add <code>0.5</code>, they will vary between <code>0.5</code> (mid-grey) and <code>1</code> (white) - which is what you want:</p>
<pre><code>convert movies.png -channel RGB -colorspace gray -fx "(u/2)+0.5" -background black -alpha remove output.png
</code></pre>
| 1 | 2016-07-30T19:58:13Z | [
"python",
"imagemagick",
"wand"
] |
Using +level-colors from ImageMagick in Python with Wand | 38,642,347 | <p>I'm going to use Wand to cut out some parts of an image. The image has a transparent background. But before I cut out the parts, I first want to make some adjustments to the source image (without actually altering the source file).</p>
<p>The adjustments I want to make are:</p>
<ol>
<li>Change the black point to gray and leave the white point white</li>
<li>Scale all color values to the new range of gray and white</li>
<li>Replace the transparant background with 100% black</li>
<li>Transform the image into grayscale</li>
</ol>
<p>I can get the desired result using a simple command with ImageMagick:</p>
<p><code>convert input.png +clone +level-colors gray,white -background black -alpha remove -colorspace Gray output.png</code></p>
<p>But how do I do this using Wand? It seems that there's no way to apply the +level-colors operation from Wand. Also the solution from this question: <a href="https://stackoverflow.com/questions/29495217/is-there-a-level-function-in-wand-py">Is there a -level function in wand-py</a> doesn't apply to my problem, I guess. Because it seems the magick image API doesn't have a level-colors method.</p>
<p>Example result of the effect:</p>
<p>Input:
<a href="http://i.stack.imgur.com/q6Yut.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/q6Yut.jpg" alt="Before"></a>
Output:
<a href="http://i.stack.imgur.com/zStWM.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/zStWM.jpg" alt="After"></a></p>
| 0 | 2016-07-28T16:48:36Z | 38,690,063 | <p>The <code>-level-colors</code> behavior can be applied by <code>wand.image.Image.level</code> method, but will need to be executed for each color channel. The two colors provided are used as reference black/white points.</p>
<p>For example...</p>
<pre class="lang-py prettyprint-override"><code>from wand.image import Image
from wand.color import Color
from wand.compat import nested
with Image(filename='rose:') as rose:
# -level-colors red,green
with nested(Color('red'),
Color('green')) as (black_point,
white_point):
# Red channel
rose.level(black_point.red,
white_point.red,
1.0,
'red')
# Green channel
rose.level(black_point.green,
white_point.green,
1.0,
'green')
# Blue channel
rose.level(black_point.blue,
white_point.blue,
1.0,
'blue')
rose.save(filename='output.png')
</code></pre>
<p><a href="http://i.stack.imgur.com/Do2fm.png" rel="nofollow"><img src="http://i.stack.imgur.com/Do2fm.png" alt="-level-colors"></a></p>
<p>For <code>+level-colors</code>, just invert the black/white points.</p>
<pre class="lang-py prettyprint-override"><code>rose.level(white_point.red,
black_point.red,
1.0,
'red')
</code></pre>
| 1 | 2016-08-01T02:04:33Z | [
"python",
"imagemagick",
"wand"
] |
How to run a command within a subprocess in Python script? | 38,642,382 | <p>I'm trying to run a script that runs putty and within the putty terminal that gets created, run a command. I've been able to start putty from my script using Python's subprocess module with check_call or Popen. However, I'm confused as to how I can run a command within the subprocess Putty terminal from my script. I need to be able to run this command in putty and analyze its output on the Putty terminal. Thanks for any help.</p>
| 2 | 2016-07-28T16:50:50Z | 38,642,471 | <p>You need to set the <code>stdin</code> argument to <code>PIPE</code> and use the <code>communicate</code> function of <code>Popen</code> to send data to stdin.</p>
<pre><code>from subprocess import Popen, PIPE
p = Popen('/the/command', stdin=PIPE, stdout=PIPE, stderr=PIPE)
std_out, std_err = p.communicate('command to putty')
</code></pre>
<p>That being said, it may be easier to use a python library that implements the <code>ssh</code> protocol (like <code>paramiko</code>) rather than going through putty.</p>
| 0 | 2016-07-28T16:55:39Z | [
"python",
"subprocess",
"putty"
] |
Matrix multiplication speeds in R as fast as in Python? | 38,642,521 | <p>I am experiencing substantially slower matrix multiplication in R as compared to python. This is for large matrices. For example (in python):</p>
<pre><code>import numpy as np
A = np.random.rand(4112, 23050).astype('float32')
B = np.random.rand(23050, 2500).astype('float32')
%timeit np.dot(A, B)
1 loops, best of 3: 1.09 s per loop
</code></pre>
<p>Here is the equivalent multiplication in R (takes almost 10x longer):</p>
<pre><code>A <- matrix(rnorm(4112*23050), ncol = 23050)
B <- matrix(rnorm(23050*2500), ncol = 2500)
system.time(A %*% B)
user system elapsed
72.032 1.048 9.444
</code></pre>
<p><strong>How can I achieve matrix multiplication speeds in R that are comparable to what is standard with python?</strong></p>
<h2>What I Have Already Tried:</h2>
<p><strong>1)</strong> Part of the descrepancy seems to be that python supports float32 whereas R only uses numeric, which is similar to (the same as?) float64. For example, the same python commands as above except with float64 takes twice as long (but still 5x slower than R):</p>
<pre><code>import numpy as np
A = np.random.rand(4112, 23050).astype('float64')
B = np.random.rand(23050, 2500).astype('float64')
%timeit np.dot(A, B)
1 loops, best of 3: 2.24 s per loop
</code></pre>
<p><strong>2)</strong> I am using the openBLAS linear algebra back-end for R.</p>
<p><strong>3)</strong> RcppEigen as detailed in answer to <a href="http://stackoverflow.com/questions/35923787/fast-large-matrix-multiplication-in-r">this SO</a> (see link for test.cpp file). The multiplication is about twice as fast in "user" time, but 3x slower in the more critical elapsed time as it only uses 1 of 8 threads.</p>
<pre><code>library(Rcpp)
sourceCpp("test.cpp")
A <- matrix(rnorm(4112*23050), nrow = 4112)
B <- matrix(rnorm(23050*2500), ncol = 2500)
system.time(res <- eigenMatMult(A, B))
user system elapsed
29.436 0.056 29.551
</code></pre>
| 1 | 2016-07-28T16:58:09Z | 38,643,751 | <p>I use <code>MRO</code> and <code>python</code> with <code>anaconda</code> and the <code>MKL</code> BLAS. Here are my results for the same data generating process, i.e. <code>np.random.rand</code> (<code>'float64'</code>) or <code>rnorm</code> and identical dimensions (<em>average and standard deviation over 10 replications</em> ):</p>
<p><strong>Python:</strong></p>
<pre><code>np.dot(A, B) # 1.3616 s (sd = 0.1776)
</code></pre>
<p><strong>R:</strong></p>
<pre><code>Bt = t(B)
a = A %*% B # 2.0285 s (sd = 0.1897)
acp = tcrossprod(A, Bt) # 1.3098 s (sd = 0.1206)
identical(acp, a) # TRUE
</code></pre>
| 2 | 2016-07-28T18:06:46Z | [
"python",
"numpy",
"rcpp",
"matrix-multiplication"
] |
Matrix multiplication speeds in R as fast as in Python? | 38,642,521 | <p>I am experiencing substantially slower matrix multiplication in R as compared to python. This is for large matrices. For example (in python):</p>
<pre><code>import numpy as np
A = np.random.rand(4112, 23050).astype('float32')
B = np.random.rand(23050, 2500).astype('float32')
%timeit np.dot(A, B)
1 loops, best of 3: 1.09 s per loop
</code></pre>
<p>Here is the equivalent multiplication in R (takes almost 10x longer):</p>
<pre><code>A <- matrix(rnorm(4112*23050), ncol = 23050)
B <- matrix(rnorm(23050*2500), ncol = 2500)
system.time(A %*% B)
user system elapsed
72.032 1.048 9.444
</code></pre>
<p><strong>How can I achieve matrix multiplication speeds in R that are comparable to what is standard with python?</strong></p>
<h2>What I Have Already Tried:</h2>
<p><strong>1)</strong> Part of the descrepancy seems to be that python supports float32 whereas R only uses numeric, which is similar to (the same as?) float64. For example, the same python commands as above except with float64 takes twice as long (but still 5x slower than R):</p>
<pre><code>import numpy as np
A = np.random.rand(4112, 23050).astype('float64')
B = np.random.rand(23050, 2500).astype('float64')
%timeit np.dot(A, B)
1 loops, best of 3: 2.24 s per loop
</code></pre>
<p><strong>2)</strong> I am using the openBLAS linear algebra back-end for R.</p>
<p><strong>3)</strong> RcppEigen as detailed in answer to <a href="http://stackoverflow.com/questions/35923787/fast-large-matrix-multiplication-in-r">this SO</a> (see link for test.cpp file). The multiplication is about twice as fast in "user" time, but 3x slower in the more critical elapsed time as it only uses 1 of 8 threads.</p>
<pre><code>library(Rcpp)
sourceCpp("test.cpp")
A <- matrix(rnorm(4112*23050), nrow = 4112)
B <- matrix(rnorm(23050*2500), ncol = 2500)
system.time(res <- eigenMatMult(A, B))
user system elapsed
29.436 0.056 29.551
</code></pre>
| 1 | 2016-07-28T16:58:09Z | 38,645,650 | <p>Slightly tangential, but too long for a comment I think. To check whether the relevant compiler flags (e.g. <code>-fopenmp</code>) are set, use <code>sourceCpp("testeigen.cpp",verbose=TRUE)</code>.</p>
<p>On my system, this showed that the OpenMP flags are <em>not</em> defined by default.</p>
<p>I did this to enable them (adapted from <a href="http://www.lindonslog.com/programming/r/rcpp/" rel="nofollow">here</a>):</p>
<pre><code>library(Rcpp)
pkglibs <- "-fopenmp -lgomp"
pkgcxxflags <- "-fopenmp"
Sys.setenv(PKG_LIBS=pkglibs,PKG_CXXFLAGS=pkgcxxflags)
sourceCpp("testeigen.cpp",verbose=TRUE)
</code></pre>
<ul>
<li>Dirk Eddelbuettel <a href="http://lists.r-forge.r-project.org/pipermail/rcpp-devel/2013-August/006374.html" rel="nofollow">comments</a> that he prefers to set the compiler flags in <code>~/.R/Makevars</code>.</li>
<li>The example I took this from called the internal <code>Rcpp:::RcppLdFlags</code> and <code>Rcpp:::RcppCxxFlags</code> functions and prepended the results to the flags given above; this seems not to be necessary (?)</li>
</ul>
| 1 | 2016-07-28T19:58:09Z | [
"python",
"numpy",
"rcpp",
"matrix-multiplication"
] |
Query records through urlsafe key from ndb python | 38,642,556 | <p>Hi i inserted a record in ndb. I successfully got its url safe key. Now on the basis of this key i want to query the ndb to fetch the records. How can i do this. Please help.</p>
<p>Code to get URL safe Key.</p>
<pre><code> user = Users()
user.name = name
user.email = email
user.password = hashedPass
user.ekey = conkey
user.status = 0
ke = user.put()
chk = ke.urlsafe() // got Key Successfully
</code></pre>
<p>Now on the basis of this key i want to query the db. How can i do this.</p>
| 0 | 2016-07-28T16:59:40Z | 38,642,626 | <p>You can reconstruct the key based on it's <a href="https://cloud.google.com/appengine/docs/python/ndb/keyclass#Constructors" rel="nofollow">urlsafe</a> constructor parameter and then call <code>Key.get</code> to fetch the entity:</p>
<pre><code>from google.appengine.ext import ndb
key = ndb.Key(urlsafe=chk) # chk is the same string returned from ke.urlsafe() in your example code
entity = key.get()
</code></pre>
| 1 | 2016-07-28T17:03:43Z | [
"python",
"app-engine-ndb"
] |
how to load jinja template directly from filesystem | 38,642,557 | <p>I'm relatively inexperienced at python, so I went astray when I read the <a href="http://jinja.pocoo.org/docs/dev/api/#" rel="nofollow">jinja API document at pocoo.org</a>. It reads:</p>
<blockquote>The simplest way to configure Jinja2 to load templates for your application looks roughly like this:</blockquote>
<pre><code>from jinja2 import Environment, PackageLoader
env = Environment(loader=PackageLoader('yourapplication', 'templates'))
</code></pre>
<blockquote>This will create a template environment with the default settings and a loader that looks up the templates in the <i>templates</i> folder inside the <i>yourapplication</i> python package.</blockquote>
<p>As it turns out, this isn't so simple because you have to make/install a python package with your templates in it, which introduces a lot of needless complexity, especially if you have no intention of distributing your code. You can refer to SO questions on the topic <a href="http://stackoverflow.com/questions/38617900/need-to-package-jinja2-template-for-python">here</a> and <a href="http://stackoverflow.com/questions/29150156/how-to-make-a-python-package-containing-only-jinja-templates">here</a>, but the answers are vague and unsatisfying.</p>
<p>What a naive newbie wants to do, obviously, is just load the template directly from the filesystem, not as a resource in a package. <strong>How is this done?</strong></p>
| 0 | 2016-07-28T16:59:42Z | 38,642,558 | <p><strong>Here's how</strong>: use a <code>FileSystemLoader</code> instead of a <code>PackageLoader</code>. I found examples on the web <a href="http://matthiaseisen.com/pp/patterns/p0198/" rel="nofollow">here</a> and <a href="http://kagerato.net/articles/software/libraries/jinja-quickstart.html" rel="nofollow">here</a>. Let's say you have a python file in the same dir as your template:</p>
<pre><code>./index.py
./template.html
</code></pre>
<p>This index.py will find the template and render it:</p>
<pre><code>#!/usr/bin/python
import jinja2
templateLoader = jinja2.FileSystemLoader( searchpath="./")
templateEnv = jinja2.Environment( loader=templateLoader )
TEMPLATE_FILE = "template.html"
template = templateEnv.get_template( TEMPLATE_FILE )
outputText = template.render() # this is where to put args to the template renderer
print(outputText)
</code></pre>
<p>It turns out, the jinja2 API doc does have a <a href="http://jinja.pocoo.org/docs/dev/api/#loaders" rel="nofollow">section which discusses all the built-in loaders</a>, so it's kind of embarrassing not to have noticed that right away. But the introduction is worded such that <code>PackageLoader</code> seems to be the default, "simplest" method. For newcomers to python, this can lead to a wild goose chase.</p>
| 0 | 2016-07-28T16:59:42Z | [
"python",
"templates",
"jinja2"
] |
How to test file locking in Python | 38,642,623 | <p>So I want to write some files that might be locked/blocked for write/delete by other processes and like to test that upfront.</p>
<p>As I understand: <code>os.access(path, os.W_OK)</code> only looks for the permissions and will return true although a file cannot currently be written. So I have this little function:</p>
<pre><code>def write_test(path):
try:
fobj = open(path, 'a')
fobj.close()
return True
except IOError:
return False
</code></pre>
<p>It actually works pretty well, when I try it with a file that I manually open with a Program. But as a wannabe-good-developer I want to put it in a test to automatically see if it works as expected.</p>
<p>Thing is: If I just <code>open(path, 'a')</code> the file I can still <code>open()</code> it again no problem! Even from another Python instance. Although <strong>Explorer</strong> will actually tell me that the file is currently open in Python!</p>
<p>I looked up other posts here & there about locking. Most are suggesting to install a package. You migth understand that I don't wanna do that to test a handful lines of code. So I dug up the packages to see the actual spot where the locking is eventually done...</p>
<p><a href="http://stackoverflow.com/a/30941681/469322">fcntl</a>? I don't have that. <a href="https://pypi.python.org/pypi/portalocker" rel="nofollow">win32con</a>? Don't have it either... Now in <a href="https://github.com/dmfrey/FileLock/blob/master/filelock/filelock.py" rel="nofollow">filelock</a> there is this:</p>
<pre><code>self.fd = os.open(self.lockfile, os.O_CREAT|os.O_EXCL|os.O_RDWR)
</code></pre>
<p>When I do that on a file it moans that the <strong>file exists</strong>!! Ehhm ... yea! That's the idea! But even when I do it on a non-existing path. I can still <code>open(path, 'a')</code> it! Even from another python instance...</p>
<p>I'm beginning to think that I fail to understand something very basic here. Am I looking for the wrong thing? Can someone point me into the right direction?
<strong>Thanks!</strong></p>
| 0 | 2016-07-28T17:03:27Z | 38,643,576 | <p>You are trying to implement the file locking problem using just the system call <strong>open()</strong>. The Unix-like systems uses by default <a href="http://www.thegeekstuff.com/2012/04/linux-file-locking-types/" rel="nofollow">advisory file locking</a>. This means that cooperating processes may use locks to coordinate access to a file among themselves, but uncooperative processes are also free to ignore locks and access the file in any way they choose. In other words, file locks lock out other file lockers only, not I/O. See <a href="https://en.wikipedia.org/wiki/File_locking" rel="nofollow">Wikipedia</a>.</p>
<p>As stated in <a href="http://www.tutorialspoint.com/unix_system_calls/open.htm" rel="nofollow">system call <strong>open()</strong> reference</a> the solution for performing atomic file locking using a lockfile is to create a unique file on the same file system (e.g., incorporating hostname and pid), use <strong>link(2)</strong> to make a link to the lockfile. If <strong>link()</strong> returns 0, the lock is successful. Otherwise, use <strong>stat(2)</strong> on the unique file to check if its link count has increased to 2, in which case the lock is also successful.</p>
<p>That is why in <a href="https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py" rel="nofollow">filelock</a> they also use the function <strong>fcntl.flock()</strong> and puts all that stuff in a module as it should be.</p>
| 1 | 2016-07-28T17:57:04Z | [
"python",
"windows",
"file",
"locking"
] |
How to test file locking in Python | 38,642,623 | <p>So I want to write some files that might be locked/blocked for write/delete by other processes and like to test that upfront.</p>
<p>As I understand: <code>os.access(path, os.W_OK)</code> only looks for the permissions and will return true although a file cannot currently be written. So I have this little function:</p>
<pre><code>def write_test(path):
try:
fobj = open(path, 'a')
fobj.close()
return True
except IOError:
return False
</code></pre>
<p>It actually works pretty well, when I try it with a file that I manually open with a Program. But as a wannabe-good-developer I want to put it in a test to automatically see if it works as expected.</p>
<p>Thing is: If I just <code>open(path, 'a')</code> the file I can still <code>open()</code> it again no problem! Even from another Python instance. Although <strong>Explorer</strong> will actually tell me that the file is currently open in Python!</p>
<p>I looked up other posts here & there about locking. Most are suggesting to install a package. You migth understand that I don't wanna do that to test a handful lines of code. So I dug up the packages to see the actual spot where the locking is eventually done...</p>
<p><a href="http://stackoverflow.com/a/30941681/469322">fcntl</a>? I don't have that. <a href="https://pypi.python.org/pypi/portalocker" rel="nofollow">win32con</a>? Don't have it either... Now in <a href="https://github.com/dmfrey/FileLock/blob/master/filelock/filelock.py" rel="nofollow">filelock</a> there is this:</p>
<pre><code>self.fd = os.open(self.lockfile, os.O_CREAT|os.O_EXCL|os.O_RDWR)
</code></pre>
<p>When I do that on a file it moans that the <strong>file exists</strong>!! Ehhm ... yea! That's the idea! But even when I do it on a non-existing path. I can still <code>open(path, 'a')</code> it! Even from another python instance...</p>
<p>I'm beginning to think that I fail to understand something very basic here. Am I looking for the wrong thing? Can someone point me into the right direction?
<strong>Thanks!</strong></p>
| 0 | 2016-07-28T17:03:27Z | 38,645,902 | <p>Alright! Thanks to those guys I actually have something now! So this is my function:</p>
<pre><code>def lock_test(path):
"""
Checks if a file can, aside from it's permissions, be changed right now (True)
or is already locked by another process (False).
:param str path: file to be checked
:rtype: bool
"""
import msvcrt
try:
fd = os.open(path, os.O_APPEND | os.O_EXCL | os.O_RDWR)
except OSError:
return False
try:
msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
msvcrt.locking(fd, msvcrt.LK_UNLCK, 1)
os.close(fd)
return True
except (OSError, IOError):
os.close(fd)
return False
</code></pre>
<p>And the unittest could look something like this:</p>
<pre><code>class Test(unittest.TestCase):
def test_lock_test(self):
testfile = 'some_test_name4142351345.xyz'
testcontent = 'some random blaaa'
with open(testfile, 'w') as fob:
fob.write(testcontent)
# test successful locking and unlocking
self.assertTrue(lock_test(testfile))
os.remove(testfile)
self.assertFalse(os.path.exists(testfile))
# make file again, lock and test False locking
with open(testfile, 'w') as fob:
fob.write(testcontent)
fd = os.open(testfile, os.O_APPEND | os.O_RDWR)
msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
self.assertFalse(lock_test(testfile))
msvcrt.locking(fd, msvcrt.LK_UNLCK, 1)
self.assertTrue(lock_test(testfile))
os.close(fd)
with open(testfile) as fob:
content = fob.read()
self.assertTrue(content == testcontent)
os.remove(testfile)
</code></pre>
<p>Works. Downsides are:</p>
<ul>
<li>It's kind of testing itself with itself</li>
<li>so the initial <code>OSError</code> catch is not even tested, only locking again with <code>msvcrt</code></li>
</ul>
<p>But I dunno how to make it better now.</p>
| 0 | 2016-07-28T20:12:55Z | [
"python",
"windows",
"file",
"locking"
] |
Python: Set PYTHONPATH according to requirements.txt at runtime | 38,642,658 | <p>I have a Python application that comes with a command line script. I expose the script via <code>setuptools</code> "entry point" feature. Whenever a user runs the script, I would like the environment to be consistent with the package's <code>requirements.txt</code>. This means that the environment must contain versions of each dependency package that match the version specifiers in <code>requirements.txt</code>.</p>
<p>I know that this can be achieved with <code>venv</code>/<code>virtualenv</code> by making my users create a virtual environment, install <code>requirements.txt</code> in it, and activate that virtual environment whenever they run the script. I do not want to impose this burden of manually invoking <code>virtualenv</code> on users. Ruby's <code>bundler</code> solves this problem by providing <code>bundler/setup</code>-- when loaded, it modifies Ruby's <code>$LOAD_PATH</code> to reflect the contents of the <code>Gemfile</code> (analogue of <code>requirements.txt</code>). Thus it can be placed at the top of a script to transparently control the runtime environment. Does Python have an equivalent? That is, a way to set the environment at runtime according to <code>requirements.txt</code> without imposing additional complexity on the user?</p>
| 2 | 2016-07-28T17:05:13Z | 38,642,807 | <p>I don't see why it wouldn't be possible for a Python program to install its own dependencies before importing them, but it is unheard of in the Python community.</p>
<p>I'd rather look at options to make your application a standalone executable, as explained <a href="https://stackoverflow.com/questions/112698/py2exe-generate-single-executable-file">here</a>, for instance.</p>
| 1 | 2016-07-28T17:14:32Z | [
"python"
] |
Python: Set PYTHONPATH according to requirements.txt at runtime | 38,642,658 | <p>I have a Python application that comes with a command line script. I expose the script via <code>setuptools</code> "entry point" feature. Whenever a user runs the script, I would like the environment to be consistent with the package's <code>requirements.txt</code>. This means that the environment must contain versions of each dependency package that match the version specifiers in <code>requirements.txt</code>.</p>
<p>I know that this can be achieved with <code>venv</code>/<code>virtualenv</code> by making my users create a virtual environment, install <code>requirements.txt</code> in it, and activate that virtual environment whenever they run the script. I do not want to impose this burden of manually invoking <code>virtualenv</code> on users. Ruby's <code>bundler</code> solves this problem by providing <code>bundler/setup</code>-- when loaded, it modifies Ruby's <code>$LOAD_PATH</code> to reflect the contents of the <code>Gemfile</code> (analogue of <code>requirements.txt</code>). Thus it can be placed at the top of a script to transparently control the runtime environment. Does Python have an equivalent? That is, a way to set the environment at runtime according to <code>requirements.txt</code> without imposing additional complexity on the user?</p>
| 2 | 2016-07-28T17:05:13Z | 38,648,250 | <blockquote>
<p>Does Python have an equivalent? That is, a way to set the environment at runtime according to requirements.txt without imposing additional complexity on the user?</p>
</blockquote>
<p>Yes, more than one.</p>
<p>One is <a href="https://pex.readthedocs.io/en/stable/" rel="nofollow">pex</a></p>
<blockquote>
<p>pex is a library for generating .pex (Python EXecutable) files which
are executable Python environments in the spirit of virtualenvs.</p>
</blockquote>
<p>and the other is <a href="http://platter.pocoo.org/dev/" rel="nofollow">Platter</a>:</p>
<blockquote>
<p>Platter is a tool for Python that simplifies deployments on Unix
servers. Itâs a thin wrapper around pip, virtualenv and wheel and aids
in creating packages that can install without compiling or downloading
on servers.</p>
</blockquote>
| 1 | 2016-07-28T23:21:08Z | [
"python"
] |
Use translate to replace punctuation, what's the difference between these 3 ways? | 38,642,689 | <p>I'm trying to replace punctuation to space in a string. I searched the answer and tried them in my python 2.7, they show different results. </p>
<pre><code>s1=" merry's home, see a sign 'the-shop $on sale$ **go go!'" #sample string
print s1.translate(string.maketrans("",""), string.punctuation) #way1
print s1.translate(None,string.punctuation) #way2
table=string.maketrans(string.punctuation,' '*len(string.punctuation))
print s1.translate(table) #way3
</code></pre>
<p>it prints like this:</p>
<pre><code>merrys home see a sign theshop on sale go go
merrys home see a sign theshop on sale go go
merry s home see a sign the shop on sale go go
</code></pre>
<p>so what's the difference between these ways?</p>
| 2 | 2016-07-28T17:06:47Z | 38,642,747 | <p>There isn't really a functional difference in the first two ... You're either passing an empty translation table (<code>string.maketrans("","")</code>) or you're telling python to skip the translation step (<code>None</code>). After the translation, you're removing all punctuation since you pass <code>string.punctionat</code> as the characters that should be deleted. If I were a betting man, I'd bet that the <code>None</code> version would be slightly more performant, but you can <code>timeit</code> to find out...</p>
<p>The last example creates a translation table to map all punctuation to a space and doesn't delete anything. This is why the last example has a bunch of extra spaces in it.</p>
| 1 | 2016-07-28T17:11:09Z | [
"python",
"string",
"python-2.x",
"punctuation"
] |
Use translate to replace punctuation, what's the difference between these 3 ways? | 38,642,689 | <p>I'm trying to replace punctuation to space in a string. I searched the answer and tried them in my python 2.7, they show different results. </p>
<pre><code>s1=" merry's home, see a sign 'the-shop $on sale$ **go go!'" #sample string
print s1.translate(string.maketrans("",""), string.punctuation) #way1
print s1.translate(None,string.punctuation) #way2
table=string.maketrans(string.punctuation,' '*len(string.punctuation))
print s1.translate(table) #way3
</code></pre>
<p>it prints like this:</p>
<pre><code>merrys home see a sign theshop on sale go go
merrys home see a sign theshop on sale go go
merry s home see a sign the shop on sale go go
</code></pre>
<p>so what's the difference between these ways?</p>
| 2 | 2016-07-28T17:06:47Z | 38,642,812 | <p>The documentation for <code>translate</code> specifies <a href="https://docs.python.org/2/library/stdtypes.html#str.translate" rel="nofollow"><code>str.translate(table[, deletechars])</code></a></p>
<blockquote>
<p>Return a copy of the string where all characters occurring in the optional argument deletechars are removed, and the remaining characters have been mapped through the given translation </p>
</blockquote>
<p>cont</p>
<blockquote>
<p>set the table argument to <code>None</code> for translations that only delete characters</p>
</blockquote>
<pre><code>print s1.translate(string.maketrans("",""), string.punctuation)
</code></pre>
<p>In this case you delete all punctuation and replace empty strings with empty strings </p>
<pre><code>print s1.translate(None,string.punctuation)
</code></pre>
<p>In this case you're simply removing all punctuation.</p>
<pre><code>table=string.maketrans(string.punctuation,' '*len(string.punctuation))
print s1.translate(table)
</code></pre>
<p>In this case you create a translation table that replaces punctuation with empty spaces and then translate.</p>
<p>The difference between the first and second is, as mgilson stated, in performance, the <code>None</code> case does indeed go a bit faster:</p>
<pre><code>%timeit s1.translate(string.maketrans("",""), string.punctuation) #way1
The slowest run took 4.70 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.27 µs per loop
%timeit s1.translate(None, string.punctuation) #way1
The slowest run took 11.41 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 627 ns per loop
</code></pre>
<p>The third is a completely different application of translate.</p>
| 1 | 2016-07-28T17:14:56Z | [
"python",
"string",
"python-2.x",
"punctuation"
] |
Assertions vs exceptions for library API type checking | 38,642,720 | <p>Python encourages duck typing over explicit type checking. However, sometimes it is useful to explicitly check types.</p>
<p>When a Python library need to do type checking, should it use assertions or exceptions?</p>
<pre><code>def foo(bar):
assert isinstance(bar, str)
def foo(bar):
if not isinstance(bar, str):
raise TypeError
</code></pre>
<p>A library's API is a public API, and public APIs should explicitly raise exceptions. However, it shouldn't be the responsibility of a library to do user input validation (unless it is a user input validation library), nor is it reasonable to expect a library to be designed to be "used incorrectly". A program using a library isn't going to catch the TypeError exception; it would be fixed to instead call the library API with the right types.</p>
<p>Should library type checking be done with assertions instead of raising exceptions?</p>
| 0 | 2016-07-28T17:09:08Z | 38,642,849 | <p>Assertion evaluation can be deactivated by running python with the <code>-O</code> command line option which can be good if you want to check things more extensively than you would in production.</p>
<p>Have a look at <a href="https://wiki.python.org/moin/UsingAssertionsEffectively" rel="nofollow">https://wiki.python.org/moin/UsingAssertionsEffectively</a> - they know more than me and they say:</p>
<blockquote>
<p>Places to consider putting assertions:</p>
<ul>
<li>checking parameter types, classes, or values</li>
<li>checking data structure invariants</li>
<li>checking "can't happen" situations (duplicates in a list, contradictory state variables.)</li>
<li>after calling a function, to make sure that its return is reasonable</li>
</ul>
</blockquote>
| -1 | 2016-07-28T17:17:14Z | [
"python"
] |
matplotlib: how to make sizes of the subfigures without colorbars consistent with those with colorbars? | 38,642,839 | <p>Here is my figure composed of two subfigures below, one with colorbar and the other without colorbar. Currently they are not of the same size due to the presence of the colorbar, I am wondering if there is a way to fix this?</p>
<p><a href="http://i.stack.imgur.com/hoEFX.png" rel="nofollow"><img src="http://i.stack.imgur.com/hoEFX.png" alt="enter image description here"></a></p>
| 2 | 2016-07-28T17:16:22Z | 38,646,984 | <p>Add colorbar manually:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
fig, (ax1, ax2) = plt.subplots(1, 2)
x = np.random.random(50)
y = np.random.random(50)
c = np.random.random(50)
s = 500 * np.random.random(50)
im1 = ax1.scatter(x, y, c='r', s=10)
im2 = ax2.scatter(x, y, c=c, s=s, cmap=plt.cm.jet)
# Add a colorbar
divider = make_axes_locatable(ax2)
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(im2, cax=cax)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/Bx14O.png" rel="nofollow"><img src="http://i.stack.imgur.com/Bx14O.png" alt="enter image description here"></a></p>
| 0 | 2016-07-28T21:22:48Z | [
"python",
"matplotlib"
] |
Pandas, conditional column assignment based on column values | 38,643,012 | <p>How can I have conditional assignment in pandas by based on the values of two columns? Conceptually something like the following:</p>
<pre><code>Column_D = Column_B / (Column_B + Column_C) if Column_C is not null else Column_C
</code></pre>
<p>Concrete example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'b': [2,np.nan,4,2,np.nan], 'c':[np.nan,1,2,np.nan,np.nan]})
b c
0 2.0 NaN
1 NaN 1.0
2 4.0 2.0
3 2.0 NaN
4 NaN NaN
</code></pre>
<p>I want to have a new column <code>d</code> whose result is division of column <code>b</code> by sum of <code>b</code> and <code>c</code>, if <code>c</code> is not null, otherwise the value should be the value at column <code>c</code>.
Something conceptually like the following:</p>
<pre><code>df['d'] = df['b']/(df['b']+df['c']) if not df['c'].isnull() else df['c']
</code></pre>
<p>desired result:</p>
<pre><code> b c d
0 2.0 NaN NaN
1 NaN 1.0 1.0
2 4.0 2.0 0.66
3 2.0 NaN NaN
4 NaN NaN NaN
</code></pre>
<p>How can I achieve this?</p>
| 4 | 2016-07-28T17:27:11Z | 38,643,133 | <p>try this (if you want to have your desired result set - checking <code>b</code> column):</p>
<pre><code>In [30]: df['d'] = np.where(df.b.notnull(), df.b/(df.b+df.c), df.c)
In [31]: df
Out[31]:
b c d
0 2.0 NaN NaN
1 NaN 1.0 1.000000
2 4.0 2.0 0.666667
3 2.0 NaN NaN
4 NaN NaN NaN
</code></pre>
<p>or this, checking <code>c</code> column:</p>
<pre><code>In [32]: df['d'] = np.where(df.c.notnull(), df.b/(df.b+df.c), df.c)
In [33]: df
Out[33]:
b c d
0 2.0 NaN NaN
1 NaN 1.0 NaN
2 4.0 2.0 0.666667
3 2.0 NaN NaN
4 NaN NaN NaN
</code></pre>
| 5 | 2016-07-28T17:33:28Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
bitcoinrpc calls return nothing | 38,643,016 | <p>I am using bitcoind in my project and when I deploy it on my server bitcoind works strange. I use this lib to work with rpc <a href="https://github.com/jgarzik/python-bitcoinrpc" rel="nofollow">https://github.com/jgarzik/python-bitcoinrpc</a>. On local dev server everything is fine but when I deploy it to vps it stops return data. The data is empty. I made some tests like this:</p>
<pre><code>bitcoin.conf file:
server=1
rpcuser=myuser
rpcpassword=mypassword
rpcconnect=127.0.0.1
rpcport=8332
</code></pre>
<p>some view.py:</p>
<pre><code>def btc_rpc_connect(config):
rpc_server_url = ("http://{user}:{password}@{host}:{port}").format(
user=config.rpc_user,
password=config.rpc_pass,
host=config.rpc_host,
port=config.rpc_port
)
rpc_conn = AuthServiceProxy(rpc_server_url)
return rpc_conn
user = request.user
# getting rpc settings from db
config = ProjectSettings.objects.get(id=1)
rpc_connection = btc_rpc_connect(config)
btc_address = rpc_connection.getnewaddress(user.username)
</code></pre>
<p>I also tried to test from django's ./manage.py shell and entered this code manually. The fact is works on dev server and i have an address in <code>btc_address</code>, but on vps <code>btc_address</code> is empty! Please help me. Can it happen because of permission troubles? Anyways bitcoind accept the connection and do not return authentification exception, but no reaction to any command.</p>
<p>But if I use it from console:</p>
<pre><code>bitcoin-cli getnewaddress
</code></pre>
<p>it works fine and give me an address.</p>
| 1 | 2016-07-28T17:27:12Z | 38,644,632 | <p>Omg that was bug in the repo, and I fixed it locally several month ago and forgot about that! If you have the same problem you can mannually edit <code>lib/python2.7/site-packages/bitcoinrpc/authproxy.py</code> delete <code>else:</code> on 146 line and move out <code>return response['result']</code> from <code>elif</code> block like in here: <a href="https://github.com/jgarzik/python-bitcoinrpc/commit/8c0114bfbf7650d40a88b20d1e16ff79d768f3a9" rel="nofollow">https://github.com/jgarzik/python-bitcoinrpc/commit/8c0114bfbf7650d40a88b20d1e16ff79d768f3a9</a></p>
<p>Another way is delete python-bitcoinrpc:</p>
<pre><code>pip uninstall python-bitcoinrpc
</code></pre>
<p>And reinstall correct version:</p>
<pre><code>pip install git+https://github.com/jgarzik/python-bitcoinrpc.git
</code></pre>
<p>Hope they will fix it in repo soon.</p>
| 0 | 2016-07-28T18:56:58Z | [
"python",
"django",
"json-rpc",
"bitcoind"
] |
Trouble updating json files | 38,643,145 | <p>I have two <code>json</code> files. One of them is a dictionary which is a subset of the other. </p>
<p><code>json_file_1.json</code> contains <code>{'foo': 1, 'bar': 2, 'baz': 3}</code></p>
<p><code>json_file_2.json</code> contains <code>{'foo': 100, 'bar': 200}</code>. </p>
<p>I want to create a final <code>json</code> file that has the following: <code>{'foo': 100, 'bar': 200, 'baz': 3}</code></p>
<p>Here is what I tried so far:</p>
<pre><code>with open('json_file_1.json') as f1:
original_info = json.load(f1)
f1.close()
with open('json_file_2.json') as f2:
updated_info = json.load(f2)
f2.close()
print original_info # prints the correct dictionary
print updated_info # prints the correct dictionary
final_info = original_info.update(updated_info)
print final_info # prints None
with open('json_file_final.json', 'w+') as f_final:
json.dump(final_info, f_final)
</code></pre>
<p>However, when I open the final <code>json</code> file, it only contains "Null". When I tried debugging it, I printed out <code>original_info</code> and <code>updated_info</code>, and they were each fine. I could call <code>original_info.update(updated_info)</code> and that would produce a dictionary that was properly updated. However, it just isn't working for some reason when it's all put together?</p>
<p>Any thoughts?</p>
<p>Thanks so much!</p>
| 0 | 2016-07-28T17:34:16Z | 38,643,256 | <p><a href="https://docs.python.org/3/library/stdtypes.html?highlight=update#dict.update" rel="nofollow"><code>dict.update</code></a> updates a dictionary in-place and returns <code>None</code>. </p>
<p>You need to dump <code>original_info</code></p>
<p>For reference, </p>
<pre><code>In [11]: d1 = {'foo': 1, 'bar': 2, 'baz': 3}
In [12]: d2 = {'foo': 100, 'bar': 200}
In [13]: d3 = d1.update(d2)
In [14]: d3
In [15]: print(d3)
None
In [16]: d1
Out[16]: {'bar': 200, 'baz': 3, 'foo': 100}
</code></pre>
| 2 | 2016-07-28T17:39:32Z | [
"python",
"json"
] |
Importing structured data into python | 38,643,151 | <p>I have a text file with a set of arrays in it that looks like this:</p>
<pre><code>[(0,1,3),(0,4,5),...(1,9,0)]
[(9,8,7),(0,4,5),...(1,9,0)]
</code></pre>
<p>where the rows are not the same length. </p>
<p>This is essentially a list of paths, where each set of points is a path, ie:</p>
<pre><code>(0,1,3),(0,4,5),...(1,9,0) =path1
(9,8,7),(0,4,5),...(1,9,0) =path2
</code></pre>
<p>I need to import this in a form where I can call all elements. eg for all points in path 1, determine the distance to all points in path 2. Not sure where to start considering the delimiters don't want to hand both brackets and commas, and then built arrays in a callable way.</p>
| 0 | 2016-07-28T17:34:36Z | 38,644,104 | <p>The following code reads the data in (assuming one path per line, and no extra whitespace) into a list of numpy arrays, then demonstrates how to compute the distance between two points.</p>
<pre><code>import numpy as np
import numpy.linalg as la
#replace with your datafile
datafile = "../data/point_path.txt"
paths = []
with open(datafile, "r") as f:
for line in f:
point_strs = line.strip().strip("[()]").split("),(")
npoints = len(point_strs)
path = np.empty((npoints, 3))
for i in xrange(npoints):
path[i,:] = np.array(map(int, point_strs[i].split(",")))
paths.append(path)
print "First point of path 1:"
print paths[0][0]
print "Second point of path 2:"
print paths[1][1]
print "Euclidean Distance between these points:"
print la.norm(paths[0][0]-paths[1][1])
</code></pre>
<p>The output of this is:</p>
<pre><code>First point of path 1:
[ 0. 1. 3.]
Second point of path 2:
[ 0. 4. 5.]
Euclidean Distance between these points:
3.60555127546
</code></pre>
<p><b>Edit: How to format input file</b><br/>
The code assumes that each list of points is on its own line (e.g. for line in f, parse list of points). So the following file:</p>
<pre><code>[(0,2,3),(0,4,0)] [(1,4,5),(5,8,9),(3,4,0)] [(0,5,7),(0,6,8),(1,5,6),(5,8,10)]
</code></pre>
<p>will not work because all 3 lists are on the same line.</p>
<p>This format:</p>
<pre><code>[(0,2,3),(0,4,0)]
[(1,4,5),(5,8,9),(3,4,0)]
[(0,5,7),(0,6,8),(1,5,6),(5,8,10)]
</code></pre>
<p>will work, as each list of points is on a separate line. </p>
| 0 | 2016-07-28T18:26:06Z | [
"python",
"arrays",
"numpy",
"import"
] |
How to define models/serializers correctly for Django REST Framework? | 38,643,178 | <p>We are creating a mobile web application with this stack:</p>
<p>Python
Django, SQLite DB
Django REST
Ionic Cordova
Angular JS</p>
<p>This is a quiz application, in which you answer questions from 4 multiple choices. Questions and answers are stored in the database. With the help of REST framework an endpoint has been created.</p>
<p>With this JSON file our Angular JS controller works asynchronously.
The problem is defining answers in model.py. It involves an array in an array.</p>
<p>We're trying to get this structure with Django REST:</p>
<pre><code>[
{
"question" : "Java was originally developed at _______________",
"answer" : [
{"id" : 0, "text" : "Sun Microsystems"},
{"id" : 1, "text" : "Intel"},
{"id" : 2, "text" : "Hewlett-Packard"},
{"id" : 3, "text" : "Oracle"}
],
"correct" : 0
},
]
</code></pre>
<p>And this is what we have:</p>
<pre><code>[
{
"question": "Java was originally developed at _______________",
"answer": [
{
"url": "http://127.0.0.1:8000/api/answer/1/",
"answerid": 0,
"text": "Sun Microsystems"
},
{
"url": "http://127.0.0.1:8000/api/answer/2/",
"answerid": 1,
"text": "Intel"
},
{
"url": "http://127.0.0.1:8000/api/answer/3/",
"answerid": 2,
"text": "Hewlett-Packard"
},
{
"url": "http://127.0.0.1:8000/api/answer/4/",
"answerid": 3,
"text": "Oracle"
}
],
"correct": 0
}
]
</code></pre>
<p>Here's our models.py:</p>
<pre><code>from django.db import models
class Answer(models.Model):
answerid = models.IntegerField()
text = models.TextField()
class Question(models.Model):
question = models.CharField(max_length=200)
answer = models.ManyToManyField(Answer)
correct = models.IntegerField()
</code></pre>
<p>serializer:</p>
<pre><code>from quiz.models import Question, Answer
from rest_framework import routers, serializers, viewsets
class AnswerSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Answer
fields = ('answerid', 'text')
class QuestionSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Question
fields = ('question', 'answer', 'correct')
read_only_fields = ('answer',)
depth = 1
# ViewSets define the view behavior.
class QuestionViewSet(viewsets.ModelViewSet):
queryset = Question.objects.all()
serializer_class = QuestionSerializer
class AnswerViewSet(viewsets.ModelViewSet):
queryset = Answer.objects.all()
serializer_class = AnswerSerializer
</code></pre>
<p>Is it somehow possible to remove the urls in our solution?</p>
| 3 | 2016-07-28T17:35:56Z | 38,643,485 | <p>The urls come because you inherit <code>HyperlinkedModelSerializer</code>.</p>
<pre><code>class QuestionSerializer(serializers.HyperlinkedModelSerializer):
...
</code></pre>
<p>If you don't want them, use a different base class - perhaps just a <a href="http://www.django-rest-framework.org/api-guide/serializers/#modelserializer" rel="nofollow"><code>ModelSerializer</code></a>.</p>
| 2 | 2016-07-28T17:52:17Z | [
"python",
"json",
"django",
"django-rest-framework"
] |
How to define models/serializers correctly for Django REST Framework? | 38,643,178 | <p>We are creating a mobile web application with this stack:</p>
<p>Python
Django, SQLite DB
Django REST
Ionic Cordova
Angular JS</p>
<p>This is a quiz application, in which you answer questions from 4 multiple choices. Questions and answers are stored in the database. With the help of REST framework an endpoint has been created.</p>
<p>With this JSON file our Angular JS controller works asynchronously.
The problem is defining answers in model.py. It involves an array in an array.</p>
<p>We're trying to get this structure with Django REST:</p>
<pre><code>[
{
"question" : "Java was originally developed at _______________",
"answer" : [
{"id" : 0, "text" : "Sun Microsystems"},
{"id" : 1, "text" : "Intel"},
{"id" : 2, "text" : "Hewlett-Packard"},
{"id" : 3, "text" : "Oracle"}
],
"correct" : 0
},
]
</code></pre>
<p>And this is what we have:</p>
<pre><code>[
{
"question": "Java was originally developed at _______________",
"answer": [
{
"url": "http://127.0.0.1:8000/api/answer/1/",
"answerid": 0,
"text": "Sun Microsystems"
},
{
"url": "http://127.0.0.1:8000/api/answer/2/",
"answerid": 1,
"text": "Intel"
},
{
"url": "http://127.0.0.1:8000/api/answer/3/",
"answerid": 2,
"text": "Hewlett-Packard"
},
{
"url": "http://127.0.0.1:8000/api/answer/4/",
"answerid": 3,
"text": "Oracle"
}
],
"correct": 0
}
]
</code></pre>
<p>Here's our models.py:</p>
<pre><code>from django.db import models
class Answer(models.Model):
answerid = models.IntegerField()
text = models.TextField()
class Question(models.Model):
question = models.CharField(max_length=200)
answer = models.ManyToManyField(Answer)
correct = models.IntegerField()
</code></pre>
<p>serializer:</p>
<pre><code>from quiz.models import Question, Answer
from rest_framework import routers, serializers, viewsets
class AnswerSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Answer
fields = ('answerid', 'text')
class QuestionSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Question
fields = ('question', 'answer', 'correct')
read_only_fields = ('answer',)
depth = 1
# ViewSets define the view behavior.
class QuestionViewSet(viewsets.ModelViewSet):
queryset = Question.objects.all()
serializer_class = QuestionSerializer
class AnswerViewSet(viewsets.ModelViewSet):
queryset = Answer.objects.all()
serializer_class = AnswerSerializer
</code></pre>
<p>Is it somehow possible to remove the urls in our solution?</p>
| 3 | 2016-07-28T17:35:56Z | 38,643,551 | <p>I believe this will work.</p>
<pre><code>class AnswerSerializer(serializers.ModelSerializer):
class Meta:
model = Answer
fields = ('answerid', 'text')
class QuestionSerializer(serializers.ModelSerializer):
answer = AnswerSerializer(source="answers)
class Meta:
model = Question
fields = ('question', 'answer', 'correct')
read_only_fields = ('answer',)
depth = 1
</code></pre>
<p>You may need to change the <code>source</code> to correctly get the answers you need.</p>
<p>The <code>serializers.HyperlinkedModelSerializer</code> will automatically insert the url field in your response.</p>
| 2 | 2016-07-28T17:55:51Z | [
"python",
"json",
"django",
"django-rest-framework"
] |
python parse string to get contents between text | 38,643,235 | <p>How can i collect the text (data) between a set of strings? For example I have this code snippet below, which is a modified version of json, which I don't have the ability to change. </p>
<p>However I want to collect the data between <code>presets = {...}</code></p>
<pre><code>{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}
</code></pre>
<p>So my resulting string would be whatever is between the two brackets <code>{...}</code> following the word presets. In this case it would be:</p>
<pre><code>location = "italy",
size = 10,
travelers = False,
</code></pre>
<p>My starting point so far...</p>
<pre><code>filepath = "C:/Users/jmartini/Projects/assets/tool_source.cfg"
with open(filepath, 'r') as file:
data = file.read().replace('\n', '').replace('\t', '')
print data
</code></pre>
| 1 | 2016-07-28T17:38:26Z | 38,643,580 | <p>You can use <code>re</code> here.</p>
<pre><code>import re
filepath = r"C:/Users/jmartini/Projects/rogue_presetsManager/assets/tool_leveleditormodule_source.cfg"
f=open(filepath, "r")
data = f.read()
print re.findall(r"presets\s*=\s*\{\s*([^}]*?)\s*}", data)
</code></pre>
| 1 | 2016-07-28T17:57:21Z | [
"python",
"parsing"
] |
python parse string to get contents between text | 38,643,235 | <p>How can i collect the text (data) between a set of strings? For example I have this code snippet below, which is a modified version of json, which I don't have the ability to change. </p>
<p>However I want to collect the data between <code>presets = {...}</code></p>
<pre><code>{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}
</code></pre>
<p>So my resulting string would be whatever is between the two brackets <code>{...}</code> following the word presets. In this case it would be:</p>
<pre><code>location = "italy",
size = 10,
travelers = False,
</code></pre>
<p>My starting point so far...</p>
<pre><code>filepath = "C:/Users/jmartini/Projects/assets/tool_source.cfg"
with open(filepath, 'r') as file:
data = file.read().replace('\n', '').replace('\t', '')
print data
</code></pre>
| 1 | 2016-07-28T17:38:26Z | 38,643,748 | <p>Use <a href="http://pyyaml.org/wiki/PyYAML" rel="nofollow">PyYaml</a> to get the required data</p>
<blockquote>
<p>pip install PyYaml</p>
</blockquote>
<pre><code>import yaml
def testjson():
with open('data.json') as datafile:
data = datafile.read().replace("\n", "").replace("=", ":")
print(yaml.load(data)["data"]["family"]["presets"])
</code></pre>
<p>I get this output with your data</p>
<pre><code>{'location': 'italy', 'size': 10, 'travelers': False}
</code></pre>
| 2 | 2016-07-28T18:06:25Z | [
"python",
"parsing"
] |
Python post data to url | 38,643,278 | <p>I'm using App Engine and I'm trying to post data to a URL similar to this : </p>
<pre><code>https://push.geckoboard.com/v1/send/<widget-id>
</code></pre>
<p>I've tried the following code:</p>
<pre><code>data = {
"api_key" : api_key,
"data" : {
"item" : [
{
"value" : chatamount
}
]
}
}
encoded_args = urllib.urlencode(data)
conn = httplib.HTTPSConnection(pushurl)
conn.request("POST", "", encoded_args)
response = conn.getresponse()
logging.info(response.status)
conn.close()
</code></pre>
<p>However, the logging returns a 400 error. Does anyone know how to perform a simple data POST using Python and App Engine?</p>
| 1 | 2016-07-28T17:40:51Z | 38,643,872 | <p>The error 400 was caused because the data was url encoded which it shouldn't be. By changing </p>
<pre><code>conn.request("POST", "", encoded_args)
</code></pre>
<p>to </p>
<pre><code>conn.request("POST", "", json.dumps(data))
</code></pre>
<p>the problem was solved.</p>
<p>Thanks for the help!</p>
| 0 | 2016-07-28T18:12:40Z | [
"python",
"http",
"google-app-engine",
"http-post"
] |
Parsing whole terms in Python/json profanity filter | 38,643,309 | <p>I have a json file containing terms to check against for a profanity filter.</p>
<pre><code>["bad", "word", "plug"]
</code></pre>
<p>And I am using this (found from another article) to parse the json and search any data object for set words. </p>
<pre><code>def word_filter(self, *field_names):
import json
from pprint import pprint
with open('/var/www/groupclique/website/swearWords.json') as data_file:
data = json.load(data_file)
for field_name in field_names:
for term in data:
if term in field_name:
self.add_validation_error(
field_name,
"%s has profanity" % field_name)
class JobListing(BaseProtectedModel):
id = db.Column(db.Integer, primary_key=True)
category = db.Column(db.String(255))
job_title = db.Column(db.String(255))
@before_flush
def clean(self):
self.word_filter('job_title')
</code></pre>
<p>The issue is if I use the string "plumber" it fails the check due to the word "plug" in the json file. Because "plu" being in both terms. Is there any way to force the entire word in the json file to be used instead of a partial? Output once ran isnt erroneous:</p>
<pre><code>({ "validation_errors": { "job_title": " job_title has profanity" } })
HTTP PAYLOAD:
{
"job_title":"plumber",
}
</code></pre>
| 0 | 2016-07-28T17:42:43Z | 38,643,579 | <p>You can use string.split() as a way to isolate whole words of the field_name. When you split, it returns a list of each part of the string split up by the specified delimiter. Using that, you can check if the profane term is in the split list:</p>
<pre><code>import json
with open('terms.json') as data_file:
data = json.load(data_file)
for field_name in field_names:
for term in data:
if term in field_name.split(" "):
self.add_validation_error(
field_name,
"%s has profanity" % field_name)
</code></pre>
<p>Where this gets dicey is if there is punctuation or something of the sort. For example, the sentence: "Here comes the sun." will not match the bad word "sun", nor will it match "here". To solve the capital problem, you'll want to change the entire input to lowercase:</p>
<pre><code>if term in field_name.lower().split(" "):
</code></pre>
<p>Removing punctuation is a bit more involved, but <a href="https://stackoverflow.com/questions/265960/best-way-to-strip-punctuation-from-a-string-in-python">this</a> should help you implement that.</p>
<p>There may well be more edge cases you'll need to consider, so just a heads up on two quick ones I thought of.</p>
| 0 | 2016-07-28T17:57:14Z | [
"python",
"sqlalchemy",
"flask-restless"
] |
Select and order records in one to many relationship | 38,643,315 | <p>Consider the following tables:</p>
<pre><code>Student
-------
id
name
</code></pre>
<p>and</p>
<pre><code>Assignment
----------
id
name
</code></pre>
<p>and</p>
<pre><code>Grade
-----
id
student_id
assignment_id
grade
</code></pre>
<p>A student can have multiple grades, corresponding to different assignments.</p>
<p>Now, I want to select records to generate a table that looks like this:</p>
<pre><code>Name Assignment 1 Assignment 2 Assignment 3
--------------------------------------------------------
Bob 55% 80% 23%
Suzy 90% 65% 100%
</code></pre>
<p>And, I want to be able to sort by one of the grades (i.e. Assignment 1)</p>
<p>Is this something which can be done with SQL? As an added bonus, can this be done with flask-sqlalchemy?</p>
<p>I have an idea I need to do a JOIN and ORDER BY but I don't really know where to go with this.</p>
<p>Thanks!</p>
| 1 | 2016-07-28T17:43:01Z | 38,644,355 | <p>In Oracle you may consider something like this</p>
<pre><code>with
tab as(
select s.name as student, a.name as assignment, g.v as grade
from grade g
inner join student s on g.s_n = s.n
inner join assignment a on g.a_n = a.n
)
select *
from tab
pivot (min(grade) for assignment in ('Assignment 1', 'Assignment 2', 'Assignment 3'))
order by 2, 3 desc, 7
</code></pre>
| 0 | 2016-07-28T18:40:59Z | [
"python",
"sql",
"sqlalchemy",
"order"
] |
Select and order records in one to many relationship | 38,643,315 | <p>Consider the following tables:</p>
<pre><code>Student
-------
id
name
</code></pre>
<p>and</p>
<pre><code>Assignment
----------
id
name
</code></pre>
<p>and</p>
<pre><code>Grade
-----
id
student_id
assignment_id
grade
</code></pre>
<p>A student can have multiple grades, corresponding to different assignments.</p>
<p>Now, I want to select records to generate a table that looks like this:</p>
<pre><code>Name Assignment 1 Assignment 2 Assignment 3
--------------------------------------------------------
Bob 55% 80% 23%
Suzy 90% 65% 100%
</code></pre>
<p>And, I want to be able to sort by one of the grades (i.e. Assignment 1)</p>
<p>Is this something which can be done with SQL? As an added bonus, can this be done with flask-sqlalchemy?</p>
<p>I have an idea I need to do a JOIN and ORDER BY but I don't really know where to go with this.</p>
<p>Thanks!</p>
| 1 | 2016-07-28T17:43:01Z | 38,644,748 | <p>If you are using SQLite, you have to use the lowest common denominator of SQL, meaning a join for every single assignment you want to consider:</p>
<pre><code>SELECT
name,
Grade1.grade AS assignment1,
Grade2.grade AS assignment2,
Grade3.grade AS assignment3
FROM
Student
LEFT JOIN Grade AS Grade1 ON (Student.id = Grade1.student_id and Grade1.assignment_id = ...)
LEFT JOIN Grade AS Grade2 ON (Student.id = Grade2.student_id and Grade2.assignment_id = ...)
LEFT JOIN Grade AS Grade3 ON (Student.id = Grade3.student_id and Grade3.assignment_id = ...)
ORDER BY Grade1.grade
</code></pre>
<p>Translating this to SQLAlchemy should be straightforward.</p>
| 2 | 2016-07-28T19:03:37Z | [
"python",
"sql",
"sqlalchemy",
"order"
] |
Quickest Algorithmn for Normalization of Region Proposals | 38,643,385 | <p>In order to normalize the region proposal algorithm (that is, applying regression to every X-by-Y area of an image), I need to create a region proposal normalization when summing the activation of each proposal. Currently, for a 128x128 patch of an image, in Python I'm running this bit of code</p>
<pre><code>region_normalization = np.zeros(image.shape)
for x in range(0,image.shape[0]-128):
for y in range(0,image.shape[0]-128):
region_normalization[x:x+128,y:y+128] =
np.add(region_normalization[x:x+128,y:y+128],1)`
</code></pre>
<p>but this is particularly inefficient. What would be a quicker and/or more pythonic implementation of this algorithm?</p>
<p>Thanks!</p>
| 2 | 2016-07-28T17:46:58Z | 38,644,827 | <p><strong>Reverse engineer it!</strong></p>
<p>Well, let's take a look at the output for a small image and smaller <code>N</code> case, as we will try to reverse engineer this loopy code. So, with <code>N = 4</code> (where <code>N</code> was <code>128</code> in the original case) and image.shape = <code>(10,10)</code>, we would have :</p>
<pre><code>In [106]: region_normalization
Out[106]:
array([[ 1, 2, 3, 4, 4, 4, 3, 2, 1, 0],
[ 2, 4, 6, 8, 8, 8, 6, 4, 2, 0],
[ 3, 6, 9, 12, 12, 12, 9, 6, 3, 0],
[ 4, 8, 12, 16, 16, 16, 12, 8, 4, 0],
[ 4, 8, 12, 16, 16, 16, 12, 8, 4, 0],
[ 4, 8, 12, 16, 16, 16, 12, 8, 4, 0],
[ 3, 6, 9, 12, 12, 12, 9, 6, 3, 0],
[ 2, 4, 6, 8, 8, 8, 6, 4, 2, 0],
[ 1, 2, 3, 4, 4, 4, 3, 2, 1, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>We do see a symmetry there and this symmetry happens to be across both <code>X</code> and <code>Y</code> axes. One more thing that jumps out at us is that each element is product of its starting row and column element. So, the idea would be to get the first row and first column and perform element-wise multiplication among their elements. Since the first row and first column are identical, we just need to get that once and use it with additional axis and let <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> handle those multiplications. Thus, the implementation would be -</p>
<pre><code>N = 128
a1D = np.hstack((np.arange(N)+1,np.full(image.shape[0]-2*N-1,N,dtype=int),\
np.arange(N,-1,-1)))
out = a1D[:,None]*a1D
</code></pre>
<p><strong>Runtime test</strong></p>
<pre><code>In [137]: def original_app(image):
...: region_normalization = np.zeros(image.shape,dtype=int)
...: for x in range(0,image.shape[0]-128):
...: for y in range(0,image.shape[0]-128):
...: region_normalization[x:x+128,y:y+128] = \
...: np.add(region_normalization[x:x+128,y:y+128],1)
...: return region_normalization
...:
...: def vectorized_app(image):
...: N = 128
...: a1D = np.hstack((np.arange(N)+1,np.full(image.shape[0]-2*N-1,N,\
...: dtype=int),np.arange(N,-1,-1)))
...:
...: return a1D[:,None]*a1D
...:
In [138]: # Input
...: image = np.random.randint(0,255,(512,512))
In [139]: np.allclose(original_app(image),vectorized_app(image)) #Verify
Out[139]: True
In [140]: %timeit original_app(image)
1 loops, best of 3: 13 s per loop
In [141]: %timeit vectorized_app(image)
1000 loops, best of 3: 1.4 ms per loop
</code></pre>
<p>Super speedup there!</p>
| 3 | 2016-07-28T19:08:01Z | [
"python",
"performance",
"numpy",
"vectorization",
"normalization"
] |
Quickest Algorithmn for Normalization of Region Proposals | 38,643,385 | <p>In order to normalize the region proposal algorithm (that is, applying regression to every X-by-Y area of an image), I need to create a region proposal normalization when summing the activation of each proposal. Currently, for a 128x128 patch of an image, in Python I'm running this bit of code</p>
<pre><code>region_normalization = np.zeros(image.shape)
for x in range(0,image.shape[0]-128):
for y in range(0,image.shape[0]-128):
region_normalization[x:x+128,y:y+128] =
np.add(region_normalization[x:x+128,y:y+128],1)`
</code></pre>
<p>but this is particularly inefficient. What would be a quicker and/or more pythonic implementation of this algorithm?</p>
<p>Thanks!</p>
| 2 | 2016-07-28T17:46:58Z | 38,645,331 | <p>The value of any given point i,j in your renormalization is equal to the number of 128x128 windows that contain it. Note that this is the product of the degrees of freedom on the x axis and on the y axis. So all we have to do is figure out the degrees of freedom for each possible x and y, value, then use broadcasting or np.outer to get the result. </p>
<pre><code>import numpy as np
image = np.zeros((200,200))
window=128
region_normalization = np.zeros(image.shape)
for x in range(0,image.shape[0]-window):
for y in range(0,image.shape[0]-window):
region_normalization[x:x+window,y:y+window] = np.add(region_normalization[x:x+window,y:y+window],1)
def sliding(n, window=128):
arr = np.zeros(n)
for i in xrange(n):
#want to find all s such that 0<=s<=i<s+128<n
#thus, s < min(i+1, n-128), s >= max(0, i-window+1)
arr[i] = min(i+1, n-window) - max(0,i-window+1)
return arr
def normalizer(image, window = 128):
m,n = image.shape
res = np.zeros(shape)
if m < window or n < window: return res
x_sliding = sliding(m, window)
y_sliding = sliding(n, window)
print x_sliding
res = np.outer(x_sliding,y_sliding)
return res
print np.allclose(normalizer(image, window=128),region_normalization)
</code></pre>
| 1 | 2016-07-28T19:38:23Z | [
"python",
"performance",
"numpy",
"vectorization",
"normalization"
] |
How to separate yaml.dump key:value pair by a new line? | 38,643,420 | <p>I am trying to make yaml dump each key:value pair on a separate line. Is there a native option to do that? I have tried line_break but couldn't get it to work.</p>
<p>Here is a code example:</p>
<pre><code>import yaml
def test_yaml_dump():
obj = {'key0': 1, 'key1': 2}
with open('test.yaml', 'w') as tmpf:
yaml.dump(obj, tmpf, line_break=0)
</code></pre>
<p>The output is:</p>
<pre><code>{key0: 1, key1: 2}
</code></pre>
<p>I want it to be:</p>
<pre><code>{key0: 1,
key1: 2}
</code></pre>
| 1 | 2016-07-28T17:48:23Z | 38,643,516 | <p>If you add the argument <code>default_flow_style=False</code> to dump then the output will be:</p>
<pre><code>key1: 2
key0: 1
</code></pre>
<p>(the so called block style). That is the much more readable way of dumping Python dicts to YAML mappings. In <code>ruamel.yaml</code> this is the default when using <code>ruamel.yaml.round_trip_dump()</code>.</p>
<pre><code>import sys
import ruamel.yaml as yaml
obj = dict(key0=1, key1=2)
yaml.round_trip_dump(obj, sys.stdout)
</code></pre>
| 1 | 2016-07-28T17:54:07Z | [
"python",
"yaml"
] |
Python: Can an exception class identify the object that raised it? | 38,643,450 | <p>When a Python program raises an exception, is there a way the exception handler can identify the object in which the exception was raised?</p>
<p>If not, I believe I can find out by defining the exception class like this...</p>
<pre><code>class FoobarException(Exception) :
def __init__(self,message,context) :
...
</code></pre>
<p>...and using it like this:</p>
<pre><code> raise FoobarException("Something bad happened!", self)
</code></pre>
<p>I'd rather not have to pass "self" to every exception, though.</p>
| 0 | 2016-07-28T17:50:10Z | 38,643,844 | <p>It quickly gets messy if you want the exception itself to figure out where in the stack it is. You can do something like this:</p>
<pre><code>import inspect
frameinfo = inspect.getframeinfo(inspect.stack()[1][0])
caller_name = frameinfo[2]
file_name = frameinfo[0]
</code></pre>
<p>This, however, will only really work if you are looking for the function or method where the exception was raised, not if you are looking for the class that owns it.</p>
<p>You are probably better off doing something like this:</p>
<pre><code>class MyException(Exception):
pass
# ... somewhere inside a class
raise MyException("Something bad happened in {}".format(self.__class__))
</code></pre>
<p>That way you don't have to write any handling code for your <code>Exception</code> subclass either.</p>
| 0 | 2016-07-28T18:11:23Z | [
"python",
"class",
"exception-handling"
] |
Weird TypeError when using the Datetime/Time module? | 38,643,554 | <p>So I'm new to programming and I had coded this tiny script to turn my internet on from two a.m. to eight a.m. (long story);</p>
<pre><code>import os
import datetime as dt
from time import sleep
def connect():
print("Connecting...")
os.system("netsh wlan connect Sushi")
def disconnect():
print("Disconnecting...")
os.system("netsh wlan disconnect")
def checkcon():
attempt= 0
while os.system("ping google.com") != 0:
print("Unable to connect. Trying again.")
connect()
sleep(attempt)
attempt = attempt + 1
if attempt != 0:
print("Attempt ", str(attempt), " ...")
print("Connected successfully")
def timeformat (hr, min, sec) : #For setting proper datetime parameters.
return (str(hr) + ":" + str(min) + ":" + str(sec))
FMT = '%H:%M:%S'
now = timeformat(dt.datetime.now().time().hour, dt.datetime.now().time().minute, dt.datetime.now().time().second)
twoam = '02:00:00'
eightam = '08:00:00'
def tdelta(a, b = now):
tdel = dt.datetime.strptime(a, FMT) - dt.datetime.strptime(b, FMT)
return tdel.seconds
twoto8 = tdelta(eightam, twoam)
nowto8 = tdelta(eightam)
def main():
if twoto8 >= nowto8:
connect()
checkcon()
print("Your internet has been successfully connected")
x = tdelta(nowto8)
sleep(x)
print("Time's up!")
disconnect()
exit()
else:
print("Not yet!")
disconnect()
x = tdelta(nowto8)
sleep(str(x))
main()
main()
</code></pre>
<p>But whenever I run it, I get this:</p>
<p>line 35, in tdelta
tdel = dt.datetime.strptime(a, FMT) - dt.datetime.strptime(b, FMT)
TypeError: must be str, not int</p>
<p>I don't really understand why, because in the function tdelta, both parameters are strings, and...I don't know. Did I miss something? Do I have to specify something? Or is it just a typo I must have missed?</p>
<p>Also, I think a single glance at my code makes it glaringly obvious that I'm an absolute novice, so if you have any suggestions to improve my code too, I'll be eternally grateful.</p>
<p>I really appreciate any help. :)</p>
<p>EDIT: Here's the full stack trace (as per request):</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Lenovo\Desktop\ShutdownTimer.py", line 58, in <module>
main()
File "C:\Users\Lenovo\Desktop\ShutdownTimer.py", line 54, in main
x = tdelta(nowto8)
File "C:\Users\Lenovo\Desktop\ShutdownTimer.py", line 35, in tdelta
tdel = dt.datetime.strptime(a, FMT) - dt.datetime.strptime(b, FMT)
TypeError: must be str, not int
</code></pre>
| 0 | 2016-07-28T17:56:13Z | 38,643,726 | <p>Lines 46 and 54 (<code>x = tdelta(nowto8)</code>) are calling <code>tdelta</code> with an integer argument, not a string. Change lines 46-47 from</p>
<pre><code>x = tdelta(nowto8)
sleep(x)
</code></pre>
<p>to</p>
<pre><code>sleep(nowto8)
</code></pre>
<p>and lines 54-55 from</p>
<pre><code>x = tdelta(nowto8)
sleep(str(x))
</code></pre>
<p>to</p>
<pre><code>sleep(nowto8)
</code></pre>
<p>as well.</p>
| 0 | 2016-07-28T18:05:30Z | [
"python",
"python-3.x",
"debugging"
] |
JSON TypeError: expected string or buffer | 38,643,555 | <p>I'm trying to store an exception error to json. Even though I'm pretty sure I'm storing a string, it's still giving me a typeerror. </p>
<p>Relevant section of code: </p>
<pre><code>except ConnectionError as e:
s = str(e)
print type(s)
data = json.loads({'error message': s})
print "JSON load succeeded"
</code></pre>
<p>Traceback:</p>
<pre><code><type 'str'>
Traceback (most recent call last):
File "[REDACTED]", line 36, in <module>
ping(SERVERS)
File "[REDACTED]", line 29, in ping
data = json.loads({'error message': s})
File "C:\Python27\Lib\json\__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "C:\Python27\Lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
</code></pre>
<p>This is quite baffling to me. I'd appreciate any help with this matter.</p>
| 0 | 2016-07-28T17:56:17Z | 38,644,429 | <p>You are looking for <code>json.dumps()</code>, not <code>json.loads()</code>. Try this:</p>
<pre><code> data = json.dumps({'error message': s})
</code></pre>
<p><a href="https://docs.python.org/3/library/json.html#json.dumps" rel="nofollow"><code>json.dumps(obj)</code></a>: Serialize <code>obj</code> to a JSON formatted <code>str</code><br>
<a href="https://docs.python.org/3/library/json.html#json.loads" rel="nofollow"><code>json.loads(s)</code></a>: Deserialize s (a <code>str</code> instance containing a JSON document) to a Python object</p>
| 0 | 2016-07-28T18:45:34Z | [
"python",
"json",
"python-2.7"
] |
Calling a private property of a Python instance using a string | 38,643,693 | <p>I'm somewhat new to OOP in Python (and Python in general), and I'm running into an issue when I try to access an instances private property from within one of the methods, and using a string as the property name.</p>
<p>The goal here is to basically have a list of properties that will be displayed (in a <em>key - value</em> format) when the objects getDetails() method is called. And it works fine, as long as all of the properties in the list are not private properties in the object. If all of the properties are <em>not</em> private, then it seems to work fine.</p>
<p>In the below example, you can see I have 3 properties, <code>foo</code>, <code>_bar</code> and <code>__baz</code>. In the <code>TheClass.getDetails()</code> method, if the <code>__baz</code> line is commented out, it works perfectly fine:</p>
<pre class="lang-py prettyprint-override"><code>class TheClass(object):
def __init__( self ):
self.foo = 'One..'
self._bar = 'Two..'
self.__baz = 'Three..'
def getDetails( self ):
display = [
'foo'
,'_bar'
#,'__baz'
]
print "DebugInfo:"
for key in display:
print '{0:<15}: {1:<20}'.format(key, self.__dict__[ key ] or 'N/A')
TheClass().getDetails()
""" Output:
DebugInfo:
foo : One..
_bar : Two..
"""
</code></pre>
<p>However, when I uncomment the <code>__baz</code> entry in the <code>display</code> array, I get an exception thrown:</p>
<pre><code>DebugInfo:
foo : One..
_bar : Two..
Traceback (most recent call last):
File "getattr.py", line 18, in <module>
TheClass().getDetails()
File "getattr.py", line 16, in getDetails
print '{0:<15}: {1:<20}'.format(key, self.__dict__[ key ] or 'N/A')
KeyError: '__baz'
</code></pre>
<p>I tried to change how the property was referenced, switching out the <code>self.__dict__[ key ]</code> with <code>getattr( self, key )</code>, but that just resulted in the same error:</p>
<pre><code>DebugInfo:
foo : One..
_bar : Two..
Traceback (most recent call last):
File "getattr.py", line 18, in <module>
TheClass().getDetails()
File "getattr.py", line 16, in getDetails
print '{0:<15}: {1:<20}'.format( key, getattr( self, key ) or 'N/A')
AttributeError: 'TheClass' object has no attribute '__baz'
</code></pre>
<p>If I just hardcode the properties, then obviously that will work fine:</p>
<pre class="lang-py prettyprint-override"><code>class TheClass(object):
def __init__( self ):
self.foo = 'One..'
self._bar = 'Two..'
self.__baz = 'Three..'
def getDetails( self ):
print "DebugInfo:"
print '{0:<15}: {1:<20}'.format( 'foo', self.foo or 'N/A')
print '{0:<15}: {1:<20}'.format( '_bar', self._bar or 'N/A')
print '{0:<15}: {1:<20}'.format( '__baz', self.__baz or 'N/A')
TheClass().getDetails()
""" Output:
DebugInfo:
foo : One..
_bar : Two..
__baz : Three..
"""
</code></pre>
<p>But I need this to be a bit more dynamic. So does anyone know if a way to get this working?</p>
<p>Thanks!</p>
<p><strong>P.S.</strong> I'm using <strong>Python 2.7.11</strong></p>
| 1 | 2016-07-28T18:03:31Z | 38,643,786 | <p>double underscores invoke python <a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">name-mangling</a>:</p>
<pre><code>>>> class Foo(object):
... def __init__(self):
... self.__private = 1
...
>>> f = Foo()
>>> vars(f)
{'_Foo__private': 1}
</code></pre>
<p>You can see that it changed <code>__property</code> to <code>_<classname>__property</code>.</p>
<p>Generally speaking, the reason python does this mangling is to allow the programmer to avoid conflicts with subclasses classes that might want to define a method with the same name (but not override the method in the base class). So, that's when you <em>should</em> use double-underscore prefixed attributes. If you don't have that situation, then you're probably better off just using single underscores (it's more idiomatic).</p>
| 1 | 2016-07-28T18:08:17Z | [
"python",
"python-2.7",
"oop",
"getter",
"getter-setter"
] |
Calling a private property of a Python instance using a string | 38,643,693 | <p>I'm somewhat new to OOP in Python (and Python in general), and I'm running into an issue when I try to access an instances private property from within one of the methods, and using a string as the property name.</p>
<p>The goal here is to basically have a list of properties that will be displayed (in a <em>key - value</em> format) when the objects getDetails() method is called. And it works fine, as long as all of the properties in the list are not private properties in the object. If all of the properties are <em>not</em> private, then it seems to work fine.</p>
<p>In the below example, you can see I have 3 properties, <code>foo</code>, <code>_bar</code> and <code>__baz</code>. In the <code>TheClass.getDetails()</code> method, if the <code>__baz</code> line is commented out, it works perfectly fine:</p>
<pre class="lang-py prettyprint-override"><code>class TheClass(object):
def __init__( self ):
self.foo = 'One..'
self._bar = 'Two..'
self.__baz = 'Three..'
def getDetails( self ):
display = [
'foo'
,'_bar'
#,'__baz'
]
print "DebugInfo:"
for key in display:
print '{0:<15}: {1:<20}'.format(key, self.__dict__[ key ] or 'N/A')
TheClass().getDetails()
""" Output:
DebugInfo:
foo : One..
_bar : Two..
"""
</code></pre>
<p>However, when I uncomment the <code>__baz</code> entry in the <code>display</code> array, I get an exception thrown:</p>
<pre><code>DebugInfo:
foo : One..
_bar : Two..
Traceback (most recent call last):
File "getattr.py", line 18, in <module>
TheClass().getDetails()
File "getattr.py", line 16, in getDetails
print '{0:<15}: {1:<20}'.format(key, self.__dict__[ key ] or 'N/A')
KeyError: '__baz'
</code></pre>
<p>I tried to change how the property was referenced, switching out the <code>self.__dict__[ key ]</code> with <code>getattr( self, key )</code>, but that just resulted in the same error:</p>
<pre><code>DebugInfo:
foo : One..
_bar : Two..
Traceback (most recent call last):
File "getattr.py", line 18, in <module>
TheClass().getDetails()
File "getattr.py", line 16, in getDetails
print '{0:<15}: {1:<20}'.format( key, getattr( self, key ) or 'N/A')
AttributeError: 'TheClass' object has no attribute '__baz'
</code></pre>
<p>If I just hardcode the properties, then obviously that will work fine:</p>
<pre class="lang-py prettyprint-override"><code>class TheClass(object):
def __init__( self ):
self.foo = 'One..'
self._bar = 'Two..'
self.__baz = 'Three..'
def getDetails( self ):
print "DebugInfo:"
print '{0:<15}: {1:<20}'.format( 'foo', self.foo or 'N/A')
print '{0:<15}: {1:<20}'.format( '_bar', self._bar or 'N/A')
print '{0:<15}: {1:<20}'.format( '__baz', self.__baz or 'N/A')
TheClass().getDetails()
""" Output:
DebugInfo:
foo : One..
_bar : Two..
__baz : Three..
"""
</code></pre>
<p>But I need this to be a bit more dynamic. So does anyone know if a way to get this working?</p>
<p>Thanks!</p>
<p><strong>P.S.</strong> I'm using <strong>Python 2.7.11</strong></p>
| 1 | 2016-07-28T18:03:31Z | 38,643,804 | <p>Attributes with a double underscore (e.g. <code>__foo</code>) are <a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">mangled</a> to make it harder to access them. The rules are as follows:</p>
<blockquote>
<p>Since there is a valid use-case for class-private members (namely to avoid name clashes of names with names defined by subclasses), there is limited support for such a mechanism, called name mangling. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</p>
</blockquote>
<p>Therefore, in the lookup table, you'd need to look for the symbol <code>_TheClass__baz</code> instead of just <code>__baz</code>.</p>
| 2 | 2016-07-28T18:09:03Z | [
"python",
"python-2.7",
"oop",
"getter",
"getter-setter"
] |
How can I randomly populate a list of lists in Python? | 38,643,703 | <p>I have been trying to produce an exploration game, so naturally I started with a world generator. I am stuck, however, on populating my list of biomes. The "biome_map" list is essentially an array that is equal in width and height to whatever size the user requested. Here is the code I have written:</p>
<pre><code>EWbiome_map = [] #produces an empty row that is E_W_Size km wide
for chunk1 in range (1, (E_W_Size + 1)):
EWbiome_map = EWbiome_map + ["empty"]
biome_map = []
for chunk2 in range (1, (N_S_Size + 1)):
biome_map = biome_map + [EWbiome_map]
print ("Map Initialized")
print ("Assigning Biomes...") # produces an empty array
print (biome_map)
Seed1 = Seed
random.seed (Seed)
x = 0
for element in biome_map:
y = 0
for chunk3 in element:
(biome_map[x])[y] = random.choice (biome_list)
y = y + 1
x = x + 1
print ("Biomes Assigned")
print (biome_map)
</code></pre>
<p>The error shows up in the result, where each list is a copy of the last.</p>
<pre><code>Modules Successfully Imported
Biomes Initialized
Map Initialized
Assigning Biomes...
[['empty', 'empty', 'empty'], ['empty', 'empty', 'empty'],['empty', 'empty', 'empty']]
Biomes Assigned
[['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus']]
</code></pre>
| 0 | 2016-07-28T18:04:16Z | 38,643,897 | <p>You are using a reference to the same list <code>EWbiome_map</code> when creating <code>biome_map</code>. Instead do something like:</p>
<pre><code>biome_map = [['empty']*E_W_Size]*N_S_Size
</code></pre>
<p>Your entire code could however be pretty much shortened to:</p>
<pre><code>biome_map = [[random.choice(biome_list) for _ in range(E_W_Size)]
for _ in range(N_S_Size)]
</code></pre>
| 1 | 2016-07-28T18:14:17Z | [
"python",
"python-3.x"
] |
How can I randomly populate a list of lists in Python? | 38,643,703 | <p>I have been trying to produce an exploration game, so naturally I started with a world generator. I am stuck, however, on populating my list of biomes. The "biome_map" list is essentially an array that is equal in width and height to whatever size the user requested. Here is the code I have written:</p>
<pre><code>EWbiome_map = [] #produces an empty row that is E_W_Size km wide
for chunk1 in range (1, (E_W_Size + 1)):
EWbiome_map = EWbiome_map + ["empty"]
biome_map = []
for chunk2 in range (1, (N_S_Size + 1)):
biome_map = biome_map + [EWbiome_map]
print ("Map Initialized")
print ("Assigning Biomes...") # produces an empty array
print (biome_map)
Seed1 = Seed
random.seed (Seed)
x = 0
for element in biome_map:
y = 0
for chunk3 in element:
(biome_map[x])[y] = random.choice (biome_list)
y = y + 1
x = x + 1
print ("Biomes Assigned")
print (biome_map)
</code></pre>
<p>The error shows up in the result, where each list is a copy of the last.</p>
<pre><code>Modules Successfully Imported
Biomes Initialized
Map Initialized
Assigning Biomes...
[['empty', 'empty', 'empty'], ['empty', 'empty', 'empty'],['empty', 'empty', 'empty']]
Biomes Assigned
[['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus']]
</code></pre>
| 0 | 2016-07-28T18:04:16Z | 38,643,931 | <p>Your problem is the line</p>
<pre><code>biome_map = biome_map + [EWbiome_map]
</code></pre>
<p>You are making <code>biome_map</code> to be a list where each element is the list <code>EWbiome_map</code>. Note that each element is not a <em>copy</em> of the list, it is the list itself. To correct this you want a <em>copy</em> of the <code>EWbiome_map</code> list. One way to do this is</p>
<pre><code>biome_map = biome_map + [list(EWbiome_map)]
</code></pre>
<p>Another way is</p>
<pre><code>biome_map = biome_map + [EWbiome_map[:]]
</code></pre>
<p>There are also other ways, but these should be clear enough.</p>
<hr>
<p>To clarify what is happening in your code, remember that everything in Python is an object, and an object is handled internally as a pointer to where the data in the object is stored. When you add <code>EWbiome_map</code> to your <code>biome_map</code> list, that list actually stores a pointer to <code>EWbiome_map</code> and not the data itself. So when you change one occurrence of that data it is changed for all references everywhere.</p>
<p>Doing copies by <code>list(EWbiome_map)</code> or <code>EWbiome_map[:]</code> makes a new list with its own data and pointer. Now you can change one of those without affecting the others.</p>
<p><strong>TL;DR</strong> Variables are handled differently in Python than in many other languages, since everything is an object. Pointers are used more extensively than you think, but the implementation is (usually) hidden from you.</p>
| 0 | 2016-07-28T18:16:17Z | [
"python",
"python-3.x"
] |
How can I randomly populate a list of lists in Python? | 38,643,703 | <p>I have been trying to produce an exploration game, so naturally I started with a world generator. I am stuck, however, on populating my list of biomes. The "biome_map" list is essentially an array that is equal in width and height to whatever size the user requested. Here is the code I have written:</p>
<pre><code>EWbiome_map = [] #produces an empty row that is E_W_Size km wide
for chunk1 in range (1, (E_W_Size + 1)):
EWbiome_map = EWbiome_map + ["empty"]
biome_map = []
for chunk2 in range (1, (N_S_Size + 1)):
biome_map = biome_map + [EWbiome_map]
print ("Map Initialized")
print ("Assigning Biomes...") # produces an empty array
print (biome_map)
Seed1 = Seed
random.seed (Seed)
x = 0
for element in biome_map:
y = 0
for chunk3 in element:
(biome_map[x])[y] = random.choice (biome_list)
y = y + 1
x = x + 1
print ("Biomes Assigned")
print (biome_map)
</code></pre>
<p>The error shows up in the result, where each list is a copy of the last.</p>
<pre><code>Modules Successfully Imported
Biomes Initialized
Map Initialized
Assigning Biomes...
[['empty', 'empty', 'empty'], ['empty', 'empty', 'empty'],['empty', 'empty', 'empty']]
Biomes Assigned
[['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus'], ['tundra', 'tundra', 'plateaus']]
</code></pre>
| 0 | 2016-07-28T18:04:16Z | 38,644,308 | <p>As other answers have mentioned, you are referencing the same EWbiome_map each time, which causes you to alter it in all rows of biome_map.</p>
<p>However, your code also does a lot of unnecessary things, like initializing an empty map and iterating with 'chunk3' and 'element' instead of just using x and y directly.</p>
<p>Faster would be to use numpy and replace <strong>all</strong> of that code with:</p>
<pre><code>import numpy as np
biome_map = np.random.choice(biome_list,size=(N_S_Size,E_W_Size))
</code></pre>
| 0 | 2016-07-28T18:38:02Z | [
"python",
"python-3.x"
] |
How to get results from python unit tests in Django using custom settings | 38,643,736 | <p>I am trying to generate a report from the unit tests in Django.
For running the tests and retrieving the results I am using a custom TestResult class, which works fine:</p>
<pre><code>results = TestResult()
loader = unittest.TestLoader()
suites = loader.discover('test_folder')
for suite in suites:
suite(results)
</code></pre>
<p>My only issue is that I can't override the settings file to use in memory database. I decorated my test cases with <strong>override_settings</strong> from <strong>django.test</strong>, which works for me only in the command line. When I run it using the loader it uses the my_app.settings file, however, it looks like it is overridden:</p>
<pre><code>>>> from django.conf import settings
>>> settings.DATABASES
{'default': {'TEST_CHARSET': 'UTF8', 'NAME': ':memory:', 'ENGINE': 'django.db.backends.sqlite3', 'TEST_NAME': ':memory:'}}
</code></pre>
<p>I also created my own override_settings file to override any function in my project and the result is the same. I tried to override the <strong>DJANGO_SETTINGS_MODULE</strong> in os.environ, but still having the same issue. Maybe I am missing some django.setup() function to load the settings file again.</p>
<p>I want to be able to: </p>
<ul>
<li>get results from the unit tests</li>
<li>use custom settings</li>
<li>load specific test module (just like with the <strong>loader.discover(start_dir)</strong>)</li>
</ul>
<p>TestCase example:</p>
<pre><code>from rest_framework import status
from rest_framework.test import APITestCase
class TestCase(APITestCase):
fixtures = ('dump',)
url = '/api/post'
def test_post(self):
# returns post from existing database not from fixture
post = Post.objects.get(pk=3)
data = {
'name': 'test',
'post': post.content
}
response = self.client.post(self.url + '/1', data=data)
self.assertEquals(response.status_code, status.HTTP_201_CREATED)
</code></pre>
<p>Using:</p>
<ul>
<li>Python 3.4 </li>
<li>Django 1.9</li>
</ul>
<p>Thanks for your help.</p>
| 0 | 2016-07-28T18:05:51Z | 38,643,865 | <p>Well, if you are using command <code>./manage.py test</code>, then you could just parse the commandline argument and see if it's a <code>test</code>, then have an <code>if</code> statement in your <code>settings.py</code>:</p>
<pre><code>if 'test' in sys.argv:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3'
}
}
</code></pre>
| 0 | 2016-07-28T18:12:25Z | [
"python",
"django",
"unit-testing",
"python-unittest"
] |
Reading two files, a csv and xls, and bring columns from csv to xls based on subnet(csv)/ip(xls) match | 38,643,756 | <p>I am trying to make a program that reads two files, a csv and xls, and use some python logic that parses and transfers certain columns from the csv to xls, based on a ip/subnet match.</p>
<p>The csv has subnet, mask, and cidr, in columns B,C, and D. (csv has about 10k rows, not all will be used.)</p>
<p>The xls has ip address in column C. (xls has 5009 rows), each ip address corresponds to the subnet it is in.</p>
<p>For example, this info in csv colB,C,D:</p>
<pre><code>subnet mask cidr
10.120.10.0 255.255.255.0 /24
</code></pre>
<p>corresponds to, this info in xls colB(these ip's aren't consecutive. they're on random rows.)</p>
<pre><code>10.120.10.12
10.120.10.13
</code></pre>
<p>The columns in csv I need to port to the xls file are G, H, I, K, and M.</p>
<p>I'm trying to find a way to match each ip in the xls file to a subnet in the csv, and bring the data in csv columns G, H, I, K, and M to the row of the corresponding ip in the xls.</p>
<p>Sorry if this is confusing, it's a confusing problem to solve and i'm just a beginner to python.</p>
| 0 | 2016-07-28T18:06:59Z | 38,644,396 | <p>To begin with you could try this script </p>
<pre><code>import openpyxl
from xlrd import open_workbook
import csv
#To read XLX
book = open_workbook('IPADR.xlsx')
sheet = book.sheet_by_index(0)
keys = [sheet.cell(row_index, 0).value for row_index in xrange(sheet.nrows)]
print keys
# To read Csv
exampleFile = open('sample.csv')
exampleReader = csv.reader(exampleFile)
for row in exampleReader:
print row[0]
</code></pre>
| 0 | 2016-07-28T18:43:17Z | [
"python",
"excel",
"python-3.x",
"csv",
"subnet"
] |
How to scrape GIS coordinates from a map (e.g. Pokevision) using python and selenium? | 38,643,991 | <p>I'd like to scrape PokemonVision so that I can get all the longitude and latitude coordinates of the Pokemon being displayed. </p>
<p>The URL of the webpage contains the longitude and latitude of the flag marker . For example the following url contains 39.95142302373031,-75.17986178398132: <a href="https://pokevision.com/#/@39.95142302373031,-75.17986178398132" rel="nofollow">https://pokevision.com/#/@39.95142302373031,-75.17986178398132</a></p>
<p>In the <a href="http://codepen.io/chriscruz/pen/kXpKKq" rel="nofollow">source code</a>, the flag marker has the following div: </p>
<pre><code><div class="leaflet-marker-pane"><img class="leaflet-marker-icon leaflet-zoom-animated leaflet-clickable" src="/asset/image/leaflet//marker-icon.png" style="margin-left: -12px; margin-top: -41px; width: 25px; height: 41px; transform: translate(324px, 135px); z-index: 135;" tabindex="0"/>
</code></pre>
<p>I've also noticed that every pokemon being displayed has a div like the following: </p>
<pre><code><div class="leaflet-marker-icon-wrapper leaflet-zoom-animated leaflet-clickable" style="margin-left: 0px; margin-top: 0px; transform: translate(215px, 113px); z-index: 113;" tabindex="0"><img class="leaflet-marker-icon " src="//ugc.pokevision.com/images/pokemon/116.png" style="margin-left: 0px; margin-top: 0px; width: 48px; height: 48px;"/><span class="leaflet-marker-iconlabel home-map-label" style="margin-left: 10px; margin-top: 26px;">08:15</span></div>
</code></pre>
<p>I'm assuming the position of the pokemon and the flag marker can be found in the div, particularly right after the text "transform: translate(".</p>
<p>Considering that we know the both the pixel position and the longitude and latitude of the flag, as well as the pixel position of the pokemon, I believe I should be able to get the longitude and latitude of the pokemon. </p>
<p>For example, the flag marker is always at 324px, 135px and we know that the gis coordinates of the flag marker is 39.95142302373031,-75.17986178398132. We also know the coordinates of a pokemon (e.g. 215px, 113px). However, I can't seems to figure out how to get the longitude and latitude of the pokemon.</p>
| 1 | 2016-07-28T18:20:37Z | 38,650,073 | <p>If you click the map the URL updates with the coordinates at that point. You can find all visible pokemon on the map, click them, then parse the coordinates from the updated URL. Example code:</p>
<pre><code>from pprint import pprint as pp
from selenium import webdriver
from selenium.common.exceptions import WebDriverException
poke_names = {
21: "Spearow",
23: "Ekans",
39: "Jigglypuff",
98: "Krabby",
129: "Pidgey",
}
driver = webdriver.Chrome()
try:
driver.get("https://pokevision.com/#/@39.95142302373031,-75.17986178398132")
# Zoom out once
zoom_css = "a.leaflet-control-zoom-out"
driver.find_element_by_css_selector(zoom_css).click()
# Find all pokemon in the source
poke_css = "div.leaflet-marker-pane div.leaflet-marker-icon-wrapper"
pokemon = driver.find_elements_by_css_selector(poke_css)
print("Found {0} pokemon".format(len(pokemon)))
# Filter for only the ones that are displayed on screen
on_screen_pokemon = [p for p in pokemon if p.is_displayed()]
print("There are {0} pokemon on screen".format(len(on_screen_pokemon)))
# Click each pokemon, which moves the marker and thus updates the URL with
# the coords of that pokemon
coords = list()
for pokemon in on_screen_pokemon:
try:
pokemon.click()
# Example URL: https://ugc.pokevision.com/images/pokemon/21.png
img_url = pokemon.find_element_by_css_selector('img').get_attribute("src")
img_num = int(img_url.split('.png')[0].split('/')[-1])
except WebDriverException:
# Some are hidden by other elements, move on
continue
else:
# Example
# https://pokevision.com/#/@39.95142302373031,-75.17986178398132
poke_coords = driver.current_url.split('#/@')[1].split(',')
poke_name = poke_names[img_num] if img_num in poke_names else "Unknown"
coords.append((poke_name, poke_coords))
print("Found coordinates for {0} pokemon".format(len(coords)))
for poke_name, poke_coords in coords:
print("Found {0} pokemon at coordinates {1}".format(poke_name, poke_coords))
finally:
driver.quit()
</code></pre>
<p>Output:</p>
<pre><code>(.venv35) â stackoverflow python pokefinder.py
Found 103 pokemon
There are 85 pokemon on screen
Found coordinates for 27 pokemon
Found Unknown pokemon at coordinates ['39.95481970299595', '-75.18772602081299']
Found Spearow pokemon at coordinates ['39.952878764070974', '-75.18424987792967']
Found Spearow pokemon at coordinates ['39.95625069896077', '-75.18845558166504']
Found Unknown pokemon at coordinates ['39.95685927437669', '-75.18216848373413']
Found Unknown pokemon at coordinates ['39.95174378273782', '-75.17852067947388']
Found Unknown pokemon at coordinates ['39.9509706687274', '-75.17377853393555']
Found Unknown pokemon at coordinates ['39.95241819420643', '-75.17523765563965']
Found Unknown pokemon at coordinates ['39.95409596949794', '-75.17422914505005']
Found Unknown pokemon at coordinates ['39.95131610372689', '-75.17277002334595']
Found Unknown pokemon at coordinates ['39.95276362189558', '-75.17313480377197']
Found Unknown pokemon at coordinates ['39.95254978591276', '-75.17257690429688']
Found Unknown pokemon at coordinates ['39.95319129185564', '-75.17094612121582']
Found Unknown pokemon at coordinates ['39.95488549657055', '-75.17195463180542']
Found Unknown pokemon at coordinates ['39.95488549657055', '-75.17096757888794']
Found Unknown pokemon at coordinates ['39.9571224404468', '-75.17251253128052']
Found Unknown pokemon at coordinates ['39.95633293919831', '-75.17088174819946']
Found Spearow pokemon at coordinates ['39.94958891128449', '-75.1890778541565']
Found Pidgey pokemon at coordinates ['39.94958891128449', '-75.18671751022339']
Found Unknown pokemon at coordinates ['39.94769717428357', '-75.18306970596313']
Found Unknown pokemon at coordinates ['39.948174225938324', '-75.18070936203003']
Found Unknown pokemon at coordinates ['39.94458803200817', '-75.17658948898315']
Found Unknown pokemon at coordinates ['39.94689111392826', '-75.174400806427']
Found Unknown pokemon at coordinates ['39.948322275775425', '-75.1739501953125']
Found Ekans pokemon at coordinates ['39.94749977262573', '-75.17088174819946']
Found Unknown pokemon at coordinates ['39.94842097548884', '-75.17317771911621']
Found Unknown pokemon at coordinates ['39.94934216594682', '-75.17180442810059']
Found Unknown pokemon at coordinates ['39.948075525868894', '-75.17107486724852']
</code></pre>
<p>This code is problematic for a couple reasons, chief among them being the overly broad and careless exception handling. You should, however, be able to adapt the concept into a more robust solution.</p>
| 2 | 2016-07-29T03:28:56Z | [
"python",
"html",
"selenium",
"gis",
"pokemon-go"
] |
wxPython - how to stop a wx.Control from losing focus on arrow key press? | 38,644,013 | <pre><code>import wx
class Control(wx.Control):
def __init__(self, parent):
wx.Control.__init__(self, parent)
self.Bind(wx.EVT_CHAR, self.OnKey)
self.Bind(wx.EVT_KEY_DOWN, self.OnKey)
self.Bind(wx.EVT_KEY_DOWN, self.OnKey)
self.Bind(wx.EVT_LEFT_DOWN, self.OnMouseClick)
def OnKey(self, event):
print("key pressed")
event.Skip()
def OnMouseClick(self, event):
self.SetFocus()
print("has focus")
event.Skip()
class Frame(wx.Frame):
def __init__(self, parent=None):
wx.Frame.__init__(self, parent)
panel = wx.Panel(self)
sizer = wx.BoxSizer(wx.HORIZONTAL)
radio = wx.RadioButton(panel, label="Radio button")
button = wx.Button(panel, label="Button")
control = Control(panel)
sizer.Add(radio, 0, wx.ALL, 5)
sizer.Add(button, 0, wx.ALL, 5)
sizer.Add(control, 0, wx.ALL, 5)
panel.SetSizer(sizer)
self.Show()
if __name__ == "__main__":
app = wx.App()
frame = Frame()
app.MainLoop()
</code></pre>
<p>A pretty minimal example.</p>
<p>by clicking on the control, sets the focus to control. However, even though binding key presses to a function OnKey, an arrow key press changes focus to another button/widget.</p>
<p>Is there a method similar to AcceptsFocusFromKeyboard(self):</p>
<blockquote>
<p>Description: Can this window be given focus by keyboard navigation? if not, the only way to give it focus (provided it accepts it at all) is to click it. </p>
</blockquote>
<p>Except,</p>
<h1>A method where my control doesn't lose focus from keyboard navigation</h1>
| 0 | 2016-07-28T18:21:38Z | 38,644,119 | <p>Both of you event handlers are missing the call to:</p>
<pre><code>event.Skip()
</code></pre>
<p>Those events are non-wxCommandEvents and in order for them to work properly, OS has to perform whatever needs to be performed. That's what Skip() is for. Otherwise the event will be eaten by the control.</p>
<p>Please check the documentation for that method, add it and see if that fixes the issue.</p>
<p>Also, out of curiosity, what do you expect to happen when the user presses the arrow key?</p>
<p>If you want the focus to stay when the arrow key is pressed, try to filter the event by event.GetKeyCode() (check the docs for the proper function name) and don't call event.Skip() in this case.</p>
<p>If that doesn't work try to catch wx.EVT_CHAR_HOOK event.</p>
<p>If even this doesn't work, override FilterEvent().</p>
<p>Also, the structure of you GUI is very weird. Try to get the RAD tool (wxGlade, wxSmith, etc), build it there and look at the result. Or check the demo folder in wxPython distribution (available as a different download) to see how to make the normal layout in wxPython.</p>
<p>It is also possible that by fixing the GUI you will fix the arrow key issue.</p>
<p>Try this code instead:</p>
<pre><code>import wx
class Frame(wx.Frame):
def __init__(self, parent=None):
wx.Frame.__init__(self, parent)
panel = wx.Panel(self)
sizer = wx.BoxSizer(wx.HORIZONTAL)
radio = wx.RadioButton(panel, label="Radio button")
button = wx.Button(panel, label="Button")
self.Bind(wx.EVT_CHAR, self.OnChar)
radio.Bind(wx.EVT_KEY_DOWN, self.OnKeyDown)
button.Bind(wx.EVT_KEY_DOWN, self.OnKeyDown)
self.Bind(wx.EVT_LEFT_DOWN, self.OnMouseClick)
sizer.Add(radio, 0, wx.ALL, 5)
sizer.Add(button, 0, wx.ALL, 5)
sizer.Add(control, 0, wx.ALL, 5)
panel.SetSizer(sizer)
self.Show()
def OnKeyDown(self, event):
print("key pressed")
if not event.GetKeyPress() == wx.WXK_LEFT_ARROW: # Check the function name. I'm writing from memory
event.Skip()
def OnChar(self, event):
print("Character key pressed on the panel")
event.Skip()
def OnMouseClick(self, event):
self.SetFocus()
print("has focus")
event.Skip()
if __name__ == "__main__":
app = wx.App()
frame = Frame()
app.MainLoop()
</code></pre>
<p>You should get the idea. Call event.Skip() only if the key pressed is not an arrow key.</p>
| 0 | 2016-07-28T18:27:15Z | [
"python",
"wxpython",
"wxwidgets",
"wx"
] |
Sort list of dictionaries based on keys | 38,644,055 | <p>I want to sort a list of dictionaries based on the presence of keys. Let's say I have a list of keys [key2,key3,key1], I need to order the list in such a way the dictionary with key2 should come first, key3 should come second and with key1 last.</p>
<p>I saw this answer (<a href="http://stackoverflow.com/questions/21446278/sort-python-list-of-dictionaries-by-key-if-key-exists">Sort python list of dictionaries by key if key exists</a>) but it refers to only one key</p>
<p>The sorting is not based on value of the 'key'. It depends on the presence of the key and that too with a predefined list of keys.</p>
| 0 | 2016-07-28T18:23:20Z | 38,644,377 | <p>How about something like this</p>
<pre><code>def sort_key(dict_item, sort_list):
key_idx = [sort_list.index(key) for key in dict_item.iterkeys() if key in sort_list]
if not key_idx:
return len(sort_list)
return min(key_idx)
dict_list.sort(key=lambda x: sort_key(x, sort_list))
</code></pre>
<p>If the a given dictionary in the list contains more than one of the keys in the sorting list, it will use the one with the lowest index. If none of the keys are present in the sorting list, the dictionary is sent to the end of the list.</p>
<p>Dictionaries that contain the same "best" key (i.e. lowest index) are considered equal in terms of order. If this is a problem, it wouldn't be too hard to have the <code>sort_key</code> function consider all the keys rather than just the best.
To do that, simply return the whole <code>key_idx</code> instead of <code>min(key_idx)</code> and instead of <code>len(sort_list)</code> return <code>[len(sort_list)]</code></p>
| 0 | 2016-07-28T18:41:58Z | [
"python",
"dictionary"
] |
Sort list of dictionaries based on keys | 38,644,055 | <p>I want to sort a list of dictionaries based on the presence of keys. Let's say I have a list of keys [key2,key3,key1], I need to order the list in such a way the dictionary with key2 should come first, key3 should come second and with key1 last.</p>
<p>I saw this answer (<a href="http://stackoverflow.com/questions/21446278/sort-python-list-of-dictionaries-by-key-if-key-exists">Sort python list of dictionaries by key if key exists</a>) but it refers to only one key</p>
<p>The sorting is not based on value of the 'key'. It depends on the presence of the key and that too with a predefined list of keys.</p>
| 0 | 2016-07-28T18:23:20Z | 38,644,495 | <p>I'd do this with:</p>
<pre><code>sorted_list = sorted(dict_list, key = lambda d: next((i for (i, k) in enumerate(key_list) if k in d), len(key_list) + 1))
</code></pre>
<p>That uses a generator expression to find the index in the key list of the first key that's in each dictionary, then use that value as the sort key, with dicts that contain none of the keys getting <code>len(key_list) + 1</code> as their sort key so they get sorted to the end.</p>
| 0 | 2016-07-28T18:49:13Z | [
"python",
"dictionary"
] |
Sort list of dictionaries based on keys | 38,644,055 | <p>I want to sort a list of dictionaries based on the presence of keys. Let's say I have a list of keys [key2,key3,key1], I need to order the list in such a way the dictionary with key2 should come first, key3 should come second and with key1 last.</p>
<p>I saw this answer (<a href="http://stackoverflow.com/questions/21446278/sort-python-list-of-dictionaries-by-key-if-key-exists">Sort python list of dictionaries by key if key exists</a>) but it refers to only one key</p>
<p>The sorting is not based on value of the 'key'. It depends on the presence of the key and that too with a predefined list of keys.</p>
| 0 | 2016-07-28T18:23:20Z | 38,644,522 | <p>Just use <code>sorted</code> using a list like <code>[key1 in dict, key2 in dict, ...]</code> as the key to sort by. Remember to reverse the result, since <code>True</code> (i.e. key is in dict) is sorted after <code>False</code>.</p>
<pre><code>>>> dicts = [{1:2, 3:4}, {3:4}, {5:6, 7:8}]
>>> keys = [5, 3, 1]
>>> sorted(dicts, key=lambda d: [k in d for k in keys], reverse=True)
[{5: 6, 7: 8}, {1: 2, 3: 4}, {3: 4}]
</code></pre>
<p>This is using <em>all</em> the keys to break ties, i.e. in above example, there are two dicts that have the key <code>3</code>, but one also has the key <code>1</code>, so this one is sorted second.</p>
| 1 | 2016-07-28T18:50:59Z | [
"python",
"dictionary"
] |
How to subset a dataframe and resolve the SettingWithCopy warning in Python? | 38,644,073 | <p>I've read an Excel sheet of survey responses into a dataframe in a Python 3 Jupyter notebook, and want to remove rows where the individuals are in one particular program. So I've subset from dataframe 'df' to a new dataframe 'dfgeneral' using .loc .</p>
<pre><code>notnurse = df['Program Code'] != 'NSG'
dfgeneral = df.loc[notnurse,:]
</code></pre>
<p>I then want to map labels (I.e. Satisfied, Not Satisfied) to the codes that were used to represent them, and find the number of respondents who gave each response. Several questions use the same scale, so I looped through them:</p>
<pre><code>q5list = ['Q5_1','Q5_2','Q5_3','Q5_4','Q5_5','Q5_6']
scale5_dict = {1:'Very satisfied',2:'Satisfied',3:'Neutral',
4:'Somewhat dissatisfied',5:'Not satisfied at all',
np.NaN:'No Response'}
for i in q5list:
dfgeneral[i] = df[i].map(scale5_dict)
print(dfgeneral[i].value_counts(dropna=False))
</code></pre>
<p>In the output, I get the SettingWithCopy warning:</p>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>I used .loc to create dfgeneral; is this a false positive, or what change should I make? Thank you for your help.</p>
| 0 | 2016-07-28T18:24:26Z | 38,644,252 | <pre><code>dfgeneral = df.loc[notnurse,:]
</code></pre>
<p>This line (second line) takes a slice of the DataFrame and assigns it to a variable. When you want to manipulate that variable, you see the warning (A value is trying to be set on a copy of a slice from a DataFrame). </p>
<p>Change that line to:</p>
<pre><code>dfgeneral = df.loc[notnurse, :].copy()
</code></pre>
| 2 | 2016-07-28T18:35:00Z | [
"python",
"pandas",
"dataframe",
"subset"
] |
Create a bar graph using datetimes | 38,644,150 | <p>I am using matplotlib and pyplot to create some graphs from a CSV file. I can create line graphs no problem, but I am having a lot of trouble creating a bar graph.</p>
<p>I referred to this post <a href="http://stackoverflow.com/questions/5902371/matplotlib-bar-chart-with-dates">matplotlib bar chart with dates</a> among several others that seemed like they should easily accomplish my task, but I can't get it to work with my list of datetimes.</p>
<p>Running the exact code from the above post generates the expected graph, but when I swap our their x and y values for my own from my CSV file:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib
import numpy as np
from datetime import datetime
import csv
columns="YEAR,MONTH,DAY,HOUR,PREC,PET,Q,UZTWC,UZFWC,LZTWC,LZFPC,LZFSC,ADIMC,AET"
data_file="FFANA_000.csv"
list_of_datetimes = []
skipped_header = False
with open(data_file, 'rt') as f:
reader = csv.reader(f, delimiter=',', quoting=csv.QUOTE_NONE)
for row in reader:
if skipped_header:
date_string = "%s/%s/%s %s" % (row[0].strip(), row[1].strip(), row[2].strip(), row[3].strip())
dt = datetime.strptime(date_string, "%Y/%m/%d %H")
list_of_datetimes.append(dt)
skipped_header = True
UZTWC = np.genfromtxt(data_file, delimiter=',', names=columns, usecols=("UZTWC"))
x = list_of_datetimes
y = UZTWC
ax = plt.subplot(111)
ax.bar(x, y, width=10)
ax.xaxis_date()
plt.show()
</code></pre>
<p>Running this gives the error:</p>
<pre><code>Traceback (most recent call last):
File "graph.py", line 151, in <module>
ax.bar(x, y, width=10)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\__init__.py", line 1812, in inner
return func(ax, *args, **kwargs)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\axes\_axes.py", line 2118, in bar
if h < 0:
TypeError: unorderable types: numpy.ndarray() < int()
</code></pre>
<p>When I run the datetime numpy conversion that is necessary for plotting my line graphs:</p>
<pre><code>list_of_datetimes = matplotlib.dates.date2num(list_of_datetimes)
</code></pre>
<p>I get the same error.</p>
<p>Could anyone offer some insight?</p>
<p>excerpt from FFANA_000.csv:</p>
<pre><code>%YEAR,MO,DAY,HR,PREC(MM/DT),ET(MM/DT),Q(CMS), UZTWC(MM),UZFWC(MM),LZTWC(MM),LZFPC(MM),LZFSC(MM),ADIMC(MM), ET(MM/DT)
2012, 5, 1, 0, 0.000, 1.250, 0.003, 2.928, 0.000, 3.335, 4.806, 0.000, 6.669, 1.042
2012, 5, 1, 6, 0.000, 1.250, 0.003, 2.449, 0.000, 3.156, 4.798, 0.000, 6.312, 0.987
2012, 5, 1, 12, 0.000, 1.250, 0.003, 2.048, 0.000, 2.970, 4.789, 0.000, 5.940, 0.929
2012, 5, 1, 18, 0.000, 1.250, 0.003, 1.713, 0.000, 2.782, 4.781, 0.000, 5.564, 0.869
2012, 5, 2, 0, 0.000, 1.250, 0.003, 1.433, 0.000, 2.596, 4.772, 0.000, 5.192, 0.809
2012, 5, 2, 6, 0.000, 1.250, 0.003, 1.199, 0.000, 2.414, 4.764, 0.000, 4.829, 0.750
2012, 5, 2, 12, 0.000, 1.250, 0.003, 1.003, 0.000, 2.239, 4.756, 0.000, 4.478, 0.693
2012, 5, 2, 18, 0.000, 1.250, 0.003, 0.839, 0.000, 2.072, 4.747, 0.000, 4.144, 0.638
2012, 5, 3, 0, 0.000, 1.250, 0.003, 0.702,
</code></pre>
| 3 | 2016-07-28T18:28:55Z | 38,647,731 | <p>I could not fully reproduce your problem with your data and code. I get </p>
<pre><code>> UZTWC = np.genfromtxt(data_file, delimiter=';', names=columns,
> usecols=("UZTWC")) File
> "C:\Python34-64bit\lib\site-packages\numpy\lib\npyio.py", line 1870,
> in genfromtxt
> output = np.array(data, dtype) ValueError: could not convert string to float: b'UZTWC(MM)'
</code></pre>
<p>But try changing <code>UZTWC = np.genfromtxt(...)</code> to</p>
<pre><code>UZTWC = np.genfromtxt(data_file, delimiter=',', usecols=(7), skip_header=1)
</code></pre>
<p>and you should get a graph. The problem is that for some reason your numpy array is a made of strings and not floats.</p>
| 0 | 2016-07-28T22:25:14Z | [
"python",
"numpy",
"matplotlib"
] |
merge python dictionaries when a key has a certain value | 38,644,179 | <p>I have a list of python dictionaries, looking like this:</p>
<pre><code>[{key1: value1, key2: array1}, {key1: value2, key2: array2}, {key1: value3, key3: array3},...]
</code></pre>
<p>In my case, some of these dictionaries have the same value for key1, for example, value1 = value3. How can I end up getting an array of dictionaries that will look like this?</p>
<pre><code> [{key1: value1, key2: array1+array3}, {key1: value2, key2: array2},...]
</code></pre>
<p>where "array1+array3" is a single array with appended elements of the original array1 and array3 arrays?</p>
<p>edit: this is not merging by key; each dictionary in the array has strictly the same structure: {key1: "some_string", key2: [an array]} - my problem is that I wish to "merge" them on the "some_string", so the [an array], the value of key2, will be concatenated with arrays from other dictionaries in the list that have the same "some_string" as value of key1.</p>
| -1 | 2016-07-28T18:30:44Z | 38,644,562 | <p>The best way to do what you want is to re-structure your data, at least temporarily. If you were to use a dictionary that mapped from <code>value</code> to <code>array</code> (without the fixed keys), you could put all the data into a single place and efficiently detect when duplicate values occur so that you can merge the values together. If need be, you can turn the single dictionary back into a list of smaller dictionaries afterwards.</p>
<p>Try something like this:</p>
<pre><code>merged = {}
for d in list_of_dicts:
if d["key1"] not in merged:
merged[d["key1"]] = d["key2"]
else:
merged[d["key1"]] += d["key2"]
# if necessary, put back into a list of dicts:
merged_list = [{"key1": k, "key2": v} for k, v in merged.items()]
</code></pre>
<p>The order of the list at the end will be arbitrary. If you want to preserve the order of the dicts in the original list, you could use an <code>OrderedDict</code> for <code>merged</code>.</p>
| 0 | 2016-07-28T18:53:18Z | [
"python",
"dictionary"
] |
merge python dictionaries when a key has a certain value | 38,644,179 | <p>I have a list of python dictionaries, looking like this:</p>
<pre><code>[{key1: value1, key2: array1}, {key1: value2, key2: array2}, {key1: value3, key3: array3},...]
</code></pre>
<p>In my case, some of these dictionaries have the same value for key1, for example, value1 = value3. How can I end up getting an array of dictionaries that will look like this?</p>
<pre><code> [{key1: value1, key2: array1+array3}, {key1: value2, key2: array2},...]
</code></pre>
<p>where "array1+array3" is a single array with appended elements of the original array1 and array3 arrays?</p>
<p>edit: this is not merging by key; each dictionary in the array has strictly the same structure: {key1: "some_string", key2: [an array]} - my problem is that I wish to "merge" them on the "some_string", so the [an array], the value of key2, will be concatenated with arrays from other dictionaries in the list that have the same "some_string" as value of key1.</p>
| -1 | 2016-07-28T18:30:44Z | 38,644,853 | <p>Since you want to group by <code>key1</code>, you can use <code>itertools.groupby()</code>.</p>
<pre><code>from operator import itemgetter
from itertools import groupby
list_of_dicts = [{'key1': 1, 'key2': [1, 2]},
{'key1': 2, 'key2': [3, 4]},
{'key1': 1, 'key2': [5, 6]}]
grouped = groupby(sorted(list_of_dicts, key=itemgetter('key1')),
key=itemgetter('key1'))
result = [{'key1': k,
'key2': [elt for item in g for elt in item['key2']]}
for k, g in grouped]
print(result)
</code></pre>
<p>Output:</p>
<pre><code>[{'key2': [1, 2, 5, 6], 'key1': 1}, {'key2': [3, 4], 'key1': 2}]
</code></pre>
| 0 | 2016-07-28T19:09:43Z | [
"python",
"dictionary"
] |
Insert table data from website into table on my own website using Python and Beautiful Soup | 38,644,251 | <p>I wrote some code that grabs the numbers I need from <a href="http://abri.une.edu.au/online/cgi-bin/i4.dll?1=2021292A&2=2420&3=56&5=2B3C2B3C3A&6=59585A242621582623&9=5251595A" rel="nofollow">this website</a>, but I don't know what to do next. </p>
<p>It grabs the numbers from the table at the bottom. The ones under calving ease, birth weight, weaning weight, yearling weight, milk and total maternal.</p>
<pre><code>#!/usr/bin/python
import urllib2
from bs4 import BeautifulSoup
import pyperclip
def getPageData(url):
if not ('abri.une.edu.au' in url):
return -1
webpage = urllib2.urlopen(url).read()
soup = BeautifulSoup(webpage, "html.parser")
# This finds the epd tree and saves it as a searchable list
pedTreeTable = soup.find('table', {'class':'TablesEBVBox'})
# This puts all of the epds into a list.
# it looks for anything in pedTreeTable with an td tag.
pageData = pedTreeTable.findAll('td')
pageData.pop(7)
return pageData
def createPedigree(animalPageData):
''' make animalPageData much more useful. Strip the text out and put it in a dict.'''
animals = []
for animal in animalPageData:
animals.append(animal.text)
prettyPedigree = {
'calving_ease' : animals[18],
'birth_weight' : animals[19],
'wean_weight' : animals[20],
'year_weight' : animals[21],
'milk' : animals[22],
'total_mat' : animals[23]
}
for animalKey in prettyPedigree:
if animalKey != 'year_weight' and animalKey != 'dam':
prettyPedigree[animalKey] = stripRegNumber(prettyPedigree[animalKey])
return prettyPedigree
def stripRegNumber(animal):
'''returns the animal with its registration number stripped'''
lAnimal = animal.split()
strippedAnimal = ""
for word in lAnimal:
if not word.isdigit():
strippedAnimal += word + " "
return strippedAnimal
def prettify(pedigree):
''' Takes the pedigree and prints it out in a usable format '''
s = ''
pedString = ""
# this is also ugly, but it was the only way I found to format with a variable
cFormat = '{{:^{}}}'
rFormat = '{{:>{}}}'
#row 1 of string
s += rFormat.format(len(pedigree['calving_ease'])).format(
pedigree['calving_ease']) + '\n'
#row 2 of string
s += rFormat.format(len(pedigree['birth_weight'])).format(
pedigree['birth_weight']) + '\n'
#row 3 of string
s += rFormat.format(len(pedigree['wean_weight'])).format(
pedigree['wean_weight']) + '\n'
#row 4 of string
s += rFormat.format(len(pedigree['year_weight'])).format(
pedigree['year_weight']) + '\n'
#row 4 of string
s += rFormat.format(len(pedigree['milk'])).format(
pedigree['milk']) + '\n'
#row 5 of string
s += rFormat.format(len(pedigree['total_mat'])).format(
pedigree['total_mat']) + '\n'
return s
if __name__ == '__main__':
while True:
url = raw_input('Input a url you want to use to make life easier: \n')
pageData = getPageData(url)
s = prettify(createPedigree(pageData))
pyperclip.copy(s)
if len(s) > 0:
print 'the easy string has been copied to your clipboard'
</code></pre>
<p>I've just been using this code for easy copying and pasting. All I have to do is insert the URL, and it saves the numbers to my clipboard. </p>
<p>Now I want to use this code on my website; I want to be able to insert a URL in my HTML code, and it displays these numbers on my page in a table.</p>
<p>My questions are as follows: </p>
<ol>
<li>How do I use the python code on the website?</li>
<li>How do I insert collected data into a table with HTML?</li>
</ol>
| 0 | 2016-07-28T18:34:55Z | 38,674,424 | <p>It sounds like you would want to use something like <a href="https://www.djangoproject.com" rel="nofollow">Django</a>. Although the learning curve is a bit steep, it is worth it <em>and</em> it (of course) supports python.</p>
| 0 | 2016-07-30T13:23:53Z | [
"python",
"html",
"web-scraping",
"beautifulsoup",
"html-parsing"
] |
Python Multiprocessing apply_async Only Using One Process | 38,644,254 | <p>This is my code: </p>
<pre><code>import multiprocessing
import time
import os
def worker():
print str(os.getpid()) + " is going to sleep..."
time.sleep(1)
print str(os.getpid()) + " woke up!"
return
if __name__ == "__main__":
pool = multiprocessing.Pool(processes = 8)
for _ in range(5):
pool.apply_async(worker())
</code></pre>
<p>And my output:</p>
<pre><code>23173 is going to sleep...
23173 woke up!
23173 is going to sleep...
23173 woke up!
23173 is going to sleep...
23173 woke up!
23173 is going to sleep...
23173 woke up!
23173 is going to sleep...
23173 woke up!
</code></pre>
<p>This output appears sequentially and obviously all the outputs have the same id. What I'm expecting to see is 5 separate processes alerting they that are going to sleep and then waking up at similar times. </p>
<p>Why is this not happening? I have done a lot of research on stackoverflow and other sites and it seems like everyone is assured that this should have the "worker" callable applied across multiple processes, but it appears not to be. </p>
| 0 | 2016-07-28T18:35:18Z | 38,644,894 | <p>You are <strong>calling</strong> the <code>worker</code> function, so <em>the current process</em> executes that code and <em>then</em> the result of <code>worker</code> (i.e. <code>None</code>) is passed to <code>apply_asynch</code> which does nothing.</p>
<p>Change it to <a href="https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.pool.Pool.apply_async" rel="nofollow"><code>pool.apply_asynch(worker)</code></a> in this way the <em>function</em> <code>worker</code> is passed to the subprocess which executes it. </p>
<hr>
<p>To be dead clear, the problem is the difference between this:</p>
<pre><code>>>> def worker():
... print('Hi there!')
...
>>> def f(func):
... print('Received', func)
... func()
...
>>> f(worker())
Hi there!
Received None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in f
TypeError: 'NoneType' object is not callable
</code></pre>
<p>And this:</p>
<pre><code>>>> f(worker)
Received <function worker at 0x7f5cd546f048>
Hi there!
</code></pre>
<p>Note that the former has the wrong order of lines and produces a <code>TypeError</code>.
Unfortunately <code>Pool.apply_asynch</code> silences exceptions by default. You can handle errors passing in extra arguments though.</p>
| 1 | 2016-07-28T19:12:00Z | [
"python",
"multiprocessing"
] |
how to increment a whole word match in python using a variable in the regex | 38,644,263 | <p>I am trying to show how many words match in a txt file using python and regex but instead of using the term 'like' I would like to use the variable 'words'</p>
<pre><code>text = 'I like and love green grass'
positive_words = positive_words=open("positive.txt").read()
words = text.split(' ')
if re.search(r'\blike\b',positive_words):
positive_counter=positive_counter+1
print positive_counter
</code></pre>
<p>in my txt file I have the words 'like' and 'love' so positive_counter should equal 2.. How would I use words as a variable instead of 'like'? This works now but just do not know how to incorporate the variable words</p>
| 3 | 2016-07-28T18:35:45Z | 38,644,352 | <p>From the regex point of view, this should work:</p>
<pre><code>re.search(r'\b(I|like|and|love|green|grass)\b', positive_words)
</code></pre>
<p>To build the re from your text variable (note, I'm coding this from memory, you may need to tweak it somewhat):</p>
<pre><code>regex = r'\b(%s)\b' % "|".join(words)
re.search(regex, positive_words)
</code></pre>
| 1 | 2016-07-28T18:40:40Z | [
"python",
"regex",
"variables",
"increment"
] |
how to increment a whole word match in python using a variable in the regex | 38,644,263 | <p>I am trying to show how many words match in a txt file using python and regex but instead of using the term 'like' I would like to use the variable 'words'</p>
<pre><code>text = 'I like and love green grass'
positive_words = positive_words=open("positive.txt").read()
words = text.split(' ')
if re.search(r'\blike\b',positive_words):
positive_counter=positive_counter+1
print positive_counter
</code></pre>
<p>in my txt file I have the words 'like' and 'love' so positive_counter should equal 2.. How would I use words as a variable instead of 'like'? This works now but just do not know how to incorporate the variable words</p>
| 3 | 2016-07-28T18:35:45Z | 38,644,391 | <pre><code>text = 'I like and love green grass'
positive_words = positive_words=open("positive.txt").read()
words = text.split(' ')
for word in words:
if re.search(r'\b' + word + r'\b',positive_words):
positive_counter=positive_counter+1
print positive_counter
</code></pre>
<p>Just looping all of the words in text.</p>
| 3 | 2016-07-28T18:42:59Z | [
"python",
"regex",
"variables",
"increment"
] |
using a ModelForm to sanitize post data in Django | 38,644,280 | <p>I've come across <a href="http://stackoverflow.com/questions/30699794/how-to-create-object-from-querydict-in-django">How to create object from QueryDict in django?</a> , which answers what I want to do. However I want to sanitize the data. What does the Brandon mean by "using a ModelForm" to sanitize posted data?</p>
| 0 | 2016-07-28T18:36:50Z | 38,644,380 | <p>What he mean is that with <code>ModelForm</code> you can not only create model instance from <code>QueryDict</code>, but also do a bunch of validation on data types and it's requirements as for example if value's length correct, if it's required etc. Also you will pass only needed data from <code>QueryDict</code> to model instance and not whole request</p>
<p>So typical flow for this is:</p>
<pre><code>form = ModelForm(request.POST)
if form.is_valid():
form.save()
return HttpResponse('blah-blah success message')
else:
form = ModelForm()
return HttpResponse('blah-blah error message')
</code></pre>
<p>And awesome Django docs for this: <a href="https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#django.forms.ModelForm" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#django.forms.ModelForm</a></p>
| 1 | 2016-07-28T18:42:06Z | [
"python",
"django"
] |
using a ModelForm to sanitize post data in Django | 38,644,280 | <p>I've come across <a href="http://stackoverflow.com/questions/30699794/how-to-create-object-from-querydict-in-django">How to create object from QueryDict in django?</a> , which answers what I want to do. However I want to sanitize the data. What does the Brandon mean by "using a ModelForm" to sanitize posted data?</p>
| 0 | 2016-07-28T18:36:50Z | 38,644,708 | <p><code>ModelForm</code> are very helpful when you want to create just model instances. If you create a form that closely looks like a model then you should go for a model form instead. Here is an example.
Going by the example provided in the Django <a href="https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#django.forms.ModelForm" rel="nofollow">website</a>.</p>
<p>In your forms.py</p>
<pre><code>class ArticleForm(ModelForm):
class Meta:
model = Articels #You need to mention the model name for which you want to create the form
fields = ['content', 'headline'] #Fields you want your form to display
</code></pre>
<p>So in the form itself you can sanitize your data as well. There are 2 ways of doing that.<br><br>
<strong>Way 1:</strong> Using the <code>clean</code> function provided by Django using which you can sanitize all your fields in one function. </p>
<pre><code>class ArticleForm(ModelForm):
class Meta:
model = Articels #You need to mention the model name for which you want to create the form
fields = ['content', 'headline'] #Fields you want your form to display
def clean(self):
# Put your logic here to clean data
</code></pre>
<p><br>
<strong>Way 2:</strong> Using <code>clean_fieldname</code> function using which you can clean your form data for each field separately.</p>
<pre><code>class ArticleForm(ModelForm):
class Meta:
model = Articels #You need to mention the model name for which you want to create the form
fields = ['content', 'headline'] #Fields you want your form to display
def clean_content(self):
# Put your logic here to clean content
def clean_headline(self):
# Put your logic here to clean headline
</code></pre>
<p>Basically you would use <code>clean</code> and <code>clean_fieldname</code> methods to validate your form. This is done to raise any error in forms if a wrong input is submitted. Let's assume you want the article's content to have at least 10 characters. You would add this constraint to <code>clean_content</code>. </p>
<pre><code>class ArticleForm(ModelForm):
class Meta:
model = Articels #You need to mention the model name for which you want to create the form
fields = ['content', 'headline'] #Fields you want your form to display
def clean_content(self):
# Get the value entered by user using cleaned_data dictionary
data_content = self.cleaned_data.get('content')
# Raise error if length of content is less than 10
if len(data_content) < 10:
raise forms.ValidationError("Content should be min. 10 characters long")
return data_content
</code></pre>
<p>So here's the flow: <br>
<strong>Step 1</strong>: User open the page say <code>/home/</code>, and you show the user a form to add new article.</p>
<p><strong>Step 2</strong>: User submits the form (content length is less than 10). </p>
<p><strong>Step 3</strong>: You create an instance of the form using the <code>POST</code> data. Like this <code>form = ArticleForm(request.POST)</code>.</p>
<p><strong>Step 4:</strong> Now you call the <code>is_valid</code> method on the form to check if its valid.</p>
<p><strong>Step 5:</strong> Now the <code>clean_content</code> comes in play. When you call <code>is_valid</code>, it will check if the content entered by user is min. 10 characters or not. If not it will raise an error.</p>
<p>This is how you can validate your form.</p>
| 4 | 2016-07-28T19:01:44Z | [
"python",
"django"
] |
Chunking for Tamil language | 38,644,313 | <p>I want to use the NLTK chunker for Tamil language (which is an Indic language). <a href="http://www.nltk.org/_modules/nltk/chunk.html" rel="nofollow">However, it says that it doesn't support Unicode because it uses the 'pre' module for regular expressions.</a></p>
<blockquote>
<h2>Unresolved Issues</h2>
<p>If we use the <code>re</code> module for regular expressions, Python's regular
expression engine generates "maximum recursion depth exceeded" errors
when processing very large texts, even for regular expressions that
should not require any recursion. We therefore use the <code>pre</code> module
instead. But note that <code>pre</code> does not include Unicode support, so
this module will not work with unicode strings.</p>
</blockquote>
<p>Any suggestion for a work around or another way to accomplish it?</p>
| 1 | 2016-07-28T18:38:16Z | 38,644,769 | <p>You can use <a href="http://ltrc.iiit.ac.in/" rel="nofollow">LTRC</a>'s <a href="http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php" rel="nofollow">Shallow Parser</a> for Tamil Language.</p>
<p>You can check online demo, <a href="http://ltrc.iiit.ac.in/analyzer/tamil/" rel="nofollow">here</a>.</p>
| 0 | 2016-07-28T19:04:30Z | [
"python",
"unicode",
"nltk",
"chunking",
"indic"
] |
Chunking for Tamil language | 38,644,313 | <p>I want to use the NLTK chunker for Tamil language (which is an Indic language). <a href="http://www.nltk.org/_modules/nltk/chunk.html" rel="nofollow">However, it says that it doesn't support Unicode because it uses the 'pre' module for regular expressions.</a></p>
<blockquote>
<h2>Unresolved Issues</h2>
<p>If we use the <code>re</code> module for regular expressions, Python's regular
expression engine generates "maximum recursion depth exceeded" errors
when processing very large texts, even for regular expressions that
should not require any recursion. We therefore use the <code>pre</code> module
instead. But note that <code>pre</code> does not include Unicode support, so
this module will not work with unicode strings.</p>
</blockquote>
<p>Any suggestion for a work around or another way to accomplish it?</p>
| 1 | 2016-07-28T18:38:16Z | 38,685,330 | <p>Chunkers are language-specific, so you need to train one for Tamil anyway. Of course if you are happy with available off-the-shelf solutions (I've got no idea if there are any, e.g. if the link in the now-deleted answer is any good), you can stop reading here. If not, you can train your own but you'll need a corpus that is annotated with the chunks you want to recognize: perhaps you are after NP chunks (the usual case), but maybe it's something else. </p>
<p>Once you have an annotated corpus, look carefully at chapters 6 and 7 of the NLTK book, and especially <a href="http://www.nltk.org/book/ch07.html#developing-and-evaluating-chunkers" rel="nofollow">section 7.3, Developing and evaluating chunkers.</a>. While Chapter 7 begins with the nltk's regexp chunker, keep reading and you'll see how to build a "sequence classifier" that does not rely on the nltk's regexp-based chunking engine. (<a href="http://www.nltk.org/book/ch06.html" rel="nofollow">Chapter 6</a> is essential for this, so don't skip it).</p>
<p>It's not a trivial task: You need to understand the classifier approach, put the pieces together, probably convert your corpus to <a href="http://www.nltk.org/book/ch07.html#reading-iob-format-and-the-conll-2000-corpus" rel="nofollow">IOB format</a>, and finally select features that will give you satisfactory performance. But it is pretty straightforward, and can be carried out for any language or chunking task for which you have an annotated corpus. The only open-ended part is thinking up contextual cues that you can convert into features to help the classifier decide correctly, and experimenting until you find a good mix. (On the up side, it is a much more powerful approach than pure regexp-based solutions, even for ascii text).</p>
| 2 | 2016-07-31T15:08:30Z | [
"python",
"unicode",
"nltk",
"chunking",
"indic"
] |
How do I increase the width between xaxis ticks in a seaborn facetgrid plot? | 38,644,375 | <p>I have a seaborn Facetgrid, stripplot</p>
<pre><code>m=sns.FacetGrid(group, col='myGroupCol' , size=15, aspect=0.9, sharex=False, sharey=True)
m.map(sns.stripplot,'myX','myY',hue='myColorBy',data=pandas.groupby(), order=order_list, jitter=0.4, hue_order=\
['T','C','TT','TC','CT','CC'],palette="Set1", split=True, size=15, linewidth=2, edgecolor="gray").set(ylim=(-2,6))
for ax,title in zip(m.axes.flat, sorted(titles.iterkeys())):
ax.tick_params(axis='x', which='major', pad=15)
ax.grid(True)
ax.legend(bbox_to_anchor=(1.05, 1),loc=2)
</code></pre>
<p>I have tried to changes the spacing between the sticks using tick params but I don't see any change in width.</p>
<p>Any idea what I am doing wrong?</p>
| 0 | 2016-07-28T18:41:48Z | 38,644,498 | <p>Well just adjusted the aspect ratio and jitter appropriately and got the desired plot.</p>
| 1 | 2016-07-28T18:49:28Z | [
"python",
"matplotlib",
"seaborn"
] |
Dataframe with column of ranges. Given number, select row where number occurs | 38,644,388 | <p>I have a dataframe with a column of a range of numbers, and then more columns of data</p>
<pre><code>[1, 2, 3, ..., 10] | a | b
[11, 12, 13, 14, ...] | c | d
</code></pre>
<p>Given a number like 10, 14, etc. how do I select the row where that number is in the range, i.e for 10 I want <code>[1, 2, 3, ..., 10] | a | b</code> row to be returned.</p>
<p>So far Ive tried <code>dfs['A'].ix[10 in dfs['A']['B']]</code> where <code>dfs</code> is a dictionary of dataframes, <code>'A'</code> is a dataframe, <code>'B'</code> is the column with ranges. </p>
<p>How do I do this?</p>
| 0 | 2016-07-28T18:42:46Z | 38,644,492 | <p>Use <code>apply</code> to loop through column <code>B</code> and check each element individually which returns a logical index for subsetting:</p>
<pre><code>df = pd.DataFrame({"B": [list(range(1,11)), list(range(11,21))], "col1":["a", "b"], "col2":["c", "d"]})
df[df["B"].apply(lambda x: 10 in x)]
# B col1 col2
# 0 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] a c
</code></pre>
| 1 | 2016-07-28T18:48:57Z | [
"python",
"pandas"
] |
Dataframe with column of ranges. Given number, select row where number occurs | 38,644,388 | <p>I have a dataframe with a column of a range of numbers, and then more columns of data</p>
<pre><code>[1, 2, 3, ..., 10] | a | b
[11, 12, 13, 14, ...] | c | d
</code></pre>
<p>Given a number like 10, 14, etc. how do I select the row where that number is in the range, i.e for 10 I want <code>[1, 2, 3, ..., 10] | a | b</code> row to be returned.</p>
<p>So far Ive tried <code>dfs['A'].ix[10 in dfs['A']['B']]</code> where <code>dfs</code> is a dictionary of dataframes, <code>'A'</code> is a dataframe, <code>'B'</code> is the column with ranges. </p>
<p>How do I do this?</p>
| 0 | 2016-07-28T18:42:46Z | 38,644,500 | <pre><code>df = pd.DataFrame({'ranges':[range(11), range(11,20)], 'dat1':['a','c'], 'dat2':['b','d']})
mask = df.ranges.apply(lambda x: 10 in x)
df.ix[mask]
</code></pre>
| 1 | 2016-07-28T18:49:34Z | [
"python",
"pandas"
] |
PyMySQL Error inserting values into table. - Python | 38,644,397 | <p>I am trying to make a login system with python and mysql. I connected to the database, but when I try to insert values into a table, it fails. I'm not sure what's wrong. I am using python 3.5 and the PyMySQL module. </p>
<pre><code> #!python3
import pymysql, sys, time
try:
print('Connecting.....')
time.sleep(1.66)
conn = pymysql.connect(user='root', passwd='root', host='127.0.0.1', port=3306, database='MySQL')
print('Connection suceeded!')
except:
print('Connection failed.')
sys.exit('Error.')
cursor = conn.cursor()
sql = "INSERT INTO login(USER, PASS) VALUES('test', 'val')"
try:
cursor.execute(sql)
conn.commit()
except:
conn.rollback()
print('Operation failed.')
conn.close()
</code></pre>
| 0 | 2016-07-28T18:43:18Z | 38,646,773 | <p>I think it may have to do with the order of the statements in the connection. According to the PyMySQL github (found <a href="https://github.com/PyMySQL/PyMySQL/blob/master/example.py" rel="nofollow">here</a>) the correct order is host, port, user, passwd, db.</p>
| 0 | 2016-07-28T21:07:57Z | [
"python",
"mysql",
"python-3.x",
"pymysql"
] |
Error running executable compiled with py2exe | 38,644,417 | <p>I'm trying use py2exe to compile an eye tracking experiment written in Python 2.7 (32-bit). The experiment uses the psychopy library. I wrote the experiment using the PyCharm IDE, and the experiment runs when I run it through the PyCharm IDE, using an interpreter in a virtual environment located at <code>C:\Users\phil\Python_2.7_32-bit</code>. </p>
<p>The experiment compiles without generating any errors when I enter the following command into the command prompt: <code>C:\Users\phil\Python_2.7_32-bit\Scripts\python.exe C:\Users\phil\PycharmProjects\iTRAC\VisSearch\setup.py py2exe</code>.</p>
<p>When I run the executable generated by the above py2exe command, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "VisualSearch.py", line 3, in <module>
File "psychopy\__init__.pyc", line 39, in <module>
File "psychopy\preferences\__init__.pyc", line 5, in <module>
File "psychopy\preferences\preferences.pyc", line 172, in <module>
File "psychopy\preferences\preferences.pyc", line 33, in __init__
File "psychopy\preferences\preferences.pyc", line 98, in loadAll
File "psychopy\preferences\preferences.pyc", line 146, in loadAppData
File "psychopy\preferences\configobj.pyc", line 583, in __getitem__
KeyError: 'builder'
</code></pre>
<p>My setup.py script is as follows:</p>
<pre><code>from distutils.core import setup
import py2exe
setup(windows =['C:\Users\phil\PycharmProjects\iTRAC\VisSearch\VisualSearch.py'])
</code></pre>
<p>I've also tried using the following setup.py script with the same results:</p>
<pre><code>from distutils.core import setup
import py2exe
setup(windows = [{'script':'C:\Users\phil\PycharmProjects\iTRAC\VisSearch\VisualSearch.py',
'options' : {'py2exe':{'includes':['psychopy'],
'compressed': True,
'bundle_files': 1,}}}])
</code></pre>
<p>I googled the error and came up with 0 results.</p>
<p><strong>Can anybody tell me why I am running into this error?</strong></p>
| 0 | 2016-07-28T18:44:59Z | 38,666,497 | <p>This is probably a missing config/prefs file. PsychoPy uses the configobj library to read and validate preferences but my guess is that py2exe is only automatically packaging py/pyc files and needs to include the .spec files in the psychopy/preferences folder.</p>
| 0 | 2016-07-29T19:50:08Z | [
"python",
"windows",
"compiler-errors",
"py2exe",
"psychopy"
] |
Splitting line and adding numbers to a numpy array | 38,644,441 | <p>I have several text files in a folder, all with data in the form of numbers, each separated by 3 spaces. There are no line breaks. I want to take the numbers, put them in order in a numpy array, and then reshape it to be a 240 by 240 array. (I have the correct number of data points in each file to do so.) Afterwards, I want it to display my array graphically, and then do the same for the next file. However, my attempts keep giving my errors that say:</p>
<pre><code>"'unicodeescape' codec can't decode bytes in position 10-11: malformed \N character escape."
</code></pre>
<p>My code so far is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = np.array([])
import glob, os
os.chdir("/mydirectory")
for file in glob.glob("*.txt"):
for line in file:
numbers = line.split(' ')
for number in numbers:
a.np.append([number])
b = a.reshape(240,240)
plt.imshow(b)
a = np.array([])
</code></pre>
| 0 | 2016-07-28T18:46:08Z | 38,645,414 | <p>It sounds like a number with reading one of the files. I'd suggest first doing a </p>
<pre><code> lines = file.readlines()
</code></pre>
<p>and making sure that the lines look right. You may also want to add a <code>strip</code></p>
<pre><code>In [244]: [int(x) for x in '121 342 123\n'.strip().split(' ')]
Out[244]: [121, 342, 123]
</code></pre>
<p>But this looping structure is also bad. It's a misuse of <code>np.append</code></p>
<pre><code>a = np.array([])
....
for number in numbers:
a.np.append([number])
In [245]: a=np.array([])
In [246]: a.np.append(['123'])
...
AttributeError: 'numpy.ndarray' object has no attribute 'np'
In [247]: a.append(['123'])
...
AttributeError: 'numpy.ndarray' object has no attribute 'append'
In [248]: np.append(a,['123'])
Out[248]:
array(['123'],
dtype='<U32')
In [249]: a
Out[249]: array([], dtype=float64)
</code></pre>
<p><code>np.append</code> returns a new array; it does not change <code>a</code> inplace.</p>
<p>You want to collect values in list (or lists of lists), or at the very least pass a list of integers to <code>np.array</code>:</p>
<pre><code>In [250]: np.array([int(x) for x in '121 342 123\n'.strip().split(' ')])
Out[250]: array([121, 342, 123])
</code></pre>
| 2 | 2016-07-28T19:44:26Z | [
"python",
"arrays",
"python-3.x",
"numpy"
] |
Batch renaming in python | 38,644,477 | <p>I want to rename image files in bulk, and give them names like 1.jpg, 2.jpg and so on. It works fine if I do it the first time. But as soon as I copy a new file with a different name like abc.jpg, the program behaves weirdly, sometimes just leaving the file abc.jpg as it is, while other times renaming it as I want, but leaving another file with a different number. </p>
<p>Here's a part of the code:</p>
<pre><code>i = 1
if(os.path.exists(path)):
for file in os.listdir(path):
new_file = str(i) + '.jpg'
if os.path.isfile(os.path.join(path, new_file)):
print new_file + ' already renamed'
else:
os.rename(os.path.join(path, file), os.path.join(path, new_file))
i += 1
print "Renaming successful!"
else:
print "Folder does not exist!"
</code></pre>
<p>Please help me with this!</p>
| -1 | 2016-07-28T18:48:22Z | 38,645,503 | <p>try this, it worked for me. You can add an if statement to check if the folder exists if u want. - </p>
<pre><code> import os
sourcepath = 'D:\Pictures'
path = os.listdir(sourcepath)
i = 1
for files in path:
if not os.path.isfile(os.path.join(sourcepath, files)):
continue
new_file = str(i)+'.jpg'
try:
os.rename(os.path.join(sourcepath, files), os.path.join(sourcepath, new_file))
i += 1
except:
print 'Error'
</code></pre>
| 0 | 2016-07-28T19:49:42Z | [
"python",
"python-2.7",
"batch-rename"
] |
Reading JSON file with Python 3 | 38,644,480 | <p>I'm using Python 3.5.2 on Windows 10 x64. The <code>JSON</code> file I'm reading is <a href="http://pastebin.com/Yjs6FAfm" rel="nofollow" title="this">this</a> which is a <code>JSON</code> array containing 2 more arrays.</p>
<p>I'm trying to parse this <code>JSON</code> file using the <code>json</code> module. As described in the <a href="https://docs.python.org/3/library/json.html" rel="nofollow" title="docs">docs</a> the <code>JSON</code> file must be compliant to <code>RFC 7159</code>. I checked my file <a href="https://jsonformatter.curiousconcept.com/" rel="nofollow" title="here">here</a> and it tells me it's perfectly fine with the <code>RFC 7159</code> format, but when trying to read it using this simple python code:</p>
<pre><code>with open(absolute_json_file_path, encoding='utf-8-sig') as json_file:
text = json_file.read()
json_data = json.load(json_file)
print(json_data)
</code></pre>
<p>I'm getting this exception:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 2217, in <module>
globals = debugger.run(setup['file'], None, None)
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 1643, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Andres Torti/Git-Repos/MCF/Sur3D.App/shapes-json-checker.py", line 14, in <module>
json_data = json.load(json_file)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>I can read this exact file perfectly fine on Javascript but I can't get Python to parse it. Is there anything wrong with my file or is any problem with the Python parser?</p>
| 0 | 2016-07-28T18:48:25Z | 38,644,565 | <p>Try this</p>
<pre><code>import json
with open('filename.txt', 'r') as f:
array = json.load(f)
print (array)
</code></pre>
| 1 | 2016-07-28T18:53:24Z | [
"python",
"json",
"parsing"
] |
Reading JSON file with Python 3 | 38,644,480 | <p>I'm using Python 3.5.2 on Windows 10 x64. The <code>JSON</code> file I'm reading is <a href="http://pastebin.com/Yjs6FAfm" rel="nofollow" title="this">this</a> which is a <code>JSON</code> array containing 2 more arrays.</p>
<p>I'm trying to parse this <code>JSON</code> file using the <code>json</code> module. As described in the <a href="https://docs.python.org/3/library/json.html" rel="nofollow" title="docs">docs</a> the <code>JSON</code> file must be compliant to <code>RFC 7159</code>. I checked my file <a href="https://jsonformatter.curiousconcept.com/" rel="nofollow" title="here">here</a> and it tells me it's perfectly fine with the <code>RFC 7159</code> format, but when trying to read it using this simple python code:</p>
<pre><code>with open(absolute_json_file_path, encoding='utf-8-sig') as json_file:
text = json_file.read()
json_data = json.load(json_file)
print(json_data)
</code></pre>
<p>I'm getting this exception:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 2217, in <module>
globals = debugger.run(setup['file'], None, None)
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 1643, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Andres Torti/Git-Repos/MCF/Sur3D.App/shapes-json-checker.py", line 14, in <module>
json_data = json.load(json_file)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>I can read this exact file perfectly fine on Javascript but I can't get Python to parse it. Is there anything wrong with my file or is any problem with the Python parser?</p>
| 0 | 2016-07-28T18:48:25Z | 38,644,625 | <p>Based on reading over the <a href="https://docs.python.org/3/library/json.html#json.load" rel="nofollow">documentation</a> again, it appears you need to either change the third line to </p>
<pre><code>json_data = json.loads(text)
</code></pre>
<p>or remove the line</p>
<pre><code>text = json_file.read()
</code></pre>
<p>since <code>read()</code> causes the file's index to reach the end of the file. (I suppose, alternatively, you can reset the index of the file).</p>
| 1 | 2016-07-28T18:56:23Z | [
"python",
"json",
"parsing"
] |
How to use a dictionary in recursion | 38,644,493 | <p>I have a recursive function that is similar to DFS on a graph G=(V,E) that finds all simple paths. The recursion is in a for loop so that when the function returns it may recurse again before returning itself. </p>
<p>Just to set the picture:</p>
<pre><code>def algo(M,v):
for u in v.neighbors:
# do stuff
M[u.name] = u # dictionary
algo(M,u)
</code></pre>
<p>However since M is a dictionary, it is treated as a mutable object so that when the function returns it does not restore M like it would for immutable objects. What is the best pythonic way of accomplishing this?</p>
<p>I don't think the deepcopy function in the copy library will be the best choice due to the problems outlined in the docs causing a recursive loop: <a href="https://docs.python.org/2/library/copy.html" rel="nofollow">https://docs.python.org/2/library/copy.html</a></p>
<p>How can this be done?</p>
| -1 | 2016-07-28T18:49:06Z | 38,651,681 | <p>Do you simply want to insert an item into a dictionary, recur, and then remove the item:</p>
<pre><code>def algo(M, v):
for u in v.neighbors:
# do stuff
M[u.name] = u # temporarily extend dictionary
result = algo(M, u)
del M[u.name] # clean up
# check result
</code></pre>
<p>If you want to make sure the dictionary is pristine for the next iteration and recurance, you can pass an augmented (shallow) copy:</p>
<pre><code>def algo(M, v):
for u in v.neighbors:
# do stuff
# create a new extended dictionary on the fly
result = algo({**M, u.name: u}, u))
# no need to clean up afterwards
# check result
</code></pre>
<p>If you're not running Python 3.5 or later, then use a more verbose syntax, something like:</p>
<pre><code>def algo(M, v):
for u in v.neighbors:
# do stuff
# create a new extended dictionary on the fly
result = algo(dict(list(M.items()) + [(u.name, u)]), u))
# no need to clean up afterwards
# check result
</code></pre>
<p>Or are you trying to achieve something else?</p>
| 1 | 2016-07-29T06:13:05Z | [
"python",
"dictionary",
"recursion",
"copy"
] |
Nearest value iteration | 38,644,511 | <pre><code>s1 = pd.Series({11:100, 13:102, 17:99})
s2 = pd.Series({10:1, 14:2, 18:3})
</code></pre>
<p>Having these series, individually I can find s2's value by the nearest index of s2, using s1's index. Example:</p>
<p><code>s2.values[np.abs(s2.index - s1.index[0]).argmin()]</code> </p>
<p>Returns 1 because 11, s1's first index is closest to 10.</p>
<p>What I can't seem to figure out is how to create a DataFrame that has s1 and these values iterated, without using a for loop which I've been been taught is unpractical in pandas.</p>
<p>So the desired outcome is a DataFrame with s1's values in one column and the other having s2's value using the code above.</p>
| 1 | 2016-07-28T18:50:17Z | 38,644,762 | <p>If I understand correctly, I think you want to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html" rel="nofollow"><code>reindex</code></a> on <code>s2</code> with <code>method='nearest'</code>:</p>
<pre><code>s2 = s2.reindex(s1.index, method='nearest')
df = pd.DataFrame({'s1': s1, 's2': s2})
</code></pre>
<p>The resulting output:</p>
<pre><code> s1 s2
11 100 1
13 102 2
17 99 3
</code></pre>
| 2 | 2016-07-28T19:04:15Z | [
"python",
"pandas"
] |
Pandas query causing RuntimeWarning: divide by zero encountered in log10 | 38,644,532 | <p>I saw a similar post (<a href="http://stackoverflow.com/questions/32659279/runtimewarning-divide-by-zero-encountered-in-log10-in-pandas-align-py-prob-fro">RuntimeWarning: divide by zero encountered in log10 in pandas align.py, prob from `query`-- cause/solution?</a>), but there was no sample code, followup or answer. Is this a known bug with <code>query</code>?</p>
<p>When using <code>pd.query()</code>, I'm getting a <code>RuntimeWarning</code> that doesn't make sense to me. I tried it a few different ways and tried to simplify the data to make sure it wasn't me, but I continually get the warning no matter what I do. Is anyone able to run the below code without getting the warning? Is this a bug?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame.from_items([('A', [1, 2, 3, 4, 5]),
('B', [14, 15, 16, 17, 18]),
('C', list(np.random.randn(5)))])
df.set_index(['A', 'B'], inplace = True)
df.query('15 <= B <= 17')
C:\Anaconda3\lib\site-packages\pandas\computation\align.py:98:
RuntimeWarning: divide by zero encountered in log10
ordm = np.log10(abs(reindexer_size - term_axis_size))
Out[1]:
C
A B
2 15 -0.852411
3 16 -0.665470
4 17 0.132162
</code></pre>
<p>I thought maybe it was the fact that I was using <code>15 <= B <= 17</code> so I tried to break that down into pieces, but got the same RuntimeWarning.</p>
<pre><code>In [2]: df.query('B <= 17')
C:\Anaconda3\lib\site-packages\pandas\computation\align.py:98:
RuntimeWarning: divide by zero encountered in log10
ordm = np.log10(abs(reindexer_size - term_axis_size))
Out[2]:
C
A B
1 14 0.380167
2 15 -0.852411
3 16 -0.665470
4 17 0.132162
In [3]: df.query('B >= 15')
C:\Anaconda3\lib\site-packages\pandas\computation\align.py:98:
RuntimeWarning: divide by zero encountered in log10
ordm = np.log10(abs(reindexer_size - term_axis_size))
Out[3]:
C
A B
2 15 -0.852411
3 16 -0.665470
4 17 0.132162
5 18 0.697867
</code></pre>
<ul>
<li>Windows 7 64 bit</li>
<li>Spyder 3.0.0.dev0</li>
<li>Python 3.5.2</li>
<li>Pandas 0.18.1</li>
<li>Numpy 1.11.1</li>
</ul>
| 2 | 2016-07-28T18:51:36Z | 38,644,712 | <p>There is a <a href="https://github.com/pydata/pandas/issues/13135" rel="nofollow">bug</a> and it's still open.</p>
<p>It's working properly for me:</p>
<pre><code>In [2]: df
Out[2]:
C
A B
1 14 0.159137
2 15 2.275652
3 16 0.798102
4 17 0.774097
5 18 2.361938
In [3]: df.query('15 <= B <= 17')
Out[3]:
C
A B
2 15 2.275652
3 16 0.798102
4 17 0.774097
In [4]: df.query('B <= 17')
Out[4]:
C
A B
1 14 0.159137
2 15 2.275652
3 16 0.798102
4 17 0.774097
</code></pre>
| 0 | 2016-07-28T19:01:55Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
How to pass the value of a variable into a mongo query? | 38,644,539 | <p>The following mongo query works correctly:
<code>db.data.find({"data.id": "5" })</code></p>
<p>I want to pass the value of a variable for an argument list to the query in a python program:</p>
<pre><code>import pymongo
from pymongo import MongoClient
client=MongoClient()
db=client.dhlab
data=db.data
for data in db.data.find({"data.id":'"' $id '"'}):
print data
</code></pre>
<p>I tried the following for <code>db.data.find({"data.id":id})</code> without success:
<code>/"id/", '"'id'"', /"$id/"</code></p>
<p>How can I fix this?</p>
| 0 | 2016-07-28T18:52:04Z | 38,644,654 | <p>First, could you share you existing document schema? for refferences. </p>
<pre><code>db.data.findOne()
</code></pre>
<p>Did you explicitly use have field <code>id</code> on the document?</p>
<p>MongoDB store document using <code>_id</code>, maybe you got it wrong.</p>
| 0 | 2016-07-28T18:58:15Z | [
"python",
"mongodb"
] |
Import error on 'from six import raise_from' | 38,644,575 | <p>I'm trying to use the gmusicapi(<a href="https://github.com/simon-weber/gmusicapi" rel="nofollow">https://github.com/simon-weber/gmusicapi</a>). However when I try the following line:</p>
<pre><code>from gmusicapi import Webclient
</code></pre>
<p>I get the following error message:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/gmusicapi-10.0.2rc1-py2.7.egg/gmusicapi/__init__.py", line 4, in <module>
from gmusicapi.clients import Webclient, Musicmanager, Mobileclient
File "/usr/local/lib/python2.7/dist-packages/gmusicapi-10.0.2rc1-py2.7.egg/gmusicapi/clients/__init__.py", line 2, in <module>
from gmusicapi.clients.webclient import Webclient
File "/usr/local/lib/python2.7/dist-packages/gmusicapi-10.0.2rc1-py2.7.egg/gmusicapi/clients/webclient.py", line 11, in <module>
from gmusicapi.protocol import webclient
File "/usr/local/lib/python2.7/dist-packages/gmusicapi-10.0.2rc1-py2.7.egg/gmusicapi/protocol/webclient.py", line 6, in <module>
from six import raise_from
ImportError: cannot import name raise_from
</code></pre>
<p>I'm not sure why I am unable to import raise_from.</p>
<p>I'm running python2.7.6 with six version at 1.5.2</p>
| 0 | 2016-07-28T18:54:03Z | 38,644,781 | <p>It works fine for me. And I have python 2.7.5.</p>
<p>Try this <code>pip install --upgrade six</code></p>
| 0 | 2016-07-28T19:05:24Z | [
"python",
"six"
] |
Python: How to delete rows ending in certain characters? | 38,644,696 | <p>I have a large data file and I need to delete rows that end in certain letters.</p>
<p>Here is an example of the file I'm using:</p>
<pre><code>User Name DN
MB212DA CN=MB212DA,CN=Users,DC=prod,DC=trovp,DC=net
MB423DA CN=MB423DA,OU=Generic Mailbox,DC=prod,DC=trovp,DC=net
MB424PL CN=MB424PL,CN=Users,DC=prod,DC=trovp,DC=net
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=netenter code here
</code></pre>
<p>Code I am using:</p>
<pre><code>from pandas import DataFrame, read_csv
import pandas as pd
f = pd.read_csv('test1.csv', sep=',',encoding='latin1')
df = f.loc[~(~pd.isnull(f['User Name']) & f['UserName'].str.contains("DA|PL",))]
</code></pre>
<p>How do I use regular expression syntax to delete the words that end in "DA" and "PL" but make sure I do not delete the other rows because they contain "DA" or "PL" inside of them?</p>
<p>It should delete the rows and I end up with a file like this:</p>
<pre><code>User Name DN
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=net
</code></pre>
<p>First 3 rows are deleted because they ended in DA and PL.</p>
| 9 | 2016-07-28T19:01:08Z | 38,644,778 | <p>Instead of <code>regular expressions</code>, you can use the <a href="https://docs.python.org/3/library/stdtypes.html#str.endswith" rel="nofollow"><code>endswith()</code></a> method to check if a string ends with a specific pattern.</p>
<p>I.e.:</p>
<pre><code>for row in rows:
if row.endswith('DA') or row.endswith('PL'):
#doSomething
</code></pre>
<p>You should create another df using the filtered data, and then use <code>pd.to_csv()</code> to save a clean version of your file.</p>
| 0 | 2016-07-28T19:05:17Z | [
"python",
"python-3.x",
"pandas"
] |
Python: How to delete rows ending in certain characters? | 38,644,696 | <p>I have a large data file and I need to delete rows that end in certain letters.</p>
<p>Here is an example of the file I'm using:</p>
<pre><code>User Name DN
MB212DA CN=MB212DA,CN=Users,DC=prod,DC=trovp,DC=net
MB423DA CN=MB423DA,OU=Generic Mailbox,DC=prod,DC=trovp,DC=net
MB424PL CN=MB424PL,CN=Users,DC=prod,DC=trovp,DC=net
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=netenter code here
</code></pre>
<p>Code I am using:</p>
<pre><code>from pandas import DataFrame, read_csv
import pandas as pd
f = pd.read_csv('test1.csv', sep=',',encoding='latin1')
df = f.loc[~(~pd.isnull(f['User Name']) & f['UserName'].str.contains("DA|PL",))]
</code></pre>
<p>How do I use regular expression syntax to delete the words that end in "DA" and "PL" but make sure I do not delete the other rows because they contain "DA" or "PL" inside of them?</p>
<p>It should delete the rows and I end up with a file like this:</p>
<pre><code>User Name DN
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=net
</code></pre>
<p>First 3 rows are deleted because they ended in DA and PL.</p>
| 9 | 2016-07-28T19:01:08Z | 38,644,843 | <p>You can use a boolean mask whereby you check if the last two characters of <code>User_Name</code> are in not (<code>~</code>) in a set of two character endings:</p>
<pre><code>>>> df[~df.User_Name.str[-2:].isin(['DA', 'PA'])]
User_Name DN
2 MB424PL CN=MB424PL, CN=Users, DC=prod, DC=trovp, DC=net
3 MBDA423 CN=MBDA423, OU=DNA, DC=prod, DC=trovp, DC=net
4 MB2ADA4 CN=MB2ADA4, OU=DNA, DC=prod, DC=trovp, DC=nete...
</code></pre>
| 2 | 2016-07-28T19:09:09Z | [
"python",
"python-3.x",
"pandas"
] |
Python: How to delete rows ending in certain characters? | 38,644,696 | <p>I have a large data file and I need to delete rows that end in certain letters.</p>
<p>Here is an example of the file I'm using:</p>
<pre><code>User Name DN
MB212DA CN=MB212DA,CN=Users,DC=prod,DC=trovp,DC=net
MB423DA CN=MB423DA,OU=Generic Mailbox,DC=prod,DC=trovp,DC=net
MB424PL CN=MB424PL,CN=Users,DC=prod,DC=trovp,DC=net
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=netenter code here
</code></pre>
<p>Code I am using:</p>
<pre><code>from pandas import DataFrame, read_csv
import pandas as pd
f = pd.read_csv('test1.csv', sep=',',encoding='latin1')
df = f.loc[~(~pd.isnull(f['User Name']) & f['UserName'].str.contains("DA|PL",))]
</code></pre>
<p>How do I use regular expression syntax to delete the words that end in "DA" and "PL" but make sure I do not delete the other rows because they contain "DA" or "PL" inside of them?</p>
<p>It should delete the rows and I end up with a file like this:</p>
<pre><code>User Name DN
MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net
MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=net
</code></pre>
<p>First 3 rows are deleted because they ended in DA and PL.</p>
| 9 | 2016-07-28T19:01:08Z | 38,644,862 | <p>You could use this expression</p>
<pre><code>df = df[~df['User Name'].str.contains('(?:DA|PL)$')]
</code></pre>
<p>It will return all rows that don't end in either DA or PL. </p>
<p>The <code>?:</code> is so that the brackets would not capture anything. Otherwise, you'd see pandas returning the following (harmless) warning: </p>
<pre><code>UserWarning: This pattern has match groups. To actually get the groups, use str.extract.
</code></pre>
<p>Alternatively, using <code>endswith()</code> and without regular expressions, the same filtering could be achieved by using the following expression:</p>
<pre><code>df = df[~df['User Name'].str.endswith(('DA', 'PL'))]
</code></pre>
<p>As expected, the version without regular expression will be faster. A simple test, consisting of <code>big_df</code>, which consists of 10001 copies of your original <code>df</code>:</p>
<pre><code># Create a larger DF to get better timing results
big_df = df.copy()
for i in range(10000):
big_df = big_df.append(df)
print(big_df.shape)
>> (50005, 2)
# Without regular expressions
%%timeit
big_df[~big_df['User Name'].str.endswith(('DA', 'PL'))]
>> 10 loops, best of 3: 22.3 ms per loop
# With regular expressions
%%timeit
big_df[~big_df['User Name'].str.contains('(?:DA|PL)$')]
>> 10 loops, best of 3: 61.8 ms per loop
</code></pre>
| 7 | 2016-07-28T19:10:13Z | [
"python",
"python-3.x",
"pandas"
] |
In scipy classification models (such as the svm.svc), how can one get a list of the names for all classes? | 38,644,787 | <p>In scipy classification models (such as the svm.svc), how can one get a list of the names for all classes the model may classify points into?</p>
| 0 | 2016-07-28T19:05:36Z | 38,688,820 | <p>For <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow">svm.SVC</a> (and many other models, like LogisticRegression etc.) there is an attribute <code>classes_</code> thus on trained model <code>clf</code> you can do</p>
<pre><code>print clf.classes_
</code></pre>
| 1 | 2016-07-31T22:14:52Z | [
"python",
"machine-learning",
"scikit-learn",
"classification",
"svm"
] |
pandas.DataFrame.replace, and for the first column | 38,644,794 | <p>How to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow"><code>pandas.DataFrame.replace</code></a> to replace all occurrences of a string in a pandas dataframe? </p>
<p>This seems to be rather dumb question, but please <a href="https://gist.github.com/suntong/40a4efeafcaed5616d288dd05aff9d0d" rel="nofollow">take a look here</a>, </p>
<p>I.e., of the following three <code>DataFrame.replace</code>, only one is working, the other two are not. How can I fix them? </p>
<p>Also, how to limit the replacement within the first column? (I tried to use <code>, axis=1</code>, but was told that it is now deprecated). Thx. </p>
<pre><code>df = pd.DataFrame({'A': np.random.choice(['foo', 'bar'], 100),
'B': np.random.choice(['one', 'two', 'three'], 100),
'C': np.random.choice(['I1', 'I2', 'I3', 'I4'], 100),
'D': np.random.randint(-10,11,100),
'E': np.random.randn(100)})
p = pd.pivot_table(df, index=['A','B'], columns='C', values='D')
df.replace("fo","fu", regex=True,inplace=True)
p.replace("fo","fu", regex=True,inplace=True)
p.index = p.index.to_series().str.join('-')
r=p.copy()
r.replace("fo","fu", regex=True,inplace=True)
</code></pre>
| 1 | 2016-07-28T19:06:05Z | 38,645,236 | <p>The issue with your other two uses of replace is that replace only works on the values in the columns, not in the index, and in <code>p</code> and <code>r</code> 'fo' is only found in the index! So, one approach that would work is:</p>
<pre><code>p.index = p.index.to_series().replace('fo','fu', regex=True)
r.index = r.index.to_series().replace('fo','fu', regex=True)
</code></pre>
<p>So, you see...</p>
<pre><code>>>> r.index = r.index.to_series().replace('fo','fu', regex=True)
>>> r
C I1 I2 I3 I4
bar-one 6.666667 1.875000 -3.750000 6.000000
bar-three -1.750000 -4.000000 -2.200000 -1.000000
bar-two 4.000000 -2.666667 -4.333333 4.000000
fuo-one 5.400000 -0.400000 4.000000 0.000000
fuo-three 0.166667 -4.500000 -5.000000 -5.571429
fuo-two 2.100000 -4.000000 4.500000 -1.857143
</code></pre>
<p>In general, if you only want to replace on a single column, you need to use something like:</p>
<pre><code>df.loc[:,'B'].replace('th','TH', regex=True, inplace=True)
</code></pre>
<p>So...</p>
<pre><code>>>> df.loc[:,'B'].replace('th','TH', regex=True, inplace=True)
>>> df.head(10)
A B C D E
0 bar two I3 -7 1.212562
1 bar two I3 -9 1.480350
2 bar one I2 1 -1.555152
3 fuo two I2 -9 -1.754800
4 bar one I3 -4 1.455140
5 fuo THree I4 -9 0.131374
6 fuo one I4 -7 1.876776
7 fuo one I1 3 0.170372
8 bar two I1 3 -0.647829
9 bar one I3 -1 0.796723
</code></pre>
<p>Note, the latter construct let's you control the rows as well!</p>
<pre><code>>>> df.loc[0:5,'B'].replace('on','ON', regex=True, inplace=True)
>>> df.head(10)
A B C D E
0 bar two I3 -7 1.212562
1 bar two I3 -9 1.480350
2 bar ONe I2 1 -1.555152
3 fuo two I2 -9 -1.754800
4 bar ONe I3 -4 1.455140
5 fuo THree I4 -9 0.131374
6 fuo one I4 -7 1.876776
7 fuo one I1 3 0.170372
8 bar two I1 3 -0.647829
9 bar one I3 -1 0.796723
</code></pre>
| 0 | 2016-07-28T19:32:23Z | [
"python",
"python-3.x",
"replace",
"dataframe"
] |
pandas.DataFrame.replace, and for the first column | 38,644,794 | <p>How to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow"><code>pandas.DataFrame.replace</code></a> to replace all occurrences of a string in a pandas dataframe? </p>
<p>This seems to be rather dumb question, but please <a href="https://gist.github.com/suntong/40a4efeafcaed5616d288dd05aff9d0d" rel="nofollow">take a look here</a>, </p>
<p>I.e., of the following three <code>DataFrame.replace</code>, only one is working, the other two are not. How can I fix them? </p>
<p>Also, how to limit the replacement within the first column? (I tried to use <code>, axis=1</code>, but was told that it is now deprecated). Thx. </p>
<pre><code>df = pd.DataFrame({'A': np.random.choice(['foo', 'bar'], 100),
'B': np.random.choice(['one', 'two', 'three'], 100),
'C': np.random.choice(['I1', 'I2', 'I3', 'I4'], 100),
'D': np.random.randint(-10,11,100),
'E': np.random.randn(100)})
p = pd.pivot_table(df, index=['A','B'], columns='C', values='D')
df.replace("fo","fu", regex=True,inplace=True)
p.replace("fo","fu", regex=True,inplace=True)
p.index = p.index.to_series().str.join('-')
r=p.copy()
r.replace("fo","fu", regex=True,inplace=True)
</code></pre>
| 1 | 2016-07-28T19:06:05Z | 38,645,342 | <h3>Making replacements in just one column</h3>
<p>Get that column as a Series and use the <code>Series.replace</code> method:</p>
<pre><code>df['A'] = df['A'].replace("fo", "fu", regex=True)
</code></pre>
<p>or</p>
<pre><code>df['A'].replace("fo", "fu", regex=True, inplace=True)
</code></pre>
<h3>Making replacements in an index</h3>
<p>Both <code>Series.replace</code> and <code>DataFrame.replace</code> act only on the data itself, not on the index. You need to reset the index to a column, make the replacement and set the index back again.</p>
<pre><code>r = p.copy()
r.index = r.index.to_series().str.join('-')
r.index.name = 'A-B'
r = (r.reset_index() # reset the index
.replace('fo', 'fu', regex=True) # make the replacement
.set_index('A-B')) # set the index again
</code></pre>
<p>You can combine this with the above — so the replacement <em>only</em> happens in the index.</p>
<pre><code>r = p.copy()
r.index = r.index.to_series().str.join('-')
r.index.name = 'A-B'
r = r.reset_index()
r['A-B'] = r['A-B'].replace('fo', 'fu', regex=True)
r = r.set_index('A-B')
</code></pre>
<p>Even shorter is @juanpa.arrivillaga's code:</p>
<pre><code>r.index = r.index.to_series().replace('fo','fu', regex=True)
</code></pre>
<h3>Making replacements in a <code>MultiIndex</code></h3>
<p>Again, you need to reset whichever level of the <code>MultiIndex</code> you want, make the replacement and put it back in the index.</p>
<pre><code>p = (p.reset_index(level='A') # reset only level 'A'
.replace('fo', 'fu', regex=True) # make the replacement
.set_index('A', append=True) # send 'A' to the index again
.swaplevel()) # if you want 'A' before 'B'
</code></pre>
<p>Or, to make the replacement <em>only</em> in that level of the index:</p>
<pre><code>p = p.reset_index(level='A')
p['A'] = p['A'].replace('fo', 'fu', regex=True)
p = p.set_index('A', append=True).swaplevel()
</code></pre>
<p>But notice that </p>
<pre><code>p.index = p.index.to_series().replace('fo', 'fu', regex=True)
</code></pre>
<p>does not quite give what you want here — it replace the <code>MultiIndex</code> with a plain index.</p>
| 1 | 2016-07-28T19:39:32Z | [
"python",
"python-3.x",
"replace",
"dataframe"
] |
PyCurl Timing Out With Working Network Connection on Windows 10 | 38,644,845 | <p>When using GrabLib, which uses PyCurl/LibCurl to make requests, I keep getting a timeout error when sending a request. When using the requests module, however, the requests.get method connects to the third-party website with no issues. </p>
<p>Here is my sample code which uses requests and pycurl:</p>
<pre><code>import pycurl
import requests
r = requests.get('http://www.google.com')
print r
c = pycurl.Curl()
c.setopt(pycurl.TIMEOUT_MS, 3000)
c.setopt(pycurl.URL, 'http://www.google.com/')
c.perform()
</code></pre>
<p>Here is the output of the code:</p>
<pre><code><Response [200]>
Traceback (most recent call last):
File "C:/Users/redacted/test2.py", line 10, in <module>
print c.perform()
pycurl.error: (28, 'Resolving timed out after 3000 milliseconds')
</code></pre>
<p>Can someone let me know why this may be happening? I'm at a dead end here.</p>
| 0 | 2016-07-28T19:09:12Z | 38,708,479 | <p>After translating some Russian Google Group comments I've determined that downgrading from "PycURL / 7.43. 0 libcurl / 7.47. 0 OpenSSL / 1.0 2e zlib / 1.2 8 c -.. Ares / 1.10 0 libssh2 / 1.6 0" to "PycURL/7.19.5.3 libcurl/7.45.0 WinSSL zlib/1.2.8" (printed with "print (pycurl.version)") fixes whatever issue I was having. I'm not sure of the intricacies of PyCurl, and what's changed between these versions, so I could not tell you why this was happening. I just know that this resolved the issue for now. </p>
<p>If it helps at all, this started happening when I switched from my work network to my home network; but I also had a VPN that runs on startup (P.I.A.). This leads me to believe that some kind of Windows 10 network setting was the cause of this issue. At first when the problem was occurring, I could reset my machine and it would fix the issue, only for it to happen a bit later (I assume when my VPN connected?). After I went back from my home network to my work network I uninstalled P.I.A., for unrelated reasons, and PyCurl stopped working completely. Again, downgrading fixed the issue for now, for whatever reason. </p>
<p>If anyone can provide more insight into why this might have happened only with PyCurl, then it would be much obliged.</p>
<p>Links for reference:</p>
<p><a href="https://groups.google.com/forum/#!topic/python-grab/PwoplNwa1TI" rel="nofollow">https://groups.google.com/forum/#!topic/python-grab/PwoplNwa1TI</a></p>
<p><a href="https://bintray.com/pycurl/pycurl/pycurl/view#files" rel="nofollow">https://bintray.com/pycurl/pycurl/pycurl/view#files</a> (pycurl-7.19.5.1.win32-py2.7.msi) (I uninstalled pycurl and installed this version on Windows 10 x64)</p>
| 0 | 2016-08-01T21:33:18Z | [
"python",
"networking",
"windows-10",
"libcurl",
"pycurl"
] |
How can i tell if this repository is using python3 or 2? | 38,644,870 | <p><a href="https://github.com/tekno45/dndchar" rel="nofollow">https://github.com/tekno45/dndchar</a></p>
<p>I forked this repository and i'd like to work on it at work. Is there a way to tell if it's written in 3 or 2?</p>
| -2 | 2016-07-28T19:10:29Z | 38,644,887 | <p>It's written in Python 2.</p>
<p>The biggest hint is that <code>print</code> is a statement there, not a function.</p>
| 4 | 2016-07-28T19:11:38Z | [
"python",
"python-2.7",
"python-3.x",
"github"
] |
Cython - Exposing C++ (vector and non-vector) objects returned by C++ function to Python | 38,644,895 | <p>I am working on a project where I have a large C++ code base available which I want to wrap with cython and made available in python.</p>
<p>In doing so, I am facing a situation where some of my C++ functions returns Vector object or simple object (with multiple attributes). I want to return this object to Python so that its values can be accessed.</p>
<p>For doing that, I am following this post almost exactly: <a href="http://stackoverflow.com/questions/33677231/how-to-expose-a-function-returning-a-c-object-to-python-without-copying-the-ob">How to expose a function returning a C++ object to Python without copying the object?</a></p>
<p>I have a very similar requirement. Please refer to the <strong>move</strong> construct that is implemented/used in the above forum. </p>
<p>Following is my code that I am trying to implement for simple (non-vector case):</p>
<p><strong>test_header.pxd</strong></p>
<pre class="lang-py prettyprint-override"><code>from libcpp.vector cimport vector
from libcpp.string cimport string
from libcpp.map cimport map
cdef extern from "main_class.h":
cdef cppclass main_class:
int ID1
double ID2
cdef extern from "class2.h":
cdef cppclass class2:
class2() except +
class2(const double& T) except +
void Add(const main_class& ev)
const vector[main_class]& GetEvs()
#Process Class
cdef extern from "Pclass.h":
cdef cppclass Pclass:
Pclass(const unsigned& n, const unsigned& dims) except +
unsigned GetDims()
double processNext(const class2& data, const unsigned& num_iter)
cdef extern from "main_algo.h":
#TODO: Check if inheritance works correctly, virtual functions, objects, std::vector
cdef cppclass main_algo:
main_algo(const unsigned& dims) except +
main_class getNext(Pclass& pr, const class2& d)
</code></pre>
<p><strong>test.pyx</strong></p>
<pre class="lang-py prettyprint-override"><code>from header cimport main_class, class2, Pclass, main_algo
from libcpp.vector cimport vector
from libcpp.string cimport string
from libcpp.map cimport map
cdef extern from "<utility>":
vector[class2]&& move(vector[class2]&&)
main_class&& move(main_class&&)
cdef class main_class_2:
cdef main_class* thisptr
cdef main_class_3 evs
def __cinit__(self,main_class_3 evs):
self.evs = evs
self.thisptr = &evs.evs
cdef class main_class_3:
cdef main_class evs
cdef move_from(self, main_class&& move_this):
self.evs = move(move_this)
cdef class implAlgo:
cdef:
main_algo *_thisptr
def __cinit__(implAlgo self):
self._thisptr = NULL
def __init__(implAlgo self, unsigned dims):
self._thisptr = new main_algo(dims)
def __dealloc__(implAlgo self):
if self._thisptr != NULL:
del self._thisptr
cdef int _check_alive(implAlgo self) except -1:
if self._thisptr == NULL:
raise RuntimeError("Wrapped C++ object is deleted")
else:
return 0
cdef getNext(implAlgo self, Pclass& p, const class2& s):
self._check_alive()
cdef main_class evs = self._thisptr.getNext(p, sq)
retval = main_class_3()
retval.move_from(move(evs))
return retval
</code></pre>
<p>Here, the class <code>main_algo</code> implements the method <code>getNext()</code> which returns an object of class <code>main_class</code>.</p>
<p>From test.pyx, I want to return this object to purely python file where its values can be accessed. </p>
<p>When I try to compile the above code, I get multiple instances of
following error at several places wherever I use that method and I get similar error for different tokens like ')' or '*' . Some example errors are: </p>
<pre><code>sources.cpp:5121:70: error: expected primary-expression before â*â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:73: error: expected primary-expression before â)â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:75: error: expected primary-expression before âstructâ
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:133: error: expected primary-expression before â&&â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
</code></pre>
<p>But all these tokens are related to the objects that I create to move the C++ object to python. There is no error with any other declaration. </p>
<p>Can someone please help to show me where am I wrong?</p>
| 0 | 2016-07-28T19:12:02Z | 38,647,224 | <p>If C++ methods return pointers to objects (or if the ownership of the data can be transferred), and the underlying data can be accessed, memory views should be usable.</p>
<p>The following example is for a <code>vector</code> of integers (<code>int</code>). Class <code>test_view</code> has a reference to the object containing the data and to the view object so the lifetime of those two should be the same.</p>
<p><em>test.pxd</em></p>
<pre><code>from libcpp.vector cimport vector
cdef public class test [object cTest, type cTest]:
cdef vector[int] * test
</code></pre>
<p><em>test.pyx</em></p>
<pre><code>from libcpp.vector cimport vector
class test_view:
def __init__(self, test obj):
cdef ssize_t N = obj.test[0].size()
cdef int[::1] v = <int[:N]>obj.test[0].data()
self._obj = obj
self._view = v
def get(self):
return self._view
cdef class test:
def __cinit__(self):
self.test = NULL
def __init__(self, seq):
self.test = new vector[int]()
cdef int i
for i in seq:
self.test[0].push_back(i)
def __dealloc__(self):
print("Dealloc")
if self.test != NULL:
del self.test
# Expose size method of std::vector
def size(self):
return self.test[0].size()
def view(self):
# return an instance of test_view, object should stay alive
# for the duration of test_view instance
return test_view(self)
</code></pre>
<p><em>setup.py</em></p>
<pre><code>from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [ Extension('test', ['test.pyx'], language='c++') ]
setup(name = 'test',
ext_modules = ext_modules,
cmdclass = {'build_ext': build_ext})
</code></pre>
<p><em>run.py</em></p>
<pre><code>import test
a = test.test([1, 2, 3, 4, 5, 6])
v = a.view()
print('Try to cause deallocation')
a = None
print('Print view')
for i in v.get():
print('\t{:-12d}'.format(i))
nv = v._view
v = None
print('Not good, should print junk numbers')
for i in nv:
print('\t{:-12d}'.format(i))
</code></pre>
<p>When <em>run.py</em> is executed,</p>
<pre><code>Try to cause deallocation
Print view
1
2
3
4
5
6
Dealloc
Not good, should print junk numbers
11966656
0
3
4
5
6
</code></pre>
| 0 | 2016-07-28T21:43:11Z | [
"python",
"c++",
"object",
"cython"
] |
Cython - Exposing C++ (vector and non-vector) objects returned by C++ function to Python | 38,644,895 | <p>I am working on a project where I have a large C++ code base available which I want to wrap with cython and made available in python.</p>
<p>In doing so, I am facing a situation where some of my C++ functions returns Vector object or simple object (with multiple attributes). I want to return this object to Python so that its values can be accessed.</p>
<p>For doing that, I am following this post almost exactly: <a href="http://stackoverflow.com/questions/33677231/how-to-expose-a-function-returning-a-c-object-to-python-without-copying-the-ob">How to expose a function returning a C++ object to Python without copying the object?</a></p>
<p>I have a very similar requirement. Please refer to the <strong>move</strong> construct that is implemented/used in the above forum. </p>
<p>Following is my code that I am trying to implement for simple (non-vector case):</p>
<p><strong>test_header.pxd</strong></p>
<pre class="lang-py prettyprint-override"><code>from libcpp.vector cimport vector
from libcpp.string cimport string
from libcpp.map cimport map
cdef extern from "main_class.h":
cdef cppclass main_class:
int ID1
double ID2
cdef extern from "class2.h":
cdef cppclass class2:
class2() except +
class2(const double& T) except +
void Add(const main_class& ev)
const vector[main_class]& GetEvs()
#Process Class
cdef extern from "Pclass.h":
cdef cppclass Pclass:
Pclass(const unsigned& n, const unsigned& dims) except +
unsigned GetDims()
double processNext(const class2& data, const unsigned& num_iter)
cdef extern from "main_algo.h":
#TODO: Check if inheritance works correctly, virtual functions, objects, std::vector
cdef cppclass main_algo:
main_algo(const unsigned& dims) except +
main_class getNext(Pclass& pr, const class2& d)
</code></pre>
<p><strong>test.pyx</strong></p>
<pre class="lang-py prettyprint-override"><code>from header cimport main_class, class2, Pclass, main_algo
from libcpp.vector cimport vector
from libcpp.string cimport string
from libcpp.map cimport map
cdef extern from "<utility>":
vector[class2]&& move(vector[class2]&&)
main_class&& move(main_class&&)
cdef class main_class_2:
cdef main_class* thisptr
cdef main_class_3 evs
def __cinit__(self,main_class_3 evs):
self.evs = evs
self.thisptr = &evs.evs
cdef class main_class_3:
cdef main_class evs
cdef move_from(self, main_class&& move_this):
self.evs = move(move_this)
cdef class implAlgo:
cdef:
main_algo *_thisptr
def __cinit__(implAlgo self):
self._thisptr = NULL
def __init__(implAlgo self, unsigned dims):
self._thisptr = new main_algo(dims)
def __dealloc__(implAlgo self):
if self._thisptr != NULL:
del self._thisptr
cdef int _check_alive(implAlgo self) except -1:
if self._thisptr == NULL:
raise RuntimeError("Wrapped C++ object is deleted")
else:
return 0
cdef getNext(implAlgo self, Pclass& p, const class2& s):
self._check_alive()
cdef main_class evs = self._thisptr.getNext(p, sq)
retval = main_class_3()
retval.move_from(move(evs))
return retval
</code></pre>
<p>Here, the class <code>main_algo</code> implements the method <code>getNext()</code> which returns an object of class <code>main_class</code>.</p>
<p>From test.pyx, I want to return this object to purely python file where its values can be accessed. </p>
<p>When I try to compile the above code, I get multiple instances of
following error at several places wherever I use that method and I get similar error for different tokens like ')' or '*' . Some example errors are: </p>
<pre><code>sources.cpp:5121:70: error: expected primary-expression before â*â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:73: error: expected primary-expression before â)â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:75: error: expected primary-expression before âstructâ
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
^
sources.cpp:5121:133: error: expected primary-expression before â&&â token
__pyx_vtable_11Cython_test_7sources_main_class_3.move_from = (PyObject *(*)(struct __pyx_obj_11Cython_test_7sources_main_class_3 *, main_class &&))__pyx_f_11Cython_test_7sources_8main_class_3_move_from;
</code></pre>
<p>But all these tokens are related to the objects that I create to move the C++ object to python. There is no error with any other declaration. </p>
<p>Can someone please help to show me where am I wrong?</p>
| 0 | 2016-07-28T19:12:02Z | 38,674,182 | <p>A very simple example that duplicates your issue (when compiled incorrectly) and demonstrates how to fix it by changing the compile options.</p>
<pre><code>cdef extern from "<utility>" namespace "std":
int&& move(int&&)
cdef class someclass:
cdef int i
cdef move_from(self, int&& i):
self.i = move(i)
</code></pre>
<p>(Note that I've added <code>namespace "std"</code> when defining <code>move</code>. This is missing in your code, probably because I missed it in the answer you based this code on).</p>
<p>setup.py is as follows</p>
<pre><code>from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [Extension('test_move',
sources=['test_move.pyx'],
extra_compile_args=['-std=c++11'],
language='c++')]
</code></pre>
<p>If I remove "<code>extra_compile_args</code>" then I see the same errors you report (because the compiler is assuming that you are using the old C++ standard which does not support rvalue references). If I compile it as above then it compiles correctly.</p>
<p>This does not mean there aren't other issues in your code. It is much longer than needed to demonstrate the problem and is not complete (it relies on at least 3 C++ header files that you don't provide). Therefore it is impossible to test.</p>
| 0 | 2016-07-30T12:53:14Z | [
"python",
"c++",
"object",
"cython"
] |
Telegram.org API: calling method invokeWithLayer in Python | 38,644,901 | <p>In reference to <a href="http://stackoverflow.com/questions/30661644/how-to-implement-authorization-using-a-telegram-api/34929980#_=_">How to implement authorization using a Telegram API?</a> This is the best telegram documentation I have yet to find! Thanks for that.</p>
<p>I am attempting to create a Python library to make calls to telegram.org. I can authenticate as described in the above link but ran into issues with method responses returning in a format found only in a previous layer. In other words, my client is calling a method from one layer of the api but the server is responding with a data format from an older api layer. I confirmed this by searching an older api layer file for the return id. </p>
<pre><code>{
"id": "571849917",
"params": [
{
"name": "phone_registered",
"type": "Bool"
},
{
"name": "phone_code_hash",
"type": "string"
}
],
"predicate": "auth.sentCode",
"type": "auth.SentCode"
},
</code></pre>
<p>The format my client is expecting is this:</p>
<pre><code>{
"id": "-269659687",
"params": [
{
"name": "phone_registered",
"type": "Bool"
},
{
"name": "phone_code_hash",
"type": "string"
},
{
"name": "send_call_timeout",
"type": "int"
},
{
"name": "is_password",
"type": "Bool"
}
],
"predicate": "auth.sentCode",
"type": "auth.SentCode"
},
</code></pre>
<p>So, I read in the telegram docs I need to call <a href="https://core.telegram.org/api/invoking" rel="nofollow">invokeWithLayer</a> to make sure the client and server have their api layers in synch.</p>
<p>Several questions:</p>
<ol>
<li>Given a telegram api schema file, is there a way to determine from the schema which layer it is? Or do you just "simply have to know"?</li>
<li>When calling invokeWithLayer how do you format the 'query' parameter? Do you have to serialize the query first?</li>
</ol>
<p>Here is my initConnection code where I serialize each method before using it as a parameter. Unfortunately, the response is not favorable. The first response is:</p>
<pre><code>('initConnection - msg: ', {u'messages': [{u'body': {u'first_msg_id': 6312441942040617984L, u'unique_id': 986871592203578887L, u'server_salt': 7658270006181864880L}, u'seqno': 1, u'msg_id': 6312441944354392065L, u'bytes': 28}, {u'body': {u'msg_ids': [6312441942040617984L]}, u'seqno': 2, u'msg_id': 6312441944354450433L, u'bytes': 20}]})
</code></pre>
<p>The second response is:</p>
<pre><code>{u'req_msg_id': 6312441942040617984L, u'result': {u'error_message': 'INPUT_METHOD_INVALID', u'error_code': 400}})
</code></pre>
<p>...and the code:</p>
<pre><code> def initConnection(self, config):
'''Set the API layer and initialize the connection'''
# get the required config data
api_layer = config.getint('App data', 'api_layer')
api_id = config.getint('App data', 'api_id')
version = config.get('App data', 'version')
print
print('----------------------------------------------')
print('initConnection - api_layer: ', api_layer)
print('initConnection - api_id: ', api_id)
print('initConnection - version: ', version)
# serialize a candidate method as a parameter. It doesn't
# matter what it is so we will use something simple like get_future_salts.
simpleQuery=TL.tl_serialize_method('get_future_salts', num=3)
# serialize the initConnection method
initConnectionQuery = TL.api_serialize_method('initConnection', api_id=api_id,
device_model='Unknown UserAgent',
system_version='Unknown Platform',
app_version=version,
lang_code='en-US',
query=simpleQuery)
# perform the initialization
msg = self.method_call('invokeWithLayer', layer=api_layer, query=initConnectionQuery)
print('initConnection - msg: ', msg)
</code></pre>
<p>Thanks!</p>
| 1 | 2016-07-28T19:12:24Z | 38,660,855 | <p>1) I get the latest telegram schema from here:
<a href="https://raw.githubusercontent.com/telegramdesktop/tdesktop/master/Telegram/SourceFiles/mtproto/scheme.tl" rel="nofollow">https://raw.githubusercontent.com/telegramdesktop/tdesktop/master/Telegram/SourceFiles/mtproto/scheme.tl</a></p>
<p>You can build your own TL parser library that works the way you want it too, and easily update it to the latest as the versions change.</p>
<p>2) To send the X query param you simply serialize and append to the end of your invoke with layer query.</p>
<p><strong>Example</strong>: (from my Telegram Elixir Library)</p>
<pre><code> msg = TL.invokewithlayer(layer, TL.initconnection(app_id, device_model, system_version, app_version, lang_code, TL.help_getconfig))
</code></pre>
<p>You can see the definition of relevant Telegram schema:</p>
<pre><code>initConnection#69796de9 {X:Type} api_id:int device_model:string system_version:string app_version:string lang_code:string query:!X = X;
invokeWithLayer#da9b0d0d {X:Type} layer:int query:!X = X;
</code></pre>
| 0 | 2016-07-29T14:04:52Z | [
"python",
"api",
"telegram"
] |
SciPy Parabolic Cylinder D and Complex Arguments | 38,644,932 | <p>I'm trying to use the Parabolic Cylinder D function from SciPy and am having trouble with complex arguments. Sample code to produce the error is:</p>
<pre><code>#!/usr/bin/env python
import numpy
import scipy.special as special
# test real numbers
test = 0.735759
A = test - special.pbdv(1,2)[0]
print A
# test complex numbers.
test = 9.43487e-16+1j*5.1361
A = test - special.pbdv(3,-1j)[0]
print A
</code></pre>
<p>The error I get is:</p>
<pre><code>---> 19 A = test - special.pbdv(3,-1j)[0]
20 print A
21
TypeError: ufunc 'pbdv' not supported for the input types, and the inputs could not be
safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="nofollow">From the documentation</a> it looks like the function is simply not defined to work with complex arguments. Other scipy functions (like the Bessel function jv) explicitly state they accept complex arguments, so I don't think I am wrong in my reading of the error.</p>
<p>My follow up question: Is there an implementation of the Parabolic Cylinder D function in python that accepts complex arguments? I've tried constructing my own from <a href="https://en.wikipedia.org/wiki/Parabolic_cylinder_function" rel="nofollow">Abramowitz and Stegun</a> but I can't seem to get it to agree with Mathematica. Suggestions would be appreciated. My google skills haven't uncovered anything.</p>
<p>Edit:
<a href="http://scicomp.stackexchange.com/questions/24074/simulation-of-parabolic-cylinder-function-ua-x-for-complex-arguments">Question is similar to the question here</a>.</p>
| 1 | 2016-07-28T19:14:28Z | 38,646,012 | <p>I still don't understand why the scipy implementation of the function doesn't accept complex arguments, all the functions under the hood accept complex arguments.</p>
<p>I managed to get the function defined in <a href="https://en.wikipedia.org/wiki/Parabolic_cylinder_function" rel="nofollow">Abramowitz and Stegun</a> working. I am not sure what was wrong with my previous attempts. I am sure there is a better way to write the function, but here is my implementation for using the Parabolic Cylinder D function in python for complex values:</p>
<pre><code>import numpy
import scipy.special as special
PI = numpy.pi
def y1(a,z):
return numpy.exp(-0.25*(z**2.0))*special.hyp1f1(0.5*a+0.25,0.5,0.5*(z**2.0))
def y2(a,z):
return z*numpy.exp(-0.25*(z**2.0))*special.hyp1f1(0.5*a+0.75,1.5,0.5*(z**2.0))
def U(a,z):
zeta = 0.5*a+0.25
return (1/numpy.sqrt(PI))*(1/(2.0**zeta))*(numpy.cos(PI*zeta)*special.gamma(0.5-zeta)*y1(a,z) \
-numpy.sqrt(2)*numpy.sin(PI*zeta)*special.gamma(1-zeta)*y2(a,z))
def ParabolicCylinderD(v,z):
b = -v-0.5
return U(b,z)
</code></pre>
<p>Edit: This doesn't work if the index is negative. Dunno why. I've switched to Warren's suggestion of MPmath above. It is fast enough for my needs.</p>
| 1 | 2016-07-28T20:19:37Z | [
"python",
"numpy",
"math",
"scipy"
] |
How to use AnyObject method in zeep library for Python? | 38,644,935 | <p>I read in the documentation that it is done like this.</p>
<p><a href="http://i.stack.imgur.com/7jxIS.png" rel="nofollow"><img src="http://i.stack.imgur.com/7jxIS.png" alt="enter image description here"></a></p>
<p>I did a simple code to try this out and I have the following. (ucmdb is the client)</p>
<pre><code>intProp_type = ucmdb.get_element('ns17:IntProp')
intProp = xsd.AnyObject(intProp_type, intProp_type(name = "slots", value = 56 ))
</code></pre>
<p>And this error comes out.</p>
<pre><code>Traceback (most recent call last):
File "C:\Python\lib\site-packages\zeep-0.12.0-py3.5.egg\zeep\xsd\schema.py", line 71, in get_element
return schema._elements[qname]
KeyError: <lxml.etree.QName object at 0x000001C314122F08>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "ejemplo.py", line 35, in <module>
main()
File "ejemplo.py", line 18, in main
processCIs()
File "ejemplo.py", line 31, in processCIs
intProp_type = ucmdb.get_element('ns17:IntProp')
File "C:\Python\lib\site-packages\zeep-0.12.0-py3.5.egg\zeep\client.py", line 119, in get_element
return self.wsdl.types.get_element(name)
File "C:\Python\lib\site-packages\zeep-0.12.0-py3.5.egg\zeep\xsd\schema.py", line 81, in get_element
qname.localname, qname.namespace, known_elements or ' - '))
KeyError: "No element 'IntProp' in namespace http://schemas.hp.com/ucmdb/ui/1/types. Available elements are: - "
</code></pre>
| 0 | 2016-07-28T19:14:35Z | 38,660,236 | <p>You might want to try it with latest version. Otherwise submit an issue in github.com/mvantellingen/python-zeep</p>
| 0 | 2016-07-29T13:33:00Z | [
"python",
"xsd",
"python-3.5",
"webservice-client"
] |
Tornado multiple IOLoop in multithreads | 38,644,963 | <p>I am trying to run multiple IOLoop in multiple threads and I am wondering how the IOLoop actually works. </p>
<pre><code>class WebThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self, name='WebThread')
def run(self):
curdir = os.path.dirname(os.path.realpath(__file__))
application = Application() #Very simple tornado.web.Application
http_server_api = tornado.httpserver.HTTPServer(application)
http_server_api.listen(8888)
logging.info('Starting application')
#tornado.ioloop.IOLoop.instance() is singleton, not for thread, right?
ioloop = tornado.ioloop.IOLoop()
ioloop.make_current()
ioloop.start()
</code></pre>
<p>According to the doc, I can not use IOLoop.instance() since it's a singleton and I am working in a thread. So I created my own IOLoop. But this piece of code listens on the port 8888 but can not render any web page. I am wondering is there anything missed, or do I need to tie the http_server to the IOLoop in some way?</p>
<p>Also, I find that removing the last 3 lines and substitute with <code>tornado.ioloop.IOLoop.instance().start</code> works perfectly for single thread. But what's the difference between the singleton and self created IOLoop? </p>
<p>I am new to Tornado and any answers are welcomed.</p>
| 0 | 2016-07-28T19:16:13Z | 38,648,368 | <blockquote>
<p>In general you should use IOLoop.current as the default when constructing an asynchronous object, and use IOLoop.instance when you mean to communicate to the main thread from a different one.</p>
</blockquote>
<p><a href="http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.current" rel="nofollow"><code>IOLoop.current</code></a> without params returns <strong>already created ioloop of thread</strong> or it calls <code>IOLoop.instance()</code>. And HTTPServer (actually in TCPServer) use IOLoop.current to interact with ioloop, so the only thing you should change is to create ioloop before HTTPServer e.g.</p>
<pre><code>class WebThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self, name='WebThread')
def run(self):
curdir = os.path.dirname(os.path.realpath(__file__))
ioloop = tornado.ioloop.IOLoop()
application = Application() #Very simple tornado.web.Application
http_server_api = tornado.httpserver.HTTPServer(application)
http_server_api.listen(8888)
logging.info('Starting application')
ioloop.start()
</code></pre>
<p>Also I've removed <code>IOLoop.make_current</code>, since it's redundant - <code>IOLoop()</code> sets self as a current.</p>
<hr>
<p>Code above will work, but only with one thread, because reuse_port is not enabled by default. You will end up with:</p>
<pre><code>OSError: [Errno 98] Address already in use
</code></pre>
<p>You can enable this with </p>
<pre><code> http_server_api.bind(port=8888, reuse_port=True)
http_server_api.start()
</code></pre>
<p>instead of <code>http_server_api.listen(8888)</code></p>
| 1 | 2016-07-28T23:36:54Z | [
"python",
"multithreading",
"tornado"
] |
What is the equivalent of Serial.available() in pyserial? | 38,645,060 | <p>When I am trying to read multiple lines of serial data on an Arduino, I use the following idiom:</p>
<pre><code>String message = "";
while (Serial.available()){
message = message + serial.read()
}
</code></pre>
<p>In Arduino C, <code>Serial.available()</code> returns the number of bytes available to be read from the serial buffer (See <a href="https://www.arduino.cc/en/Serial/Available" rel="nofollow">Docs</a>). <em>What is the equivalent of <code>Serial.available()</code> in python?</em></p>
<p>For example, if I need to read multiple lines of serial data I would expect to ues the following code:</p>
<pre><code>import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=0.050)
...
while ser.available():
print ser.readline()
</code></pre>
| 0 | 2016-07-28T19:22:19Z | 38,646,191 | <p>The property <a href="https://pythonhosted.org/pyserial/pyserial_api.html#serial.Serial.in_waiting" rel="nofollow"><code>Serial.in_waiting</code></a> returns the "the number of bytes in the receive buffer". </p>
<p>This seems to be the equivalent of <a href="https://www.arduino.cc/en/Serial/Available" rel="nofollow"><code>Serial.available()</code></a>'s description: "the number of bytes ... that's already arrived and stored in the serial receive buffer."</p>
<p>For versions prior to pyserial 3.0, use <code>.inWaiting()</code>.</p>
<p>Try:</p>
<pre><code>import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=0.050)
...
while ser.in_waiting: # Or: while ser.inWaiting():
print ser.readline()
</code></pre>
| 1 | 2016-07-28T20:31:59Z | [
"python",
"serial-port"
] |
What is the equivalent of Serial.available() in pyserial? | 38,645,060 | <p>When I am trying to read multiple lines of serial data on an Arduino, I use the following idiom:</p>
<pre><code>String message = "";
while (Serial.available()){
message = message + serial.read()
}
</code></pre>
<p>In Arduino C, <code>Serial.available()</code> returns the number of bytes available to be read from the serial buffer (See <a href="https://www.arduino.cc/en/Serial/Available" rel="nofollow">Docs</a>). <em>What is the equivalent of <code>Serial.available()</code> in python?</em></p>
<p>For example, if I need to read multiple lines of serial data I would expect to ues the following code:</p>
<pre><code>import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=0.050)
...
while ser.available():
print ser.readline()
</code></pre>
| 0 | 2016-07-28T19:22:19Z | 38,651,454 | <p>I have written my code as below. Hope you can use it modify your code</p>
<pre><code>import serial
import csv
import os
import time
import sys
import string
from threading import Timer
def main():
pass
if __name__ == '__main__':
main()
COUNT=0
f=open("test.csv","w+");
result = csv.writer(f,delimiter=',')
result_statement=("Dir","ACTUATOR_ON_OFF","MODE","DATE","TIME"," TRACKER DESIRED ANGLE"," TRACKER ACTUAL ANGLE")
result.writerow(result_statement)
f.close()
while COUNT<=100:
#while():
time.sleep(60)
ser=serial.Serial()
ser.port=12
ser.baudrate=9600
ser.open()
str=ser.read(150)
# print "string are:\n",str
print type(str)
val=str.split(":")
# print "value is:\n",val
lines=str.split("\r\n")
# print "line statement are :\n",lines
COUNT=COUNT+1
print COUNT
f=open("test.csv","a+");
result = csv.writer(f,delimiter=',')
wst=[]
for line in lines[:-1]:
parts=line.split(":")
for p in parts[1:]:
wst.append(p)
#result = csv.writer(f,delimiter=',')
#wst.append(parts[1:])
print "wst:\n",wst
result.writerow(wst)
f.close()
f.close()
ser.close()
</code></pre>
| -1 | 2016-07-29T05:56:01Z | [
"python",
"serial-port"
] |
Assigning headers in text files and building arrays | 38,645,078 | <p>I am writing a program that reads a text file and parses the information inside it. An example of the text file is as follows:</p>
<pre><code>->DQB1*02:02:01:01
GAACTTTGCTCTTTTCACCAAAACTTAAGGCTCCTCAGGGTGTGTCTAAGACAACAGCAGTAAAAATGTCTATGACAGCAATTTTCTCTCCCCTGAAATATGATCCCCACTTAATTTGCCCTATTGAAAGAATCCCAAGTATAAGAACAACTGGTTTTTAATCAATATTACAAAGATGTTTACTGTTGAATCGCATTTTTCTTTGGCTTCTTAAAATCCCTTAGGCATTCAATCTTCAGCTCTTCCATAAT
->OMIXON_CONSENSUS_M-86-11-9517_DQB1*02:02:01
GTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTCGTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACTGTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGTAAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGGGGT
->GENDX_CONSENSUS_M-86-11-9517_DQB1*02:02:01:01
TGCCAGGTACATCAGATCCATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTCGTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACTGTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGTAAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGA
</code></pre>
<p>I am trying to assign all the lines starting with <code>></code> as a header so I can create a header array and the remaining text as a sequence array so afterwards I can align the sequences and parse. I am having trouble with assigning the headers. So far in my code I have as follows:</p>
<pre><code>def readfile():
with open ("testAllele1.txt", "r") as myfile:
y = myfile.read()
with open(y) as z:
for line in z: # build array
counter=1
if line.startswith(">"): #header array
header(counter)=line
counter=counter+1
else:
sequence(counter)=line #sequence array
</code></pre>
<p>Please help! (Also I am like a beginner to intermediate programmer so nothing too difficult please)</p>
| 0 | 2016-07-28T19:23:23Z | 38,645,929 | <p>Your example is broken.
All lines start with <code>-></code></p>
<p>So I assume that you want to split them at the first space character.</p>
<pre><code>#! /usr/bin/env python
# read the file and split into lines
y = open("testAllele1.txt", "r").read()
z = y.splitlines()
# initialize
header = []
sequence = []
#loop over all lines
for line in z:
if line.startswith("->"):
h, s = line.split()
h = h[2:] # cut away the leading "->"
header.append(h)
sequence.append(s)
print header
print sequence
</code></pre>
| 1 | 2016-07-28T20:14:18Z | [
"python",
"arrays",
"python-2.7",
"header"
] |
How to delete a line break character in list item python | 38,645,099 | <p>I have a small question regarding python lists.
See this code:</p>
<pre><code>passwords = []
fob = open("/path/to/file", "r")
for add_items in fob :
passwords.append(add_items)
print passwords
fob.close()
</code></pre>
<p>The code result:</p>
<pre><code>['Monster1\n', 'Monster2\n', 'Monster3']
</code></pre>
<p>How to delete <code>\n</code> in each <code>add_items</code> ?</p>
| 2 | 2016-07-28T19:24:17Z | 38,645,115 | <p>Use <code>rstrip('\n')</code> or just <code>rstrip()</code>:</p>
<pre><code>passwords.append(add_items.rstrip('\n'))
</code></pre>
| 3 | 2016-07-28T19:25:17Z | [
"python",
"list"
] |
How to delete a line break character in list item python | 38,645,099 | <p>I have a small question regarding python lists.
See this code:</p>
<pre><code>passwords = []
fob = open("/path/to/file", "r")
for add_items in fob :
passwords.append(add_items)
print passwords
fob.close()
</code></pre>
<p>The code result:</p>
<pre><code>['Monster1\n', 'Monster2\n', 'Monster3']
</code></pre>
<p>How to delete <code>\n</code> in each <code>add_items</code> ?</p>
| 2 | 2016-07-28T19:24:17Z | 38,645,154 | <p>Above answer would work. So would this, without directly stripping each line or using list comprehension:</p>
<pre><code>with open("C:\Users\Bucky\Desktop\pass.txt", "r") as fob:
passwords = fob.read().splitlines()
</code></pre>
| 0 | 2016-07-28T19:27:26Z | [
"python",
"list"
] |
How to delete a line break character in list item python | 38,645,099 | <p>I have a small question regarding python lists.
See this code:</p>
<pre><code>passwords = []
fob = open("/path/to/file", "r")
for add_items in fob :
passwords.append(add_items)
print passwords
fob.close()
</code></pre>
<p>The code result:</p>
<pre><code>['Monster1\n', 'Monster2\n', 'Monster3']
</code></pre>
<p>How to delete <code>\n</code> in each <code>add_items</code> ?</p>
| 2 | 2016-07-28T19:24:17Z | 38,645,160 | <p>You could clean it up a bit better and remove all surrounding white space by using <code>strip</code>. Also, you could simplify things by using a comprehension and a context manager: </p>
<pre><code>with open("C:\Users\Bucky\Desktop\pass.txt") as f:
passwords = [data.strip() for data in f]
</code></pre>
<p>In this answer, a context manager (the <code>with</code> statement) can be learned about <a href="https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow">here</a>. That section should explain more details about the file objects and how to use a context manager.</p>
| 2 | 2016-07-28T19:27:39Z | [
"python",
"list"
] |
How to delete a line break character in list item python | 38,645,099 | <p>I have a small question regarding python lists.
See this code:</p>
<pre><code>passwords = []
fob = open("/path/to/file", "r")
for add_items in fob :
passwords.append(add_items)
print passwords
fob.close()
</code></pre>
<p>The code result:</p>
<pre><code>['Monster1\n', 'Monster2\n', 'Monster3']
</code></pre>
<p>How to delete <code>\n</code> in each <code>add_items</code> ?</p>
| 2 | 2016-07-28T19:24:17Z | 38,645,211 | <p>You can use a list comprehension towether with <code>replace</code> to clean all instances of <code>\n</code> within the string (assuming the password should not contain a carriage return):</p>
<pre><code>>>> [pw.replace('\n', '') for pw in passwords]
['Monster1', 'Monster2', 'Monster3']
</code></pre>
| 1 | 2016-07-28T19:30:57Z | [
"python",
"list"
] |
How can I do multidimensional matrix multiplication in numpy, scipy | 38,645,111 | <p>I have a scipy sparse matrix with shape (8,9) and another array with shape (9, 12 , 17). I want to multiply these such that I get a matrix/array of size (8,12,17) where the (8,9) matrix has effectively multiplied the first dimension only. Do I have to use Kronecker products to do this or is there a simple way in numpy?</p>
| 0 | 2016-07-28T19:25:03Z | 38,646,244 | <p>Here are a couple of ways I can get it to work. The second seems to be better and is about 12 times faster when I tested. </p>
<pre><code>def multiply_3D_dim_zero_slow(matrix, array):
shape = array.shape
final_shape = (matrix.shape[0], array.shape[1], array.shape[2])
result = np.zeros(final_shape)
for i in xrange(shape[1]):
for j in xrange(shape[2]):
result[:, i, j] = matrix * array[:, i, j]
return result.reshape(final_shape)
</code></pre>
<p>And here is a faster version that uses reshape to make the multidimensional array into a 2D array. </p>
<pre><code>def multiply_3D_dim_zero(matrix, array):
shape = array.shape
final_shape = (matrix.shape[0], array.shape[1], array.shape[2])
array_reshaped = array.reshape(shape[0], shape[1] * shape[2])
return (matrix * array_reshaped).reshape(final_shape)ode here
</code></pre>
<p>This just works on the first dimension which is what I need but one could generalize. </p>
| 0 | 2016-07-28T20:34:36Z | [
"python",
"numpy",
"matrix",
"scipy"
] |
How can I do multidimensional matrix multiplication in numpy, scipy | 38,645,111 | <p>I have a scipy sparse matrix with shape (8,9) and another array with shape (9, 12 , 17). I want to multiply these such that I get a matrix/array of size (8,12,17) where the (8,9) matrix has effectively multiplied the first dimension only. Do I have to use Kronecker products to do this or is there a simple way in numpy?</p>
| 0 | 2016-07-28T19:25:03Z | 38,646,481 | <p>As hpaulj suggests in the comments, the easiest way to do this is <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> with a dense matrix:</p>
<pre><code>>>> a = np.random.randn(8, 9)
>>> b = np.random.randn(9, 12, 17)
>>> c = np.einsum('ij,jkl->ikl', a, b)
>>> c.shape
(8, 12, 17)
</code></pre>
| 1 | 2016-07-28T20:50:01Z | [
"python",
"numpy",
"matrix",
"scipy"
] |
How can I do multidimensional matrix multiplication in numpy, scipy | 38,645,111 | <p>I have a scipy sparse matrix with shape (8,9) and another array with shape (9, 12 , 17). I want to multiply these such that I get a matrix/array of size (8,12,17) where the (8,9) matrix has effectively multiplied the first dimension only. Do I have to use Kronecker products to do this or is there a simple way in numpy?</p>
| 0 | 2016-07-28T19:25:03Z | 38,646,519 | <p>If <code>m1</code> is the 2d sparse matrix, <code>m1.A</code> is its dense array form. The dinmensions practically write the <code>einsum</code> expression.</p>
<pre><code>np.einsum('ij,jkl->ikl', m1.A, m2)
</code></pre>
<p>for example:</p>
<pre><code>In [506]: M = sparse.random(8, 9, 0.1)
In [507]: A = np.ones((9, 12, 17))
In [508]: np.einsum('ij,jkl->ikl', M.A, A).shape
Out[508]: (8, 12, 17)
</code></pre>
| 2 | 2016-07-28T20:52:03Z | [
"python",
"numpy",
"matrix",
"scipy"
] |
How can I do multidimensional matrix multiplication in numpy, scipy | 38,645,111 | <p>I have a scipy sparse matrix with shape (8,9) and another array with shape (9, 12 , 17). I want to multiply these such that I get a matrix/array of size (8,12,17) where the (8,9) matrix has effectively multiplied the first dimension only. Do I have to use Kronecker products to do this or is there a simple way in numpy?</p>
| 0 | 2016-07-28T19:25:03Z | 38,647,553 | <p>@Divakar recommended <code>np.tensordot</code>, and @hpaulj and @Praveen suggested <code>np.einsum</code>. Yet another way is transposing axes:</p>
<pre><code>(a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
</code></pre>
<p>For the small dimensions that you quote, <code>np.einsum</code> and transposition seem to be faster. But once you start scaling up the dimension of the axis along which you are multiplying, <code>np.tensordot</code> beats the other two.</p>
<pre><code>import numpy as np
m, n, k, l = 8, 9, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))
%timeit np.tensordot(a, b, axes=([1], [0]))
# => 10000 loops, best of 3: 22 µs per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 100000 loops, best of 3: 10.1 µs per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 100000 loops, best of 3: 11.1 µs per loop
m, n, k, l = 8, 900, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))
%timeit np.tensordot(a, b, axes=([1], [0]))
# => 1000 loops, best of 3: 198 µs per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 1000 loops, best of 3: 868 µs per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 1000 loops, best of 3: 907 µs per loop
m, n, k, l = 8, 90000, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))
%timeit np.tensordot(a, b, axes=([1], [0]))
# => 10 loops, best of 3: 21.7 ms per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 10 loops, best of 3: 164 ms per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 10 loops, best of 3: 166 ms per loop
</code></pre>
| 1 | 2016-07-28T22:09:29Z | [
"python",
"numpy",
"matrix",
"scipy"
] |
Regex replacing pairs of dollar signs | 38,645,212 | <p>I have a string like: <code>The old man $went$ to the $barn$.</code> How would I convert this to <code>The old man ~!went! to the ~!barn!.</code></p>
<p>If I didn't need to add the <code>~</code> in front of the first occurrence, I could simply do <code>text.replace('$', '!')</code> in Python.</p>
| 1 | 2016-07-28T19:30:58Z | 38,645,273 | <p>Use a capture group so that your replacement string can put the text between the <code>$</code> back in place.</p>
<p>So the regex would be:</p>
<pre><code>\$([^$]*)\$
</code></pre>
<p>And then the replacement string would be:</p>
<pre><code>~!\1!
</code></pre>
<p><a href="https://regex101.com/r/xT8yR1/2" rel="nofollow">Regex101 Demo</a></p>
| 1 | 2016-07-28T19:34:48Z | [
"python",
"regex"
] |
Regex replacing pairs of dollar signs | 38,645,212 | <p>I have a string like: <code>The old man $went$ to the $barn$.</code> How would I convert this to <code>The old man ~!went! to the ~!barn!.</code></p>
<p>If I didn't need to add the <code>~</code> in front of the first occurrence, I could simply do <code>text.replace('$', '!')</code> in Python.</p>
| 1 | 2016-07-28T19:30:58Z | 38,645,277 | <p>Yes, regex this. Capture groups will help.</p>
<pre><code>result = re.sub(r'\$(.*?)\$', r'~!\1!', my_str)
</code></pre>
| 1 | 2016-07-28T19:34:59Z | [
"python",
"regex"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.