title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
pyinstaller with matplotlib backend_wxagg and hashlib md5 | 38,534,398 | <p>A colleague of mine is having an issue building a python app we're working on together. We've been able to isolate the issue and replicate with the following code:</p>
<pre><code>print "before import"
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg
print "after import"
</code></pre>
<p>We're both working on the same shared computer (RHEL 6.6), using Enthought Canopy Python 2.7.6, matplotlib 1.4.2, and pyinstaller 3.2.</p>
<p>Here's where the fun begins:</p>
<ul>
<li>We are both able to run this from source through <code>python test.py</code>, and it behaves exactly as expected.</li>
<li>I'm able to generate an executable using <code>pyinstaller test.py</code>, and aside from a missing .so that it complains about, everything runs fine.</li>
<li>If my colleague attempts <code>pyinstaller test.py</code>, the executable is generated without complaint, but when we try to run it we get the following error message.</li>
</ul>
<p>Error:</p>
<pre><code>[username@machine test]$ ./test
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
File "hashlib.py", line 139, in <module>
File "hashlib.py", line 91, in __get_builtin_constructor
ValueError: unsupported hash type sha512
before import
Traceback (most recent call last):
File "test.py", line 3, in <module>
File "/u/username/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PyInstaller-3.2-py2.7.egg/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "matplotlib/backends/backend_wxagg.py", line 7, in <module>
File "/u/username/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PyInstaller-3.2-py2.7.egg/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "matplotlib/figure.py", line 38, in <module>
File "/u/username/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PyInstaller-3.2-py2.7.egg/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "matplotlib/colorbar.py", line 36, in <module>
File "/u/username/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PyInstaller-3.2-py2.7.egg/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "matplotlib/contour.py", line 27, in <module>
File "/u/username/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PyInstaller-3.2-py2.7.egg/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "matplotlib/texmanager.py", line 49, in <module>
ImportError: cannot import name md5
Failed to execute script test
</code></pre>
<p>Obviously, something is different in how our environments are setup, and something isn't getting pulled along by pyinstaller. I just can't figure out what it is! We've tried adding hashlib and md5 both as hidden-imports, to no avail.</p>
| 0 | 2016-07-22T19:58:12Z | 39,252,650 | <p>I had the same error when trying to build a frozen binary with cx_Freeze. After trying a lot of desperate hacks, like patching <code>hashlib.py</code>, I finally got rid of the error message by excluding <code>hashlib</code> altogether. </p>
<p>In cx_Freeze it looks like:</p>
<pre><code>…
excludes = ["collections.abc", "tcl", "tk", "OpenGL", "scipy", "hashlib"]
build_options = {
    "packages": packages,
    "includes": includes,
    "include_files": include_files,
    "excludes": excludes, }
…
setup(name=…,
      version=…,
      description=…,
      options=dict(build_exe=build_options,
                   install_exe=install_options),
      executables=executables)
</code></pre>
<p>I suppose that <code>pyinstaller</code> also offers options to exclude libraries.</p>
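It does. A sketch of the PyInstaller counterpart to cx_Freeze's <code>excludes</code> (whether excluding <code>hashlib</code> is actually safe for your application is an assumption you should verify):

```shell
# command-line form
pyinstaller --exclude-module hashlib test.py

# or, in the generated .spec file, pass it to Analysis:
#   a = Analysis(['test.py'], excludes=['hashlib'], ...)
```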
<p>I'm now struggling with other errors, related to conflicting numpy versions, so I cannot 100% confirm that the disappearance of the error indicates that the problem is solved.</p>
| 0 | 2016-08-31T14:41:37Z | [
"python",
"matplotlib",
"pyinstaller"
] |
Python Inheritance suitability of using subclass to produce plot after setting plot properties in parent | 38,534,400 | <p>I am trying to write a simple Python matplotlib plotting Class that will set up formatting of the plot as I would like. To do this, I am using a class and a subclass. The parent generates an axis and figure, then sets all axis properties that I would like. Then the <code>child</code> generates the plot itself - scatter plot, line, etc. - and saves the plot to an output file.</p>
<p>Here is what I have come up with for the purposes of this question (the plot itself is based on <a href="http://matplotlib.org/examples/pylab_examples/simple_plot.html" rel="nofollow">this code</a>):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

class myplot():
    def __init__(self, dims, padd):
        self.fig = plt.figure(figsize=(8.5, 11))
        self.ax = self.fig.add_axes(dims)
        self.pad = padd

    def ax_prop(self):
        self.ax.set_axisbelow(True)
        self.ax.tick_params(axis='x', pad=self.pad, which='both')

class plotter(myplot):
    def __init__(self, dims, padd, x, y):
        myplot.__init__(self, dims, padd)
        self.x = x
        self.y = y

    def mk_p(self):
        self.ax.plot(self.x, self.y, linestyle='-')
        plt.savefig('Outfile.png', facecolor=self.fig.get_facecolor(), dpi=300)
        plt.show()

if __name__ == "__main__":
    x = np.arange(0.0, 2.0, 0.01)
    y = np.sin(2*np.pi*x)
    propr = [0.60, 0.2, 0.2, 0.25]; padding = 5
    plt_instance = myplot(propr, padding)
    plt_instance.ax_prop()
    pl_d = plotter(propr, padding, x, y)
    pl_d.mk_p()
</code></pre>
<p>I'm trying to base this on the following:</p>
<pre><code>The child must have all the properties of the parent.
</code></pre>
<p>Therefore, the <code>child</code> class <code>plotter()</code> must inherit the:</p>
<pre><code>ax, and all its customized properties
fig
</code></pre>
<p>from the parent (<code>myplot()</code>) class. The following may change in the <code>Child</code> class:</p>
<ol>
<li>type of plot - scatter, line, etc.</li>
<li>any properties of the type of plot - <code>markersize</code>, <code>linestyle</code>, etc.</li>
</ol>
<p>but the <code>matplotlib</code> figure and axis objects (<code>fig</code>,<code>ax</code>) will always be required before plotting an input set of numbers.</p>
<p><strong>Question:</strong></p>
<p>Is this a <em>correct</em> usage of Python inheritance or is the child class superfluous in this case? If so, was there a place where the reasoning for a subclass was not consistent with object oriented programming (I'm having difficulty convincing myself of this)?</p>
| 0 | 2016-07-22T19:58:28Z | 38,534,702 | <p>I think the main issue is that your subclass violates the <a href="https://en.wikipedia.org/wiki/Liskov_substitution_principle" rel="nofollow">Liskov Substitution Principle</a>. Also see <a href="https://lostechies.com/derickbailey/2009/02/11/solid-development-principles-in-motivational-pictures/" rel="nofollow">here</a> for nice pictures of SOLID principles. You can't substitute an instance of <code>myplot</code> for <code>plotter</code>, since <code>plotter</code>'s method signature is different. You should be able to substitute in a subclass instead of the base class. Because the signature is different, you cannot.</p>
| 1 | 2016-07-22T20:21:20Z | [
"python",
"class",
"matplotlib",
"plot"
] |
Python Inheritance suitability of using subclass to produce plot after setting plot properties in parent | 38,534,400 | <p>I am trying to write a simple Python matplotlib plotting Class that will set up formatting of the plot as I would like. To do this, I am using a class and a subclass. The parent generates an axis and figure, then sets all axis properties that I would like. Then the <code>child</code> generates the plot itself - scatter plot, line, etc. - and saves the plot to an output file.</p>
<p>Here is what I have come up with for the purposes of this question (the plot itself is based on <a href="http://matplotlib.org/examples/pylab_examples/simple_plot.html" rel="nofollow">this code</a>):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

class myplot():
    def __init__(self, dims, padd):
        self.fig = plt.figure(figsize=(8.5, 11))
        self.ax = self.fig.add_axes(dims)
        self.pad = padd

    def ax_prop(self):
        self.ax.set_axisbelow(True)
        self.ax.tick_params(axis='x', pad=self.pad, which='both')

class plotter(myplot):
    def __init__(self, dims, padd, x, y):
        myplot.__init__(self, dims, padd)
        self.x = x
        self.y = y

    def mk_p(self):
        self.ax.plot(self.x, self.y, linestyle='-')
        plt.savefig('Outfile.png', facecolor=self.fig.get_facecolor(), dpi=300)
        plt.show()

if __name__ == "__main__":
    x = np.arange(0.0, 2.0, 0.01)
    y = np.sin(2*np.pi*x)
    propr = [0.60, 0.2, 0.2, 0.25]; padding = 5
    plt_instance = myplot(propr, padding)
    plt_instance.ax_prop()
    pl_d = plotter(propr, padding, x, y)
    pl_d.mk_p()
</code></pre>
<p>I'm trying to base this on the following:</p>
<pre><code>The child must have all the properties of the parent.
</code></pre>
<p>Therefore, the <code>child</code> class <code>plotter()</code> must inherit the:</p>
<pre><code>ax, and all its customized properties
fig
</code></pre>
<p>from the parent (<code>myplot()</code>) class. The following may change in the <code>Child</code> class:</p>
<ol>
<li>type of plot - scatter, line, etc.</li>
<li>any properties of the type of plot - <code>markersize</code>, <code>linestyle</code>, etc.</li>
</ol>
<p>but the <code>matplotlib</code> figure and axis objects (<code>fig</code>,<code>ax</code>) will always be required before plotting an input set of numbers.</p>
<p><strong>Question:</strong></p>
<p>Is this a <em>correct</em> usage of Python inheritance or is the child class superfluous in this case? If so, was there a place where the reasoning for a subclass was not consistent with object oriented programming (I'm having difficulty convincing myself of this)?</p>
| 0 | 2016-07-22T19:58:28Z | 38,573,613 | <p>You should prefer composition to inheritance wherever possible. I would make a function that creates the plot and another that draws on it.</p>
<pre><code>def create_plot(dims, pad):
    fig = plt.figure(figsize=(8.5, 11))
    ax = fig.add_axes(dims)
    ax.set_axisbelow(True)
    ax.tick_params(axis='x', pad=pad, which='both')
    return fig

def draw_plot(fig, x, y, outfile):
    fig.axes[0].plot(x, y, linestyle='-')
    plt.savefig(outfile, facecolor=fig.get_facecolor(), dpi=300)
    plt.show()

if __name__ == '__main__':
    x = np.arange(0.0, 2.0, 0.01)
    y = np.sin(2 * np.pi * x)
    dims = [0.60, 0.2, 0.2, 0.25]
    padding = 5
    outfile = 'Outfile.png'
    fig = create_plot(dims, padding)
    draw_plot(fig, x, y, outfile)
</code></pre>
| 1 | 2016-07-25T17:02:17Z | [
"python",
"class",
"matplotlib",
"plot"
] |
Can't figure out how to multiply aliens in my pygame? | 38,534,458 | <p>So I've just been writing a dodger game for practice using pygame and python. I've read a lot of tutorials and looked at other similar games to help me. Basically, aliens come down from the top of the screen (they're supposed to be random sizes and speeds and their location is supposed to be random as well) and the player uses their mouse to move the spaceship to dodge the falling aliens. </p>
<p>I have an if statement that is supposed to multiply the aliens coming down from the top but when I run the game only one alien falls and when it reaches the end of the screen it just disappears. No other aliens appear. The program is still running so the game hasn't ended. </p>
<p>Here is my full code. </p>
<pre><code>import pygame
import random
import sys
from pygame.locals import *

alienimg = pygame.image.load('C:\\Python27\\alien.png')
playerimg = pygame.image.load('C:\\Python27\\spaceship.png')

def playerCollision(playerRect, aliens): # a function for when the player hits an alien
    for a in aliens:
        if playerRect.colliderect(a['rect']):
            return True
    return False

def screenText(text, font, screen, x, y): #text display function
    textobj = font.render(text, 1, (255, 255, 255))
    textrect = textobj.get_rect()
    textrect.topleft = (x,y)
    screen.blit(textobj, textrect)

def main(): #this is the main function that starts the game
    pygame.init()
    screen = pygame.display.set_mode((500,500))
    clock = pygame.time.Clock()
    pygame.mouse.set_visible(False)
    pygame.display.set_caption('Dodge the Aliens')
    font = pygame.font.SysFont("monospace", 45)
    pressed = pygame.key.get_pressed()
    aliens = []
    score = 0
    alienAdd = 0
    addedaliens = 0
    gameOver = False
    moveLeft = moveRight = moveUp = moveDown = False
    topScore = 0
    while gameOver==False: #while loop that actually runs the game
        score += 1
        playerImage = pygame.image.load('C:\\Python27\\spaceship.png').convert() # the player images
        playerRect = playerImage.get_rect()
        screen.blit(playerImage, [300, 250])
        alienImage = pygame.image.load('C:\\Python27\\alien.png').convert() #alien images
        alienRect = alienImage.get_rect()
        screen.blit(alienImage, [ 50, 50 ])
        pygame.display.flip()
        for event in pygame.event.get(): #key controls
            if event.type == KEYDOWN and event.key == pygame.K_ESCAPE: #escape ends the game
                gameOver = True
            elif event.type == MOUSEMOTION:
                playerRect.move_ip(event.pos[0] - playerRect.centerx, event.pos[1] - playerRect.centery)
        screen.fill((0,0,0))
        pygame.display.flip()
        if not gameOver:
            alienAdd += 1
            if alienAdd == 6: # randomly adding aliens of different sizes and speeds
                aliendAdd = 0
                alienSize = random.randint(8, 25)
                newAlien = {'rect': pygame.Rect(random.randint(0, 500 - alienSize), 0 - alienSize, alienSize, alienSize),
                            'speed': random.randint(1, 8),
                            'surface': pygame.transform.scale(alienImage, (alienSize, alienSize)),
                            }
                aliens.append(newAlien)
        if moveLeft and playerRect.left > 0:
            playerRect.move_ip(-1 * 5, 0)
        if moveRight and playerRect.right < 500:
            playerRect.move_ip(5, 0)
        if moveUp and playerRect.top > 0:
            playerRect.move_ip(0, -1 * 5)
        if moveDown and playerRect.bottom < 500:
            playerRect.move_ip(0, 5)
        for a in aliens:
            if not gameOver:
                a['rect'].move_ip(0, a['speed'])
        for a in aliens[:]:
            if a['rect'].top > 500:
                aliens.remove(a) #removes the aliens when they get to the bottom of the screen
        screenText('Score %s' % (score), font, screen, 10, 0)
        screen.blit(playerImage, playerRect)
        pygame.display.update()
        for a in aliens:
            screen.blit(a['surface'], a['rect'])
        pygame.display.flip()
        if playerCollision(playerRect, aliens):
            if score > topScore:
                topScore = score
            gameOver = True
        clock.tick(30)
    screenText('Game Over!', font, screen, (750 / 6), ( 750 / 6))
    screenText('Press ENTER To Play Again.', font, screen, ( 750 / 6) - 80, (750 / 6) + 50)
    pygame.display.update()

if __name__ == "__main__":
    main()
</code></pre>
<p>Also, sidenote, whenever I put my code in here I go down every line and indent it. Is there an easier way to do that?</p>
| 1 | 2016-07-22T20:03:17Z | 38,534,539 | <p>You have a typo in <code>aliendAdd = 0</code>, it should be <code>alienAdd = 0</code>. This is causing you to keep incrementing <code>alienAdd</code> more and more and not hit your if statement of <code>alienAdd == 6</code> again.</p>
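With the reset spelled correctly, an alien is queued every sixth frame. A minimal, display-free reproduction of just that counter logic (variable names taken from the question, the appended string standing in for the <code>newAlien</code> dict):

```python
alienAdd = 0
aliens = []
for frame in range(18):          # simulate 18 loop iterations
    alienAdd += 1
    if alienAdd == 6:
        alienAdd = 0             # was misspelled 'aliendAdd', so it never reset
        aliens.append('alien')   # stand-in for the newAlien dict

print(len(aliens))  # -> 3 (one alien per 6 frames)
```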
| 3 | 2016-07-22T20:09:46Z | [
"python",
"pygame"
] |
TVML catlog using TVML Templet Server installation | 38,534,486 | <p>To make a TVML App I got TVML Sample Catlog from Apple <a href="https://developer.apple.com/library/tvos/samplecode/TVMLCatalog/Introduction/Intro.html#//apple_ref/doc/uid/TP40016505-Intro-DontLinkElementID_2" rel="nofollow">https://developer.apple.com/library/tvos/samplecode/TVMLCatalog/Introduction/Intro.html#//apple_ref/doc/uid/TP40016505-Intro-DontLinkElementID_2</a>
Integrate this Catgol need Server installation by this commang
(command specify in Readme.txt file)
$ python -m SimpleHTTPServer 9001</p>
<p>This command in not respnding anything.</p>
| 0 | 2016-07-22T20:06:03Z | 38,588,405 | <p>I'm using http-server from npm.
It's really easy to use; you only need to write: <code>http-server -p 9001</code></p>
<p>You can find more here:
<a href="https://www.npmjs.com/package/http-server" rel="nofollow">https://www.npmjs.com/package/http-server</a></p>
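A side note on the original command: if <code>python</code> on that machine is Python 3, <code>SimpleHTTPServer</code> no longer exists there, because the module was renamed in the Python 3 standard library:

```shell
# Python 2 (the command from the sample's Readme)
python -m SimpleHTTPServer 9001

# Python 3 equivalent
python3 -m http.server 9001
```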
| 0 | 2016-07-26T11:16:33Z | [
"python",
"xcode",
"tvos",
"tvml"
] |
How can I debug a connection issue within VOLTTRON? | 38,534,520 | <p>I am connecting to an external VOLTTRON instance. I am not getting a response from the connection. What's the issue?</p>
<p>I am writing a simple python script to connect to an external platform and retrieve the peers. If I get the serverkey, clientkey, and/or publickey incorrect I don't know how to determine which is the culprit, from the client side. I just get a gevent timeout. Is there a way to know?</p>
<pre><code>import os
import gevent
from volttron.platform.vip.agent import Agent
secret = "secret"
public = "public"
serverkey = "server"
tcp_address = "tcp://external:22916"
agent = Agent(address=tcp_address, serverkey=serverkey, secretkey=secret,
              publickey=public)
event = gevent.event.Event()
greenlet = gevent.spawn(agent.core.run, event)
event.wait(timeout=30)
print("My id: {}".format(agent.core.identity))
peers = agent.vip.peerlist().get(timeout=5)
for p in peers:
    print(p)
gevent.sleep(3)
greenlet.kill()
</code></pre>
| 0 | 2016-07-22T20:08:07Z | 38,596,085 | <p>The short answer: no, the client cannot determine why its connection to the server failed. The client will attempt to connect until it times out.</p>
<p>Logs and debug messages on the <em>server side</em> can help troubleshoot a connection problem. There are three distinct messages related to key errors:</p>
<ol>
<li><p><code>CURVE I: cannot open client HELLO -- wrong server key?</code><br>
Either the client omit the server key, the client used the wrong server key, or the server omit the secret key.</p></li>
<li><p><code>CURVE I: cannot open client INITIATE vouch</code><br>
Either the client omit the public or secret key, or its public and secret keys don't correspond to each other.</p></li>
<li><p><code>authentication failure</code><br>
The server key was correct and the secret and public keys are valid, but the server rejected the connection because the client was not authorized to connect (based on the client's public key).</p></li>
</ol>
<p>The first two messages are printed by <a href="https://github.com/zeromq/libzmq/blob/master/src/curve_server.cpp" rel="nofollow">libzmq</a>. To see the third message <code>volttron</code> must be started with increased verboseness (at least <code>-v</code>).</p>
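For example (a sketch; the log path and shell redirection are assumptions about your setup), restart the server-side platform verbosely and search the log for the messages above while the client retries its connection:

```shell
# restart the remote platform with increased verbosity, keeping the log
volttron -v > volttron.log 2>&1 &

# reproduce the client connection attempt, then inspect the log
grep -iE "curve|authentication" volttron.log
```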
| 1 | 2016-07-26T17:12:26Z | [
"python",
"volttron"
] |
csv value modification for certain cells on odd rows on a particular column | 38,534,527 | <p>Hi I'm trying to finish this small piece of code for modifying csv files, I've got this far with some help:</p>
<p>edit... some more info.</p>
<blockquote>
<p>Basically what I'm looking to do is make some small changes to the csv file depending on the project and parent issue in JIRA. Python will then make the changes to the csv file before it is then read into JIRA - that's the second part of the program I've not even really looked at yet.
I'm only looking to change the BOX-123 type cells and leave the blank ones blank.
But the idea of the program is that I can use it to make some small changes to a template which will then automatically create some issues in JIRA.</p>
</blockquote>
<pre><code>import os
import csv
project = 'Dudgeon'
parent = 'BOX-111'
rows = (1,1007)
current = os.getcwd()
filename = 'test.csv'
filepath = os.path.join(os.getcwd(), filename)
#print(current)
#print(filename)
print(filepath)
with open(filepath, 'r') as csvfile:
    readCSV = csv.reader(csvfile)
    next(readCSV, None)
    for row in readCSV:
        print(row[16])
    row_count = sum(1 for row in readCSV)
    print(row_count)

with open(filepath, 'r') as infile, open('out.csv', 'w') as outfile:
    outfile.write(infile.readline()) # write out the 1st line
    for line in infile:
        cols = line.strip().split(',')
        cols[16] = project
        outfile.write(','.join(cols) + '\n')

with open('out.csv', 'r') as infile, open('out1.csv', 'w') as outfile:
    for row in infile:
        if row % 2 != 0:
            cols [15] = parent
            outfile.write()
</code></pre>
<p>Any help really appreciated.</p>
| 0 | 2016-07-22T20:08:48Z | 38,535,353 | <p>You want to use the row's index when comparing to <code>0</code>. Use <a href="https://docs.python.org/3/library/functions.html?highlight=enumerate#enumerate" rel="nofollow"><code>enumerate()</code></a>:</p>
<pre><code>with open('out.csv', 'r') as infile, open('out1.csv', 'w') as outfile:
    for rowidx, row in enumerate(infile):
        cols = row.strip().split(',')
        if rowidx % 2 != 0:
            cols[15] = parent
        outfile.write(','.join(cols) + '\n')
</code></pre>
<p>You really should be using the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow"><code>csv</code></a> module here, though. Untested but should get you started.</p>
<pre><code>with open('out.csv', 'r') as infile, open('out1.csv', 'w') as outfile:
    reader = csv.reader(infile)
    writer = csv.writer(outfile)
    for rowidx, row in enumerate(reader):
        if rowidx % 2 != 0:
            row[15] = parent
        writer.writerow(row)
</code></pre>
| 0 | 2016-07-22T21:15:49Z | [
"python",
"csv"
] |
csv value modification for certain cells on odd rows on a particular column | 38,534,527 | <p>Hi I'm trying to finish this small piece of code for modifying csv files, I've got this far with some help:</p>
<p>edit... some more info.</p>
<blockquote>
<p>Basically what I'm looking to do is make some small changes to the csv file depending on the project and parent issue in JIRA. Python will then make the changes to the csv file before it is then read into JIRA - that's the second part of the program I've not even really looked at yet.
I'm only looking to change the BOX-123 type cells and leave the blank ones blank.
But the idea of the program is that I can use it to make some small changes to a template which will then automatically create some issues in JIRA.</p>
</blockquote>
<pre><code>import os
import csv
project = 'Dudgeon'
parent = 'BOX-111'
rows = (1,1007)
current = os.getcwd()
filename = 'test.csv'
filepath = os.path.join(os.getcwd(), filename)
#print(current)
#print(filename)
print(filepath)
with open(filepath, 'r') as csvfile:
    readCSV = csv.reader(csvfile)
    next(readCSV, None)
    for row in readCSV:
        print(row[16])
    row_count = sum(1 for row in readCSV)
    print(row_count)

with open(filepath, 'r') as infile, open('out.csv', 'w') as outfile:
    outfile.write(infile.readline()) # write out the 1st line
    for line in infile:
        cols = line.strip().split(',')
        cols[16] = project
        outfile.write(','.join(cols) + '\n')

with open('out.csv', 'r') as infile, open('out1.csv', 'w') as outfile:
    for row in infile:
        if row % 2 != 0:
            cols [15] = parent
            outfile.write()
</code></pre>
<p>Any help really appreciated.</p>
| 0 | 2016-07-22T20:08:48Z | 38,540,755 | <p>A friend helped me last night and this is what they came up with:</p>
<pre><code>with open(filepath, 'r') as infile, open('out.csv', 'w') as outfile:
    outfile.write(infile.readline()) # write out the 1st line
    for line in infile:
        cols = line.strip().split(',')
        cols[16] = project
        outfile.write(','.join(cols) + '\n')

with open('out.csv', 'r') as infile, open('out1.csv', 'w') as outfile:
    outfile.write(infile.readline()) # write out the 1st line
    lineCounter = 0
    for line in infile:
        lineCounter += 1
        cols = line.strip().split(',')
        if lineCounter % 2 != 0:
            cols[15] = parent
        outfile.write(','.join(cols) + '\n')
</code></pre>
| 0 | 2016-07-23T10:29:38Z | [
"python",
"csv"
] |
Selenium fails to select element by id | 38,534,611 | <p>Here is the HTML code of the element:</p>
<pre><code><input maxlength="64" name="pskSecret" class="text" id="pskSecret" value="" size="32" type="text">
</code></pre>
<p>And here is my python code, which tries to select it:</p>
<pre><code>self.driver.find_element_by_id("pskSecret").clear()
self.driver.find_element_by_id("pskSecret").send_keys(data) # data is variable
</code></pre>
<p>However, I get an exception stating that Selenium is unable to locate the element.
Any ideas what may be causing the problem?</p>
<p>Edit: Also the element is inside an iframe, however I'm accessing other elements in it which are working correctly.</p>
| 1 | 2016-07-22T20:14:55Z | 38,534,725 | <p>May be when you are going to find element, it could not be load on the DOM due to timing issues. You should try using <code>WebDriverWait</code> to wait until element is visible as below :-</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(self.driver, 10)
input = wait.until(EC.visibility_of_element_located((By.ID, "pskSecret")))
input.clear()
input.send_keys(data)
</code></pre>
<p><strong>Note</strong> :- if this element is inside any frame you need to switch that frame before finding element using <code>self.driver.switch_to_frame("your frame id or name")</code></p>
| 0 | 2016-07-22T20:23:10Z | [
"python",
"selenium"
] |
How to upload to Nexus using Maven in python | 38,534,659 | <p>I want to upload an artifact to Nexus using maven in a python script. I looked it up here: <a href="https://gist.github.com/adamv/705292" rel="nofollow">https://gist.github.com/adamv/705292</a></p>
<p>and am doing the following way:</p>
<pre><code>def local2(command, print_command=False):
    from subprocess import Popen, PIPE
    p = Popen(command, stdout=PIPE, stderr=PIPE)
    if print_command: print " ".join(command)
    output, errput = p.communicate()
    return p.returncode, output, errput

def uploadQAJavaToNexus():
    url = "example"
    groupId = "example"
    artifactId = "example"
    repositoryId = "example"
    # filePath =
    version = "version"
    status, stdout, stderr = local2([
        MAVEN_BINARY,
        "deploy:deploy-file",
        "-Durl=" + url,
        "-DrepositoryId=" + repositoryId,
        "-Dversion=" + version,
        "-Dfile=" + "path"
        "-DartifactId=" + artifactId,
        "-Dpackaging=" + "jar",
        "-DgroupId" + groupId,
    ])
    return status, stdout, stderr
</code></pre>
<p>But I'm getting undefined variable MAVEN_BINARY. What is this?</p>
| 1 | 2016-07-22T20:17:52Z | 38,534,735 | <p>(Linux) Search for the binary file for mvn:</p>
<pre><code>$ which mvn
/c/Maven/apache-maven-3.0.5-bin/apache-maven-3.0.5/bin/mvn
</code></pre>
<p>and set this in your script:</p>
<pre><code>MAVEN_BINARY ='/c/Maven/apache-maven-3.0.5-bin/apache-maven-3.0.5/bin/mvn'
</code></pre>
| 0 | 2016-07-22T20:24:09Z | [
"python",
"maven",
"nexus"
] |
Add legend to networks plot to explain colouring of nodes | 38,534,730 | <p>I have a plot of a networkx graph in which edge-color depends on the weights assigned to the respective edges using the following code (with <code>a_netw</code> the nx.Graph):</p>
<pre><code>a_netw_edges = a_netw.edges()
a_netw_weights = [a_netw[source][dest]['weight'] for source, dest in a_netw_edges]
a_netw_colors = [plt.cm.Blues(weight*15) for weight in a_netw_weights]
nx.draw_networkx(a_netw, edges=a_netw_edges, width=1, edge_color=a_netw_colors)
</code></pre>
<p>To this graph I would like to add a legend that makes the connection between the weights and the colours explicit; like in a heatmap that uses <code>pcolor</code>.</p>
<p>While I have a rough idea of how to start:</p>
<pre><code>fig, axes = plt.subplots(nrows=2)
nx.draw_networkx(a_netw, edges=a_netw_edges, width=1, edge_color=a_netw_colors, ax=axes[0])
axes[0].get_xaxis().set_visible(False)
axes[0].get_yaxis().set_visible(False)
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
axes[1].imshow(gradient, aspect=3, cmap=plt.cm.Blues)
axes[1].get_yaxis().set_visible(False)
plt.tight_layout()
</code></pre>
<p>I have no idea how to do the following steps:</p>
<ol>
<li>Add the correct ticks on the relevant axis to get the connection with the weights.</li>
<li>Draw it vertically instead of horizontally.</li>
</ol>
| 3 | 2016-07-22T20:23:37Z | 38,630,883 | <p>I suggest you use the colorbar() command as shown below. I am providing an example Graph, see if it makes sense?</p>
<p><a href="http://i.stack.imgur.com/dcFI1.png" rel="nofollow"><img src="http://i.stack.imgur.com/dcFI1.png" alt="enter image description here"></a></p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
#generate a graph with weights
a_netw=nx.Graph()
a_netw.add_edge('a','b',weight=6)
a_netw.add_edge('a','c',weight=2)
a_netw.add_edge('c','d',weight=1)
a_netw.add_edge('c','e',weight=7)
a_netw.add_edge('c','f',weight=9)
a_netw.add_edge('a','d',weight=3)
#creating a color list for each edge based on weight
a_netw_edges = a_netw.edges()
a_netw_weights = [a_netw[source][dest]['weight'] for source, dest in a_netw_edges]
#scale weights in range 0-1 before assigning color
maxWeight=float(max(a_netw_weights))
a_netw_colors = [plt.cm.Blues(weight/maxWeight) for weight in a_netw_weights]
#suppress plotting for the following dummy heatmap
plt.ioff()
#multiply all tuples in color list by scale factor
colors_unscaled=[tuple(map(lambda x: maxWeight*x, y)) for y in a_netw_colors]
#generate a 'dummy' heatmap using the edgeColors as substrate for colormap
heatmap = plt.pcolor(colors_unscaled,cmap=plt.cm.Blues)
#re-enable plotting
plt.ion()
fig,axes = plt.subplots()
nx.draw_networkx(a_netw, edges=a_netw_edges, width=10, edge_color=a_netw_colors, ax=axes)
axes.get_xaxis().set_visible(False)
axes.get_yaxis().set_visible(False)
#add colorbar
cbar = plt.colorbar(heatmap)
cbar.ax.set_ylabel('edge weight',labelpad=15,rotation=270)
</code></pre>
| 2 | 2016-07-28T08:24:44Z | [
"python",
"matplotlib",
"networkx"
] |
Why does returning this lambda expression result in a string? | 38,534,799 | <p>Good afternoon,</p>
<p>I recently started learning Python 3.5.1, and am currently experimenting with <code>lambda</code> expressions. I tried setting up the simple method below.</p>
<pre><code>def sum_double(a, b):
return lambda a, b: a+b if a != b else (a+b)*2, a, b
</code></pre>
<p>All it is supposed to do is return the sum of <code>a</code> and <code>b</code>, and twice their sum if <code>a</code> is equal to <code>b</code>, but instead I get an output that looks like this.</p>
<p>Code:</p>
<pre><code>print(sum_double(1, 2))
print(sum_double(2, 3))
print(sum_double(2, 2))
</code></pre>
<p>Output:</p>
<pre><code>(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 1, 2)
(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 2, 3)
(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 2, 2)
</code></pre>
<p>Am I doing this wrong? Why is this happening, and how would I use a lambda expression to achieve my desired functionality if that is even possible?</p>
| 2 | 2016-07-22T20:29:30Z | 38,534,822 | <pre><code>sum_double = lambda a, b: a+b if a != b else (a+b)*2
</code></pre>
<p><code>lambda</code> itself defines a function. A <code>def</code> is not required.</p>
<p>In your case, a tuple consisting of the returned function, <code>a</code> and <code>b</code> is printed.</p>
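For example, with the one-liner above, the calls from the question behave as intended:

```python
sum_double = lambda a, b: a + b if a != b else (a + b) * 2

print(sum_double(1, 2))  # -> 3
print(sum_double(2, 3))  # -> 5
print(sum_double(2, 2))  # -> 8
```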
| 2 | 2016-07-22T20:32:10Z | [
"python",
"python-3.x",
"lambda"
] |
Why does returning this lambda expression result in a string? | 38,534,799 | <p>Good afternoon,</p>
<p>I recently started learning Python 3.5.1, and am currently experimenting with <code>lambda</code> expressions. I tried setting up the simple method below.</p>
<pre><code>def sum_double(a, b):
return lambda a, b: a+b if a != b else (a+b)*2, a, b
</code></pre>
<p>All it is supposed to do is return the sum of <code>a</code> and <code>b</code>, and twice their sum if <code>a</code> is equal to <code>b</code>, but instead I get an output that looks like this.</p>
<p>Code:</p>
<pre><code>print(sum_double(1, 2))
print(sum_double(2, 3))
print(sum_double(2, 2))
</code></pre>
<p>Output:</p>
<pre><code>(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 1, 2)
(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 2, 3)
(<function sum_double.<locals>.<lambda> at 0x000001532DC0FA60>, 2, 2)
</code></pre>
<p>Am I doing this wrong? Why is this happening, and how would I use a lambda expression to achieve my desired functionality if that is even possible?</p>
| 2 | 2016-07-22T20:29:30Z | 38,534,825 | <p>Well, you're not calling the <code>lambda</code> function and as such the return value is a tuple of the defined <code>lambda</code> function and the values of <code>a</code> and <code>b</code>.</p>
<p>Change it to call the lambda before returning while supplying the arguments to it:</p>
<pre><code>return (lambda a, b: a+b if a != b else (a+b)*2)(a, b)
</code></pre>
<p>And it works just fine:</p>
<pre><code>print(sum_double(1, 2))
3
print(sum_double(2, 2))
8
</code></pre>
| 1 | 2016-07-22T20:32:27Z | [
"python",
"python-3.x",
"lambda"
] |
Python random card dealer - face card counter doesn't count correctly | 38,534,815 | <p>I'm practicing writing this program for class. I have to deal cards until four aces are dealt and at the end, I also have to count how many face cards (jack, queen, king cards) were dealt. I didn't make a dictionary for the card names because my teacher specifically told us to use the random integer command. However, everything works except for the face counter (f_counter). It always counts one face card too few for some reason. Does anyone know why? Thanks!</p>
<pre><code>print("You were dealt:\n")
import random
# This is the initial counter for the number of cards dealt.
t_counter = 0
# This is the initial counter for the number of aces dealt.
a_counter = 0
# This is the initial counter for the number of face cards dealt.
f_counter = 0
# This is so both a rank and a suit are dealt.
r = random.randint(1,13)
s = random.randint(1,4)
while a_counter < 4:
# This counts and tells the user of each card dealt that isn't an ace.
r = random.randint(1,13)
s = random.randint(1,4)
t_counter += 1
if r == 11:
rank = "Jack"
elif r == 12:
rank = "Queen"
elif r == 13:
rank = "King"
elif r > 1:
rank = r
if s == 1:
suit = "Spades"
elif s == 2:
suit = "Hearts"
elif s == 3:
suit = "Diamonds"
elif s == 4:
suit = "Clubs"
print("Card",t_counter,': A',rank,"of",suit,)
# This counts the aces.
if r == 1:
a_counter += 1
print("An Ace of",suit,"!")
# This counts the face cards.
if r == 11 or r == 12 or r == 13:
f_counter += 1
# This allows up to four aces and also prints the number of face cards as the last thing.
if a_counter == 4:
print("You got",f_counter,"face cards!")
break
</code></pre>
| 0 | 2016-07-22T20:31:49Z | 38,535,142 | <p>I think I found it. Consider the case where you roll a face card.</p>
<blockquote>
<p>Let r = 11. </p>
<p>rank -> 'Jack'</p>
<p>suit -> anything</p>
<p>print('Jack of whatever')</p>
<p>increment f_counter</p>
<p>// next iteration</p>
<p>Let r = 1</p>
<p>Fall through the if statements that set rank because rank isn't 11,12,13 or > 1, thus rank = 'Jack'</p>
<p>print('jack of some other suit')</p>
<p>Increment a_counter (because rolled r = 1)</p>
<p>Fall through the possibilities to increment f_counter</p>
</blockquote>
<p>Thus, you've printed a face card without incrementing f_counter.</p>
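A minimal fix suggested by this analysis (a sketch with a hypothetical helper name, not the asker's exact code) is to give <code>r == 1</code> its own branch, so <code>rank</code> is assigned on every iteration and a stale value can never leak through:

```python
def rank_name(r):
    """Map a 1-13 roll to its rank, covering every case explicitly."""
    if r == 1:
        return "Ace"      # previously missing: rank kept its old value here
    elif r == 11:
        return "Jack"
    elif r == 12:
        return "Queen"
    elif r == 13:
        return "King"
    else:
        return str(r)

print(rank_name(11))  # Jack
print(rank_name(1))   # Ace -- no longer reuses the previous card's rank
```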
| 0 | 2016-07-22T20:59:37Z | [
"python"
] |
Python random card dealer - face card counter doesn't count correctly | 38,534,815 | <p>I'm practicing writing this program for class. I have to deal cards until four aces are dealt and at the end, I also have to count how many face cards (jack, queen, king cards) were dealt. I didn't make a dictionary for the card names because my teacher specifically told us to use the random integer command. However, everything works except for the face counter (f_counter). It always counts one face card too few for some reason. Does anyone know why? Thanks!</p>
<pre><code>print("You were dealt:\n")
import random
# This is the initial counter for the number of cards dealt.
t_counter = 0
# This is the initial counter for the number of aces dealt.
a_counter = 0
# This is the initial counter for the number of face cards dealt.
f_counter = 0
# This is so both a rank and a suit are dealt.
r = random.randint(1,13)
s = random.randint(1,4)
while a_counter < 4:
# This counts and tells the user of each card dealt that isn't an ace.
r = random.randint(1,13)
s = random.randint(1,4)
t_counter += 1
if r == 11:
rank = "Jack"
elif r == 12:
rank = "Queen"
elif r == 13:
rank = "King"
elif r > 1:
rank = r
if s == 1:
suit = "Spades"
elif s == 2:
suit = "Hearts"
elif s == 3:
suit = "Diamonds"
elif s == 4:
suit = "Clubs"
print("Card",t_counter,': A',rank,"of",suit,)
# This counts the aces.
if r == 1:
a_counter += 1
print("An Ace of",suit,"!")
# This counts the face cards.
if r == 11 or r == 12 or r == 13:
f_counter += 1
# This allows up to four aces and also prints the number of face cards as the last thing.
if a_counter == 4:
print("You got",f_counter,"face cards!")
break
</code></pre>
| 0 | 2016-07-22T20:31:49Z | 38,535,146 | <p>I made a few changes to your program. Let me know if this gives your desired output and I can explain the main changes. </p>
<pre><code>rank = ''
while a_counter < 4:
# This counts and tells the user of each card dealt that isn't an ace.
r = random.randint(1,13)
s = random.randint(1,4)
t_counter += 1
if s == 1:
suit = "Spades"
elif s == 2:
suit = "Hearts"
elif s == 3:
suit = "Diamonds"
elif s == 4:
suit = "Clubs"
if r == 11:
rank = "Jack"
f_counter += 1
elif r == 12:
rank = "Queen"
f_counter += 1
elif r == 13:
rank = "King"
f_counter += 1
elif r > 1 and r < 11:
rank = r
elif r == 1:
rank == "Ace"
a_counter += 1
if r == 1:
print("Card %d: An Ace of %s! **") % (t_counter, suit)
else:
print("Card %d: a %s of %s") % (t_counter, rank, suit)
if a_counter == 4:
print("You got %d face cards!") % (f_counter)
break
</code></pre>
<p>This seems to be working for me, except there's nothing in your program that would prevent the same card turning up twice (or more)... </p>
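If dealing without replacement matters, one common approach (a sketch, not part of the answer above) is to build the full 52-card deck as <code>(rank, suit)</code> pairs, shuffle it once, and <code>pop()</code> cards off it:

```python
import random

# All 52 (rank, suit) combinations; popping removes a card so it can't repeat.
deck = [(r, s) for r in range(1, 14) for s in range(1, 5)]
random.shuffle(deck)

first_card = deck.pop()
print(len(deck))  # 51 cards left after one deal
```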
| 0 | 2016-07-22T20:59:46Z | [
"python"
] |
Diagonal Sprite Movement in Pygame, Python 3 | 38,534,817 | <p>I'm currently making a game using Pygame, Python 3 and one of the bugs I have in it is they way shots move. The game is a 2D top-down shooter, and the code for the player shooting mechanic is below:</p>
<p>(<code>player_rect</code> is the Rect for the player, bullet_speed is a pre-defined <code>int</code>)</p>
<pre><code>if pygame.mouse.get_pressed()[0]:
dx = mouse_pos[0]-player_rect.centerx
dy = mouse_pos[1]-player_rect.centery
x_speed = bullet_speed/(math.sqrt(1+((dy**2)/(dx**2))))
y_speed = bullet_speed/(math.sqrt(1+((dx**2)/(dy**2))))
if dx < 0:
x_speed *= -1
if dy < 0:
y_speed *= -1
#surface, rect, x-speed, y-speed
player_shots.append([player_shot_image, player_shot_image.get_rect(centerx=player_rect.centerx, centery=player_rect.centery), x_speed, y_speed])
</code></pre>
<p>Later in the loop, there is this part of code:</p>
<pre><code>for player_shot_counter in range(len(player_shots)):
player_shots[player_shot_counter][1][0] += player_shots[player_shot_counter][2]
player_shots[player_shot_counter][1][1] += player_shots[player_shot_counter][3]
</code></pre>
<p>This mechanic works mostly fine, with the exception of one major bug: the slower the shot, the less accurate it is as <code>pygame.Rect[0]</code> and <code>pygame.Rect[1]</code> can only be integer values. For example if the <code>player_rect.center</code> is <code>(0, 0)</code>, the position of the mouse is <code>(100, 115)</code> and bullet_speed is <code>10</code>, then <code>x_speed</code> will automatically round to <code>7</code> and <code>y_speed</code> to <code>8</code>, resulting in the bullet eventually passing through the point <code>(98, 112)</code>. However, if bullet_speed is <code>5</code>, then the bullet will pass through the point <code>(99, 132)</code>.</p>
<p>Is there any way to get around this in pygame?</p>
<p>Thanks in advance for any help!</p>
| 0 | 2016-07-22T20:31:55Z | 38,535,036 | <p>I don't know a whole lot about Pygame, but have you thought of storing your positions as non-integer values, only converting to integers on display? The internal representation can be more precise than what is displayed to the user.</p>
| 1 | 2016-07-22T20:51:16Z | [
"python",
"python-3.x",
"pygame",
"sprite",
"pygame-surface"
] |
Sklearn Spectral Clustering error for inf or NaNs in matrix | 38,535,008 | <p>I'm using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html" rel="nofollow">Spectral Clustering Library</a> and similarity matrix is its main argument. My matrix looks like:</p>
<pre><code>[[ 1.00000000e+00 8.47085137e-01 8.49644498e-01 8.49746438e-01
2.96473454e-01 8.50540412e-01 8.49462072e-01 8.50839475e-01
8.45951343e-01 5.76448265e-01 8.48265736e-01 8.43378943e-01
3.75348067e-01 1.17626480e-01 2.50357519e-01 8.50495202e-01
9.97541755e-01 8.49835674e-01 8.48770171e-01 8.45869271e-01
-5.97205241e-02]
[ 8.47085137e-01 1.00000000e+00 9.98547894e-01 9.98803332e-01
2.22305018e-01 9.98755219e-01 9.98502380e-01 9.98402601e-01
9.98778885e-01 5.66416311e-01 9.98639207e-01 9.98452172e-01
-6.10479042e-02 2.46741344e-02 -4.14116930e-03 9.98357419e-01
8.48955204e-01 9.98525354e-01 9.98900440e-01 9.98426618e-01
-6.51839614e-02]
[ 8.49644498e-01 9.98547894e-01 1.00000000e+00 9.98764222e-01
1.59017501e-01 9.98777492e-01 9.98797005e-01 9.98756310e-01
9.98785822e-01 5.71955127e-01 9.98834038e-01 9.98652820e-01
-5.95467715e-02 1.98107829e-02 -3.88527970e-03 9.98810942e-01
8.51337460e-01 9.98882675e-01 9.98815975e-01 9.98789494e-01
-6.69662309e-02]
[ 8.49746438e-01 9.98803332e-01 9.98764222e-01 1.00000000e+00
4.73518047e-01 9.98684853e-01 9.98839959e-01 9.99029920e-01
9.98804479e-01 5.67855583e-01 9.98759386e-01 9.98796277e-01
-6.07517782e-02 1.71388383e-02 -3.20996100e-03 9.98669121e-01
8.51600753e-01 9.98681806e-01 9.99072484e-01 9.98702177e-01
-6.29855810e-02]
[ 3.52784328e-01 2.41076867e-01 2.01621082e-01 4.11538647e-01
9.92999574e-01 2.09351787e-01 2.12464918e-01 1.84566399e-01
2.82162287e-01 8.88835155e-01 1.90613041e-01 2.12150578e-01
2.92104260e-01 6.25221827e-02 8.70607365e-01 2.88645877e-01
3.09283827e-01 2.81253950e-01 1.80307149e-01 2.49082955e-01
5.46192492e-02]
...
[ -5.97205241e-02 -6.51839614e-02 -6.69662309e-02 -6.29855810e-02
7.86918277e-02 -6.49002943e-02 -6.12003747e-02 -6.34500592e-02
-6.75593439e-02 7.23869691e-02 -6.20686862e-02 -5.94039824e-02
-1.00101778e-01 -1.14667128e-01 5.57606897e-02 -6.32884559e-02
-5.33734526e-02 -5.90822523e-02 -6.17068052e-02 -5.76615359e-02
1.00000000e+00]]
</code></pre>
<p>And my code similar to the documentation samples:</p>
<pre><code>cl = SpectralClustering(n_clusters=4,affinity='precomputed')
y = cl.fit_predict(matrix)
</code></pre>
<p>But the following error occurs:</p>
<pre><code>/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/utils/validation.py:629: UserWarning: Array is not symmetric, and will be converted to symmetric by average with its transpose.
warnings.warn("Array is not symmetric, and will be converted "
/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/utils/graph.py:172: RuntimeWarning: invalid value encountered in sqrt
w = np.sqrt(w)
Traceback (most recent call last):
File "/home/mahmood/PycharmProjects/sentence2vec/graphClustering.py", line 23, in <module>
y = cl.fit_predict(matrix)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/base.py", line 371, in fit_predict
self.fit(X)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/cluster/spectral.py", line 454, in fit
assign_labels=self.assign_labels)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/cluster/spectral.py", line 258, in spectral_clustering
eigen_tol=eigen_tol, drop_first=False)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/manifold/spectral_embedding_.py", line 254, in spectral_embedding
tol=eigen_tol)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 1545, in eigsh
symmetric=True, tol=tol)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 1033, in get_OPinv_matvec
return LuInv(A).matvec
File "/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/interface.py", line 142, in __new__
obj.__init__(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 922, in __init__
self.M_lu = lu_factor(M)
File "/usr/lib/python2.7/dist-packages/scipy/linalg/decomp_lu.py", line 58, in lu_factor
a1 = asarray_chkfinite(a)
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1022, in asarray_chkfinite
"array must not contain infs or NaNs")
ValueError: array must not contain infs or NaNs
</code></pre>
<p>The first warning is acceptable, because the matrix is not symmetric, but there are no infs or NaNs in the matrix.</p>
| -1 | 2016-07-22T20:48:03Z | 38,539,257 | <p>NaN values arise <em>because</em> your matrix is not a similarity matrix: <strong>your data contains negative similarities</strong>! When taking the <code>sqrt</code> of these values you get <code>NaN</code>, hence the error.</p>
<p>The warnings are not just for fun - matrix decomposition techniques have some requirements that must be met for them to work and return meaningful results.</p>
<p>Fix your negative similarities first, then retry.</p>
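For illustration (not part of the original answer), two common ways to map correlation-style similarities from [-1, 1] into the non-negative range expected by spectral clustering:

```python
import numpy as np

# Toy correlation-style similarity matrix with negative entries.
sim = np.array([[ 1.00,  0.85, -0.06],
                [ 0.85,  1.00, -0.07],
                [-0.06, -0.07,  1.00]])

# Option 1: clip negatives to zero (anti-correlated == "not similar").
clipped = np.clip(sim, 0.0, None)

# Option 2: affine rescale [-1, 1] -> [0, 1] (preserves the ordering).
rescaled = (sim + 1.0) / 2.0

print(clipped.min(), rescaled.min())  # both are now >= 0
```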
| 0 | 2016-07-23T07:21:37Z | [
"python",
"numpy",
"scikit-learn",
"cluster-analysis"
] |
Beautiful Soup 4 finding text within table | 38,535,161 | <p>I have been trying to use BS4 to scrape from <a href="http://www.nfl.com/draft/2012/tracker#dt-tabs:dt-by-position/dt-by-position-input:cb" rel="nofollow">this</a> web page. I cannot find the data I want (player names in the table, ie, "Claiborne, Morris").</p>
<p>When I use:</p>
<pre><code>soup = BeautifulSoup(r.content, "html.parser")
PlayerName = soup.find_all("table")
print (PlayerName)
</code></pre>
<p>None of the player's names are even in the output, it is only showing a different table.</p>
<p>When I use:</p>
<pre><code>soup = BeautifulSoup(r.content, 'html.parser')
texts = soup.findAll(text=True)
print(texts)
</code></pre>
<p>I can see them.</p>
<p>Any advice on how to dig in and get player names?</p>
| 1 | 2016-07-22T21:01:08Z | 38,536,863 | <p>The table you're looking for is dynamically filled by JavaScript when the page is rendered. When you retrieve the page using e.g. <code>requests</code>, it only retrieves the original, unmodified page. This means that some elements that you see in your browser will be missing.</p>
<p>The fact that you can find the player names in your second snippet of code is because they are contained in the page's JavaScript source, as JSON. However, you won't be able to retrieve them with BeautifulSoup, as it won't parse the JavaScript.</p>
<p>The best option is to use something like <a href="https://pypi.python.org/pypi/selenium" rel="nofollow">Selenium</a>, which mimics a browser as closely as possible and will execute JavaScript code, thereby rendering the same page content as you would see in your own browser.</p>
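If running a real browser is not an option, the JSON mentioned above can sometimes be pulled straight out of the page source with a regular expression. The sketch below uses an invented variable name (<code>draftData</code>) and a toy HTML snippet — the real page's markup will differ:

```python
import json
import re

# Toy stand-in for r.content; "draftData" is a made-up variable name.
html = """
<script>
var draftData = [{"name": "Claiborne, Morris", "pos": "CB"},
                 {"name": "Luck, Andrew", "pos": "QB"}];
</script>
"""

match = re.search(r'var\s+draftData\s*=\s*(\[.*?\]);', html, re.DOTALL)
players = json.loads(match.group(1)) if match else []
print([p["name"] for p in players])
```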
| 0 | 2016-07-23T00:09:10Z | [
"python",
"html",
"web-scraping",
"python-requests",
"bs4"
] |
How/when to close a file in an object? | 38,535,165 | <p>I'm trying to design a class to manage video devices in Linux (<code>/dev/video*</code>).</p>
<p>Because of my C++ background I naturally thought I could open the file in the constructor and close it in the destructor.</p>
<p>Then later I learned python does not guarantee when/if the destructor is called.</p>
<p>Then I think I can make my own "initialize" and "de-initialize" methods to manage the opening/closing of the device file, but then it creates time gaps when the objected is constructed but not initialized and also when the object is de-initialized but not destructed, at which time the object does not have a valid internal state ( the methods are mostly <code>ioctls</code> on the opened video device).</p>
<p>That means I need to validate object state at the beginning of each method , like built-in file objects (<code>f=open()</code>, <code>f.close</code>)? Or just let the I/O error occur when a method is called on an already de-initialized object?</p>
| 1 | 2016-07-22T21:01:31Z | 38,535,845 | <p>Go ahead and open the file in the constructor, it won't hurt anything.</p>
<p>Python provides the <a href="https://www.python.org/dev/peps/pep-0343/" rel="nofollow"><code>with</code> statement</a> to allow setup and teardown of an object beyond construction/destruction. Your object must include an <code>__enter__</code> and <code>__exit__</code> method; <code>__enter__</code> is called at the beginning of the <code>with</code> statement, and <code>__exit__</code> is called at the conclusion of the code block contained within the <code>with</code>. Notably <code>__exit__</code> is called whether the block runs to completion or is terminated early with an exception.</p>
<p>Obviously <code>with</code> is only useful if you're using the object right then and there, not if you're storing it as a member in yet another object for example. But you can just go one level deeper and use <code>with</code> around <em>that</em> object, and have its <code>__exit__</code> function call a cleanup function on your own object.</p>
| 0 | 2016-07-22T22:00:10Z | [
"python"
] |
Converting a generator expression (<genexpr>) to list? | 38,535,171 | <p>I'm doing some topic modeling and am looking to store some of the results of my analysis.</p>
<pre><code>import pandas as pd, numpy as np, scipy
import sklearn.feature_extraction.text as text
from sklearn import decomposition
descs = ["You should not go there", "We may go home later", "Why should we do your chores", "What should we do"]
vectorizer = text.CountVectorizer()
dtm = vectorizer.fit_transform(descs).toarray()
vocab = np.array(vectorizer.get_feature_names())
nmf = decomposition.NMF(3, random_state = 1)
topic = nmf.fit_transform(dtm)
topic_words = []
for t in nmf.components_:
word_idx = np.argsort(t)[::-1][:20]
topic_words.append(vocab[i] for i in word_idx)
for t in range(len(topic_words)):
print("Topic {}: {}\n".format(t, " ".join([word for word in topic_words[t]])))
</code></pre>
<p>Prints:</p>
<pre><code>Topic 0: do we should your why chores what you there not may later home go
Topic 1: should you there not go what do we your why may later home chores
Topic 2: we may later home go what do should your you why there not chores
</code></pre>
<p>I'm trying to write those topics to a file, so I thought storing them in a list might work, like this:</p>
<pre><code>l = []
for t in range(len(topic_words)):
l.append([word for word in topic_words[t]])
print("Topic {}: {}\n".format(t, " ".join([word for word in topic_words[t]])))
</code></pre>
<p>But <code>l</code> just ends up as an empty array. How can I store these words in a list?</p>
| 0 | 2016-07-22T21:02:01Z | 38,535,302 | <p>You're appending generator expressions to your list <code>topic_words</code>, so once your first <code>print</code> loop has iterated over them, the generator expressions are exhausted and yield nothing on the second pass. You can instead do:</p>
<pre><code>topic_words = []
for t in nmf.components_:
word_idx = np.argsort(t)[::-1][:20]
topic_words.append([vocab[i] for i in word_idx])
# ^ ^
</code></pre>
<p>With this, you apparently won't need a new list, and you can print out with:</p>
<pre><code>for t, words in enumerate(topic_words, 1):
print("Topic {}: {}\n".format(t, " ".join(words)))
</code></pre>
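The underlying exhaustion behaviour is easy to see in isolation (a standalone demo, not part of the original code):

```python
gen = (n * n for n in range(3))

first_pass = list(gen)   # consumes the generator completely
second_pass = list(gen)  # nothing left to yield

print(first_pass, second_pass)  # [0, 1, 4] []
```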
| 3 | 2016-07-22T21:11:38Z | [
"python",
"generator"
] |
scikit-learn add training data | 38,535,216 | <p>I was looking at the training data available in <code>sklearn</code> <a href="http://scikit-learn.org/stable/datasets/twenty_newsgroups.html" rel="nofollow">here</a>. As per the documentation, it contains 20 classes of documents, based on a newsgroup collection. It does a fairly good job of classifying documents belonging to those categories. However, I need to add more articles for categories like cricket, football, nuclear physics, etc.</p>
<p>I have set of documents for each class ready, like <code>sports -> cricket</code>, <code>cooking -> French</code>, etc.. How do I add those documents and classes in <code>sklearn</code> so that the interface which now returns 20 classes will return those 20 plus the new ones as well? If there is some training that I need to do, either through <code>SVM</code> or <code>Naive Bayes</code>, where do I do it before adding it to the dataset?</p>
| 0 | 2016-07-22T21:05:42Z | 38,607,789 | <p>Supposing your additional data has the following directory structure (if not, then this should be your first step, because it will make your life a lot easier as you can use the <code>sklearn</code> API to fetch the data, see <a href="http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_files.html" rel="nofollow">here</a>):</p>
<pre><code>additional_data
|
|-> sports.cricket
|
|-> file1.txt
|-> file2.txt
|-> ...
|
|-> cooking.french
|
|-> file1.txt
|-> ...
...
</code></pre>
<p>Moving to <code>python</code>, load up both datasets (supposing your additional data are in the above format and are rooted at <code>/path/to/additional_data</code>)</p>
<pre><code>import os
from sklearn import cross_validation
from sklearn.datasets import fetch_20newsgroups
from sklearn.datasets import load_files
from sklearn.externals import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
import numpy as np
# Note if you have a pre-defined training/testing split in your additional data, you would merge them with the corresponding 'train' and 'test' subsets of 20news
news_data = fetch_20newsgroups(subset='all')
additional_data = load_files(container_path='/path/to/additional_data', encoding='utf-8')
# Both data objects are of type `Bunch` and therefore can be relatively straightforwardly merged
# Merge the two data files
'''
The Bunch object contains the following attributes: `dict_keys(['target_names', 'description', 'DESCR', 'target', 'data', 'filenames'])`
The interesting ones for our purposes are 'data' and 'filenames'
'''
all_filenames = np.concatenate((news_data.filenames, additional_data.filenames)) # filenames is a numpy array
all_data = news_data.data + additional_data.data # data is a standard python list
merged_data_path = '/path/to/merged_data'
'''
The 20newsgroups data has a filename a la '/path/to/scikit_learn_data/20news_home/20news-bydate-test/rec.sport.hockey/54367'
So depending on whether you want to keep the sub directory structure of the train/test splits or not,
you would either need the last 2 or 3 parts of the path
'''
for content, f in zip(all_data, all_filenames):
# extract sub path
sub_path, filename = f.split(os.sep)[-2:]
# Create output directory if not exists
p = os.path.join(merged_data_path, sub_path)
if (not os.path.exists(p)):
os.makedirs(p)
# Write data to file
with open(os.path.join(p, filename), 'w') as out_file:
out_file.write(content)
# Now that everything is stored at `merged_data_path`, we can use `load_files` to fetch the dataset again, which now includes everything from 20newsgroups and your additional data
all_data = load_files(container_path=merged_data_path)
'''
all_data is yet another `Bunch` object:
* `data` contains the data
* `target_names` contains the label names
* `target contains` the labels in numeric format
* `filenames` contains the paths of each individual document
thus, running a classifier over the data is straightforward
'''
vec = CountVectorizer()
X = vec.fit_transform(all_data.data)
# We want to create a train/test split for learning and evaluating a classifier (supposing we haven't created a pre-defined train/test split encoded in the directory structure)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, all_data.target, test_size=0.2)
# Create & fit the MNB model
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
# Evaluate Accuracy
y_predicted = mnb.predict(X_test)
print('Accuracy: {}'.format(accuracy_score(y_test, y_predicted)))
# Alternatively, the vectorisation and learning can be packaged into a pipeline and serialised for later use
pipeline = Pipeline([('vec', CountVectorizer()), ('mnb', MultinomialNB())])
# Run the vectorizer and train the classifier on all available data
pipeline.fit(all_data.data, all_data.target)
# Serialise the classifier to disk
joblib.dump(pipeline, '/path/to/model_zoo/mnb_pipeline.joblib')
# If you get some more data later on, you can deserialise the model and run them through the pipeline again
p = joblib.load('/path/to/model_zoo/mnb_pipeline.joblib')
docs_new = ['God is love', 'OpenGL on the GPU is fast']
y_predicted = p.predict(docs_new)
print('Predicted labels: {}'.format(np.array(all_data.target_names)[y_predicted]))
</code></pre>
| 1 | 2016-07-27T08:32:46Z | [
"python",
"machine-learning",
"scipy",
"scikit-learn"
] |
Python formatting CSV with string and float and write | 38,535,227 | <p>How do you format a float to 2 decimal points with mixed data type?
I'm fetching a table and writing rows to csv file.
My data is (string, string, float, float, float...)</p>
<pre><code> sql = 'select * from testTable'
c.execute(sql)
columnNames = list(map(lambda x: x[0], c.description))
result = c.fetchall()
with open(outputCSVPath, 'wb') as f:
writer = csv.writer(f)
writer.writerow(columnNames)
writer.writerows(result)
</code></pre>
<p>With the above code, I get floats with 6 decimal places. I need to format it to 2 decimal places but because of the first 2 of the list being string, it gives me an type error.</p>
<p>Thank you.</p>
| 0 | 2016-07-22T21:06:17Z | 38,535,833 | <p>You need to rebuild the 'result' list before writing it. Note that <code>result</code> is a list of rows (tuples), so you need a nested list comprehension with a conditional expression:</p>
<p><code>result = [[format(x, '.2f') if isinstance(x, float) else x for x in row] for row in result]</code></p>
| 1 | 2016-07-22T21:59:22Z | [
"python",
"csv"
] |
Python formatting CSV with string and float and write | 38,535,227 | <p>How do you format a float to 2 decimal points with mixed data type?
I'm fetching a table and writing rows to csv file.
My data is (string, string, float, float, float...)</p>
<pre><code> sql = 'select * from testTable'
c.execute(sql)
columnNames = list(map(lambda x: x[0], c.description))
result = c.fetchall()
with open(outputCSVPath, 'wb') as f:
writer = csv.writer(f)
writer.writerow(columnNames)
writer.writerows(result)
</code></pre>
<p>With the above code, I get floats with 6 decimal places. I need to format it to 2 decimal places but because of the first 2 of the list being string, it gives me an type error.</p>
<p>Thank you.</p>
| 0 | 2016-07-22T21:06:17Z | 38,535,859 | <p>You can iterate the result in order to parse only the float values, and then you can use <code>writer.writerow(row)</code> to write row by row. Take a look <a href="http://stackoverflow.com/questions/17861152/cursor-fetchall-vs-listcursor-in-python">here</a> to know the different ways to iterate your <code>results</code>.</p>
<p>On the other hand, you can format floats to two decimal places using the <a href="https://docs.python.org/2/library/functions.html#round" rel="nofollow"><code>round()</code></a> function from Python.</p>
<p>An example:</p>
<pre><code>>>> # Each of your rows should be something like:
>>> list = ["string1", "string3", 1.435654, 4.43256]
>>> #Round the floats
>>> parsed_list = [round(x, 2) if i > 1 else x for i, x in enumerate(list)]
>>> print parsed_list
['string1', 'string2', 1.44, 4.43]
</code></pre>
| 1 | 2016-07-22T22:01:16Z | [
"python",
"csv"
] |
Can't import matplotlib even if it's installed | 38,535,241 | <p>I am running a sample python program on my Mac (El Capitan, 10.11.5)<br>
I have the default version of python installed (2.6) and on top of that I installed python 2.7 and 3.5. I need matplotlib on my 2.7 version.</p>
<p>I installed it with pip (not pip3) and I don't know why it got installed on python 3.5. </p>
<p>If I type <code>pip list</code> this is the output: </p>
<pre><code>cycler (0.10.0)
Django (1.8.4)
matplotlib (1.5.1)
numpy (1.11.1)
pip (8.1.2)
pyparsing (2.1.5)
python-dateutil (2.5.3)
pytz (2016.6.1)
selenium (2.53.6)
setuptools (19.4)
six (1.10.0)
wheel (0.26.0)
</code></pre>
<p><code>which python</code> outputs: <code>/usr/local/bin/python</code> </p>
<p>My path is:</p>
<pre><code>/usr/local/share/python3:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
</code></pre>
<p>In <code>/usr/local/share/python3</code>: I have python 3.5, in <code>/usr/local/bin</code>: python 2.7 and then in <code>/usr/bin</code> the default python. </p>
<p>When I run in my program <code>from matplotlib import pyplot as plt</code> and try to use it I get: </p>
<blockquote>
<p>ImportError: No module named matplotlib</p>
</blockquote>
<p>I think the problem is that matplotlib is installed on python3.5 site-package. How could I fix this? </p>
<p>Thank you!</p>
| 3 | 2016-07-22T21:07:08Z | 38,543,032 | <blockquote>
<p>I have the default version of python installed (2.6) and on top of that I installed python 2.7 and 3.5. I need matplotlib on my 2.7 version. </p>
</blockquote>
<p>That's generally not a problem; however, you need to make sure the Python environments are not mixed up.</p>
<blockquote>
<p>I think the problem is that matplotlib is installed on python3.5 site-package. How could I fix this? </p>
</blockquote>
<p><strong>1. Use python's virtualenv feature</strong></p>
<p>My recommendation is to use <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow"><code>virtualenv</code></a> *):</p>
<pre><code># for a python 3.5 environment
$ cd /path/to/<project with python 3>
$ PATH="/path/to/python3.x;$PATH" python -m venv myenv
# for a python 2.7 environment
$ cd /path/to/<project with python 2>
$ PATH="/path/to/python2.7;$PATH" virtualenv myenv
</code></pre>
<p>This will create clean per-project python environments with their separate <code>site-packages</code>. With that, you can work on a project (or even multiple projects) with different packages or package versions installed, without them interfering.</p>
<p><em>That said, before you move any further, open a new Terminal to make sure all paths are reset to a clean state.</em></p>
<p><strong>2. Re-Install packages into your fresh virtualenv</strong></p>
<p>Activate the environment and re-install the required packages into the project's environment using</p>
<pre><code># assuming the list of packages is in /path/to/project/requirements.txt
$ cd /path/to/project
$ source myenv/bin/activate
$ pip install -r requirements.txt
</code></pre>
<p>Once you have done this, you should be able to import matplotlib just fine:</p>
<pre><code>python -c 'import matplotlib; print matplotlib'
<module 'matplotlib' from '/path/to/python/site-packages/matplotlib/__init__.pyc'>
</code></pre>
<p><strong>3. Give yourself a break</strong></p>
<p>To simplify using virtualenvs try <a href="http://virtualenvwrapper.readthedocs.io/en/latest/index.html" rel="nofollow"><code>virtualenvwrapper</code></a>. This adds a couple of commands to your system to simplify the handling of virtualenvs, e.g.:</p>
<pre><code># create new environments
$ mkvirtualenv foo
# activate a particular environment
$ workon foo
# list packages in your environment
$ lssitepackages
(...)
</code></pre>
<p>*) Note that Python 3 provides the <a href="https://docs.python.org/3/library/venv.html" rel="nofollow"><code>venv</code></a> package as part of the standard library, whereas Python 2.7 requires that you install the <code>virtualenv</code> package first.</p>
| 1 | 2016-07-23T14:48:27Z | [
"python",
"osx",
"matplotlib",
"pip"
] |
Rendering Large Image in PyGame Causing Low Framerate | 38,535,283 | <p>I am trying to make a 'Runner' style game in PyGame (like Geometry Dash) where the background is constantly moving. So far everything works fine, but the rendering of the background images restricts the frame rate from exceeding 35 frames per second. Before I added the infinite/repeating background element, it could easily run at 60 fps. These two lines of code are responsible (when removed, game can run at 60+fps):</p>
<p><code>screen.blit(bg, (bg_x, 0))</code><br>
<code>screen.blit(bg, (bg_x2, 0))</code></p>
<p>Is there anything I could do to make the game run faster? Thanks in advance!</p>
<p>Simplified Source Code:</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((1000,650), 0, 32)
clock = pygame.time.Clock()
def text(text, x, y, color=(0,0,0), size=30, font='Calibri'): # blits text to the screen
text = str(text)
font = pygame.font.SysFont(font, size)
text = font.render(text, True, color)
screen.blit(text, (x, y))
def game():
bg = pygame.image.load('background.png')
bg_x = 0 # stored positions for the background images
bg_x2 = 1000
pygame.time.set_timer(pygame.USEREVENT, 1000)
frames = 0 # counts number of frames for every second
fps = 0
while True:
frames += 1
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.USEREVENT: # updates fps every second
fps = frames
frames = 0 # reset frame count
bg_x -= 10 # move the background images
bg_x2 -= 10
if bg_x == -1000: # if the images go off the screen, move them to the other end to be 'reused'
bg_x = 1000
elif bg_x2 == -1000:
bg_x2 = 1000
screen.fill((0,0,0))
screen.blit(bg, (bg_x, 0))
screen.blit(bg, (bg_x2, 0))
text(fps, 0, 0)
pygame.display.update()
#clock.tick(60)
game()
</code></pre>
<p>Here is the background image:</p>
<p><a href="http://i.stack.imgur.com/H5ccb.png" rel="nofollow"><img src="http://i.stack.imgur.com/H5ccb.png" alt="enter image description here"></a></p>
| 1 | 2016-07-22T21:10:18Z | 38,536,196 | <p>Have you tried using <code>convert()</code>?</p>
<pre class="lang-py prettyprint-override"><code> bg = pygame.image.load('background.png').convert()
</code></pre>
<p>From the <a href="http://www.pygame.org/docs/ref/image.html#pygame.image.load" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>You will often want to call Surface.convert() with no arguments, to create a copy that will draw more quickly on the screen.</p>
<p>For alpha transparency, like in .png images use the convert_alpha() method after loading so that the image has per pixel transparency.</p>
</blockquote>
| 1 | 2016-07-22T22:38:34Z | [
"python",
"pygame"
] |
Can't get ngrok, and my python script to use the same port | 38,535,332 | <p>I imported flask into my python script, and I'm using ngrok to make it accessible. </p>
<pre><code>The ASK_APPLICATION_ID has not been set. Application ID verification disabled.
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
The ASK_APPLICATION_ID has not been set. Application ID verification disabled.
* Debugger is active!
* Debugger pin code: 288-130-002
</code></pre>
<p>In ngrok the web interface if set to: <a href="http://127.0.0.1:4040" rel="nofollow">http://127.0.0.1:4040</a> </p>
<p>Whenever I try to set the python script to the same port (<code>app.run(port=4040)</code>), then run ngrok, ngrok changes its port to 4041.</p>
<p>If I run ngrok first, then set the port to ngrok's port, I get the error that the port is in use.</p>
| 0 | 2016-07-22T21:13:42Z | 38,539,671 | <p>start the flask application at 5000 port and then start ngrok</p>
<pre><code>./ngrok http 5000
</code></pre>
<p>ngrok will provide a public URL which forwards requests to your app running on port 5000. (The 4040 port you saw in the browser is ngrok's local web inspection interface, not the tunnel itself, so your app should not try to bind to it.)</p>
| 0 | 2016-07-23T08:16:04Z | [
"python",
"flask",
"ngrok"
] |
graphs from Altair are not displaying | 38,535,357 | <p>On Ubuntu x64, I just freshly installed Anaconda 3.</p>
<p>I then installed <code>altair</code> via <code>conda</code> per <a href="https://github.com/ellisonbg/altair" rel="nofollow">these directions</a>. Then I run the example code:</p>
<pre><code>from altair import *
population = load_dataset('population')
Chart(population).mark_bar().encode(
x='sum(people)',
).transform_data(filter="datum.year==2000")
</code></pre>
<p>The code runs, but nothing happens. I expected a page would open in the browser perhaps, like <code>bokeh</code> does. To be safe, I also ran <code>jupyter notebook</code> in the background and re-running the code - no difference.</p>
<p>Please let me know if there's more information you need about my environment.</p>
| 0 | 2016-07-22T21:16:28Z | 38,535,977 | <p>Thanks to @cel, I found out the code cannot be run in the iPython console from Anaconda - it needs to be run from a Jupyter notebook.</p>
<p>You have to run <code>jupyter notebook</code> in the terminal, then create a new iPython Notebook. Once the notebook is created, you can run your code interactively from the notebook.</p>
| 1 | 2016-07-22T22:13:40Z | [
"python",
"anaconda",
"jupyter"
] |
MongDB - queries a large collection | 38,535,434 | <p>I have 40 MM documents in my MongoDB collection (e.g. db.large_collection)</p>
<p>I want to get all the distinct User_ID. </p>
<p>I have created an Index on the field user_id but when I try to execute, it returns an error. </p>
<pre><code>> db.large_collection.count()
39894523
> db.clean_tweets4.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "sampled_tourist.clean_tweets4"
},
{
"v" : 1,
"key" : {
"user_id" : 1
},
"name" : "user_id_1",
"ns" : "sampled_tourist.clean_tweets4"
},
{
"v" : 1,
"key" : {
"coordinates" : 1
},
"name" : "coordinates_1",
"ns" : "sampled_tourist.clean_tweets4"
},
{
"v" : 1,
"key" : {
"timestamp_ms" : 1
},
"name" : "timestamp_ms_1",
"ns" : "sampled_tourist.clean_tweets4"
}
</code></pre>
<p>]</p>
<p>But when I run </p>
<pre><code> db.clean_tweets4.find({},{user_id:1})
{ "_id" : ObjectId("5790f9a178776f4b56ede2be"), "user_id" : NumberLong("2246342226") }
{ "_id" : ObjectId("5790f9a178776f4b56ede2bf"), "user_id" : NumberLong("2289817236") }
{ "_id" : ObjectId("5790f9a178776f4b56ede2c0"), "user_id" : 1904381486 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2c1"), "user_id" : NumberLong("3044032705") }
{ "_id" : ObjectId("5790f9a178776f4b56ede2c2"), "user_id" : NumberLong("3407958364") }
{ "_id" : ObjectId("5790f9d278776f4b56ee4af2"), "user_id" : 1566025975 }
{ "_id" : ObjectId("5790f7ab78776f4b56ea55c6"), "user_id" : 15857879 }
{ "_id" : ObjectId("5790f9a178776f4b56ede28f"), "user_id" : NumberLong("3394102511") }
{ "_id" : ObjectId("5790f9a178776f4b56ede293"), "user_id" : 1376377652 }
{ "_id" : ObjectId("5790f9a178776f4b56ede294"), "user_id" : 352385989 }
{ "_id" : ObjectId("5790f9a178776f4b56ede295"), "user_id" : NumberLong("2383622643") }
{ "_id" : ObjectId("5790f9a178776f4b56ede29c"), "user_id" : 152362163 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2a0"), "user_id" : 1446113954 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2a1"), "user_id" : 1893437088 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2a2"), "user_id" : 67121578 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2a3"), "user_id" : 1714137770 }
{ "_id" : ObjectId("5790f9a178776f4b56ede2a4"), "user_id" : 52806609 }
</code></pre>
<p>Thanks! </p>
| 0 | 2016-07-22T21:22:16Z | 38,535,476 | <p><code>find({})</code> means it returns everything.
This is what you want:</p>
<pre><code>db.collection.find({user_id:1});
</code></pre>
| -1 | 2016-07-22T21:25:41Z | [
"python",
"mongodb",
"bigdata"
] |
Numpy: operations on columns (or rows) of NxM array | 38,535,444 | <p>This may be a silly question, but I've just started using <em>numpy</em> and I have to figure out how to perform some simple operations.</p>
<p>Suppose that I have the 2x3 array</p>
<pre><code>array([[1, 3, 5],
[2, 4, 6]])
</code></pre>
<p>And that I want to perform some operation on the first column, for example subtract 1 to all the elements to get</p>
<pre><code>array([[0, 3, 5],
[1, 4, 6]])
</code></pre>
<p>How can I perform such an operation?</p>
| 0 | 2016-07-22T21:23:03Z | 38,535,502 | <pre><code>arr
# array([[1, 3, 5],
# [2, 4, 6]])
arr[:,0] = arr[:,0] - 1 # choose the first column here, subtract one and
# assign it back to the same column
arr
# array([[0, 3, 5],
# [1, 4, 6]])
</code></pre>
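As a follow-up, the same edit can be done in place with an augmented assignment, and identical slicing works for rows. A small sketch (the row operation is a hypothetical second example, not from the question):

```python
import numpy as np

arr = np.array([[1, 3, 5],
                [2, 4, 6]])

arr[:, 0] -= 1    # in-place: subtract 1 from every element of column 0
arr[1, :] *= 10   # rows work the same way: multiply row 1 by 10

print(arr.tolist())   # [[0, 3, 5], [10, 40, 60]]
```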
| 0 | 2016-07-22T21:28:21Z | [
"python",
"arrays",
"numpy"
] |
urllib - expressing what piece of JSON Data I desire | 38,535,456 | <p>Here is a big piece of JSON data that I fetch in my code below:</p>
<pre><code>{
"status": 200,
"offset": 0,
"limit": 10,
"count": 8,
"total": 8,
"url": "/v2/dictionaries/ldoce5/entries?headword=extra",
"results": [
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 3,
"id": "cqAFDjvvYg",
"part_of_speech": "adverb",
"senses": [
{
"collocation_examples": [
{
"collocation": "one/a few etc extra",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627480.mp3"
}
],
"text": "I got a few extra in case anyone else decides to come."
}
}
],
"definition": [
"in addition to the usual things or the usual amount"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627477.mp3"
}
],
"text": "They need to offer something extra to attract customers."
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDjvvYg"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra-",
"id": "cqAFDk1BDw",
"part_of_speech": "prefix",
"pronunciations": [
{
"audio": [
{
"lang": "American English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/us_pron/extra__pre.mp3"
}
],
      "ipa": "ekstrə"
}
],
"senses": [
{
"definition": [
"outside or beyond"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001832333.mp3"
}
],
"text": "extragalactic (=outside our galaxy)"
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDk1BDw"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 1,
"id": "cqAFDjpNZQ",
"part_of_speech": "adjective",
"pronunciations": [
{
"audio": [
{
"lang": "British English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/gb_pron/extra_n0205.mp3"
},
{
"lang": "American English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/us_pron/extra1.mp3"
}
],
      "ipa": "ˈekstrə"
}
],
"senses": [
{
"collocation_examples": [
{
"collocation": "an extra ten minutes/three metres etc",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202489.mp3"
}
],
"text": "I asked for an extra two weeks to finish the work."
}
}
],
"definition": [
"more of something, in addition to the usual or standard amount or number"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202484.mp3"
}
],
"text": "Could you get an extra loaf of bread?"
}
],
"gramatical_info": {
"type": "only before noun"
}
}
],
"url": "/v2/dictionaries/entries/cqAFDjpNZQ"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 2,
"id": "cqAFDjsQjH",
"part_of_speech": "pronoun",
"senses": [
{
"collocation_examples": [
{
"collocation": "pay/charge/cost etc extra",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202499.mp3"
}
],
"text": "I earn extra for working on Sunday."
}
}
],
"definition": [
"an amount of something, especially money, in addition to the usual, basic, or necessary amount"
],
"synonym": "more"
}
],
"url": "/v2/dictionaries/entries/cqAFDjsQjH"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 4,
"id": "cqAFDjyTn8",
"part_of_speech": "noun",
"senses": [
{
"definition": [
"something which is added to a basic product or service that improves it and often costs more"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202524.mp3"
}
],
"text": "Tinted windows and a sunroof are optional extras(=something that you can choose to have or not)."
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDjyTn8"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra virgin",
"id": "cqAFDmV2Jw",
"part_of_speech": "adjective",
"senses": [
{
"definition": [
"extra virgin olive oil comes from olives that are pressed for the first time, and is considered to be the best quality olive oil"
]
}
],
"url": "/v2/dictionaries/entries/cqAFDmV2Jw"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra time",
"id": "cqAFDmGZyQ",
"part_of_speech": "noun",
"senses": [
{
"american_equivalent": "overtime",
"definition": [
"a period, usually of 30 minutes, added to the end of a football game in some competitions if neither team has won after normal time"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627835.mp3"
}
],
"text": "The match went into extra time."
}
],
"geography": "especially British English",
"gramatical_examples": [
{
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627834.mp3"
}
],
"text": "Beckham scored in extra time."
}
],
"pattern": "in extra time"
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDmGZyQ"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra-sensory perception",
"id": "cqAFDm6ceW",
"part_of_speech": "noun",
"senses": [
{
"definition": [
"ESP"
]
}
],
"url": "/v2/dictionaries/entries/cqAFDm6ceW"
}
]
}
</code></pre>
<p>I want to grab and print the definitions offered in the JSON results. I don't know how to express this and am getting a 'list indices must be integers or slices, not str' error for my sense = data['senses'].</p>
<pre><code>#!/usr/bin/env python
import urllib.request
import json
wp = urllib.request.urlopen("http://api.pearson.com/v2/dictionaries/ldoce5/entries?headword=extra").read().decode('utf8')
jsonData=json.loads(wp)
data=jsonData['results']
for item in data:
sense = data['senses']
print(senses['definition'])
</code></pre>
| 1 | 2016-07-22T21:24:11Z | 38,535,564 | <p>Each item's <code>senses</code> value is actually a list with a single element, a dictionary. The contained dictionary has your desired key-value pair. Also note the loop should index <code>item</code>, not <code>data</code> (a list cannot be indexed by a string, which is what caused your error).</p>
<p>For example:</p>
<pre><code>for item in data:
    sense = item['senses'][0]
print(sense['definition'])
</code></pre>
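Some entries may carry several senses (or none at all), so it is safer to loop over the inner list too. A hedged sketch with inline sample data mirroring the API response shape — the third entry is hypothetical, added only to show the missing-key case:

```python
results = [
    {"headword": "extra", "senses": [{"definition": ["in addition to the usual amount"]}]},
    {"headword": "extra-", "senses": [{"definition": ["outside or beyond"]}]},
    {"headword": "made-up-entry"},   # hypothetical entry with no 'senses' key
]

collected = []
for item in results:
    for sense in item.get('senses', []):        # tolerate a missing 'senses' key
        for definition in sense.get('definition', []):
            collected.append(definition)
            print(definition)
```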
| 0 | 2016-07-22T21:33:12Z | [
"python",
"json",
"api",
"python-3.x",
"urlfetch"
] |
Matplotlib stores .svg differently than what is shown on screen | 38,535,459 | <p>Here is an example. The following is what gets stored as a .png, and this is how it displays on the screen; it is correct:</p>
<p><a href="http://i.stack.imgur.com/mmxiY.png" rel="nofollow"><img src="http://i.stack.imgur.com/mmxiY.png" alt="this is the image that looks right"></a></p>
<p>and the next is the .svg which is using interpolation for the heatmap</p>
<p><a href="http://i.stack.imgur.com/p5ABs.png" rel="nofollow"><img src="http://i.stack.imgur.com/p5ABs.png" alt="enter image description here"></a></p>
<p>They are both stored using the next line of code respectively</p>
<pre><code>plt.savefig(filename,format='png')
plt.savefig(filename,format='svg')
</code></pre>
<p>And the next is the code that generates the actual plot</p>
<pre><code>def heatmapText(data,xlabels=[],ylabels=[],cmap='jet',fontsize=7):
'''
Heatmap with text on each of the cells
'''
plt.imshow(data,interpolation='none',cmap=cmap)
for y in range(data.shape[0]):
for x in range(data.shape[1]):
plt.text(x , y , '%.1f' % data[y, x],
horizontalalignment='center',
verticalalignment='center',
fontsize=fontsize
)
plt.gca()
if ylabels!=[]:
plt.yticks(range(ylabels.size),ylabels.tolist(),rotation='horizontal')
if xlabels!=[]:
plt.xticks(range(xlabels.size),xlabels.tolist(),rotation='vertical')
</code></pre>
<p>For both plots I used exactly the same function but stored it in different formats. Last, in screen appears correctly (like in the .png).</p>
<p>Any ideas on how to have the .svg to store the file correctly?</p>
| 0 | 2016-07-22T21:24:22Z | 38,535,663 | <p>Based on <a href="http://matplotlib.org/examples/images_contours_and_fields/interpolation_none_vs_nearest.html" rel="nofollow">http://matplotlib.org/examples/images_contours_and_fields/interpolation_none_vs_nearest.html</a> </p>
<p><a href="http://stackoverflow.com/questions/12473511/what-does-matplotlib-imshowinterpolation-nearest-do">What does matplotlib `imshow(interpolation='nearest')` do?</a></p>
<p>and</p>
<p><a href="http://stackoverflow.com/questions/23490289/matplotlib-shows-different-figure-than-saves-from-the-show-window">matplotlib shows different figure than saves from the show() window</a></p>
<p>I'm going to recommend trying this with <code>interpolation=nearest</code></p>
<p>The following code gives me identical displayed and saved as svg plots:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
A = np.random.rand(5, 5)
plt.figure(1)
plt.imshow(A, interpolation='nearest')
plt.savefig('fig',format='svg')
plt.show()
</code></pre>
| 0 | 2016-07-22T21:42:39Z | [
"python",
"svg",
"matplotlib"
] |
TfIdfVectorizer divides words to single characters? | 38,535,475 | <p>I am trying to find nearest neighbors in a set of descriptions. Descriptions usually contain 1-15 words that I am tokenizing using the scikit's TfIdfVectorizer. Then, with the same vectorizer, I am fitting the base description. However, it seems that the vectorizer divides this one to separate characters, rather than words, because the resulting sparse matrix is of shape [number of letters in the base description x number of unique words in the corpus]</p>
<pre><code>descriptions = 'total assets'
products = LoadData('C:/dict.csv', dtype = {'Code': np.str, 'LocalLanguageLabel': np.str})
products = products.fillna({'LocalLanguageLabel':''})
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(token_pattern=r'\b\w+\b')
#tried the below two as well
#vectorizer = TfidfVectorizer()
#vectorizer = TfidfVectorizer(token_pattern=r'\b\w+\b', analyzer = 'word')
dict_matrix = vectorizer.fit_transform(products['LocalLanguageLabel'])
input_matrix = vectorizer.transform(description)
from sklearn.neighbors import NearestNeighbors
model = NearestNeighbors(metric='euclidean', algorithm='brute')
model.fit(dict_matrix)
distance, indices = model.kneighbors(input_matrix,n_neighbors = 10)
</code></pre>
<p>when I print the input_matrix, this is what I get (you can guess that the indices relate to characters in 'totalassets'):</p>
<pre><code>print(input_matrix)
(0, 33478) 1.0 #t
(1, 24021) 1.0 #o
(2, 33478) 1.0 #t
(3, 2298) 1.0 #a
(4, 20272) 1.0 #l
(6, 2298) 1.0 #a
(7, 30874) 1.0 #s
(8, 30874) 1.0 #s
(9, 11386) 1.0 #e
(10, 33478) 1.0 #t
(11, 30874) 1.0 #s
<12x39859 sparse matrix of type '<class 'numpy.float64'>'
with 11 stored elements in Compressed Sparse Row format>
</code></pre>
<p>Is that expected? I would expect 10 distances and 10 indices, instead I am getting 12 lists of 10 elements each.</p>
| 0 | 2016-07-22T21:25:40Z | 38,535,789 | <p>Right, the answer was pretty simple for the amount of time I spent on it. I wrapped the <code>description</code> in a list and got the expected 10 results:</p>
<pre><code>input_matrix = vectorizer.transform([description])
</code></pre>
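The underlying gotcha is plain Python rather than scikit-learn: <code>transform</code> iterates over its argument, and iterating a string yields single characters — which is exactly why the sparse matrix had one row per letter. A quick illustration without the vectorizer:

```python
description = 'total assets'

# a bare string is an iterable of characters, so each character
# gets treated as its own "document"
print(list(description)[:3])      # ['t', 'o', 't']

# wrapped in a list, it is an iterable containing one document
print(list([description]))        # ['total assets']
```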
| 0 | 2016-07-22T21:55:31Z | [
"python",
"scikit-learn"
] |
Call a class Object with Python(Tkinter) | 38,535,498 | <p>Currently I'm looking for a way to call a class object within Tkinter. Here is a sample code to which can be used. From this how could I call this within Tkinter?</p>
<pre><code>root=Tk()
root.geometry=(root, width=x, height=y)
root.title("Let's do this!")
class MyApp():
def Do_Good():
py_game=Label(root, width=x, height=y)
return
root.mainloop()
</code></pre>
<p>Question...How do I call the class within the root window?</p>
| -2 | 2016-07-22T21:27:49Z | 38,535,598 | <p>You call it like you do any other method on any other object: you create an instance, and call the method.</p>
<pre><code>app = MyApp()
...
app.Do_Good()
</code></pre>
<p>If you're asking how to call it from a callback, it's the same answer:</p>
<pre><code>app = MyApp()
...
button = Button(root, text="Do good!", command=app.Do_Good)
</code></pre>
| 1 | 2016-07-22T21:35:59Z | [
"python",
"python-2.7",
"tkinter"
] |
How to setup Emacs to use a given Python virtualenv? | 38,535,499 | <p>I use Emacs for a number of tasks and as I am starting to work with Python I would like to keep using Emacs to code in Python.</p>
<p>I have set up a virtualenv for Python3, and it is working as desired. I also have Emacs 24.5 installed with the latest version of <a href="http://batsov.com/prelude/" rel="nofollow">Emacs Prelude</a>.</p>
<p>When I edit an Python source file all I expected is working -- code completion, object inspection, etc. -- but for my system wide Python installation, not for the virtual environment I have set up for the project.</p>
<p>How can I tell Emacs to use the virtual environment for a given project?</p>
| 0 | 2016-07-22T21:27:54Z | 39,158,058 | <p><a href="https://github.com/jorgenschaefer/elpy" rel="nofollow">Elpy</a> has support for virtual environments built in via <a href="https://github.com/jorgenschaefer/pyvenv" rel="nofollow">pyvenv</a>. pyvenv can also be installed as a standalone package. pyvenv <a href="https://github.com/jorgenschaefer/pyvenv/issues/6" rel="nofollow">added support</a> for project-specific virtual environments in 2014, but I haven't experimented with them myself so I don't know how well it works.</p>
<p>If you want to code Python in Emacs, I would recommend installing Elpy. It really simplifies the process of getting a good environment up-and-running, and it is modular so you can deactivate sections over time when you decide you want a more tailored package. </p>
<p>You may also want to take a look at <a href="https://github.com/porterjamesj/virtualenvwrapper.el" rel="nofollow">virtualenvwrapper.el</a>, although pyvenv looks like it has more functionality.</p>
| 1 | 2016-08-26T03:58:12Z | [
"python",
"emacs",
"virtualenv",
"emacs-prelude"
] |
Stacked Bar Chart in Matplotlib; Series Are Overlaying Instead Of Stacking | 38,535,520 | <p>Struggling to figure this one out. I have a stacked bar chart that I'm trying to create with Python / Matplotlib and it seems to be overlaying the data instead of stacking resulting in different colors (i.e. red+yellow = orange instead of stacking red and yellow). Can anyone see what I'm doing wrong?</p>
<p>Here's my code:</p>
<pre><code>#Stacked Bar Char- matplotlib
#Create the general blog and the "subplots" i.e. the bars
f, ax1 = plt.subplots(1, figsize=(10,5))
# Set the bar width
bar_width = 0.75
# positions of the left bar-boundaries
bar_l = [i+1 for i in range(len(df4['bikes']))]
# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['bikes1'], width=bar_width, label='bikes', alpha=0.5,color='r', align='center')
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['bikes'], width=bar_width,alpha=0.5,color='r', align='center')
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['cats1'], width=bar_width, label='cats', alpha=0.5,color='y', align='center')
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['cats'], width=bar_width,alpha=0.5,color='y', align='center')
# set the x ticks with names
plt.xticks(tick_pos, df4['Year'])
# Set the label and legends
ax1.set_ylabel("Count")
ax1.set_xlabel("Year")
plt.legend(loc='upper left')
ax1.axhline(y=0, color='k')
ax1.axvline(x=0, color='k')
# Set a buffer around the edge
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/XyCyD.png" rel="nofollow"><img src="http://i.stack.imgur.com/XyCyD.png" alt="enter image description here"></a></p>
| 1 | 2016-07-22T21:29:55Z | 38,535,971 | <p>You have to manually calculate the position from which your <em>cats</em> bars will start (look at the <code>bottom</code> argument in my code where <em>cats</em> are plotted). You will need your own method to calculate the position of the <em>cats</em> bars depending on the data in your dataframe.</p>
<p>Your colors are mixed up because you use the <code>alpha</code> parameter, and when bars are overlaid their colors blend:</p>
<pre><code>import matplotlib.pylab as plt
import numpy as np
import pandas as pd
df4 = pd.DataFrame.from_dict({'bikes1':[-1,-2,-3,-4,-5,-3], 'bikes':[10,20,30,15,11,11],
'cats':[1,2,3,4,5,6], 'cats1':[-6,-5,-4,-3,-2,-1], 'Year': [2012,2013,2014,2015,2016,2017]})
#Create the general blog and the "subplots" i.e. the bars
f, ax1 = plt.subplots(1, figsize=(10,5))
# Set the bar width
bar_width = 0.75
# positions of the left bar-boundaries
bar_l = [i+1 for i in range(len(df4['bikes']))]
# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['bikes1'], width=bar_width, label='bikes', alpha=0.5,color='r', align='center')
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['bikes'], width=bar_width,alpha=0.5,color='r', align='center')
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['cats1'], width=bar_width, label='cats', alpha=0.5,color='y', align='center',
bottom=[min(i,j) for i,j in zip(df4['bikes'],df4['bikes1'])])
# Create a bar plot, in position bar_1
ax1.bar(bar_l, df4['cats'], width=bar_width,alpha=0.5,color='y', align='center',
bottom=[max(i,j) for i,j in zip(df4['bikes'],df4['bikes1'])])
# set the x ticks with names
plt.xticks(tick_pos, df4['Year'])
# Set the label and legends
ax1.set_ylabel("Count")
ax1.set_xlabel("Year")
plt.legend(loc='upper left')
ax1.axhline(y=0, color='k')
ax1.axvline(x=0, color='k')
# Set a buffer around the edge
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/6WP70.png" rel="nofollow"><img src="http://i.stack.imgur.com/6WP70.png" alt="enter image description here"></a></p>
| 0 | 2016-07-22T22:12:22Z | [
"python",
"pandas",
"matplotlib"
] |
Making a combat system using classes | 38,535,620 | <p>I've been trying to implement a combat system in my text game. I thought I could just create instances in each of my scenes that have different <code>int</code> values to minimize hard-coding that the exercise from the book I have I think is trying to teach me. When I access the <code>TestRoom</code> class by dict it says in Powershell:</p>
<pre><code>TypeError: __init__() takes exactly 4 arguments (1 given)
</code></pre>
<p>Please help me figure out how to do this I'm new to Python and the book doesn't explain classes that well.</p>
<pre><code>class Hero(object):
def __init__(self, health1, damage1, bullets1):
self.health1 = health1
self.damage1 = damage1
self.bullets1 = bullets1
class Gothon(object):
def __init__(self, health2, damage2):
self.health2 = health2
self.damage2 = damage2
class Combat(object):
def battle():
while self.health1 != 0 or self.health2 != 0 or self.bullets1 != 0:
print "You have %r health left" % self.health1
print "Gothon has %r health left" % self.health2
fight = raw_input("Attack? Y/N?\n> ")
if fight == "Y" or fight == "y":
self.health2 - self.damage1
self.health1 - self.damage2
elif fight == "N" or fight == "n":
self.health1 - self.damage2
else:
print "DOES NOT COMPUTE"
if self.health1 == 0:
return 'death'
else:
pass
class TestRoom(Combat, Hero, Gothon):
def enter():
hero = Hero(10, 2, 10)
gothon = Gothon(5, 1)
</code></pre>
<p>This is in Python 2.7</p>
<p>Side note: Is learning Python 2.7 a bad thing with Python 3 out? Answer isn't really required, just wondering.</p>
| 0 | 2016-07-22T21:38:22Z | 38,535,651 | <p>I'm not sure why you've made TestRoom inherit from those other three classes. That doesn't make sense; inheritance means an "is-a" relationship, but while a room might <em>contain</em> those things, it isn't actually any of those things itself.</p>
<p>This is the source of your problem, as TestRoom has now inherited the <code>__init__</code> method from the first one of those that defines it, Hero, and so it requires the parameters that Hero does. Remove that inheritance.</p>
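A minimal sketch of the fix, assuming the room simply needs to create the fighters (composition) rather than be one of them (inheritance):

```python
class Hero(object):
    def __init__(self, health, damage, bullets):
        self.health = health
        self.damage = damage
        self.bullets = bullets

class Gothon(object):
    def __init__(self, health, damage):
        self.health = health
        self.damage = damage

class TestRoom(object):        # no Hero/Gothon bases: a room *has* fighters
    def enter(self):
        hero = Hero(10, 2, 10)
        gothon = Gothon(5, 1)
        return hero, gothon

hero, gothon = TestRoom().enter()
print("%d %d" % (hero.health, gothon.damage))   # 10 1
```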
| 3 | 2016-07-22T21:41:47Z | [
"python",
"python-2.7",
"syntax",
"typeerror"
] |
Check if field exists in DNS record using scapy | 38,535,638 | <p>I am parsing DNS packets using scapy. Not all of the DNS Answer Records have all of the fields. For example, some Answers don't have rdata so </p>
<pre><code>answer = packet.an[0].rdata
</code></pre>
<p>results in the error:</p>
<pre><code>AttributeError: rdata
</code></pre>
<p>Is there a way of testing to see if the rdata field exists in an answer record? I know that you can check for layers</p>
<pre><code>if packet.haslayer(DNS):
    ...
</code></pre>
<p>so you don't ask for layers that don't exist. Is there a parallel for fields within layers?</p>
| 1 | 2016-07-22T21:40:47Z | 38,535,743 | <p>You could always do a try-except. Might not be as slick as what you're looking for, though.</p>
<pre><code>try:
answer = packet.an[0].rdata
except AttributeError:
# do something
</code></pre>
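If you prefer not to use exceptions for control flow, <code>getattr</code> with a default works on any Python object, including scapy layers. A sketch using a plain stand-in class (not a real DNS record) so it runs anywhere:

```python
class Record(object):            # stand-in for a scapy DNS answer record
    pass

rec_without = Record()
rec_with = Record()
rec_with.rdata = '93.184.216.34'   # hypothetical rdata value

# getattr's third argument is returned when the attribute is missing,
# so no AttributeError is ever raised
print(getattr(rec_without, 'rdata', None))   # None
print(getattr(rec_with, 'rdata', None))      # 93.184.216.34
```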
| 1 | 2016-07-22T21:50:17Z | [
"python",
"dns",
"scapy"
] |
Streaming with Tweepy: randomly getting 'TypeError: unsupported operand type(s) for +: 'int' and 'str' | 38,535,687 | <p>I'm using the Tweepy library to track a hashtag (I'm using the streaming API) and I save to a file on my hard drive after a certain number of records. My code works and I leave it running. After some time, I randomly get the following error</p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Library/Python/2.7/site-packages/tweepy/streaming.py", line 294, in _run
raise exception
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
<p>If I wait a bit and restart, the program starts running but I soon get the error again.</p>
<p>I also noticed that by the time I get home from work in the evening, the error would have occurred, and from then onwards, even if I restart, I keep getting the error until hours later.</p>
<p>Has anyone encountered this issue before and if so, what was the solution?</p>
<p>I'm running python 2.7 and Tweepy 3.5</p>
<p>Thanks</p>
<p>Updates: My code</p>
<pre><code>trackList = '#GOP'
try:
listen = myListener.SListener(api)
stream = Stream(auth, listen)
stream.filter(track=trackList)
except Exception, inst:
print "An unexpected error occured"
print (type(inst))
_, _, tb = sys.exc_info()
filename, lineno, funname, line = traceback.extract_tb(tb)[-1]
print('{}:{}, in {}\n {}'.format(filename, lineno, funname, line))
</code></pre>
<p>And in myListener.py</p>
<pre><code>def on_status(self, status):
try:
self.output.write(status + "\n")
self.counter += 1
if self.counter >= 5000:
self.output.close()
self.output = open(time.strftime('%Y%m%d-%H%M%S') + '.json', 'w')
self.counter = 0
return
except Exception, inst:
print "An unexpected error occured in Listener"
print (type(inst))
_, _, tb = sys.exc_info()
filename, lineno, funname, line = traceback.extract_tb(tb)[-1]
print('{}:{}, in {}\n {}'.format(filename, lineno, funname, line))
pass
</code></pre>
 | 0 | 2016-07-22T21:45:08Z | 38,536,383 | <p>Can you add minimal code showing how you are passing your hashtag? This could be related either to the way you are feeding the hashtag or to an open issue in tweepy.</p>
<p>This particular issue regarding streaming is very similar to what you are experiencing: <a href="https://github.com/tweepy/tweepy/issues/748" rel="nofollow">TypeError: expected string or buffer in streaming module #748</a></p>
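<p>One thing worth checking in the listener code above: <code>on_status</code> receives a tweepy <code>Status</code> object rather than a string, so <code>self.output.write(status + "\n")</code> will raise a <code>TypeError</code> once data starts flowing. A minimal sketch of the string-based fix (<code>FakeStatus</code> is a made-up stand-in here, and the <code>_json</code> attribute is an assumption about the tweepy version in use):</p>

```python
import json

# Hypothetical stand-in for a tweepy Status object; real tweepy Status
# objects carry the raw tweet dict (here assumed on a `_json` attribute).
class FakeStatus(object):
    def __init__(self, raw):
        self._json = raw

status = FakeStatus({"text": "#GOP example tweet"})

# Writing the object directly (status + "\n") is what raises the TypeError;
# serialize it to a string before writing it to the file.
line = json.dumps(status._json) + "\n"
```

<p>If your tweepy version exposes an <code>on_data</code> method, overriding that instead gives you the raw JSON string directly.</p>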
| 0 | 2016-07-22T23:01:14Z | [
"python",
"tweepy"
] |
Parsing multi-level json file into Python dict | 38,535,698 | <p>I'm trying to parse a json file into a python dict (to make it copy-ready for Redshift).</p>
<p>My intended output format is: </p>
<pre><code>{col1:val1, col2:val2,..}
{col1:val1, col2:val2,..}
</code></pre>
<p>The current format of the file is:</p>
<pre><code>{"val0": {"col1":"val1", "col2":"val2",..},
"val0": {"col1":"val1", "col2":"val2",..},..}
</code></pre>
<p>Where "val0" is a date field (only value, no column name) that I don't need in my output. </p>
<p>How do I convert the latter format to the former? I've tried going through documentation for the json module (as well as a few other StackOverflow answers), but nothing seems to click. Any help is appreciated. Thanks!</p>
| -1 | 2016-07-22T21:46:02Z | 39,253,947 | <p>It sounds like you need a json array.</p>
<pre><code>import json
with open('example.json', 'r') as f:
data = json.load(f)
array_data = list(data.values())
print(array_data) # [{'col1': 'val1', 'col2': 'val2'},
# {'col1': 'val1', 'col2': 'val2'}]
with open('array_result.json', 'w') as f:
json.dump(array_data, f)
</code></pre>
| 1 | 2016-08-31T15:46:59Z | [
"python",
"json",
"parsing"
] |
Add double quotes to string in python | 38,535,707 | <p>If my input text is</p>
<pre><code>a
b
c
d
e
f
g
</code></pre>
<p>and I want my output text to be: (with the double quotes)</p>
<pre><code>"a b c d e f g"
</code></pre>
<p>Where do I go after this step:</p>
<pre><code>" ".join([a.strip() for a in b.split("\n") if a])
</code></pre>
| -1 | 2016-07-22T21:46:39Z | 38,535,752 | <pre><code>'"%s"' % " ".join([a.strip() for a in s.split("\n") if a])
</code></pre>
| 2 | 2016-07-22T21:51:37Z | [
"python",
"string"
] |
Add double quotes to string in python | 38,535,707 | <p>If my input text is</p>
<pre><code>a
b
c
d
e
f
g
</code></pre>
<p>and I want my output text to be: (with the double quotes)</p>
<pre><code>"a b c d e f g"
</code></pre>
<p>Where do I go after this step:</p>
<pre><code>" ".join([a.strip() for a in b.split("\n") if a])
</code></pre>
| -1 | 2016-07-22T21:46:39Z | 38,535,817 | <p>You have successfully constructed a string without the quotes. So you need to add the double quotes. There are a few different ways to do this in Python:</p>
<pre><code>>>> my_str = " ".join([a.strip() for a in b.split("\n") if a])
>>> print '"' + my_str + '"' #Use single quotes to surround the double quotes
"a b c d e f g"
>>> print "\"" + my_str + "\"" #Escape the double quotes
"a b c d e f g"
>>> print '"%s"'%my_str #Use string formatting
"a b c d e f g"
</code></pre>
<p>Any of these options are valid and idiomatic Python. I might go with the first option myself simply because it's short and clear</p>
| 5 | 2016-07-22T21:58:11Z | [
"python",
"string"
] |
Adding generated PDF to FileField fails; adding local PDF works | 38,535,726 | <p>I'm trying to generate a PDF file and to add it to a Django <code>FileField</code>. Nothing fancy, but I can't seem to get it to work.</p>
<p>When using a local file on my hard drive, everything works fine:</p>
<pre><code>>>> invoice = Invoice.objects.get(pk=153)
>>> local_file = open('my.pdf')
>>> djangofile = File(local_file)
>>> type(local_file)
<type 'file'>
>>> type(djangofile)
<class 'django.core.files.base.File'>
>>> invoice.pdf = djangofile
>>> invoice.pdf
<FieldFile: my.pdf>
>>> invoice.save()
>>> invoice.pdf
<FieldFile: documents/invoices/2016/07/my.pdf>
</code></pre>
<p>However when trying the same with a generated PDF, things don't work:</p>
<pre><code>>>> invoice = Invoice.objects.get(pk=154)
>>> html_template = get_template('invoicing/invoice_pdf.html')
>>> rendered_html = html_template.render({'invoice': invoice}).encode(encoding="UTF-8")
>>> pdf_file = HTML(string=rendered_html).write_pdf()
>>> type(pdf_file)
<type 'str'>
>>> djangofile = File(pdf_file)
>>> type(djangofile)
<class 'django.core.files.base.File'>
>>> invoice.pdf = djangofile
>>> invoice.pdf
<FieldFile: None>
>>> invoice.save()
>>> invoice.pdf
<FieldFile: None>
</code></pre>
<p>What am I doing wrong? Why is one <code>django.core.files.base.File</code> object accepted and another one isn't?</p>
| 0 | 2016-07-22T21:48:22Z | 38,535,865 | <p><code>File()</code> is only a wrapper around Python's file object. It won't work with strings like your generated PDF. For that, you need the <a href="https://docs.djangoproject.com/en/1.9/ref/files/file/#the-contentfile-class" rel="nofollow">ContentFile class</a>. Try:</p>
<pre><code>from django.core.files.base import ContentFile

(...)
djangofile = ContentFile(pdf_file)
invoice.pdf = djangofile
invoice.pdf.name = "myfilename.pdf"
invoice.save()
</code></pre>
| 1 | 2016-07-22T22:01:52Z | [
"python",
"django"
] |
subtraction error | 38,535,794 | <p>I have a certain error in a speed testing program I have built in Python. My program works by taking a variable of 0, adding 1 and printing its value. It repeats this 10,000 times and records how long this all takes. The time taken at every 1,000-cycle checkpoint is arranged into a table with the columns "Cycles", "Time", and "Time between previous checkpoint". The time between checkpoints remains normal until it comes to 5 and 6 thousand cycles. It looks like this:</p>
<p><a href="http://i.stack.imgur.com/Hlz3I.png" rel="nofollow"><img src="http://i.stack.imgur.com/Hlz3I.png" alt="enter image description here"></a></p>
<p>Here is a snippet of the table code:</p>
<pre><code>print(" 4,000 %ss %ss" % (check4, check4 - check3))
print("| | | |")
print(" 5,000 %ss %ss" % (check5, check5 - check4))
print("| | | |")
print(" 6,000 %ss %ss" % (check6, check6 - check5))
</code></pre>
<p>As you can see, the results spike then dip. The problem is, they shouldn't. It wouldn't make sense that there can be a negative amount of time between two events. I've checked where the checkpoint variables are assigned and they are like this;</p>
<pre><code> elif(count == 4000):
check4 = round(time() - start, 3)
check4 -= 0.3
Beep(825, 100)
print(" %s %ss" % (count, round(time() - start, 2)))
elif(count == 5000):
check5 = round(time() - start, 3)
check5 -= - 0.4
Beep(850, 100)
print(" %s %ss" % (count, round(time() - start, 2)))
elif(count == 6000):
check6 = round(time() - start, 3)
check6 -= 0.5
Beep(875, 100)
print(" %s %ss" % (count, round(time() - start, 2)))
</code></pre>
<p>There is either something I've overlooked because I'm looking for something else, or my method is at fault. I've only come here as a last resort, I've been slaving over this on and off for at least 3 months.</p>
<p>If anyone could find what is causing this value anomaly, please could they respond.</p>
| 0 | 2016-07-22T21:56:19Z | 38,535,887 | <p>In your second elif statement in your code snippet, it looks like you have a minus sign that should not be there:</p>
<pre><code>elif(count == 5000):
check5 = round(time() - start, 3)
check5 -= [-] 0.4
Beep(850, 100)
</code></pre>
<p>This makes your program add 0.4 rather than subtract, which seems to be the source of your error. </p>
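<p>To see the effect in isolation, here is the same operation on plain numbers (values made up for illustration):</p>

```python
check5 = 1.0
check5 -= -0.4   # equivalent to check5 = check5 - (-0.4), i.e. an addition
# check5 is now 1.4, not 0.6

check4 = 1.0
check4 -= 0.3    # a genuine subtraction
# check4 is now 0.7
```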
| 2 | 2016-07-22T22:03:19Z | [
"python",
"subtraction"
] |
Looping over large sparse array | 38,535,861 | <p>Let's say I have a (sparse) matrix <code>M</code> size <code>(N*N, N*N)</code>. I want to select elements from this matrix where the outer product of <code>grid</code> (a <code>(n,m)</code> array, where <code>n*m=N</code>) is True (it is a boolean 2D array, and <code>na=grid.sum()</code>). This can be done as follows</p>
<pre><code>result = M[np.outer( grid.flatten(),grid.flatten() )].reshape (( N, N ) )
</code></pre>
<p><code>result</code> is an <code>(na,na)</code> sparse array (and <code>na < N</code>). The previous line is what I want to achieve: get the elements of M that are true from the product of <code>grid</code>, and squeeze the ones that aren't true out of the array.</p>
<p>As <code>n</code> and <code>m</code> (and hence <code>N</code>) grow, and <code>M</code> and <code>result</code> are sparse matrices, I am not able to do this efficiently in terms of memory or speed. Closest I have tried is:</p>
<pre><code>result = sp.lil_matrix ( (1, N*N), dtype=np.float32 )
# Calculate outer product
A = np.einsum("i,j", grid.flatten(), grid.flatten())
cntr = 0
it = np.nditer ( A, flags=['multi_index'] )
while not it.finished:
if it[0]:
result[0,cntr] = M[it.multi_index[0], it.multi_index[1]]
cntr += 1
# reshape result to be a N*N sparse matrix
</code></pre>
<p>The last reshape could be done by <a href="http://stackoverflow.com/questions/16511879/reshape-sparse-matrix-efficiently-python-scipy-0-12">this approach</a>, but I haven't got there yet, as the while loop is taking forever. </p>
<p>I have also tried selecting the nonzero elements of A and looping over them, but this eats up all of the memory:</p>
<pre><code>A=np.einsum("i,j", grid.flatten(), grid.flatten())
nzero = A.nonzero() # This eats lots of memory
cntr = 0
for (i,j) in zip (*nzero):
temp_mat[0,cntr] = M[i,j]
cnt += 1
</code></pre>
<p>'n' and 'm' in the example above are around 300. </p>
| 0 | 2016-07-22T22:01:22Z | 38,538,601 | <p>I don't know if it was a typo, or code error, but your example is missing an <code>iternext</code>:</p>
<pre><code>R=[]
it = np.nditer ( A, flags=['multi_index'] )
while not it.finished:
if it[0]:
R.append(M[it.multi_index])
it.iternext()
</code></pre>
<p>I think appending to a list is simpler and faster than <code>R[ctnr]=...</code>. It's competitive if <code>R</code> is a regular array, and sparse indexing is slower (even the fastest <code>lil</code> format).</p>
<p><code>ndindex</code> wraps this use of a nditer as:</p>
<pre><code>R=[]
for index in np.ndindex(A.shape):
if A[index]:
R.append(M[index])
</code></pre>
<p><code>ndenumerate</code> also works:</p>
<pre><code>R = []
for index,a in np.ndenumerate(A):
if a:
R.append(M[index])
</code></pre>
<p>But I wonder if you really want to advance the <code>cntr</code> each <code>it</code> step, not just the <code>True</code> cases. Otherwise reshaping <code>result</code> to <code>(N,N)</code> doesn't make much sense. But in that case, isn't your problem just</p>
<pre><code>M[:N, :N].multiply(A)
</code></pre>
<p>or if <code>M</code> was a dense array:</p>
<pre><code>M[:N, :N]*A
</code></pre>
<p>In fact if both <code>M</code> and <code>A</code> are sparse, then the <code>.data</code> attribute of that <code>multiply</code> will be the same as the <code>R</code> list.</p>
<pre><code>In [76]: N=4
In [77]: M=np.arange(N*N*N*N).reshape(N*N,N*N)
In [80]: a=np.array([0,1,0,1])
In [81]: A=np.einsum('i,j',a,a)
In [82]: A
Out[82]:
array([[0, 0, 0, 0],
[0, 1, 0, 1],
[0, 0, 0, 0],
[0, 1, 0, 1]])
In [83]: M[:N, :N]*A
Out[83]:
array([[ 0, 0, 0, 0],
[ 0, 17, 0, 19],
[ 0, 0, 0, 0],
[ 0, 49, 0, 51]])
In [84]: c=sparse.csr_matrix(M)[:N,:N].multiply(sparse.csr_matrix(A))
In [85]: c.data
Out[85]: array([17, 19, 49, 51], dtype=int32)
In [89]: [M[index] for index, a in np.ndenumerate(A) if a]
Out[89]: [17, 19, 49, 51]
</code></pre>
| 1 | 2016-07-23T05:45:12Z | [
"python",
"arrays",
"numpy",
"scipy"
] |
Efficiently index a multidimensional numpy array by another array | 38,535,868 | <p>I have an array <code>x</code> whose specific values I would like to access, with the indices given by another array.</p>
<p>For example, <code>x</code> is </p>
<pre><code>array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
</code></pre>
<p>and the indices are an array of Nx2</p>
<pre><code>idxs = np.array([[1,2], [4,3], [3,3]])
</code></pre>
<p>I would like a function that returns an array of x[1,2], x[4,3], x[3,3] or [7, 23, 18]. The following code does the trick, but I would like to speed it up for large arrays, perhaps by avoiding the for loop.</p>
<pre><code>import numpy as np
def arrayvalsofinterest(x, idx):
output = np.zeros(idx.shape[0])
for i in range(len(output)):
output[i] = x[tuple(idx[i,:])]
return output
if __name__ == "__main__":
xx = np.arange(25).reshape(5,5)
idxs = np.array([[1,2],[4,3], [3,3]])
print arrayvalsofinterest(xx, idxs)
</code></pre>
| 0 | 2016-07-22T22:01:58Z | 38,535,919 | <p>You can pass in an iterable of <code>axis0</code> coordinates and an iterable of <code>axis1</code> coordinates. See <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing" rel="nofollow">the Numpy docs here</a>.</p>
<pre><code>i0, i1 = zip(*idxs)
x[i0, i1]
</code></pre>
<p>As @Divakar points out in the comments, this is less memory efficient than using a view of the array i.e.</p>
<pre><code>x[idxs[:, 0], idxs[:, 1]]
</code></pre>
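<p>Putting the pieces together with the sample arrays from the question, a quick sketch:</p>

```python
import numpy as np

x = np.arange(25).reshape(5, 5)
idxs = np.array([[1, 2], [4, 3], [3, 3]])

# Integer array indexing: row indices from column 0, column indices from column 1.
vals = x[idxs[:, 0], idxs[:, 1]]
# vals is array([ 7, 23, 18])
```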
| 3 | 2016-07-22T22:07:44Z | [
"python",
"arrays",
"numpy"
] |
Two python pools, two queues | 38,535,920 | <p>I am trying to understand how pool and queue work in Python, and the following example doesn't work as expected. I expect the program to end, but it's stuck in an infinite loop because the second queue isn't getting emptied.</p>
<pre><code>import multiprocessing
import os
import time
inq = multiprocessing.Queue()
outq = multiprocessing.Queue()
def worker_main(q1, q2):
while True:
i = q1.get(True)
time.sleep(.1)
q2.put(i*2)
def worker2(q):
print q.get(True)
p1 = multiprocessing.Pool(3, worker_main,(inq, outq,))
p2 = multiprocessing.Pool(2, worker2,(outq,))
for i in range(50):
inq.put(i)
while inq.qsize()>0 or outq.qsize()>0:
print 'q1 size', inq.qsize(), 'q2 size', outq.qsize()
time.sleep(.1)
</code></pre>
<p>the output shows that the second queue (outq) is .get once, but that's all.</p>
<p>output:</p>
<blockquote>
<pre><code>q1 size 49 q2 size 0
q1 size 47 q2 size 0
2
4
q1 size 44 q2 size 1
q1 size 41 q2 size 4
q1 size 38 q2 size 7
q1 size 35 q2 size 11
q1 size 31 q2 size 14
q1 size 27 q2 size 18
q1 size 24 q2 size 21
q1 size 22 q2 size 23
q1 size 19 q2 size 26
q1 size 15 q2 size 30
q1 size 12 q2 size
</code></pre>
</blockquote>
<p>Why isn't the worker2 getting called until the outq is empty?</p>
| -1 | 2016-07-22T22:07:55Z | 38,536,972 | <p>This is a very odd way to use a <code>Pool</code>. The function passed to the constructor is called only once per process in the pool. It's intended for one-time initialization tasks, and is rarely used.</p>
<p>As is, your <code>worker2</code> is called exactly twice, one time for each process in yout <code>p2</code> pool. Your function gets one value from a queue and then exits. The process never does anything else. So it's doing exactly what you coded it to do.</p>
<p>There's no evident reason to use a <code>Pool</code> here at all; creating 5 <code>multiprocessing.Process</code> objects instead would be more natural.</p>
<p>If you feel you have to do it this way, then you need to put a loop in <code>worker2</code>. Here's one way:</p>
<pre><code>import multiprocessing
import time
def worker_main(q1, q2):
while True:
i = q1.get()
if i is None:
break
time.sleep(.1)
q2.put(i*2)
def worker2(q):
while True:
print(q.get())
if __name__ == "__main__":
inq = multiprocessing.Queue()
outq = multiprocessing.Queue()
p1 = multiprocessing.Pool(3, worker_main,(inq, outq,))
p2 = multiprocessing.Pool(2, worker2,(outq,))
for i in range(50):
inq.put(i)
for i in range(3): # tell worker_main we're done
inq.put(None)
while inq.qsize()>0 or outq.qsize()>0:
print('q1 size', inq.qsize(), 'q2 size', outq.qsize())
time.sleep(.1)
</code></pre>
<h2>SUGGESTED</h2>
<p>This is a "more natural" way to use <code>Process</code> objects instead, and using queue sentinels (special values - here <code>None</code>) to let processes know when to stop. BTW, I'm using Python 3, so use <code>print</code> as a function rather than as a statement.</p>
<pre><code>import multiprocessing as mp
import time
def worker_main(q1, q2):
while True:
i = q1.get()
if i is None:
break
time.sleep(.1)
q2.put(i*2)
def worker2(q):
while True:
i = q.get()
if i is None:
break
print(i)
def wait(procs):
alive_count = len(procs)
while alive_count:
alive_count = 0
for p in procs:
if p.is_alive():
p.join(timeout=0.1)
print('q1 size', inq.qsize(), 'q2 size', outq.qsize())
alive_count += 1
if __name__ == "__main__":
inq = mp.Queue()
outq = mp.Queue()
p1s = [mp.Process(target=worker_main, args=(inq, outq,))
for i in range(3)]
p2s = [mp.Process(target=worker2, args=(outq,))
for i in range(2)]
for p in p1s + p2s:
p.start()
for i in range(50):
inq.put(i)
for p in p1s: # tell worker_main we're done
inq.put(None)
wait(p1s)
# Tell worker2 we're done
for p in p2s:
outq.put(None)
wait(p2s)
</code></pre>
| 3 | 2016-07-23T00:27:30Z | [
"python",
"queue",
"pool"
] |
Pandas: Delete rows based on multiple columns values | 38,535,931 | <p>I have a dataframe with columns <code>A,B,C</code>. I have a list of tuples like <code>[(x1,y1), (x2,y2), ...]</code>. I would like to delete all rows that meet the following condition:
<code>(B=x1 && C=y1) | (B=x2 && C=y2) | ...</code> How can I do that in pandas? I wanted to use the <code>isin</code> function, but not sure if it is possible since my list has tuples. I could do something like this:</p>
<pre><code>for x,y in tuples:
df = df.drop(df[df.B==x && df.C==y].index)
</code></pre>
<p>Maybe there is an easier way.</p>
| 5 | 2016-07-22T22:09:36Z | 38,536,020 | <p>It will probably be more efficient to <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow">use boolean indexing</a> than a bunch of calls to <code>DataFrame.drop</code>. This is because Pandas doesn't have to reallocate memory in each loop iteration.</p>
<pre><code>m = pd.Series(False, index=df.index)
for x,y in tuples:
m |= (df.B == x) & (df.C == y)
df = df[~m]
</code></pre>
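<p>A small self-contained run of the masking approach (the sample data here is made up for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30],
                   'B': [1, 1, 2],
                   'C': [5, 6, 7]})
tuples = [(1, 6), (2, 7)]

# Build one boolean mask, OR-ing in each (B, C) pair to drop.
m = pd.Series(False, index=df.index)
for x, y in tuples:
    m |= (df.B == x) & (df.C == y)
df = df[~m]
# only the row with B=1, C=5 survives
```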
| 0 | 2016-07-22T22:18:50Z | [
"python",
"pandas"
] |
Pandas: Delete rows based on multiple columns values | 38,535,931 | <p>I have a dataframe with columns <code>A,B,C</code>. I have a list of tuples like <code>[(x1,y1), (x2,y2), ...]</code>. I would like to delete all rows that meet the following condition:
<code>(B=x1 && C=y1) | (B=x2 && C=y2) | ...</code> How can I do that in pandas? I wanted to use the <code>isin</code> function, but not sure if it is possible since my list has tuples. I could do something like this:</p>
<pre><code>for x,y in tuples:
df = df.drop(df[df.B==x && df.C==y].index)
</code></pre>
<p>Maybe there is an easier way.</p>
| 5 | 2016-07-22T22:09:36Z | 38,536,146 | <p>Use pandas indexing</p>
<pre><code>df.set_index(list('BC')).drop(tuples, errors='ignore').reset_index()
</code></pre>
<hr>
<h3>Timing</h3>
<pre><code>def linear_indexing_based(df, tuples):
idx = np.array(tuples)
BC_arr = df[['B','C']].values
shp = np.maximum(BC_arr.max(0)+1,idx.max(0)+1)
BC_IDs = np.ravel_multi_index(BC_arr.T,shp)
idx_IDs = np.ravel_multi_index(idx.T,shp)
return df[~np.in1d(BC_IDs,idx_IDs)]
def divakar(df, tuples):
idx = np.array(tuples)
mask = (df.B.values == idx[:, None, 0]) & (df.C.values == idx[:, None, 1])
return df[~mask.any(0)]
def pirsquared(df, tuples):
return df.set_index(list('BC')).drop(tuples).reset_index()
</code></pre>
<p><strong>10 rows, 1 tuple</strong></p>
<pre><code>np.random.seed([3,1415])
df = pd.DataFrame(np.random.choice(range(10), (10, 3)), columns=list('ABC'))
tuples = [tuple(row) for row in np.random.choice(range(10), (1, 2))]
</code></pre>
<p><a href="http://i.stack.imgur.com/mwQ8M.png" rel="nofollow"><img src="http://i.stack.imgur.com/mwQ8M.png" alt="enter image description here"></a></p>
<p><strong>10,000 rows, 500 tuples</strong></p>
<pre><code>np.random.seed([3,1415])
df = pd.DataFrame(np.random.choice(range(10), (10000, 3)), columns=list('ABC'))
tuples = [tuple(row) for row in np.random.choice(range(10), (500, 2))]
</code></pre>
<p><a href="http://i.stack.imgur.com/JMjqD.png" rel="nofollow"><img src="http://i.stack.imgur.com/JMjqD.png" alt="enter image description here"></a></p>
| 5 | 2016-07-22T22:32:59Z | [
"python",
"pandas"
] |
Pandas: Delete rows based on multiple columns values | 38,535,931 | <p>I have a dataframe with columns <code>A,B,C</code>. I have a list of tuples like <code>[(x1,y1), (x2,y2), ...]</code>. I would like to delete all rows that meet the following condition:
<code>(B=x1 && C=y1) | (B=x2 && C=y2) | ...</code> How can I do that in pandas? I wanted to use the <code>isin</code> function, but not sure if it is possible since my list has tuples. I could do something like this:</p>
<pre><code>for x,y in tuples:
df = df.drop(df[df.B==x && df.C==y].index)
</code></pre>
<p>Maybe there is an easier way.</p>
| 5 | 2016-07-22T22:09:36Z | 38,536,170 | <p><strong>Approach #1</strong></p>
<p>Here's a vectorized approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy's broadcasting</code></a> -</p>
<pre><code>def broadcasting_based(df, tuples):
idx = np.array(tuples)
mask = (df.B.values == idx[:, None, 0]) & (df.C.values == idx[:, None, 1])
return df[~mask.any(0)]
</code></pre>
<p>Sample run -</p>
<pre><code>In [224]: df
Out[224]:
A B C
0 6 4 4
1 2 0 3
2 8 3 4
3 7 8 3
4 6 7 8
5 3 3 2
6 5 4 2
7 2 4 7
8 6 1 6
9 1 1 1
In [225]: tuples = [(3,4),(7,8),(1,6)]
In [226]: broadcasting_based(df,tuples)
Out[226]:
A B C
0 6 4 4
1 2 0 3
3 7 8 3
5 3 3 2
6 5 4 2
7 2 4 7
9 1 1 1
</code></pre>
<hr>
<p><strong>Approach #2 : To cover a generic number of columns</strong></p>
<p>For a case like this, one could collapse the information from the different columns into a single entry that represents the uniqueness across all columns. This can be achieved by treating each row as an indexing tuple, so each row becomes one scalar ID. Similarly, the list of tuples to be matched can be reduced to a <code>1D</code> array, with each tuple becoming one scalar. Finally, we use <code>np.in1d</code> to find the correspondence, get the valid mask, and obtain the dataframe with the matching rows removed. Thus, the implementation would be -</p>
<pre><code>def linear_indexing_based(df, tuples):
idx = np.array(tuples)
BC_arr = df[['B','C']].values
shp = np.maximum(BC_arr.max(0)+1,idx.max(0)+1)
BC_IDs = np.ravel_multi_index(BC_arr.T,shp)
idx_IDs = np.ravel_multi_index(idx.T,shp)
return df[~np.in1d(BC_IDs,idx_IDs)]
</code></pre>
| 4 | 2016-07-22T22:35:34Z | [
"python",
"pandas"
] |
Debugging from PyCharm to Visual Studio C++ code | 38,536,062 | <p>I'm currently trying to debug issues in <a href="https://github.com/BVLC/caffe/tree/windows" rel="nofollow">Caffe for Windows</a> PyCaffe.</p>
<p>Because of a <a href="https://github.com/Microsoft/PTVS/issues/1439" rel="nofollow">bug in Python Tools for Visual Studio</a>, PTVS doesn't work so I'm using PyCharm and trying to attach to PyCaffe's process through Visual Studio 2013. That is, I'm running PyCharm debugger on a Python script with a breakpoint set at the point where I call the Python entry point into PyCaffe. </p>
<p>I debug the Python script in PyCharm which calls modules written in C++ in VS. I want to debug those modules in C++. So I'm trying to attach to the PyCharm or Python processes with breakpoints set in VS.</p>
<p>The problem is that the breakpoint isn't firing at the entry point in PyCaffe in the Visual Studio C++ code. </p>
<p>Has anyone successfully gotten this kind of thing to work or is there an alternative way of doing this? </p>
| 0 | 2016-07-22T22:23:10Z | 38,603,201 | <p>We attach to one process and allow to set breakpoints within code that has not started from the VS debugger. But one important issue is that we often debug/run the app in the VS, for example, we debug the Web code that runs under IIS, we will attach to the IIS process or others. </p>
<p>Your project is different from the above sample, you run/debug your app in Pycharm (not the VS), but you want use the VS Attach to process function, so it would have a limitation. As you said that you debug script in PyCharm, and want to call C++, so you would check that whether the PyCharm supports a similar feature like the attach tool in VS.</p>
| 0 | 2016-07-27T03:27:26Z | [
"python",
"c++",
"visual-studio",
"debugging",
"pycharm"
] |
Youcompleteme plugin for vim fails to provide completion for error codes from errno.h | 38,536,090 | <p>I've never used vim at work, only starting to familiarize myself with it and so far like it very much.</p>
<p>For <strong>YouCompleteMe</strong> plugin to work for my test project I took .ycm_extra_conf.py file from <a href="https://github.com/Valloric/ycmd/blob/master/cpp/ycm/.ycm_extra_conf.py" rel="nofollow">here</a> and added '-I/usr/include' and 'path/to/my/project/' to flags. It works very well, it can complete even c++11's <code>auto</code> types!</p>
<p>But I couldn't make it complete error codes like <code>EINTR</code>, <code>EAGAIN</code>, etc., that are supposed to be visible after <code>#include <errno.h></code></p>
<p>If I call <code>:YcmComplete GoToDeclaration</code> with my cursor being on <code>EINTR</code>, it's declaration is correctly found however...</p>
<p>Is there a solution?</p>
| 0 | 2016-07-22T22:25:41Z | 38,556,721 | <p>By further googling I found out that macro completion can be obtained with (Ctrl-Space).</p>
| 1 | 2016-07-24T20:58:41Z | [
"python",
"c++",
"vim",
"youcompleteme"
] |
Tensorflow Deep and Wide Demo issue | 38,536,155 | <p>I tried to run the code directly from tensorflow's <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">Deep and Wide demo repo</a>:</p>
<p>There is an immediate issue with <code>urllib</code> which can easily be fixed by using <code>urllib.request</code> instead. The code will still not run afterward though, I get the following error:</p>
<pre><code>m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 182, in fit
monitors=monitors)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 449, in _train_model
train_op, loss_op = self._get_train_ops(features, targets)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 156, in _get_train_ops
logits = self._logits(features, is_training=True)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 294, in _logits
if self._get_linear_feature_columns() and self._get_dnn_feature_columns():
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 216, in _get_dnn_feature_columns
self._dnn_feature_columns)) if self._dnn_feature_columns else None
TypeError: unorderable types: str() < _SparseColumnKeys()
</code></pre>
<p>I'm having trouble finding the source of this issue. There doesn't seem to be anyone else experiencing this issue. Tensorflow is installed in virtualenv (tensorflow) on python 3.5.</p>
| 2 | 2016-07-22T22:33:57Z | 38,558,946 | <p>urllib is for 2.7, try running it in 2.7 instead of 3.5.</p>
| 0 | 2016-07-25T02:37:40Z | [
"python",
"tensorflow",
"python-3.5"
] |
Tensorflow Deep and Wide Demo issue | 38,536,155 | <p>I tried to run the code directly from tensorflow's <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">Deep and Wide demo repo</a>:</p>
<p>There is an immediate issue with <code>urllib</code> which can easily be fixed by using <code>urllib.request</code> instead. The code will still not run afterward though, I get the following error:</p>
<pre><code>m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 182, in fit
monitors=monitors)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 449, in _train_model
train_op, loss_op = self._get_train_ops(features, targets)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 156, in _get_train_ops
logits = self._logits(features, is_training=True)
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 294, in _logits
if self._get_linear_feature_columns() and self._get_dnn_feature_columns():
File "/Users/USER/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 216, in _get_dnn_feature_columns
self._dnn_feature_columns)) if self._dnn_feature_columns else None
TypeError: unorderable types: str() < _SparseColumnKeys()
</code></pre>
<p>I'm having trouble finding the source of this issue. There doesn't seem to be anyone else experiencing this issue. Tensorflow is installed in virtualenv (tensorflow) on python 3.5.</p>
| 2 | 2016-07-22T22:33:57Z | 39,523,280 | <p>I had the same issue. This solved it.</p>
<p>I had to install Tensorflow under Python 2.7. Here's how to do it with a virtual environment, using Conda:</p>
<pre><code># Python 2.7
$ conda create -n tensorflow python=2.7
$ source activate tensorflow
(tensorflow)$ # Your prompt should change
# Linux/Mac OS X, Python 2.7/3.4/3.5, CPU only:
(tensorflow)$ conda install -c conda-forge tensorflow
</code></pre>
<p>Check out: <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#using-conda" rel="nofollow">https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#using-conda</a></p>
| 0 | 2016-09-16T03:18:59Z | [
"python",
"tensorflow",
"python-3.5"
] |
Simple way of excluding an element from a calculation on a list? | 38,536,214 | <p>For example I want to check the correlation coefficient between two lists like:</p>
<pre><code>r = np.corrcoef(list25, list26)[0,1]
</code></pre>
<p>but I want to exclude -1's in the lists from the calculation. Is there a simple one-liner way of doing this instead of making a new copies of the lists and iterating through to remove all -1's and such?</p>
| 0 | 2016-07-22T22:40:06Z | 38,536,239 | <p>There is a one liner solution. It's creating a new list without the ones. It can be done using <a href="http://www.secnetix.de/olli/Python/list_comprehensions.hawk" rel="nofollow">List Comprehension</a>:</p>
<pre><code>new_list = [x for x in old_list if x != -1]
</code></pre>
<p>it basically copies everything that matches the condition from the old list to the new list.</p>
<p>So, for your example:</p>
<pre><code>r = np.corrcoef([x for x in list25 if x != -1], [x for x in list26 if x != -1])[0,1]
</code></pre>
| 1 | 2016-07-22T22:43:46Z | [
"python",
"list"
] |
Simple way of excluding an element from a calculation on a list? | 38,536,214 | <p>For example I want to check the correlation coefficient between two lists like:</p>
<pre><code>r = np.corrcoef(list25, list26)[0,1]
</code></pre>
<p>but I want to exclude -1's in the lists from the calculation. Is there a simple one-liner way of doing this instead of making a new copies of the lists and iterating through to remove all -1's and such?</p>
| 0 | 2016-07-22T22:40:06Z | 38,536,411 | <p>Use a generator</p>
<pre><code>def greater_neg_1(items):
for item in items:
if item>-1:
yield item
</code></pre>
<p>Usage:</p>
<pre><code>>>> L = [1,-1,2,3,4,-1,4]
>>> list(greater_neg_1(L))
[1, 2, 3, 4, 4]
</code></pre>
<p>or:</p>
<pre><code>r = np.corrcoef(greater_neg_1(list25), greater_neg_1(list26))[0,1]
</code></pre>
<p>This avoids building the intermediate lists by hand. Note, though, that <code>np.corrcoef</code> still converts its inputs to arrays, so the generators are materialized anyway (on some NumPy versions you may even need to wrap them in <code>list()</code> first).</p>
| 1 | 2016-07-22T23:05:35Z | [
"python",
"list"
] |
Simple way of excluding an element from a calculation on a list? | 38,536,214 | <p>For example I want to check the correlation coefficient between two lists like:</p>
<pre><code>r = np.corrcoef(list25, list26)[0,1]
</code></pre>
<p>but I want to exclude -1's in the lists from the calculation. Is there a simple one-liner way of doing this instead of making a new copies of the lists and iterating through to remove all -1's and such?</p>
| 0 | 2016-07-22T22:40:06Z | 38,538,693 | <p>If you actually want to remove the <code>-1</code> from the lists:</p>
<pre><code>while -1 in list25: list25.remove(-1)
</code></pre>
| 1 | 2016-07-23T05:57:53Z | [
"python",
"list"
] |
merge pandas csv in different directories | 38,536,247 | <p>I have csv files with same names in different directories and i want to merge them as a single csv.</p>
<pre>
dir1
    abcd__diff.csv
    efgh__diff.csv
dir2
    abcd_diffhere.csv
    efgh_diffhere.csv
operation
    dir1/abcd_diff.csv join dir2/abcd_diffhere.csv
    dir1/efgh_diff.csv join dir2/efgh_diffhere.csv
</pre>
<p>I want to merge them using a common field. I can use pandas join operator but what is the most efficient way to search and map the filenames across directories.
I split the filenames using character __ giving the list of files with same names in each directory. I can do two for loops and iterate but that would not be efficient as I have around 200 files.</p>
| 2 | 2016-07-22T22:44:21Z | 38,536,612 | <p>Consider <code>zip()</code> on the two file name lists where a dictionary of dataframes are appended (avoiding 200 separate objects). Keys to dictionary would be unique filenames shared by each pair. Below assumes filename lists do not have directories just base names of files.</p>
<pre><code>import os
...
dfDict = {}
for i, j in zip(dir1list, dir2list):
    temp1 = pd.read_csv(os.path.join(dir1, i))
    temp2 = pd.read_csv(os.path.join(dir2, j))
    key = i.replace('.csv', '')
    dfDict[key] = pd.merge(temp1, temp2, on='commonfield')
</code></pre>
<p>Should lists be unordered and even of different lengths, consider a list comprehension comparing the two and creating a list of tuple pairs of items matched by first 4 characters: <em>abcd</em>, <em>efgh</em>, ... Then loop the list for the data frame merges</p>
<pre><code>dir1list = ['abcd__diff.csv','efgh__diff.csv']
dir2list = ['abcd_diffhere.csv','efgh_diffhere.csv']
allfiles = [(i,j) for i in dir1list for j in dir2list if i[:4] == j[:4]]
dfDict = {}
for file in allfiles:
    temp1 = pd.read_csv(os.path.join(dir1, file[0]))
    temp2 = pd.read_csv(os.path.join(dir2, file[1]))
    key = file[0][:4]
    dfDict[key] = pd.merge(temp1, temp2, on='commonfield')
</code></pre>
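<p>For ~200 files, the double comprehension above does len(dir1list) × len(dir2list) comparisons. A dictionary keyed on the shared prefix (assuming, as the answer does, that the first 4 characters identify a pair) brings this down to one pass per directory:</p>

```python
dir1list = ['abcd__diff.csv', 'efgh__diff.csv']
dir2list = ['abcd_diffhere.csv', 'efgh_diffhere.csv']

# index dir2 by prefix once, then look each dir1 name up in O(1)
by_prefix = {name[:4]: name for name in dir2list}
allfiles = [(name, by_prefix[name[:4]])
            for name in dir1list if name[:4] in by_prefix]
```

<p>Unmatched files are simply skipped rather than raising a KeyError.</p>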
| 2 | 2016-07-22T23:32:01Z | [
"python",
"csv",
"pandas",
"join",
"merge"
] |
merge pandas csv in different directories | 38,536,247 | <p>I have csv files with same names in different directories and i want to merge them as a single csv.</p>
<pre>
dir1
    abcd__diff.csv
    efgh__diff.csv
dir2
    abcd_diffhere.csv
    efgh_diffhere.csv
operation
    dir1/abcd_diff.csv join dir2/abcd_diffhere.csv
    dir1/efgh_diff.csv join dir2/efgh_diffhere.csv
</pre>
<p>I want to merge them using a common field. I can use pandas join operator but what is the most efficient way to search and map the filenames across directories.
I split the filenames using character __ giving the list of files with same names in each directory. I can do two for loops and iterate but that would not be efficient as I have around 200 files.</p>
| 2 | 2016-07-22T22:44:21Z | 38,536,842 | <p>Match up the files like this:</p>
<pre><code>files1 = []
files2 = []
dir1path = './dir1/'
dir2path = './dir2/'
dir1 = os.listdir(dir1path)
dir2 = os.listdir(dir2path)
for f in dir1:
    fmatch = f.split('.csv')[0] + 'here.csv'
    if fmatch in dir2:
        files1.append(f)
        files2.append(fmatch)
files1 = [os.path.join(dir1path, f) for f in files1]
files2 = [os.path.join(dir2path, f) for f in files2]
fpairs = zip(files1, files2)
</code></pre>
<p>Then create list of dataframes</p>
<pre><code># edit this lambda function according to your needs
# it will have to be specific to your csv formatting
rf = lambda f: pd.read_csv(f)
dfs = [rf(fp[0]).merge(rf(fp[1]), on='Key') for fp in fpairs]
</code></pre>
| 3 | 2016-07-23T00:06:07Z | [
"python",
"csv",
"pandas",
"join",
"merge"
] |
Can't import Orekit - 'DLL load failed' | 38,536,425 | <p>I try to run Orekit library through python. I use Anaconda, as proposed in <a href="https://www.orekit.org/forge/projects/orekit-python-wrapper/wiki/Installation" rel="nofollow">official tutorial</a>. Unfortunately, I come across following error as soon as I try to <code>import orekit</code> in python 2.7 console:</p>
<pre><code>In [1]: import orekit
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-307c30f2343b> in <module>()
----> 1 import orekit
C:\Anaconda2\lib\site-packages\orekit\__init__.py in <module>()
2 os.environ["PATH"] = r"C:/Anaconda2\Library\jre\bin\server" + os.pathsep + os.environ["PATH"]
3
----> 4 import os, _orekit
5
6 __dir__ = os.path.abspath(os.path.dirname(__file__))
ImportError: DLL load failed: Nie można odnaleźć określonego modułu.
</code></pre>
<p>(The Polish error message translates to "The specified module could not be found.")</p>
<p>I've already added the 2 required environment paths and installed the JDK, which is located in <code>C:\Program Files\Java\jdk1.8.0_102</code>. I've also installed JCC and Orekit through Anaconda. I use Windows 10 64 bit and Python 2.7.</p>
<p>Do you have any suggestions as to what might have gone wrong? How can I run it properly?</p>
| 0 | 2016-07-22T23:06:23Z | 38,546,885 | <p>The solution was to install jdk through Anaconda by:</p>
<pre><code>conda install -c cyclus java-jdk
</code></pre>
<p>I found it in official <a href="https://www.orekit.org/wws/arc/orekit-users/2016-07/msg00001.html" rel="nofollow">mailing list thread</a>.</p>
| 0 | 2016-07-23T22:15:53Z | [
"java",
"python",
"anaconda"
] |
Find rolling 52 week high on daily stock market data in pandas | 38,536,453 | <p>This seems like a simple question (and answer) but I'm having trouble with it.</p>
<p>The issue:</p>
<p>I have a pandas dataframe full of OHLC data. I want to find the rolling 52 week high throughout the dataframe.</p>
<p>My dataset is from yahoo. You can pull the same data down with the folllowing code to get daily data:</p>
<pre><code>import pandas.io.data as web
df = web.DataReader('SPX', 'yahoo', start, end)
</code></pre>
<p>A tail of the data gives the output below:</p>
<pre><code> Open High Low Close Volume
Date
2016-07-15 216.779999 217.009995 215.309998 215.830002 107155400
2016-07-18 215.970001 216.600006 215.669998 216.410004 58725900
2016-07-19 215.919998 216.229996 215.630005 216.190002 54345700
2016-07-20 216.190002 217.369995 216.190002 217.089996 58159500
2016-07-21 216.960007 217.220001 215.750000 216.270004 66070000
</code></pre>
<p>To get the 52 week high (rolling), I can run the following:</p>
<pre><code>df["52weekhigh"] = pd.rolling_max(df.High, window=200, min_periods=1)
</code></pre>
<p>I get the following (some columns omitted):</p>
<pre><code> High 52weekhigh
Date
2016-07-15 217.009995 217.009995
2016-07-18 216.600006 217.009995
2016-07-19 216.229996 217.009995
2016-07-20 217.369995 217.369995
2016-07-21 217.220001 217.369995
</code></pre>
<p>This gives me values for the 52 week highs as new highs come in, but I'm not a fan of using 200 here. Should it be 200 or 201 or 220 (there are "approximately" 200 trading days in the year)?</p>
<p>I could resample the data to weekly to get the values, but then I can't easily get back to my original daily data (or can I?).</p>
<p>So...here's the question:</p>
<p>Is there a way to run rolling_max on pandas dataframes and set the window to '52 weeks' or something similar? If not, can anyone think of a better approach to this than the above?</p>
| 1 | 2016-07-22T23:09:47Z | 38,536,645 | <p>If your data has business-day frequency then there should be roughly 5 rows per week.
So 52 weeks would roughly correspond to <code>window=52*5</code>.</p>
<p>Of course, there might be a few other days missing due to holidays. To be more
accurate, you could use <code>asfreq('D')</code> to change the frequency from business days
to actual days. Then you could use a rolling window of size <code>52*7</code>:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
np.random.seed(2016)
N = 1000
index = pd.date_range('2000-1-1', periods=N, freq='B')
data = (np.random.random((N, 1))-0.5).cumsum(axis=0)
df = pd.DataFrame(data, index=index, columns=['price'])
# result = pd.rolling_max(df.asfreq('D'), window=52*7, min_periods=1) # for older versions of Pandas
result = df.asfreq('D').rolling(window=52*7, min_periods=1).max()
result = result.rename(columns={'price':'rolling max'})
ax = df.plot()
result.plot(ax=ax)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/JbGlq.png" rel="nofollow"><img src="http://i.stack.imgur.com/JbGlq.png" alt="enter image description here"></a></p>
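<p>As an aside (this relies on the time-aware rolling windows added in later pandas releases, so treat it as a sketch): with a datetime index you can pass an offset string as the window, which answers the "52 weeks" question directly without counting trading days:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2015-01-01', periods=600, freq='D')
# a strictly decreasing series makes the rolling max easy to reason about
high = pd.Series(600.0 - np.arange(600), index=idx, name='High')

# window measured in calendar time, not row counts
rolling_52wk_high = high.rolling('365D', min_periods=1).max()
```

<p>Missing calendar days (weekends, holidays) are then handled by the window itself rather than by guessing a row count.</p>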
| 2 | 2016-07-22T23:35:53Z | [
"python",
"pandas"
] |
mongoengine - filter by "OR" on a single field in a single query | 38,536,515 | <p>Here's a (really) rough example of what I'm looking for - given a document with the following schema:</p>
<pre><code>from mongoengine import Document, StringField, DateTimeField
class Client(Document):
    name = StringField()
    activated_on = DateTimeField(required=False)
</code></pre>
<p>How would I query it for a client that was never activated or activated before a certain point in time?</p>
<p>In other words, both of the documents would show up in the results if I searched for entries without an activation date or one that occurred before 2016-07-22.</p>
<pre><code>{ "name": "Bob Lawbla" }
{ "name": "Gerry Mander", "activated_on": 2016-07-01T00:00:00 }
</code></pre>
<p>I know I can do:</p>
<pre><code>Client.objects(activated_on__lte=datetime.datetime(2016,7,22))
</code></pre>
<p>and</p>
<pre><code>Client.objects(activated_on__exists=False)
</code></pre>
<p>but how do I combine them into one query?</p>
| 0 | 2016-07-22T23:18:37Z | 38,537,127 | <p>You can use <a href="http://docs.mongoengine.org/guide/querying.html#advanced-queries" rel="nofollow">Q class</a> :</p>
<pre><code>from mongoengine.queryset.visitor import Q as MQ
Client.objects(MQ(activated_on__exists=False)|MQ(activated_on__lte=datetime.datetime(2016,7,22)))
</code></pre>
| 0 | 2016-07-23T00:58:29Z | [
"python",
"mongoengine"
] |
grab headers from multiple tsv/csv files | 38,536,539 | <p>I have a list of tsv files where I am looking to grab column headers for all the files.</p>
<pre><code>with open(os.path.abspath('reference/file.tsv'), 'rU') as file:
    reader = csv.reader(file)
    row1 = next(reader)
</code></pre>
<p>Currently, this snippet only reads 1 file where I have a list of files that needs to be parsed. </p>
<pre><code>dir_path = os.path.abspath('reference/')
files = os.listdir(dir_path)
</code></pre>
<p>The name of the files are listed in <code>files</code>. How do I loop through the list of files and grab only the column headers for each file?</p>
| 0 | 2016-07-22T23:21:29Z | 38,536,689 | <p>The <code>files</code> variable in your code is the content of the <code>reference</code> folder, meaning all files and subfolders of the folder. They are returned in a list of strings, containing only the file or subfolder name. This means that you'll have to prefix the path yourself.</p>
<p>Example:</p>
<pre><code>dir_path = os.path.abspath('reference/')
files = os.listdir(dir_path)
for file in files:
    path = os.path.join(dir_path, file)
    # Skip non-files (note: isfile needs the full path, not the bare name)
    if not os.path.isfile(path):
        continue
    with open(path, 'rU') as f:
        reader = csv.reader(f)
        row1 = next(reader)
</code></pre>
<p>An alternative using the <code>pathlib</code> module:</p>
<pre><code>for file in Path('reference/').glob('*'):
    if not file.is_file():
        continue
    with open(str(file.resolve()), 'rU') as f:
        reader = csv.reader(f)
        row1 = next(reader)
</code></pre>
<p>Wouldn't you be better off reading the first line of each of those files, appending them to a list and then passing them to <code>csv.reader</code>?</p>
<p>Example:</p>
<pre><code>lines = []
for file in Path('reference/').glob('*'):
    if not file.is_file():
        continue
    with open(str(file.resolve()), 'rU') as f:
        lines.append(f.readline())
reader = csv.reader(lines)
for row in reader:
    # whatever you want to do with the parsed lines
    pass
</code></pre>
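<p>One detail none of the snippets handle: the question mentions <code>.tsv</code> files, and <code>csv.reader</code> splits on commas by default, so pass <code>delimiter='\t'</code>. A small self-contained sketch (Python 3 style, using a temporary directory rather than the real <code>reference/</code> folder):</p>

```python
import csv
import os
import tempfile

dir_path = tempfile.mkdtemp()
with open(os.path.join(dir_path, 'file.tsv'), 'w') as f:
    f.write('col_a\tcol_b\tcol_c\n1\t2\t3\n')

headers = {}
for name in os.listdir(dir_path):
    path = os.path.join(dir_path, name)
    if not os.path.isfile(path):
        continue
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter='\t')   # tab-separated, not comma
        headers[name] = next(reader, [])
```

<p><code>next(reader, [])</code> also guards against empty files instead of raising <code>StopIteration</code>.</p>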
| 1 | 2016-07-22T23:42:19Z | [
"python",
"csv"
] |
grab headers from multiple tsv/csv files | 38,536,539 | <p>I have a list of tsv files where I am looking to grab column headers for all the files.</p>
<pre><code>with open(os.path.abspath('reference/file.tsv'), 'rU') as file:
reader = csv.reader(file)
row1 = next(reader)
</code></pre>
<p>Currently, this snippet only reads 1 file where I have a list of files that needs to be parsed. </p>
<pre><code>dir_path = os.path.abspath('reference/')
files = os.listdir(dir_path)
</code></pre>
<p>The name of the files are listed in <code>files</code>. How do I loop through the list of files and grab only the column headers for each file?</p>
| 0 | 2016-07-22T23:21:29Z | 38,536,713 | <p>I tried this and it works:</p>
<pre><code>import os
import csv
dir_path = os.path.abspath('reference/')
files = os.listdir(dir_path)
for f in files:
    with open(os.path.join(dir_path, f), 'rU') as file:
        reader = csv.reader(file)
        row1 = next(reader)
        print row1
</code></pre>
| 1 | 2016-07-22T23:46:35Z | [
"python",
"csv"
] |
How to set locale for all children of python app? | 38,536,543 | <p>I have written an app indicator in python for Ubuntu desktop, which calls several external programs via subprocess. It works fine under an English locale, but breaks with others.</p>
<p>I know that there is a way to do <code>subprocess.call(['command', 'arg1', 'arg3'], env=new_env_dict)</code>; however, I am interested in whether there is a way to force all <code>subprocess</code> calls to have the new environment instead of passing it every time.</p>
| 2 | 2016-07-22T23:22:00Z | 38,539,274 | <p>So far I have not found a way to globally tell all <code>subprocess</code> calls to use a specific environment, so I decided to go with a single function that takes a list of arguments, with the locale set as shown in a <a href="http://stackoverflow.com/a/13404327/3701431">related post</a> but with a slight variation.</p>
<pre><code>def run_cmd(self, cmdlist):
    new_env = dict(os.environ)
    new_env['LC_ALL'] = 'C'
    try:
        stdout = subprocess.check_output(cmdlist, env=new_env)
    except subprocess.CalledProcessError:
        pass
    else:
        if stdout:
            return stdout
</code></pre>
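<p>If rebuilding the environment dict on every call feels repetitive, one option (a sketch, not the only way) is to bind it once with <code>functools.partial</code>, so every call through the wrapper picks up the C locale automatically:</p>

```python
import functools
import os
import subprocess
import sys

# build the modified environment once
c_env = dict(os.environ, LC_ALL='C')
run_c = functools.partial(subprocess.check_output, env=c_env)

# every invocation through run_c now sees LC_ALL=C
out = run_c([sys.executable, '-c', 'import os; print(os.environ["LC_ALL"])'])
```

<p>This still calls <code>subprocess</code> under the hood each time, but the environment handling lives in exactly one place.</p>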
| 1 | 2016-07-23T07:23:10Z | [
"python",
"subprocess",
"locale"
] |
passing soap envelope in Python works on urllib but not on requests | 38,536,769 | <p>I'm passing a soap envelope in Python2 with urllib and it works fine, but upon upgrading to Python3 and requests, the transaction fails. The specific error on the failure is, "The server cannot service the request because the media type is unsupported." Here are the content/commands for each:</p>
<h1>Python2/urllib</h1>
<pre><code>request = urllib2.Request(self._url, xml, request_headers)
</code></pre>
<h2>contents of each variable</h2>
<p><strong>self._url:</strong></p>
<p><a href="https://cert.api2.heartlandportico.com/Hps.Exchange.PosGateway/PosGatewayService.asmx?wsdl" rel="nofollow">https://cert.api2.heartlandportico.com/Hps.Exchange.PosGateway/PosGatewayService.asmx?wsdl</a></p>
<p><strong>xml</strong></p>
<pre><code><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soap:Body><PosRequest xmlns="http://Hps.Exchange.PosGateway"><Ver1.0><Header><SecretAPIKey>skapi_cert_MYl2AQAowiQxxxxxxxizOP2jcX9BrEMqQ</SecretAPIKey><DeveloperID>000000</DeveloperID><VersionNbr>0000</VersionNbr></Header><Transaction><CreditSale><Block1><AllowDup>Y</AllowDup><AllowPartialAuth>N</AllowPartialAuth><Amt>1.15</Amt><CardHolderData><CardHolderFirstName>evan</CardHolderFirstName><CardHolderLastName>stone</CardHolderLastName><CardHolderPhone>9405947406</CardHolderPhone><CardHolderAddr>417 Neverland</CardHolderAddr><CardHolderCity>Denton</CardHolderCity><CardHolderState>TX</CardHolderState><CardHolderZip>76209</CardHolderZip></CardHolderData><CardData><TokenData><TokenValue>supt_kMKxxxxxxQacvPDvZNa</TokenValue><CardPresent>N</CardPresent><ReaderPresent>N</ReaderPresent></TokenData><TokenRequest>N</TokenRequest></CardData></Block1></CreditSale></Transaction></Ver1.0></PosRequest></soap:Body></soap:Envelope>
</code></pre>
<p><strong>request_headers</strong></p>
<pre><code>{'Content-length': '1110', 'Content-type': 'text/xml; charset=UTF-8'}
</code></pre>
<h1>Python3/requests</h1>
<pre><code>request = requests.post(self._url, xml, request_headers)
</code></pre>
<h2>contents of each variable</h2>
<p>(identical as above)</p>
<h1>Note:</h1>
<p>The actual request.headers.headers (which shows the <em>sent</em> headers) in Python3/requests make it look as if my headers dict was totally ignored, except for that one variable regarding length:</p>
<pre><code>{'User-Agent': 'python-requests/2.10.0', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Content-Length': '1110', 'Connection': 'keep-alive'}
</code></pre>
<p>In Python2/urllib, the output of request.headers is precisely as I sent it (see below), which makes me think there something going on within requests that is changing my headers and that's what is screwing everything up. Thoughts? Any help would be greatly appreciated:</p>
<pre><code>{'Content-length': '1110', 'Content-type': 'text/xml; charset=UTF-8'}
</code></pre>
| 0 | 2016-07-22T23:55:41Z | 38,536,848 | <p>Wow... I figured it out as soon as I posted my question: headers cannot simply be passed as a positional argument in requests like they can in urllib. The third positional parameter of <code>requests.post</code> is <code>json</code>, not <code>headers</code>, so my headers dict was being sent as a JSON body; you have to specify <code>headers=</code> explicitly and voila!</p>
<p>I changed this: <code>request = requests.post(self._url, xml, request_headers)</code></p>
<p>to this: <code>request = requests.post(self._url, xml, headers=request_headers)</code></p>
<p>And it just WORKED.</p>
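<p>For anyone wondering <em>why</em> the positional call misbehaved: in requests 2.x the documented signature of <code>requests.post</code> is <code>post(url, data=None, json=None, **kwargs)</code>, so a third positional argument lands in <code>json</code>, not <code>headers</code>. A stand-in function with the same parameter order shows the effect without needing the library installed:</p>

```python
def post(url, data=None, json=None, **kwargs):
    # mimics the parameter order of requests.post (requests 2.x)
    return {'url': url, 'data': data, 'json': json, 'kwargs': kwargs}

# the third positional argument is silently captured by `json`
result = post('https://example.com', '<xml/>', {'Content-type': 'text/xml'})
```

<p>which is why requests then sent a JSON content type and the server rejected the media type; <code>headers=...</code> routes the dict to the right place.</p>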
| 0 | 2016-07-23T00:07:10Z | [
"python",
"soap",
"http-headers"
] |
Django template tag search through for-loop looking for specific object with default if not found | 38,536,797 | <p>I'm passing a queryset of people to my django template, some of which have been assigned a "seat" and others that haven't. Seats can be assigned to no one and therefore remain empty. For each seat, I want the template to loop through the queryset looking for someone in that seat. If the for loop doesn't find anyone for that seat, I want them to render an empty seat. Here is what I was thinking:</p>
<pre><code>{% for person in people %}
    {% if person.seat_num == 1 %}
        <div class="filled_seat"></div>
    {% endif %}
{% empty %}
    <div class="empty_seat"></div>
{% endfor %}
</code></pre>
<p>Except I realize that {% empty %} is only triggered if the set being iterated through is empty, whereas I need to have a default if the seat is not found (aka nothing in the set survives the "if" condition.</p>
| 0 | 2016-07-22T23:59:30Z | 38,536,847 | <p>Yes, because <a href="https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#for-empty" rel="nofollow">for...empty</a> works like that. It basically can't know if your seat is empty or not, for that you need to implement your own logic. </p>
<p>I don't know the details of your model, but guessing from your example you need to do something like this:</p>
<pre><code> {% for person in people %}
    {% if person.seat_num == 0 %}
        <div class="empty_seat"></div>
    {% else %}
        <div class="filled_seat"></div>
    {% endif %}
{% endfor %}
</code></pre>
| 0 | 2016-07-23T00:07:07Z | [
"python",
"html",
"django",
"django-templates"
] |
Django template tag search through for-loop looking for specific object with default if not found | 38,536,797 | <p>I'm passing a queryset of people to my django template, some of which have been assigned a "seat" and others that haven't. Seats can be assigned to no one and therefore remain empty. For each seat, I want the template to loop through the queryset looking for someone in that seat. If the for loop doesn't find anyone for that seat, I want them to render an empty seat. Here is what I was thinking:</p>
<pre><code>{% for person in people %}
    {% if person.seat_num == 1 %}
        <div class="filled_seat"></div>
    {% endif %}
{% empty %}
    <div class="empty_seat"></div>
{% endfor %}
</code></pre>
<p>Except I realize that {% empty %} is only triggered if the set being iterated through is empty, whereas I need a default if the seat is not found (i.e. nothing in the set survives the "if" condition).</p>
| 0 | 2016-07-22T23:59:30Z | 38,547,259 | <p>John Gordon's comment made me realize I shouldn't try to do too much in the template itself. In the view I created a list called "seats" and filled the appropriate seats, and then passed it to the template:</p>
<pre><code>seats = []
for n in range(4):
    try:
        seats.append(students.objects.get(seat_num=n+1))
    except students.DoesNotExist:
        seats.append(None)
</code></pre>
<hr>
<p>I then used a for loop to cycle through the seats one at a time and check if that seat is filled and then generate the appropriate div:</p>
<pre><code>{% for seat in seats %}
    <td>
        {% if seat %}
            <div class="filled_seat"></div>
        {% else %}
            <div class="empty_seat"></div>
        {% endif %}
    </td>
{% endfor %}
</code></pre>
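<p>One refinement worth considering: the view loop above issues one database query per seat. Fetching the seated students once and building a lookup keyed on <code>seat_num</code> does the same job in a single pass. The sketch below simulates the idea with a plain class standing in for the Django model (in real code you would build the dict from one <code>students.objects.filter(...)</code> queryset):</p>

```python
class Student(object):
    # stand-in for the Django model in this sketch
    def __init__(self, name, seat_num):
        self.name = name
        self.seat_num = seat_num

people = [Student('Alice', 1), Student('Bob', 3)]

# one pass to index by seat, then one pass to build the four-seat list
seated = {s.seat_num: s for s in people}
seats = [seated.get(n) for n in range(1, 5)]
```

<p>Empty seats come out as <code>None</code>, so the same <code>{% if seat %}</code> template logic applies unchanged.</p>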
| 0 | 2016-07-23T23:19:16Z | [
"python",
"html",
"django",
"django-templates"
] |
Get the latest non NaN values of timeseries for each identifier in pandas | 38,536,829 | <p>I am stuck on how to get the latest non-NaN values of a DataFrame for unique identifiers. So I have a Pandas DataFrame with a column of IDs, values, and years, similar to this:</p>
<pre><code> | ID | Values | Year
-------------------------
0 | A | 4.0 | 2016
1 | B | NaN | 2016
2 | C | NaN | 2016
3 | D | 1.0 | 2016
4 | A | 2.0 | 2015
5 | B | 2.0 | 2015
6 | C | 1.0 | 2015
7 | D | 3.0 | 2015
8 | A | 2.0 | 2014
9 | B | 2.0 | 2014
10| C | 3.0 | 2014
11| D | NaN | 2014
</code></pre>
<p>I'm trying to figure out how to get a list of the latest (most recent) non-NaN values for each ID. So the list for this case should be:</p>
<pre><code>[4.0, 2.0, 1.0, 1.0]
</code></pre>
<p>Which are the latest values for A, B, C, and D respectively (skipping over any NaNs).
So far I've approached this by doing a pivot like this:</p>
<pre><code>df.pivot(index = 'Year', columns = 'ID', values = 'Values')
</code></pre>
<p>So that I get:</p>
<pre><code>ID | A | B | C | D
----------------------
Year | | | |
2014 |2.0|2.0|3.0|NaN
2015 |2.0|2.0|1.0|3.0
2016 |4.0|NaN|Nan|1.0
</code></pre>
<p>And here I'm stuck- what would be the best way to get the most recent non-NaN values for each ID? Any suggestions using either the original DataFrame or the pivoted one would be appreciated!</p>
| 3 | 2016-07-23T00:04:17Z | 38,536,870 | <p>You were sooo close. Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow"><code>ffill()</code></a>:</p>
<pre><code>df.pivot(index='Year',columns='ID',values='Values').ffill().values[-1]
</code></pre>
<p>Result:</p>
<pre><code>array([ 4., 2., 1., 1.])
</code></pre>
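<p>Reproducing the question's data end to end confirms the result (note that <code>iloc[-1]</code> works as well as <code>.values[-1]</code> and keeps the ID labels):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID': ['A', 'B', 'C', 'D'] * 3,
    'Values': [4.0, np.nan, np.nan, 1.0,
               2.0, 2.0, 1.0, 3.0,
               2.0, 2.0, 3.0, np.nan],
    'Year': [2016] * 4 + [2015] * 4 + [2014] * 4,
})

# pivot sorts the index ascending, so ffill carries older values forward
latest = df.pivot(index='Year', columns='ID', values='Values').ffill().iloc[-1]
```

<p><code>ffill</code> works here because the pivoted index is sorted chronologically, so each NaN in the latest year is filled from the most recent earlier value.</p>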
| 3 | 2016-07-23T00:10:23Z | [
"python",
"pandas"
] |
Get the latest non NaN values of timeseries for each identifier in pandas | 38,536,829 | <p>I am stuck on how to get the latest non-NaN values of a DataFrame for unique identifiers. So I have a Pandas DataFrame with a column of IDs, values, and years, similar to this:</p>
<pre><code> | ID | Values | Year
-------------------------
0 | A | 4.0 | 2016
1 | B | NaN | 2016
2 | C | NaN | 2016
3 | D | 1.0 | 2016
4 | A | 2.0 | 2015
5 | B | 2.0 | 2015
6 | C | 1.0 | 2015
7 | D | 3.0 | 2015
8 | A | 2.0 | 2014
9 | B | 2.0 | 2014
10| C | 3.0 | 2014
11| D | NaN | 2014
</code></pre>
<p>I'm trying to figure out how to get a list of the latest (most recent) non-NaN values for each ID. So the list for this case should be:</p>
<pre><code>[4.0, 2.0, 1.0, 1.0]
</code></pre>
<p>Which are the latest values for A, B, C, and D respectively (skipping over any NaNs).
So far I've approached this by doing a pivot like this:</p>
<pre><code>df.pivot(index = 'Year', columns = 'ID', values = 'Values')
</code></pre>
<p>So that I get:</p>
<pre><code>ID | A | B | C | D
----------------------
Year | | | |
2014 |2.0|2.0|3.0|NaN
2015 |2.0|2.0|1.0|3.0
2016 |4.0|NaN|Nan|1.0
</code></pre>
<p>And here I'm stuck- what would be the best way to get the most recent non-NaN values for each ID? Any suggestions using either the original DataFrame or the pivoted one would be appreciated!</p>
| 3 | 2016-07-23T00:04:17Z | 38,536,879 | <p>This should do it:</p>
<pre><code>df.ix[df.groupby('ID').Values.apply(lambda x: x.first_valid_index())]
</code></pre>
<p><a href="http://i.stack.imgur.com/3ifr2.png" rel="nofollow"><img src="http://i.stack.imgur.com/3ifr2.png" alt="enter image description here"></a></p>
| 2 | 2016-07-23T00:11:52Z | [
"python",
"pandas"
] |
Get the latest non NaN values of timeseries for each identifier in pandas | 38,536,829 | <p>I am stuck on how to get the latest non-NaN values of a DataFrame for unique identifiers. So I have a Pandas DataFrame with a column of IDs, values, and years, similar to this:</p>
<pre><code> | ID | Values | Year
-------------------------
0 | A | 4.0 | 2016
1 | B | NaN | 2016
2 | C | NaN | 2016
3 | D | 1.0 | 2016
4 | A | 2.0 | 2015
5 | B | 2.0 | 2015
6 | C | 1.0 | 2015
7 | D | 3.0 | 2015
8 | A | 2.0 | 2014
9 | B | 2.0 | 2014
10| C | 3.0 | 2014
11| D | NaN | 2014
</code></pre>
<p>I'm trying to figure out how to get a list of the latest (most recent) non-NaN values for each ID. So the list for this case should be:</p>
<pre><code>[4.0, 2.0, 1.0, 1.0]
</code></pre>
<p>Which are the latest values for A, B, C, and D respectively (skipping over any NaNs).
So far I've approached this by doing a pivot like this:</p>
<pre><code>df.pivot(index = 'Year', columns = 'ID', values = 'Values')
</code></pre>
<p>So that I get:</p>
<pre><code>ID | A | B | C | D
----------------------
Year | | | |
2014 |2.0|2.0|3.0|NaN
2015 |2.0|2.0|1.0|3.0
2016 |4.0|NaN|Nan|1.0
</code></pre>
<p>And here I'm stuck- what would be the best way to get the most recent non-NaN values for each ID? Any suggestions using either the original DataFrame or the pivoted one would be appreciated!</p>
| 3 | 2016-07-23T00:04:17Z | 38,536,949 | <p>Another <code>groupby</code> option:</p>
<p>If the data is already sorted by <code>'Year'</code> descending, like in the example data:</p>
<pre><code>df.groupby('ID')['Values'].first()
</code></pre>
<p>If the data isn't already sorted:</p>
<pre><code>df.sort_values(by='Year').groupby('ID')['Values'].last()
</code></pre>
<p>The resulting output:</p>
<pre><code>ID
A 4.0
B 2.0
C 1.0
D 1.0
</code></pre>
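<p>The reason this works even when the most recent year holds a NaN is that <code>GroupBy.last()</code> (like <code>first()</code>) skips nulls and returns the last <em>non-NaN</em> value per group. A quick check against the question's data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID': ['A', 'B', 'C', 'D'] * 3,
    'Values': [4.0, np.nan, np.nan, 1.0,
               2.0, 2.0, 1.0, 3.0,
               2.0, 2.0, 3.0, np.nan],
    'Year': [2016] * 4 + [2015] * 4 + [2014] * 4,
})

# sort ascending by year; last() returns the last non-null value per ID
latest = df.sort_values(by='Year').groupby('ID')['Values'].last()
```

<p>If you wanted the positionally last row regardless of NaNs, <code>nth(-1)</code> would be the call instead.</p>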
| 3 | 2016-07-23T00:23:31Z | [
"python",
"pandas"
] |
Add docstring to a namedtuple field | 38,536,959 | <p>I know it is possible to add docstring for namedtuples by subclassing it, e.g.</p>
<pre><code>from collections import namedtuple
NT = namedtuple('NT', ['f1', 'f2', 'f3'])
class NTWithDoc(NT):
    """ DOCSTRING """
    __slots__ = ()
</code></pre>
<p>Now I wish to add docstrings for f1, f2, and f3. Is there a way of doing that? Our company is using Python 2; I don't think people will let me use Python 3.</p>
| 1 | 2016-07-23T00:25:31Z | 38,537,118 | <p>I'm not sure if there is a good way to do this on python2.x. On python3.x, you can swap out the <code>__doc__</code> directly:</p>
<pre><code>$ python3
Python 3.6.0a2 (v3.6.0a2:378893423552, Jun 13 2016, 14:44:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from collections import namedtuple
>>> NT = namedtuple('NT', ['f1', 'f2', 'f3'])
>>> NT.f1.__doc__
'Alias for field number 0'
>>> NT.f1.__doc__ = 'Hello'
</code></pre>
<p>Unfortunately, python2.x gives you an error at this point:</p>
<pre><code>>>> NT.f1.__doc__ = 'Hello World.'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
</code></pre>
<p>On python2.x, you can get it by re-defining all of the properties:</p>
<pre><code>>>> from collections import namedtuple
>>> NT = namedtuple('NT', ['f1', 'f2', 'f3'])
>>> class NTWithDoc(NT):
... """docstring."""
... __slots__ = ()
... f1 = property(NT.f1.fget, None, None, 'docstring!')
...
>>> help(NTWithDoc)
>>> a = NTWithDoc(1, 2, 3)
>>> a.f1
1
</code></pre>
<p>But this feels like a lot of trouble to go to get the docstrings :-).</p>
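<p>One caveat: on Python 3.8+ the namedtuple fields are no longer plain properties, so <code>NT.f1.fget</code> does not exist there. A version-independent variant of the same trick rebuilds the accessor from <code>operator.itemgetter</code> (a sketch applied to one field):</p>

```python
import operator
from collections import namedtuple

NT = namedtuple('NT', ['f1', 'f2', 'f3'])

class NTWithDoc(NT):
    """ DOCSTRING """
    __slots__ = ()
    # field 0 accessor rebuilt as a property carrying its own docstring
    f1 = property(operator.itemgetter(0), doc='docstring for f1')

a = NTWithDoc(1, 2, 3)
```

<p>Access still works by tuple position, and <code>help(NTWithDoc)</code> shows the custom docstring for <code>f1</code>.</p>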
| 1 | 2016-07-23T00:56:42Z | [
"python",
"namedtuple"
] |
Python Tkinter OptionMenu Arguments | 38,536,999 | <p>I have a somewhat OCD trait where I like to explicit specify what arguments are being set to what values. For example, given a function:</p>
<pre><code>def func(a, b):
    return a+b
</code></pre>
<p>Anytime I call on the above function in my code, I always write:</p>
<pre><code>func(a=6, b=7)
</code></pre>
<p>Instead of:</p>
<pre><code>func(6, 7)
</code></pre>
<p>The problem I'm having is that I can't seem to do this with the OptionMenu class in Tkinter. The following example is within a custom class</p>
<pre><code>self.var = tk.StringVar()
choices = ['op1', 'op2']
self.menu_m = tk.OptionMenu(master=self.frame_1, variable=self.var, *choices)
</code></pre>
<p>This results in multiple values for the argument 'master'. How can I go about explicitly defining the master, variable, and list of options to use?</p>
| 0 | 2016-07-23T00:33:04Z | 38,541,956 | <p>Unfortunately, the <code>OptionMenu</code> widget is somewhat poorly implemented. Regardless of your preferences, the optionmenu isn't designed to accept keyword arguments for the first three parameters <code>master</code>, <code>variable</code>, and <code>value</code>. They must be presented in that order as positional arguments.</p>
<pre><code>self.menu_m = tk.OptionMenu(self.frame_1, self.var, *choices)
</code></pre>
| 1 | 2016-07-23T12:46:45Z | [
"python",
"tkinter",
"optionmenu"
] |
Google drive api - no support for map export (python) | 38,537,063 | <p>Using the python api client I can export google docs using <code>export</code> or <code>export_media</code> and non-google doc material with <code>get_media</code>.</p>
<p>Maps saved in the user account are unexportable. <code>export</code> returns the error</p>
<p><code>HttpError: <HttpError 403 ... returned "Export only supports Google Docs."></code></p>
<p>I admit it makes no sense to use <code>get_media</code> but I try anyway given the above error. It returns:</p>
<p><code>HttpError: <HttpError 403 ... returned "Only files with binary content can be downloaded. Use Export with Google Docs files."></code></p>
<p>It seems <code>get_media</code> and <code>export</code> disagree about what this object is.</p>
<p>I suggest that <code>export</code> should work with <code>mimeType='application/vnd.google-earth.kmz'</code></p>
| 0 | 2016-07-23T00:46:33Z | 38,616,818 | <p>You can use <code>Drive.About.get</code> to determine the export formats available for each Google MIME type:</p>
<pre><code>GET https://www.googleapis.com/drive/v3/about?fields=exportFormats&key={YOUR_API_KEY}
{
  "exportFormats": {
    "application/vnd.google-apps.form": [
      "application/zip"
    ],
    "application/vnd.google-apps.document": [
      "application/rtf",
      "application/vnd.oasis.opendocument.text",
      "text/html",
      "application/pdf",
      "application/zip",
      "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
      "text/plain"
    ],
    "application/vnd.google-apps.drawing": [
      "image/svg+xml",
      "image/png",
      "application/pdf",
      "image/jpeg"
    ],
    "application/vnd.google-apps.spreadsheet": [
      "text/csv",
      "application/x-vnd.oasis.opendocument.spreadsheet",
      "application/zip",
      "application/pdf",
      "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    ],
    "application/vnd.google-apps.script": [
      "application/vnd.google-apps.script+json"
    ],
    "application/vnd.google-apps.presentation": [
      "application/vnd.openxmlformats-officedocument.presentationml.presentation",
      "application/pdf",
      "text/plain"
    ]
  }
}
</code></pre>
<p>As you can see, there are currently no export formats defined for <code>application/vnd.google-apps.map</code>. Given that Google My Maps does support exporting to KMZ/KML, I think ideally the Google Drive API would as well. You can file a feature request on the <a href="https://code.google.com/a/google.com/p/apps-api-issues/issues/entry?template=Feature%20request&labels=Type-Enhancement,API-Drive" rel="nofollow">issue tracker</a>.</p>
| 0 | 2016-07-27T15:06:22Z | [
"python",
"google-drive-sdk"
] |
AttributeError: 'module' object has no attribute '__version__' | 38,537,125 | <p>I have installed the LDA library (using pip).
I have a very simple test (the next two lines of code):</p>
<blockquote>
<p>import lda</p>
<p>print lda.datasets.load_reuters()</p>
</blockquote>
<p>But I keep getting the error</p>
<blockquote>
<p>AttributeError: 'module' object has no attribute 'datasets'</p>
</blockquote>
<p>In fact I get that error each time I access any attribute/function under lda!</p>
| 0 | 2016-07-23T00:58:01Z | 38,537,431 | <p>Do you have a module named <code>lda.py</code> or <code>lda.pyc</code> in the current directory?</p>
<p>If so, then your import statement is finding that module instead of the "real" lda module.</p>
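<p>One quick way to verify this is to print the module's <code>__file__</code> attribute right after importing it; if the path points at your working directory rather than site-packages, a local file is shadowing the installed package. A sketch (using <code>json</code> as a stand-in module so it runs anywhere; the technique is identical for <code>lda</code>):</p>

```python
import json  # substitute the module you are debugging, e.g. lda

# the path tells you exactly which file Python imported
print(json.__file__)
```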
| 0 | 2016-07-23T01:59:40Z | [
"python",
"module",
"dataset",
"attributeerror",
"lda"
] |
Zip through a list and get every pair of items | 38,537,188 | <p>I would like to zip through a list of strings as below</p>
<pre><code>collections = [u'Room Designers', u'BCRF', u'House']
</code></pre>
<p>What I would like to achieve is 6 combinations of the three elements in the list - </p>
<pre><code>("Room Designers", "BCRF"), ("Room Designers", "House"), ("BCRF", "House"), ("BCRF", "Room Designers"), ("House", "BCRF"), ("House", "Room Designers")
</code></pre>
<p>With my code below</p>
<pre><code>zipall = [zip(i,j) for i in collections for j in collections if i!=j]
</code></pre>
<p>I obtained:</p>
<pre><code>[[(u'R', u'B'), (u'o', u'C'), (u'o', u'R'), (u'm', u'F')], [(u'R', u'H'), (u'o', u'o'), (u'o', u'u'), (u'm', u's'), (u' ', u'e')], [(u'B', u'R'), (u'C', u'o'), (u'R', u'o'), (u'F', u'm')], [(u'B', u'H'), (u'C', u'o'), (u'R', u'u'), (u'F', u's')], [(u'H', u'R'), (u'o', u'o'), (u'u', u'o'), (u's', u'm'), (u'e', u' ')], [(u'H', u'B'), (u'o', u'C'), (u'u', u'R'), (u's', u'F')]]
</code></pre>
<p>What would be a better way to do this? Thank you!!</p>
| 1 | 2016-07-23T01:09:31Z | 38,537,199 | <p>If you wanted to do it the way you've written, you need to omit the final <code>zip</code> call because that will break the strings up into their individual characters and pair <em>those</em> up. </p>
<pre><code>zipall = [(i,j) for i in collections for j in collections if i!=j]
# [(u'Room Designers', u'BCRF'), (u'Room Designers', u'House'), (u'BCRF', u'Room Designers'), (u'BCRF', u'House'), (u'House', u'Room Designers'), (u'House', u'BCRF')]
</code></pre>
<p>For these sorts of problems though, the <a href="https://docs.python.org/2/library/itertools.html" rel="nofollow"><code>itertools</code></a> library is very handy. For this specific problem, you can use <a href="https://docs.python.org/2/library/itertools.html#itertools.permutations" rel="nofollow"><code>itertools.permutations</code></a> to yield all permutations of 2 elements. With permutations (as opposed to combinations), the order of the pairing matters.</p>
<pre><code>import itertools
# Create all permutations of 2 items
output = list(itertools.permutations(collections, 2))
# [(u'Room Designers', u'BCRF'), (u'Room Designers', u'House'), (u'BCRF', u'Room Designers'), (u'BCRF', u'House'), (u'House', u'Room Designers'), (u'House', u'BCRF')]
</code></pre>
| 1 | 2016-07-23T01:12:45Z | [
"python",
"string",
"zip"
] |
How can I write the argument name from Python argparse to a file? | 38,537,217 | <p>I want to use argparse to write the argument names & values it receives to a file. I have this so far:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--infile", help = "seed file", default = 'test')
parser.add_argument("--lang", help="wiki language bigram", default = 'en')
parser.add_argument("--request_type", help="request type", default = 'sum')
parser.add_argument("--outfile", help = "path to outfile", default = 'outfile')
parser.add_argument("--summary", help = "true or false", action = "store_true")
parser.add_argument("--pagehits", help = "path to list of page hits", default = 'pagehits')
parser.add_argument("--exemplar_file", help = "file to inspire book")
args = parser.parse_args()
input_file = args.infile
print(args.infile)
print(var(args))
with open('modified-variables', 'w') as outfile:
    outfile.write(args.infile)
    outfile.write('hello world')
</code></pre>
<p>with --infile "/path/to/file" --lang "en" output is:</p>
<pre><code>/path/to/file
en
[but nothing written to file]
</code></pre>
<p>I want it to loop over all the positional parameters and write all those that are supplied in the command line to modified-variables in the format</p>
<pre><code>infile="/path/to/file"
lang="en"
</code></pre>
<p>Googling is not helping me figure out how to print the argnames and there is something simple wrong in the with/write construct.</p>
<p>UPDATE: added print(vars(args)) per answer #1, which yields:</p>
<pre><code>{'lang': 'en', 'exemplar_file': None, 'pagehits': 'pagehits', 'summary': False, 'outfile': '/tmp/pagekicker/123/out_test', 'request_type': 'sum', 'infile': '/home/fred/pagekicker-community/scripts/seeds/revolutionary-figures'}
</code></pre>
<p>Now I just want to write</p>
<p>lang="en"
exemplar_file="None"</p>
<p>etc.</p>
<p>to a file.</p>
| 0 | 2016-07-23T01:16:22Z | 38,537,273 | <p>Have you tried</p>
<pre><code>print(args) # the default Namespace display format
</code></pre>
<p>or</p>
<pre><code>print(vars(args))
</code></pre>
<p><code>vars(args)</code> is a dictionary, which you can display in a number of different ways. You could convert it to a list of tuples (<code>.items()</code>), you could iterate over the <code>keys</code>, etc.</p>
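<p>From there, producing the <code>name="value"</code> lines from your update is a short loop over that dictionary. A sketch trimmed to two arguments (<code>parse_args([])</code> is only there so the example runs without command-line input; use plain <code>parse_args()</code> in your script):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--infile", default='test')
parser.add_argument("--lang", default='en')
args = parser.parse_args([])

# one name="value" line per argument
with open('modified-variables', 'w') as outfile:
    for name, value in vars(args).items():
        outfile.write('%s="%s"\n' % (name, value))
```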
| 1 | 2016-07-23T01:25:17Z | [
"python",
"argparse",
"with-statement"
] |
How to find element based on text and class with Selenium? | 38,537,223 | <p>I have an html table, and I want to find the <code>td</code> element in <code>tbody</code> based on the text and if it doesn't contain a certain class. It should ignore anything in <code>thead</code> even if the text is present.</p>
<pre><code>...
<div id="mytable">
<div>
<table>
<thead>
<tr>....</tr>
...
</thead>
<tbody>
<tr>
<td class="inactive">3</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
</div>
...
</code></pre>
<p>Here is the code:</p>
<pre><code>def do_click(user_input):
    browser = webdriver.Firefox()
    browser.get(url)
    mytable_div = browser.find_element_by_id("mytable")
    element = mytable_div.find_element_by_xpath("//div/table/tbody/td[contains(text()='%s')]" % user_input)
    if element:
        element.click()
</code></pre>
<p>So if the <code>user_input = 3</code>, the table element <strong>should NOT</strong> be clicked because it has the class <code>inactive</code> even though the text is present.</p>
<p>If the <code>user_input = 4</code>, then the table element <strong>should</strong> be clicked since it does not have a inactive class and the text is present.</p>
<p>Currently the code doesn't work because my xpath expression is invalid, but I'm not sure what the correct way to check for both the class and the text would be.</p>
| 2 | 2016-07-23T01:17:10Z | 38,537,246 | <p>You are on the right track. <a href="https://developer.mozilla.org/en-US/docs/Web/XPath/Functions/contains" rel="nofollow"><code>contains()</code></a> is a function and expects 2 arguments: the string to look in and the substring to look for. Fix your expression:</p>
<pre><code>"//div/table/tbody//td[contains(text(), '%s')]" % user_input
</code></pre>
<p>You can also use a dot instead of <code>text()</code> in this case.</p>
<p>And, to also check for the class being not <code>inactive</code>, add "not contains" check:</p>
<pre><code>"//div/table/tbody//td[contains(., '%s')][not(contains(@class, 'inactive'))]" % user_input
</code></pre>
<p>As for the logic, handle the <a href="http://selenium-python.readthedocs.io/api.html#selenium.common.exceptions.NoSuchElementException" rel="nofollow"><code>NoSuchElementException</code></a> exception which, if raised, would mean the element was not found, click the element if no exception was raised:</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException
try:
    element = mytable_div.find_element_by_xpath("//div/table/tbody//td[contains(., '%s')][not(contains(@class, 'inactive'))]" % user_input)
    element.click()
except NoSuchElementException:
    print("Element not found")
</code></pre>
| 0 | 2016-07-23T01:20:51Z | [
"python",
"python-2.7",
"selenium",
"xpath",
"selenium-webdriver"
] |
Optimizing min function for changing lists | 38,537,346 | <p>I am dealing with a problem where I need to keep track of the minimum number in a list. However, this list is constantly diminishing, say from a million elements down to a single element. I was looking for a way to avoid checking the minimum value every time the list gets one element smaller. Like keeping track of the minimum element, and if it is removed the next minimum becomes the minimum. I want to accomplish this in linear time. (It should be achievable given the mechanics of the problem)</p>
<p>What I thought of since I started: I can use <code>collections.Counter</code> to count elements in the list. Then I find the minimum (already <code>O(2*n)</code>), and every time I remove an element, I subtract 1 from the value of that dictionary key. However, when the minimum number's count is depleted, I would still need to find the second minimum element so it could replace it.</p>
<p>Please help me find a solution to this. I would say this is an interesting problem.</p>
| 0 | 2016-07-23T01:41:01Z | 38,537,423 | <p>Sort the list once up front; that costs O(n log n), but after that the minimum is always at the front:</p>
<pre><code>a = [10, 9, 10, 8, 7, 6, 5, 4, 3, 2, 1, 1, 1, 0]  # the list you remove items from
a = sorted(a)  # sort ascending
# as you remove items from the list,
# a[0] is always the minimum element
minimum = a[0]  # be careful: there must be at least one item,
                # so check that before reading the minimum
</code></pre>
<p>So there is no need to search for it every time.</p>
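<p>A rough sketch of that idea, where "removing" the current minimum is just advancing an index (names here are illustrative):</p>

```python
a = [10, 9, 10, 8, 7, 6, 5, 4, 3, 2, 1, 1, 1, 0]
a.sort()  # one O(n log n) sort up front

i = 0  # everything before index i counts as removed
while i < len(a):
    current_min = a[i]  # minimum of the remaining items, O(1)
    # ... process current_min here ...
    i += 1
```

<p>If elements other than the current minimum can also be removed, a heap (the <code>heapq</code> module) is worth a look instead.</p>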
| 1 | 2016-07-23T01:57:37Z | [
"python",
"algorithm",
"performance"
] |
pandas: transform the row data to column data | 38,537,399 | <p>I have a dataframe like this:</p>
<pre><code>user_id category view collect
1 1 a 2 3
2 1 b 5 9
3 2 a 8 6
4 3 a 7 3
5 3 b 4 2
6 3 c 3 0
7 4 e 1 4
</code></pre>
<p>How can I change it to a new dataframe (each user_id appears only once), where each category's view and collect values become columns? If there is no data, fill it with 0, like this:</p>
<pre><code>user_id a_view a_collect b_view b_collect c_view c_collect d_view d_collect e_view e_collect
1 2 3 5 9 0 0 0 0 0 0
2 8 6 0 0 0 0 0 0 0 0
3 7 3 4 2 3 0 0 0 0 0
4 0 0 0 0 0 0 0 0 1 4
</code></pre>
| 1 | 2016-07-23T01:50:48Z | 38,537,440 | <p>The desired result can be obtained by <a href="http://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="nofollow">pivoting <code>df</code></a>, with values from <code>user_id</code> becoming the index and values from <code>category</code> becoming a column level:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'category': ['a', 'b', 'a', 'a', 'b', 'c', 'e'],
'collect': [3, 9, 6, 3, 2, 0, 4],
'user_id': [1, 1, 2, 3, 3, 3, 4],
'view': [2, 5, 8, 7, 4, 3, 1]})
result = (df.pivot(index='user_id', columns='category')
.swaplevel(axis=1).sortlevel(axis=1).fillna(0))
</code></pre>
<p>yields</p>
<pre><code>category a b c e
view collect view collect view collect view collect
user_id
1 2.0 3.0 5.0 9.0 0.0 0.0 0.0 0.0
2 8.0 6.0 0.0 0.0 0.0 0.0 0.0 0.0
3 7.0 3.0 4.0 2.0 3.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0
</code></pre>
<p>Above, <code>result</code> has a MultiIndex. In general I think this should be preferred over a flattened single index, since it retains more of the structure of the data. </p>
<p>However, the MultiIndex can be flattened into a single index:</p>
<pre><code>result.columns = ['{}_{}'.format(cat,col) for cat, col in result.columns]
print(result)
</code></pre>
<p>yields</p>
<pre><code> a_view a_collect b_view b_collect c_view c_collect e_view \
user_id
1 2.0 3.0 5.0 9.0 0.0 0.0 0.0
2 8.0 6.0 0.0 0.0 0.0 0.0 0.0
3 7.0 3.0 4.0 2.0 3.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 1.0
e_collect
user_id
1 0.0
2 0.0
3 0.0
4 4.0
</code></pre>
| 1 | 2016-07-23T02:01:27Z | [
"python",
"pandas"
] |
How to share a variable between two python scripts run separately | 38,537,405 | <p>I'm an extreme noob to python, so if there's a better way to do what I'm asking please let me know.</p>
<p>I have one file, which works with flask to create markers on a map. It has an array which stores these said markers. I'm starting the file through command prompt, and opening said file multiple times. Basically, how would one open a file multiple times, and have them share a variable (Not the same as having a subfile that shares variables with a superfile.) I'm okay with creating another file that starts the instances if needed, but I'm not sure how I'd do that.</p>
<p>Here is an example of what I'd like to accomplish. I have a file called, let's
say, test.py:</p>
<pre><code>global number
number += 1
print(number)
</code></pre>
<p>I'd like it so that when I start this through command prompt (python test.py) multiple times, it'd print the following:</p>
<pre><code>1
2
3
4
5
</code></pre>
<p>The only difference between above and what I have, is that what I have will be non-terminating and continuously running</p>
| 0 | 2016-07-23T01:52:48Z | 38,537,488 | <p>Your description is kind of confusing, but if I understand you correctly, one way of doing this would be to keep the value of the variable in a separate file.</p>
<p>When a script needs the value, read the value from the file and add one to it. If the file doesn't exist, use a default value of 1. Finally, rewrite the file with the new value.</p>
<p>However you said that this value would be shared among two python scripts, so you'd have to be careful that both scripts don't try to access the file at the same time.</p>
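<p>A minimal sketch of that idea (the filename is made up, and the simultaneous-access caveat still applies):</p>

```python
import os

COUNTER_FILE = 'counter.txt'  # illustrative name

def next_number():
    # read the previous value; default to 1 when no file exists yet
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            number = int(f.read()) + 1
    else:
        number = 1
    # rewrite the file so the next run sees the new value
    with open(COUNTER_FILE, 'w') as f:
        f.write(str(number))
    return number

print(next_number())
```

<p>Each separate run of the script then prints the next number, which is the behaviour asked for in the question.</p>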
| 1 | 2016-07-23T02:13:29Z | [
"python",
"arrays",
"variables"
] |
How to share a variable between two python scripts run separately | 38,537,405 | <p>I'm an extreme noob to python, so if there's a better way to do what I'm asking please let me know.</p>
<p>I have one file, which works with flask to create markers on a map. It has an array which stores these said markers. I'm starting the file through command prompt, and opening said file multiple times. Basically, how would one open a file multiple times, and have them share a variable (Not the same as having a subfile that shares variables with a superfile.) I'm okay with creating another file that starts the instances if needed, but I'm not sure how I'd do that.</p>
<p>Here is an example of what I'd like to accomplish. I have a file called, let's
say, test.py:</p>
<pre><code>global number
number += 1
print(number)
</code></pre>
<p>I'd like it so that when I start this through command prompt (python test.py) multiple times, it'd print the following:</p>
<pre><code>1
2
3
4
5
</code></pre>
<p>The only difference between above and what I have, is that what I have will be non-terminating and continuously running</p>
| 0 | 2016-07-23T01:52:48Z | 38,537,524 | <p>I think you could use <code>pickle.dump(your_array, file)</code> to serialize the data (your array) into a file. Then, the next time you run the script, you can load the data back with <code>pickle.load(file)</code>.</p>
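<p>A small sketch of how that could look for the marker array (the filename is illustrative):</p>

```python
import os
import pickle

STATE_FILE = 'markers.pkl'  # illustrative filename

# load whatever a previous run saved, if anything
if os.path.exists(STATE_FILE):
    with open(STATE_FILE, 'rb') as f:
        markers = pickle.load(f)
else:
    markers = []

markers.append('new marker')  # stand-in for your real marker objects

# save the updated list for the next run
with open(STATE_FILE, 'wb') as f:
    pickle.dump(markers, f)
```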
| 0 | 2016-07-23T02:20:31Z | [
"python",
"arrays",
"variables"
] |
How to share a variable between two python scripts run separately | 38,537,405 | <p>I'm an extreme noob to python, so if there's a better way to do what I'm asking please let me know.</p>
<p>I have one file, which works with flask to create markers on a map. It has an array which stores these said markers. I'm starting the file through command prompt, and opening said file multiple times. Basically, how would one open a file multiple times, and have them share a variable (Not the same as having a subfile that shares variables with a superfile.) I'm okay with creating another file that starts the instances if needed, but I'm not sure how I'd do that.</p>
<p>Here is an example of what I'd like to accomplish. I have a file called, let's
say, test.py:</p>
<pre><code>global number
number += 1
print(number)
</code></pre>
<p>I'd like it so that when I start this through command prompt (python test.py) multiple times, it'd print the following:</p>
<pre><code>1
2
3
4
5
</code></pre>
<p>The only difference between above and what I have, is that what I have will be non-terminating and continuously running</p>
| 0 | 2016-07-23T01:52:48Z | 38,537,583 | <p>What you seem to be looking for is some form of inter-process communication. In terms of python, each process has its own memory space and its own variables, meaning that if I ran:</p>
<pre><code>number += 1
print(number)
</code></pre>
<p>multiple times, each run would start fresh; you would not get 1, 2, ..., 5 on successive runs, because <code>number</code> lives only in that process's own memory, no matter how many times you start the script.</p>
<p>There are a few ways where you can keep consistency.</p>
<h1><strong>Writing To A File (named pipe)</strong></h1>
<p>One of your scripts can have (generator.py)</p>
<pre><code>import os
from time import sleep

num = 1
try:
    os.mkfifo("temp.txt")
except OSError:
    pass  # in case one of your other scripts already created it
while True:
    file = open("temp.txt", "w")
    file.write(str(num))  # write() expects a string, not an int
    file.close()  # important: if you don't close the file, the
                  # operating system keeps it locked and your other
                  # scripts won't have access
    num += 1
    sleep(5)  # seconds
</code></pre>
<p>In your other scripts (consumer.py)</p>
<pre><code>from time import sleep

while True:
    file = open("temp.txt", "r")
    number = int(file.read())
    file.close()
    print(number)
    sleep(5)  # seconds
</code></pre>
<p>You would start one generator and as many consumers as you want. Note: this does have a race condition that can't really be avoided. To store your array instead of a number, use a serializer like pickle or json to properly encode and decode the object when writing it to the file.</p>
<h2>Other Ways</h2>
<p>You can also look up how to use pipes (both named and unnamed), databases, ampq (IMHO the best way to do it but there is a learning curve and added dependencies), and if you are feeling bold use mmap.</p>
<h2>Design Change</h2>
<p>If you are willing to listen to a design change, Since you are making a flask application that has the variable in memory why don't you just make an endpoint to serve up your array and check the endpoint every so often?</p>
<pre><code>import json  # or pickle
from flask import Flask

app = Flask(__name__)
array = [objects]  # your in-memory data
# convert the objects to something JSON-serializable first
converted = method_to_convert_to_array_of_dicts(array)

@app.route("/array")
def hello():
    return json.dumps(converted)
</code></pre>
<p>You will need to convert but then the web server can be hosted and your clients would just need something like</p>
<pre><code>import requests
from time import sleep

while True:
    result = requests.get('http://localhost:5000/array')  # 5000 is Flask's default port
    array = result.json()  # parses the JSON body
    sleep(5)  # seconds
</code></pre>
| 2 | 2016-07-23T02:32:10Z | [
"python",
"arrays",
"variables"
] |
Invalid block tag. Did you forget to register or load this tag? | 38,537,464 | <p>Getting an invalid block tag message <code>Invalid block tag on line 2: 'out'. Did you forget to register or load this tag?</code> but don't know why. Here's my setup:</p>
<p>graphs.html</p>
<pre><code>{% out %}
</code></pre>
<p>views.py</p>
<pre><code>out = 'something to say'
template = loader.get_template('viz_proj/graphs.html')
context = {
'out' : out
}
return HttpResponse(template.render(context, request))
</code></pre>
<p>settings.py</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'viz_proj'
]
</code></pre>
<p>project hierarchy</p>
<pre><code>viz_proj
|
viz_proj----------------------------------------templates
| |
settings.py--views.py--urls.py graphs.html
</code></pre>
| 2 | 2016-07-23T02:06:01Z | 38,537,520 | <p>I think you want to try <code>{{ out }}</code> instead of <code>{% out %}</code>. In Django templates, <code>{{ ... }}</code> outputs a variable from the context, while <code>{% ... %}</code> is reserved for template tags such as <code>{% if %}</code>, which is why Django reports an invalid block tag named <code>out</code>.</p>
| 7 | 2016-07-23T02:19:42Z | [
"python",
"html",
"django"
] |
unable to install cairosvg on centos 6 | 38,537,532 | <p>When executing <code>sudo pip install cairosvg</code> on CentOS,
I get the following error:</p>
<pre><code> self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.6/site-packages/setuptools/command/easy_install.py", line 949, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'gcc' failed with exit status 1
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-WJIHyJ/cairocffi/
</code></pre>
| 0 | 2016-07-23T02:22:25Z | 38,540,686 | <p>CentOS 6.8 - 64</p>
<pre><code># yum install libffi-devel.x86_64
# pip install cairosvg
.
.
Installing collected packages: pycparser, cffi, cairocffi, cairosvg
Running setup.py install for pycparser
Running setup.py install for cffi
Running setup.py install for cairocffi
Running setup.py install for cairosvg
Successfully installed cairocffi-0.7.2 cairosvg-1.0.22 cffi-1.7.0 pycparser-2.14
</code></pre>
| 0 | 2016-07-23T10:22:26Z | [
"python",
"svg",
"centos",
"cairo"
] |
Use __float__ with non-float type | 38,537,551 | <p>Per my <a href="http://stackoverflow.com/q/38533476/3928184">question from earlier today</a> (which was wonderfully answered, and I appreciate everyone's insight), I've <a href="https://github.com/bjd2385/fftconvolve/blob/master/operalist.py" rel="nofollow">extended that small class</a> for the heck of it to almost all the operations we'd normally perform upon integers and floats.</p>
<p>Now I'm not certain how to convert all the entries to floats <em>without</em> <a href="http://stackoverflow.com/questions/1614236/in-python-how-to-i-convert-all-items-in-a-list-to-floats">list comprehensions</a>.</p>
<p>For instance, right now I have the following:</p>
<pre><code>def __float__(self):
    return operalist(float(i) for i in self)
</code></pre>
<p>But when I call the following sequence of commands:</p>
<pre><code>>>> from operalist import operalist
>>> X = operalist(range(1, 11))
>>> X
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> float(X)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __float__ returned non-float (type operalist)
</code></pre>
<p>What I would rather see is what we'd get by using a list comprehension:</p>
<pre><code>>>> [float(i) for i in X]
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
</code></pre>
<p>How do I add a <code>__float__</code> method, since it doesn't appear to be native to lists? Unless it's not a good idea.</p>
| 1 | 2016-07-23T02:26:31Z | 38,537,777 | <p>Unfortunately, you can't. I can't find any precise wording on this, but <a href="https://docs.python.org/3/reference/datamodel.html#object.__complex__" rel="nofollow">the documentation states</a>:</p>
<blockquote>
<p>Called to implement the built-in functions complex(), int(), float()
and round(). Should return a value of the appropriate type.</p>
</blockquote>
<p>I read that as meaning that <code>__float__()</code> should return a <code>float</code>, since <a href="https://docs.python.org/3/library/functions.html#float" rel="nofollow"><code>float()</code></a>:</p>
<blockquote>
<p>[r]eturn[s] a floating point number constructed from a number or string x.</p>
</blockquote>
<hr>
<p>For comparison, numpy also doesn't do this. Instead, it has a method <code>astype(<type>)</code> on its <code>ndarray</code> that converts to the specific <code><type></code>.<br>
That confirms to me that this indeed cannot be done.</p>
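<p>Following numpy's lead, one workaround is to give <code>operalist</code> its own conversion method instead of <code>__float__</code> (a sketch; the method name simply mirrors numpy's <code>astype</code>):</p>

```python
class operalist(list):
    def astype(self, type_):
        # convert every element, returning a new operalist
        return operalist(type_(i) for i in self)

X = operalist(range(1, 11))
print(X.astype(float))  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
```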
| 1 | 2016-07-23T03:14:33Z | [
"python",
"list"
] |
Run commands on python | 38,537,572 | <p>I'm trying to create a script to automatically compile apache. Sadly, at my work I need to compile each and every apache I install.<br>
So, I came up with this little code to run a command:</p>
<pre><code>print("Source location %s" % source_location)
print("Configure command %s" % configure_command)
config = subprocess.Popen(configure_command, stdout=subprocess.PIPE, shell=True)
(output, err) = config.communicate()
config_status = config.wait()
print("Return configure status = %s" % config_status)
</code></pre>
<p>At the moment I'm stuck on the configure part.<br>
Basically the configure line is like this:</p>
<blockquote>
<p>/Volumes/nirvash/script/workarea/httpd-2.2.31/configure --prefix=/tmp/apache-2.2.31-instance1 --enable-mods-shared=all --enable-proxy --enable-proxy-connect --enable-proxy-ftp --enable-proxy-http --enable-deflate --enable-cache --enable-disk-cache --enable-mem-cache --enable-file-cache --with-included-apr --with-mpm=worker</p>
</blockquote>
<p>The problem is that when the apache is compiling, it creates (mkdir) an "include" directory inside the httpd-2.2.31. But in this case the directory is created on the bin directory of my script.<br>
So the directory is created where the script is running. <br/><br/>
Is it possible to fix this? Is there any way to run the configure in the directory that is compiling?</p>
| 0 | 2016-07-23T02:30:24Z | 38,537,587 | <p>You can use <code>os.chdir</code> to change the current directory of your script to be the same as the directory which contains the source code.</p>
<pre><code>os.chdir(source_location)
</code></pre>
<p>Alternately, you could change <code>configure_command</code> to first change directories using <code>cd</code> prior to running <code>configure</code>.</p>
<pre><code>configure_command = 'cd "%s" && %s' % (source_location, configure_command)
</code></pre>
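<p>A third option is <code>subprocess.Popen</code>'s <code>cwd</code> argument, which runs the child process inside the given directory without changing your own script's working directory. A sketch with stand-in values for the question's variables:</p>

```python
import subprocess

source_location = "/tmp"   # stand-in for the httpd source directory
configure_command = "pwd"  # stand-in for the long ./configure line

# cwd makes the child process run inside source_location
config = subprocess.Popen(configure_command, stdout=subprocess.PIPE,
                          shell=True, cwd=source_location)
(output, err) = config.communicate()
print(output)
```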
| 1 | 2016-07-23T02:32:47Z | [
"python",
"apache",
"python-3.x"
] |
How to continue onto the next object in a word list after the nth loop | 38,537,607 | <p>Sorry for the confusing title, but basically I have two word list files and I want them to print out as shown below.</p>
<p>One text file is formatted like this:</p>
<pre><code>1
12
123
</code></pre>
<p>And the other is like this:</p>
<pre><code>Test1
Test2
Test3
</code></pre>
<p>I am trying to achieve the end result below (printing it out),
where the word from the second file advances on every line, while the word from the first file advances to its next line only every five lines</p>
<pre><code>Test1:1
Test2:1
Test3:1
Test4:1
Test5:1
Test6:12
Test7:12
Test8:12
Test9:12
Test10:12
Test11:123
Test12:123
Test13:123
Test14:123
Test15:123
Test16:1234
Etc
Etc
Etc
Etc
</code></pre>
<p>I have been attempting this for the last hour using:</p>
<pre><code>with open('file1.txt', 'r') as f:
    for line in f:
        for word in line.split():
</code></pre>
<p>But with no luck</p>
<p>Anything helps :)</p>
| 0 | 2016-07-23T02:37:56Z | 38,537,703 | <p>Something like this? Open both files and for every line in file one, print five lines from file two:</p>
<pre><code>with open('file1.txt', 'r') as f1:
    with open('file2.txt', 'r') as f2:
        for line1 in f1:
            i = 0
            for line2 in f2:
                i += 1
                print(line2.rstrip('\n') + ':' + line1.rstrip('\n'))
                if i == 5:
                    break
</code></pre>
<p>Output:</p>
<pre><code>Test1:1
Test2:1
Test3:1
Test4:1
Test5:1
Test6:12
Test7:12
Test8:12
Test9:12
Test10:12
Test11:123
Test12:123
Test13:123
Test14:123
Test15:123
</code></pre>
| 1 | 2016-07-23T02:57:56Z | [
"python",
"python-2.7"
] |
scipy's cdist incompatible with Sympy symbols | 38,537,627 | <p>I realised today that sympy's matrix symbols in a vector (for multiple partial element-wise derivatives over a covariance matrix) were incompatible with scipy's cdist during the optimize.minimize process, as it assumes actual use of numbers when making the function call (fair enough).</p>
<p>It first runs into the issue that the expression can't be converted to a float, as it contains sympy symbols</p>
<pre><code>TypeError: can't convert expression to a float
</code></pre>
<p>Followed by a very long list of repeated:</p>
<pre><code>During handling of the above exception, another exception occurred:
SystemError: <built-in function hasattr> returned a result with an error set
</code></pre>
<p>Is there any built-in way of utilising both cdist and substitution by way of sympy's symbols, or is the only option here to implement a custom implementation of cdist that can deal with symbols? Not that it matters, but cdist method I'm using is squared euclidean.</p>
<p>Thanks.</p>
| 1 | 2016-07-23T02:42:50Z | 38,541,729 | <p>As you've found, <code>scipy.spatial.distance.cdist</code> does not handle arbitrary objects. You'll need to implement the Euclidean norm yourself.</p>
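<p>For the squared-Euclidean case that is only a few lines, and because it uses nothing but <code>-</code>, <code>*</code> and <code>+</code> it also works on SymPy expressions (a sketch; plain tuples stand in for your points):</p>

```python
def sqeuclidean_cdist(XA, XB):
    # pairwise squared Euclidean distances between two lists of points;
    # works for any element type that supports -, * and +
    return [[sum((a - b) ** 2 for a, b in zip(x, y)) for y in XB]
            for x in XA]

print(sqeuclidean_cdist([(0, 0), (1, 2)], [(3, 4)]))  # [[25], [8]]
```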
| 1 | 2016-07-23T12:20:41Z | [
"python",
"optimization",
"scipy",
"sympy"
] |
scipy's cdist incompatible with Sympy symbols | 38,537,627 | <p>I realised today that sympy's matrix symbols in a vector (for multiple partial element-wise derivatives over a covariance matrix) were incompatible with scipy's cdist during the optimize.minimize process, as it assumes actual use of numbers when making the function call (fair enough).</p>
<p>It first runs into the issue that the expression can't be converted to a float, as it contains sympy symbols</p>
<pre><code>TypeError: can't convert expression to a float
</code></pre>
<p>Followed by a very long list of repeated:</p>
<pre><code>During handling of the above exception, another exception occurred:
SystemError: <built-in function hasattr> returned a result with an error set
</code></pre>
<p>Is there any built-in way of utilising both cdist and substitution by way of sympy's symbols, or is the only option here to implement a custom implementation of cdist that can deal with symbols? Not that it matters, but cdist method I'm using is squared euclidean.</p>
<p>Thanks.</p>
| 1 | 2016-07-23T02:42:50Z | 38,579,090 | <p>You can get the norm of a SymPy matrix expression by first converting it to an explicit matrix object, then using the norm method. </p>
<pre><code>In [13]: A = MatrixSymbol("A", 1, 3)
In [14]: A.as_explicit().norm()
Out[14]:
   __________________________
  ╱      2         2         2
╲╱   │A₀₀│  + │A₀₁│  + │A₀₂│
</code></pre>
| 1 | 2016-07-25T23:53:55Z | [
"python",
"optimization",
"scipy",
"sympy"
] |
Remove rows from Pandas DataFrame based on index condition | 38,537,753 | <p>I have a dataframe like this:</p>
<pre><code> 0 1 2 3 4
19238V105 NaN NaN NaN NaN NaN
91731X102 NaN NaN NaN 2450900.0 996600.0
97X1 NaN NaN NaN NaN NaN
</code></pre>
<p>I would like to drop all of the rows where: <code>len(index) != 9</code>. So the result would be:</p>
<pre><code> 0 1 2 3 4
19238V105 NaN NaN NaN NaN NaN
91731X102 NaN NaN NaN 2450900.0 996600.0
</code></pre>
<p>EDIT</p>
<p>I wrote this code:</p>
<pre><code>for index, row in df.iterrows():
    if len(index) != 9:
        df = df.drop(index)
</code></pre>
<p>Is there a better way? Also, I'm not entirely sure why both the <code>index, row</code> are required and not just <code>index</code>. Thanks</p>
| 2 | 2016-07-23T03:11:01Z | 38,537,886 | <p>Try this:</p>
<pre><code>df[df.index.str.len() == 9]
</code></pre>
| 1 | 2016-07-23T03:34:56Z | [
"python",
"pandas",
"dataframe"
] |
Joining Python dict with MySQL table | 38,537,855 | <p>Is there a way to perform an SQL query that joins a MySQL table with a dict-like structure that is not in the database but instead provided in the query?</p>
<p>In particular, I regularly need to post-process data I extract from a database with the respective exchange rates. Exchange rates are not stored in the database but retrieved on the fly and stored temporarily in a Python dict.</p>
<p>So, I have a dict: exchange_rates = {'EUR': 1.10, 'GBP': 1.31, ...}.</p>
<p>Let's say some query returns something like: id, amount, currency_code.</p>
<p>Would it be possible to add the dict to the query so I can return: id, amount, currency_code, usd_amount? This would remove the need to post-process in Python.</p>
| 2 | 2016-07-23T03:28:21Z | 38,538,171 | <p>This solution doesn't use a 'join', but does combine the data from Python into SQL via a case statement. You could generate the sql you want in python (as a string) that includes these values in a giant case statement. </p>
<p>You give no details, and don't say which version of Python, so it's hard to provide useful code. But this works with Python 2.7 and assumes you have some connection to the MySQL db in Python: </p>
<pre><code>exchange_rates = {'EUR': 1.10, 'GBP': 1.31, ...}
# create a long set of case conditions as a string
er_case_statement = "\n".join("when mytable.currency = '{0}' then {1}".format(k, v) for (k, v) in exchange_rates.iteritems())
# build the sql with these case statements
sql = """select <some stuff>,
case {0}
end as exchange_rate,
other columns
from tables etc
where etc
""".format(er_case_statement)
</code></pre>
<p>Then send this SQL to MySQL</p>
<p>I don't like this solution; you end up with a very large SQL statement which can hit the maximum ( <a href="http://stackoverflow.com/questions/16335011/what-is-maximum-query-size-for-mysql">What is maximum query size for mysql?</a> ). </p>
<p>Another idea is to use temporary tables in mysql. Again assuming you are connecting to the db in python, with python create the sql that creates a temporary table and insert the exchange rates, send that to MySQL, then build a query that joins your data to that temporary table. </p>
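<p>A rough sketch of that temporary-table idea in Python (the table and column names are invented for illustration, and the <code>cursor</code> calls are left commented out since they need a live MySQL connection):</p>

```python
exchange_rates = {'EUR': 1.10, 'GBP': 1.31}

# Stage the dict in a temporary table (hypothetical names)
create_sql = ("CREATE TEMPORARY TABLE tmp_rates "
              "(code CHAR(3) PRIMARY KEY, rate DECIMAL(12, 6))")
insert_sql = "INSERT INTO tmp_rates (code, rate) VALUES (%s, %s)"
rows = list(exchange_rates.items())

# Join your real table against the staged rates to compute the USD amount
join_sql = (
    "SELECT t.id, t.amount, t.currency_code, "
    "t.amount * r.rate AS usd_amount "
    "FROM mytable t "
    "JOIN tmp_rates r ON r.code = t.currency_code"
)

# With a live connection you would then run:
# cursor.execute(create_sql)
# cursor.executemany(insert_sql, rows)
# cursor.execute(join_sql)
```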
<p>Finally, you say you don't want to post-process in Python, but you have a dict from somewhere, so I don't know which environment you are using. BUT if you can get these exchange rates from the web, say with curl, then you could use shell to insert these values into a MySQL temp table, and join there. </p>
<p>Sorry this is general and not specific, but the question could use more specificity. Hope it helps someone else give a more targeted answer. </p>
| 1 | 2016-07-23T04:30:39Z | [
"python",
"mysql",
"sql"
] |
Set logging levels | 38,537,905 | <p>I'm trying to use the standard library to debug my code:</p>
<p>This works fine:</p>
<pre><code>import logging
logging.basicConfig(level=logging.info)
logger = logging.getLogger(__name__)
logger.info('message')
</code></pre>
<p>I can't make work the logger for the lower levels: </p>
<pre><code>logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
</code></pre>
<p>I don't get any response for neither of those.</p>
| 1 | 2016-07-23T03:38:29Z | 38,537,983 | <p>What Python version? That worked for me in 3.4. But note that <a href="https://docs.python.org/3.4/library/logging.html#logging.basicConfig" rel="nofollow">basicConfig()</a> won't affect the root handler if it's already set up:</p>
<blockquote>
<p>This function does nothing if the root logger already has handlers configured for it.</p>
</blockquote>
<p>To set the level on root explicitly, do <code>logging.getLogger().setLevel(logging.DEBUG)</code>. But ensure you've called <code>basicConfig()</code> beforehand so the root logger initially has some setup. I.e.:</p>
<pre><code>import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
</code></pre>
<p>Also note that "Loggers" and their "Handlers" both have distinct, independent log levels. So if you've previously explicitly loaded some complex logger config in your Python script, and that has messed with the root logger's handler(s), then this can have an effect, and just changing the logger's log level with <code>logging.getLogger().setLevel(..)</code> may not work. This is because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.</p>
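<p>A minimal sketch of that logger-vs-handler distinction (the logger name and messages here are made up for illustration): the handler's own level can swallow records even when the logger's level lets them through:</p>

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setLevel(logging.INFO)      # handler filters independently of the logger

logger = logging.getLogger("level_demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)      # logger would pass DEBUG records...
logger.propagate = False            # keep output confined to our handler

logger.debug("debug msg")           # ...but the handler drops this one
logger.info("info msg")             # this one is emitted

print(buf.getvalue())               # only "info msg" appears
```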
| 1 | 2016-07-23T03:55:08Z | [
"python",
"logging"
] |
Set logging levels | 38,537,905 | <p>I'm trying to use the standard library to debug my code:</p>
<p>This works fine:</p>
<pre><code>import logging
logging.basicConfig(level=logging.info)
logger = logging.getLogger(__name__)
logger.info('message')
</code></pre>
<p>I can't make work the logger for the lower levels: </p>
<pre><code>logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
</code></pre>
<p>I don't get any response for neither of those.</p>
| 1 | 2016-07-23T03:38:29Z | 38,538,348 | <p>I use the following setup for logging</p>
<h2>Yaml based config</h2>
<p>Create a yaml file called <strong>logging.yml</strong> like this</p>
<pre><code>version: 1
formatters:
simple:
format: "%(name)s - %(lineno)d - %(message)s"
complex:
format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"
handlers:
console:
class: logging.StreamHandler
level: DEBUG
formatter: simple
file:
class: logging.handlers.TimedRotatingFileHandler
when: midnight
backupCount: 5
level: DEBUG
formatter: simple
filename : Thrift.log
loggers:
qsoWidget:
level: INFO
handlers: [console,file]
propagate: yes
__main__:
level: DEBUG
handlers: [console]
propagate: yes
</code></pre>
<h2>Python - The main</h2>
<p>The "main" module should look like this</p>
<pre><code>import logging
import logging.config
import yaml

with open('logging.yml', 'rt') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.info("Contest is starting")
</code></pre>
<h2>Sub Modules/Classes</h2>
<p>These should start like this</p>
<pre><code>import logging

class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug('{} initialized'.format(self.__class__.__name__))
</code></pre>
<p>Hope that helps you...</p>
| 0 | 2016-07-23T05:02:46Z | [
"python",
"logging"
] |
How can I detect the method to request data from this site? | 38,537,941 | <p><strong>UPDATE:</strong> I've put together the following script to use the url for the XML <em>without the time-code-like suffix</em> as recommended in the <a href="http://stackoverflow.com/a/38538127/3904031">answer below</a>, and report the downlink powers which clearly fluctuate on the website. I'm getting three hour old, unvarying data. </p>
<p>So it looks like I need to properly construct that (time code? authorization? secret password?) in order to do this successfully. Like I say in the comment below, "<em>I don't want to do anything that's not allowed and welcome - NASA has enough challenges already trying to talk to a forty year old spacecraft 20 billion kilometers away!</em>"</p>
<pre><code>def dictify(r,root=True):
"""from: http://stackoverflow.com/a/30923963/3904031"""
if root:
return {r.tag : dictify(r, False)}
d=copy(r.attrib)
if r.text:
d["_text"]=r.text
for x in r.findall("./*"):
if x.tag not in d:
d[x.tag]=[]
d[x.tag].append(dictify(x,False))
return d
import xml.etree.ElementTree as ET
from copy import copy
import urllib2
url = 'https://eyes.nasa.gov/dsn/data/dsn.xml'
contents = urllib2.urlopen(url).read()
root = ET.fromstring(contents)
DSNdict = dictify(root)
dishes = DSNdict['dsn']['dish']
dp_dict = dict()
for dish in dishes:
powers = [float(sig['power']) for sig in dish['downSignal'] if sig['power']]
dp_dict[dish['name']] = powers
print dp_dict['DSS26']
</code></pre>
<hr>
<p>I'd like to keep track of which spacecraft that the <a href="https://en.wikipedia.org/wiki/NASA_Deep_Space_Network" rel="nofollow">NASA Deep Space Network</a> (DSN) is communicating with, say once per minute. </p>
<p>I learned how to do something similar from Flight Radar 24 from the answer to my <a href="http://stackoverflow.com/q/32162872/3904031">previous question</a>, which also still represents my current skills in getting data from web sites.</p>
<p>With FR24 I had explanations in <a href="http://blog.cykey.ca/post/88174516880/analyzing-flightradar24s-internal-api-structure" rel="nofollow">this blog</a> as a great place to start. I have opened the page with the <em>Developer Tools</em> function in the Chrome browser, and I can see that data for items such as dishes, spacecraft and associated numerical data are requested as an XML with urls such as </p>
<pre><code>https://eyes.nasa.gov/dsn/data/dsn.xml?r=293849023
</code></pre>
<p>so it looks like I need to construct the integer (time code? authorization? secret password?) after the <code>r=</code> once a minute. </p>
<p><strong>My Question:</strong> Using python, how could I best find out what that integer represents, and how to generate it in order to correctly request data once per minute?</p>
<p><a href="http://i.stack.imgur.com/i8ZLy.png" rel="nofollow"><img src="http://i.stack.imgur.com/i8ZLy.png" alt="enter image description here"></a></p>
<p><strong>above:</strong> screen shot montage from NASA's DSN Now page <a href="https://eyes.nasa.gov/dsn/dsn.html" rel="nofollow">https://eyes.nasa.gov/dsn/dsn.html</a> see also <a href="http://space.stackexchange.com/q/17430/12102">this question</a></p>
| 1 | 2016-07-23T03:46:16Z | 38,538,127 | <p>Using a random number (or a timestamp...) in a get parameter tricks the browser into really making the request (instead of using the browser cache).</p>
<p>This method is some kind of "hack" the webdevs use so that they are sure the request actually happens.</p>
<p>Since you aren't using a web browser, I'm pretty sure you could totally ignore this parameter, and still get the refreshed data.</p>
<p><strong>--- Edit ---</strong></p>
<p>Actually <code>r</code> seems to be required, and has to be updated.</p>
<pre><code>#!/bin/bash
wget https://eyes.nasa.gov/dsn/data/dsn.xml?r=$(date +%s) -O a.xml -nv
while true; do
sleep 1
wget https://eyes.nasa.gov/dsn/data/dsn.xml?r=$(date +%s) -O b.xml -nv
diff a.xml b.xml
cp b.xml a.xml -f
done
</code></pre>
<p>You don't need to emulate a browser. Simply set r to anything and increment it. (Or use a timestamp)</p>
| 1 | 2016-07-23T04:20:15Z | [
"python",
"url",
"web"
] |
How can I detect the method to request data from this site? | 38,537,941 | <p><strong>UPDATE:</strong> I've put together the following script to use the url for the XML <em>without the time-code-like suffix</em> as recommended in the <a href="http://stackoverflow.com/a/38538127/3904031">answer below</a>, and report the downlink powers which clearly fluctuate on the website. I'm getting three hour old, unvarying data. </p>
<p>So it looks like I need to properly construct that (time code? authorization? secret password?) in order to do this successfully. Like I say in the comment below, "<em>I don't want to do anything that's not allowed and welcome - NASA has enough challenges already trying to talk to a forty year old spacecraft 20 billion kilometers away!</em>"</p>
<pre><code>def dictify(r,root=True):
"""from: http://stackoverflow.com/a/30923963/3904031"""
if root:
return {r.tag : dictify(r, False)}
d=copy(r.attrib)
if r.text:
d["_text"]=r.text
for x in r.findall("./*"):
if x.tag not in d:
d[x.tag]=[]
d[x.tag].append(dictify(x,False))
return d
import xml.etree.ElementTree as ET
from copy import copy
import urllib2
url = 'https://eyes.nasa.gov/dsn/data/dsn.xml'
contents = urllib2.urlopen(url).read()
root = ET.fromstring(contents)
DSNdict = dictify(root)
dishes = DSNdict['dsn']['dish']
dp_dict = dict()
for dish in dishes:
powers = [float(sig['power']) for sig in dish['downSignal'] if sig['power']]
dp_dict[dish['name']] = powers
print dp_dict['DSS26']
</code></pre>
<hr>
<p>I'd like to keep track of which spacecraft that the <a href="https://en.wikipedia.org/wiki/NASA_Deep_Space_Network" rel="nofollow">NASA Deep Space Network</a> (DSN) is communicating with, say once per minute. </p>
<p>I learned how to do something similar from Flight Radar 24 from the answer to my <a href="http://stackoverflow.com/q/32162872/3904031">previous question</a>, which also still represents my current skills in getting data from web sites.</p>
<p>With FR24 I had explanations in <a href="http://blog.cykey.ca/post/88174516880/analyzing-flightradar24s-internal-api-structure" rel="nofollow">this blog</a> as a great place to start. I have opened the page with the <em>Developer Tools</em> function in the Chrome browser, and I can see that data for items such as dishes, spacecraft and associated numerical data are requested as an XML with urls such as </p>
<pre><code>https://eyes.nasa.gov/dsn/data/dsn.xml?r=293849023
</code></pre>
<p>so it looks like I need to construct the integer (time code? authorization? secret password?) after the <code>r=</code> once a minute. </p>
<p><strong>My Question:</strong> Using python, how could I best find out what that integer represents, and how to generate it in order to correctly request data once per minute?</p>
<p><a href="http://i.stack.imgur.com/i8ZLy.png" rel="nofollow"><img src="http://i.stack.imgur.com/i8ZLy.png" alt="enter image description here"></a></p>
<p><strong>above:</strong> screen shot montage from NASA's DSN Now page <a href="https://eyes.nasa.gov/dsn/dsn.html" rel="nofollow">https://eyes.nasa.gov/dsn/dsn.html</a> see also <a href="http://space.stackexchange.com/q/17430/12102">this question</a></p>
| 1 | 2016-07-23T03:46:16Z | 38,541,348 | <p>Regarding your updated question, why avoid sending the <code>r</code> query string parameter when it is very easy to generate it? Also, with the <a href="http://docs.python-requests.org" rel="nofollow"><code>requests</code></a> module, it's easy to send the parameter with the request too:</p>
<pre><code>import time
import requests
import xml.etree.ElementTree as ET
url = 'https://eyes.nasa.gov/dsn/data/dsn.xml'
r = int(time.time() / 5)
response = requests.get(url, params={'r': r})
root = ET.fromstring(response.content)
# etc....
</code></pre>
| 1 | 2016-07-23T11:38:04Z | [
"python",
"url",
"web"
] |
pandas comparing series with numpy float gives TypeError | 38,537,943 | <p>In <code>pandas</code>, you can do this:</p>
<pre><code>>>> x = pd.DataFrame([[1,2,3,4], [3,4,5,6]], columns=list('abcd'))
>>> x
a b c d
0 1 2 3 4
1 3 4 5 6
>>> 2 < x.a
0 False
1 True
Name: a, dtype: bool
</code></pre>
<p>However, when I try it with it with a <code>numpy</code> float:</p>
<pre><code>>>> np.float64(2) < x.a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/simon/Documents/workspace/rent-my-rez/venv/lib/python2.7/site-packages/pandas/core/ops.py", line 741, in wrapper
if len(self) != len(other):
TypeError: len() of unsized object
</code></pre>
<p>Is there some way around this (which doesn't involve casting the <code>numpy</code> float to a regular float), or some way I can monkey patch the <code>Series</code> class from pandas to implement reverse comparison? I've looked around in the source code for where comparison is implemented, but I couldn't find it, so a reference to the location in the code would be very helpful</p>
<p>(I am aware that it is easily fixed by changing the order of comparison, but I am asking this more out of interest, as I would like to understand the source code more)</p>
| 2 | 2016-07-23T03:46:57Z | 38,542,991 | <p>This seems to be a known issue talked about <a href="https://github.com/pydata/pandas/issues/13006" rel="nofollow">here</a> and fixed <a href="https://github.com/pydata/pandas/pull/13307" rel="nofollow">here</a>, making it tough to find the source if you (like me) are running <code>0.18.0</code> and trying to find the equivalent lines on github. If you look on github at <code>0.18.0</code> instead of <code>master</code> you can see the relevant lines, e.g. line 739 <a href="https://github.com/pydata/pandas/blob/v0.18.0/pandas/core/ops.py" rel="nofollow">here</a></p>
| 0 | 2016-07-23T14:43:55Z | [
"python",
"numpy",
"pandas"
] |
How can I use POST from requests module to login to Github? | 38,537,963 | <p>I have tried logging into GitHub using the following code:</p>
<pre><code>url = 'https://github.com/login'
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'login':'username',
'password':'password',
'authenticity_token':'Token that keeps changing',
'commit':'Sign in',
'utf8':'%E2%9C%93'
}
res = requests.post(url)
print(res.text)
</code></pre>
<p>Now, <code>res.text</code> prints the code of login page. I understand that it maybe because the token keeps changing continuously. I have also tried setting the URL to <code>https://github.com/session</code> but that does not work either.</p>
<p>Can anyone tell me a way to generate the token. I am looking for a way to login without using the API. I had asked <a href="http://stackoverflow.com/questions/38533741/where-can-i-use-the-post-method-without-any-api-in-real-life">another question</a> where I mentioned that I was unable to login. One comment said that I am not doing it right and it is possible to login just by using the requests module without the help of Github API.</p>
<p>ME: </p>
<blockquote>
<p>So, can I log in to Facebook or Github using the POST method? I have tried that and it did not work.</p>
</blockquote>
<p>THE USER:</p>
<blockquote>
<p>Well, presumably you did something wrong</p>
</blockquote>
<p>Can anyone please tell me what I did wrong?</p>
<p>After the suggestion about using sessions, I have updated my code:</p>
<pre><code>s = requests.Session()
headers = {Same as above}
s.put('https://github.com/session', headers=headers)
r = s.get('https://github.com/')
print(r.text)
</code></pre>
<p>I still can't get past the login page.</p>
| 1 | 2016-07-23T03:50:47Z | 38,538,011 | <p>I think you get back to the login page because you are redirected and since your code doesn't send back your cookies, you can't have a session.</p>
<p>You are looking for session persistance, <code>requests</code> provides it :</p>
<blockquote>
<p>Session Objects The Session object allows you to persist certain
parameters across requests. It also persists cookies across all
requests made from the Session instance, and will use urllib3's
connection pooling. So if you're making several requests to the same
host, the underlying TCP connection will be reused, which can result
in a significant performance increase (see HTTP persistent
connection).</p>
</blockquote>
<pre><code>s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('http://httpbin.org/cookies')
print(r.text)
# '{"cookies": {"sessioncookie": "123456789"}}'
</code></pre>
<p><a href="http://docs.python-requests.org/en/master/user/advanced/" rel="nofollow">http://docs.python-requests.org/en/master/user/advanced/</a></p>
| 1 | 2016-07-23T03:59:05Z | [
"python",
"python-3.x",
"post",
"github",
"python-requests"
] |
How can I use POST from requests module to login to Github? | 38,537,963 | <p>I have tried logging into GitHub using the following code:</p>
<pre><code>url = 'https://github.com/login'
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'login':'username',
'password':'password',
'authenticity_token':'Token that keeps changing',
'commit':'Sign in',
'utf8':'%E2%9C%93'
}
res = requests.post(url)
print(res.text)
</code></pre>
<p>Now, <code>res.text</code> prints the code of login page. I understand that it maybe because the token keeps changing continuously. I have also tried setting the URL to <code>https://github.com/session</code> but that does not work either.</p>
<p>Can anyone tell me a way to generate the token. I am looking for a way to login without using the API. I had asked <a href="http://stackoverflow.com/questions/38533741/where-can-i-use-the-post-method-without-any-api-in-real-life">another question</a> where I mentioned that I was unable to login. One comment said that I am not doing it right and it is possible to login just by using the requests module without the help of Github API.</p>
<p>ME: </p>
<blockquote>
<p>So, can I log in to Facebook or Github using the POST method? I have tried that and it did not work.</p>
</blockquote>
<p>THE USER:</p>
<blockquote>
<p>Well, presumably you did something wrong</p>
</blockquote>
<p>Can anyone please tell me what I did wrong?</p>
<p>After the suggestion about using sessions, I have updated my code:</p>
<pre><code>s = requests.Session()
headers = {Same as above}
s.put('https://github.com/session', headers=headers)
r = s.get('https://github.com/')
print(r.text)
</code></pre>
<p>I still can't get past the login page.</p>
| 1 | 2016-07-23T03:50:47Z | 38,545,068 | <p>You can also try using the PyGithub library to perform common GitHub tasks.
Check the link below:
<a href="https://github.com/PyGithub/PyGithub" rel="nofollow">https://github.com/PyGithub/PyGithub</a></p>
| 0 | 2016-07-23T18:17:06Z | [
"python",
"python-3.x",
"post",
"github",
"python-requests"
] |
How to use Selenium to get this index? | 38,538,032 | <p>I'm trying to retrieve the fear index from the link <a href="http://money.cnn.com/data/fear-and-greed/" rel="nofollow">http://money.cnn.com/data/fear-and-greed/</a>. The index is dynamically changing. When I inspect the element, it shows the coding below. I'm just wondering how to use python Selenium to get the 84 and other indexes? I tried to use the code below but only got blank. Any ideas?</p>
<pre><code>cr = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH,"//*[contains(text(), 'Fear & Greed Now')]")))
</code></pre>
<p>Below is the webpage code</p>
<pre><code><div id="needleChart" style="background-image:url('http://money.cnn.com/.element/img/5.0/data/feargreed/1.png');">
<ul>
<li>Fear &amp; Greed Now: 84 (Extreme Greed)
</li>
<li>Fear &amp; Greed Previous Close: 86 (Extreme Greed)</li>
<li>Fear &amp; Greed 1 Week Ago: 89 (Extreme Greed)</li>
<li>Fear &amp; Greed 1 Month Ago: 57 (Greed)</li>
<li>Fear &amp; Greed 1 Year Ago: 16 (Extreme Fear)</li>
</ul>
</code></pre>
| 3 | 2016-07-23T04:02:35Z | 38,538,114 | <p>According to the <a href="https://w3c.github.io/webdriver/webdriver-spec.html#get-element-text" rel="nofollow">specification</a>, <code>.text</code> would only give you the <em>rendered text</em> by default, which, I suspect, is becoming empty because of the weird styling of the "needleChart" parent container. </p>
<p>You need to use <code>innerHTML</code> instead of <code>.text</code> to workaround the "empty text" problem:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("http://money.cnn.com/data/fear-and-greed/")
driver.maximize_window()
wait = WebDriverWait(driver, 10)
list_indexes = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "#needleChart")))
indexes = list_indexes.find_elements_by_tag_name("li")
for index in indexes:
print(index.get_attribute("innerHTML"))
driver.close()
</code></pre>
<p>Prints:</p>
<pre><code>Fear &amp; Greed Now: 86 (Extreme Greed)
Fear &amp; Greed Previous Close: 86 (Extreme Greed)
Fear &amp; Greed 1 Week Ago: 89 (Extreme Greed)
Fear &amp; Greed 1 Month Ago: 57 (Greed)
Fear &amp; Greed 1 Year Ago: 16 (Extreme Fear)
</code></pre>
<hr>
<p>You can then post-process these texts and make a nice result dictionary, extracting the period as a key and the index as a value:</p>
<pre><code>import re
pattern = re.compile(r"^Fear &amp; Greed (.*?): (\d+)")
d = dict(pattern.search(index.get_attribute("innerHTML")).groups() for index in indexes)
print(d)
</code></pre>
<p>Prints:</p>
<pre><code>{
u'Previous Close': u'86',
u'Now': u'86',
u'1 Year Ago': u'16',
u'1 Week Ago': u'89',
u'1 Month Ago': u'57'
}
</code></pre>
| 2 | 2016-07-23T04:17:34Z | [
"python",
"selenium"
] |
How to use Selenium to get this index? | 38,538,032 | <p>I'm trying to retrieve the fear index from the link <a href="http://money.cnn.com/data/fear-and-greed/" rel="nofollow">http://money.cnn.com/data/fear-and-greed/</a>. The index is dynamically changing. When I inspect the element, it shows the coding below. I'm just wondering how to use python Selenium to get the 84 and other indexes? I tried to use the code below but only got blank. Any ideas?</p>
<pre><code>cr = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH,"//*[contains(text(), 'Fear & Greed Now')]")))
</code></pre>
<p>Below is the webpage code</p>
<pre><code><div id="needleChart" style="background-image:url('http://money.cnn.com/.element/img/5.0/data/feargreed/1.png');">
<ul>
<li>Fear &amp; Greed Now: 84 (Extreme Greed)
</li>
<li>Fear &amp; Greed Previous Close: 86 (Extreme Greed)</li>
<li>Fear &amp; Greed 1 Week Ago: 89 (Extreme Greed)</li>
<li>Fear &amp; Greed 1 Month Ago: 57 (Greed)</li>
<li>Fear &amp; Greed 1 Year Ago: 16 (Extreme Fear)</li>
</ul>
</code></pre>
| 3 | 2016-07-23T04:02:35Z | 38,538,121 | <p>You can find it by finding the element and extract its innerHTML text:</p>
<pre><code>element = driver.find_element_by_xpath("//div[@id='needleChart']/ul/li")
text = element.get_attribute("innerHTML")
</code></pre>
<p><code>text</code> will then contain a string like the following:</p>
<pre><code>Fear & Greed Now: 86 (Extreme Greed)
</code></pre>
<p>Then you can use a <strong>regex</strong> to extract the greed index from the string above.</p>
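<p>For example, a sketch of that extraction step (the sample string mirrors the page's format):</p>

```python
import re

text = "Fear & Greed Now: 86 (Extreme Greed)"   # innerHTML of one <li>

# Capture the number after the colon and the label in parentheses
match = re.search(r":\s*(\d+)\s*\((.+)\)", text)
index_value = int(match.group(1))   # 86
index_label = match.group(2)        # "Extreme Greed"
print(index_value, index_label)
```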
| 1 | 2016-07-23T04:18:34Z | [
"python",
"selenium"
] |