title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Boto3: create, execute shell command, terminate on ec2 instance | 38,754,839 | <p>I am a newbie to <strong>EC2</strong> and <strong>boto</strong>. I have to create a running EC2 instance, where I can send a command with an S3 file and execute a shell script. </p>
<p>I searched a lot and found a way with <strong>boto</strong> and <strong>paramiko</strong>. I don't know whether it is possible to run a shell script on an EC2 instance using boto3. Any clue or example in this regard will be a great help.</p>
<p>Thank you!</p>
| 0 | 2016-08-03T22:44:51Z | 38,755,787 | <p>The boto.manage.cmdshell module can be used to do this. To use it, you must have the paramiko package installed. A simple example of its use:</p>
<pre><code>import boto.ec2
from boto.manage.cmdshell import sshclient_from_instance
# Connect to your region of choice
conn = boto.ec2.connect_to_region('us-west-2')
# Find the instance object related to my instanceId
instance = conn.get_all_instances(['i-12345678'])[0].instances[0]
# Create an SSH client for our instance
# key_path is the path to the SSH private key associated with instance
# user_name is the user to login as on the instance (e.g. ubuntu, ec2-user, etc.)
ssh_client = sshclient_from_instance(instance,
'<path to SSH keyfile>',
user_name='ec2-user')
# Run the command. Returns a tuple consisting of:
# The integer status of the command
# A string containing the output of the command
# A string containing the stderr output of the command
status, stdout, stderr = ssh_client.run('ls -al')
</code></pre>
| 0 | 2016-08-04T00:42:41Z | [
"python",
"shell",
"amazon-ec2",
"boto3"
] |
getting module import error while running PyInstaller generated binary | 38,754,847 | <p>I'm getting an error while trying to run a simple hello.py built with PyInstaller on RHEL x64.</p>
<p>Python 2.7.12 is alt-installed in /opt/python</p>
<p>Compilation Output:</p>
<pre><code>[root@myrig CommandManager]# pyinstaller Hello.py
21 INFO: PyInstaller: 3.2
21 INFO: Python: 2.7.12
22 INFO: Platform: Linux-3.10.0-327.22.2.el7.x86_64-x86_64-with-redhat-7.2-Maipo
62 INFO: wrote /home/myuser/CommandManager/Hello.spec
66 INFO: UPX is not available.
107 INFO: Extending PYTHONPATH with paths
['/home/myuser/CommandManager', '/home/myuser/CommandManager']
107 INFO: checking Analysis
108 INFO: Building Analysis because out00-Analysis.toc is non existent
108 INFO: Initializing module dependency graph...
110 INFO: Initializing module graph hooks...
148 INFO: running Analysis out00-Analysis.toc
155 INFO: Caching module hooks...
158 INFO: Analyzing /home/myuser/CommandManager/Hello.py
160 INFO: Loading module hooks...
161 INFO: Loading module hook "hook-encodings.py"...
1493 INFO: Looking for ctypes DLLs
1493 INFO: Analyzing run-time hooks ...
1500 INFO: Looking for dynamic libraries
1801 INFO: Looking for eggs
1801 INFO: Python library not in binary depedencies. Doing additional searching...
1827 INFO: Using Python library /lib64/libpython2.7.so.1.0
1899 INFO: Warnings written to /home/myuser/CommandManager/build/Hello/warnHello.txt
1983 INFO: checking PYZ
1983 INFO: Building PYZ because out00-PYZ.toc is non existent
1983 INFO: Building PYZ (ZlibArchive) /home/myuser/CommandManager/build/Hello/out00-PYZ.pyz
2465 INFO: checking PKG
2465 INFO: Building PKG because out00-PKG.toc is non existent
2465 INFO: Building PKG (CArchive) out00-PKG.pkg
2648 INFO: Bootloader /opt/python/lib/python2.7/site-packages/PyInstaller/bootloader/Linux-64bit/run
2648 INFO: checking EXE
2649 INFO: Building EXE because out00-EXE.toc is non existent
2649 INFO: Building EXE from out00-EXE.toc
2690 INFO: Appending archive to ELF section in EXE /home/myuser/CommandManager/build/Hello/Hello
2991 INFO: checking COLLECT
2992 INFO: Building COLLECT because out00-COLLECT.toc is non existent
2993 INFO: Building COLLECT out00-COLLECT.toc
</code></pre>
<p>Hello.py:</p>
<pre><code>print("Hello")
</code></pre>
<p>This is the error I'm getting:</p>
<pre><code>mod is NULL - structTraceback (most recent call last):
File "/opt/python/lib/python2.7/struct.py", line 1, in <module>
from _struct import *
ImportError: /home/myuser/CommandManager/dist/Hello/_struct.so: undefined symbol: PyUnicodeUCS2_AsEncodedString
mod is NULL - pyimod02_archiveTraceback (most recent call last):
File "/tmp/pip-build-xDjNbD/pyinstaller/PyInstaller/loader/pyimod02_archive.py", line 28, in <module>
ImportError: No module named struct
mod is NULL - pyimod03_importersTraceback (most recent call last):
File "/tmp/pip-build-xDjNbD/pyinstaller/PyInstaller/loader/pyimod03_importers.py", line 24, in <module>
ImportError: No module named pyimod02_archive
Traceback (most recent call last):
File "site-packages/PyInstaller/loader/pyiboot01_bootstrap.py", line 15, in <module>
ImportError: No module named pyimod03_importers
Failed to execute script pyiboot01_bootstrap
</code></pre>
<p>Any clue what may be causing this?</p>
<p>The auto-generated Hello.spec file looks like this:</p>
<pre><code># -*- mode: python -*-
block_cipher = None
a = Analysis(['Hello.py'],
pathex=['/home/myuser/CommandManager'],
binaries=None,
datas=None,
hiddenimports=[],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
exclude_binaries=True,
name='Hello',
debug=False,
strip=False,
upx=True,
console=True )
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
name='Hello')
</code></pre>
| 0 | 2016-08-03T22:45:42Z | 38,781,390 | <p>PyInstaller needs python to be compiled with <code>--enable-shared</code> and <code>LDFLAGS=-Wl,-rpath=<path to python lib></code>.</p>
<p>In my case:</p>
<blockquote>
<p>./configure --enable-shared --prefix=/opt/python
LDFLAGS=-Wl,-rpath=/opt/python/lib</p>
</blockquote>
<p>Additional reference:
<a href="https://bugs.python.org/issue27685" rel="nofollow">https://bugs.python.org/issue27685</a></p>
| 0 | 2016-08-05T04:56:12Z | [
"python",
"python-2.7",
"pyinstaller"
] |
Why am I getting "ValueError: setting an array element with a sequence." when using the brute function from Scipy.optimization? | 38,754,937 | <p>Working on a model to predict electrophysiological data given a set of parameters. This script is trying to find values for those parameters that give predictions closest to experimental data. I'm running Python 2.7, Scipy 0.17.0, and Numpy 1.10.4. The script is attached below. The line that is getting the error is <code>epsc_sims[n,1] = r_prob*poolsize</code>.</p>
<p>Here is the script:</p>
<pre><code>import scipy.optimize as optimize
import numpy as np
import math
def min_params(*params):
std_err = 0
epsc_exp = np.loadtxt('sample.txt')
max_pool = params[0]
r_prob = params[1]
tau_recov = params[2]
poolsize = epsc_exp[0,1]/r_prob
epsc_sims = np.copy(epsc_exp)
count = epsc_exp.size
for n in xrange(1 , count/2):
poolsize = poolsize - epsc_sims[n-1, 1]
poolsize = max_pool + (poolsize - max_pool) * math.exp((epsc_sims[n-1, 0] - epsc_sims[n,0]) / tau_recov)
epsc_sims[n,1] = r_prob*poolsize
std_err += (epsc_exp[n,1] - epsc_sims[n,1])**2
std_err /= count
return std_err
params = (1e-8, 0.2, 0.5)
rranges = (slice(5e-9,5e-8,1e-9), slice(0.1, 0.3, 0.01), slice(0.3, 0.4, 0.01))
y = optimize.brute(min_params, rranges, args = params)
print y
</code></pre>
<p>And here is the Traceback (most recent call last):</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-25-21d343f36a44>", line 1, in <module>
runfile('C:/Users/brennan/Google Drive/Python Scripts/Inhibitory Model/brute.py', wdir='C:/Users/brennan/Google Drive/Python Scripts/Inhibitory Model')
File "D:\Python\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "D:\Python\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/brennan/Google Drive/Python Scripts/Inhibitory Model/brute.py", line 33, in <module>
y = optimize.brute(min_params, rranges, args = params)
File "D:\Python\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 2604, in brute
Jout = vecfunc(*grid)
File "D:\Python\Anaconda2\lib\site-packages\numpy\lib\function_base.py", line 1811, in __call__
return self._vectorize_call(func=func, args=vargs)
File "D:\Python\Anaconda2\lib\site-packages\numpy\lib\function_base.py", line 1874, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "D:\Python\Anaconda2\lib\site-packages\numpy\lib\function_base.py", line 1836, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "D:\Python\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 2598, in _scalarfunc
return func(params, *args)
File "C:/Users/brennan/Google Drive/Python Scripts/Inhibitory Model/brute.py", line 25, in min_params
epsc_sims[n,1] = r_prob*poolsize
ValueError: setting an array element with a sequence.
</code></pre>
<p>The text file I use for <code>epsc_exp = np.loadtxt('sample.txt')</code> is formatted as follows with ~3,000 lines:</p>
<pre><code>0.01108 1.223896e-08
0.03124 6.909375e-09
0.074 6.2475e-09
0.07718 3.895625e-09
</code></pre>
<p>This is my first post on here so please let me know if I need to change anything or provide more info!</p>
| 0 | 2016-08-03T22:56:36Z | 38,764,129 | <p>The <code>scipy.optimize</code> routines call the function with a vector of parameters to be optimized. Your function is thus called as <code>min_params(x, *params)</code>, where the <code>*params</code> are your custom arguments that you supplied using the keyword argument <code>args</code>. With the signature you defined, the vector <code>x</code> ends up as the first element of <code>params</code> inside the function.</p>
<p>Assuming that <code>max_pool</code>, <code>r_prob</code>, <code>tau_recov</code> are what you want to optimize over, here is how to fix things:</p>
<pre><code>def min_params(params):
...
y = optimize.brute(min_params, rranges)
</code></pre>
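<p>The mix-up can be reproduced without <code>scipy</code> at all. With a <code>def min_params(*params)</code> signature, the vector passed as the first positional argument just becomes <code>params[0]</code> (the helper name below is made up for the demonstration):</p>

```python
def inspect_call(*params):
    # Mimics the question's "def min_params(*params)": the optimizer's
    # vector and the extra args all land in one tuple.
    return params

x = [5e-9, 0.1, 0.3]       # the grid point brute() is evaluating
extra = (1e-8, 0.2, 0.5)   # the tuple supplied via args=params

packed = inspect_call(x, *extra)
# packed[0] is the whole vector, so "max_pool = params[0]" assigns a
# sequence, and r_prob * poolsize later yields a sequence, not a scalar --
# which is exactly what triggers the ValueError.
```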
| 0 | 2016-08-04T09:55:42Z | [
"python",
"numpy",
"scipy",
"scientific-computing"
] |
get text after specific tag with beautiful soup | 38,754,940 | <p>I have a text like</p>
<pre><code>page.content = <body><b>Title:</b> Test title</body>
</code></pre>
<p>I can get the Title tag with </p>
<pre><code>soup = BeautifulSoup(page.content)
record_el = soup('body')[0]
b_el = record_el.find('b',text='Title:')
</code></pre>
<p>but how can I get the text after the b tag? I would like to get the text after the element containing "Title:" by referring to that element, and not the body element.</p>
| 1 | 2016-08-03T22:56:39Z | 38,754,999 | <p>Referring to <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#going-sideways" rel="nofollow">the docs</a> you might want to use the <code>next_sibling</code> of your <code>b_el</code>:</p>
<pre><code>b_el.next_sibling # contains " Test title"
</code></pre>
<p>"Sibling" in this context is the next node, not the next element/tag. Your element's next node is a text node, so you get the text you want.</p>
| 1 | 2016-08-03T23:02:28Z | [
"python",
"html",
"beautifulsoup"
] |
How does one extract the alpha channel from an image using Pillow | 38,754,957 | <p>How does one export just an alpha mask from a PNG using Pillow?</p>
<p>Ideally, the result would be a grayscale image that represents the alpha channel.</p>
| 0 | 2016-08-03T22:58:09Z | 38,754,958 | <pre><code># Open the image and convert it to RGBA, just in case it was indexed
image = Image.open(image_path).convert('RGBA')
# Extract just the alpha channel
alpha = image.split()[-1]
# Unfortunately the alpha channel is still treated as such and can't be dumped
# as-is
# Create a new image with an opaque black background
bg = Image.new("RGBA", image.size, (0, 0, 0, 255))
# Copy the alpha channel to the new image using itself as the mask
bg.paste(alpha, mask=alpha)
# Since the bg image started as RGBA, we can save some space by converting it
# to grayscale ('L') Optionally, we can convert the image to be indexed which
# saves some more space ('P') In my experience, converting directly to 'P'
# produces both the Gray channel and an Alpha channel when viewed in GIMP,
# although the file size is about the same
bg.convert('L').convert('P', palette=Image.ADAPTIVE, colors=8).save(
mask_path,
optimize=True)
</code></pre>
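<p>If you want a quick sanity check of the split step, it can be run on a tiny synthetic image (assuming Pillow is installed):</p>

```python
from PIL import Image

# A 2x2 half-transparent red image
img = Image.new('RGBA', (2, 2), (255, 0, 0, 128))

# The last band of an RGBA image is the alpha channel, returned as a
# single-channel grayscale ('L') image of the alpha values
alpha = img.split()[-1]
```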
| 0 | 2016-08-03T22:58:09Z | [
"python",
"pillow"
] |
Why set is not in locals, globals or vars dictionaries | 38,754,974 | <p>When I try to check if a set is available in current local scope, or in global scope, I always get the below error.</p>
<pre><code>>>my_set = set()
>>my_set in locals()
>>Traceback (most recent call last):
File "<ipython-input-22-47b6756e3345>", line 1, in <module>
my_set in locals()
TypeError: unhashable type: 'set'
>>my_set in globals()
>>Traceback (most recent call last):
File "<ipython-input-22-47b6755f5503>", line 1, in <module>
my_set in globals()
TypeError: unhashable type: 'set'
>>my_set in vars()
>>Traceback (most recent call last):
File "<ipython-input-22-47b6755f9947>", line 1, in <module>
my_set in vars()
TypeError: unhashable type: 'set'
</code></pre>
<p>If set is not in any of these dictionaries (locals, globals or vars), where can I check if a set is defined?</p>
| 0 | 2016-08-03T22:59:11Z | 38,755,002 | <p>You need to quote the name when checking.</p>
<pre><code>>>> my_set = set()
>>> locals
<built-in function locals>
>>> locals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', 'my_set': set([]), '__doc__': None, '__package__': None}
>>> 'my_set' in locals()
True
>>>
</code></pre>
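<p>The original <code>TypeError</code> happens because membership testing on a dictionary hashes the left-hand object as a key, and a <code>set</code> is unhashable; checking by name works because the keys of <code>locals()</code> are strings. A quick self-contained check:</p>

```python
my_set = set()

# Membership on a dict tests keys; a set is unhashable, so this raises.
try:
    my_set in locals()
    raised = False
except TypeError:
    raised = True

# Checking by name works, because the names in locals() are strings.
defined = 'my_set' in locals()
```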
| 6 | 2016-08-03T23:02:43Z | [
"python"
] |
How to randomly select a character everytime it's needed | 38,754,990 | <p>How do I randomly select a character from a string of characters every time I want it to be changed, for example:</p>
<pre><code>import random
def user_input():
chars = 'abcdefghijklmnopqrstuvwxyz'
present = random.choice(chars)
while True:
print present
to_eval = raw_input('Enter key: ')
if to_eval == present:
print 'Correct!'
break
else:
# change the key and ask again
user_input()
</code></pre>
| 0 | 2016-08-03T23:01:05Z | 38,755,008 | <pre><code>import random
def user_input():
chars = 'abcdefghijklmnopqrstuvwxyz'
present = random.choice(chars)
while True:
print present
to_eval = raw_input('Enter key: ')
if to_eval == present:
print 'Correct!'
present = random.choice(chars)
user_input()
</code></pre>
<p>This will keep asking until correct. Then pick a new value and continue to loop. To end you would have to type ctl-c</p>
| 1 | 2016-08-03T23:03:33Z | [
"python",
"random"
] |
How to randomly select a character everytime it's needed | 38,754,990 | <p>How do I randomly select a character from a string of characters every time I want it to be changed, for example:</p>
<pre><code>import random
def user_input():
chars = 'abcdefghijklmnopqrstuvwxyz'
present = random.choice(chars)
while True:
print present
to_eval = raw_input('Enter key: ')
if to_eval == present:
print 'Correct!'
break
else:
# change the key and ask again
user_input()
</code></pre>
| 0 | 2016-08-03T23:01:05Z | 38,755,011 | <p>I think this is what you want:</p>
<pre><code>import random
def user_input():
while True:
chars = 'abcdefghijklmnopqrstuvwxyz'
present = random.choice(chars)
print present
to_eval = raw_input('Enter key: ')
if to_eval == present:
print 'Correct!'
break
user_input()
</code></pre>
| 0 | 2016-08-03T23:03:49Z | [
"python",
"random"
] |
How to randomly select a character everytime it's needed | 38,754,990 | <p>How do I randomly select a character from a string of characters every time I want it to be changed, for example:</p>
<pre><code>import random
def user_input():
chars = 'abcdefghijklmnopqrstuvwxyz'
present = random.choice(chars)
while True:
print present
to_eval = raw_input('Enter key: ')
if to_eval == present:
print 'Correct!'
break
else:
# change the key and ask again
user_input()
</code></pre>
| 0 | 2016-08-03T23:01:05Z | 38,755,081 | <p>you could play with <a href="http://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python">yield</a> to try to simplify your code too:</p>
<pre><code>import random

def guesses():
    chars = 'abcd..'
    while True:
        present = random.choice(chars)
        print present
        yield present == raw_input('Enter Key: ').strip()

def play():
    for guess in guesses():
        if guess:
            print 'Correct!'
            break
</code></pre>
| 0 | 2016-08-03T23:12:15Z | [
"python",
"random"
] |
Reading bluetooth low energy data from a custom app to a ble dongle(csr8510) | 38,755,087 | <p>I am having a problem connecting and sending Bluetooth low energy data from a custom-built app I created in Android Studio to a BLE dongle. The app has 4 virtual push buttons, and every time I press one of these buttons it sends a 4-bit number letting the Bluetooth dongle (peripheral) know. The problem is that when I use "hcidump" in Linux I can't read anything, which I figured is how I can view this data.</p>
<p>One of the problems I believe I am having is that I need to advertise some command that lets the app know what information I want. If this is the case, I'm unsure what to send to notify the app that I want to read the states of the virtual push buttons.</p>
<p>I am able to bring up the BLE dongle in Linux, and the app is able to discover it as well; they will connect for a short period and then disconnect because, as I said, the app is waiting for some kind of characteristic/service, or so I believe.</p>
| 0 | 2016-08-03T23:12:42Z | 38,758,013 | <p>I am also looking for something similar: an application where an Android app sends a button command to a Linux application via Bluetooth. The nearest similar example I found is on GitHub.</p>
<p>Here is <a href="https://gist.github.com/dvas0004/8209b67ff556cb18651d#file-aquapi-py" rel="nofollow">Python application</a> code for Linux side this is using Bluetooth RFCOMM communication.</p>
<p>And here is the <a href="https://gist.github.com/dvas0004/3b9128d94c0ecd50588a#file-aquariumdroid-java" rel="nofollow">Android application</a> main activity code. You have to make sure to put your Bluetooth name in the Android application, and also make sure the UUID is the same in both the Python and Android applications.</p>
<p>Give it a try, and then you can modify both applications for multiple buttons.</p>
<p>Make sure you have paired your Bluetooth device with Android before you run the application.</p>
| 0 | 2016-08-04T03:28:20Z | [
"python",
"linux",
"android-studio",
"bluetooth",
"bluetooth-lowenergy"
] |
Dividing a large file into several smaller, stochastic writing to multiple files | 38,755,111 | <p>It's a bit embarrassing but I'm having a bit of difficulty with a rather simple (at least it should be) task: </p>
<p>I want to have a script that takes a large text file (several GBs) and divides into smaller pieces. This partitioning however is supposed to happen not on the order of lines but based on matching string patterns, such that each line/entry is supposed to be categorized based on the starting characters. So the algorithm looks something like this:</p>
<ol>
<li>define categories in a dict {key : pattern}</li>
<li>define the matching/categorizing function</li>
<li>open the input file and begin to iterate entries</li>
<li>classify each entry</li>
<li>write entry out to the appropriate file</li>
</ol>
<p>The problem I'm having is with the output files, specifically:</p>
<ul>
<li><p>do I declare them in advance? The number of categories may change from instance to instance, so I technically don't know how many files to open. Also, there is no guarantee that each category is represented in the input data, hence it would be silly to create files that have no content. </p>
<ul>
<li><p>if I iterate over the categories in the dict, and open a bunch of files; how do I keep track of which file is for which key? Having another dict i.e. <code>dict2 {key : file}</code> feels like overkill and not particularly pretty either...</p></li>
<li><p>If I don't open the files in advance, and open/close a new io channel every time I need to write to a file, there will be significant overhead I think. </p></li>
</ul></li>
</ul>
<p>Another complication with opening the files only when needed is the following: every time I run the script I want to overwrite the resultant files. But if I have the file access inside the main loop, I will need to open the file for appending. </p>
<p>Below is the test code I have so far: </p>
<pre><code>from itertools import islice
import random, sys, os
cats = {
"key1" : "<some pattern>",
"key2" : "<some other pattern>",
"key3" : "<yet another pattern>"}
def delta(x, s):
return sum([int(c0 != c1) for c0,c1 in zip(x,s)])
def categorize_str(x, cats):
d = {cat : delta(x,tag) for cat,tag in cats.items()}
return min(d, key=d.get)
def main():
file = sys.argv[1]
nseq = None if len(sys.argv) < 3 else int(sys.argv[2])
path = os.path.dirname(file)
    base = os.path.basename(file)
(name,ext) = os.path.splitext(base)
for k in cats.keys(): # <----
        outfile = os.path.join(path, ''.join([name, "_", k, ext]))
# what do I do with outfile now???
read = ... # library call to that opens input file and returns an iterator
for rec in islice(read,nseq):
c = categorize_str(rec, cats)
# depending on c, write to one of the "outfile"s
if __name__ == "__main__":
main()
</code></pre>
| 0 | 2016-08-03T23:15:57Z | 38,755,334 | <p>Idea: define a class named, say, Pattern. In here are several member variables: one is the <code>"<some pattern>"</code> that you have already; another is the filename; a third is the mode to use for the next call to open() on that file. The mode would be "w" the first time (creating a new file) and then your code would change that to "a" so any later writes append. Instances of this class go into the "cats" dictionary as the values. This addresses your objection to using more than one dictionary - all the information you need to deal with one category is kept in one object. It also allows you to avoid creating empty files. </p>
<p>Perhaps the OS will deal well enough with the problem of doing a lot of small appends to several files. If that's a performance bottleneck you will have some more work to do (maybe you can cache a few updates in a list before your write them out in one step).</p>
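<p>A minimal sketch of that idea (the class layout and the small write helper are my own choices for illustration, not anything from your code):</p>

```python
import os
import tempfile

class Pattern(object):
    """Everything needed to handle one category, kept in one object."""
    def __init__(self, tag, filename):
        self.tag = tag              # the pattern string for this category
        self.filename = filename
        self.mode = "w"             # first open truncates any old results

    def write(self, line):
        with open(self.filename, self.mode) as f:
            f.write(line + "\n")
        self.mode = "a"             # later opens append

tmpdir = tempfile.mkdtemp()
cats = {k: Pattern(tag, os.path.join(tmpdir, k + ".txt"))
        for k, tag in [("key1", "<some pattern>"), ("key2", "<some other pattern>")]}

cats["key1"].write("first record")
cats["key1"].write("second record")
# key2 was never written to, so no empty file is created for it
```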
| 0 | 2016-08-03T23:42:15Z | [
"python",
"io"
] |
How to add a number to an index range of a pandas array | 38,755,143 | <p>In pandas, how can I operate on a subset of rows in a column, selected by index?</p>
<p>In particular, how can I add 1.0 to column y here, only where the date is greater than 2016-08-04?</p>
<pre><code>>>> pandas.DataFrame(
... index=[datetime.date.today(), datetime.date.today() + datetime.timedelta(1)],
... data=[[1.2, 234], [3.3, 432]],
... columns=['x', 'y'])
x y
2016-08-04 1.2 234
2016-08-05 3.3 432
[2 rows x 2 columns]
</code></pre>
<p>I don't mind whether this is in-place or returns a new dataframe.</p>
<p>The answer in this case should be:</p>
<pre><code> x y
2016-08-04 1.2 234
2016-08-05 3.3 433
</code></pre>
| 2 | 2016-08-03T23:19:55Z | 38,755,233 | <p>If you convert the index to a DateTimeIndex it becomes easier:</p>
<pre><code>df.index = pd.to_datetime(df.index)
df.loc[df.index > '2016-08-04', 'y'] += 1
df
Out:
x y
2016-08-04 1.2 234
2016-08-05 3.3 433
</code></pre>
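<p>Put together as a runnable snippet (with the dates hard-coded instead of <code>datetime.date.today()</code> so the result is reproducible):</p>

```python
import pandas as pd

df = pd.DataFrame(
    index=pd.to_datetime(['2016-08-04', '2016-08-05']),
    data=[[1.2, 234], [3.3, 432]],
    columns=['x', 'y'])

# Add 1 to y only for rows after 2016-08-04
df.loc[df.index > '2016-08-04', 'y'] += 1
```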
| 3 | 2016-08-03T23:31:45Z | [
"python",
"pandas"
] |
How to add a number to an index range of a pandas array | 38,755,143 | <p>In pandas, how can I operate on a subset of rows in a column, selected by index?</p>
<p>In particular, how can I add 1.0 to column y here, only where the date is greater than 2016-08-04?</p>
<pre><code>>>> pandas.DataFrame(
... index=[datetime.date.today(), datetime.date.today() + datetime.timedelta(1)],
... data=[[1.2, 234], [3.3, 432]],
... columns=['x', 'y'])
x y
2016-08-04 1.2 234
2016-08-05 3.3 432
[2 rows x 2 columns]
</code></pre>
<p>I don't mind whether this is in-place or returns a new dataframe.</p>
<p>The answer in this case should be:</p>
<pre><code> x y
2016-08-04 1.2 234
2016-08-05 3.3 433
</code></pre>
| 2 | 2016-08-03T23:19:55Z | 38,755,262 | <p>You can use the <code>.where</code> method on the <code>y</code> column.</p>
<pre><code>df.y = df.y.where(df.index <= datetime.date(2016, 8, 4), lambda k: k + 1)
</code></pre>
| 0 | 2016-08-03T23:35:27Z | [
"python",
"pandas"
] |
How to add a number to an index range of a pandas array | 38,755,143 | <p>In pandas, how can I operate on a subset of rows in a column, selected by index?</p>
<p>In particular, how can I add 1.0 to column y here, only where the date is greater than 2016-08-04?</p>
<pre><code>>>> pandas.DataFrame(
... index=[datetime.date.today(), datetime.date.today() + datetime.timedelta(1)],
... data=[[1.2, 234], [3.3, 432]],
... columns=['x', 'y'])
x y
2016-08-04 1.2 234
2016-08-05 3.3 432
[2 rows x 2 columns]
</code></pre>
<p>I don't mind whether this is in-place or returns a new dataframe.</p>
<p>The answer in this case should be:</p>
<pre><code> x y
2016-08-04 1.2 234
2016-08-05 3.3 433
</code></pre>
| 2 | 2016-08-03T23:19:55Z | 38,755,269 | <p>As a non-inplace alternative, you can use <code>df.add</code>:</p>
<pre><code>df.add(df.index > pd.to_datetime('2016-08-04'), axis=0, level="y")
</code></pre>
| 1 | 2016-08-03T23:36:12Z | [
"python",
"pandas"
] |
How to add a number to an index range of a pandas array | 38,755,143 | <p>In pandas, how can I operate on a subset of rows in a column, selected by index?</p>
<p>In particular, how can I add 1.0 to column y here, only where the date is greater than 2016-08-04?</p>
<pre><code>>>> pandas.DataFrame(
... index=[datetime.date.today(), datetime.date.today() + datetime.timedelta(1)],
... data=[[1.2, 234], [3.3, 432]],
... columns=['x', 'y'])
x y
2016-08-04 1.2 234
2016-08-05 3.3 432
[2 rows x 2 columns]
</code></pre>
<p>I don't mind whether this is in-place or returns a new dataframe.</p>
<p>The answer in this case should be:</p>
<pre><code> x y
2016-08-04 1.2 234
2016-08-05 3.3 433
</code></pre>
| 2 | 2016-08-03T23:19:55Z | 38,755,614 | <p>Checkout <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html" rel="nofollow">the docs</a> for <code>DatetimeIndex</code> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html" rel="nofollow">the docs</a> for the convenience function <code>date_range</code>. Either will allow you to set a <code>DatetimeIndex</code> that is natural to work with.</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
[[1.2, 234], [3.3, 432]],
index=pd.DatetimeIndex(start='today', periods=2, freq='D', normalize=True),
columns=['x', 'y'])
df.loc[df.index > '2016-08-04', 'y'] += 1
</code></pre>
| 1 | 2016-08-04T00:15:31Z | [
"python",
"pandas"
] |
Develop Raspberry apps from windows | 38,755,220 | <p>Is it possible to open files from a Raspberry Pi in Windows for editing (using for example Notepad++)?</p>
<p>I am currently using the built-in Python IDE in Raspbian, but I feel that it would speed up the development process if I could use a Windows IDE for development. I have also tried using a git repo to share files between the Pi and Windows, but it is a bit cumbersome too.</p>
<p>Or does anyone have any other ideas about workflow between Windows and the Raspberry Pi?</p>
| 0 | 2016-08-03T23:29:25Z | 38,755,272 | <p>Why not just set up a VM on your windows machine with rasbian running? Something like this will get you started: <a href="http://www.makeuseof.com/tag/emulate-raspberry-pi-pc/" rel="nofollow">http://www.makeuseof.com/tag/emulate-raspberry-pi-pc/</a></p>
<p>Otherwise - set up a network share between the two, edit files on your Windows computer, and run from the Pi.</p>
| 1 | 2016-08-03T23:36:33Z | [
"python",
"windows",
"git",
"raspberry-pi"
] |
Develop Raspberry apps from windows | 38,755,220 | <p>Is it possible to open files from a Raspberry Pi in Windows for editing (using for example Notepad++)?</p>
<p>I am currently using the built-in Python IDE in Raspbian, but I feel that it would speed up the development process if I could use a Windows IDE for development. I have also tried using a git repo to share files between the Pi and Windows, but it is a bit cumbersome too.</p>
<p>Or does anyone have any other ideas about workflow between Windows and the Raspberry Pi?</p>
| 0 | 2016-08-03T23:29:25Z | 38,759,185 | <p>Sure. I have gone through many options, and I found that one of the best ways is using <a href="https://winscp.net/eng/download.php" rel="nofollow">WinSCP</a>. </p>
<p>It's very easy for you to edit and update files with Notepad++ right in Windows.</p>
| 1 | 2016-08-04T05:35:37Z | [
"python",
"windows",
"git",
"raspberry-pi"
] |
Develop Raspberry apps from windows | 38,755,220 | <p>Is it possible to open files from a Raspberry Pi in Windows for editing (using for example Notepad++)?</p>
<p>I am currently using the built-in Python IDE in Raspbian, but I feel that it would speed up the development process if I could use a Windows IDE for development. I have also tried using a git repo to share files between the Pi and Windows, but it is a bit cumbersome too.</p>
<p>Or does anyone have any other ideas about workflow between Windows and the Raspberry Pi?</p>
| 0 | 2016-08-03T23:29:25Z | 38,760,276 | <p>You can run a Samba server on your Raspberry Pi and share your Python project folder as a network disk. Then you can use any Windows IDE you like; just open the files on the network disk.</p>
<p>Currently I am using VS2015 + Python Tools for Visual Studio for remote debugging purposes.</p>
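<p>For reference, a minimal share definition in <code>/etc/samba/smb.conf</code> on the Pi might look like this (the section name and path are just examples):</p>

```ini
[pyprojects]
   path = /home/pi/projects
   browseable = yes
   read only = no
   valid users = pi
```

<p>After adding it, restart the service (for example <code>sudo service smbd restart</code>) and map the share as a network drive in Windows.</p>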
| 1 | 2016-08-04T06:47:47Z | [
"python",
"windows",
"git",
"raspberry-pi"
] |
Access similar lines from a file and apply a function | 38,755,263 | <p>I am trying to access similar lines from a file and then apply a sum on their values.</p>
<p>Here is my input file format:</p>
<pre><code>K1 20
K2 23
K3 24
K3 14
K3 10
K2 5
</code></pre>
<p>So, my goal is to create an output file that contains the sum of the values per key:</p>
<pre><code>K1 20
K2 28
K3 48
</code></pre>
<ul>
<li>It is a big text file >20GB. So I cannot store the whole thing into memory at once. </li>
<li>I was successful in reading the file in chunks and doing the sums per key for those chunks; now I want to merge the output chunks.</li>
</ul>
<p>For example first chunk</p>
<pre><code>K1 20
K2 23
K3 24
</code></pre>
<p>second chunk</p>
<pre><code>K3 24
K2 5
</code></pre>
<p>Now I am lost on how to merge them all and keep updating the records with their new values.</p>
<p>New values after merging will be:</p>
<pre><code>K1 20
K2 28
K3 48
</code></pre>
| 0 | 2016-08-03T23:35:41Z | 38,755,383 | <p>The following should accomplish the desired functionality.</p>
<pre><code>from collections import Counter
output = Counter()
with open("input.txt") as file:
    for line in file:          # iterate lazily; never loads the whole file
        line = line.strip()
        if line:
            key, value = line.split()
            output[key] += int(value)
with open("output.txt", 'w+') as file:
for key, value in output.items():
file.write("{key} {value}\n".format(key=key, value=value))
</code></pre>
| 1 | 2016-08-03T23:48:59Z | [
"python"
] |
Access similar lines from a file and apply a function | 38,755,263 | <p>I am trying to access similar lines from a file and then apply a sum on their values.</p>
<p>Here is my input file format:</p>
<pre><code>K1 20
K2 23
K3 24
K3 14
K3 10
K2 5
</code></pre>
<p>So, my goal is to create an output file that creates a sum of values per record:</p>
<pre><code>K1 20
K2 28
K3 48
</code></pre>
<ul>
<li>It is a big text file >20GB. So I cannot store the whole thing into memory at once. </li>
<li>I was successful in reading the file in to chunks and do the sums per record for those chunks, now I want to merge these output chunks.</li>
</ul>
<p>For example first chunk</p>
<pre><code>K1 20
K2 23
K3 24
</code></pre>
<p>second chunk</p>
<pre><code>K3 24
K2 5
</code></pre>
<p>Now I am lost on how to merge them all while keeping the records updated with their new values.</p>
<p>New values after merging will be </p>
<p>K1 20</p>
<p>K2 28</p>
<p>K3 48</p>
| 0 | 2016-08-03T23:35:41Z | 38,755,433 | <blockquote>
<p>It is a big text file >20GB. So I cannot store the whole thing into memory at once.</p>
</blockquote>
<ol>
<li>It does not matter how big the file is; what matters is how many unique keys there are, because only one running sum per key is kept.</li>
<li>A Python <code>Counter</code> still keeps all of those keys in memory. That is not going to help if you are running in a memory-constrained environment.</li>
</ol>
<hr>
<p>My suggestion:</p>
<ul>
<li>Sort the file in the alphabetical order. I would just send it via unix <code>sort</code>. (I am assuming you have space on your FS)</li>
<li>Iterate over lines. Extract the first portion of the current record. Iterate while the first portion of the record is the same -- while summing up the second part.</li>
<li>When the record type changes -- write one line to the file -- with the sum you have been holding in your memory until now.</li>
<li>Repeat.</li>
</ul>
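<p>A minimal sketch of the steps above, assuming the file has already been sorted by key (e.g. with unix <code>sort -k1,1 input.txt > sorted.txt</code>):</p>

```python
from itertools import groupby

def merge_sorted(in_path, out_path):
    # Assumes in_path is already sorted by key, e.g. via: sort -k1,1 input.txt
    with open(in_path) as src, open(out_path, 'w') as dst:
        rows = (line.split() for line in src if line.strip())
        for key, group in groupby(rows, key=lambda row: row[0]):
            dst.write('%s %d\n' % (key, sum(int(row[1]) for row in group)))
```

<p>Only one key's running sum is held in memory at a time, so this works no matter how large the file is.</p>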
| 0 | 2016-08-03T23:54:10Z | [
"python"
] |
Encoding csv files on opening with Python | 38,755,277 | <p>So i have this csv which has rows like these:</p>
<pre><code>"41975","IT","Catania","2016-01-12T10:57:50+01:00",409.58
"538352","DE","Düsseldorf","2015-12-18T20:50:21+01:00",95.03
"V22211","GB","Nottingham","2015-12-31T11:17:59+00:00",872
</code></pre>
<p>In the current example the first and third rows work fine, but the program crashes when it prints <code>Düsseldorf</code>; the <code>ü</code> is the problematic character.</p>
<p>I want to read the information from this csv file and be able to <code>print</code> it. Here is my code:</p>
<pre><code>def load_sales(file_name):
SALES_ID = 0
SALES_COUNTRY = 1
SALES_CITY = 2
SALES_DATE = 3
SALES_PRICE =4
with open(file_name, 'r', newline='', encoding='utf8') as r:
reader = csv.reader(r)
result=[]
for row in reader:
sale={}
sale["id"]=row[SALES_ID]
sale["country"]=row[SALES_COUNTRY]
sale["city"]=row[SALES_CITY]
sale["date"]=row[SALES_DATE]
sale["price"]=float(row[SALES_PRICE])
result.append(sale)
</code></pre>
<p>when I print I print the <code>result</code> I get:</p>
<pre><code> File "C:\Anaconda3\lib\encodings\cp866.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\xfc' in position 384: character maps to <undefined>
</code></pre>
<p>So far I have tried: changing the <code>encoding</code> value in the open function with <code>utf-8</code>, <code>UTF8</code> etc., making a print function:</p>
<pre><code>def write_uft8(data):
print(data).encode('utf-8')
</code></pre>
<p>But this is not a viable approach when you have to print a list of dictionaries.</p>
<p>Someone told me that the problem is that my Python is not set to encode these messages to utf-8. Is that true, and how do I change it?</p>
| 1 | 2016-08-03T23:37:06Z | 38,755,510 | <p>The issue here is that when python writes to a stream, it attempts to write text in a fashion that is compatible with the encoding or character set of that stream.</p>
<p>In this case, it appears you are running the command in a Windows console that is set to display Cyrillic text (CP866). The Cyrillic codepage does not contain a corresponding character for <code>ü</code> and thus the string cannot be decoded to an appropriate character for output.</p>
<p>Changing the active codepage of your windows cmd console to <code>utf-8</code> should help:</p>
<pre><code>> chcp 65001
</code></pre>
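<p>If changing the codepage is not an option, the failure can also be made non-fatal by printing through a writer created with <code>errors='replace'</code>. A small sketch, using an in-memory buffer to stand in for a cp866 console:</p>

```python
import io

# Simulate a cp866 console: characters it cannot encode become '?'
# instead of raising UnicodeEncodeError.
buf = io.BytesIO()
console = io.TextIOWrapper(buf, encoding='cp866', errors='replace')
console.write('D\u00fcsseldorf\n')
console.flush()
```

<p>In a real script the same wrapper can be applied to <code>sys.stdout.buffer</code>; the <code>ü</code> glyph is lost, but the program keeps running.</p>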
| 0 | 2016-08-04T00:02:48Z | [
"python",
"python-3.x",
"csv",
"encoding",
"utf-8"
] |
Restricted set operations on python dictionary key views | 38,755,358 | <p>Lets see the code snippet below:</p>
<pre><code>d = {1:1}
keys = d.keys()
print(keys & {1,2,3})# {1}
d[2] = 2
print(keys & {1,2,3}) # {1,2} # keys() is a view/reference
print({1,2}.issubset({1,2,3})) # True
print(keys.issubset({1,2,3})) # 'dict_keys' object has no attribute 'issubset'
</code></pre>
<p>It is mentioned in the official documents on <a href="https://docs.python.org/3.0/library/stdtypes.html#dictionary-view-objects" rel="nofollow">dictionary view objects</a>:</p>
<blockquote>
<p>Keys views are set-like since their entries are unique and hashable.
... Then these set operations are available ("other" refers either to
another view or a set): [&, |, ^, -]</p>
</blockquote>
<p>If the keys are set-like, why are the set operations on them restricted to those four infix operators? Why, for example, are side-effect-free operations like <code>issuperset</code> or <code>issubset</code> not permitted?</p>
| 5 | 2016-08-03T23:45:14Z | 38,755,413 | <blockquote>
  <p>Why, for example, are side-effect-free operations like <code>issuperset</code> or <code>issubset</code> not permitted?</p>
</blockquote>
<p>They are; you just have to use the <code>>=</code> and <code><=</code> operators:</p>
<pre><code>print(keys <= {1, 2, 3})
</code></pre>
<p>They also support <code>isdisjoint</code> in method form, since there's no operator for it:</p>
<pre><code>print(keys.isdisjoint({1, 2, 3}))
</code></pre>
| 6 | 2016-08-03T23:52:26Z | [
"python",
"set"
] |
Stripping punctuation from text file including 's and commas | 38,755,412 | <p>I am creating a list of the most frequent words in a text file, but I keep getting words and their possessive versions counted separately, like iphone and iphone's. I also need to strip trailing commas from words like "iphone," in my results. I want to count those variants together as one entity.
Here is my entire code.</p>
<pre><code># Prompt user for text file to analyze
filename = input("Enter a valid text file to analyze: ")
# Open file and read it line by line
s = open(filename, 'r').read().lower()
# List of stop words that are not inportant to the meaning of the content
# These words will not be counted in the number of characters, or words.
stopwords = ['a', 'about', 'above', 'across', 'after', 'afterwards']
stopwords += ['again', 'against', 'all', 'almost', 'alone', 'along']
stopwords += ['already', 'also', 'although', 'always', 'am', 'among']
stopwords += ['amongst', 'amoungst', 'amount', 'an', 'and', 'another']
stopwords += ['any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere']
stopwords += ['are', 'around', 'as', 'at', 'back', 'be', 'became']
stopwords += ['because', 'become', 'becomes', 'becoming', 'been']
stopwords += ['before', 'beforehand', 'behind', 'being', 'below']
stopwords += ['beside', 'besides', 'between', 'beyond', 'bill', 'both']
stopwords += ['bottom', 'but', 'by', 'call', 'can', 'cannot', 'cant']
stopwords += ['co', 'computer', 'con', 'could', 'couldnt', 'cry', 'de']
stopwords += ['describe', 'detail', 'did', 'do', 'done', 'down', 'due']
stopwords += ['during', 'each', 'eg', 'eight', 'either', 'eleven', 'else']
stopwords += ['elsewhere', 'empty', 'enough', 'etc', 'even', 'ever']
stopwords += ['every', 'everyone', 'everything', 'everywhere', 'except']
stopwords += ['few', 'fifteen', 'fifty', 'fill', 'find', 'fire']
stopwords += ['five', 'for', 'former', 'formerly', 'forty', 'found']
stopwords += ['four', 'from', 'front', 'full', 'further', 'get', 'give']
stopwords += ['go', 'had', 'has', 'hasnt', 'have', 'he', 'hence', 'her']
stopwords += ['here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers']
stopwords += ['herself', 'him', 'himself', 'his', 'how', 'however']
stopwords += ['hundred', 'i', 'ie', 'if', 'in', 'inc', 'indeed']
stopwords += ['interest', 'into', 'is', 'it', 'its', 'itself', 'keep']
stopwords += ['last', 'latter', 'latterly', 'least', 'less', 'ltd', 'made']
stopwords += ['many', 'may', 'me', 'meanwhile', 'might', 'mill', 'mine']
stopwords += ['more', 'moreover', 'most', 'mostly', 'move', 'much']
stopwords += ['must', 'my', 'myself', 'name', 'namely', 'neither', 'never']
stopwords += ['nevertheless', 'next', 'nine', 'no', 'nobody', 'none']
stopwords += ['noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of']
stopwords += ['off', 'often', 'on','once', 'one', 'only', 'onto', 'or']
stopwords += ['other', 'others', 'otherwise', 'our', 'ours', 'ourselves']
stopwords += ['out', 'over', 'own', 'part', 'per', 'perhaps', 'please']
stopwords += ['put', 'rather', 're', 's', 'same', 'see', 'seem', 'seemed']
stopwords += ['seeming', 'seems', 'serious', 'several', 'she', 'should']
stopwords += ['show', 'side', 'since', 'sincere', 'six', 'sixty', 'so']
stopwords += ['some', 'somehow', 'someone', 'something', 'sometime']
stopwords += ['sometimes', 'somewhere', 'still', 'such', 'take']
stopwords += ['ten', 'than', 'that', 'the', 'their', 'them', 'themselves']
stopwords += ['then', 'thence', 'there', 'thereafter', 'thereby']
stopwords += ['therefore', 'therein', 'thereupon', 'these', 'they']
stopwords += ['thick', 'thin', 'third', 'this', 'those', 'though', 'three']
stopwords += ['three', 'through', 'throughout', 'thru', 'thus', 'to']
stopwords += ['together', 'too', 'top', 'toward', 'towards', 'twelve']
stopwords += ['twenty', 'two', 'un', 'under', 'until', 'up', 'upon']
stopwords += ['us', 'very', 'via', 'was', 'we', 'well', 'were', 'what']
stopwords += ['whatever', 'when', 'whence', 'whenever', 'where']
stopwords += ['whereafter', 'whereas', 'whereby', 'wherein', 'whereupon']
stopwords += ['wherever', 'whether', 'which', 'while', 'whither', 'who']
stopwords += ['whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with']
stopwords += ['within', 'without', 'would', 'yet', 'you', 'your']
stopwords += ['yours', 'yourself', 'the', 'â', 'The', '9', '5', 'just']
# count characters
num_chars = len(s)
# count lines
num_lines = s.count('\n')
# Split the words in the file
words = s.split()
# Create empty dictionary
d = {}
for i in d:
i = i.replace('.','')
i = i.replace(',','')
i = i.replace('\'','')
d.append(i.split())
# Add words to dictionary if they are not in it already.
for w in words:
if w in d:
#increase the count of each word added to the dictionary by 1 each time it appears
d[w] += 1
else:
d[w] = 1
# Find the sum of each word's count in the dictionary. Drop the stopwords.
num_words = sum(d[w] for w in d if w not in stopwords)
# Create a list of words and their counts from file
# in the form number of occurrence word count word
lst = [(d[w], w) for w in d if w not in stopwords]
# Sort the list of words by the count of the word
lst.sort()
# Sort the word list from greatest to least
lst.reverse()
# Print number of characters, number of lines read, then number of words
# that were counted minus the stop words.
print('Your input file has characters = ' + str(num_chars))
print('Your input file has num_lines = ' + str(num_lines))
print('Your input file has num_words = ' + str(num_words))
# Print the top 30 most frequent words
print('\n The 30 most frequent words are \n')
# Start list at 1, the most frequent word
i = 1
# Create list for the first 30 frequent words
for count, word in lst[:30]:
# Create list with the number of occurrence the word count then the word
print('%2s. %4s %s' % (i, count, word))
# Increase the list number by 1 after each entry
i += 1
</code></pre>
<p>Here are some of my results.</p>
<pre><code> 1. 40 iphone
2. 15 users
3. 12 iphoneâs
4. 9 music
5. 9 apple
6. 8 web
7. 7 new
</code></pre>
<p>Any help would be greatly appreciated. Thank You</p>
| 1 | 2016-08-03T23:52:15Z | 38,755,502 | <p>You need to say <code>for i in words:</code> not <code>for i in d:</code>
You are iterating through an empty dictionary during the replacement steps, so nothing is changing. Just remove that loop and move the replacement steps to the top of the <code>for w in words:</code> loop, so you only have to make one pass through.</p>
<p>I would redo that whole section thusly:</p>
<pre><code>for w in words:
w = w.replace('.','').replace(',','').replace('\'','').replace("â","")
d[w] = d.get(w,0) + 1
</code></pre>
<p>As it is now, you are also trying to split <code>i</code> before appending to the dictionary, but it has already been split. A dictionary also needs key:value pairs, not bare keys.</p>
<p>Instead of testing <code>if w in d:</code> and then indexing, it is faster to use <code>.get()</code> with a default value of zero (returned when <code>w</code> is not found), as shown above, since it avoids a second dictionary lookup.</p>
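<p>For comparison, here is a sketch of the same counting step using <code>collections.Counter</code>; the <code>replace("'s", '')</code> call is an illustrative choice for folding possessives together, not part of the original code:</p>

```python
from collections import Counter

def count_words(words, stopwords):
    counts = Counter()
    for w in words:
        # Drop trailing punctuation and possessive 's so iphone, iphone's
        # and "iphone," all count as one entity.
        w = w.strip('.,').replace("'s", '')
        if w and w not in stopwords:
            counts[w] += 1
    return counts.most_common()
```

<p><code>most_common()</code> already returns the words sorted by descending count, which replaces the manual sort/reverse at the end of the original script.</p>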
| 0 | 2016-08-04T00:01:44Z | [
"python",
"replace",
"strip"
] |
Python IndexError when Scanning Files | 38,755,425 | <p>I'm trying to scan/search files and it throws:</p>
<blockquote>
<p>IndexError: list index out of range on the line "list = self.scanFolder(path[t])"</p>
</blockquote>
<p>This is an Object and has some methods/functions that aren't shown here since they are not relevant to this code. </p>
<pre><code>def scanFolder(self, path):
try:
return os.listdir(path)
except WindowsError:
return "%access-denied%"
def separate(self, files):
#Give Param of Files with exact path
file = []
dir = []
for x in range(len(files)):
if os.path.isfile(files[x]):
file.append(files[x])
for x in range(len(files)):
if os.path.isdir(files[x]):
dir.append(files[x])
return file, dir
def startScan(self):
driveLetters = self.getDrives()
matches = []
paths = []
paths2 = []
path = "C:/"
list = self.scanFolder(path)
if list != "%access-denied%":
for i in range(len(list)):
list[i] = os.path.join(path, list[i])
files, dirs = self.separate(list)
paths2.extend(dirs)
for i in range(len(files)):
if files[i].lower() in self.keyword.lower():
matches.append(files[i])
paths = []
paths = paths2
paths2 = []
while paths != []:
for t in range(len(paths)):
print(paths)
print(t)
list = self.scanFolder(paths[t])
if list != "%access-denied%":
for i in range(len(list)):
list[i] = os.path.join(paths[t], list[i])
files, dirs = self.separate(list)
if dirs != []:
paths2.extend(dirs)
for i in range(len(files)):
if files[i].lower() in self.keyword.lower():
matches.append(files[t])
paths = paths2
paths2 = []
return matches
</code></pre>
| 0 | 2016-08-03T23:53:22Z | 38,755,525 | <p>You are trying to access an invalid position.</p>
<pre><code>for t in range(len(paths)):
print(paths)
print(t)
list = self.scanFolder(paths[t])
</code></pre>
<p>The valid list indexes are 0..len(paths)-1.</p>
<p>You should access the list elements in a more pythonic form:</p>
<pre><code>for path in paths:
list = self.scanFolder(path)
</code></pre>
<p>If you need to change a list element (and therefore need its index), you should use <a href="https://docs.python.org/3/library/functions.html" rel="nofollow">enumerate()</a>:</p>
<pre><code>for pos, path in enumerate(paths):
print ("paths[%s] = %s" %(pos, path))
</code></pre>
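<p>As a broader aside, the manual directory queue in <code>startScan</code> can be replaced entirely by <code>os.walk</code>, which does the recursion for you. A sketch (note it tests <code>keyword in name</code>, which is the opposite direction of the substring test in the original code):</p>

```python
import os

def find_matches(root, keyword):
    matches = []
    # os.walk visits every subdirectory, so no manual paths/paths2 queue is needed.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if keyword.lower() in name.lower():
                matches.append(os.path.join(dirpath, name))
    return matches
```

<p>By default <code>os.walk</code> silently skips directories it cannot read, which also removes the need for the <code>%access-denied%</code> sentinel.</p>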
| 1 | 2016-08-04T00:04:15Z | [
"python",
"list",
"indexoutofboundsexception",
"iterable"
] |
Error with multiple numpy.delete uses on array? | 38,755,506 | <p>So, I was trying to understand the <code>numpy.delete</code> function, and I came up with something weird. Here's the program: </p>
<pre><code>>>>import numpy as np
>>>a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[5]
5
>>> a=np.delete(a,[a[5]])
>>> a
array([0, 1, 2, 3, 4, 6, 7, 8, 9]) #so far so good
>>> a[6]
7
>>> a=np.delete(a,[a[6]])
>>> a
array([0, 1, 2, 3, 4, 6, 7, 9])
</code></pre>
<p>So... When I put <code>a=np.delete(a,[a[6]])</code>, it should be expected to remove the number <code>7</code> from the array, right? Why was the number <code>8</code> (the term <code>a[7]</code> of the array) removed instead of the expected <code>a[6]</code>?<br>
I also noticed that when I try to remove the <code>a[0]</code>(=0) from the array after the first delete, I just can't. It always removes one term ahead. Any Idea how do I remove it?</p>
| 0 | 2016-08-04T00:02:03Z | 38,755,544 | <p>The second argument should be the <strong>index</strong> of the element you want to delete, not the element itself.</p>
<pre><code>a=np.delete(a,6)
</code></pre>
<p>In the first case, it only worked because a[5] happened to equal 5, so the index and the value were the same.</p>
<p>When you have:</p>
<pre><code>a=np.delete(a,[a[6]])
</code></pre>
<p>You are deleting the element at index 7, which is the value 8, since a[6] = 7 there.</p>
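<p>If the intent is to delete by <em>value</em> rather than by index, one sketch is to look the indexes up first with <code>np.where</code>:</p>

```python
import numpy as np

a = np.arange(10)
# Remove the value 7 wherever it occurs, regardless of its position.
a = np.delete(a, np.where(a == 7)[0])
```

<p>This also sidesteps the shifting-index confusion after the first delete, since the position is recomputed from the current array.</p>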
| 1 | 2016-08-04T00:07:12Z | [
"python",
"arrays",
"python-2.7",
"numpy"
] |
Extract terms of an expression | 38,755,517 | <p>Is there a sympy function that extracts all terms of an equation of <code>Add</code> , <code>Mul</code> and <code>Div</code> expressions, as a list or set?</p>
<p>For example:</p>
<pre><code>(x**2 +(x-1)*ln(x)+(x+2)/(x-1))
</code></pre>
<p>I want to get :</p>
<pre><code>[x**2, (x-1)*ln(x), (x+2)/(x-1)]
</code></pre>
<p>same thing with Mul:</p>
<pre><code>(x-1)*ln(x) : [(x-1),ln(x)]
</code></pre>
<p>and Divison:</p>
<pre><code>(x+2)/(x-1) : [x+2,x-1]
</code></pre>
| 2 | 2016-08-04T00:03:40Z | 38,775,467 | <p>For a sum or product, you can use <code>expr.args</code>:</p>
<pre><code>In [1]: ((x**2 +(x-1)*ln(x)+(x+2)/(x-1))).args
Out[1]:
â 2 x + 2 â
âx , âââââ, (x - 1)â
log(x)â
â x - 1 â
In [2]: ((x-1)*ln(x)).args
Out[2]: (x - 1, log(x))
</code></pre>
<p>For a division, SymPy represents <code>x/y</code> as <code>x*y**-1</code> (there is no division class, only <code>Mul</code> and <code>Pow</code>). </p>
<pre><code>In [3]: ((x+2)/(x-1)).args
Out[3]:
â 1 â
ââââââ, x + 2â
âx - 1 â
</code></pre>
<p>However, you can use <code>fraction</code> to split it </p>
<pre><code>In [4]: fraction((x+2)/(x-1))
Out[4]: (x + 2, x - 1)
</code></pre>
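<p>If you want the same splitting behaviour even when the expression is a single term (where <code>.args</code> would return that term's own pieces instead), the <code>make_args</code> class methods are a convenient sketch:</p>

```python
from sympy import Add, Mul, log, symbols

x = symbols('x')
expr = x**2 + (x - 1)*log(x) + (x + 2)/(x - 1)

terms = Add.make_args(expr)              # top-level summands, always a tuple
factors = Mul.make_args((x - 1)*log(x))  # top-level factors, always a tuple
```

<p><code>Add.make_args(x**2)</code> returns <code>(x**2,)</code> rather than splitting the power apart.</p>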
| 1 | 2016-08-04T19:06:18Z | [
"python",
"sympy"
] |
Read a distributed Tab delimited CSV | 38,755,522 | <p>Inspired by this <a href="http://stackoverflow.com/questions/31898964/how-to-write-the-resulting-rdd-to-a-csv-file-in-spark-python">question</a>, I wrote some code to store an RDD (read from a Parquet file) with a schema of (photo_id, data) as tab-delimited pairs, with the data base64-encoded, like this:</p>
<pre><code>def do_pipeline(itr):
...
item_id = x.photo_id
def toTabCSVLine(data):
return '\t'.join(str(d) for d in data)
serialize_vec_b64pkl = lambda x: (x[0], base64.b64encode(cPickle.dumps(x[1])))
def format(data):
return toTabCSVLine(serialize_vec_b64pkl(data))
dataset = sqlContext.read.parquet('mydir')
lines = dataset.map(format)
lines.saveAsTextFile('outdir')
</code></pre>
<p>So now, the point of interest: <strong><em>How to read that dataset</em></strong> and print for example its deserialized data?</p>
<p>I am using Python 2.6.6.</p>
<hr>
<p>My attempt lies here, where for just verifying that everything can be done, I wrote this code:</p>
<pre><code>deserialize_vec_b64pkl = lambda x: (x[0], cPickle.loads(base64.b64decode(x[1])))
base64_dataset = sc.textFile('outdir')
collected_base64_dataset = base64_dataset.collect()
print(deserialize_vec_b64pkl(collected_base64_dataset[0].split('\t')))
</code></pre>
<p>which calls <a href="http://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=collect#pyspark.RDD.collect" rel="nofollow">collect()</a>, which for testing is OK, but in a real-world scenario would struggle...</p>
<hr>
<p>Edit:</p>
<p>When I tried zero323's suggestion:</p>
<pre><code>foo = (base64_dataset.map(str.split).map(deserialize_vec_b64pkl)).collect()
</code></pre>
<p>I got this error, which boils down to this:</p>
<pre><code>PythonRDD[2] at RDD at PythonRDD.scala:43
16/08/04 18:32:30 WARN TaskSetManager: Lost task 4.0 in stage 0.0 (TID 4, gsta31695.tan.ygrid.yahoo.com): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
UnpicklingError: NEWOBJ class argument has NULL tp_new
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/04 18:32:30 ERROR TaskSetManager: Task 12 in stage 0.0 failed 4 times; aborting job
16/08/04 18:32:31 WARN TaskSetManager: Lost task 14.3 in stage 0.0 (TID 38, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
16/08/04 18:32:31 WARN TaskSetManager: Lost task 13.3 in stage 0.0 (TID 39, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
16/08/04 18:32:31 WARN TaskSetManager: Lost task 16.3 in stage 0.0 (TID 42, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
/homes/gsamaras/code/read_and_print.py in <module>()
17 print(base64_dataset.map(str.split).map(deserialize_vec_b64pkl))
18
---> 19 foo = (base64_dataset.map(str.split).map(deserialize_vec_b64pkl)).collect()
20 print(foo)
/home/gs/spark/current/python/lib/pyspark.zip/pyspark/rdd.py in collect(self)
769 """
770 with SCCallSiteSync(self.context) as css:
--> 771 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
772 return list(_load_from_socket(port, self._jrdd_deserializer))
773
/home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
811 answer = self.gateway_client.send_command(command)
812 return_value = get_return_value(
--> 813 answer, self.gateway_client, self.target_id, self.name)
814
815 for temp_arg in temp_args:
/home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
306 raise Py4JJavaError(
307 "An error occurred while calling {0}{1}{2}.\n".
--> 308 format(target_id, ".", name), value)
309 else:
310 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
</code></pre>
| 3 | 2016-08-04T00:03:57Z | 38,775,296 | <p>Let's try a simple example. For convenience I'll be using handy <a href="https://github.com/pytoolz/toolz" rel="nofollow"><code>toolz</code></a> library but it is not really required here.</p>
<pre><code>import sys
import base64
if sys.version_info < (3, ):
import cPickle as pickle
else:
import pickle
from toolz.functoolz import compose
rdd = sc.parallelize([(1, {"foo": "bar"}), (2, {"bar": "foo"})])
</code></pre>
<p>Now, your code is not exactly portable right now. In Python 2 <code>base64.b64encode</code> returns <code>str</code>, while in Python 3 it returns <code>bytes</code>. Lets illustrate that:</p>
<ul>
<li><p><strong>Python 2</strong></p>
<pre><code>type(base64.b64encode(pickle.dumps({"foo": "bar"})))
## str
</code></pre></li>
<li><p><strong>Python 3</strong></p>
<pre><code>type(base64.b64encode(pickle.dumps({"foo": "bar"})))
## bytes
</code></pre></li>
</ul>
<p>So lets add decoding to the pipeline:</p>
<pre><code># Equivalent to
# def pickle_and_b64(x):
# return base64.b64encode(pickle.dumps(x)).decode("ascii")
pickle_and_b64 = compose(
lambda x: x.decode("ascii"),
base64.b64encode,
pickle.dumps
)
</code></pre>
<p>Please note that this doesn't assume any particular shape of the data. Because of that, we'll use <code>mapValues</code> to serialize only keys:</p>
<pre><code>serialized = rdd.mapValues(pickle_and_b64)
serialized.first()
## 1, u'KGRwMApTJ2ZvbycKcDEKUydiYXInCnAyCnMu')
</code></pre>
<p>Now we can follow it with simple format and save:</p>
<pre><code>from tempfile import mkdtemp
import os
outdir = os.path.join(mkdtemp(), "foo")
serialized.map(lambda x: "{0}\t{1}".format(*x)).saveAsTextFile(outdir)
</code></pre>
<p>To read the file we reverse the process:</p>
<pre><code># Equivalent to
# def b64_and_unpickle(x):
# return pickle.loads(base64.b64decode(x))
b64_and_unpickle = compose(
pickle.loads,
base64.b64decode
)
decoded = (sc.textFile(outdir)
.map(lambda x: x.split("\t")) # In Python 3 we could simply use str.split
.mapValues(b64_and_unpickle))
decoded.first()
## (u'1', {'foo': 'bar'})
</code></pre>
| 2 | 2016-08-04T18:56:16Z | [
"python",
"hadoop",
"apache-spark",
"io",
"distributed-computing"
] |
Subprocess doesn't show data from tcpdump in realtime. It shows with a pause about 10-20 seconds | 38,755,545 | <p>So I want to get all data from <code>tcpdump</code> and add some logic in the future.
I haven't had this kind of problem with subprocess pipes before.
I wrote the code below and ran <code>tcpdump</code> and <code>run.py</code> in parallel.</p>
<p>run.py:</p>
<pre><code>from subprocess import Popen, PIPE
# process = Popen(['/usr/bin/sudo', '/usr/sbin/tcpdump', '-i', 'wlan0'], bufsize=1, stdout=PIPE, stderr=PIPE)
process = Popen('sudo tcpdump -i wlan0', bufsize=1, universal_newlines=True, shell=True, stdout=PIPE, stderr=PIPE)
while True:
print(process.stdout.readline())
</code></pre>
<p>Output looks like this:
<a href="http://i.stack.imgur.com/FwAcj.png" rel="nofollow"><img src="http://i.stack.imgur.com/FwAcj.png" alt="enter image description here"></a>
I tried different values for <code>bufsize</code> and other options, but the behavior hasn't changed.
How can I get output as fast as tcpdump produces it, using <code>subprocess.Popen</code>?</p>
 | 1 | 2016-08-04T00:07:18Z | 38,755,655 | <p>Try: <code>sudo stdbuf -oL tcpdump -i wlan0</code></p>
<p>It works for me </p>
<pre><code>from subprocess import Popen, PIPE
# process = Popen(['/usr/bin/sudo', '/usr/sbin/tcpdump', '-i', 'wlan0'], bufsize=1, stdout=PIPE, stderr=PIPE)
process = Popen('sudo stdbuf -oL tcpdump -i wlan0', bufsize=1, universal_newlines=True, shell=True, stdout=PIPE, stderr=PIPE)
while True:
print(process.stdout.readline())
</code></pre>
| 2 | 2016-08-04T00:22:22Z | [
"python",
"pipe",
"subprocess",
"real-time"
] |
Subprocess doesn't show data from tcpdump in realtime. It shows with a pause about 10-20 seconds | 38,755,545 | <p>So I want to get all data from <code>tcpdump</code> and add some logic in the future.
I haven't had this kind of problem with subprocess pipes before.
I wrote the code below and ran <code>tcpdump</code> and <code>run.py</code> in parallel.</p>
<p>run.py:</p>
<pre><code>from subprocess import Popen, PIPE
# process = Popen(['/usr/bin/sudo', '/usr/sbin/tcpdump', '-i', 'wlan0'], bufsize=1, stdout=PIPE, stderr=PIPE)
process = Popen('sudo tcpdump -i wlan0', bufsize=1, universal_newlines=True, shell=True, stdout=PIPE, stderr=PIPE)
while True:
print(process.stdout.readline())
</code></pre>
<p>Output looks like this:
<a href="http://i.stack.imgur.com/FwAcj.png" rel="nofollow"><img src="http://i.stack.imgur.com/FwAcj.png" alt="enter image description here"></a>
I tried different values for <code>bufsize</code> and other options, but the behavior hasn't changed.
How can I get output as fast as tcpdump produces it, using <code>subprocess.Popen</code>?</p>
 | 1 | 2016-08-04T00:07:18Z | 38,755,782 | <p>It's stdio buffering in the tcpdump process.<br>
By default stdio sets the buffering mode to _IOFBF (fully buffered) on redirected streams.<br>
Luckily tcpdump has the <code>-l</code> option, which switches the mode to line-buffered:</p>
<pre><code>process = Popen('sudo tcpdump -l -i wlan0', bufsize=1, universal_newlines=True,
shell=True, stdout=PIPE, stderr=PIPE)
</code></pre>
<p>Andrea's solution also works but mine would work on windows too.</p>
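<p>Independently of the buffering fix, iterating the pipe directly is a cleaner pattern than an endless <code>readline()</code> loop, because it stops at EOF. A sketch using a trivial child process as a stand-in (swap in the tcpdump command line from above):</p>

```python
import sys
from subprocess import Popen, PIPE

# Stand-in child process; replace with ['sudo', 'tcpdump', '-l', '-i', 'wlan0'].
child = [sys.executable, '-c', "print('packet 1'); print('packet 2')"]

process = Popen(child, universal_newlines=True, bufsize=1, stdout=PIPE)
lines = []
for line in process.stdout:   # ends cleanly at EOF, unlike while True: readline()
    lines.append(line.rstrip('\n'))
process.wait()
```

<p>With tcpdump the loop runs until the capture is interrupted, printing each packet line as it arrives.</p>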
| 2 | 2016-08-04T00:41:42Z | [
"python",
"pipe",
"subprocess",
"real-time"
] |
Define an arbitrary Field on a django-rest-framework Serializer that doesn't exist on the Django Model | 38,755,622 | <p>I want to define an arbitrary <code>Field</code> on a <strong>django-rest-framework</strong> <code>Serializer</code> that doesn't exist on the Django <code>Model</code>.</p>
<p>My code looks like so:</p>
<pre><code>class Person(models.Model):
pass
class PersonSerializer(serializers.ModelSerializer):
foo = serializers.CharField()
class Meta:
model = Person
fields = ('id', 'foo')
class PersonViewSet(viewsets.ModelViewSet):
queryset = Person.objects.all()
serializer_class = PersonSerializer
</code></pre>
<p>This code fails with:</p>
<pre><code>AttributeError: Got AttributeError when attempting to get a value for field `foo` on serializer `PersonSerializer`.
The serializer field might be named incorrectly and not match any attribute or key on the `Person` instance.
Original exception text was: 'Person' object has no attribute 'foo'.
</code></pre>
<p>But... If I adjust the <code>Person</code> class to this:</p>
<pre><code>class Person(models.Model):
def foo(self):
pass
</code></pre>
<p>Then I don't get the error, and I can POST the data.</p>
<p>I don't like the idea of creating a dummy method on the <code>Person</code> class to get around this error. Is there a <strong>django-rest-framework</strong> way to alleviate this error that I missed?</p>
| 2 | 2016-08-04T00:16:42Z | 38,755,647 | <p>You could do it like this, using a <a href="http://www.django-rest-framework.org/api-guide/fields/#serializermethodfield" rel="nofollow"><code>SerializerMethodField</code></a>:</p>
<pre><code>class PersonSerializer(serializers.ModelSerializer):
foo = serializers.SerializerMethodField()
...
def get_foo(self, obj):
return 'the foo'
</code></pre>
<p>Then the logic for how to serialize the additional field lives on the serializer itself; it does, however, have access to the instance, if required, through <code>obj</code>.</p>
| 1 | 2016-08-04T00:20:54Z | [
"python",
"django",
"rest",
"django-rest-framework"
] |
how can i use scrapy to extract two level text? | 38,755,681 | <p>My code is not working properly.</p>
<p>The second for loop is not getting all text.</p>
<p>How can I do that in scrapy?</p>
<p>Thanks for any tips and let me know if I'm missing anything.</p>
<pre><code><dl>
<dt>Release Date:</dt>
<dd>Aug. 01, 2016<br>
</dd>
<dt>Runtime:</dt>
<dd itemprop="duration">200min.<br></dd>
<dt>Languages:</dt>
<dd>Japanese<br></dd>
<dt>Subtitles:</dt>
<dd>----<br></dd>
<dt>Content ID:</dt>
<dd>8dtkm00045<br></dd>
<dt>Actress(es):</dt>
<dd itemprop="actors">
<span itemscope="" itemtype="http://schema.org/Person">
<a itemprop="name">Shinobu Oshima</a>
</span>
<span itemscope="" itemtype="http://schema.org/Person">
<a itemprop="name">Yukie Mizukami</a>
</span>
</dd>
</code></pre>
<p>SPIDER:</p>
<pre><code>def parse_item(self, response):
for sel in response.xpath('//*[@id="contents"]/div[10]/section/section[1]/section[1]'):
item = EnMovie()
Content_ID = sel.xpath('normalize-space(div[2]/dl/dt[contains (.,"Content ID:")]/following-sibling::dd[1]/text())').extract()
item['Content_ID'] = Content_ID[0].encode('utf-8')
release_date = sel.xpath('normalize-space(div[2]/dl[1]/dt[contains (.,"Release Date:")]/following-sibling::dd[1]/text())').extract()
item['release_date'] = release_date[0].encode('utf-8')
running_time = sel.xpath('normalize-space(div[2]/dl[1]/dt[contains (.,"Runtime:")]/following-sibling::dd[1]/text())').extract()
item['running_time'] = running_time[0].encode('utf-8')
Series = sel.xpath('normalize-space(div[2]/dl[2]/dt[contains (.,"Series:")]/following-sibling::dd[1]/text())').extract()
item['Series'] = Series[0].encode('utf-8')
Studio = sel.xpath('normalize-space(div[2]/dl[2]/dt[contains (.,"Studio:")]/following-sibling::dd[1]/a/text())').extract()
item['Studio'] = Studio[0].encode('utf-8')
Director = sel.xpath('normalize-space(div[2]/dl[2]/dt[contains (.,"Director:")]/following-sibling::dd[1]/text())').extract()
item['Director'] = Director[0].encode('utf-8')
Label = sel.xpath('normalize-space(div[2]/dl[2]/dt[contains (.,"Label:")]/following-sibling::dd[1]/text())').extract()
item['Label'] = Label[0].encode('utf-8')
item['image_urls'] = sel.xpath('div[1]/img/@src').extract()
for actress in sel.xpath("//*[@itemprop='actors']//*[@itemprop='name']"):
actress_ = actress.xpath("text()").extract()
item['Actress'] = actress_[0].strip()
yield item
</code></pre>
<p>The spider partially works well (except for the second for loop). The second for loop yields only the last [itemprop="name"] value, and only that one is saved to the DB.</p>
<p>Sorry for my bad English and Thanks for any tips.</p>
| 0 | 2016-08-04T00:26:19Z | 38,757,218 | <p>Replace your second loop with this:</p>
<pre><code>actresses = sel.xpath("//*[@itemprop='actors']//*[@itemprop='name']/text()").extract()
item['Actress'] = [x.strip() for x in actresses]
yield item
</code></pre>
<p>It will give an item which has a list of actresses.</p>
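For what it's worth, the strip step in that list comprehension just removes surrounding whitespace from each extracted string; here is a quick stand-alone illustration (the names are taken from the sample HTML in the question):

```python
# extracted node text often carries stray whitespace/newlines; strip each entry
actresses = ['  Shinobu Oshima  ', '\nYukie Mizukami\n']
cleaned = [x.strip() for x in actresses]
print(cleaned)  # ['Shinobu Oshima', 'Yukie Mizukami']
```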
<p>BTW, please stop posting the same question <a href="http://stackoverflow.com/questions/38520312/why-does-my-scrapy-not-scrape-anything">again</a> and <a href="http://stackoverflow.com/questions/38565891/why-does-my-scrapy-not-scrape-anything">again</a> and again.</p>
| 0 | 2016-08-04T01:46:40Z | [
"python",
"scrapy"
] |
Pythonic way to chain python generator function to form a pipeline | 38,755,702 | <p>I'm doing a pipeline code refactoring using python.</p>
<p>Assuming we have a series of <strong>generator</strong> functions and we want to chain those to form a data processing pipeline. <br></p>
<p>Example:</p>
<pre><code>#!/usr/bin/python
import itertools
def foo1(g):
for i in g:
yield i + 1
def foo2(g):
for i in g:
yield 10 + i
def foo3(g):
for i in g:
yield 'foo3:' + str(i)
res = foo3(foo2(foo1(range(0, 5))))
for i in res:
print i
</code></pre>
<p>Output:</p>
<pre><code>foo3:11
foo3:12
foo3:13
foo3:14
foo3:15
</code></pre>
<p>I do not think <code>foo3(foo2(foo1(range(0, 5))))</code> is a pythonic way to achieve my pipeline goal. Especially when the number of stages in the pipeline is large.</p>
<p>I wish I could rewrite it like chain in jquery. Something similar to :</p>
<pre><code>range(0, 5).foo1().foo2().foo3()
</code></pre>
<p>Or maybe</p>
<pre><code>l = [range(0, 5), foo1, foo2, foo3]
res = runner.run(l)
</code></pre>
<p>But I'm new to generator topic and couldn't find a way to achieve this.</p>
<p>Any help will be welcome.</p>
| 3 | 2016-08-04T00:29:37Z | 38,755,760 | <p>I sometimes like to use a left fold (called <code>reduce</code> in Python) for this type of situation:</p>
<pre><code>from functools import reduce
def pipeline(*steps):
return reduce(lambda x, y : y(x), list(steps))
res = pipeline(range(0,5), foo1, foo2, foo3)
</code></pre>
<p>Or even better:</p>
<pre><code>def compose(*funcs):
return lambda x : reduce(lambda f, g : g(f), list(funcs), x)
p = compose(foo1, foo2, foo3)
res = p(range(0,5))
</code></pre>
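As a self-contained check of the <code>compose</code> variant against the generators from the question (assuming Python 3, where <code>reduce</code> lives in <code>functools</code>):

```python
from functools import reduce

# the three example stages from the question
def foo1(g):
    for i in g:
        yield i + 1

def foo2(g):
    for i in g:
        yield 10 + i

def foo3(g):
    for i in g:
        yield 'foo3:' + str(i)

def compose(*funcs):
    # left fold: feed the iterable through each stage in order
    return lambda x: reduce(lambda f, g: g(f), funcs, x)

p = compose(foo1, foo2, foo3)
result = list(p(range(0, 5)))
print(result)  # ['foo3:11', 'foo3:12', 'foo3:13', 'foo3:14', 'foo3:15']
```

Because every stage is a generator, nothing is computed until the final <code>list()</code> consumes the pipeline.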
| 6 | 2016-08-04T00:39:27Z | [
"python",
"generator"
] |
Pythonic way to chain python generator function to form a pipeline | 38,755,702 | <p>I'm doing a pipeline code refactoring using python.</p>
<p>Assuming we have a series of <strong>generator</strong> functions and we want to chain those to form a data processing pipeline. <br></p>
<p>Example:</p>
<pre><code>#!/usr/bin/python
import itertools
def foo1(g):
for i in g:
yield i + 1
def foo2(g):
for i in g:
yield 10 + i
def foo3(g):
for i in g:
yield 'foo3:' + str(i)
res = foo3(foo2(foo1(range(0, 5))))
for i in res:
print i
</code></pre>
<p>Output:</p>
<pre><code>foo3:11
foo3:12
foo3:13
foo3:14
foo3:15
</code></pre>
<p>I do not think <code>foo3(foo2(foo1(range(0, 5))))</code> is a pythonic way to achieve my pipeline goal. Especially when the number of stages in the pipeline is large.</p>
<p>I wish I could rewrite it like chain in jquery. Something similar to :</p>
<pre><code>range(0, 5).foo1().foo2().foo3()
</code></pre>
<p>Or maybe</p>
<pre><code>l = [range(0, 5), foo1, foo2, foo3]
res = runner.run(l)
</code></pre>
<p>But I'm new to generator topic and couldn't find a way to achieve this.</p>
<p>Any help will be welcome.</p>
| 3 | 2016-08-04T00:29:37Z | 38,755,838 | <p>Following up on your runner.run approach, let's define this utility function:</p>
<pre><code>def recur(ops):
return ops[0](recur(ops[1:])) if len(ops)>1 else ops[0]
</code></pre>
<p>As an example:</p>
<pre><code>>>> ops = foo3, foo2, foo1, range(0, 5)
>>> list( recur(ops) )
['foo3:11', 'foo3:12', 'foo3:13', 'foo3:14', 'foo3:15']
</code></pre>
<h3>Alternative: backward ordering</h3>
<pre><code>def backw(ops):
return ops[-1](backw(ops[:-1])) if len(ops)>1 else ops[0]
</code></pre>
<p>For example: </p>
<pre><code>>>> list( backw([range(0, 5), foo1, foo2, foo3]) )
['foo3:11', 'foo3:12', 'foo3:13', 'foo3:14', 'foo3:15']
</code></pre>
| 1 | 2016-08-04T00:49:22Z | [
"python",
"generator"
] |
Pandas Pivot Table Keeps Returning False Instead of 0 | 38,755,711 | <p>I am making a pivot table using pandas. If I set <code>aggfunc=sum</code> or <code>aggfunc=count</code> on a column of boolean values, it works fine provided there's at least one <code>True</code> in the column. E.g. <code>[True, False, True, True, False]</code> would return 3. However, if all the values are <code>False</code>, then the pivot table outputs <code>False</code> instead of 0. No matter what, I can't get around it. The only way I can circumvent it is to define a function as follows:</p>
<pre><code>def f(x):
mySum = sum(x)
return "0" if mySum == 0 else mySum
</code></pre>
<p>and then set <code>aggfunc=lambda x: f(x)</code>. While that works visually, it still disturbs me that outputting a <code>string</code> is the only way I can get the 0 to stick. If I cast it as an <code>int</code>, or try to return 0.0, or do anything that's numeric at all, <code>False</code> is always the result.</p>
<p>Why is this, and how do I get the pivot table to actually give me 0 in this case (by only modifying the <code>aggfunc</code>, not the dataframe itself)?</p>
| 2 | 2016-08-04T00:30:09Z | 38,755,781 | <pre><code>df = pd.DataFrame({'count': [False] * 12, 'index': [0, 1] * 6, 'cols': ['a', 'b', 'c'] * 4})
print(df)
</code></pre>
<p>outputs</p>
<pre><code> cols count index
0 a False 0
1 b False 1
2 c False 0
3 a False 1
4 b False 0
5 c False 1
6 a False 0
7 b False 1
8 c False 0
9 a False 1
10 b False 0
11 c False 1
</code></pre>
<p>You can use <code>astype</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow">docs</a>) to cast to <code>int</code> before pivoting.</p>
<pre><code>res = df.pivot_table(values='count', aggfunc=np.sum, columns='cols', index='index').astype(int)
print(res)
</code></pre>
<p>outputs</p>
<pre><code>cols a b c
index
0 0 0 0
1 0 0 0
</code></pre>
| 3 | 2016-08-04T00:41:25Z | [
"python",
"pandas",
"boolean",
"aggregate-functions",
"pivot-table"
] |
feed_dict in TensorFlow throwing unexpected error (summary operations) | 38,755,719 | <p>My training script, for training a TensorFlow model, very slightly modified from the tutorials online:</p>
<pre><code>def train(data_set_dir, train_set_dir):
data = data_input.read_data_sets(data_set_dir, train_set_dir)
with tf.Graph().as_default():
global_step = tf.Variable(0, trainable=False)
# defines placeholders (type=tf.float32)
images_placeholder, labels_placeholder = placeholder_inputs(batch_size, image_size, channels)
logits = model.inference(images_placeholder, num_classes)
loss = loss(logits, labels_placeholder, num_classes)
train_op = training(loss, global_step, batch_size)
saver = tf.train.Saver(tf.all_variables())
summary_op = tf.merge_all_summaries()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
for step in range(max_steps):
start_time = time.time()
feed_dict = fill_feed_dict(data, images_placeholder, labels_placeholder, batch_size)
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
# ... continue to print loss_value, run summaries and save checkpoints
</code></pre>
<p>The placeholder_inputs function called above is:</p>
<pre><code>def placeholder_inputs(batch_size, img_size, channels):
images_pl = tf.placeholder(tf.float32,
shape=(batch_size, img_size, img_size, channels), name='images')
labels_pl = tf.placeholder(tf.float32,
shape=(batch_size, img_size, img_size), name='labels')
return images_pl, labels_pl
</code></pre>
<p>To clarify, the data I'm dealing with is for per-pixel classification in a segmentation problem. As seen above, this is a binary classification problem.</p>
<p>And the feed_dict function is:</p>
<pre><code>def fill_feed_dict(data_set, images_pl, labels_pl, batch_size):
images_feed, labels_feed = data_set.next_batch(batch_size)
feed_dict = {images_pl: images_feed, labels_pl: labels_feed}
return feed_dict
</code></pre>
<p>Where I'm stuck at:</p>
<pre><code>tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'labels' with dtype float and shape [1,750,750]
[[Node: labels = Placeholder[dtype=DT_FLOAT, shape=[1,750,750], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
</code></pre>
<p>The traceback reveals it to have been caused by the 'labels' tensor from my <code>placeholder_inputs</code> function. Moreover, this error keeps shifting between the two placeholders, as far as I can see - randomly. One time, it is the 'labels' [<code>labels_pl</code>] tensor, another time, it is my 'images'[<code>images_pl</code>] tensor. </p>
<p>Error in detail:</p>
<pre><code>File ".../script.py", line 32, in placeholder_inputs
shape=(batch_size, img_size, img_size), name='labels')
File ".../tensorflow/python/ops/array_ops.py", line 895, in placeholder
name=name)
File ".../tensorflow/python/ops/gen_array_ops.py", line 1238, in _placeholder
name=name)
File ".../tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File ".../tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
</code></pre>
<p>What I've tried/checked:</p>
<ol>
<li><p>Placing the feed_dict out of the for loop, as well, to no avail. </p></li>
<li><p>Verified that there is enough data in the training data directory to correspond to the batch_size requirements. </p></li>
<li><p>Multiple variations on specifying the dtype of the placeholders - assuming 'float' was key in the stacktrace.</p></li>
<li><p>Cross-checked data shapes. They are exactly as specified in the placeholders.</p></li>
</ol>
<p>Perhaps this is a much simpler problem than I think it to be. Maybe even a minor typo I just cannot see here. Suggestions? I believe I've exhausted all options. Looking for someone to shed new light on the problem.</p>
<p>I've referred to <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md" rel="nofollow">this</a> description of the error.</p>
<p><strong>Update:</strong></p>
<p>Did a <code>print feed_dict</code> before the <code>session.run</code> (as suggested in a comment here) and noticed that the expected values are being fed into the placeholder: </p>
<pre><code>{<tf.Tensor 'images:0' shape=(1, 750, 750, 3) dtype=float32>:
array([[[[-0.1556225 , -0.13209309, -0.15954407],
[-0.15954407, -0.12032838, -0.13601466],
.....
[-0.03405387, 0.04829907, 0.09535789]]]], dtype=float32),
<tf.Tensor 'labels:0' shape=(1, 750, 750) dtype=float32>:
array([[[ 0., 0., 0., ..., 0., 0., 0.],
.....
[ 0., 0., 0., ..., 0., 0., 0.]]], dtype=float32)}
</code></pre>
<p>Also something I didn't mention earlier:
The loop runs only for the first iteration. So, I get an output for the first value of <code>step = 0</code> and then it promptly exits, after printing the <code>loss_value</code> statement I specify for <code>step=0</code>.</p>
<p><strong>Update 2:</strong></p>
<p>I figured where the problem was. It was with printing the <code>summary_op</code>. But why this is so is beyond me. This is how I print it in the for loop:</p>
<pre><code>if step % 100 == 0:
summary_str = sess.run(summary_op)
summary_writer.add_summary(summary_str, step)
</code></pre>
<p>Commenting this block out does the trick. Thoughts on why this is going wrong?</p>
<p><strong>Update 3: Solved</strong></p>
<p>Answer below. What I notice though, is that the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/cifar10/cifar10_train.py" rel="nofollow">TensorFlow CIFAR-10 example</a> does a similar <code>sess.run</code>, without an explicit mention of <code>feed_dict</code> and that runs fine. How exactly does it work then? </p>
| 0 | 2016-08-04T00:31:44Z | 38,762,088 | <p>Obvious error.
I didn't specify a <code>feed_dict</code> for the session run on <code>summary_op</code>.</p>
<pre><code>if step % 100 == 0:
summary_str = sess.run(summary_op, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
</code></pre>
<p>Explicitly mentioning the <code>feed_dict</code> call in the session run did the trick. But why? The TensorFlow CIFAR-10 example does a similar <code>sess.run</code>, without this explicit mention of <code>feed_dict</code> and that runs fine. </p>
| 0 | 2016-08-04T08:21:09Z | [
"python",
"tensorflow"
] |
Python logs between multiple modules only work in functions called by original file? | 38,755,732 | <p>I'm trying to add logging between a couple different modules. Why, when I place a log somewhere that isn't referenced in my original file, why does all logging break? I'm importing <code>logging</code> in both files. I import the <code>other_file</code> in <code>file1</code>.</p>
<p>Here is an example of what I'm running into with comments in <code>other_file</code>:</p>
<h1>file1</h1>
<pre><code>import other_file
def main():
logging.debug("Start")
other_file.use_bar()
logging.debug("End")
if __name__ == '__main__':
import logging
logging.basicConfig(filename='logfile.log',
level=logging.DEBUG,
format='%(asctime)s %(message)s',
datefmt='%m/%d/%Y %I:%M:%S %p')
main()
</code></pre>
<h1>other_file</h1>
<pre><code>import logging
def get_something():
something = 'foo'
# PUTTING A LOG HERE BREAKS ALL LOGS
return something
bar = get_something()
# SO DOES PUTTING A LOG HERE
def use_bar():
print(bar)
logging.info("log bar") # THIS LOG WORKS FINE
</code></pre>
| 1 | 2016-08-04T00:33:45Z | 38,755,901 | <h3>What's the problem?</h3>
<p>The function <code>get_something</code> is called before logging is configured, so its output won't appear in your <code>logfile</code>.
This happens when you import the module <code>other_file</code>: Python automatically executes the function <code>get_something</code> and assigns the result to the variable <code>bar</code>. <strong>Take care with this</strong>.</p>
<pre><code>bar = get_something()
# SO DOES PUTTING A LOG HERE
</code></pre>
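To see when that evaluation happens, here is a minimal stand-alone simulation of the import-time behaviour (no logging involved, just to show when the function body runs):

```python
# module-level assignments execute immediately (at import time for a module),
# so any log call inside get_something would fire before basicConfig is run
calls = []

def get_something():
    calls.append('get_something ran')
    return 'foo'

bar = get_something()  # executes right away, before any later setup code
print(calls)  # ['get_something ran']
```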
<h3>Suggested solution</h3>
<p>To solve this problem, just assign the function <code>get_something</code> to <code>bar</code> without calling it. Then, call <code>bar()</code> inside the function <code>use_bar</code>.</p>
<pre><code>bar = get_something
# SO DOES PUTTING A LOG HERE
def use_bar():
print(bar())
logging.info("log bar") # THIS LOG WORKS FINE
</code></pre>
<h3>Complete Solution</h3>
<p><strong>main.py</strong></p>
<pre><code>import other_file
import logging  # It is recommended to import modules at the top
def main():
logging.debug("Start")
other_file.use_bar()
logging.debug("End")
if __name__ == '__main__':
logging.basicConfig(filename='logfile.log',
level=logging.DEBUG,
format='%(asctime)s %(message)s',
datefmt='%m/%d/%Y %I:%M:%S %p')
main()
</code></pre>
<p><strong>other_file.py</strong></p>
<pre><code>import logging
def get_something():
something = 'foo'
# PUTTING A LOG HERE BREAKS ALL LOGS
return something
bar = get_something
# SO DOES PUTTING A LOG HERE
def use_bar():
print(bar())
logging.info("log bar") # THIS LOG WORKS FINE
</code></pre>
| 1 | 2016-08-04T00:57:30Z | [
"python",
"logging",
"module"
] |
Python logs between multiple modules only work in functions called by original file? | 38,755,732 | <p>I'm trying to add logging between a couple different modules. Why, when I place a log somewhere that isn't referenced in my original file, why does all logging break? I'm importing <code>logging</code> in both files. I import the <code>other_file</code> in <code>file1</code>.</p>
<p>Here is an example of what I'm running into with comments in <code>other_file</code>:</p>
<h1>file1</h1>
<pre><code>import other_file
def main():
logging.debug("Start")
other_file.use_bar()
logging.debug("End")
if __name__ == '__main__':
import logging
logging.basicConfig(filename='logfile.log',
level=logging.DEBUG,
format='%(asctime)s %(message)s',
datefmt='%m/%d/%Y %I:%M:%S %p')
main()
</code></pre>
<h1>other_file</h1>
<pre><code>import logging
def get_something():
something = 'foo'
# PUTTING A LOG HERE BREAKS ALL LOGS
return something
bar = get_something()
# SO DOES PUTTING A LOG HERE
def use_bar():
print(bar)
logging.info("log bar") # THIS LOG WORKS FINE
</code></pre>
| 1 | 2016-08-04T00:33:45Z | 38,757,080 | <p>Your problem is related to how module import works in python.</p>
<p>When the module other_file is imported, Python automatically evaluates the line <code>bar = get_something()</code>: it executes the function and stores the result in <code>bar</code>, and all of this happens before logging is configured. That's why putting a log call inside <code>get_something</code>, or on the line after <code>bar = get_something()</code>, won't work — it executes before logging begins.</p>
<p>(I imagine you need the value of <code>bar</code> to be global; if not, you can declare it inside the function <code>use_bar()</code>, and in that case the problem is solved.)</p>
<p>So the solution here is to replace the line <code>bar = get_something()</code> with <code>bar = get_something</code>. Now <code>bar</code> is no longer a plain value; instead, Python treats it as a reference to the function <code>get_something</code>, which you can call like <code>bar()</code>.</p>
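A tiny sketch of the difference between calling the function and keeping a reference to it:

```python
def get_something():
    return 'foo'

bar_value = get_something()  # runs the function now; bar_value is 'foo'
bar_ref = get_something      # only a reference; nothing runs yet

print(bar_value)   # foo
print(bar_ref())   # foo -- the call happens here, later
```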
<p>So if you want to preserve the same structure of your code, you have to change the <code>other_file</code> code as below :</p>
<h1>other_file</h1>
<pre><code>import logging
def get_something():
something = 'foo'
# PUTTING A LOG HERE BREAKS ALL LOGS
return something
bar = get_something
def use_bar():
print(bar())
logging.info("log bar")
</code></pre>
<p>But I suggest that you use this code instead which is (let's say) more pythonic :</p>
<h1>file1</h1>
<pre><code>import logging
import other_file
def logger_config():
logging.basicConfig(filename='logfile.log',
level=logging.DEBUG,
format='%(asctime)s %(message)s',
datefmt='%m/%d/%Y %I:%M:%S %p')
return logging.getLogger(__name__)
def main():
# Get the logger
logger = logging.getLogger(__name__)
logger.debug("Start")
other_file.use_bar()
logger.debug("End")
if __name__ == '__main__':
main()
</code></pre>
<h1>other_file</h1>
<pre><code>import logging
# Get the logger from the main module (file1)
logger = logging.getLogger(__name__)
def get_something():
something = 'foo'
# Your log here
logger.info('Test log')
return something
bar = get_something
def use_bar():
print(bar())
logger.info("log bar")
</code></pre>
| 1 | 2016-08-04T01:26:50Z | [
"python",
"logging",
"module"
] |
Run a flask python app on apache server | 38,755,885 | <p>I am running an Amazon EC2 micro instance and I want to run a python app from it using Flask.</p>
<p>Here is my <code>app.py</code> file where I'm doing a simple file upload (it works fine on <code>localhost:5000</code>):</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello from Flask!'
if __name__ == '__main__':
app.run()
</code></pre>
<p>Here is my file named <code>adapter.wsgi</code> to connect it to apache:</p>
<pre><code>import sys
sys.path.insert(0, '/var/www/html/lumos')
from app import app as application
</code></pre>
<p>Finally, in my <code>httpd.conf</code> file, I have done the following:</p>
<pre><code><VirtualHost *>
ServerName http://lumos.website.me
DocumentRoot /var/www/html/lumos
WSGIDaemonProcess lumos threads=5
WSGIScriptAlias / /var/www/html/lumos/adapter.wsgi
<Directory "/var/www/html/lumos">
WSGIProcessGroup lumos
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
</code></pre>
<p>Then when I restart the apache server and go to <a href="http://lumos.website.me/" rel="nofollow">http://lumos.website.me/</a>, all I get is a 503:</p>
<pre><code>Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Apache/2.2.31 (Amazon) Server at lumos.website.me Port 80
</code></pre>
<p>Any ideas on how I can get the flask app to work on the apache server?</p>
<p>Note: My server is running. </p>
<p>Update:</p>
<p>Here is my error log file</p>
<pre><code>[Thu Aug 04 01:34:09 2016] [notice] caught SIGTERM, shutting down
[Thu Aug 04 01:34:09 2016] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Thu Aug 04 01:34:09 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 04 01:34:09 2016] [notice] Digest: done
[Thu Aug 04 01:34:10 2016] [notice] Apache/2.2.31 (Unix) DAV/2 PHP/5.3.29 mod_wsgi/3.2 Python/2.6.9 configured -- resuming normal operations
[Thu Aug 04 01:34:14 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30315): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts.
[Thu Aug 04 01:34:14 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30316): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts., referer: http://lumos.website.me/
[Thu Aug 04 01:34:15 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30317): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts.
</code></pre>
| 0 | 2016-08-04T00:55:14Z | 38,758,498 | <p>Okay, so looking at the error logs helped me figure out my answer. </p>
<p>Since my error was:</p>
<pre><code>(13)Permission denied: mod_wsgi (pid=30315): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts.
</code></pre>
<p>I added this <code>WSGISocketPrefix /var/run/wsgi</code> in my httpd.conf file and restarted apache. </p>
<p>For other people who may have the same problem as me in the future, here is a more detailed explanation of my error:</p>
<p><a href="https://code.google.com/archive/p/modwsgi/wikis/ConfigurationIssues.wiki#Location_Of_UNIX_Sockets" rel="nofollow">https://code.google.com/archive/p/modwsgi/wikis/ConfigurationIssues.wiki#Location_Of_UNIX_Sockets</a></p>
| 0 | 2016-08-04T04:28:48Z | [
"python",
"apache",
"amazon-web-services",
"amazon-ec2",
"flask"
] |
Run a flask python app on apache server | 38,755,885 | <p>I am running an Amazon EC2 micro instance and I want to run a python app from it using Flask.</p>
<p>Here is my <code>app.py</code> file where I'm doing a simple file upload (it works fine on <code>localhost:5000</code>):</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello from Flask!'
if __name__ == '__main__':
app.run()
</code></pre>
<p>Here is my file named <code>adapter.wsgi</code> to connect it to apache:</p>
<pre><code>import sys
sys.path.insert(0, '/var/www/html/lumos')
from app import app as application
</code></pre>
<p>Finally, in my <code>httpd.conf</code> file, I have done the following:</p>
<pre><code><VirtualHost *>
ServerName http://lumos.website.me
DocumentRoot /var/www/html/lumos
WSGIDaemonProcess lumos threads=5
WSGIScriptAlias / /var/www/html/lumos/adapter.wsgi
<Directory "/var/www/html/lumos">
WSGIProcessGroup lumos
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
</code></pre>
<p>Then when I restart the apache server and go to <a href="http://lumos.website.me/" rel="nofollow">http://lumos.website.me/</a>, all I get is a 503:</p>
<pre><code>Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Apache/2.2.31 (Amazon) Server at lumos.website.me Port 80
</code></pre>
<p>Any ideas on how I can get the flask app to work on the apache server?</p>
<p>Note: My server is running. </p>
<p>Update:</p>
<p>Here is my error log file</p>
<pre><code>[Thu Aug 04 01:34:09 2016] [notice] caught SIGTERM, shutting down
[Thu Aug 04 01:34:09 2016] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Thu Aug 04 01:34:09 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 04 01:34:09 2016] [notice] Digest: done
[Thu Aug 04 01:34:10 2016] [notice] Apache/2.2.31 (Unix) DAV/2 PHP/5.3.29 mod_wsgi/3.2 Python/2.6.9 configured -- resuming normal operations
[Thu Aug 04 01:34:14 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30315): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts.
[Thu Aug 04 01:34:14 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30316): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts., referer: http://lumos.website.me/
[Thu Aug 04 01:34:15 2016] [error] [client 72.219.147.5] (13)Permission denied: mod_wsgi (pid=30317): Unable to connect to WSGI daemon process 'lumos' on '/etc/httpd/logs/wsgi.30311.0.1.sock' after multiple attempts.
</code></pre>
| 0 | 2016-08-04T00:55:14Z | 38,760,224 | <blockquote>
<p>Please make sure in advance that any <code>app.run()</code> calls you might have in your application file are inside an <code>if __name__ == '__main__':</code> block or moved to a separate file. Just make sure it's not called, because this will always start a local WSGI server, which we do not want if we deploy that application to mod_wsgi.</p>
</blockquote>
<p>The above is an extract from <a href="http://flask.pocoo.org" rel="nofollow">http://flask.pocoo.org</a>; it seems to be what is happening in your case.</p>
| 1 | 2016-08-04T06:44:10Z | [
"python",
"apache",
"amazon-web-services",
"amazon-ec2",
"flask"
] |
Send a message multiple times all at once in Python | 38,755,902 | <p>In python 3.5 is it possible to send multiple messages with <code>sock.send(msg) * 10</code>. In this case will it send 10 times or just one time and have some sort of error in the background?</p>
| 0 | 2016-08-04T00:57:34Z | 38,757,032 | <p>If you want to call <code>sock.send(msg)</code> ten times, you can use a <code>for</code> loop as such:</p>
<pre><code>for x in range(10):
sock.send(msg)
</code></pre>
<p><em>Reference: <a href="https://wiki.python.org/moin/ForLoop" rel="nofollow">https://wiki.python.org/moin/ForLoop</a></em></p>
| 0 | 2016-08-04T01:18:35Z | [
"python",
"sockets"
] |
Send a message multiple times all at once in Python | 38,755,902 | <p>In python 3.5 is it possible to send multiple messages with <code>sock.send(msg) * 10</code>. In this case will it send 10 times or just one time and have some sort of error in the background?</p>
| 0 | 2016-08-04T00:57:34Z | 38,757,223 | <p>if list-creation side-effect is not a problem, then you can do it as a list comprehension as well:</p>
<pre><code>[sock.send(msg) for _ in range(10)]
</code></pre>
<p><code>_</code> is a sort of "throwaway" variable, you can use any other identifier name there.</p>
| 0 | 2016-08-04T01:47:33Z | [
"python",
"sockets"
] |
Python increment loop to add filter parameters to dataframe | 38,756,950 | <p>I created a dictionary with a set of functions. Then, I created a while loop that attempts to use those functions. But part of the loop doesn't call the functions the way I want it to. Here's the code:</p>
<pre><code>while bool(str(w).endswith(' 2')) != True:
a = re.search('[0-9]{1,2}$', str(w))
w = w & int(a.group())-1
result = df[f[w]]
</code></pre>
<p>The third line, <code>w = w & int(a.group())-1</code>, doesn't function the way I want when I test it outside of this loop. I try setting <code>w = 34</code>, and then testing what results when I do <code>34 & int(a.group())-1</code>. Instead of giving me <code>34 & 33</code>, I get <code>32</code>. Is there any way to create an increment that adds parameters to the result, instead of creating some integer that doesn't even seem to be derived logically? I would like it to start with 34, and add an integer that is one less for every go around the loop (<code>34</code>, <code>34 & 33</code>, <code>34 & 33 & 32</code>, etc.). Thanks in advance!</p>
| -1 | 2016-08-04T01:06:09Z | 38,756,998 | <p><code>34 & 33</code> <em>is</em> <code>32</code>. <code>&</code> is the bitwise and operator.</p>
<p>Saying you want <code>"34 & 33"</code> suggests that you want a <em>string</em> as a result, but that seems to conflict w/ the use of <code>str(w)</code> throughout your code. Or maybe you are just unclear about what <code>&</code> does, and really want some different operation.</p>
| 3 | 2016-08-04T01:13:10Z | [
"python",
"loops",
"dataframe",
"while-loop"
] |
Python increment loop to add filter parameters to dataframe | 38,756,950 | <p>I created a dictionary with a set of functions. Then, I created a while loop that attempts to use those functions. But part of the loop doesn't call the functions the way I want it to. Here's the code:</p>
<pre><code>while bool(str(w).endswith(' 2')) != True:
a = re.search('[0-9]{1,2}$', str(w))
w = w & int(a.group())-1
result = df[f[w]]
</code></pre>
<p>The third line, <code>w = w & int(a.group())-1</code>, doesn't function the way I want when I test it outside of this loop. I try setting <code>w = 34</code>, and then testing what results when I do <code>34 & int(a.group())-1</code>. Instead of giving me <code>34 & 33</code>, I get <code>32</code>. Is there any way to create an increment that adds parameters to the result, instead of creating some integer that doesn't even seem to be derived logically? I would like it to start with 34, and add an integer that is one less for every go around the loop (<code>34</code>, <code>34 & 33</code>, <code>34 & 33 & 32</code>, etc.). Thanks in advance!</p>
| -1 | 2016-08-04T01:06:09Z | 38,793,923 | <p>Okay, I figured it out. I needed q = f[w] and then q = q & f[w-n], where f[w] defines parameters to filter a dataframe (df) based on a column, and f[w-n] defines a different parameter for filtering based on the next adjacent column. So, the progression should be f[w], f[w] & f[w-n], f[w] & f[w-n] & f[w-n], etc., instead of 34, 34 & 33, 34 & 33 & 32, etc. while n <= w.</p>
<p>So that would look like this:</p>
<pre><code>w = 34
n = 1
q = f[w]
while n <= w:
q = q & f[w-n]
result = df[q]
n = n+1
</code></pre>
<p>And, there would be conditions later on to decide whether or not enough parameters were used. In my usage, I'm not looking for a result before the while loop after q is initially defined because that result would have already been found in a different portion of the program. (In case this helps anyone else.)</p>
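The accumulation pattern itself can be sketched without pandas (the filters here are plain boolean lists standing in for the <code>f[w]</code> masks):

```python
# mimic q = q & f[w-n]: AND successive boolean filters together
filters = [
    [True, True, False, True],
    [True, False, False, True],
    [True, True, True, True],
]

q = filters[0]
for mask in filters[1:]:
    q = [a and b for a, b in zip(q, mask)]

print(q)  # [True, False, False, True]
```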
<p><a href="http://stackoverflow.com/users/535275/scott-hunter">Scott Hunter</a> thanks for the tip on the & operator.</p>
| 0 | 2016-08-05T16:29:20Z | [
"python",
"loops",
"dataframe",
"while-loop"
] |
TwythonError: Twitter API returned a 404 for string 'usernames' | 38,756,952 | <p>I am refering to this <a href="http://social-metrics.org/twitter-user-data/" rel="nofollow">article</a>.The article used <code>twython</code> to download user profile information with reference to <code>UserID</code> (int) and I am trying to use <code>username</code>(str). </p>
<p>The exact same code works when <code>userID</code> is given and when the same user's <code>username</code> is given this gives me an error. </p>
<p>What should I change in the program to read and process the usernames? I have checked several questions here and on other forums; the <code>404</code> is shown when the user is no longer an available Twitter user. </p>
<p>But to test that theory I did use same userid and username of which I can return the user profile fields but when the same user's username or screenname is used I am getting error. </p>
<p>Error: </p>
<pre><code>Traceback (most recent call last):
File "user_prof_twython.py", line 25, in <module>
users = t.lookup_user(user_id = ids)
File "/usr/local/lib/python2.7/dist-packages/twython/endpoints.py", line 522, in lookup_user
return self.get('users/lookup', params=params)
File "/usr/local/lib/python2.7/dist-packages/twython/api.py", line 264, in get
return self.request(endpoint, params=params, version=version)
File "/usr/local/lib/python2.7/dist-packages/twython/api.py", line 258, in request
api_call=url)
File "/usr/local/lib/python2.7/dist-packages/twython/api.py", line 194, in _request
retry_after=response.headers.get('X-Rate-Limit-Reset'))
twython.exceptions.TwythonError: Twitter API returned a 404 (Not Found), No user matches for specified terms.
</code></pre>
| 0 | 2016-08-04T01:06:33Z | 38,773,813 | <p>This works for me. Is this what you want?</p>
<p>Please note, screen_name can be a list. e.g.: ["aaaa","bbb","ccc","ddd"]</p>
<pre><code># Create a Twython instance with your application key and access token
twitter = Twython(APP_KEY, APP_SECRET, oauth_version=1)
output = twitter.lookup_user(screen_name="BarackObama")
for user in output:
print user["id_str"]
print user['created_at']
print user['followers_count']
</code></pre>
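<p>One caveat: Twitter's <code>users/lookup</code> endpoint accepts at most 100 users per request, so for a long list of names you would batch the calls. A sketch of the batching (the <code>chunks</code> helper is mine, not part of Twython):</p>

```python
def chunks(seq, size=100):
    """Yield successive slices of at most `size` items from seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

names = ["user%d" % n for n in range(250)]
batches = list(chunks(names))
print(len(batches))  # 3 batches: 100 + 100 + 50
# for batch in batches:
#     output = twitter.lookup_user(screen_name=",".join(batch))
```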
| 1 | 2016-08-04T17:27:08Z | [
"python",
"twitter",
"twython"
] |
What's the equivalent 'nth_element' function in Python? | 38,757,089 | <p>I want to implement a Vantage Point Tree in Python, but it uses std::nth_element in C++.</p>
<p>So I want to find the equivalent 'nth_element' function in Python or in numpy.</p>
<p>Note that nth_element only partially orders the array, and it's O(N).</p>
<pre><code>int the_array[10] = {4,5,7,3,6,0,1,2,9,8};
std::vector<int> the_v(the_array,the_array+10);
std::nth_element (the_v.begin()+0, the_v.begin()+5, the_v.begin()+10);
</code></pre>
<p>And now the vector could be:</p>
<pre><code>3,0,2,1,4,5,6,7,9,8
</code></pre>
<p>And I not only want to get the nth element, but I also want to get the rearranged two parts of the list, [3,0,2,1,4] and [6,7,9,8].</p>
<p>Moreover, nth_element can accept a function that compares two elements. For example, in the code below, the vector is a vector of DataPoint, and the DistanceComparator function compares the two points' distances to the_v.begin():</p>
<pre><code>vector<DataPoint> the_v;
for(int n = 0; n < N; n++) the_v[n] = DataPoint(D, n, X + n * D);
std::nth_element (the_v.begin()+0, the_v.begin()+5, the_v.begin()+10,
DistanceComparator(the_v.begin()));
</code></pre>
<p><strong>EDIT:</strong></p>
<p>I've used the bhuvan-venkatesh's answer, and write some code to test.</p>
<pre><code>partition_timer = timeit.Timer("numpy.partition(a, 10000)",
"import numpy;numpy.random.seed(2);"+
"a = numpy.random.rand(10000000)")
print(partition_timer.timeit(10))
sort_timer = timeit.Timer("numpy.sort(a)",
"import numpy;numpy.random.seed(2);"+
"a = numpy.random.rand(10000000)")
print(sort_timer.timeit(10))
sorted_timer = timeit.Timer("sorted(a)",
"import numpy;numpy.random.seed(2);"+
"a = numpy.random.rand(10000000)")
print(sorted_timer.timeit(10))
</code></pre>
<p>and the result:</p>
<pre><code>2.2217168808
17.0386350155
281.301710844
</code></pre>
<p>And then, I will do more test using C++ code.</p>
<p>But there is a problem: when using numpy, it always returns a new array, which wastes a lot of memory when my array is huge.
How can I handle this?
Or do I just have to write a C++ extension for Python?</p>
<p><strong>EDIT2:</strong></p>
<p>@bhuvan-venkatesh Thanks for recommending the partition function.</p>
<p>I use partition like the below:</p>
<pre><code>import numpy
@profile
def for_numpy():
numpy.random.seed(2)
a = numpy.random.rand(1e7)
for i in range(100):
a.partition(numpy.random.randint(1e6))
if __name__ == '__main__':
for_numpy()
</code></pre>
<p>and ran the profiler like:</p>
<pre><code>python -m memory_profiler profiler_test.py
</code></pre>
<p>and the result is:</p>
<pre><code>Line # Mem usage Increment Line Contents
================================================
25 23.613 MiB 0.000 MiB @profile
26 def for_numpy():
27 23.613 MiB 0.000 MiB numpy.random.seed(2)
28 99.934 MiB 76.320 MiB a = numpy.random.rand(1e7)
29 100.004 MiB 0.070 MiB for i in range(100):
30 100.004 MiB 0.000 MiB a.partition(numpy.random.randint(1e6))
</code></pre>
<p>And it would not copy the whole array like:
numpy.partition(a, 3)</p>
<p><strong>Conclusion:</strong> numpy.ndarray.partition is the one I want to find.</p>
| 1 | 2016-08-04T01:29:03Z | 38,757,666 | <p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.partition.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.partition.html</a></p>
<p>Just note that <code>numpy.partition</code> will create a new array each call, meaning that you will quickly make a lot of new arrays. They are more efficient than python lists but will not do the exact same thing as in c++.</p>
<p>If you want the exact element then you can do a filter search which will still be O(n)</p>
<pre><code>array = np.array(...)
partition = np.partition(array, 5) # O(n)
element = np.where(partition == partition[5])[0][0] # O(n); normally index 5, unless duplicates
left, right = partition[:element], partition[element+1:] # O(n)
</code></pre>
<p>So your new code is slower but that is the python-y way to do it.</p>
<p><strong><em>EDIT:</em></strong></p>
<p>So you need a comparator? Apart from writing a small function of your own there is no way -- in pure numpy as a keyword -- because each numpy operation is implemented in highly optimized c-code meaning that passing in a python function or a python lambda would force numpy to go to the object level every time and eval. </p>
<p><em>numpy.vectorize</em> goes to the object level, but in the end you will have to write your own code; <a href="https://rosettacode.org/wiki/Quickselect_algorithm#Python" rel="nofollow">Rosetta code</a> has the implementation if you want to create a more "optimized algorithm". (I put that in quotes because with python objects you will still be much slower than c or numpy code because of object-level access). If speed is your true concern, but you want the python readability, consider making an extension with cython.</p>
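<p>As for the comparator (like <code>DistanceComparator</code> in the question): the usual numpy workaround is to precompute the comparison key for every element and run <code>numpy.argpartition</code> on the keys — still O(n), with no Python-level callback. A sketch under that assumption:</p>

```python
import numpy as np

points = np.array([4.0, 5.0, 7.0, 3.0, 6.0, 0.0, 1.0, 2.0, 9.0, 8.0])
keys = np.abs(points - points[0])   # e.g. each point's distance to the first point
order = np.argpartition(keys, 5)    # O(n); order[5] indexes the 6th-closest point

# Everything left of position 5 in `order` is at least as close:
print(points[order[5]], keys[order[5]])
```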
| 1 | 2016-08-04T02:40:13Z | [
"python",
"c++",
"nth-element"
] |
How can I put two arguments in one string for a input function in python? | 38,757,134 | <p>I am pretty new to python and am trying to create a small game just to help develop my skills, but I ran in to one code line I just can't figure out.</p>
<pre><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))
</code></pre>
<p>I have another line that goes earlier and asks the player for a name. </p>
<p><code>name = input('Player 1 what is your name? ')</code></p>
<p>but I want it so that instead of it saying </p>
<p><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))</code></p>
<p>that it says the name of the player I got from the input earlier on in the code?
How can I do this?</p>
| 0 | 2016-08-04T01:35:03Z | 38,757,160 | <p>You can use string formatting for this.</p>
<pre><code>r = str(input("%s please enter a integer between 1 and 10: " % name))
</code></pre>
| 1 | 2016-08-04T01:38:25Z | [
"python",
"string"
] |
How can I put two arguments in one string for a input function in python? | 38,757,134 | <p>I am pretty new to python and am trying to create a small game just to help develop my skills, but I ran in to one code line I just can't figure out.</p>
<pre><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))
</code></pre>
<p>I have another line that goes earlier and asks the player for a name. </p>
<p><code>name = input('Player 1 what is your name? ')</code></p>
<p>but I want it so that instead of it saying </p>
<p><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))</code></p>
<p>that it says the name of the player I got from the input earlier on in the code?
How can I do this?</p>
| 0 | 2016-08-04T01:35:03Z | 38,757,161 | <p>You can use a <a href="https://docs.python.org/3/tutorial/inputoutput.html#old-string-formatting" rel="nofollow">formatted string</a>:</p>
<pre><code>r = str(input("%s please enter a integer between 1 and 10: " % player_name))
</code></pre>
<p><code>input</code> expects a string. So, first you construct an appropriate string and then pass it. Simplified example of <code>%</code>:</p>
<pre><code>"%s is good" % "he" # transforms to "he is good"
</code></pre>
<p><code>%</code> is a sort of substitution operation with type checking; e.g. <code>%s</code> specifies the string type.</p>
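<p>The same substitution works with any number of placeholders — pass a tuple when there is more than one value:</p>

```python
name, low, high = "Player 1", 1, 10
prompt = "%s please enter an integer between %d and %d: " % (name, low, high)
print(prompt)  # Player 1 please enter an integer between 1 and 10:
```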
| 1 | 2016-08-04T01:38:44Z | [
"python",
"string"
] |
How can I put two arguments in one string for a input function in python? | 38,757,134 | <p>I am pretty new to python and am trying to create a small game just to help develop my skills, but I ran in to one code line I just can't figure out.</p>
<pre><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))
</code></pre>
<p>I have another line that goes earlier and asks the player for a name. </p>
<p><code>name = input('Player 1 what is your name? ')</code></p>
<p>but I want it so that instead of it saying </p>
<p><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))</code></p>
<p>that it says the name of the player I got from the input earlier on in the code?
How can I do this?</p>
| 0 | 2016-08-04T01:35:03Z | 38,757,234 | <p>I strongly suggest that you use (according to <a href="https://www.python.org/dev/peps/pep-3101/" rel="nofollow">PEP-3101</a>) :</p>
<pre><code>r = str(input('{} please enter a integer between 1 and 10: '.format(name)))
</code></pre>
<p>Instead of using the modulo operator (%) like :</p>
<pre><code>r = str(input("%s please enter a integer between 1 and 10: " % name))
</code></pre>
| 1 | 2016-08-04T01:49:24Z | [
"python",
"string"
] |
How can I put two arguments in one string for a input function in python? | 38,757,134 | <p>I am pretty new to python and am trying to create a small game just to help develop my skills, but I ran in to one code line I just can't figure out.</p>
<pre><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))
</code></pre>
<p>I have another line that goes earlier and asks the player for a name. </p>
<p><code>name = input('Player 1 what is your name? ')</code></p>
<p>but I want it so that instead of it saying </p>
<p><code>r = str(input("Player 1 please enter a integer between 1 and 10: "))</code></p>
<p>that it says the name of the player I got from the input earlier on in the code?
How can I do this?</p>
| 0 | 2016-08-04T01:35:03Z | 38,757,404 | <p>You can also do this since you are working with integers:</p>
<p>However, this solution will only work in Python 2</p>
<pre><code>player_name = raw_input("What is your name")
r = int(input("%s Enter a number:" % player_name))
</code></pre>
| 1 | 2016-08-04T02:09:20Z | [
"python",
"string"
] |
Python: sqlite3: List index out of range | 38,757,175 | <p>Hi, I am looking for some help with a "List index out of range" error I am getting while trying to insert data into my sqlite3 database.</p>
<p>This is my first attempt at using a class helper to parse data to and from my database, so please don't laugh at my unwieldy coding. :))</p>
<p>here is my main.py for testing out my class.</p>
<pre><code>import kivy
kivy.require('1.9.1')
from databaseHelper import DatabaseHelper
from kivy.app import App
from kivy.uix.widget import Widget
class Window1(Widget):
pass
class MyApp(App):
def build(self):
db = DatabaseHelper()
db.createDatabase('myDatabase')
columnData = ['unix', 'value', 'datestamp', 'keyword']
data = [57489543789, 2096, "12-12-12", "hello data"]
db.createTable(db.name, "datatable", columnData)
db.insertInto(db.name, "datatable", columnData, data)
return Window1()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>Which creates the database and creates the table entries.
Here is my insertInto method from my DatabaseHelper class.</p>
<pre><code>def insertInto(self, db_name, table_name, column_data, data):
self.db_name = db_name
self.table_name = table_name
self.column_data = column_data
self.data = data
try:
conn = sqlite3.connect(self.db_name)
c = conn.cursor()
dataString = ''
string = ''
values = ''
for i in data:
string += column_data[i] + ", "
values += '?, '
dataString += 'self.data' + '[' + str(i) + ']' + ', '
string = string[0:string.__len__() - 2]
values = values[0:values.__len__() - 2]
dataString = dataString[0:dataString.__len__() - 2]
c.execute("INSERT INTO " + self.table_name + " (" + string + ")" + " VALUES " + "(" + values + ")",
"(" + dataString + ")"
)
conn.commit()
print("Succesfully input data into database: " + self.db_name + " Table: " + self.table_name)
except Exception as e:
print("Failed to input data into database: " + self.db_name + " Table: " + self.table_name)
print(e)
finally:
c.close()
conn.close()
</code></pre>
<p>Which throws a "List index out of range" error.</p>
<p>Any help would be much appreciated, thanks.</p>
| 0 | 2016-08-04T01:40:39Z | 38,757,450 | <p>There is way too much string manipulation going on there. This <code>insertInto</code> is probably closer to what you want:</p>
<pre><code>def insertInto(self, db_name, table_name, column_data, data):
self.db_name = db_name
self.table_name = table_name
self.column_data = column_data
self.data = data
try:
conn = sqlite3.connect(self.db_name)
c = conn.cursor()
c.execute(
"INSERT INTO {table} ({columns}) VALUES ({parameters})".format(
table=table_name,
columns=', '.join(column_data),
parameters=', '.join(['?'] * len(column_data)),
),
data
)
conn.commit()
        print("Successfully input data into database: " + self.db_name + " Table: " + self.table_name)
except Exception as e:
print("Failed to input data into database: " + self.db_name + " Table: " + self.table_name)
print(e)
finally:
c.close()
conn.close()
</code></pre>
<p>The key changes here are:</p>
<ul>
<li><p><a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow"><code>str.join</code></a>ing all of the items instead of concatenating the next part and a delimiter in a loop, then slicing away the delimiter afterwards.</p>
<p>Here's how it works:</p>
<pre><code>>>> ', '.join(['one', 'two', 'three'])
'one, two, three'
</code></pre></li>
<li><p>Using <a href="https://docs.python.org/3/library/stdtypes.html#str.format" rel="nofollow">string formatting</a> to build strings by naming parts instead of using the <code>+</code> operator a bunch. It's easier to read.</p></li>
<li><p>Using list multiplication to get some number of <code>?</code> placeholders.</p>
<p>And here's how that works:</p>
<pre><code>>>> ['?'] * 5
['?', '?', '?', '?', '?']
</code></pre></li>
<li><p>Passing <code>data</code> as a parameter instead of a string with the text <code>'(data[0], data[1], …)'</code>. <code>data</code> should probably be a tuple, too:</p>
<pre><code>columnData = ('unix', 'value', 'datestamp', 'keyword')
data = (57489543789, 2096, "12-12-12", "hello data")
</code></pre></li>
</ul>
<p>I'm also not sure what a <code>DatabaseHelper</code> is supposed to represent. Does it have any state associated with it? <code>self.db_name</code>, <code>self.table_name</code>, <code>self.column_data</code>, <code>self.data</code>… they all seem to get overwritten with every insertion. A database connection seems like useful state to associate with a database helper, though:</p>
<pre><code>class DatabaseHelper:
    def __init__(self, db_name):
        self.db_name = db_name
        self.connection = sqlite3.connect(db_name, isolation_level=None)

    def close(self):
        self.connection.close()

    def insertInto(self, table_name, columns, data):
        query = "INSERT INTO {table} ({columns}) VALUES ({parameters})".format(
            table=table_name,
            columns=', '.join(columns),
            parameters=', '.join(['?'] * len(columns))
        )
        self.connection.execute(query, data)
        print("Successfully input data into database: " + self.db_name + " Table: " + table_name)
</code></pre>
<p>Then you can use it like this:</p>
<pre><code>class MyApp(App):
def build(self):
db = DatabaseHelper('myDatabase')
columnData = ('unix', 'value', 'datestamp', 'keyword')
data = (57489543789, 2096, "12-12-12", "hello data")
db.createTable("datatable", columnData)
db.insertInto("datatable", columnData, data)
return Window1()
</code></pre>
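<p>To sanity-check the query building, here is a self-contained run against an in-memory database (table name and values taken from the question):</p>

```python
import sqlite3

columns = ("unix", "value", "datestamp", "keyword")
data = (57489543789, 2096, "12-12-12", "hello data")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datatable (unix, value, datestamp, keyword)")

# Same join/placeholder construction as in the answer above:
query = "INSERT INTO datatable ({columns}) VALUES ({parameters})".format(
    columns=", ".join(columns),
    parameters=", ".join(["?"] * len(columns)),
)
conn.execute(query, data)

row = conn.execute("SELECT * FROM datatable").fetchone()
print(row)  # (57489543789, 2096, '12-12-12', 'hello data')
conn.close()
```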
| 1 | 2016-08-04T02:15:08Z | [
"python",
"sqlite3",
"kivy"
] |
Best practice for config variables in Python | 38,757,351 | <p>So recently my programs have become more complex, and are starting to require more configuration. I've been doing the following, however it feels wrong...</p>
<pre><code>class config:
delay = 1.3
files = "path/to/stuff"
name = "test"
dostuff(config.name) #etc...
</code></pre>
<p>I've never been a fan of the ALL_CAPS_VARIABLE method, and was wondering if there is an "official" way to do this, and if there is anything wrong with my current method.</p>
| 1 | 2016-08-04T02:02:34Z | 38,757,685 | <p>I recommend using <a href="https://github.com/henriquebastos/python-decouple" rel="nofollow">python-decouple</a>. This library allows you to separate code from configuration (data).</p>
<p>UPDATE:</p>
<h3>Brief explanation of usage of this library:</h3>
<p>Simply create a <strong>.env</strong> text file on your repository's root directory in the form:</p>
<pre><code>DEBUG=True
TEMPLATE_DEBUG=True
EMAIL_PORT=405
SECRET_KEY=ARANDOMSECRETKEY
DATABASE_URL=mysql://myuser:mypassword@myhost/mydatabase
PERCENTILE=90%
#COMMENTED=42
</code></pre>
<p>In your Python code, it can be used this way:</p>
<pre><code>SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
EMAIL_HOST = config('EMAIL_HOST', default='localhost')
EMAIL_PORT = config('EMAIL_PORT', default=25, cast=int)
</code></pre>
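<p>If you want to avoid the dependency, the core idea is small enough to sketch yourself — parse <code>KEY=VALUE</code> lines, skip comments, and cast on read. This is only an illustration of the concept, not the library's actual implementation:</p>

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
DEBUG=True
EMAIL_PORT=405
#COMMENTED=42
"""
env = parse_env(sample)
print(env["DEBUG"])            # 'True' (cast on read, e.g. with bool/int)
print(int(env["EMAIL_PORT"]))  # 405
print("COMMENTED" in env)      # False
```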
| 2 | 2016-08-04T02:42:25Z | [
"python"
] |
How do I set a boundary in which my image can go through in Pygame?How do I keep an image from going behind another? | 38,757,357 | <p>This is the code for my pygame</p>
<pre><code>import pygame
import os
img_path=os.path.join('C:/Desktop/Python Stuff','Angry Birds.jpg')
class pic(object):
def __init__(self):
""" The constructor of the class """
self.image = pygame.image.load(img_path)
# the bird's position
self.x = 0
self.y = 0
def handle_keys(self):
""" Handles Keys """
key = pygame.key.get_pressed()
dist = 5
if key[pygame.K_DOWN]: # down key
self.y += dist # move down
elif key[pygame.K_UP]: # up key
self.y -= dist # move up
if key[pygame.K_RIGHT]: # right key
self.x += dist # move right
elif key[pygame.K_LEFT]: # left key
self.x -= dist # move left
def draw(self, surface):
""" Draw on surface """
# blit yourself at your current position
surface.blit(self.image, (self.x, self.y))
</code></pre>
<p>This is the screen size. Is this where I should restrict the image's boundaries? </p>
<pre><code>pygame.init()
screen=pygame.display.set_mode([1500,850])
Pic=pic()
pygame.display.set_caption('Angry Birds')
</code></pre>
<p>This is the image that I want to have a boundary for</p>
<pre><code>pic=pygame.image.load('Angry Birds.jpg')
keep_going=True
while keep_going:
event=pygame.event.poll()
    if event.type == pygame.QUIT:
pygame.quit()
running = False
Pic.handle_keys()
screen.blit(pic, (-200, 0))
Pic.draw(screen)
</code></pre>
<p>This image is what the 'Angry Birds' image is going behind. How do I stop it from going behind this image? </p>
<pre><code>tux=pygame.image.load('Rock Lee.gif')
screen.blit(tux,(500,600))
screen.blit(tux,(500,400))
screen.blit(tux,(500,0))
screen.blit(tux,(900,200))
screen.blit(tux,(900,400))
screen.blit(tux,(900,600))
screen.blit(tux,(1300,0))
screen.blit(tux,(1300,200))
screen.blit(tux,(1300,600))
pygame.display.get_surface([1500,850]).get_size([1500,850])
pygame.display.update()
</code></pre>
| 0 | 2016-08-04T02:03:33Z | 38,785,647 | <h2>Border collision</h2>
<p>An easy way to implement border collision is to check whether the current position is outside the screen, and if it is, move it back. This is easiest done by creating a Rect object from the <em>screen</em>, which you can pass into an <strong>update</strong> method of your class <em>pic</em> (class names should start with a capital letter). So start by creating an update method where you pass in the <em>screen</em> object.</p>
<p>Also, since the x and y position reference the <strong>top left</strong> of the image you need to take that in consideration when checking for border collision with the right side and the bottom. Best would be to create attributes <em>width</em> and <em>height</em> instead of what I'm doing below.</p>
<pre><code>def update(self, screen):
    """Method that checks border collision for object 'pic'."""
border = screen.get_rect()
width = self.image.get_width()
height = self.image.get_height()
if self.x < border.left:
# You collided with the left side of the border.
# Move your character back to the screen
self.x = border.left
elif self.x > border.right - width:
# You collided with the right side of the border.
# Move your character back to the screen
self.x = border.right - width
if self.y < border.top:
# You collided with the top of the border.
# Move your character back to the screen
self.y = border.top
elif self.y > border.bottom - height:
# You collided with the bottom of the border.
# Move your character back to the screen
self.y = border.bottom - height
</code></pre>
<p>All you have to do is call this method every time in your loop, like so:</p>
<pre><code>Pic.handle_keys() # Variables should be all lowercase! Not like it's currently.
Pic.update(screen) # Variables should be all lowercase! Not like it's currently.
Pic.draw(screen) # Variables should be all lowercase! Not like it's currently.
</code></pre>
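<p>Each <code>if</code>/<code>elif</code> pair in <code>update</code> is really just a clamp on one axis, so the same logic can be collapsed to a one-liner per axis (a pygame-independent sketch):</p>

```python
def clamp(value, lower, upper):
    """Constrain value to the closed range [lower, upper]."""
    return max(lower, min(value, upper))

# Keeping x inside a 1500px-wide screen for a 100px-wide sprite:
print(clamp(-20, 0, 1500 - 100))   # 0    (pushed back from the left edge)
print(clamp(700, 0, 1500 - 100))   # 700  (already on screen, unchanged)
print(clamp(1600, 0, 1500 - 100))  # 1400 (pushed back from the right edge)
```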
<hr>
<h2>Keep image in front</h2>
<p>When blitting to the screen it draws each image on top of each other. In your case you're blitting your character and then the rocks, meaning the rocks always be on top of your character. Changing this is simple, blit the rocks first and the character last and your character will end up in front of your rocks.</p>
| 0 | 2016-08-05T09:18:08Z | [
"python",
"pygame"
] |
How do I set a boundary in which my image can go through in Pygame?How do I keep an image from going behind another? | 38,757,357 | <p>This is the code for my pygame</p>
<pre><code>import pygame
import os
img_path=os.path.join('C:/Desktop/Python Stuff','Angry Birds.jpg')
class pic(object):
def __init__(self):
""" The constructor of the class """
self.image = pygame.image.load(img_path)
# the bird's position
self.x = 0
self.y = 0
def handle_keys(self):
""" Handles Keys """
key = pygame.key.get_pressed()
dist = 5
if key[pygame.K_DOWN]: # down key
self.y += dist # move down
elif key[pygame.K_UP]: # up key
self.y -= dist # move up
if key[pygame.K_RIGHT]: # right key
self.x += dist # move right
elif key[pygame.K_LEFT]: # left key
self.x -= dist # move left
def draw(self, surface):
""" Draw on surface """
# blit yourself at your current position
surface.blit(self.image, (self.x, self.y))
</code></pre>
<p>This is the screen size. Is this where the I should restrict the image's boundaries? </p>
<pre><code>pygame.init()
screen=pygame.display.set_mode([1500,850])
Pic=pic()
pygame.display.set_caption('Angry Birds')
</code></pre>
<p>This is the image that I want to have a boundary for</p>
<pre><code>pic=pygame.image.load('Angry Birds.jpg')
keep_going=True
while keep_going:
event=pygame.event.poll()
*emphasized text* if event.type == pygame.QUIT:
pygame.quit()
running = False
Pic.handle_keys()
screen.blit(pic, (-200, 0))
Pic.draw(screen)
</code></pre>
<p>This image is what the 'Angry Birds' image is going behind. How do I stop it from going behind this image? </p>
<pre><code>tux=pygame.image.load('Rock Lee.gif')
screen.blit(tux,(500,600))
screen.blit(tux,(500,400))
screen.blit(tux,(500,0))
screen.blit(tux,(900,200))
screen.blit(tux,(900,400))
screen.blit(tux,(900,600))
screen.blit(tux,(1300,0))
screen.blit(tux,(1300,200))
screen.blit(tux,(1300,600))
pygame.display.get_surface([1500,850]).get_size([1500,850])
pygame.display.update()
</code></pre>
| 0 | 2016-08-04T02:03:33Z | 38,792,765 | <h1>A) Keep rect on screen</h1>
<p>The simplest way would be using <a href="http://www.pygame.org/docs/ref/rect.html#pygame.Rect.clamp_ip" rel="nofollow" title="Rect.clamp_ip(rect)"><code>Rect.clamp_ip(rect)</code></a> on a <a href="http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite" rel="nofollow" title="Sprite"><code>Sprite</code></a></p>
<pre><code>screen_size = pygame.Rect(0, 0, 1500, 850)
# right after when you move the bird
bird.rect.clamp_ip(screen_size)
</code></pre>
<h1>B) rect on rect collision</h1>
<pre><code># Where .xvel and .yvel are bird's movement per frame
new_rect = bird.rect.move(bird.xvel, bird.yvel)
if not new_rect.colliderect(other_bird.rect):
    bird.rect = new_rect
else:
    print("Ouch!")
</code></pre>
| 0 | 2016-08-05T15:23:08Z | [
"python",
"pygame"
] |
String to time python in hours and minutes and calculate difference | 38,757,367 | <p>I am taking time as input from the user in the <code>HH:MM</code> format. Let's say 00:00 and now I want to keep adding a minute to the time and make it 00:01, 00:02 and so on.</p>
<p>Also, I am taking two inputs from the user, <code>start_time</code> and <code>end_time</code>, as strings. How can I calculate the difference between the two times as well, in minutes?</p>
<p>I am new to Python and any help would be appreciated!</p>
<p>I am using the below code:</p>
<pre><code>#to calculate difference in time
time_diff = datetime.strptime(end_time, '%H:%M') - datetime.strptime(start_time, '%H:%M')
minutes = int(time_diff.total_seconds()/60)
print minutes
#to convert string to time format HH:MM
start_time = datetime.strptime(start_time, '%H:%M').time()
#to increment time by 1 minute
start_time = start_time + datetime.timedelta(minutes=1)
</code></pre>
<p>I am not able to increment the <code>start_time</code> using timedelta.</p>
| 0 | 2016-08-04T02:04:56Z | 38,757,716 | <pre><code>import datetime
time_diff = datetime.datetime.strptime(end_time, '%H:%M') - datetime.datetime.strptime(start_time, '%H:%M')
minutes = int(time_diff.total_seconds()/60)
print minutes
</code></pre>
<p><code>datetime</code> is a class of the <code>datetime</code> module that has a classmethod called <code>strptime</code>. The nomenclature is a bit confusing, but this should work as you intend it to.</p>
<p>As for adding a time delta, you'll need to store the start time as a <code>datetime</code> object in order to get that to work:</p>
<pre><code>start_datetime = datetime.datetime.strptime(start_time, '%H:%M')
start_datetime = start_datetime + datetime.timedelta(minutes=1)
print start_datetime
</code></pre>
| 0 | 2016-08-04T02:46:35Z | [
"python",
"datetime",
"time"
] |
String to time python in hours and minutes and calculate difference | 38,757,367 | <p>I am taking time as input from the user in the <code>HH:MM</code> format. Let's say 00:00 and now I want to keep adding a minute to the time and make it 00:01, 00:02 and so on.</p>
<p>Also, I am taking two inputs from the user, <code>start_time</code> and <code>end_time</code>, as strings. How can I calculate the difference between the two times as well, in minutes?</p>
<p>I am new to Python and any help would be appreciated!</p>
<p>I am using the below code:</p>
<pre><code>#to calculate difference in time
time_diff = datetime.strptime(end_time, '%H:%M') - datetime.strptime(start_time, '%H:%M')
minutes = int(time_diff.total_seconds()/60)
print minutes
#to convert string to time format HH:MM
start_time = datetime.strptime(start_time, '%H:%M').time()
#to increment time by 1 minute
start_time = start_time + datetime.timedelta(minutes=1)
</code></pre>
<p>I am not able to increment the <code>start_time</code> using timedelta.</p>
| 0 | 2016-08-04T02:04:56Z | 38,757,760 | <p>First part of your question, you can use the <a href="https://docs.python.org/3.5/library/datetime.html" rel="nofollow">datetime module</a>:</p>
<pre><code>from datetime import datetime as dt
from datetime import timedelta as td
UsrInput = '00:00'
fmtString = '%H:%M'
myTime = dt.strptime(UsrInput, fmtString)
increment = td(minutes=1)
for count in range(10):
myTime += increment
print (dt.strftime(myTime, fmtString))
</code></pre>
<p>Second part will also use datetime, as such:</p>
<pre><code>from datetime import datetime as dt
from datetime import timedelta as td
start_time = '00:01'
end_time = '00:23'
fmtString = '%H:%M'
myStart = dt.strptime(start_time, fmtString)
myEnd = dt.strptime(end_time, fmtString)
difference = myEnd - myStart
print(int(difference.total_seconds() // 60))
</code></pre>
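<p>For the record, the original attempt failed because a <code>timedelta</code> can be added to a <code>datetime</code> but not to the bare <code>time</code> object returned by <code>.time()</code>. Keeping everything as <code>datetime</code> makes both the increment and the difference straightforward:</p>

```python
from datetime import datetime, timedelta

fmt = "%H:%M"
start = datetime.strptime("00:00", fmt)
end = datetime.strptime("00:23", fmt)

# Difference in whole minutes:
minutes = int((end - start).total_seconds() // 60)
print(minutes)              # 23

# Incrementing by one minute works on the datetime object:
start += timedelta(minutes=1)
print(start.strftime(fmt))  # 00:01
```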
| 0 | 2016-08-04T02:52:09Z | [
"python",
"datetime",
"time"
] |
Why boost python calls the copy constructor? | 38,757,369 | <p>Assume a wrapper class is provided</p>
<pre><code>class Example
{
public:
Example()
{
std::cout << "hello\n";
}
Example(const Example& e)
{
std::cout << "copy\n";
counter++;
}
~Example()
{
std::cout << "bye\n";
}
Example& count()
{
std::cout << "Count: " << counter << std::endl;
return *this;
}
static int counter;
};
int Example::counter = 0;
</code></pre>
<p>which is exposed to python using</p>
<pre><code>using namespace boost::python;
class_<Example>("Example", init<>())
.def("count", &Example::count, return_value_policy<copy_non_const_reference>());
</code></pre>
<p>now if i execute the following python code</p>
<pre><code>obj=Example()
obj.count().count()
</code></pre>
<p>I get</p>
<pre><code>hello
Count: 0
copy
Count: 1
copy
bye
</code></pre>
<p>which means boost python is using the copy constructor.</p>
<p>My questions:</p>
<ol>
<li>Why the copy constructor is called?</li>
<li><p>If I use boost::noncopyable, then the copy constructor is not called. However, in this case I cannot execute my python code as it complains about a to_python converter (see below). Is there way to fix this?</p>
<p>TypeError: No to_python (by-value) converter found for C++ type: class Example</p></li>
</ol>
| 1 | 2016-08-04T02:05:07Z | 38,759,433 | <p>The copy constructor is called because <code>return_value_policy<copy_non_const_reference>()</code> is being set and by <a href="http://www.boost.org/doc/libs/1_37_0/libs/python/doc/v2/copy_non_const_reference.html" rel="nofollow">boost docs</a>:</p>
<blockquote>
<p>copy_non_const_reference is a model of ResultConverterGenerator which
can be used to wrap C++ functions returning a reference-to-non-const
type such that the referenced value is copied into a new Python
object.</p>
</blockquote>
<p>It complains because the return value is required to be copied but at the same time the class is <code>noncopyable</code> so the default converter is not found. </p>
<p>To fix this either don't use <code>return_value_policy<copy_non_const_reference>()</code> or define your custom <code>to_python</code> converter.</p>
| 2 | 2016-08-04T05:54:04Z | [
"python",
"boost",
"boost-python"
] |
Error when installing 'pycryptodome' | 38,757,403 | <p>Whenever I install the package, I get this string out:-</p>
<p>Command "/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;<strong>file</strong>='/private/var/folders/l8/nrwgdgr96v33vfcd95r2yl3m0000gp/T/pip-build-AJpKPV/pycryptodomex/setup.py';exec(compile(getattr(tokenize, 'open', open)(<strong>file</strong>).read().replace('\r\n', '\n'), <strong>file</strong>, 'exec'))" install --record /var/folders/l8/nrwgdgr96v33vfcd95r2yl3m0000gp/T/pip-GEiSB1-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/l8/nrwgdgr96v33vfcd95r2yl3m0000gp/T/pip-build-AJpKPV/pycryptodomex/</p>
<p>Can anyone help?</p>
| 0 | 2016-08-04T02:09:19Z | 38,782,031 | <p>Have you tried to upgrade <code>pip</code> first?</p>
<pre><code>pip install --upgrade pip
</code></pre>
<p>And then install the <code>pycryptodome</code> package.</p>
| 0 | 2016-08-05T05:53:52Z | [
"python",
"osx",
"pip"
] |
How do you submit a PHP form that doesn't return results immediately using Python? | 38,757,415 | <p>There is a PHP form which queries a massive database. The URL for the form is <a href="https://db.slickbox.net/venues.php" rel="nofollow">https://db.slickbox.net/venues.php</a>. It takes up to 10 minutes after the form is sent for results to be returned, and the results are returned inline on the same page. I've tried using Requests, URLLib2, LXML, and Selenium but I cannot come up with a solution using any of these libraries. Does anyone know of a way to retrieve the page source of the results after submitting this form?</p>
<p>If you know of a solution for this, for the sake of testing just fill out the name field ("vname") with the name of any store/gas station that comes to mind. Ultimately, I need to also set the checkboxes with the "checked" attribute but that's a subsequent goal after I get this working. Thank you!</p>
| 0 | 2016-08-04T02:10:28Z | 38,757,548 | <p>I usually rely on cURL for this kind of thing.
Instead of submitting the form with the button and scraping the source, call the response page directly, passing it your request.
As I work with PHP, it's quite easy to do this. With Python, you will need pycURL to manage the same thing.</p>
<p>So the only thing to do is to call venues.php with the right argument values, sent using the <a href="http://stackoverflow.com/questions/28395/passing-post-values-with-curlhttp://">POST method with Curl</a>.</p>
<p>This way, you will need to prepare your request (country code, cat name), but you won't need to check the checkbox nor load the website page on your browser.</p>
<pre><code>ini_set('max_execution_time', 1200); // wait 20 minutes before quitting
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "https://db.slickbox.net/venues.php");
curl_setopt($ch, CURLOPT_HEADER, 0);
// prepare arguments for the form
$data = array('adlock' => 1, 'age' => 0, 'country' => 145, 'imgcnt' => 0, 'lock' => 0, 'regex' => 1, 'submit' => 'Search', 'vname' => 'test');
//add arguments to our request
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
//launch request
if( ! $result = curl_exec($ch))
{
trigger_error(curl_error($ch));
}
echo $result;
</code></pre>
| 0 | 2016-08-04T02:26:07Z | [
"php",
"python",
"forms",
"post"
] |
How do you submit a PHP form that doesn't return results immediately using Python? | 38,757,415 | <p>There is a PHP form which queries a massive database. The URL for the form is <a href="https://db.slickbox.net/venues.php" rel="nofollow">https://db.slickbox.net/venues.php</a>. It takes up to 10 minutes after the form is sent for results to be returned, and the results are returned inline on the same page. I've tried using Requests, URLLib2, LXML, and Selenium but I cannot come up with a solution using any of these libraries. Does anyone know of a way to retrieve the page source of the results after submitting this form?</p>
<p>If you know of a solution for this, for the sake of testing just fill out the name field ("vname") with the name of any store/gas station that comes to mind. Ultimately, I need to also set the checkboxes with the "checked" attribute but that's a subsequent goal after I get this working. Thank you!</p>
| 0 | 2016-08-04T02:10:28Z | 38,757,620 | <p>How about <a href="https://github.com/jeanphix/Ghost.py" rel="nofollow">ghost</a>?</p>
<pre><code>from ghost import Ghost
ghost = Ghost()
with ghost.start() as session:
page, extra_resources = session.open("https://db.slickbox.net/venues.php", wait_onload_event=True)
ghost.set_field_value("input[name=vname]", "....")
# Any other values
page.fire_on('form', 'submit')
page, resources = ghost.wait_for_page_loaded()
content = session.content # or page.content I forgot which
</code></pre>
<p>Afterwards you can use BeautifulSoup to parse the HTML, or Ghost may have some rudimentary utilities to do that.</p>
| 0 | 2016-08-04T02:35:38Z | [
"php",
"python",
"forms",
"post"
] |
'None' is being displayed in the output of quotient and remainder | 38,757,462 | <p>Writing a program to print the quotient and remainder. The task is to change the quotientProblem function into one called quotientString that merely returns the string rather than printing the string directly, and to have the main function print the result of each call to the quotientString function.</p>
<pre><code>def value():
a=int(input("Enter a number: "))
b=int(input("Enter next number: "))
z=print("When",a,"is divided by",b,"the remainder is",a%b,"and the quotient is",a//b,".")
print (division(a,b,z))
def division(x,y,z):
return z
value()
</code></pre>
<p>When I executed</p>
<pre><code>>>>Enter a number: 5
>>>Enter next number: 3
When 5 is divided by 3 the remainder is 2 and the quotient is 1 .
None
</code></pre>
<p>Here <code>None</code> keeps displaying. </p>
| -2 | 2016-08-04T02:16:18Z | 38,757,501 | <p>You assigned the result of the <code>print</code> call to <code>z</code>. <code>print</code> has no specific return, which means it returns <code>None</code>. Your <code>division</code> function then prints that. Don't use the return value from <code>print</code>. I have no idea what you're expecting it to do, but initializing <code>z</code> to anything useful is not a possible outcome.</p>
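To make the point concrete, here is a minimal runnable sketch (variable names are illustrative, not from the original code) showing that <code>print</code> returns <code>None</code>, and how to build the message as a string instead:

```python
# print() always returns None, so capturing its result is never useful
z = print("hello")
assert z is None

# Build the message as a real string instead, then print it once
a, b = 5, 3
msg = ("When {} is divided by {} the remainder is {} "
       "and the quotient is {}.".format(a, b, a % b, a // b))
print(msg)
```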
| 0 | 2016-08-04T02:20:50Z | [
"python",
"python-3.x"
] |
'None' is being displayed in the output of quotient and remainder | 38,757,462 | <p>Writing a program to print the quotient and remainder. The task is to change the quotientProblem function into one called quotientString that merely returns the string rather than printing the string directly, and to have the main function print the result of each call to the quotientString function.</p>
<pre><code>def value():
a=int(input("Enter a number: "))
b=int(input("Enter next number: "))
z=print("When",a,"is divided by",b,"the remainder is",a%b,"and the quotient is",a//b,".")
print (division(a,b,z))
def division(x,y,z):
return z
value()
</code></pre>
<p>When I executed</p>
<pre><code>>>>Enter a number: 5
>>>Enter next number: 3
When 5 is divided by 3 the remainder is 2 and the quotient is 1 .
None
</code></pre>
<p>Here <code>None</code> keeps displaying. </p>
| -2 | 2016-08-04T02:16:18Z | 38,757,505 | <p>What do you expect to happen? <code>division</code> is a function that takes three arguments and returns the last. So <code>division(a, b, z)</code> returns <code>z</code>. What is <code>z</code>? It's the return value of <code>print</code>. The <code>print</code> function returns <code>None</code>. So <code>division(a, b, z)</code> is <code>None</code>. And hence, your <code>print(division(a,b,z))</code> line prints <code>None</code>.</p>
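A minimal sketch of the quotientString approach the assignment describes — the function returns the string and the caller prints it, so no <code>None</code> appears (names here are illustrative):

```python
def quotient_string(a, b):
    # Return the message instead of printing it
    return ("When {} is divided by {} the remainder is {} "
            "and the quotient is {}.".format(a, b, a % b, a // b))

def main():
    print(quotient_string(5, 3))  # the caller prints the returned string

main()
```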
| 0 | 2016-08-04T02:21:08Z | [
"python",
"python-3.x"
] |
creating an sql query using a DataFrame column | 38,757,655 | <p>Based on this question from Stack Overflow:
<a href="http://stackoverflow.com/questions/29738859/sql-where-in-clause-using-column-in-pandas-dataframe">SQL where in clause using column in pandas dataframe</a></p>
<p>I tried:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame()
df1['col1'] = [1,2,3,4,5]
str = ','.join([str(x) for x in df1['col1'].unique().tolist()])
</code></pre>
<p>However, I see the below error:</p>
<pre><code>TypeError: 'list' object is not callable
</code></pre>
<p>I want to query all the unique items in a column into another SQL table and then append those results to my original dataframe</p>
<p>Is there another built-in approach altogether for this, please?</p>
<p>thanks</p>
| 0 | 2016-08-04T02:39:08Z | 38,757,759 | <p>Maybe something like:</p>
<pre><code>df1['new'] = df1['col1'].apply(lambda x: f(x) if x in df1['col1'].unique().tolist() else 'n/a')
</code></pre>
<p>You'll need to define f(x) to take in the unique value, run the query, and return the results you want to append. You can also change 'n/a' to whatever you want if it is not a unique value. </p>
<pre><code>def f(x):
"RUN QUERY HERE"
return result
</code></pre>
| 0 | 2016-08-04T02:52:09Z | [
"python",
"sql",
"pandas"
] |
Remove NaNs from numpy array that has string values and numerical values | 38,757,657 | <p>I have a <code>(M x N)</code> numpy array, which contains string values, numerical values and nans. I want to drop the rows which contain <code>NaN</code> values. I've tried:</p>
<pre><code>arr[~np.isnan(arr)]
</code></pre>
<p>however I get the error: </p>
<pre><code>TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>Solution that I used:</p>
<pre><code># get column with NaNs, find row index, store in list
nan_idx = []
for v,m in enumerate(arr[:,row]):
if np.isnan(m):
nan_idx.append(v)
# separate columns with strings and non strings
numeric_cols = arr[:,:some_idx]
non_numeric_cols = arr[:,other_idx:]
# remove the nans
numeric_cols = numeric_cols[~np.isnan(numeric_cols).any(axis=1)]
non_numeric_cols = np.delete(non_numeric_cols, nan_idx, 0)
</code></pre>
| 0 | 2016-08-04T02:39:10Z | 38,759,206 | <p>I get your error if I make an object dtype array:</p>
<pre><code>In [112]: arr=np.ones((3,2),object)
In [113]: arr
Out[113]:
array([[1, 1],
[1, 1],
[1, 1]], dtype=object)
In [114]: np.isnan(arr)
...
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>That <code>dtype</code> is the only one that can mix numbers, strings and <code>np.nan</code> (which is a float). You can't do a lot of whole-array operations with this.</p>
<p>I can't readily test your solution because several variables are unknown.</p>
<p>With a more general <code>arr</code>, I don't see how you can remove a row without iterating over both rows and cols, testing whether each value is numeric, and if numeric <code>isnan</code>. <code>np.isnan</code> is picky and can only operate on a float.</p>
<p>As mentioned in the 'possible duplicate' pandas <code>isnull</code> is more general.</p>
<p>So basically two points:</p>
<ul>
<li><p>what's a good general test that can handle strings as well as numbers</p></li>
<li><p>can you get around a full iteration, assuming the array is dtype object.</p></li>
</ul>
<p><a href="http://stackoverflow.com/questions/36198118/np-isnan-on-arrays-of-dtype-object">np.isnan on arrays of dtype "object"</a>
My solution here is to do a list comprehension to loop over a 1d array.</p>
<p>From that I can test each element of <code>arr</code> with:</p>
<pre><code>In [125]: arr
Out[125]:
array([['str', 1],
[nan, 'str'],
[1, 1]], dtype=object)
In [136]: for row in arr:
...: for col in row:
...: print(np.can_cast(col,float) and np.isnan(col))
False
False
True
False
False
False
</code></pre>
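Putting those pieces together, here is a sketch that drops the NaN-containing rows from an object array. It uses <code>isinstance(col, float)</code> as the numeric test (an assumption of this sketch, chosen so the check behaves the same regardless of NumPy version):

```python
import numpy as np

arr = np.array([['str', 1], [np.nan, 'str'], [1, 1]], dtype=object)

def row_has_nan(row):
    # NaN is a float, so only float cells can possibly be NaN;
    # strings and ints are skipped by the isinstance test
    return any(isinstance(col, float) and np.isnan(col) for col in row)

mask = np.array([not row_has_nan(row) for row in arr])
cleaned = arr[mask]  # keeps only the rows without NaN
```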
| 0 | 2016-08-04T05:37:39Z | [
"python",
"arrays",
"numpy"
] |
Remove NaNs from numpy array that has string values and numerical values | 38,757,657 | <p>I have a <code>(M x N)</code> numpy array, which contains string values, numerical values and nans. I want to drop the rows which contain <code>NaN</code> values. I've tried:</p>
<pre><code>arr[~np.isnan(arr)]
</code></pre>
<p>however I get the error: </p>
<pre><code>TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>Solution that I used:</p>
<pre><code># get column with NaNs, find row index, store in list
nan_idx = []
for v,m in enumerate(arr[:,row]):
if np.isnan(m):
nan_idx.append(v)
# separate columns with strings and non strings
numeric_cols = arr[:,:some_idx]
non_numeric_cols = arr[:,other_idx:]
# remove the nans
numeric_cols = numeric_cols[~np.isnan(numeric_cols).any(axis=1)]
non_numeric_cols = np.delete(non_numeric_cols, nan_idx, 0)
</code></pre>
| 0 | 2016-08-04T02:39:10Z | 38,763,221 | <p>One solution is to use np.sum() to sum each row up, because nan + any float = nan, so you can tell which rows include a NaN value.</p>
<pre><code>b = np.sum(arr, axis=1)
rowsWithoutNaN = [not np.isnan(i) for i in b]
result = np.array([val for shouldKeep, val in zip(rowsWithoutNaN, arr) if shouldKeep])
</code></pre>
| 0 | 2016-08-04T09:15:06Z | [
"python",
"arrays",
"numpy"
] |
Targets don't match node IDs in networkx json file | 38,757,701 | <p>I have a network I want to output to a json file. However, when I output it, node targets become converted to numbers and do not match the node ids which are strings.</p>
<p>For example:</p>
<pre><code>G = nx.DiGraph(data)
G.edges()
</code></pre>
<p>results in:</p>
<pre><code>[(22, 'str1'),
(22, 'str2'),
(22, 'str3')]
</code></pre>
<p>in python. This is correct.</p>
<p>But in the output, when I write out the data like so...</p>
<pre><code>json.dump(json_graph.node_link_data(G), f,
indent = 4, sort_keys = True, separators=(',',':'))
</code></pre>
<p>while the ids for the three target nodes 'str1', 'str2', and 'str3' remain strings...</p>
<pre><code>{
"id":"str1"
},
{
"id":"str2"
},
{
"id":"str3"
}
</code></pre>
<p>The targets of node 22 have been turned into numbers</p>
<pre><code> {
"source":22,
"target":972
},
{
"source":22,
"target":1261
},
{
"source":22,
"target":1259
}
</code></pre>
<p>This happens for all nodes that have string ids</p>
<p>Why is this, and how can I prevent it?</p>
<p>The desired result is that either "target" fields should keep the string ids, or that the string ids become numeric in a way that they match the targets.</p>
| 0 | 2016-08-04T02:44:21Z | 38,765,461 | <blockquote>
<p>Why is this</p>
</blockquote>
<p>It's a feature. Not all graph libraries accept strings as identifiers, but all that I know of accept integers.</p>
<blockquote>
<p>how can I prevent it?</p>
</blockquote>
<p>Replace the ids by node names using the <code>nodes</code> map:</p>
<pre><code>>>> import networkx as nx
>>> import pprint
>>> g = nx.DiGraph()
>>> g.add_edge(1, 'foo')
>>> g.add_edge(2, 'bar')
>>> g.add_edge('foo', 'bar')
>>> res = nx.node_link_data(g)
>>> pprint.pprint(res)
{'directed': True,
'graph': {},
'links': [{'source': 0, 'target': 3},
{'source': 1, 'target': 2},
{'source': 3, 'target': 2}],
'multigraph': False,
'nodes': [{'name': 1}, {'name': 2}, {'name': 'bar'}, {'name': 'foo'}]}
>>> res['links'] = [
{
'source': res['nodes'][link['source']]['name'],
'target': res['nodes'][link['target']]['name']
}
for link in res['links']]
>>> pprint.pprint(res)
{'directed': True,
'graph': {},
'links': [{'source': 1, 'target': 'foo'},
{'source': 2, 'target': 'bar'},
{'source': 'foo', 'target': 'bar'}],
'multigraph': False,
'nodes': [{'name': 1}, {'name': 2}, {'name': 'bar'}, {'name': 'foo'}]}
</code></pre>
| 1 | 2016-08-04T10:57:52Z | [
"python",
"networkx"
] |
pandas.io.common.CParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file | 38,757,713 | <p>I have 50+ large csv files, each more than 10 MB in size. These inputs have more than 25 columns and more than 50K rows. </p>
<p>All of these have the same headers, and I am trying to merge them into one csv with the header appearing only once. </p>
<p>Option: One
Code: Working for small sized csv -- 25+ columns but size of the file in kbs.</p>
<pre><code>import pandas as pd
import glob
interesting_files = glob.glob("*.csv")
df_list = []
for filename in sorted(interesting_files):
df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
full_df.to_csv('output.csv')
</code></pre>
<p>But the above code does not work for the larger files and gives the error. </p>
<p>Error: </p>
<pre><code>Traceback (most recent call last):
File "merge_large.py", line 6, in <module>
all_files = glob.glob("*.csv", encoding='utf8', engine='python')
TypeError: glob() got an unexpected keyword argument 'encoding'
lakshmi@lakshmi-HP-15-Notebook-PC:~/Desktop/Twitter_Lat_lon/nasik_rain/rain_2$ python merge_large.py
Traceback (most recent call last):
File "merge_large.py", line 10, in <module>
df = pd.read_csv(file_,index_col=None, header=0)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 325, in _read
return parser.read()
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 815, in read
ret = self._engine.read(nrows)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1314, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 805, in pandas.parser.TextReader.read (pandas/parser.c:8748)
File "pandas/parser.pyx", line 827, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:9003)
File "pandas/parser.pyx", line 881, in pandas.parser.TextReader._read_rows (pandas/parser.c:9731)
File "pandas/parser.pyx", line 868, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:9602)
File "pandas/parser.pyx", line 1865, in pandas.parser.raise_parser_error (pandas/parser.c:23325)
pandas.io.common.CParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
</code></pre>
<p>Code: Columns 25+ but size of the file more than 10mb </p>
<p>Option: <a href="http://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe">Two</a>
Option: <a href="http://stackoverflow.com/questions/36915188/pandas-cparsererror-error-tokenizing-data">Three</a></p>
<p>Option: Four </p>
<pre><code>import pandas as pd
import glob
interesting_files = glob.glob("*.csv")
df_list = []
for filename in sorted(interesting_files):
df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
full_df.to_csv('output.csv')
</code></pre>
<p>Error: </p>
<pre><code>Traceback (most recent call last):
File "merge_large.py", line 6, in <module>
allFiles = glob.glob("*.csv", sep=None)
TypeError: glob() got an unexpected keyword argument 'sep'
</code></pre>
<p>I have searched extensively but I am not able to find a solution to concatenate large csv files with same headers into one file. </p>
<p><strong>Edit:</strong> </p>
<p>Code:</p>
<pre><code>import dask.dataframe as dd
ddf = dd.read_csv('*.csv')
ddf.to_csv('master.csv',index=False)
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "merge_csv_dask.py", line 5, in <module>
ddf.to_csv('master.csv',index=False)
File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/core.py", line 792, in to_csv
return to_csv(self, filename, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/io.py", line 762, in to_csv
compute(*values)
File "/usr/local/lib/python2.7/dist-packages/dask/base.py", line 179, in compute
results = get(dsk, keys, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/dask/threaded.py", line 58, in get
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/dask/async.py", line 481, in get_async
raise(remote_exception(res, tb))
dask.async.ValueError: could not convert string to float: {u'type': u'Point', u'coordinates': [4.34279, 50.8443]}
Traceback
---------
File "/usr/local/lib/python2.7/dist-packages/dask/async.py", line 263, in execute_task
result = _execute_task(task, data)
File "/usr/local/lib/python2.7/dist-packages/dask/async.py", line 245, in _execute_task
return func(*args2)
File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/csv.py", line 49, in bytes_read_csv
coerce_dtypes(df, dtypes)
File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/csv.py", line 73, in coerce_dtypes
df[c] = df[c].astype(dtypes[c])
File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 2950, in astype
raise_on_error=raise_on_error, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2938, in astype
return self.apply('astype', dtype=dtype, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2890, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 434, in astype
values=values, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 477, in _astype
values = com._astype_nansafe(values.ravel(), dtype, copy=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/common.py", line 1920, in _astype_nansafe
return arr.astype(dtype
</code></pre>
<p>)</p>
| 2 | 2016-08-04T02:46:09Z | 38,758,363 | <p>If I understand your problem, you have large csv files with the same structure that you want to merge into one big CSV file. </p>
<p>My suggestion is to use <a href="http://dask.pydata.org/en/latest/examples/dataframe-csv.html" rel="nofollow"><code>dask</code></a> from Continuum Analytics to handle this job. You can merge your files but also perform out-of-core computations and analysis of the data just like pandas.</p>
<pre class="lang-sh prettyprint-override"><code>### make sure you include the [complete] tag
pip install dask[complete]
</code></pre>
<h1>Solution Using Your Sample Data from DropBox</h1>
<p>First, check versions of dask. For me, dask = 0.11.0 and pandas = 0.18.1</p>
<pre><code>import dask
import pandas as pd
print (dask.__version__)
print (pd.__version__)
</code></pre>
<p>Here's the code to read in ALL your csvs. I had no errors using your DropBox example data. </p>
<pre><code>import dask.dataframe as dd
from dask.delayed import delayed
import dask.bag as db
import glob
filenames = glob.glob('/Users/linwood/Downloads/stack_bundle/rio*.csv')
'''
The key to getting around the CParse error was using sep=None
Came from this post
http://stackoverflow.com/questions/37505577/cparsererror-error-tokenizing-data
'''
# custom saver function for dataframes using newfilenames
def reader(filename):
return pd.read_csv(filename,sep=None)
# build list of delayed pandas csv reads; then read in as dask dataframe
dfs = [delayed(reader)(fn) for fn in filenames]
df = dd.from_delayed(dfs)
'''
This is the final step. The .compute() code below turns the
dask dataframe into a single pandas dataframe with all your
files merged. If you don't need to write the merged file to
disk, I'd skip this step and do all the analysis in
dask. Get a subset of the data you want and save that.
'''
df = df.reset_index().compute()
df.to_csv('./test.csv')
</code></pre>
<h1>The rest of this is extra stuff</h1>
<pre><code># print the count of values in each column; perfect data would have the same count
# you have dirty data as the counts will show
print (df.count().compute())
</code></pre>
<p>The next step is doing some pandas-like analysis. Here is some code where I first "clean" your data for the 'tweetFavoriteCt' column. Not all of the data is an integer, so I replace strings with "0" and convert everything else to an integer. Once I get the integer conversion, I show a simple analytic where I filter the entire dataframe to only include the rows where the favoriteCt is greater than 3.</p>
<pre><code># function to convert numbers to integer and replace string with 0; sample analytics in dask dataframe
# you can come up with your own..this is just for an example
def conversion(value):
try:
return int(value)
except:
return int(0)
# apply the function to the column, create a new column of cleaned data
clean = df['tweetFavoriteCt'].apply(lambda x: (conversion(x)),meta=('stuff',str))
# set new column equal to our cleaning code above; your data is dirty :-(
df['cleanedFavoriteCt'] = clean
</code></pre>
<p>Last bit of code shows dask analysis and how to load this merged file into pandas and also write the merged file to disk. Be warned, if you have tons of CSVs, when you use the <code>.compute()</code> code below, it will load this merged csv into memory.</p>
<pre><code># retreive the 50 tweets with the highest favorite count
print(df.nlargest(50,['cleanedFavoriteCt']).compute())
# only show me the tweets that have been favorited at least 3 times
# TweetID 763525237166268416, is VERRRRY popular....7000+ favorites
print((df[df.cleanedFavoriteCt.apply(lambda x: x>3,meta=('stuff',str))]).compute())
'''
This is the final step. The .compute() code below turns the
dask dataframe into a single pandas dataframe with all your
files merged. If you don't need to write the merged file to
disk, I'd skip this step and do all the analysis in
dask. Get a subset of the data you want and save that.
'''
df = df.reset_index().compute()
df.to_csv('./test.csv')
</code></pre>
<p>Now, if you want to switch to pandas for the merged csv file:</p>
<pre><code>import pandas as pd
dff = pd.read_csv('./test.csv')
</code></pre>
<p>Let me know if this works.</p>
<p><strong>Stop here</strong> </p>
<h1>ARCHIVE: Previous solution; a good example of using dask to merge CSVs</h1>
<p>The first step is making sure you have <code>dask</code> installed. There are <a href="http://dask.pydata.org/en/latest/install.html#pip" rel="nofollow">install instructions for <code>dask</code> in the documentation page</a> but this should work:</p>
<p>With dask installed it's easy to read in the files. </p>
<p>Some housekeeping first. Assume we have a directory with csvs where the filenames are <code>my18.csv</code>, <code>my19.csv</code>, <code>my20.csv</code>, etc. Name standardization and single directory location are key. This works if you put your csv files in one directory and serialize the names in some way. </p>
<p>In steps:</p>
<ol>
<li>Import dask, read all the csv files in using wildcard. This merges all csvs into one single <code>dask.dataframe</code> object. You can do pandas-like operation immediately after this step if you want.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
ddf = dd.read_csv('./daskTest/my*.csv')
ddf.describe().compute()
</code></pre>
<ol start="2">
<li>Write merged dataframe file to disk in the same directory as original files and name it <code>master.csv</code> </li>
</ol>
<pre class="lang-py prettyprint-override"><code>ddf.to_csv('./daskTest/master.csv',index=False)
</code></pre>
<ol start="3">
<li><strong>Optional</strong>, read <code>master.csv</code>, a much bigger in size, into dask.dataframe object for computations. This can also be done after step one above; dask can perform pandas like operations on the staged files...this is a way to do "big data" in Python</li>
</ol>
<pre class="lang-py prettyprint-override"><code># reads in the merged file as one BIG out-of-core dataframe; can perform functions like pandas
newddf = dd.read_csv('./daskTest/master.csv')
#check the length; this is now length of all merged files. in this example, 50,000 rows times 11 = 550000 rows.
len(newddf)
# perform pandas-like summary stats on entire dataframe
newddf.describe().compute()
</code></pre>
<p>Hopefully this helps answer your question. In three steps, you read in all the files, merge to single dataframe, and write that massive dataframe to disk with only one header and all your rows. </p>
| 3 | 2016-08-04T04:11:57Z | [
"python",
"csv",
"pandas",
"concatenation"
] |
AttributeError: 'tuple' object has no attribute 'decode' while trying to decode string | 38,757,724 | <p>I am running this code on Python 3. I encoded the data using <code>.encode('utf_8')</code> while receiving it from the server. But now I want to <code>decode</code> it in order to make it human readable.</p>
<pre><code> All1 = soup.findAll('tag_name', class_='class_name')
All2 = "".join([p.text for p in All1])
str = "1",All2.encode('utf_8')
print(str.decode('utf_8'))
</code></pre>
<p>But it is giving the following error:</p>
<pre><code>print(str.decode('utf_8'))
AttributeError: 'tuple' object has no attribute 'decode'
</code></pre>
<p>How can I decode it ?</p>
| -2 | 2016-08-04T02:47:19Z | 38,757,750 | <p><code>str</code> (don't name your variables after built-in functions, by the way) is a <code>tuple</code>, not a string.</p>
<pre><code>str = "1",All2.encode('utf_8')
</code></pre>
<p>This is equivalent to the more readable:</p>
<pre><code>str = ("1", All2.encode('utf_8'))
</code></pre>
<p>I don't know what you need the <code>"1"</code> for, but you might try this:</p>
<pre><code>num, my_string = '1', All2.encode('utf_8')
</code></pre>
<p>And then decode the string:</p>
<pre><code>print(my_string.decode('utf_8'))
</code></pre>
| 1 | 2016-08-04T02:51:18Z | [
"python",
"string",
"python-3.x",
"encoding"
] |
cannot locate a web element | 38,757,830 | <p>I just want to write a simple log in script for Apple website:
<a href="https://secure2.store.apple.com/shop/sign_in?c=aHR0cDovL3d3dy5hcHBsZS5jb20vc2hvcC9iYWd8MWFvczVjNGU3ZWNjZjgwODVjNWY4NDk0OTA0ODJhMDc2Y2FkNmU3ODJkOTE&o=O01LV0gy&r=SXYD4UDAPXU7P7KXF&s=aHR0cHM6Ly9zZWN1cmUyLnN0b3JlLmFwcGxlLmNvbS9zaG9wL2NoZWNrb3V0L3N0YXJ0P3BsdG49RkNBRjZGQjR8MWFvczAyZmZkZjQwNTgwOGI4ZTNkMDQ5MWRiM2NmZmExYTgxNzRkZTllMjY&t=SXYD4UDAPXU7P7KXF&up=t" rel="nofollow">Sign In</a></p>
<p>The ID and password form cannot be located properly.
Actually, I tried a lot of things like:</p>
<pre><code> driver.find_element_by_xpath("//*[@type='email']")
</code></pre>
<p>or </p>
<pre><code> driver.find_element_by_xpath("//*[@name='login-appleId']")
</code></pre>
<p>and </p>
<pre><code> driver.find_element_by_xpath("//*[@id='login-appleId']")
</code></pre>
<p>I did not find any iframe in this page. And I tried the same thing for the customer checkout button; the same problem happens. </p>
<p>Any suggestions would be appreciated!</p>
<p>Best,
Luke</p>
| 0 | 2016-08-04T03:02:36Z | 38,757,962 | <p>I recommend you try the following:</p>
<pre><code>driver.find_element_by_id("login-appleId")
driver.find_element_by_id("login-password")
</code></pre>
| 0 | 2016-08-04T03:21:13Z | [
"python",
"selenium-webdriver"
] |
cannot locate a web element | 38,757,830 | <p>I just want to write a simple log in script for Apple website:
<a href="https://secure2.store.apple.com/shop/sign_in?c=aHR0cDovL3d3dy5hcHBsZS5jb20vc2hvcC9iYWd8MWFvczVjNGU3ZWNjZjgwODVjNWY4NDk0OTA0ODJhMDc2Y2FkNmU3ODJkOTE&o=O01LV0gy&r=SXYD4UDAPXU7P7KXF&s=aHR0cHM6Ly9zZWN1cmUyLnN0b3JlLmFwcGxlLmNvbS9zaG9wL2NoZWNrb3V0L3N0YXJ0P3BsdG49RkNBRjZGQjR8MWFvczAyZmZkZjQwNTgwOGI4ZTNkMDQ5MWRiM2NmZmExYTgxNzRkZTllMjY&t=SXYD4UDAPXU7P7KXF&up=t" rel="nofollow">Sign In</a></p>
<p>The ID and password form cannot be located properly.
Actually, I tried a lot of things like:</p>
<pre><code> driver.find_element_by_xpath("//*[@type='email']")
</code></pre>
<p>or </p>
<pre><code> driver.find_element_by_xpath("//*[@name='login-appleId']")
</code></pre>
<p>and </p>
<pre><code> driver.find_element_by_xpath("//*[@id='login-appleId']")
</code></pre>
<p>I did not find any iframe in this page. And I tried the same thing for the customer checkout button; the same problem happens. </p>
<p>Any suggestions would be appreciated!</p>
<p>Best,
Luke</p>
| 0 | 2016-08-04T03:02:36Z | 38,762,379 | <p>Sometimes in <code>WebDriver</code> there are scenarios where the <code>WebElement</code> isn't yet present in the <code>DOM</code> when WebDriver tries to find it. To handle these kinds of scenarios, there are two types of wait provided by the <code>WebDriver</code> library.</p>
<p>You just need to implement one of these based on your requirements.</p>
<ol>
<li><a href="http://selenium-python.readthedocs.io/waits.html#implicit-waits" rel="nofollow">Implicit Waits</a></li>
<li><a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow">Explicit Waits</a></li>
</ol>
<p>I suggest you implement one of these and then try to execute your script.</p>
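For intuition, an explicit wait is essentially a poll-until-true loop with a timeout. Here is a minimal, plain-Python sketch of that mechanism (no Selenium required; Selenium's <code>WebDriverWait(driver, 10).until(...)</code> wraps the same idea, polling every 0.5 seconds by default):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Example: a stand-in "find element" that succeeds on the third poll
state = {"tries": 0}
def fake_find_element():
    state["tries"] += 1
    return "element" if state["tries"] >= 3 else None

found = wait_until(fake_find_element, timeout=5.0, poll=0.01)
```

In Selenium the condition callable comes from <code>expected_conditions</code>, e.g. <code>EC.presence_of_element_located((By.ID, "login-appleId"))</code>.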
| 0 | 2016-08-04T08:36:38Z | [
"python",
"selenium-webdriver"
] |
cannot locate a web element | 38,757,830 | <p>I just want to write a simple log in script for Apple website:
<a href="https://secure2.store.apple.com/shop/sign_in?c=aHR0cDovL3d3dy5hcHBsZS5jb20vc2hvcC9iYWd8MWFvczVjNGU3ZWNjZjgwODVjNWY4NDk0OTA0ODJhMDc2Y2FkNmU3ODJkOTE&o=O01LV0gy&r=SXYD4UDAPXU7P7KXF&s=aHR0cHM6Ly9zZWN1cmUyLnN0b3JlLmFwcGxlLmNvbS9zaG9wL2NoZWNrb3V0L3N0YXJ0P3BsdG49RkNBRjZGQjR8MWFvczAyZmZkZjQwNTgwOGI4ZTNkMDQ5MWRiM2NmZmExYTgxNzRkZTllMjY&t=SXYD4UDAPXU7P7KXF&up=t" rel="nofollow">Sign In</a></p>
<p>The ID and password form cannot be located properly.
Actually, I tried a lot of things, like:</p>
<pre><code> driver.find_element_by_xpath("//*[@type='email']")
</code></pre>
<p>or </p>
<pre><code> driver.find_element_by_xpath("//*[@name='login-appleId']")
</code></pre>
<p>and </p>
<pre><code> driver.find_element_by_xpath("//*[@id='login-appleId']")
</code></pre>
<p>I did not find any iframe on this page. I tried the same thing for the customer checkout button, and the same problem happens. </p>
<p>Any suggestions would be appreciated!</p>
<p>Best,
Luke</p>
| 0 | 2016-08-04T03:02:36Z | 38,762,790 | <p>You can follow this code; it works!</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome('E:\chromedriver.exe') #location
driver.get('url') #https://secure2.store.apple.com/shop/sign_in?c=aHR0cDovL3d3dy5hcHBsZS5jb20vc2hvcC9iYWd8MWFvczVjNGU3ZWNjZjgwODVjNWY4NDk0OTA0ODJhMDc2Y2FkNmU3ODJkOTE&o=O01LV0gy&r=SXYD4UDAPXU7P7KXF&s=aHR0cHM6Ly9zZWN1cmUyLnN0b3JlLmFwcGxlLmNvbS9zaG9wL2NoZWNrb3V0L3N0YXJ0P3BsdG49RkNBRjZGQjR8MWFvczAyZmZkZjQwNTgwOGI4ZTNkMDQ5MWRiM2NmZmExYTgxNzRkZTllMjY&t=SXYD4UDAPXU7P7KXF&up=t
def find_by_xpath(locator):
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, locator))
)
return element
class FormPage(object):
def fill_form(self, data):
        find_by_xpath('//input[@name = "login-appleId"]').send_keys(data['usr'])
find_by_xpath('//input[@name = "login-password"]').send_keys(data['pwd'])
return self
def submit(self):
find_by_xpath('//input[@id = "sign-in"]').click()
data = {
'usr': 'xx@apple.com',
'pwd': 'xxxx'
}
if __name__=="__main__":
FormPage().fill_form(data).submit()
driver.quit() # closes the webbrowser
</code></pre>
<p>Hope this is helpful to you. Thanks!</p>
| 1 | 2016-08-04T08:55:01Z | [
"python",
"selenium-webdriver"
] |
Python threading - Run function at certain times | 38,757,831 | <p>I have a set of data like</p>
<pre><code>schedule = [(2, 5),
(4, 6),
(10, 2)]
</code></pre>
<p>with the first element of each tuple being a time (in seconds) and the second element being a value.</p>
<p>I would like to start a separate thread which tracks the time and at each scheduled time runs some arbitrary function <code>func(value)</code>.</p>
<p>What is the cleanest way to do this? I could create a Timer object for each scheduled time, but that seems sloppy.</p>
<p>edit: You can assume the times are in ascending order</p>
| 0 | 2016-08-04T03:02:50Z | 38,758,579 | <p>If you need them to be in separate threads, then use a Timer object or, as a commenter pointed out, APScheduler. If you want to do the threading manually, you could also use sched, which added support for threads in version 3.3. <a href="https://docs.python.org/3/library/sched.html" rel="nofollow">Docs</a></p>
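<p>For instance, here is a minimal sketch of the <code>sched</code> approach: one background thread runs the whole schedule instead of one <code>Timer</code> per entry. The delays are scaled down from the question's data so it finishes quickly, and <code>func</code> is a stand-in for the arbitrary function:</p>

```python
import sched
import threading
import time

# Scaled-down stand-in for the question's [(2, 5), (4, 6), (10, 2)];
# each tuple is (seconds_from_start, value).
schedule = [(0.01, 5), (0.02, 6), (0.03, 2)]

results = []

def func(value):
    # Placeholder for the arbitrary per-event work.
    results.append(value)

def run_schedule(entries):
    s = sched.scheduler(time.time, time.sleep)
    for delay, value in entries:
        # Each event fires `delay` seconds after the scheduler starts.
        s.enter(delay, 1, func, argument=(value,))
    s.run()  # blocks this thread until every event has fired

# One background thread tracks the time for the whole schedule.
worker = threading.Thread(target=run_schedule, args=(schedule,))
worker.start()
worker.join()
print(results)  # [5, 6, 2]
```

<p>Because the scheduler fires events in time order, the values come out in ascending-time order, matching the question's assumption.</p>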
| 1 | 2016-08-04T04:38:36Z | [
"python",
"timer",
"python-multithreading"
] |
Python 3 non-blocking read with pySerial (Cannot get pySerial's "in_waiting" property to work) | 38,757,906 | <p>For the life of me, I cannot figure out how to do a non-blocking serial read in Python 3 using my Raspberry Pi.</p>
<p>Here's my code: </p>
<pre><code>import serial #for pySerial
ser = serial.Serial('/dev/ttyUSB0', 9600) #open serial port
print ('serial port = ' + ser.name) #print the port used
while (True):
if (ser.in_waiting>0):
ser.read(ser.in_waiting)
</code></pre>
<p>Result:<br>
<code>AttributeError: 'Serial' object has no attribute 'in_waiting'</code></p>
<p>Here's the reference page I'm referencing that told me "in_waiting" exists: <a href="http://pyserial.readthedocs.io/en/latest/pyserial_api.html" rel="nofollow">http://pyserial.readthedocs.io/en/latest/pyserial_api.html</a></p>
| 0 | 2016-08-04T03:12:51Z | 38,758,047 | <p>You have to open the port first, and then close it at the end using the close function.</p>
| 0 | 2016-08-04T03:31:35Z | [
"python",
"raspberry-pi",
"pyserial"
] |
Python 3 non-blocking read with pySerial (Cannot get pySerial's "in_waiting" property to work) | 38,757,906 | <p>For the life of me, I cannot figure out how to do a non-blocking serial read in Python 3 using my Raspberry Pi.</p>
<p>Here's my code: </p>
<pre><code>import serial #for pySerial
ser = serial.Serial('/dev/ttyUSB0', 9600) #open serial port
print ('serial port = ' + ser.name) #print the port used
while (True):
if (ser.in_waiting>0):
ser.read(ser.in_waiting)
</code></pre>
<p>Result:<br>
<code>AttributeError: 'Serial' object has no attribute 'in_waiting'</code></p>
<p>Here's the reference page I'm referencing that told me "in_waiting" exists: <a href="http://pyserial.readthedocs.io/en/latest/pyserial_api.html" rel="nofollow">http://pyserial.readthedocs.io/en/latest/pyserial_api.html</a></p>
| 0 | 2016-08-04T03:12:51Z | 38,758,474 | <p>The documentation link you listed shows <code>in_waiting</code> as a property added in PySerial 3.0. Most likely you're using PySerial < 3.0 so you'll have to call the <code>inWaiting()</code> function.</p>
<p>You can check the version of PySerial as follows:</p>
<pre class="lang-py prettyprint-override"><code>import serial
print(serial.VERSION)
</code></pre>
<p>If you installed PySerial using <a href="https://pypi.python.org/pypi/pip" rel="nofollow">pip</a>, you should be able to perform an upgrade (admin privileges may be required):</p>
<pre><code>pip install --upgrade pyserial
</code></pre>
<p>Otherwise, change your code to use the proper interface from PySerial < 3.0:</p>
<pre class="lang-py prettyprint-override"><code>while (True):
if (ser.inWaiting() > 0):
ser.read(ser.inWaiting())
</code></pre>
| 1 | 2016-08-04T04:26:00Z | [
"python",
"raspberry-pi",
"pyserial"
] |
python appengine-gcs-client demo with local devserver hitting AccessTokenRefreshError(u'internal_failure',) | 38,757,936 | <p>I'm having trouble getting the python appengine-gcs-client demo working using the 1.9.40 (latest presently) SDK's <code>dev_appserver.py</code>.</p>
<p>I followed the <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/setting-up-cloud-storage" rel="nofollow">Setting Up Google Cloud Storage</a> and the <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/app-engine-cloud-storage-sample" rel="nofollow">App Engine and Google Cloud Storage Sample</a> instructions.</p>
<p>I created the default bucket for a paid app, with billing enabled and a non-zero daily spending limit set. I successfully uploaded a file to that bucket using the developer console.</p>
<p>I cloned the <a href="https://github.com/GoogleCloudPlatform/appengine-gcs-client" rel="nofollow">GoogleCloudPlatform/appengine-gcs-client</a> repo from github. I copied the <code>python/src/cloudstorage</code> dir into the <code>python/demo</code> dir, which now looks like this:</p>
<pre><code>dancorn-laptop.acasa:/home/dancorn/src/appengine-gcs-client/python> find demo/ | sort
demo/
demo/app.yaml
demo/blobstore.py
demo/cloudstorage
demo/cloudstorage/api_utils.py
demo/cloudstorage/api_utils.pyc
demo/cloudstorage/cloudstorage_api.py
demo/cloudstorage/cloudstorage_api.pyc
demo/cloudstorage/common.py
demo/cloudstorage/common.pyc
demo/cloudstorage/errors.py
demo/cloudstorage/errors.pyc
demo/cloudstorage/__init__.py
demo/cloudstorage/__init__.pyc
demo/cloudstorage/rest_api.py
demo/cloudstorage/rest_api.pyc
demo/cloudstorage/storage_api.py
demo/cloudstorage/storage_api.pyc
demo/cloudstorage/test_utils.py
demo/__init__.py
demo/main.py
demo/main.pyc
demo/README
</code></pre>
<p>This is how I executed the devserver and the errors reported when trying to access <a href="http://localhost:8080" rel="nofollow">http://localhost:8080</a> as instructed:</p>
<pre><code>dancorn-laptop.acasa:/home/dancorn/src/appengine-gcs-client/python> /home/usr_local/google_appengine_1.9.40/dev_appserver.py demo
INFO 2016-08-04 01:07:51,786 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2016-08-04 01:07:51,982 sdk_update_checker.py:257] The SDK is up to date.
INFO 2016-08-04 01:07:52,121 api_server.py:205] Starting API server at: http://localhost:50355
INFO 2016-08-04 01:07:52,123 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-08-04 01:07:52,124 admin_server.py:116] Starting admin server at: http://localhost:8000
INFO 2016-08-04 01:08:03,461 client.py:804] Refreshing access_token
INFO 2016-08-04 01:08:05,234 client.py:827] Failed to retrieve access token: {
"error" : "internal_failure"
}
ERROR 2016-08-04 01:08:05,236 api_server.py:272] Exception while handling service_name: "app_identity_service"
method: "GetAccessToken"
request: "\n7https://www.googleapis.com/auth/devstorage.full_control"
request_id: "ccqdTObLrl"
Traceback (most recent call last):
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 247, in _handle_POST
api_response = _execute_request(request).Encode()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 186, in _execute_request
make_request()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 181, in make_request
request_id)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub.py", line 131, in MakeSyncCall
method(request, response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/app_identity/app_identity_defaultcredentialsbased_stub.py", line 192, in _Dynamic_GetAccessToken
token = credentials.get_access_token()
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 689, in get_access_token
self.refresh(http)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 604, in refresh
self._refresh(http.request)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 775, in _refresh
self._do_refresh_request(http_request)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 840, in _do_refresh_request
raise AccessTokenRefreshError(error_msg)
AccessTokenRefreshError: internal_failure
WARNING 2016-08-04 01:08:05,239 tasklets.py:468] suspended generator _make_token_async(rest_api.py:55) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator get_token_async(rest_api.py:224) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator urlfetch_async(rest_api.py:259) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator run(api_utils.py:164) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator do_request_async(rest_api.py:198) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,241 tasklets.py:468] suspended generator do_request_async(storage_api.py:128) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
ERROR 2016-08-04 01:08:05,241 main.py:62] AccessTokenRefreshError(u'internal_failure',)
Traceback (most recent call last):
File "/home/dancorn/src/appengine-gcs-client/python/demo/main.py", line 43, in get
self.create_file(filename)
File "/home/dancorn/src/appengine-gcs-client/python/demo/main.py", line 89, in create_file
retry_params=write_retry_params)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/cloudstorage_api.py", line 97, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/storage_api.py", line 697, in __init__
status, resp_headers, content = self._api.post_object(path, headers=headers)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 82, in sync_wrapper
return future.get_result()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/storage_api.py", line 128, in do_request_async
deadline=deadline, callback=callback)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 198, in do_request_async
follow_redirects=False)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/api_utils.py", line 164, in run
result = yield tasklet(**kwds)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 259, in urlfetch_async
self.token = yield self.get_token_async()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 224, in get_token_async
self.scopes, self.service_account_id)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 55, in _make_token_async
token, expires_at = yield rpc
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
result = rpc.get_result()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/app_identity/app_identity.py", line 519, in get_access_token_result
rpc.check_success()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_rpc.py", line 157, in _WaitImpl
self.request, self.response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/remote_api/remote_api_stub.py", line 201, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/remote_api/remote_api_stub.py", line 235, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
RuntimeError: AccessTokenRefreshError(u'internal_failure',)
INFO 2016-08-04 01:08:05,255 module.py:788] default: "GET / HTTP/1.1" 200 249
</code></pre>
<p>I was surprised when I saw the attempt to contact a Google server, I was expecting to use a faked, local filesystem-based emulation, based on these notes from the <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/app-engine-cloud-storage-sample" rel="nofollow">App Engine and Google Cloud Storage Sample</a> instructions:</p>
<ul>
<li><a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/setting-up-cloud-storage#using_the_client_library_with_the_development_app_server" rel="nofollow">Using the client library with the development app server</a>:</li>
</ul>
<blockquote>
<p>You can use the client library with the development server.</p>
<pre><code>**Note**: Files saved locally are subject to the file size and naming conventions imposed by the local filesystem.
</code></pre>
</blockquote>
<ul>
<li><a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/app-engine-cloud-storage-sample#appyaml_walkthrough" rel="nofollow">app.yaml walkthrough</a>:</li>
</ul>
<blockquote>
<p>You specify the project ID in the line application: your-app-id,
replacing the value your-app-id. This value isn't used when running
locally, but you must supply a valid project ID before deploying: the
deployment utility reads this entry to determine where to deploy your
app.</p>
</blockquote>
<ul>
<li><a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/app-engine-cloud-storage-sample#deploying_the_sample" rel="nofollow">Deploying the Sample</a>, step 5:</li>
</ul>
<blockquote>
<p>In your browser, visit https://.appspot.com; the
application will execute on page load, just as it did when running
locally. Only this time, the app will actually be writing to and
reading from a real bucket.</p>
</blockquote>
<p>I even placed my real app's ID into the <code>app.yaml</code> file, but that didn't make any difference.</p>
<p>I've checked the known GAE issues and only found this potentially related one, but on a much older SDK version:</p>
<ul>
<li><a href="https://code.google.com/p/googleappengine/issues/detail?id=11690" rel="nofollow">Issue 11690</a> GloudStorage bug in GoogleAppEngineLanucher development server</li>
</ul>
<p>I checked a few older SDK versions I have around (1.9.30, 1.9.35), just in case - no difference either.</p>
<p>My questions:</p>
<ol>
<li>How can I make the GCS client operate locally (w/ faked GCS based on the local filesystem) when it's used with dev_appserver.py?</li>
<li>Since it's mentioned it should work with the real GCS as well even when used with dev_appserver.py what do I need to do to achieve that? (less important, more of a curiosity)</li>
</ol>
| 1 | 2016-08-04T03:16:51Z | 38,771,428 | <p>Actually the reason was what IMHO is a quite silly bug: an inability to read the credentials from a local file written by an earlier version of the SDK (or a related package?) and a failure to fall back to a more sensible action, which leads to a rather misleading traceback that throws off the investigation.</p>
<p>Credit goes to this answer: <a href="http://stackoverflow.com/a/35890078/4495081">http://stackoverflow.com/a/35890078/4495081</a> (though the bug mentioned in the post was for something else, ultimately triggering the same end result).</p>
<p>After removing the <code>~/.config/gcloud/application_default_credentials.json</code> file, the demo completed successfully using the local filesystem. And my real app worked fine as well.</p>
<p>My 2nd question stands, but I'm not too worried about it - personally I don't see great value in using the real GCS storage with the local development server - I have to do testing on a real staging GAE app anyway for other reasons.</p>
| 0 | 2016-08-04T15:23:13Z | [
"python",
"google-app-engine",
"google-cloud-storage"
] |
How to update and delete data using Luigi? | 38,757,994 | <p>What module from Luigi can be used to update/delete data in a database? I have used copy-to-table and SQLAlchemy for inserting data. For update and delete, the documentation is not clear on how it can be achieved. Please advise.</p>
| 0 | 2016-08-04T03:25:27Z | 38,797,266 | <p>If the database is Postgres, you may be able to use PostgresQuery.
<a href="http://luigi.readthedocs.io/en/stable/api/luigi.postgres.html#luigi.postgres.PostgresQuery" rel="nofollow">http://luigi.readthedocs.io/en/stable/api/luigi.postgres.html#luigi.postgres.PostgresQuery</a></p>
| 0 | 2016-08-05T20:23:42Z | [
"python",
"apache",
"luigi"
] |
How to use AIML with Python | 38,758,029 | <p>I would like to integrate <code>python</code> scripts into my <code>pandorabot</code> written in <code>aiml</code>. </p>
<p>I know that you can tag <code>aiml</code> syntax with <code>javascript</code>, but I haven't found any documentation on <code>python</code>, except the following, which uses <code><oob></code> (out of bounds) tags, running services on the background:</p>
<pre><code><oob>
<mrl>
<service>python</service>
<method>exec</method>
<param>myfunction()</param>
</mrl>
</oob>
</code></pre>
<p><code><mrl></code> tags stand for <a href="https://github.com/MyRobotLab/myrobotlab" rel="nofollow">MyRobotLab</a>, and it is part of <code>program-ab</code>, a Java framework for actual robotics. </p>
<p>But I would like to use my <code>app</code> solely on the web...</p>
<p>I also came across <code>pyAiml</code>, but as for now I haven't see how it would help me to achieve my goal.</p>
<p><strong>MY GOAL</strong>:</p>
<p>I want to use <code>python</code> because it can use <code>NLTK</code> (<a href="http://www.nltk.org/" rel="nofollow">http://www.nltk.org/</a>), the Natural Language Toolkit, which handles huge literary corpora, and I would like to integrate this library into my bot's capabilities.</p>
<p>Let's say I have a <code><pattern>PYTHON</pattern></code>, and it would run a python script.</p>
<p>The script, in turn, would <code>import nltk</code> (and its corpora), linking AIML <code>patterns</code>, or "questions", to PYTHON <code>templates</code>, or "answers".</p>
<p>Any clues on how I could achieve this? Many thanks in advance.</p>
| 0 | 2016-08-04T03:30:00Z | 39,268,101 | <p>While I have no experience working with python in conjunction with pandorabots, I did work with php, and this is what I came up with conceptually. The objective was similar, but in my case I needed to add information to the pandorabot response from an external api, and the following is what I did:</p>
<p>I used symbols/delimiters to<br>
1. Flag the response that needs to be modified.<br>
2. Used delimiters to fragment the response into parts that need to be modified, and parts that don't.<br>
3. The modifiable parts were in my case php function calls, where functions were already predefined.<br>
4. I then combined the response from the api with the non modified bot response and rendered it to the client. </p>
<p>The end output was that I was basically able to translate an aiml response to a php call. </p>
<hr>
<p>Example:<br>
In my case I used '#' at the beginning of the response to mark the response as modifiable. I used '%' to mark the beginning and end of the segment I wanted to modify, and used ',' to separate the function call and parameters.</p>
<p>So the stored aiml response looked like :</p>
<pre><code><template>#Response to be modified %method call,param1% continued response.</template>
</code></pre>
<p>Algo : </p>
<pre><code> So for every response, check if it contains a # at the beginning.
 If it does, remove the # (for php I used substr($response, 1)).
 Extract the function call (for php I used explode($str, '%')).
 Process the function call.
</code></pre>
<p>I believe you can use a similar logic to extract a query and send it to nltk. Hope this helps. </p>
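<p>For what it's worth, the same parsing logic translates to Python almost directly. This is a hypothetical sketch: the '#'/'%'/',' conventions follow the answer above, and the <code>methods</code> table is an illustrative stand-in for your own predefined functions (this is where an NLTK call could go):</p>

```python
def handle_response(response, methods):
    """Apply the '#'/'%'/',' convention described above to a bot response."""
    if not response.startswith('#'):
        return response                      # not flagged: return unchanged
    response = response[1:]                  # drop the leading '#'
    parts = response.split('%')              # odd-indexed parts are method calls
    out = []
    for i, part in enumerate(parts):
        if i % 2 == 1:
            name, *params = part.split(',')  # "method,param1,param2,..."
            out.append(str(methods[name](*params)))
        else:
            out.append(part)
    return ''.join(out)

# Stand-in for a predefined function.
methods = {'upper': lambda s: s.upper()}
print(handle_response('#Hello %upper,world% again.', methods))
# Hello WORLD again.
```

<p>Responses without the leading '#' pass through untouched, so only flagged templates pay the parsing cost.</p>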
| 2 | 2016-09-01T09:44:28Z | [
"python",
"nltk",
"aiml",
"pandorabots"
] |
change file name of uploaded file django | 38,758,084 | <p>This is what I am trying to do: I created a method that uploads a file; however, I would like to change its file name. I was able to change the filename, but I also lost the extension.</p>
<p>This is how my code looks:</p>
<pre><code>def upload_delivery_to_media(self,deliveryId, deliveryFile):
    with open('/app/media/TaskDelivery/' + str(deliveryId), 'wb+') as destination:
for chunk in deliveryFile.chunks():
destination.write(chunk)
return "Done uploading"
</code></pre>
<p>But the file name looks like <code>324329432840932</code> only, when I am expecting something like <code>324329432840932.jpg</code>.</p>
| 0 | 2016-08-04T03:36:39Z | 38,760,576 | <p>It's better to use <a href="https://docs.python.org/3/library/os.path.html#os.path.splitext" rel="nofollow">splitext</a> instead of the <code>split()</code> method:</p>
<pre><code>import os
from django.conf import settings
def upload_delivery_to_media(self, delivery_id, delivery_file):
_, ext = os.path.splitext(delivery_file.name)
with open(os.path.join(settings.MEDIA_ROOT, 'TaskDelivery', '{}{}'.format(delivery_id, ext)), 'wb') as destination:
for chunk in delivery_file.chunks():
destination.write(chunk)
</code></pre>
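<p>The key point is that <code>os.path.splitext</code> keeps the dot with the extension, which is exactly the part that was being lost (the filename here is the one from the question):</p>

```python
import os

# splitext returns (root, extension) and keeps the leading dot on the extension.
root, ext = os.path.splitext('324329432840932.jpg')
print(root)  # 324329432840932
print(ext)   # .jpg
```

<p>So appending <code>ext</code> to the new name reproduces the original extension exactly.</p>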
| 0 | 2016-08-04T07:04:37Z | [
"python",
"django"
] |
How to give 2 characters in "delimiter" using 'csv' module? | 38,758,158 | <p>I'm trying to generate a CSV with the delimiter '@|@', but I couldn't achieve it with the code below.</p>
<pre><code>import csv
ifile = open('test.csv', "rb")
reader = csv.reader(ifile)
ofile = open('ttest.csv', "wb")
writer = csv.writer(ofile, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
for row in reader:
writer.writerow(row)
ifile.close()
ofile.close()
</code></pre>
<p>While running, it threw the error below.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "csvfile.py", line 6, in <module>
writer = csv.writer(ofile, delimiter='@|@', quotechar='"', quoting=csv.QUOTE_ALL)
TypeError: "delimiter" must be an 1-character string
</code></pre>
<p>How can I achieve this?</p>
| 1 | 2016-08-04T03:45:50Z | 38,758,249 | <p>In the csv <a href="https://docs.python.org/3/library/csv.html#csv.Dialect.delimiter" rel="nofollow">documentation</a> they say: </p>
<blockquote>
<p>A one-character string used to separate fields. It defaults to ','.</p>
</blockquote>
<p>So you can do this as an alternative:</p>
<pre><code>csv.reader((line.replace('@|@', '|') for line in ifile), delimiter='|')
</code></pre>
<p>So the complete code is,</p>
<pre><code>import csv
ifile = open('test.csv', "rb")
reader = csv.reader((line.replace('@|@', '|') for line in ifile), delimiter='|')
ofile = open('ttest.csv', "wb")
writer = csv.writer(ofile, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
for row in reader:
writer.writerow(row)
ifile.close()
ofile.close()
</code></pre>
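<p>The same replace trick works in the writing direction the question asks about: write with a one-character placeholder delimiter, then substitute the multi-character delimiter on the way out. This is only a sketch, and it assumes the data itself never contains '|':</p>

```python
import csv
import io

rows = [['a', 'b,c', 'd']]  # sample data for illustration

# Write with a single-character placeholder delimiter first...
buf = io.StringIO()
writer = csv.writer(buf, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
writer.writerows(rows)

# ...then swap in the two-plus-character delimiter.
out = buf.getvalue().replace('|', '@|@')
print(out.strip())  # "a"@|@"b,c"@|@"d"
```

<p>Note the replace is applied to the finished text, so it would also rewrite a '|' inside a quoted field; hence the assumption above.</p>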
| 0 | 2016-08-04T03:57:24Z | [
"python",
"csv"
] |
How to give 2 characters in "delimiter" using 'csv' module? | 38,758,158 | <p>I'm trying to generate a CSV with the delimiter '@|@', but I couldn't achieve it with the code below.</p>
<pre><code>import csv
ifile = open('test.csv', "rb")
reader = csv.reader(ifile)
ofile = open('ttest.csv', "wb")
writer = csv.writer(ofile, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
for row in reader:
writer.writerow(row)
ifile.close()
ofile.close()
</code></pre>
<p>While running, it threw the error below.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "csvfile.py", line 6, in <module>
writer = csv.writer(ofile, delimiter='@|@', quotechar='"', quoting=csv.QUOTE_ALL)
TypeError: "delimiter" must be an 1-character string
</code></pre>
<p>How can I achieve this?</p>
| 1 | 2016-08-04T03:45:50Z | 38,758,327 | <p>Life is too short, just use <code>pandas</code></p>
<pre><code>import pandas as pd

df = pd.read_csv('test.csv')  # the DataFrame to write out
df.to_csv('output.csv', sep='|')
</code></pre>
<p>By default, the delimiter is <code>','</code>; that is why it is called CSV (comma-separated values). By changing <code>sep</code> to <code>'|'</code> you get a pipe-delimited file. Note that, like the csv module, pandas also requires <code>sep</code> to be a single character, so a multi-character delimiter such as '@|@' still needs a post-processing replace.</p>
| 0 | 2016-08-04T04:07:18Z | [
"python",
"csv"
] |
How to give 2 characters in "delimiter" using 'csv' module? | 38,758,158 | <p>I'm trying to generate a CSV with the delimiter '@|@', but I couldn't achieve it with the code below.</p>
<pre><code>import csv
ifile = open('test.csv', "rb")
reader = csv.reader(ifile)
ofile = open('ttest.csv', "wb")
writer = csv.writer(ofile, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
for row in reader:
writer.writerow(row)
ifile.close()
ofile.close()
</code></pre>
<p>While running, it threw the error below.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "csvfile.py", line 6, in <module>
writer = csv.writer(ofile, delimiter='@|@', quotechar='"', quoting=csv.QUOTE_ALL)
TypeError: "delimiter" must be an 1-character string
</code></pre>
<p>How can I achieve this?</p>
| 1 | 2016-08-04T03:45:50Z | 38,758,467 | <p>The <em>csvfile</em> argument to the <code>csv.writer</code> constructor only has to be a "file-like object". This means you could supply an instance of your own class which changes a single character into one with two or more characters (but which otherwise acts like an open output file object).</p>
<p>Here's what I mean:</p>
<pre><code>import csv
class CSV_Translater(object):
""" Output file-like object that translates characters. """
def __init__(self, f, old, new):
self.f = f
self.old = old
self.new = new
def write(self, s):
self.f.write(s.replace(self.old, self.new))
def close(self):
self.f.close()
def flush(self):
self.f.flush()
with open('in_test.csv', "rb") as ifile:
reader = csv.reader(ifile)
with open('out_test.csv', "wb") as ofile:
translater = CSV_Translater(ofile, '|', '@|@')
writer = csv.writer(translater, delimiter='|', quotechar='"',
quoting=csv.QUOTE_ALL)
for row in reader:
writer.writerow(row)
</code></pre>
| 0 | 2016-08-04T04:25:27Z | [
"python",
"csv"
] |
Can the repetition in this line be avoided? | 38,758,187 | <p><code>'=' not in access and name + '.' not in access</code></p>
<p>I hope to avoid the multiplicity of <code>not in access</code>s in a line of Python code. I've used expression evaluation loops for cases of higher numbers of repetitions for convenience but it just seems odd at two.</p>
| 0 | 2016-08-04T03:49:14Z | 38,758,232 | <p>Here's another option:</p>
<pre><code>all(s not in access for s in ('=', name + '.'))
</code></pre>
<p>It's up to you to decide if this is simpler than your code - but at least it avoids having to write <code>not in access</code> <em>twice</em>.</p>
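<p>A quick demonstration (the <code>access</code> and <code>name</code> values here are made up for illustration):</p>

```python
def allowed(access, name):
    # True only when neither '=' nor name + '.' occurs in access.
    return all(s not in access for s in ('=', name + '.'))

print(allowed('read foo', 'bar'))   # True  (neither substring present)
print(allowed('bar.x = 1', 'bar'))  # False (both substrings present)
print(allowed('bar.x', 'bar'))      # False ('bar.' present)
```

<p>The tuple also scales naturally: at higher repetition counts you just add more substrings to it.</p>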
| 0 | 2016-08-04T03:55:43Z | [
"python",
"repetition",
"non-repetitive"
] |
Can't select textbox in selenium | 38,758,221 | <p>I am trying to access the comment textbox in a generic Huffington Post article. When I right-click and inspect the element, I get the following HTML code:</p>
<pre><code><div class="UFIInputContainer">
<div class="_1cb _5yk1">
<div class="_5yk2" tabindex="-2">
<div class="_5rp7">
</code></pre>
<p>with the line <code><div class="_1cb _5yk1"></code> highlighted.</p>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
'''
Just pretend that I put in some code to log in to facebook
so I can actually post a comment on huffington post
'''
driver.get('http://www.huffingtonpost.com/entry/worst-suicide-squad-reviews_us_57a1e213e4b0693164c34744?')
'''
Just a random article about a movie
'''
comment_box = driver.find_element_by_css_selector('._1cb._5yk1')
'''
since this is a compound class I think I should use find_by_css_selector
'''
</code></pre>
<p>When I run this though, I get the error message: "no such element found". I have tried other methods of trying to get hold of the comment textbox, but I get the same error message and I am at a loss as to how to access it. I am hoping somebody can shed some light on this problem.</p>
<p>edit: This is a more complete HTML code:</p>
<pre><code><html lang="en" id="facebook" class="svg ">
<head>...</head>
<body dir="ltr" class="plugin chrome webkit win x1 Locale_en_US">
<div class="_li">
<div class="pluginSkinLight pluginFontHelvetica">
<div id="u_0_0">
<div data-reactroot class="_56q9">
<div class="_2pi8">
<div class="_491z clearfix">...</div>
<div spacing="medium" class="_4uyl _1zz8 _2392 clearfix" direction="left">
<div class="_ohe lfloat">...</div>
<div class>
<div class="UFIInputContainer">
<div class="_1cb _5yk1">
<div class="_5yk2" tabindex="-2">
<div class="_5rp7">
</div>
</div>
<div class="UFICommentAttachmentButtons clearfix">...</div>
<!-- react-empty: 39 -->
<div class="_4uym">...</div>
</div>
</div>
</div>
::after
</code></pre>
| 0 | 2016-08-04T03:54:16Z | 38,758,561 | <p>You have to switch to the iframe containing the text box. Try the following approach; it should work. Clicking the load-comments button might be required first, if that button is displayed:</p>
<pre><code>load_comment = driver.find_element_by_css_selector('.comment-button.js-comment-button')
load_comment.click()
driver.switch_to.frame(driver.find_element_by_css_selector('.fb_ltr.fb_iframe_widget_lift'))
comment_box = driver.find_element_by_css_selector('._1cb._5yk1')
comment_box.send_keys('Test')
</code></pre>
| 0 | 2016-08-04T04:36:17Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Is it possible to set a default intent in Wit.ai? | 38,758,342 | <p>I'm working on a chatbot project based on Facebook's Wit.ai and was wondering if it is possible to set a default intent? </p>
<p>For example, my bot currently supports only a handful of questions, such as "Where are you located?" or "What is your phone number?", each of these questions has an intent and story associated with it but if someone asks something the bot doesn't understand, wit seems (I haven't been able to find any info about this) to choose a story at random and execute it. </p>
<p>I would like to set a default intent that will respond with something like "I don't understand what you mean." in the event that no other intent is recognized. Is it possible to do this? Specifically, I would like to know if there is an officially accepted way to do this as I currently have a way to achieve this but it is a bit hacky and requires me to edit the <code>wit</code> package from facebook which I would prefer not to do. </p>
| 1 | 2016-08-04T04:09:13Z | 38,828,177 | <p>There is no such functionality available in Wit.ai yet.</p>
<p>But you can get the required behaviour by using the confidence value returned by the Wit API: set a confidence threshold and, if the value falls below it, return a custom message. You can handle this in your action function implementation.</p>
<p>For further reference look at this <a href="http://stackoverflow.com/questions/38334663/make-chatbot-wit-ai-reply-that-it-doesnt-have-a-proper-answer">post</a>.</p>
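<p>For illustration, a minimal sketch of that threshold check (the response shape mirrors Wit.ai's <code>/message</code> output with an <code>intent</code> entity carrying a <code>confidence</code> field; the threshold value and the <code>pick_reply</code> name are arbitrary choices, not part of the Wit API):</p>

```python
# Sketch: fall back to a default reply when Wit's top intent confidence
# is below a chosen threshold.
DEFAULT_REPLY = "I don't understand what you mean."
CONFIDENCE_THRESHOLD = 0.6  # arbitrary; tune against your own data

def pick_reply(wit_response, replies):
    intents = wit_response.get('entities', {}).get('intent', [])
    if not intents or intents[0].get('confidence', 0) < CONFIDENCE_THRESHOLD:
        return DEFAULT_REPLY
    return replies.get(intents[0]['value'], DEFAULT_REPLY)
```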
| 3 | 2016-08-08T11:33:46Z | [
"python",
"facebook",
"wit.ai"
] |
Why Does This Usage of a Class Work in Python? | 38,758,345 | <p>I run the following very trivial Python code. I am very surprised that it actually run. Could someone explain to me why I can even assign values to "nd" and "hel" without defining them in the class definition? Is this because the attribute can be added in the instance level?</p>
<pre><code>class tempClass(object):
    pass

a = tempClass()
a.nd = 1
a.hel = 'wem3'
</code></pre>
| 0 | 2016-08-04T04:09:43Z | 38,758,573 | <p>Python has no notion of variable declaration, only assignments. The same applies to attributes: you simply assign an initial value to bring it into existence.</p>
<p>There is nothing special about the <code>__init__</code> method in this regard. For example,</p>
<pre><code>class TempClass(object):
def __init__(self):
self.nd = 1
a = TempClass()
a.hel = 'wem3'
</code></pre>
<p>Both attributes are created in the same way: by assigning a value to them. <code>__init__</code> is called when <code>a</code> is first created, but otherwise is not special. <code>self</code> inside <code>__init__</code> is a reference to the object referenced by <code>a</code>, so <code>self.nd = 1</code> is identical to <code>a.nd = 1</code>. After the object is created, <code>a.hel</code> is created and initialized with <code>'wem3'</code> by the same process.</p>
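<p>You can see where these attributes end up — assignment simply adds entries to the instance's <code>__dict__</code>:</p>

```python
# Instance attributes created by plain assignment live in the instance's
# __dict__; no prior declaration is needed.
class TempClass(object):
    pass

a = TempClass()
a.nd = 1
a.hel = 'wem3'
print(a.__dict__)   # {'nd': 1, 'hel': 'wem3'}
```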
| 2 | 2016-08-04T04:37:46Z | [
"python",
"class",
"instance"
] |
How to define an element belongs to other categories? | 38,758,367 | <p>I know the topic is really hard to understand but I don't know how to describe my problem in one sentence...T^T</p>
<p>Here is what I'm trying to do.</p>
<p>I have a set of 1-dimensional points in three categories.</p>
<pre><code>A = [[0,1], [0,2], [0,3], [1,1], [2,1], [3,2], [3,3], [4,2], [4,3], [5,3], [6,3]]
</code></pre>
<p>First number is x-coordinate and second number is label in each [ ]</p>
<p>And I want to insert a cut point into every pair of adjacent points [x1, L1], [x2, L2] if at least one of them has more than one kinds of label and L2 belongs to those categories that differ from L1.;;;</p>
<p>For example, </p>
<pre><code>[0,1], [0,2], [0,3]
</code></pre>
<p>they are all on x = 0 but there are three kinds of labels</p>
<pre><code>[1,1]
</code></pre>
<p>belongs to only one categories, so I would like to add a cut point x=0.5 in the middle of 0 and 1.</p>
<pre><code>3 x
2 x
1 x 1
x
0-x-1-
</code></pre>
<p>but like</p>
<pre><code>[1,1] and [2,1]
</code></pre>
<p>they both has only one and identical label, there is no need to add a cut point here.</p>
<p>So the result should be
<code>[0.5, 2.5, 3.5, 4.5]</code>
and maybe looks like this</p>
<pre><code> 3 x x 3 x 3 x 3 3 <--Label
2 x x 2 x 2 x <--Label
1 x 1 1 x x x <--Label
x x x x
-0-x-1---2-x-3-x-4-x-5---6--- <--X-axis
0.5 2.5 3.5 4.5 <--Cut points
</code></pre>
<p>The code I want to write will looks like this form</p>
<pre><code>A = [[0,1], [0,2], [0,3], [1,1], [2,1], [3,2], [3,3], [4,2], [4,3], [5,3], [6,3]]
X = []
for a in A:
X.append(a[0])
X = sorted(list(set(X)))
labels = [[1], [2], [3]]
group = []
for i in range(len(labels)):
group.append([])
for a in A:
for i in range(3):
if a[1] in labels[i]:
group[i].append(a[0])
cutpoints = []
for i, x in enumerate(X):
for j in range(len(group)):
if x in group[j] and (X[i+1] in group[ other than j ]):
cutpoints.append((x+X[i+1])/2)
</code></pre>
<p>But I stuck at the part "other than j"
In this case there are only 3 categories so maybe I can do that manually but I'm looking for a more clever way to do it so I don't need to rewrite this part every time I meet a new data with different number of categories.</p>
<p>Is there any function I can use to do the "other than j" operation??</p>
<p>Any comment or answer will be appreciated.
Thanks in advance T^T</p>
| 1 | 2016-08-04T04:12:12Z | 38,758,501 | <p>You can use not in, like this: <code>X[i+1] not in group[j]</code></p>
<p>Secondly, your algorithm seems overly complicated. What about something like this? Note that all labels sharing the same x-coordinate have to be collected into a set first, so that a multi-label point (such as x = 4 in your sample) also produces a cut:</p>
<pre><code>A = [[0,1], [0,2], [0,3], [1,1], [2,1], [3,2], [3,3], [4,2], [4,3], [5,3], [6,3]]

# group the labels by x-coordinate (A is assumed sorted by x)
groups = []
for point, label in A:
    if groups and groups[-1][0] == point:
        groups[-1][1].add(label)
    else:
        groups.append((point, {label}))

cuts = []
for (x1, l1), (x2, l2) in zip(groups, groups[1:]):
    # cut when the label sets differ, or the left point has several labels
    if l1 != l2 or len(l1) > 1:
        cuts.append((x1 + x2) / 2.)
# cuts == [0.5, 2.5, 3.5, 4.5]
</code></pre>
| 2 | 2016-08-04T04:28:57Z | [
"python",
"list"
] |
How to define an element belongs to other categories? | 38,758,367 | <p>I know the topic is really hard to understand but I don't know how to describe my problem in one sentence...T^T</p>
<p>Here is what I'm trying to do.</p>
<p>I have a set of 1-dimensional points in three categories.</p>
<pre><code>A = [[0,1], [0,2], [0,3], [1,1], [2,1], [3,2], [3,3], [4,2], [4,3], [5,3], [6,3]]
</code></pre>
<p>First number is x-coordinate and second number is label in each [ ]</p>
<p>And I want to insert a cut point into every pair of adjacent points [x1, L1], [x2, L2] if at least one of them has more than one kinds of label and L2 belongs to those categories that differ from L1.;;;</p>
<p>For example, </p>
<pre><code>[0,1], [0,2], [0,3]
</code></pre>
<p>they are all on x = 0 but there are three kinds of labels</p>
<pre><code>[1,1]
</code></pre>
<p>belongs to only one categories, so I would like to add a cut point x=0.5 in the middle of 0 and 1.</p>
<pre><code>3 x
2 x
1 x 1
x
0-x-1-
</code></pre>
<p>but like</p>
<pre><code>[1,1] and [2,1]
</code></pre>
<p>they both has only one and identical label, there is no need to add a cut point here.</p>
<p>So the result should be
<code>[0.5, 2.5, 3.5, 4.5]</code>
and maybe looks like this</p>
<pre><code> 3 x x 3 x 3 x 3 3 <--Label
2 x x 2 x 2 x <--Label
1 x 1 1 x x x <--Label
x x x x
-0-x-1---2-x-3-x-4-x-5---6--- <--X-axis
0.5 2.5 3.5 4.5 <--Cut points
</code></pre>
<p>The code I want to write will looks like this form</p>
<pre><code>A = [[0,1], [0,2], [0,3], [1,1], [2,1], [3,2], [3,3], [4,2], [4,3], [5,3], [6,3]]
X = []
for a in A:
X.append(a[0])
X = sorted(list(set(X)))
labels = [[1], [2], [3]]
group = []
for i in range(len(labels)):
group.append([])
for a in A:
for i in range(3):
if a[1] in labels[i]:
group[i].append(a[0])
cutpoints = []
for i, x in enumerate(X):
for j in range(len(group)):
if x in group[j] and (X[i+1] in group[ other than j ]):
cutpoints.append((x+X[i+1])/2)
</code></pre>
<p>But I stuck at the part "other than j"
In this case there are only 3 categories so maybe I can do that manually but I'm looking for a more clever way to do it so I don't need to rewrite this part every time I meet a new data with different number of categories.</p>
<p>Is there any function I can use to do the "other than j" operation??</p>
<p>Any comment or answer will be appreciated.
Thanks in advance T^T</p>
| 1 | 2016-08-04T04:12:12Z | 38,758,682 | <p>This is sort of a weird problem you've got, but here's a functional way to do it.</p>
<pre><code>from itertools import groupby
</code></pre>
<p><code>groupby</code> will let us easily merge your X coordinates, assuming the array is pre-sorted by them.</p>
<pre><code>l = [(i, [x[1] for x in g]) for i, g in groupby(A, lambda x: x[0])]
</code></pre>
<p>This looks a bit daunting, but is conceptually pretty easy. The <code>groupby</code> pulls together all the things that share an X, and the inner list comprehension just dumps the labels out:</p>
<pre><code>l
[(0, [1, 2, 3]),
(1, [1]),
(2, [1]),
(3, [2, 3]),
(4, [2, 3]),
(5, [3]),
(6, [3])]
</code></pre>
<p>Then if we group each together with the next element using <code>zip</code> we can just pick out the pairs that meet your criteria and get the midpoint between them:</p>
<pre><code>[(i1+i2) / 2.
for (i1, l1), (i2, l2)
in zip(l, l[1:])
if l1 != l2 or len(l1) > 1]
[0.5, 2.5, 3.5, 4.5]
</code></pre>
| 2 | 2016-08-04T04:49:05Z | [
"python",
"list"
] |
How to convert a tuple list with duplicated keys and values into a fancy dictionary? | 38,758,378 | <p>I have a tuple list like the code below; an identifier has multiple values (both identifiers and values may be duplicated). I thought using a <code>dict</code> with a <code>str</code> as key and a <code>set</code> as its value would be reasonable, but how?</p>
<pre><code>tuple_list = [
('id1', 123),
('id1', 123),
('id2', 123),
('id1', 456),
('id2', 456),
('id1', 789),
('id2', 789)
]
</code></pre>
<p>What I need is like this: <code>{ 'id1': {123, 456, 789}, ... }</code>, and my code is:</p>
<pre><code>for id, val in tuple_list:
map_data[id].append(val) # error here, how to fix this?
</code></pre>
| -2 | 2016-08-04T04:13:08Z | 38,758,406 | <p>You can use <code>dict.setdefault</code>:</p>
<pre><code>map_data = {}
for id, val in tuple_list:
map_data.setdefault(id,set()).add(val)
</code></pre>
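<p>Run against the sample data, this produces the desired mapping (the ordering inside each set is arbitrary):</p>

```python
# Full runnable version of the setdefault approach above.
tuple_list = [('id1', 123), ('id1', 123), ('id2', 123),
              ('id1', 456), ('id2', 456), ('id1', 789), ('id2', 789)]

map_data = {}
for id, val in tuple_list:
    map_data.setdefault(id, set()).add(val)

print(map_data == {'id1': {123, 456, 789}, 'id2': {123, 456, 789}})  # True
```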
| 1 | 2016-08-04T04:17:28Z | [
"python",
"dictionary"
] |
How to convert a tuple list with duplicated keys and values into a fancy dictionary? | 38,758,378 | <p>I have a tuple list like the code below; an identifier has multiple values (both identifiers and values may be duplicated). I thought using a <code>dict</code> with a <code>str</code> as key and a <code>set</code> as its value would be reasonable, but how?</p>
<pre><code>tuple_list = [
('id1', 123),
('id1', 123),
('id2', 123),
('id1', 456),
('id2', 456),
('id1', 789),
('id2', 789)
]
</code></pre>
<p>What I need is like this: <code>{ 'id1': {123, 456, 789}, ... }</code>, and my code is:</p>
<pre><code>for id, val in tuple_list:
map_data[id].append(val) # error here, how to fix this?
</code></pre>
| -2 | 2016-08-04T04:13:08Z | 38,758,552 | <p>To use a <code>dict</code> containing a <code>set</code> do it like this.</p>
<pre><code>from collections import defaultdict
tuple_list = [
('id1', 123),
('id1', 123),
('id2', 123),
('id1', 456),
('id2', 456),
('id1', 789),
('id2', 789)
]
map_data = defaultdict(set)
for id, val in tuple_list:
map_data[id].add(val)
print(map_data)
</code></pre>
<p>result</p>
<pre><code>defaultdict(<type 'set'>, {'id2': set([456, 123, 789]), 'id1': set([456, 123, 789])})
</code></pre>
| 1 | 2016-08-04T04:35:05Z | [
"python",
"dictionary"
] |
python flask bucketlist app | 38,758,389 | <p>I am attempting to build a python flask web app with a tutorial and I am having trouble implementing my signUp method.
Tutorial: <a href="http://code.tutsplus.com/tutorials/creating-a-web-app-from-scratch-using-python-flask-and-mysql--cms-22972" rel="nofollow">http://code.tutsplus.com/tutorials/creating-a-web-app-from-scratch-using-python-flask-and-mysql--cms-22972</a></p>
<p>I get a 500 error when I hit the 'sign up' button: _name = Request.form['inputName']
TypeError: 'cached_property' object has no attribute '__getitem__'</p>
<p>Not sure why I am receiving 500 error. Any help would be appreciated. Thanks</p>
<p>Below is my python code:</p>
<pre><code>from flask import Flask, render_template, json, Request
app = Flask(__name__)
@app.route('/main')
def main():
return render_template('index.html')
@app.route('/showSignUp')
def showSignUp():
return render_template('signup.html')
@app.route('/signUp',methods=['POST'])
def signUp():
_name = Request.form['inputName']
_email = Request.form['inputEmail']
_password = Request.form['inputPassword']
if _name and _email and _password:
return json.dumps({'html':'<span>All fields good !!</span>'})
else:
return json.dumps({'html:':'<span>Enter the required fields</span>'})
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>Here is the javascript ajax code:</p>
<pre><code>$(function () {
$('#btnSignUp').click(function () {
$.ajax({
url: '/signUp',
data: $('form').serialize(),
type: 'POST',
success: function(response) {
console.log(response);
},
error: function(error) {
console.log(error);
}
});
});
});
</code></pre>
<p>Here is signup.html</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<title>Python Flask Bucket List App</title>
<link href="http://getbootstrap.com/dist/css/bootstrap.min.css" rel="stylesheet" />
<link href="http://getbootstrap.com/examples/jumbotron-narrow/jumbotron-narrow.css" rel="stylesheet" />
<link href="../static/signup.css" rel="stylesheet" />
<script src="../static/js/jquery-3.1.0.js"></script>
<script src="../static/js/signUp.js"></script>
</head>
<body>
<div class="container">
<div class="header">
<nav>
<ul class="nav nav-pills pulls-right">
<li role="presentation"><a href="main">Home</a></li>
<li role="presentation"><a href="#">Sign In</a></li>
<li role="presentation" class="active"><a href="#">Sign Up</a></li>
</ul>
</nav>
<h3 class="text-muted">Python Flask App</h3>
</div>
<div class="jumbotron">
<h1>Bucket List App</h1>
<form class="form-signin">
<label for="inputName" class="sr-only">Name</label>
<input type="name" name="inputName" id="inputName" class="form-control" placeholder="Name" required autofocus />
<label for="inputEmail" class="sr-only">Email address</label>
<input type="email" name="inputEmail" id="inputEmail" class="form-control" placeholder="Email address" required autofocus />
<label for="inputPassword" class="sr-only">Password</label>
<input type="password" name="inputPassword" id="inputPassword" class="form-control" placeholder="Password" required />
<button id="btnSignUp" class="btn btn-lg btn-primary btn-block" type="button">Sign up</button>
</form>
</div>
<footer class="footer">
<p>&copy; Company 2016</p>
</footer>
</div>
</body>
</html>
</code></pre>
| 1 | 2016-08-04T04:14:53Z | 38,758,447 | <p>I also encountered the same issue when I was implementing the same recently.</p>
<p>Change your signUp code in app.py as follows including the import.</p>
<pre><code>from flask import Flask, render_template, json, request
def signUp():
_name = request.form['inputName']
_email = request.form['inputEmail']
_password = request.form['inputPassword']
</code></pre>
<p>Note: it should be <code>request</code> not <code>Request</code></p>
<p><hr/>
This error occurs because <code>Request</code> is the class Flask uses to represent an incoming request; it is not, however, the request itself. Instead, Flask stores the current request, which is an instance of the <code>Request</code> class, in the <code>request</code> variable.</p>
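<p>To see why the error message mentions <code>cached_property</code>: <code>Request.form</code> does exist on the class, but as a descriptor rather than data. A stand-in sketch (using Python 3.8+'s <code>functools.cached_property</code>, not Werkzeug's actual class) reproduces the same failure mode:</p>

```python
# Accessing a cached_property through the class yields the descriptor
# object itself, which is not subscriptable -- hence the TypeError.
import functools

class FakeRequest:
    @functools.cached_property
    def form(self):
        return {'inputName': 'alice'}

print(type(FakeRequest.form).__name__)   # cached_property
print(FakeRequest().form['inputName'])   # alice
```

<p>Subscripting <code>FakeRequest.form</code> directly raises a <code>TypeError</code>, just as <code>Request.form['inputName']</code> did.</p>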
| 1 | 2016-08-04T04:23:22Z | [
"python",
"ajax",
"flask"
] |
Installing Qutip in fedora 24 | 38,758,428 | <p>I can't install Qutip in my fedora 24</p>
<pre><code>pip install qutip
</code></pre>
<p>Whenever I type this, an error message appears: <a href="http://i.stack.imgur.com/r8g3z.png" rel="nofollow">it starts like this</a>,
but at the end <a href="http://i.stack.imgur.com/q54nr.png" rel="nofollow">this happens</a>, and Qutip does not get installed.
What should I do?</p>
| 0 | 2016-08-04T04:20:50Z | 38,758,648 | <p>The error is quite evident:</p>
<pre><code>gcc: error: /usr/lib/rpm/redhat/redhat-hardened-001: No such file or directory
</code></pre>
<p>Somehow this file has vanished from your computer. Find out which package provides this file.</p>
<pre><code>sudo dnf provides /usr/lib/rpm/redhat/redhat-hardened-cc1
Last metadata expiration check: 6 days, 22:14:34 ago on Thu Jul 28 11:58:37 2016.
redhat-rpm-config-40-2.fc24.noarch : Red Hat specific rpm configuration files
Repo : @System
redhat-rpm-config-40-2.fc24.noarch : Red Hat specific rpm configuration files
Repo : fedora
</code></pre>
<p>Then reinstall the package:</p>
<pre><code>sudo dnf reinstall -y redhat-rpm-config
</code></pre>
| 1 | 2016-08-04T04:46:40Z | [
"python",
"fedora",
"qutip"
] |
running Scrapy on terminal server | 38,758,532 | <p>I want to crawl on the server side, but my Python isn't so good...</p>
<p>My source works well when I run it on my laptop's terminal, but it goes wrong when run on the server's terminal.</p>
<p>Here is my source code:</p>
<pre><code>from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from thehack.items import NowItem
import time
class MySpider(BaseSpider):
name = "nowhere"
allowed_domains = ["n0where.net"]
start_urls = ["https://n0where.net/"]
def parse(self, response):
for article in response.css('.loop-panel'):
item = NowItem()
item['title'] = article.css('.article-title::text').extract_first()
item['link'] = article.css('.loop-panel>a::attr(href)').extract_first()
item['body'] ='' .join(article.css('.excerpt p::text').extract()).strip()
#date ga kepake
#item['date'] = article.css('[itemprop="datePublished"]::attr(content)').extract_first()
yield item
time.sleep(5)
</code></pre>
<p>The error output says:</p>
<pre><code>ERROR: Spider error processing <GET https://n0where.net/>
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 638, in _tick
taskObj._oneWorkUnit()
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 484, in _oneWorkUnit
result = next(self._iterator)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 57, in <genexpr>
work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 96, in iter_errback
yield next(it)
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output
for x in result:
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
return (r for r in result or () if _filter(r))
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
return (r for r in result or () if _filter(r))
File "/home/admin/nowhere/thehack/spiders/thehack_spider.py", line 14, in parse
item['title'] = article.css('.article-title::text').extract_first()
exceptions.AttributeError: 'SelectorList' object has no attribute 'extract_first'
</code></pre>
<p>Does anybody know how to fix it?
Thanks a lot in advance :)</p>
| 0 | 2016-08-04T04:32:34Z | 38,761,763 | <p>Seems like your scrapy version is out of date. The <code>scrapy.Selector</code> method <code>.extract_first()</code> was only added in Scrapy 1.1, so you want to upgrade the scrapy package on your server (e.g. <code>pip install --upgrade scrapy</code>).</p>
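<p>If upgrading is not immediately possible, the same behaviour can be emulated with the long-supported <code>.extract()</code> method — for example with a small helper (<code>first_or_none</code> is a made-up name here, not part of Scrapy's API):</p>

```python
# Portable fallback for SelectorList.extract_first() on older Scrapy:
# take the first extracted result, or a default when there is none.
def first_or_none(selector_list, default=None):
    results = selector_list.extract()   # available on every Scrapy version
    return results[0] if results else default
```

<p>So <code>item['title'] = first_or_none(article.css('.article-title::text'))</code> works on both old and new versions.</p>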
| 0 | 2016-08-04T08:04:46Z | [
"python",
"server",
"scrapy",
"web-crawler",
"client"
] |
Why won't my {% extends %} command work in my django app | 38,758,580 | <p>I am making a web app with django, and in one portion I am trying to make use of the {% extends %} command to put some html from one template on to another. Here is the code:</p>
<p>home.html - </p>
<pre><code><!doctype html>
<html>
<head>
</head>
<body>
{% block content %}
{% endblock %}
</body>
</html>
</code></pre>
<p>search.html - </p>
<pre><code>{% extends "gamelobby/home.html" %}
{% block content %}
<h1>Hello World</h1>
{% endblock %}
</code></pre>
<p>Any idea what the problem might be?</p>
<p>Code for home.html view -</p>
<pre><code>def index(request):
all_games = GameCard.objects.all()
template = loader.get_template('gamelobby/home.html')
context = {
'all_games': all_games,
}
return HttpResponse(template.render(context, request))
</code></pre>
| -2 | 2016-08-04T04:38:42Z | 38,772,740 | <p>What you want to happen is to direct people to your search <code>view</code>, so that view has to know about <code>search.html</code>:</p>
<pre><code>def index(request):
all_games = GameCard.objects.all()
    template = loader.get_template('search.html')  # or whichever template file
context = {
'all_games': all_games,
}
return HttpResponse(template.render(context, request))
</code></pre>
<p>When this view loads the template, it sees this <code>extends</code> from <code>gamelobby/home.html</code> and pulls it in <em>around</em> the block tags.</p>
| 1 | 2016-08-04T16:26:17Z | [
"python",
"django"
] |
Haystack index objects using their reverse relation | 38,758,617 | <p>I have a two models: </p>
<pre><code>class Question(models.Model):
question_text = models.TextField()
class Response(models.Model):
response_text = models.TextField()
question = models.ForeignKey(Question, related_name='responses', on_delete=models.CASCADE)
</code></pre>
<p>I have one haystack search index to index all of the <code>Question</code>'s <code>question_text</code>:</p>
<pre><code>class QuestionIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True)
question_text = indexes.CharField(model_attr='question_text')
def get_model(self):
return Question
</code></pre>
<p>How do I index all of the <code>response_text</code> so that when I search for <code>Question</code>s, I get all of the questions that match <code>question_text</code> and all the questions that have responses that match <code>response_text</code>? I want something like:</p>
<pre><code>class QuestionIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True)
question_text = indexes.CharField(model_attr='question_text')
response_text = indexes.CharField(model_attr='responses__response_text')
def get_model(self):
return Question
</code></pre>
<p><strong>Ultimate Question:</strong> How do I index all of <code>response_text</code> using this <code>QuestionIndex</code> class?</p>
| 1 | 2016-08-04T04:43:19Z | 38,758,666 | <p>You can supply a <a href="http://django-haystack.readthedocs.io/en/v2.4.1/searchindex_api.html#prepare-foo-self-object" rel="nofollow"><code>prepare_</code> method</a> for that field to specify what data is indexed:</p>
<pre><code>class QuestionIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True)
question_text = indexes.CharField(model_attr='question_text')
response_text = indexes.CharField()
def prepare_response_text(self, obj):
return ', '.join([r.response_text for r in obj.responses.all()])
</code></pre>
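<p>Stripped of Haystack, the <code>prepare_response_text</code> method just flattens the related objects' text into one searchable string:</p>

```python
# Haystack-free sketch of what the prepare_ method returns for indexing.
class Response:
    def __init__(self, text):
        self.response_text = text

responses = [Response('Yes'), Response('No'), Response('It depends')]
indexed = ', '.join(r.response_text for r in responses)
print(indexed)   # Yes, No, It depends
```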
| 1 | 2016-08-04T04:47:58Z | [
"python",
"django",
"search",
"tastypie",
"django-haystack"
] |
Grouping Functions by Using Classes in Python | 38,758,668 | <p>I have been a Python scientific programmer for a few years now, and I find myself running into a specific sort of problem as my programs get larger and larger. I am self-taught, so I have never had any formal training or spent much time on the 'conventions' of coding in Python "properly".</p>
<p>Anyways, to the point, I find myself always creating a utils.py file that I store all my defined functions in that my programs use. I then find myself grouping these functions into their respective purposes. One way of I know of grouping things is of course using Classes, but I am unsure as to whether my strategy goes against what classes should actually be used for.</p>
<p>Say I have a bunch of functions that do roughly the same thing like this:</p>
<pre><code>def add(a,b):
return a + b
def sub(a,b):
return a -b
def cap(string):
return string.title()
def lower(string):
return string.lower()
</code></pre>
<p>Now obviously these 4 functions can be seen as serving two separate purposes: one is calculation and the other is formatting. This is what logic tells me to do, but I have to work around it since I don't want to initialise a variable that corresponds to the class every time.</p>
<pre><code>class calc_funcs(object):
def __init__(self):
pass
@staticmethod
def add(a,b):
return a + b
@staticmethod
def sub(a, b):
return a - b
class format_funcs(object):
def __init__(self):
pass
@staticmethod
def cap(string):
return string.title()
@staticmethod
def lower(string):
return string.lower()
</code></pre>
<p>This way I have now 'grouped' these methods together into a nice package that makes finding desired methods much faster based on their role in the program.</p>
<pre><code>print calc_funcs.add(1,2)
print format_funcs.lower("Hello Bob")
</code></pre>
<p>However that being said, I feel this is a very 'unpython-y' way to do things, and it just feels messy. Am I going about thinking this the right way or is there an alternate method?</p>
<p>Thank you!</p>
| 0 | 2016-08-04T04:48:03Z | 38,758,786 | <p>I think doing so is perfectly pythonic. This is exactly the purpose of <code>staticmethod</code> constructor.</p>
<p>For python conventions, see <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a>.</p>
| 0 | 2016-08-04T05:00:34Z | [
"python",
"class",
"conventions"
] |
Grouping Functions by Using Classes in Python | 38,758,668 | <p>I have been a Python scientific programmer for a few years now, and I find myself running into a specific sort of problem as my programs get larger and larger. I am self-taught, so I have never had any formal training or spent much time on the 'conventions' of coding in Python "properly".</p>
<p>Anyways, to the point, I find myself always creating a utils.py file that I store all my defined functions in that my programs use. I then find myself grouping these functions into their respective purposes. One way of I know of grouping things is of course using Classes, but I am unsure as to whether my strategy goes against what classes should actually be used for.</p>
<p>Say I have a bunch of functions that do roughly the same thing like this:</p>
<pre><code>def add(a,b):
return a + b
def sub(a,b):
return a -b
def cap(string):
return string.title()
def lower(string):
return string.lower()
</code></pre>
<p>Now obviously these 4 functions can be seen as serving two separate purposes: one is calculation and the other is formatting. This is what logic tells me to do, but I have to work around it since I don't want to initialise a variable that corresponds to the class every time.</p>
<pre><code>class calc_funcs(object):
def __init__(self):
pass
@staticmethod
def add(a,b):
return a + b
@staticmethod
def sub(a, b):
return a - b
class format_funcs(object):
def __init__(self):
pass
@staticmethod
def cap(string):
return string.title()
@staticmethod
def lower(string):
return string.lower()
</code></pre>
<p>This way I have now 'grouped' these methods together into a nice package that makes finding desired methods much faster based on their role in the program.</p>
<pre><code>print calc_funcs.add(1,2)
print format_funcs.lower("Hello Bob")
</code></pre>
<p>However that being said, I feel this is a very 'unpython-y' way to do things, and it just feels messy. Am I going about thinking this the right way or is there an alternate method?</p>
<p>Thank you!</p>
| 0 | 2016-08-04T04:48:03Z | 38,758,993 | <p>I wouldn't use a <code>class</code> for this, I'd use a <code>module</code>. A class consisting of only staticmethods strikes me as a code smell too. Here's how to do it with modules: any time you stick code in a separate file and import it into another file, Python sticks that code in a module with the same name as the file. So in your case:</p>
<p>In <code>mathutil.py</code></p>
<pre><code>def add(a,b):
return a+b
def sub(a,b):
return a-b
</code></pre>
<p>In <code>main.py</code></p>
<pre><code>import mathutil
def main():
c = mathutil.add(a,b)
</code></pre>
<p>Or, if you're going to use mathutil in a lot of places and don't want to type out (and read) the full module name each time, come up with a standard abbreviation and use that everywhere:</p>
<p>In <code>main.py</code>, alternate version</p>
<pre><code>import mathutil as mu
def main():
c = mu.add(a,b)
</code></pre>
<p>Compared to your method you'll have more files with fewer functions in each of them, but I think it's easier to navigate your code that way.</p>
<p>By the way, there is a bit of a Python convention for naming files/modules: short names, all lower case, without underscores between words. It's not what I started out doing, but I've moved over to doing it that way in my code and it's made it easier to understand the structure of other people's modules that I've used.</p>
| 0 | 2016-08-04T05:18:02Z | [
"python",
"class",
"conventions"
] |
Grouping Functions by Using Classes in Python | 38,758,668 | <p>I have been a Python scientific programmer for a few years now, and I find myself running into a specific sort of problem as my programs get larger and larger. I am self-taught, so I have never had any formal training or spent much time on the 'conventions' of coding in Python "properly".</p>
<p>Anyways, to the point, I find myself always creating a utils.py file that I store all my defined functions in that my programs use. I then find myself grouping these functions into their respective purposes. One way of I know of grouping things is of course using Classes, but I am unsure as to whether my strategy goes against what classes should actually be used for.</p>
<p>Say I have a bunch of functions that do roughly the same thing like this:</p>
<pre><code>def add(a,b):
return a + b
def sub(a,b):
return a -b
def cap(string):
return string.title()
def lower(string):
return string.lower()
</code></pre>
<p>Now obviously these 4 functions can be seen as serving two separate purposes: one is calculation and the other is formatting. This is what logic tells me to do, but I have to work around it since I don't want to initialise a variable that corresponds to the class every time.</p>
<pre><code>class calc_funcs(object):
def __init__(self):
pass
@staticmethod
def add(a,b):
return a + b
@staticmethod
def sub(a, b):
return a - b
class format_funcs(object):
def __init__(self):
pass
@staticmethod
def cap(string):
return string.title()
@staticmethod
def lower(string):
return string.lower()
</code></pre>
<p>This way I have now 'grouped' these methods together into a nice package that makes finding desired methods much faster based on their role in the program.</p>
<pre><code>print calc_funcs.add(1,2)
print format_funcs.lower("Hello Bob")
</code></pre>
<p>However that being said, I feel this is a very 'unpython-y' way to do things, and it just feels messy. Am I going about thinking this the right way or is there an alternate method?</p>
<p>Thank you!</p>
| 0 | 2016-08-04T04:48:03Z | 38,759,051 | <p>Another approach is to make a <code>util</code> <em>package</em> and split up your functions into different modules within that package. The basics of packages: make a directory (whose name will be the package name) and put a special file in it, the <code>__init__.py</code> file. This <em>can</em> contain code, but for the basic package organization, it can be an empty file.</p>
<pre><code>my_package/
    __init__.py
    module1.py
    module2.py
    ...
    moduleN.py
</code></pre>
<p>So say you are in your working directory:</p>
<pre><code>mkdir util
touch util/__init__.py
</code></pre>
<p>Then inside your <code>util</code> directory, make <code>calc_funcs.py</code></p>
<pre><code>def add(a,b):
return a + b
def sub(a,b):
    return a - b
</code></pre>
<p>And <code>format_funcs.py</code>:</p>
<pre><code>def cap(string):
return string.title()
def lower(string):
return string.lower()
</code></pre>
<p>And now, from your working directory, you can do things like the following:</p>
<pre><code>>>> from util import calc_funcs
>>> calc_funcs.add(1,3)
4
>>> from util.format_funcs import cap
>>> cap("the quick brown fox jumped over the lazy dog")
'The Quick Brown Fox Jumped Over The Lazy Dog'
</code></pre>
<h2> Edited to add </h2>
<p>Notice, though, if we restart the interpreter session:</p>
<pre><code>>>> import util
>>> util.format_funcs.cap("i should've been a book")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'util' has no attribute 'format_funcs'
</code></pre>
<p>This is what the <code>__init__.py</code> is for!</p>
<p>In <code>__init__.py</code>, add the following:</p>
<pre><code>import util.calc_funcs, util.format_funcs
</code></pre>
<p>Now, restart the interpreter again:</p>
<pre><code>>>> import util
>>> util.calc_funcs.add('1','2')
'12'
>>> util.format_funcs.lower("I DON'T KNOW WHAT I'M YELLING ABOUT")
"i don't know what i'm yelling about"
</code></pre>
<p>Yay! We have flexible control over our namespaces with easy importing! Basically, the <code>__init__.py</code> plays an analogous role to the <code>__init__</code> method in a class definition.</p>
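<p>As a self-contained check (my sketch, not part of the original answer), the whole package layout can be built and imported programmatically from a temporary directory — the file and package names follow the answer above:</p>

```python
import os
import sys
import tempfile

# Recreate the answer's package layout inside a fresh temp directory
root = tempfile.mkdtemp()
pkg = os.path.join(root, "util")
os.makedirs(pkg)

with open(os.path.join(pkg, "calc_funcs.py"), "w") as f:
    f.write("def add(a, b):\n    return a + b\n")
with open(os.path.join(pkg, "format_funcs.py"), "w") as f:
    f.write("def cap(s):\n    return s.title()\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("import util.calc_funcs, util.format_funcs\n")

sys.path.insert(0, root)
import util  # the __init__.py pulls in both submodules

print(util.calc_funcs.add(1, 3))           # 4
print(util.format_funcs.cap("hello bob"))  # Hello Bob
```

<p>Because the <code>__init__.py</code> imports both submodules, a bare <code>import util</code> is enough to reach them.</p>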
| 1 | 2016-08-04T05:24:12Z | [
"python",
"class",
"conventions"
] |
incrementally rank pandas dataframe subject to other boolean dataframe in panel | 38,758,691 | <p>I have two pandas dataframes in a panel and would like to create a third df that ranks the first df (by row) but only include those where the corresponding element of the second df is True. Some sample data to illustrate:</p>
<pre><code>p['x']
A B C D E
2015-12-31 0.957941 -0.686432 1.087717 1.363008 -1.528369
2016-01-31 0.079616 0.524744 1.675234 0.665511 0.023160
2016-02-29 -0.300144 -0.705346 -0.141015 1.341883 0.855853
2016-03-31 0.435728 1.046326 -0.422501 0.536986 -0.656256
p['y']
A B C D E
2015-12-31 True False True False NaN
2016-01-31 True True True False NaN
2016-02-29 False True True True NaN
2016-03-31 NaN NaN NaN NaN NaN
</code></pre>
<p>I have managed to do this with a few ugly hacks but still get stuck on the fact that rank won't let me use method='first' on non-numeric data. I want to force incremental integer ranks (even if duplicates) and NaN for any cell that didn't have True in the boolean df.</p>
<p>Output should be of the form:</p>
<pre><code> A B C D E
2015-12-31 2.0 NaN 1.0 NaN NaN
2016-01-31 3.0 2.0 1.0 NaN NaN
2016-02-29 NaN 3.0 2.0 1.0 NaN
2016-03-31 NaN NaN NaN NaN NaN
</code></pre>
<p>My hacked attempt is below. It works, although there should clearly be a better way to replace False with NaN. However, it doesn't work once I add method='first', and this is necessary as I may have instances of duplicated values.</p>
<pre><code># I first had to hack a replacement of False with NaN.
# np.nan did not evaluate correctly
# I wasn't sure how else to specify pandas NaN
rank = p['y'].replace(False, p['y'].iloc[3, 0])
# eliminate the elements without a corresponding True
rank = rank * p['x']
# then this works
p['rank'] = rank.rank(axis=1, ascending=False)
# but this doesn't
p['rank'] = rank.rank(axis=1, ascending=False, method='first')
</code></pre>
<p>Any help would be much appreciated!
thanks</p>
| 1 | 2016-08-04T04:49:59Z | 38,759,134 | <pre><code>pd.DataFrame(np.where(p['y'] == True, p['x'], np.nan),
p.major_axis, p.minor_axis).rank(1, ascending=False)
</code></pre>
<p><a href="http://i.stack.imgur.com/8VDxF.png" rel="nofollow"><img src="http://i.stack.imgur.com/8VDxF.png" alt="enter image description here"></a></p>
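<p>A quick runnable sketch (my addition, not part of the original answer — it uses plain DataFrames instead of the question's Panel, which is deprecated in newer pandas): masking with <code>where</code> produces an all-float frame, so <code>method='first'</code> is accepted and NaN cells stay unranked.</p>

```python
import numpy as np
import pandas as pd

x = pd.DataFrame([[0.9, -0.7, 1.1],
                  [0.1, 0.5, 1.7]], columns=list("ABC"))
y = pd.DataFrame([[True, False, True],
                  [True, True, True]], columns=list("ABC"))

# keep x only where y is True (NaN elsewhere), then rank row-wise
masked = x.where(y)
ranks = masked.rank(axis=1, ascending=False, method="first")
print(ranks)
```

<p>Row 0 ranks C first and A second, with B left as NaN because its mask was False.</p>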
| 2 | 2016-08-04T05:31:54Z | [
"python",
"pandas",
null,
"rank"
] |
Repeat each elements based on a list of values | 38,758,708 | <p>Is there a Python builtin that repeats each element of a list based on the corresponding value in another list? For example <code>A</code> in list <code>x</code> position 0 is repeated 2 times because of the value <code>2</code> at position 0 in the list <code>y</code>.</p>
<pre><code>>>> x = ['A', 'B', 'C']
>>> y = [2, 1, 3]
>>> f(x, y)
['A', 'A', 'B', 'C', 'C', 'C']
</code></pre>
<p>Or to put it another way, what is the fastest way to achieve this operation?</p>
| 2 | 2016-08-04T04:52:24Z | 38,758,755 | <p>Try this</p>
<pre><code>x = ['A', 'B', 'C']
y = [2, 1, 3]
newarray = []
for i in range(len(x)):
    # wrap x[i] in a list so elements longer than one character are not split
    newarray.extend([x[i]] * y[i])
print(newarray)
</code></pre>
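<p>A quick check (my addition, not part of the original answer) that the bracketed <code>[x[i]] * y[i]</code> form keeps multi-character elements intact — without the brackets, <code>extend</code> would iterate over the repeated string character by character:</p>

```python
x = ['ab', 'cd']
y = [2, 1]
out = []
for i in range(len(x)):
    # [x[i]] * y[i] builds a list of whole elements;
    # x[i] * y[i] would build the string 'abab' and extend would split it
    out.extend([x[i]] * y[i])
print(out)  # ['ab', 'ab', 'cd']
```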
| 0 | 2016-08-04T04:57:28Z | [
"python",
"list"
] |
Repeat each elements based on a list of values | 38,758,708 | <p>Is there a Python builtin that repeats each element of a list based on the corresponding value in another list? For example <code>A</code> in list <code>x</code> position 0 is repeated 2 times because of the value <code>2</code> at position 0 in the list <code>y</code>.</p>
<pre><code>>>> x = ['A', 'B', 'C']
>>> y = [2, 1, 3]
>>> f(x, y)
['A', 'A', 'B', 'C', 'C', 'C']
</code></pre>
<p>Or to put it another way, what is the fastest way to achieve this operation?</p>
| 2 | 2016-08-04T04:52:24Z | 38,758,764 | <p>One way would be the following</p>
<pre><code>x = ['A', 'B', 'C']
y = [2, 1, 3]
s = []
for a, b in zip(x, y):
s.extend([a] * b)
print(s)
</code></pre>
<p>result</p>
<pre><code>['A', 'A', 'B', 'C', 'C', 'C']
</code></pre>
| 2 | 2016-08-04T04:58:10Z | [
"python",
"list"
] |
Repeat each elements based on a list of values | 38,758,708 | <p>Is there a Python builtin that repeats each element of a list based on the corresponding value in another list? For example <code>A</code> in list <code>x</code> position 0 is repeated 2 times because of the value <code>2</code> at position 0 in the list <code>y</code>.</p>
<pre><code>>>> x = ['A', 'B', 'C']
>>> y = [2, 1, 3]
>>> f(x, y)
['A', 'A', 'B', 'C', 'C', 'C']
</code></pre>
<p>Or to put it another way, what is the fastest way to achieve this operation?</p>
| 2 | 2016-08-04T04:52:24Z | 38,758,772 | <pre><code>from itertools import chain
list(chain(*[[a] * b for a, b in zip(x, y)]))
['A', 'A', 'B', 'C', 'C', 'C']
</code></pre>
<p>There is <code>itertools.repeat</code> as well, but that ends up being uglier for this particular case.</p>
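<p>For reference (my addition, not part of the original answer), the <code>itertools.repeat</code> version might look like the following — <code>chain.from_iterable</code> is also the more idiomatic spelling of <code>chain(*...)</code>, since it flattens lazily without unpacking:</p>

```python
from itertools import chain, repeat

x = ['A', 'B', 'C']
y = [2, 1, 3]

# repeat(a, b) yields a exactly b times; chain.from_iterable flattens
out = list(chain.from_iterable(repeat(a, b) for a, b in zip(x, y)))
print(out)  # ['A', 'A', 'B', 'C', 'C', 'C']
```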
| 2 | 2016-08-04T04:58:41Z | [
"python",
"list"
] |