title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Interpolate Matplotlib Array to an Arbitrary Length | 38,724,866 | <p>I am looking for a method of interpolating and simultaneously extending a 2D numpy array to an arbitrary length. For example say my array is </p>
<pre><code>array = [[0, 0, 0],
         [1, 1, 1],
         [2, 2, 2]]
</code></pre>
<p>After calling the function say I want an array of length 5:</p>
<pre><code>new_array = [[0, 0, 0],
             [0.5, 0.5, 0.5],
             [1, 1, 1],
             [1.5, 1.5, 1.5],
             [2, 2, 2]]
</code></pre>
<p>Ideally, this would work on any array of any length, to a new length; for example if my array were 1000 points long and I needed it to be 1500 points. Is there a way to do this easily and simply in Scipy or Numpy? I have looked through the <code>scipy.interpolate</code> module but was not able to see how I can achieve this. Any direction would be very helpful.</p>
<p>Thanks. </p>
| 1 | 2016-08-02T15:36:52Z | 38,725,352 | <p>You can try:</p>
<pre><code>from scipy import interpolate
import numpy as np

n = 5
# arr is the original 2-D array from the question
np.column_stack([interpolate.interp1d(c, c)(np.linspace(c[0], c[-1], n)) for c in arr.T])
Out[45]:
array([[ 0. , 0. , 0. ],
[ 0.5, 0.5, 0.5],
[ 1. , 1. , 1. ],
[ 1.5, 1.5, 1.5],
[ 2. , 2. , 2. ]])
</code></pre>
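Note, as an aside (this sketch is mine, not from the answer): `interp1d(c, c)` only behaves as intended when each column is monotonic, because the column values themselves serve as the interpolation axis. A more general approach interpolates each column over its row index, resampling any `(N, M)` array to `(K, M)` rows:

```python
import numpy as np
from scipy import interpolate

def resample_rows(a, k):
    """Interpolate an (N, M) array to (k, M) rows along axis 0."""
    a = np.asarray(a, dtype=float)
    old_idx = np.arange(a.shape[0])               # 0 .. N-1
    new_idx = np.linspace(0, a.shape[0] - 1, k)   # k evenly spaced positions
    # axis=0 builds one vectorized interpolant over all columns at once
    return interpolate.interp1d(old_idx, a, axis=0)(new_idx)

arr = np.array([[0, 0, 0],
                [1, 1, 1],
                [2, 2, 2]])
print(resample_rows(arr, 5))   # rows at positions 0, 0.5, 1, 1.5, 2
```

The same idea scales to the 1000-to-1500-point case in the question: `resample_rows(arr, 1500)`.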
| 0 | 2016-08-02T15:58:50Z | [
"python",
"arrays",
"numpy",
"scipy"
] |
web crawling a table of links | 38,724,877 | <p>I'm creating a script in python that goes through a table with three columns. I created a list where every link in the first column is inserted into the list. And then I loop through. When looping, I click into the link, print a statement to make sure it actually clicked into the link, and then go to the previous page so that the next link can be clicked. The error I keep getting is that my loop goes through the first two links first and then I get a StaleElementReferenceException when the loop calls links[page].click() for the third time. I can't post the html because the site is confidential. </p>
<pre><code> from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
import traceback
# starting chrome browser
chrome_path = r"C:\Users\guaddavi\Downloads\chromedriver_win32 extract\chromedriver.exe"
browser = webdriver.Chrome(chrome_path)
#linking to page
browser.get('link to page with table ')
#find table of ETL Extracts
table_id = browser.find_element_by_id('sortable_table_id_0')
#print('found table')
#get all the rows of the table containing the links
rows = table_id.find_elements_by_tag_name('tr')
#remove the first row that has the header
del rows[0]
current = 0
links = [] * len(rows)
for row in rows:
    col = row.find_elements_by_tag_name('td')[0]
    links.append(col)
    current += 1

page = 0
while(page <= len(rows)):
    links[page].click()
    print('clicked link' + " " + str(page))
    page += 1
    browser.back()
</code></pre>
| 0 | 2016-08-02T15:37:14Z | 38,725,225 | <p>I am not sure you already saw the official Selenium documentation:</p>
<blockquote>
<p>A stale element reference exception is thrown in one of two cases, the first being more common than the second:</p>
<ul>
<li>The element has been deleted entirely.</li>
<li>The element is no longer attached to the DOM.</li>
</ul>
</blockquote>
<p>In your case I think you are hitting the second case: every time you click a link and navigate back inside the loop, the page is reloaded and the DOM changes, so the elements you found earlier go stale. Please check that out.</p>
| 1 | 2016-08-02T15:53:43Z | [
"python",
"selenium",
"web-crawler"
] |
Iterating over multiindex dataframe (Python) and assigning dicts to index-value pairs | 38,725,011 | <p>I am at my wit's end with this... I have a dataframe of three columns (aff_id, mkt and bkgs) I grouped by two of them (aff_id and mkt) :</p>
<pre><code>df_gb_aff = df.groupby(["affiliate_id", 'mkt']).sum()
df_gb_aff.sort('bkgs', ascending=False, inplace=True)
</code></pre>
<p>to give me a multiindex dataframe that looks a bit like this:</p>
<pre><code>                                  bkgs
aff_id       mkt
2508b863a1a4 bcab9d6ec630  1910.707124
6cc5f0e8c96b b7d0dbd38376  1374.924684
188e238326e4 446bb566f202  1206.589522
             dbe759c691eb  1203.979908
6cc5f0e8c96b 0e9013464c4c  1203.532310
</code></pre>
<p>What I want to do now is to iterate over each aff_id, and make a dict of mkt (key) - bkgs (value) pairs, but since each aff_id value has different mkt values, Python throws an error when a df_gb_aff.loc[index_1, index_2] doesn't exist.</p>
<p>I've been getting the indexes with these:</p>
<pre><code>aff_list = df_gb_aff.index.levels[0].values
mkt_list = df_gb_aff.index.levels[1].values
</code></pre>
<p>and trying to iterate over with:</p>
<pre><code>for i in aff_list:
    for j in mkt_list:
        print df_gb_aff.loc[i, j]
</code></pre>
<p>Anyone have a sensible way of doing this?</p>
| 2 | 2016-08-02T15:43:22Z | 38,725,212 | <p>Another solution with dict comprehension:</p>
<pre><code>d = {idx[1]: df_gb_aff.loc[idx, 'bkgs'] for idx in df_gb_aff.index}
print (d)
{'446bb566f202': 1206.589522,
'bcab9d6ec630': 1910.7071239999998,
'0e9013464c4c': 1203.5323100000001,
'dbe759c691eb': 1203.979908,
'b7d0dbd38376': 1374.9246840000001}
print (d['bcab9d6ec630'])
1910.707124
</code></pre>
<p>And if you need to loop over the <code>MultiIndex</code>:</p>
<pre><code>for idx in df_gb_aff.index:
    print (idx)
    print (df_gb_aff.loc[idx])

('2508b863a1a4', 'bcab9d6ec630')
bkgs    1910.707124
Name: (2508b863a1a4, bcab9d6ec630), dtype: float64
('6cc5f0e8c96b', 'b7d0dbd38376')
bkgs    1374.924684
Name: (6cc5f0e8c96b, b7d0dbd38376), dtype: float64
('188e238326e4', '446bb566f202')
bkgs    1206.589522
Name: (188e238326e4, 446bb566f202), dtype: float64
('188e238326e4', 'dbe759c691eb')
bkgs    1203.979908
Name: (188e238326e4, dbe759c691eb), dtype: float64
('6cc5f0e8c96b', '0e9013464c4c')
bkgs    1203.53231
Name: (6cc5f0e8c96b, 0e9013464c4c), dtype: float64
</code></pre>
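Since the question actually asks for one dict per aff_id, a possible variant (my own sketch, using `.loc`-era pandas; the column names match the question) builds a nested mapping, so differing mkt sets per aff_id can never cause a missing-key error:

```python
import pandas as pd

df = pd.DataFrame({
    "affiliate_id": ["a1", "a1", "a2"],
    "mkt":          ["m1", "m2", "m1"],
    "bkgs":         [10.0, 20.0, 30.0],
})
df_gb_aff = df.groupby(["affiliate_id", "mkt"]).sum()

# One {mkt: bkgs} dict per affiliate_id; only index pairs that actually
# exist are visited, so no lookup can fail.
per_aff = {
    aff: grp.droplevel("affiliate_id")["bkgs"].to_dict()
    for aff, grp in df_gb_aff.groupby(level="affiliate_id")
}
print(per_aff)
```

This avoids iterating over the cartesian product of `index.levels[0]` and `index.levels[1]`, which is exactly what produced the error in the question.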
| 1 | 2016-08-02T15:53:20Z | [
"python",
"pandas",
"group-by"
] |
Is it possible to use regular expressions (or more general templates) to define variables? | 38,725,013 | <p>Let us consider the following template:</p>
<pre><code>*aaa*bbb*
</code></pre>
<p>It should return all strings that contain <code>aaa</code> as well as <code>bbb</code> as sub-strings (with the restriction that <code>bbb</code> comes after <code>aaa</code>).</p>
<p>What I want to have is a possibility to use the sub-strings that are (1) before <code>aaa</code>, (2) between <code>aaa</code> and <code>bbb</code>, and (3) after <code>bbb</code> (some of these sub-strings could be empty). So, basically I want to know what stands behind each star. In more detail, I want to use these three sub-strings to construct a new string (output).</p>
<p>For example I might want to interchange the first and the second sub-strings, put <code>ccc</code> between them and remove the last sub-string (as well as <code>aaa</code> and <code>bbb</code>). What I want to do can be expressed in the following more formal way:</p>
<pre><code>{?x1}aaa{?x2}bbb{?x3} -> {?x2}ccc{?x1}
</code></pre>
<p>Note that I have replaced <code>*</code> by <code>{?x1}</code>, <code>{?x2}</code> and <code>{?x3}</code>. In this way I define three variables that I use later.</p>
<p>For example, if I have <code>XXXaaaYYYbbbZZZ</code> as input, I should generate the following string as output: <code>YYYcccXXX</code></p>
<p><strong>ADDED</strong></p>
<p>My question is whether there is a flexible template language that also allows one to define "variables" (parts of the original input sequence that can be used to define a new output sequence). I should probably add that I need a Python solution.</p>
 | 0 | 2016-08-02T15:43:27Z | 38,772,348 | <p>Sounds like you want group references (often called backreferences), which are part of most regular expression libraries, including Python's.</p>
<pre><code>import re

ccc_str = re.sub("(.*)aaa(.*)bbb(.*)", r"\2ccc\1", "XXXaaaYYYbbbZZZ")
</code></pre>
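A runnable variant of the same idea (my own addition), using named groups so the template reads closer to the <code>{?x1}</code> notation from the question; the group names are arbitrary:

```python
import re

# {?x1}aaa{?x2}bbb{?x3}  ->  {?x2}ccc{?x1}
pattern = r"(?P<x1>.*)aaa(?P<x2>.*)bbb(?P<x3>.*)"
result = re.sub(pattern, r"\g<x2>ccc\g<x1>", "XXXaaaYYYbbbZZZ")
print(result)  # YYYcccXXX
```

`\g<name>` in the replacement string refers back to the correspondingly named capture group.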
| 1 | 2016-08-04T16:06:20Z | [
"python",
"regex",
"templates",
"pattern-matching"
] |
Scanning a list - IndexError: list index out of range | 38,725,014 | <p>I have created a Hangman game that has one bug: if you guess the same letter twice it will break. I have created a list of every alphabet letter and when the player guesses that letter it will be removed from the list, which displays the remaining characters available to guess. This is accomplished through the .remove method, which will break if the character has already been removed from a previous guess.</p>
<p>I have attempted to nest this method in a for loop that will scan the alphabet list, check the user's guess for a match in the list, and remove it. If it has already been guessed, then nothing will happen. The error I receive is an index error, presumably related to the list length. My confusion comes from the fact that I can accomplish this exact same task directly below when scanning the hangman word to match the player's guess. Please see the abridged code below:</p>
<pre><code># Play begins and player guesses a letter
player_word = ['_ '] * len(cpu_word)
player_word2 = ['_'] * len(cpu_word)
alphabet = ['a','b',etc.]
print 'You have 10 guesses left'
# Determines if the guess is correct
for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    # This is the previous method that creates a bug:/
    # alphabet.remove(guess)
    for e in xrange(len(alphabet)):
        if alphabet[e] == guess:
            alphabet.remove(guess)
    for i in xrange(len(cpu_word)):
        if cpu_word[i] == guess:
            player_word[i] = cpu_word[i]
            print 'Correct!'
</code></pre>
<p>I have two questions. The first is can someone please explain this error to me, specifically why it works for scanning the hangman word but does not work for scanning the list.</p>
<p>And secondly, can anyone provide a solution for this problem.</p>
<p>I am new to coding so any info is greatly appreciated!</p>
<p>Thanks</p>
| 0 | 2016-08-02T15:43:32Z | 38,725,471 | <p>Your issue is using indexes to loop through the list, and changing the size of the list at the same time. You can either loop through the items directly:</p>
<pre><code>for letter in alphabet:
    if letter == guess:
        alphabet.remove(guess)
</code></pre>
<p>Or you can break when you've removed the letter:</p>
<pre><code>for e in range(len(alphabet)):
    if alphabet[e] == guess:
        alphabet.remove(guess)
        break
</code></pre>
<p><code>break</code> stops execution of the loop, which is fine because you're finished after you've removed the letter.</p>
<p>I think a <code>set</code> would be better than a list because it offers <code>O(1)</code> removal and membership tests. So:</p>
<pre><code>alphabet = ['a','b','c'...] # list, bad
alphabet = {'a','b','c'...} # set, good
</code></pre>
<p>Then your alphabet "loop" would be:</p>
<pre><code>if guess in alphabet:
    alphabet.remove(guess)
</code></pre>
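For completeness, a tiny self-contained sketch (the names are my own) of the set-based bookkeeping, showing that a repeated guess becomes a harmless no-op instead of an error:

```python
import string

remaining = set(string.ascii_lowercase)  # letters not yet guessed

def register_guess(guess, remaining):
    """Remove a guessed letter; repeated guesses are a harmless no-op."""
    if guess in remaining:
        remaining.remove(guess)
        return True          # fresh guess
    return False             # already guessed before

print(register_guess("a", remaining))  # True
print(register_guess("a", remaining))  # False -- no IndexError/ValueError
print(len(remaining))                  # 25
```

`remaining.discard(guess)` would even skip the membership test, silently ignoring letters that were already removed.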
| 1 | 2016-08-02T16:04:20Z | [
"python",
"indexoutofboundsexception"
] |
Scanning a list - IndexError: list index out of range | 38,725,014 | <p>I have created a Hangman game that has one bug: if you guess the same letter twice it will break. I have created a list of every alphabet letter and when the player guesses that letter it will be removed from the list, which displays the remaining characters available to guess. This is accomplished through the .remove method, which will break if the character has already been removed from a previous guess.</p>
<p>I have attempted to nest this method in a for loop that will scan the alphabet list, check the user's guess for a match in the list, and remove it. If it has already been guessed, then nothing will happen. The error I receive is an index error, presumably related to the list length. My confusion comes from the fact that I can accomplish this exact same task directly below when scanning the hangman word to match the player's guess. Please see the abridged code below:</p>
<pre><code># Play begins and player guesses a letter
player_word = ['_ '] * len(cpu_word)
player_word2 = ['_'] * len(cpu_word)
alphabet = ['a','b',etc.]
print 'You have 10 guesses left'
# Determines if the guess is correct
for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    # This is the previous method that creates a bug:/
    # alphabet.remove(guess)
    for e in xrange(len(alphabet)):
        if alphabet[e] == guess:
            alphabet.remove(guess)
    for i in xrange(len(cpu_word)):
        if cpu_word[i] == guess:
            player_word[i] = cpu_word[i]
            print 'Correct!'
</code></pre>
<p>I have two questions. The first is can someone please explain this error to me, specifically why it works for scanning the hangman word but does not work for scanning the list.</p>
<p>And secondly, can anyone provide a solution for this problem.</p>
<p>I am new to coding so any info is greatly appreciated!</p>
<p>Thanks</p>
| 0 | 2016-08-02T15:43:32Z | 38,725,485 | <p>There is no need to loop over the alphabet, you could check if the letter is still in the alphabet with: </p>
<pre><code>if guess in alphabet:
    # And do the function here:
    alphabet.remove(guess)
</code></pre>
<p>Your for-loop contains a flaw. You are removing the guessed letter from the alphabet, but then your for-loop continues. This causes an error because your alphabet is now one letter shorter, while the loop still thinks it is the same size as before. Therefore, break out of the for-loop:</p>
<pre><code>for e in xrange(len(alphabet)):
    if alphabet[e] == guess:
        alphabet.remove(guess)
        break
</code></pre>
| 2 | 2016-08-02T16:04:59Z | [
"python",
"indexoutofboundsexception"
] |
Scanning a list - IndexError: list index out of range | 38,725,014 | <p>I have created a Hangman game that has one bug: if you guess the same letter twice it will break. I have created a list of every alphabet letter and when the player guesses that letter it will be removed from the list, which displays the remaining characters available to guess. This is accomplished through the .remove method, which will break if the character has already been removed from a previous guess.</p>
<p>I have attempted to nest this method in a for loop that will scan the alphabet list, check the user's guess for a match in the list, and remove it. If it has already been guessed, then nothing will happen. The error I receive is an index error, presumably related to the list length. My confusion comes from the fact that I can accomplish this exact same task directly below when scanning the hangman word to match the player's guess. Please see the abridged code below:</p>
<pre><code># Play begins and player guesses a letter
player_word = ['_ '] * len(cpu_word)
player_word2 = ['_'] * len(cpu_word)
alphabet = ['a','b',etc.]
print 'You have 10 guesses left'
# Determines if the guess is correct
for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    # This is the previous method that creates a bug:/
    # alphabet.remove(guess)
    for e in xrange(len(alphabet)):
        if alphabet[e] == guess:
            alphabet.remove(guess)
    for i in xrange(len(cpu_word)):
        if cpu_word[i] == guess:
            player_word[i] = cpu_word[i]
            print 'Correct!'
</code></pre>
<p>I have two questions. The first is can someone please explain this error to me, specifically why it works for scanning the hangman word but does not work for scanning the list.</p>
<p>And secondly, can anyone provide a solution for this problem.</p>
<p>I am new to coding so any info is greatly appreciated!</p>
<p>Thanks</p>
| 0 | 2016-08-02T15:43:32Z | 38,725,500 | <p>Your problem is that after you remove an item from the array, it now has one less item, meaning that you can't go to the former end of the array. If your array of letters is 26 long, and you remove one letter, you can't access the 26th element anymore, because the array now only has 25 elements. After you guess correctly, you could just break, like so:</p>
<pre><code>for e in xrange(len(alphabet)):
    # print alphabet[e], e
    if alphabet[e] == guess:
        alphabet.remove(guess)
        break
</code></pre>
<p>However you could just uncomment the line above, and check if it is in the array before removing it, like this:</p>
<pre><code>for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    if guess in alphabet:
        """This is the previous method that creates a bug:/"""
        alphabet.remove(guess)
        for i in xrange(len(cpu_word)):
            if cpu_word[i] == guess:
                player_word[i] = cpu_word[i]
                print 'Correct!'
</code></pre>
| 0 | 2016-08-02T16:05:41Z | [
"python",
"indexoutofboundsexception"
] |
Scanning a list - IndexError: list index out of range | 38,725,014 | <p>I have created a Hangman game that has one bug: if you guess the same letter twice it will break. I have created a list of every alphabet letter and when the player guesses that letter it will be removed from the list, which displays the remaining characters available to guess. This is accomplished through the .remove method, which will break if the character has already been removed from a previous guess.</p>
<p>I have attempted to nest this method in a for loop that will scan the alphabet list, check the user's guess for a match in the list, and remove it. If it has already been guessed, then nothing will happen. The error I receive is an index error, presumably related to the list length. My confusion comes from the fact that I can accomplish this exact same task directly below when scanning the hangman word to match the player's guess. Please see the abridged code below:</p>
<pre><code># Play begins and player guesses a letter
player_word = ['_ '] * len(cpu_word)
player_word2 = ['_'] * len(cpu_word)
alphabet = ['a','b',etc.]
print 'You have 10 guesses left'
# Determines if the guess is correct
for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    # This is the previous method that creates a bug:/
    # alphabet.remove(guess)
    for e in xrange(len(alphabet)):
        if alphabet[e] == guess:
            alphabet.remove(guess)
    for i in xrange(len(cpu_word)):
        if cpu_word[i] == guess:
            player_word[i] = cpu_word[i]
            print 'Correct!'
</code></pre>
<p>I have two questions. The first is can someone please explain this error to me, specifically why it works for scanning the hangman word but does not work for scanning the list.</p>
<p>And secondly, can anyone provide a solution for this problem.</p>
<p>I am new to coding so any info is greatly appreciated!</p>
<p>Thanks</p>
| 0 | 2016-08-02T15:43:32Z | 38,725,548 | <p>It's a really bad thing remove element from an iterable when you are looping on it or on its length!
Better do something like this</p>
<pre><code>alphabet = ['a', 'b', 'c']
print 'You have 10 guesses left'
for count in range(10):
    guess = raw_input(str('Guess a letter: '))
    if guess in alphabet:
        alphabet.remove(guess)
        print 'Correct!'
    else:
        print 'Wrong!'
    if not alphabet:
        print "Bye"
        break
</code></pre>
| 0 | 2016-08-02T16:07:57Z | [
"python",
"indexoutofboundsexception"
] |
Scanning a list - IndexError: list index out of range | 38,725,014 | <p>I have created a Hangman game that has one bug: if you guess the same letter twice it will break. I have created a list of every alphabet letter and when the player guesses that letter it will be removed from the list, which displays the remaining characters available to guess. This is accomplished through the .remove method, which will break if the character has already been removed from a previous guess.</p>
<p>I have attempted to nest this method in a for loop that will scan the alphabet list, check the user's guess for a match in the list, and remove it. If it has already been guessed, then nothing will happen. The error I receive is an index error, presumably related to the list length. My confusion comes from the fact that I can accomplish this exact same task directly below when scanning the hangman word to match the player's guess. Please see the abridged code below:</p>
<pre><code># Play begins and player guesses a letter
player_word = ['_ '] * len(cpu_word)
player_word2 = ['_'] * len(cpu_word)
alphabet = ['a','b',etc.]
print 'You have 10 guesses left'
# Determines if the guess is correct
for count in range(10)[::-1]:
    guess = raw_input(str('Guess a letter: '))
    # This is the previous method that creates a bug:/
    # alphabet.remove(guess)
    for e in xrange(len(alphabet)):
        if alphabet[e] == guess:
            alphabet.remove(guess)
    for i in xrange(len(cpu_word)):
        if cpu_word[i] == guess:
            player_word[i] = cpu_word[i]
            print 'Correct!'
</code></pre>
<p>I have two questions. The first is can someone please explain this error to me, specifically why it works for scanning the hangman word but does not work for scanning the list.</p>
<p>And secondly, can anyone provide a solution for this problem.</p>
<p>I am new to coding so any info is greatly appreciated!</p>
<p>Thanks</p>
 | 0 | 2016-08-02T15:43:32Z | 38,913,364 | <p>I am running into a similar issue and I am pretty new to Python. Can anyone help with the code below? What I am trying to do is find whether a particular tag is present in an AWS Auto Scaling group and, if that tag is present, update that particular group. The Auto Scaling describe API returns at most 100 groups at once, so I had to use NextToken to page through the next set. But I am getting an error while executing this code.</p>
<pre><code>#!/usr/bin/python
import json
import boto3

def lambda_handler():
    asg = boto3.client('autoscaling')
    cntr = 0
    page = 0
    flag_first = False
    while True and cntr <= 1000:
        asg1 = asg.describe_auto_scaling_groups(MaxRecords=100)
        if asg1['AutoScalingGroups'][cntr]['Tags'][0]['Key'] == "testamritha":
            print "Entered IF Stmt"
            asg.update_auto_scaling_groups(
                AutoScalingGroupName='amritha_test',
                MinSize=0,
                MaxSize=0,
                DesiredCapacity=0,
            )
            flag_first = True
        else:
            cntr += 1
        if cntr == 100 and not flag_first:
            page += 1
            next_token = asg1["NextToken"]
            asg1 = asg.describe_auto_scaling_groups(MaxRecords=100, NextToken=next_token)
    print "Exit: While !!! page count --> ", page, "Counter", cntr

if __name__ == "__main__":
    lambda_handler()
</code></pre>
<p>And the error is as below:</p>
<pre><code>if asg1['AutoScalingGroups'][cntr]['Tags'][0]['Key'] == "testamritha":
</code></pre>
<p>IndexError: list index out of range</p>
| 0 | 2016-08-12T08:29:09Z | [
"python",
"indexoutofboundsexception"
] |
`ValueError: too many values to unpack (expected 4)` with `scipy.stats.linregress` | 38,725,018 | <p>I know that this error message (<code>ValueError: too many values to unpack (expected 4)</code>) appears when a call returns more values than there are variables to unpack them into.</p>
<p><code>scipy.stats.linregress</code> returns 5 values according to the scipy documentation (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html</a>).</p>
<p>Here is a short, reproducible example of a working call, and then a failed call, to <code>linregress</code>:</p>
<p>What accounts for the difference, and why does the second call fail?</p>
<pre><code>from scipy import stats
import numpy as np
if __name__ == '__main__':
    x = np.random.random(10)
    y = np.random.random(10)
    print(x, y)
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)

    '''
    Code above works
    Code below fails
    '''
    X = np.asarray([[-15.93675813],
                    [-29.15297922],
                    [ 36.18954863],
                    [ 37.49218733],
                    [-48.05882945],
                    [ -8.94145794],
                    [ 15.30779289],
                    [-34.70626581],
                    [  1.38915437],
                    [-44.38375985],
                    [  7.01350208],
                    [ 22.76274892]])
    Y = np.asarray([[ 2.13431051],
                    [ 1.17325668],
                    [34.35910918],
                    [36.83795516],
                    [ 2.80896507],
                    [ 2.12107248],
                    [14.71026831],
                    [ 2.61418439],
                    [ 3.74017167],
                    [ 3.73169131],
                    [ 7.62765885],
                    [22.7524283 ]])
    print(X, Y)  # The array initialization succeeds, if both arrays are printed out
    for i in range(1, len(X)):
        slope, intercept, r_value, p_value, std_err = stats.linregress(X[0:i, :], y=Y[0:i, :])
</code></pre>
| -1 | 2016-08-02T15:43:41Z | 38,725,283 | <p>Your problem originates from slicing the <code>X</code> and <code>Y</code> arrays. Also you do not need the <code>for</code> loop. Use the following instead and it should work.</p>
<pre><code>slope, intercept, r_value, p_value, std_err = stats.linregress(X[:,0], Y[:,0])
</code></pre>
| 1 | 2016-08-02T15:56:05Z | [
"python",
"scipy",
"linear-regression"
] |
`ValueError: too many values to unpack (expected 4)` with `scipy.stats.linregress` | 38,725,018 | <p>I know that this error message (<code>ValueError: too many values to unpack (expected 4)</code>) appears when a call returns more values than there are variables to unpack them into.</p>
<p><code>scipy.stats.linregress</code> returns 5 values according to the scipy documentation (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html</a>).</p>
<p>Here is a short, reproducible example of a working call, and then a failed call, to <code>linregress</code>:</p>
<p>What accounts for the difference, and why does the second call fail?</p>
<pre><code>from scipy import stats
import numpy as np
if __name__ == '__main__':
    x = np.random.random(10)
    y = np.random.random(10)
    print(x, y)
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)

    '''
    Code above works
    Code below fails
    '''
    X = np.asarray([[-15.93675813],
                    [-29.15297922],
                    [ 36.18954863],
                    [ 37.49218733],
                    [-48.05882945],
                    [ -8.94145794],
                    [ 15.30779289],
                    [-34.70626581],
                    [  1.38915437],
                    [-44.38375985],
                    [  7.01350208],
                    [ 22.76274892]])
    Y = np.asarray([[ 2.13431051],
                    [ 1.17325668],
                    [34.35910918],
                    [36.83795516],
                    [ 2.80896507],
                    [ 2.12107248],
                    [14.71026831],
                    [ 2.61418439],
                    [ 3.74017167],
                    [ 3.73169131],
                    [ 7.62765885],
                    [22.7524283 ]])
    print(X, Y)  # The array initialization succeeds, if both arrays are printed out
    for i in range(1, len(X)):
        slope, intercept, r_value, p_value, std_err = stats.linregress(X[0:i, :], y=Y[0:i, :])
</code></pre>
| -1 | 2016-08-02T15:43:41Z | 38,729,710 | <p>The issue stems from the fact that your input to <code>np.asarray</code> are lists of single elements lists.</p>
<p>Thus, <code>X</code> and <code>Y</code> both have shape of (12,1):</p>
<pre><code>print(X.shape) # (12, 1) [or (12L, 1L), depending on version]
print(Y.shape) # (12, 1)
</code></pre>
<p>Note that these are each <em>two-dimensional</em> arrays. Even though one of the dimensions is 1, they're still considered two-dimensional.</p>
<p>Now consider this way of creating an array:</p>
<pre><code>x = np.asarray([1,2,3,4,5])
print(x.shape) # (5,)
</code></pre>
<p>Note in this case, since we passed a list of integers to <code>asarray</code>, we got a one-dimensional array.</p>
<p>Your function, when called with two variables, needs each to be one-dimensional arrays. So, you can either create the arrays initially as one-dimensional:</p>
<p>For example, by hand:</p>
<pre><code>X = np.asarray([-15.93675813,
                -29.15297922,
                 36.18954863,
                 37.49218733,
                -48.05882945,
                 -8.94145794,
                 15.30779289,
                -34.70626581,
                  1.38915437,
                -44.38375985,
                  7.01350208,
                 22.76274892])
</code></pre>
<p>Or by list comprehension:</p>
<pre><code>y_data = [[ 2.13431051],
          [ 1.17325668],
          [34.35910918],
          [36.83795516],
          [ 2.80896507],
          [ 2.12107248],
          [14.71026831],
          [ 2.61418439],
          [ 3.74017167],
          [ 3.73169131],
          [ 7.62765885],
          [22.7524283 ]]

Y = np.asarray([e[0] for e in y_data])
</code></pre>
<p>Or by slicing:</p>
<pre><code>Y = np.asarray([[ 2.13431051],
                [ 1.17325668],
                [34.35910918],
                [36.83795516],
                [ 2.80896507],
                [ 2.12107248],
                [14.71026831],
                [ 2.61418439],
                [ 3.74017167],
                [ 3.73169131],
                [ 7.62765885],
                [22.7524283 ]])
Y = Y[:, 0]
</code></pre>
<p>All three methods would result in you having <code>X</code> and <code>Y</code> of shape <code>(12,)</code> (one-dimensional):</p>
<pre><code>print(X.shape) # (12,)
print(Y.shape) # (12,)
</code></pre>
<p>Then, you could use your loop as:</p>
<pre><code>for i in range(3, len(X)):
    slope, intercept, r_value, p_value, std_err = stats.linregress(X[0:i], y=Y[0:i])
    print(slope)
</code></pre>
<p>Note, I started the loop at 3, it's the first value that "makes sense".</p>
<p><strong>Or</strong>, you could keep your arrays unmodified as two-dimensional, and just fix the slicing syntax inside your loop:</p>
<pre><code>for i in range(3, len(X)):
    slope, intercept, r_value, p_value, std_err = stats.linregress(X[0:i, 0], y=Y[0:i, 0])
    print(slope)
</code></pre>
<p>This is the method that was suggested in the answer I was commenting to.</p>
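As a quick runnable illustration of the shape issue (my own sketch, not part of either answer): `ravel()` flattens the `(N, 1)` columns into the one-dimensional arrays that `linregress` expects:

```python
import numpy as np
from scipy import stats

X = np.asarray([[1.0], [2.0], [3.0], [4.0]])  # shape (4, 1) -- two-dimensional
Y = 2.0 * X + 1.0                             # perfectly linear, same shape

# linregress wants one-dimensional inputs; ravel() gives shape (4,)
slope, intercept, r_value, p_value, std_err = stats.linregress(X.ravel(), Y.ravel())
print(slope, intercept)   # slope close to 2, intercept close to 1
```

`X[:, 0]` (as in the answers above) and `X.ravel()` are interchangeable here; `ravel()` just works regardless of which axis has length 1.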
| 0 | 2016-08-02T20:18:15Z | [
"python",
"scipy",
"linear-regression"
] |
HDF5: Is there a way to rename the column names in an existing HDF5 table? | 38,725,032 | <p>I used Pandas to create a large, indexed HDF5 table. I'd like to rename 2 of the columns out of the 12 columns in my table. I would prefer not to rebuild/reindex the table.</p>
<p>Can this be done without copying all the data (140GB)? I'm hoping there are just a couple pieces of metadata in the file that could be easily swapped out with the right command.</p>
<p>This came up for me because I have a few "non-natural" column names with spaces in them, and didn't realize this was an issue until trying to run a select statement.</p>
| 1 | 2016-08-02T15:44:12Z | 38,736,154 | <p>I'm afraid currently there is no way to rename indexed (belonging to <code>data_columns</code>) column as this would require making changes in <code>storer.table.colindexes</code> and in <code>storer.table.description</code> objects and both of them are of the specific types: </p>
<pre><code>In [29]: store.get_storer('df').table
Out[29]:
/df/table (Table(10,)) ''
  description := {
  "index": Int64Col(shape=(), dflt=0, pos=0),
  "a": Int32Col(shape=(), dflt=0, pos=1),
  "b": Int32Col(shape=(), dflt=0, pos=2),
  "c": Int32Col(shape=(), dflt=0, pos=3)}
  byteorder := 'little'
  chunkshape := (3276,)
  autoindex := True
  colindexes := {
    "a": Index(6, medium, shuffle, zlib(1)).is_csi=False,
    "index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
    "c": Index(6, medium, shuffle, zlib(1)).is_csi=False,
    "b": Index(6, medium, shuffle, zlib(1)).is_csi=False}

In [30]: type(store.get_storer('df').table.colindexes)
Out[30]: tables.table._ColIndexes

In [31]: type(store.get_storer('df').table.description)
Out[31]: tables.description.Description
</code></pre>
<p>If you search for a PyTables solution you will find this question asked before, but there is no answer that would allow you to rename columns.</p>
<p>So you may want to recreate your HDF5 file(s)</p>
| 1 | 2016-08-03T06:40:34Z | [
"python",
"pandas",
"hdf5",
"pytables",
"hdfstore"
] |
Deploy Blender to Azure App | 38,725,037 | <p>I have copied <code>Blender.exe</code> and all associated files into <code>Azure API App</code> then try to run it with my custom Python script like this (using <code>System.Dianostics.Process</code>()):</p>
<blockquote>
<p>blender.exe --background --python myscript.py</p>
</blockquote>
<p>But I cannot get it to run properly. Note that it works fine in my local IIS.</p>
<p>So the question is: does Azure App Service support running Blender?
(Blender may need a machine with GPU support to run, and Azure does not offer GPUs yet.)</p>
<p>And if so, how can I see what error is returned from the <code>blender.exe</code> command? (Unfortunately I am unable to remote-desktop into an <code>Azure Api App</code> to run the command manually.)</p>
<p>UPDATED:</p>
<p>I can run the blender script above successfully by hand from the <code>Azure Console</code> command line.
But when I run the script using <code>System.Diagnostics.Process</code>() I get this error on the StandardError stream:</p>
<blockquote>
<p>Fatal Python error: Py_Initialize: can't initialize sys standard streams </p>
<p>OSError: [WinError 6] The handle is invalid</p>
</blockquote>
| 1 | 2016-08-02T15:44:38Z | 38,791,411 | <p>@MinhNguyen, According to the wiki <a href="https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#win32ksys-user32gdi32-restrictions" rel="nofollow">page</a> of Kudu, Azure App Services which include Api App are not support scenarios using GDI+ because of Win32k.sys (User32/GDI32) Restrictions, but blender works with <code>gdi32</code>. So unfortunately blender can not work on Azure Api App, please consideration for Azure Cloud Service or Virtual Machine for blender.</p>
<hr>
<p><strong>Update</strong>:
As @MinhNguyen's comments said, Blender can be run manually in the Kudu console, even though it appears to use GDI (compiling Blender requires gdi32.lib). So the solution is to package blender.exe and the related Python script as a WebJob to run on Azure.</p>
| 1 | 2016-08-05T14:14:11Z | [
"python",
"azure",
"blender",
"azure-api-apps"
] |
No values for varchar in cx_Oracle | 38,725,187 |
<p>I have a column in my Oracle database with type varchar2(256 byte).
I wrote a web server with web.py and cx_Oracle to run a query and fetch the result.
The problem is that I get no values for this column, but curiously it works for another column of the same type.</p>
<p>code:</p>
<pre><code>import cx_Oracle
import json
import web
urls = (
"/", "index",
"/grid", "grid",
)
app = web.application(urls, globals())
web.config.debug = True
connection = cx_Oracle.Connection("TEST_3D/limo1013@10.40.33.160:1521/sdetest")
typeObj = connection.gettype("MDSYS.SDO_GEOMETRY")
class index:
def GET(self):
return "hallo moritz "
class grid:
def GET(self):
web.header('Access-Control-Allow-Origin', '*')
web.header('Access-Control-Allow-Credentials', 'true')
web.header('Content-Type', 'application/json')
cursor = connection.cursor()
cursor.arraysize = 100000 # default = 50
cursor.execute(
"""SELECT a.id , a.json2, d.Classname FROM building a, THEMATIC_SURFACE b, SURFACE_GEOMETRY c, OBJECTCLASS d WHERE a.grid_id_500 = 2728 AND a.id = b.BUILDING_ID AND b.LOD2_MULTI_SURFACE_ID = c.ROOT_ID AND c.GEOMETRY IS NOT NULL AND b.OBJECTCLASS_ID = d.ID""")
obj = cursor.fetchone()
print obj
result = []
for id, json2, classname in cursor:
result.append({
"building_nr":id,"geometry": {
"type":"polygon","coordinates":json2,}, "polygon_typ":classname,})
return result
if __name__ == "__main__":
app.run(web.profiler)
</code></pre>
<p>For json2 I get no values:
[{'building_nr': 1314867, 'geometry': {'type': 'polygon', 'coordinates': None}, 'polygon_typ': 'BuildingWallSurface'},....</p>
<p>What is wrong?</p>
| 0 | 2016-08-02T15:52:09Z | 38,841,858 | <p>Are you using the so-far unreleased version of cx_Oracle which has Oracle object support? See <a href="http://stackoverflow.com/questions/38350314/how-to-convert-sdo-geomtry-in-geojson">How to convert SDO_GEOMTRY in GeoJSON</a></p>
| 0 | 2016-08-09T04:06:10Z | [
"python",
"web.py",
"cx-oracle"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,725,494 | <p><code>[0-254]</code> is an incorrect regex. A character class <code>[...]</code> matches a single character, so <code>[0-254]</code> means "the characters 0 through 2, 5, or 4", not the numbers 0 through 254.</p>
<p>Replace it with <code>(?:25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)</code></p>
<p>So the complete regex is:</p>
<p><code>172\.30\.(?:25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(?:25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)</code></p>
<p>as this post says: <a href="http://stackoverflow.com/questions/5865817/regex-to-match-an-ip-address">Regex to match an IP address</a></p>
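<p>As a quick sanity check (a standalone Python sketch, not part of the Ansible playbook itself), the octet pattern can be exercised with the <code>re</code> module:</p>

```python
import re

# 0-254: 25[0-4], or 2[0-4] plus a digit, or up to two leading digits
OCTET = r"(?:25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
PATTERN = re.compile(r"^172\.30\.{0}\.{0}$".format(OCTET))

assert PATTERN.match("172.30.0.1") is not None
assert PATTERN.match("172.30.254.254") is not None
assert PATTERN.match("172.30.255.1") is None   # 255 is excluded
assert PATTERN.match("172.31.0.1") is None     # wrong second octet
print("all checks passed")
```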
| 1 | 2016-08-02T16:05:29Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,725,523 | <p>You probably want to use <a href="http://docs.ansible.com/ansible/playbooks_filters.html" rel="nofollow">Jinja2 match</a> filter to match regex:</p>
<p>Something like this:</p>
<pre><code>---
- hosts: localhost
gather_facts: no
connection: local
vars:
ip_not_ok: '172.31.0.1'
ip_ok: '172.30.0.1'
tasks:
- debug: msg='OK'
when: ip_ok | match('172.30')
- debug: msg='OK'
when: ip_not_ok | match('172.30')
</code></pre>
| 0 | 2016-08-02T16:06:47Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,725,810 | <p>Regex isn't a good tool for that.</p>
<pre><code>from ipaddress import ip_address
import operator
def ip_check_range(ranges, s):
return all(map(operator.contains,
ranges,
ip_address(s).packed
))
print(ip_check_range([[172], [30], range(255), range(255)], '172.30.1.2')) # => True
print(ip_check_range([[172], [30], range(255), range(255)], '172.30.1.255')) # => False
</code></pre>
<p>Alternatively, if you're on Python < 3.3 and don't have the <code>ipaddress</code> module:</p>
<pre><code>def ip_check_range(ranges, s):
ip = s.split('.')
if len(ip) != 4:
raise ValueError
return all(map(operator.contains,
ranges,
(int(octet) for octet in ip)
))
</code></pre>
| 0 | 2016-08-02T16:23:33Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,725,832 | <p>When you use the '<strong>==</strong>' operator, Python simply compares against the literal string '172.30.[0-254].[0-254]'.</p>
<p>In Ansible you can use Python-style filters such as <code>search</code> or <code>match</code>.</p>
<p>So you need to type something like this:</p>
<pre><code>when: ansible_default_ipv4.address | match("172.30.")
</code></pre>
<p>I tested this in an Ansible playbook to verify it.</p>
| 1 | 2016-08-02T16:24:47Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,726,303 | <p>If you compare networks, you shouldn't care about ranges!</p>
<pre><code>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.0.0'
</code></pre>
<p>This will (generally) match all hosts with IPs 172.30.0.1 - 172.30.255.255.</p>
<p>If you need to compare IP addresses, use <a href="http://docs.ansible.com/ansible/playbooks_filters_ipaddr.html" rel="nofollow">ipaddr</a> filter:</p>
<pre><code>tasks:
- group_by: key=adda
when: ansible_default_ipv4.address | ipaddr('172.30.0.0/16') | ipaddr('bool')
</code></pre>
| 0 | 2016-08-02T16:50:50Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
Regular expression matching for IP range in Ansible Playbooks for grouping | 38,725,204 | <p>I am trying to write a regular expression for a dynamic group in an ansible-playbook for a sample IP range.
If the address range is 172.30.(0 to 254).(0 to 254), my regex is
172.30.[0-254].[0-254]. Is this correct? Even though I have hosts in the range, the tasks are being skipped and no groups are being formed.</p>
<p>tasks:
- group_by: key=adda
when: ansible_default_ipv4.network == '172.30.[0-254].[0-254]'</p>
<p><a href="http://i.stack.imgur.com/ybjdP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ybjdP.png" alt="grouping picture"></a></p>
| 1 | 2016-08-02T15:52:59Z | 38,746,814 | <p>I tried comparing networks and it worked out. So I tried what @ebel suggested and it worked the way I wanted. Thanks.</p>
<p><a href="http://i.stack.imgur.com/Z4dVM.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z4dVM.png" alt="enter image description here"></a></p>
| 0 | 2016-08-03T14:42:56Z | [
"python",
"regex",
"ansible",
"ansible-playbook"
] |
How to change the code to find an approximation to the cube root of both negative and positive numbers? | 38,725,215 | <p>So I'm entirely new to coding and I'm taking the MIT OpenCourseWare course to get started (I'm using the book Introduction to Computation and Programming Using Python).</p>
<p>Also since I'm new here I'm a bit afraid that my question is low quality so please point out if you think I should improve the manner on how I asks questions. </p>
<p>At 3.4 I'm given the code:</p>
<pre><code>x = int(input("Please enter an integer: "))
epsilon = 0.01
numGuesses = 0
low = -100
high = max(1.0, x)
ans = (high + low)/2.0
while abs(ans**2 -x) >= epsilon:
print ("low = ", low, "High=", high, "ans=", ans)
numGuesses += 1
if ans**2 < x:
low = ans
else:
high = ans
ans = (high + low)/2.0
print ("numGuesses =", numGuesses)
print (ans, "Is close to square root of", x)
</code></pre>
<p>So what I tried to do first is understand each line of the code and what it exactly does. I've been given the hint: "Think about changing low to ensure that the answer lies within the region being searched.)</p>
<p>I've tried to change low to a negative number and I tried to add if low is less than 0, then low = -low like this:</p>
<pre><code>x = int(input("Please enter an integer: "))
epsilon = 0.01
numGuesses = 0
low = 0.0
if low < 0:
low = -low
high = max(1.0, x)
ans = (high + low)/2.0
while abs(ans**2 -x) >= epsilon:
print ("low = ", low, "High=", high, "ans=", ans)
numGuesses += 1
if ans**2 < x:
low = ans
else:
high = ans
ans = (high + low)/2.0
print ("numGuesses =", numGuesses)
print (ans, "Is close to square root of", x)
</code></pre>
<p>However I'm probably taking a wrong approach...</p>
| 0 | 2016-08-02T15:53:22Z | 38,725,462 | <p>Your algorithm is a numerical search using the <a href="https://www.encyclopediaofmath.org/index.php/Dichotomy_method" rel="nofollow">dichotomy (bisection) method</a> applied to X^2 - x = 0. It converges (quite fast, actually) towards the solution X = sqrt(x), as long as <code>low</code> is below the solution and <code>high</code> is above it.</p>
<p>In your case <code>max(1.0, x)</code> is always at least sqrt(x) and 0 is never above sqrt(x).
The answer will always be in the region being searched, and you can lower epsilon as much as you want. There is no point in changing the initialisation of <code>low</code>. If you lower it to negative values, it will only require a few more steps, roughly log2(abs(low)).</p>
<p>change</p>
<pre><code> ans**2
</code></pre>
<p>to</p>
<pre><code>ans**3
</code></pre>
<p>and your algorithm converges to the cubic root of x.</p>
<p>For negative numbers, take the cube root of the absolute value and multiply it by -1: for example, the cube root of -27 is -3, since (-3)^3 = -27.
If x < 0, replace x by abs(x), run the algorithm, and return the solution multiplied by -1.</p>
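<p>Putting the two changes together, a minimal standalone sketch of the modified bisection (not MIT's reference solution):</p>

```python
def cube_root(x, epsilon=0.01):
    """Approximate the cube root of x by bisection (sketch)."""
    sign = -1 if x < 0 else 1
    x = abs(x)
    low, high = 0.0, max(1.0, x)
    ans = (low + high) / 2.0
    # narrow [low, high] until ans**3 is within epsilon of x
    while abs(ans**3 - x) >= epsilon:
        if ans**3 < x:
            low = ans
        else:
            high = ans
        ans = (low + high) / 2.0
    return sign * ans

print(cube_root(27))    # close to 3.0
print(cube_root(-27))   # close to -3.0
```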
| -1 | 2016-08-02T16:03:59Z | [
"python"
] |
How to change the code to find an approximation to the cube root of both negative and positive numbers? | 38,725,215 | <p>So I'm entirely new to coding and I'm taking the MIT OpenCourseWare course to get started (I'm using the book Introduction to Computation and Programming Using Python).</p>
<p>Also since I'm new here I'm a bit afraid that my question is low quality so please point out if you think I should improve the manner on how I asks questions. </p>
<p>At 3.4 I'm given the code:</p>
<pre><code>x = int(input("Please enter an integer: "))
epsilon = 0.01
numGuesses = 0
low = -100
high = max(1.0, x)
ans = (high + low)/2.0
while abs(ans**2 -x) >= epsilon:
print ("low = ", low, "High=", high, "ans=", ans)
numGuesses += 1
if ans**2 < x:
low = ans
else:
high = ans
ans = (high + low)/2.0
print ("numGuesses =", numGuesses)
print (ans, "Is close to square root of", x)
</code></pre>
<p>So what I tried to do first is understand each line of the code and what it exactly does. I've been given the hint: "Think about changing low to ensure that the answer lies within the region being searched.)</p>
<p>I've tried to change low to a negative number and I tried to add if low is less than 0, then low = -low like this:</p>
<pre><code>x = int(input("Please enter an integer: "))
epsilon = 0.01
numGuesses = 0
low = 0.0
if low < 0:
low = -low
high = max(1.0, x)
ans = (high + low)/2.0
while abs(ans**2 -x) >= epsilon:
print ("low = ", low, "High=", high, "ans=", ans)
numGuesses += 1
if ans**2 < x:
low = ans
else:
high = ans
ans = (high + low)/2.0
print ("numGuesses =", numGuesses)
print (ans, "Is close to square root of", x)
</code></pre>
<p>However I'm probably taking a wrong approach...</p>
| 0 | 2016-08-02T15:53:22Z | 38,725,489 | <p>This:</p>
<pre><code>low = 0.0
if low < 0:
low = -low
</code></pre>
<p>literally does nothing. You set <code>low = 0</code> and then check whether it's less than 0 (it isn't). Besides, negating 0 just gives 0 again, so the sign flip has no effect.</p>
<p>I'm afraid I don't understand the problem entirely, but it looks like <code>high</code> and <code>low</code> are bounds within which you expect the square/cubed root to be?</p>
<p>If <code>x</code> is negative then you want to make sure your lower bound can encompass it. Maybe setting <code>low = x</code> would help?</p>
| -2 | 2016-08-02T16:05:14Z | [
"python"
] |
Tensorflow DNNClassifier return wrong prediction | 38,725,224 | <p>I am trying to build a sentence classifier with TensorFlow, following the example on the official site, <a href="https://www.tensorflow.org/versions/r0.9/tutorials/tflearn/index.html" rel="nofollow">tf.contrib.learn Quickstart</a>, but using my own data. First I convert all my data (strings of varying lengths) to ids using dictionaries, turning each sentence into an array of integers.</p>
<p>Each record for training has its own assigned label.</p>
<p>The problem is that the predictions are not accurate: some are right, but others are wrong even when the input is identical to a record in the training set.<br>
My code looks something like this:</p>
<pre><code>def launchModelData(values, labels, sample, actionClasses):
#Tensor for trainig data
v = tf.Variable(values)
l = tf.Variable(labels)
#Data Sample
s = tf.Variable(sample)
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=actionClasses)
# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()
# Later, when launching the model
with tf.Session() as sess:
# Run the init operation.
sess.run(init_op)
# Fit model.
classifier.fit(x=v.eval(), y=l.eval(), steps=200)
# Classify one new sample.
new_sample = np.array(s.eval(), dtype=int)
y = classifier.predict(new_sample)
print ('Predictions: {}'.format(str(y)))
return y
</code></pre>
<p>Values and classes example: </p>
<pre><code>[0 1] 0
[0 2] 0
[0 4] 0
[7 8] 1
[7 9] 1
[ 7 13] 1
[14 15] 2
[14 16] 2
[14 18] 2
[20 21] 3
[26 27] 5
[29 27] 5
[31 32] 5
...
</code></pre>
<p>I'm new to TensorFlow, so I'm trying to keep this as simple as possible; any help will be welcome.</p>
<p><strong>EDIT</strong><br>
My actual training data is <a href="https://drive.google.com/open?id=0B3uEZ76zDg_wNXJnT3g5c0lWWjQ" rel="nofollow">this.</a></p>
<p>I tried it with 8 classes and the predictions were fine, so maybe I need a bigger corpus; I will try and show my outputs in a new edit.</p>
<p><strong>EDIT2</strong> </p>
<p>Now I use a composition of five layers [n, 2n, 4n, 8n, 16n], where n = number of classes, and steps = 20000. This reduces the loss and increases the accuracy really well, but again it only works with a few targets (approx. 10); with a bigger number the predictions become wrong.</p>
| 0 | 2016-08-02T15:53:42Z | 38,796,537 | <p>The Estimator in tf.learn is responsible for creating the session and graph. It gets the input tensors via input_fn, and every fit/evaluate/predict call creates a new session and graph. The code should look similar to the following:</p>
<pre><code># Build 3 layer DNN with 10, 20, 10 units respectively.
my_feature = tf.contrib.layers.real_valued_column('my_feature')
classifier = tf.contrib.learn.DNNClassifier(feature_columns=[my_feature], hidden_units=[10, 20, 10], n_classes=actionClasses)
def _my_train_data():
    # input_fn must return a (features dict, labels) pair
    return {'my_feature': tf.constant(values)}, tf.constant(labels)
classifier.fit(input_fn=_my_train_data, steps=200)
# Classify one new sample.
def _my_predict_data():
    # for predict, only the features dict is needed
    return {'my_feature': tf.constant(s)}
y = classifier.predict(input_fn=_my_predict_data)
print ('Predictions: {}'.format(str(y)))
return y
</code></pre>
| 0 | 2016-08-05T19:27:51Z | [
"python",
"machine-learning",
"nlp",
"tensorflow",
"deep-learning"
] |
Tensorflow DNNClassifier return wrong prediction | 38,725,224 | <p>I am trying to build a sentence classifier with TensorFlow, following the example on the official site, <a href="https://www.tensorflow.org/versions/r0.9/tutorials/tflearn/index.html" rel="nofollow">tf.contrib.learn Quickstart</a>, but using my own data. First I convert all my data (strings of varying lengths) to ids using dictionaries, turning each sentence into an array of integers.</p>
<p>Each record for training has its own assigned label.</p>
<p>The problem is that the predictions are not accurate: some are right, but others are wrong even when the input is identical to a record in the training set.<br>
My code looks something like this:</p>
<pre><code>def launchModelData(values, labels, sample, actionClasses):
#Tensor for trainig data
v = tf.Variable(values)
l = tf.Variable(labels)
#Data Sample
s = tf.Variable(sample)
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=actionClasses)
# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()
# Later, when launching the model
with tf.Session() as sess:
# Run the init operation.
sess.run(init_op)
# Fit model.
classifier.fit(x=v.eval(), y=l.eval(), steps=200)
# Classify one new sample.
new_sample = np.array(s.eval(), dtype=int)
y = classifier.predict(new_sample)
print ('Predictions: {}'.format(str(y)))
return y
</code></pre>
<p>Values and classes example: </p>
<pre><code>[0 1] 0
[0 2] 0
[0 4] 0
[7 8] 1
[7 9] 1
[ 7 13] 1
[14 15] 2
[14 16] 2
[14 18] 2
[20 21] 3
[26 27] 5
[29 27] 5
[31 32] 5
...
</code></pre>
<p>I'm new to TensorFlow, so I'm trying to keep this as simple as possible; any help will be welcome.</p>
<p><strong>EDIT</strong><br>
My actual training data is <a href="https://drive.google.com/open?id=0B3uEZ76zDg_wNXJnT3g5c0lWWjQ" rel="nofollow">this.</a></p>
<p>I tried it with 8 classes and the predictions were fine, so maybe I need a bigger corpus; I will try and show my outputs in a new edit.</p>
<p><strong>EDIT2</strong> </p>
<p>Now I use a composition of five layers [n, 2n, 4n, 8n, 16n], where n = number of classes, and steps = 20000. This reduces the loss and increases the accuracy really well, but again it only works with a few targets (approx. 10); with a bigger number the predictions become wrong.</p>
| 0 | 2016-08-02T15:53:42Z | 38,963,313 | <p>In the end I made some changes to the code but saw no progress at all, so I changed the parameters of the DNNClassifier and increased the size of my corpus, and it worked.</p>
<p>At the end these were my parameters, approximately:<br>
-Steps = 25000+<br>
-Layers = [n/2, n, n*2, n*4, n*8]<br>
* n = number of classes<br>
-Corpus size = 30000 samples<br>
-Number of classes = 40 </p>
<p>With this the loss drops to about 0.0945 and the accuracy reaches about 0.896. I don't know whether these changes will help somebody else, but they did for me.</p>
| 0 | 2016-08-15T21:25:47Z | [
"python",
"machine-learning",
"nlp",
"tensorflow",
"deep-learning"
] |
__reversed__ Magic method | 38,725,269 | <p>I have a class Count which takes 3 parameters including self, mystart and myend. It should count from mystart until myend (also reversed) using the magic methods <code>__iter__</code>, <code>__next__</code> and <code>__reversed__</code>. I have implemented all three magic methods, but I am still not sure whether this is the right way to implement the next and reversed methods. Is it possible to call the built-in functions next and reversed inside my <code>__next__</code> and <code>__reversed__</code> methods, or is there a more pythonic way?</p>
<pre><code>class Count:
def __init__(self,mystart,myend):
self.mystart=mystart
self.myend=myend
self.current=mystart
self.reverse=[]
def __iter__(self):
"Returns itself as an Iterator Object"
return self
def __next__(self):
if self.current > self.myend:
raise StopIteration
else:
self.current+=1
return self.current-1
def __reversed__(self):
for i in range(self.myend,self.mystart,-1):
self.reverse.append(i)
return self.reverse
obj1=Count(0,10)
print("FOR LOOP")
for i in obj1:
print (i,end=",")
print ("\nNEXT")
obj2=Count(1,4)
print(next(obj2))
print(next(obj2))
print ("Reversed")
print(reversed(obj1))
</code></pre>
| 0 | 2016-08-02T15:55:33Z | 38,750,543 | <p>Now I have done it using the yield statement. @jedwards, thanks for your tip.</p>
<pre><code>class Count:
def __init__(self, mystart,myend):
self.mystart = mystart
self.myend = myend
self.current=None
def __iter__(self):
self.current = self.mystart
while self.current < self.myend:
yield self.current
self.current += 1
def __next__(self):
if self.current is None:
self.current=self.mystart
if self.current > self.myend:
raise StopIteration
else:
self.current+=1
return self.current-1
def __reversed__(self):
self.current = self.myend
while self.current >= self.mystart:
yield self.current
self.current -= 1
obj1=Count(0,10)
for i in obj1:
print (i)
obj2=reversed(obj1)
for i in obj2:
print (i)
obj3=Count(0,10)
print (next(obj3))
print (next(obj3))
print (next(obj3))
</code></pre>
| 1 | 2016-08-03T17:52:33Z | [
"python",
"magic-methods"
] |
__reversed__ Magic method | 38,725,269 | <p>I have a class Count which takes 3 parameters including self, mystart and myend. It should count from mystart until myend (also reversed) using the magic methods <code>__iter__</code>, <code>__next__</code> and <code>__reversed__</code>. I have implemented all three magic methods, but I am still not sure whether this is the right way to implement the next and reversed methods. Is it possible to call the built-in functions next and reversed inside my <code>__next__</code> and <code>__reversed__</code> methods, or is there a more pythonic way?</p>
<pre><code>class Count:
def __init__(self,mystart,myend):
self.mystart=mystart
self.myend=myend
self.current=mystart
self.reverse=[]
def __iter__(self):
"Returns itself as an Iterator Object"
return self
def __next__(self):
if self.current > self.myend:
raise StopIteration
else:
self.current+=1
return self.current-1
def __reversed__(self):
for i in range(self.myend,self.mystart,-1):
self.reverse.append(i)
return self.reverse
obj1=Count(0,10)
print("FOR LOOP")
for i in obj1:
print (i,end=",")
print ("\nNEXT")
obj2=Count(1,4)
print(next(obj2))
print(next(obj2))
print ("Reversed")
print(reversed(obj1))
</code></pre>
| 0 | 2016-08-02T15:55:33Z | 38,751,057 | <p>You are mixing up Iterators and Iterables:</p>
<p>Iterators:</p>
<ol>
<li>Keep a state associated with their current iteration progress</li>
<li>Implement <code>__next__</code> to get the next state</li>
<li>Implement <code>__iter__</code> to return themselves.</li>
</ol>
<p>Iterables:</p>
<ol>
<li>Contain (or define with some rule) a collection of elements that can be traversed</li>
<li>Implement <code>__iter__</code> to return an iterator that can traverse the elements</li>
<li>can implement <code>__reversed__</code> to return an iterator that goes backwards.</li>
</ol>
<p><a href="https://docs.python.org/3/reference/datamodel.html#object.__reversed__" rel="nofollow">The <code>__reversed__</code> magic method is:</a></p>
<blockquote>
<p>Called (if present) by the reversed() built-in to implement reverse
iteration. It should return <em>a new iterator object</em> that iterates over
all the objects in the container in reverse order.</p>
</blockquote>
<p>So you probably don't want to implement an iterator that can be <code>__reversed__</code> mid-iteration. For example, the implementation <a href="http://stackoverflow.com/a/38750543/5827215">in your answer</a> means that this code:</p>
<pre><code>x = Count(1,10)
for i in x:
for j in x:
print(i,j)
</code></pre>
<p>Will cause an infinite loop, the output is just this pattern repeated:</p>
<pre><code>1 4
1 3
1 2
1 1
</code></pre>
<p>The reason is that both <code>for</code> loops change <code>self.current</code> in opposite directions: the outer loop increments it by 1, then the inner loop sets it to <code>self.myend</code> and lowers it back down, and the process repeats.</p>
<p>The only way to properly implement all three magic methods is to use two classes, one for the iterator and one for the iterable:</p>
<pre><code>class _Count_iter:
def __init__(self, start, stop, step=1):
self.current = start
self.step = step
self.myend = stop
def __iter__(self):return self
def __next__(self):
#if current is one step over the end
if self.current == self.myend+self.step:
raise StopIteration
else:
self.current+=self.step
return self.current-self.step
class Count:
def __init__(self, mystart,myend):
self.mystart = mystart
self.myend = myend
def __iter__(self):
return _Count_iter(self.mystart,self.myend,1)
def __reversed__(self):
return _Count_iter(self.myend, self.mystart, -1)
</code></pre>
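<p>To sanity-check this design, here is the same pair of classes with a small demo appended (repeated so the snippet runs standalone):</p>

```python
class _Count_iter:
    """Iterator: holds the iteration state."""
    def __init__(self, start, stop, step=1):
        self.current = start
        self.step = step
        self.myend = stop
    def __iter__(self):
        return self
    def __next__(self):
        # stop once current has moved one step past the end
        if self.current == self.myend + self.step:
            raise StopIteration
        self.current += self.step
        return self.current - self.step

class Count:
    """Iterable: hands out a fresh iterator on every traversal."""
    def __init__(self, mystart, myend):
        self.mystart = mystart
        self.myend = myend
    def __iter__(self):
        return _Count_iter(self.mystart, self.myend, 1)
    def __reversed__(self):
        return _Count_iter(self.myend, self.mystart, -1)

c = Count(1, 3)
print(list(c))            # [1, 2, 3]
print(list(reversed(c)))  # [3, 2, 1]
# independent iterators mean nested loops no longer interfere
print([(i, j) for i in c for j in c])
```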
| 1 | 2016-08-03T18:24:11Z | [
"python",
"magic-methods"
] |
OnSerialData() Event in Python? | 38,725,307 | <p>I am using the <a href="https://pythonhosted.org/pyserial/" rel="nofollow">pyserial</a> python library to read serial data from an Arduino. Polling for new data would require me to implement an <code>update()</code> method that I must call several times a second. This would be slow and CPU intensive even when there is no communication happening.</p>
<p>Is there an <code>OnSerialData()</code> event I can use? A routine that will execute every time new serial data arrives in the buffer? Most other languages I've worked with have an equivalent.</p>
<p>I am fairly unfamiliar with <code>threading</code> but have a feeling it is involved.</p>
| 0 | 2016-08-02T15:56:54Z | 38,732,911 | <p>A standard approach is to use a thread for this.</p>
<p>Something like this should work:</p>
<pre><code>import threading
import serial
exit_loop = False
def reader_thread(ser):
while not exit_loop:
ch = ser.read(1)
do_something(ch)
def do_something(ch):
print "got a character:", ch
ser = serial.serial_for_url(...)
thr = threading.Thread(target = reader_thread, args=[ser])
thr.start()
# when ready to shutdown...
exit_loop = True
if hasattr(ser, 'cancel_read'):
ser.cancel_read()
thr.join()
</code></pre>
<p>Also see the <code>serial.threaded</code> module (also contained in the pyserial library.)</p>
| 1 | 2016-08-03T01:28:11Z | [
"python",
"multithreading",
"python-2.7",
"events",
"serial-port"
] |
Django redirect URL with parameter, inside urls.py file? | 38,725,314 | <p>I'm using Django 1.9. Is there any way to redirect a URL with a parameter in my urls.py file?</p>
<p>I want to permanently redirect a URL like <code>/org/123/</code> to the corresponding URL <code>/neworg/123</code>.</p>
<p>I know <a href="http://stackoverflow.com/questions/3139973/django-return-httpresponseredirect-to-an-url-with-a-parameter">how to redirect within a view</a>, but I'm wondering if there's any way to do it solely inside <code>urls.py</code>.</p>
| 0 | 2016-08-02T15:57:15Z | 38,725,574 | <p>You can use <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/base/#redirectview" rel="nofollow"><code>RedirectView</code></a>. As long as the old and new url patterns have the same args and kwargs, you can use <code>pattern_name</code> to specify the url pattern to redirect to.</p>
<pre><code>from django.views.generic.base import RedirectView
urlpatterns = [
url(r'^neworg/(?P<pk>\d+)/$', new_view, name='new_view'),
url(r'^org/(?P<pk>\d+)/$', RedirectView.as_view(pattern_name='new_view'), name='old_view')
]
</code></pre>
| 2 | 2016-08-02T16:09:16Z | [
"python",
"django"
] |
How to check if a specific field contains an error in Django | 38,725,393 | <p>How do you check if a specific form field contains an error in a Django view?</p>
<p>I have a sign up form and when a username throws an error I would like to display a flash message at the top of the page with the error.</p>
<p>Basically something along the lines of: </p>
<pre><code>if username_field in form has errors:
messages.warning(request, "...")
</code></pre>
| 0 | 2016-08-02T16:00:57Z | 38,725,432 | <p>To get the form errors as a Python dict, just use <code>your_form.errors</code>. Its keys are field names, so check for the field by name:</p>
<pre><code>if 'username_field' in your_form.errors:
messages.warning(request, "...")
</code></pre>
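<p>Since <code>form.errors</code> behaves like a dict mapping field names to lists of messages, the check can be sketched without a running Django project (the field name and message below are made up):</p>

```python
# Stand-in for form.errors after a failed is_valid() call
form_errors = {"username": ["A user with that username already exists."]}

warnings = []
if "username" in form_errors:
    for msg in form_errors["username"]:
        # in a real view: messages.warning(request, msg)
        warnings.append(msg)

print(warnings)
```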
| 2 | 2016-08-02T16:02:44Z | [
"python",
"django"
] |
result when __all__ is not defined in __init__.py of package? | 38,725,419 | <p>I am just learning and practicing Python. Along the way I am reading about Python packages and how to import them into other modules or packages, at <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow">Modules</a>, and I assume the following scenario:</p>
<p>I have a package like this:</p>
<pre><code>Video/
__init__.py
formats/
__init__.py
mkv.py
mp4.py
length/
__init__.py
morethan20min.py
lessthan20min.py
</code></pre>
<p>and in no </p>
<pre><code>__init__.py
</code></pre>
<p>I have not defined</p>
<pre><code>__all__
</code></pre>
<p>what happens if I have import statements like these?</p>
<pre><code>import Video.formats.mkv
from Video.formats import *
</code></pre>
<p>Since I have already imported the mkv module in the first statement, what exactly happens after the second import statement executes? I didn't get the concept after reading the linked page.</p>
| 0 | 2016-08-02T16:02:08Z | 38,725,840 | <p>When you do</p>
<pre><code>from whatever_package import *
</code></pre>
<p>first, if the package's <code>__init__.py</code> hasn't been run yet, it will be run. (If you've already done <code>import whatever_package.something_specific</code>, the package's <code>__init__.py</code> will have already been run.)</p>
<p>Then, if <code>whatever_package.__init__</code> does not define an <code>__all__</code> list, the import will pick up all <em>current</em> contents of the <code>whatever_package</code> object*. That'll be anything defined in <code>__init__.py</code> and any submodules that have already been explicitly imported by any code that has executed in your program. For example, if <code>whatever_package</code>'s <code>__init__.py</code> is empty, you do</p>
<pre><code>import whatever_package.something_specific
from whatever_package import *
import whatever_package.other_thing
</code></pre>
<p>and no other import statements relating to <code>whatever_package</code> exist in your program, then the <code>import *</code> will pick up <code>something_specific</code>, but not any other submodules of <code>whatever_package</code>, such as <code>other_thing</code>.</p>
<hr>
<p>*excluding anything that begins with an underscore, as is standard for any <code>import *</code> with no <code>__all__</code> list, whether you're importing from a package or a normal module.</p>
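As a runnable illustration of the behaviour described above (the package name <code>demo_pkg</code> and its submodules are invented for this sketch), you can build a throwaway package with an empty <code>__init__.py</code> and watch what a star import picks up:

```python
import os
import sys
import tempfile

# Build a throwaway package on disk: empty __init__.py (so no __all__), two submodules.
base = tempfile.mkdtemp()
pkg = os.path.join(base, "demo_pkg")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mkv.py"), "w") as f:
    f.write("codec = 'matroska'\n")
with open(os.path.join(pkg, "mp4.py"), "w") as f:
    f.write("codec = 'mpeg4'\n")
sys.path.insert(0, base)

import demo_pkg.mkv  # explicitly import one submodule first

ns = {}
exec("from demo_pkg import *", ns)  # star import, captured in a fresh namespace
print("mkv" in ns)  # True  - already imported, so it is an attribute of the package
print("mp4" in ns)  # False - never imported, and no __all__ names it
```

Adding <code>__all__ = ['mkv', 'mp4']</code> to <code>demo_pkg/__init__.py</code> would make the star import load both submodules regardless of what was imported beforehand.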
| 2 | 2016-08-02T16:25:09Z | [
"python",
"mkv"
] |
How to Use Tkinter Entry like raw_input | 38,725,442 | <p>I'm trying to create a GUI for a TCP socket; the main function asks for the server address and waits for an answer before proceeding. This is the code:</p>
<pre><code>print("Welcome to TCP Socket")
address = raw_input("Insert server address: ")
print("Connecting to " + address)
...
</code></pre>
<p>Now I have an Entry widget from Tkinter called iTxt to get user input, and I use its get() method to read the input, but the result isn't the same as with <code>raw_input()</code> and I can't figure out how to emulate <code>raw_input</code>. Can someone help me?<br>
Thanks</p>
| -1 | 2016-08-02T16:03:10Z | 38,725,800 | <p>You cannot completely emulate raw_input in Tkinter, since Tkinter is event-driven. When you use raw_input, program execution blocks until you provide input and press Enter. When you use get(), you get whatever is currently in the widget, even if the entry is still empty.</p>
<p>The simplest way to get some kind of blocking behaviour is to use a modal dialog. But the best way to go is to dive into event-driven programming: when something is input (and probably OK pressed), Tkinter triggers an event, and as a reaction to this event you call an event handler function that processes the input.</p>
<p>All GUIs use event-driven programming, so it pays off to invest (quite) some time in it.</p>
| -1 | 2016-08-02T16:22:55Z | [
"python",
"tkinter",
"raw-input"
] |
How to Use Tkinter Entry like raw_input | 38,725,442 | <p>I'm trying to create a GUI for a TCP socket; the main function asks for the server address and waits for an answer before proceeding. This is the code:</p>
<pre><code>print("Welcome to TCP Socket")
address = raw_input("Insert server address: ")
print("Connecting to " + address)
...
</code></pre>
<p>Now I have an Entry widget from Tkinter called iTxt to get user input, and I use its get() method to read the input, but the result isn't the same as with <code>raw_input()</code> and I can't figure out how to emulate <code>raw_input</code>. Can someone help me?<br>
Thanks</p>
| -1 | 2016-08-02T16:03:10Z | 38,726,361 | <p>If this is running inside an existing GUI, you can create a modal dialog with a Toplevel, using the method <code>wait_window</code> to block until the window is destroyed. If you want to use a popup window in an otherwise non-GUI program, you can create a little self-contained tkinter program in a function which returns a value when the root window is destroyed. </p>
<p>In either case, the technique is to wait for the window to be destroyed, and then fetch the value that was in the window. Since the window will have been destroyed, you must use a <code>StringVar</code> since it won't be destroyed along with the window.</p>
<p>Here is an example that assumes no GUI is already running:</p>
<pre><code>import tkinter as tk
def gui_input(prompt):
root = tk.Tk()
# this will contain the entered string, and will
# still exist after the window is destroyed
var = tk.StringVar()
# create the GUI
label = tk.Label(root, text=prompt)
entry = tk.Entry(root, textvariable=var)
label.pack(side="left", padx=(20, 0), pady=20)
entry.pack(side="right", fill="x", padx=(0, 20), pady=20, expand=True)
# Let the user press the return key to destroy the gui
entry.bind("<Return>", lambda event: root.destroy())
# this will block until the window is destroyed
root.mainloop()
# after the window has been destroyed, we can't access
# the entry widget, but we _can_ access the associated
# variable
value = var.get()
return value
print("Welcome to TCP Socket")
address = gui_input("Insert server address:")
print("Connecting to " + address)
</code></pre>
<p>If you already have a GUI that is running, you can replace <code>tk.Tk()</code> with <code>tk.Toplevel()</code> to create a popup window, and then use <code>.wait_window()</code> rather than <code>.mainloop()</code> to wait for the window to be destroyed. </p>
| 2 | 2016-08-02T16:53:16Z | [
"python",
"tkinter",
"raw-input"
] |
Groupby With Multi Index | 38,725,456 | <p>I'm trying to build a data frame like the one below using pandas where Asum only gets a value if there are intervals 1 and 3 on that day. The closest I've gotten to something working is using this:</p>
<pre><code> df['ASum']=df.groupby(level=['DateTime'])['A'].sum()
</code></pre>
<p>But when I run it, it returns NaN all the way down ASum. Any ideas on how to do this are appreciated.</p>
<pre class="lang-none prettyprint-override"><code> A B ASum
DateTime INT
2016-07-05 3 4700.0 4700.0 0
2016-07-06 1 5906.0 6830.0 0
3 1090.0 1090.0 6996
2016-07-07 1 7969.0 5273.0 0
3 1971.0 1971.0 9940
2016-07-08 1 3296.0 2764.0 0
3 1179.0 1179.0 4475
2016-07-11 1 4993.0 5798.0 0
3 1325.0 1325.0 6318
</code></pre>
| 2 | 2016-08-02T16:03:40Z | 38,726,325 | <pre><code>df['ASum'] = 0 # the new column MUST be defined ahead
for idx,data in df.groupby(level=['DateTime']):
if all(x in data.index.get_level_values('INT') for x in [1,3]):
        df.loc[(idx, data.index.get_level_values('INT')[-1]), 'ASum'] = data['A'].sum() # adds the sum to the last row in the group only (direct .loc assignment; chained indexing would silently set a copy)
</code></pre>
<p>Which results:</p>
<pre><code> A ASum
DateTime INT
2016-07-05 3 4700 0
2016-07-06 1 5906 0
3 1090 6996
2016-07-07 1 7967 0
3 1971 9938
2016-07-08 1 3296 0
3 119 3415
2016-07-11 1 4993 0
3 1325 6318
</code></pre>
<p><strong>Or</strong> if you want the sum to appear where <code>INT==3</code> (and not necessarily on the last line):</p>
<pre><code>df['ASum'] = 0
for idx,data in df.groupby(level=['DateTime']):
if all(x in data.index.get_level_values('INT') for x in [1,3]):
df.loc[(idx,3),'ASum'] = data['A'].sum() # << changed this line only
</code></pre>
<p>(Until I'll come up with some aggregative solution)</p>
| 0 | 2016-08-02T16:51:41Z | [
"python",
"pandas",
"multi-index"
] |
Groupby With Multi Index | 38,725,456 | <p>I'm trying to build a data frame like the one below using pandas where Asum only gets a value if there are intervals 1 and 3 on that day. The closest I've gotten to something working is using this:</p>
<pre><code> df['ASum']=df.groupby(level=['DateTime'])['A'].sum()
</code></pre>
<p>But when I run it, it returns NaN all the way down ASum. Any ideas on how to do this are appreciated.</p>
<pre class="lang-none prettyprint-override"><code> A B ASum
DateTime INT
2016-07-05 3 4700.0 4700.0 0
2016-07-06 1 5906.0 6830.0 0
3 1090.0 1090.0 6996
2016-07-07 1 7969.0 5273.0 0
3 1971.0 1971.0 9940
2016-07-08 1 3296.0 2764.0 0
3 1179.0 1179.0 4475
2016-07-11 1 4993.0 5798.0 0
3 1325.0 1325.0 6318
</code></pre>
| 2 | 2016-08-02T16:03:40Z | 38,729,389 | <p>Here is a solution based on unstacking the <code>INT</code> level, taking the sum and stacking it back.</p>
<pre><code>import pandas as pd
midx = pd.MultiIndex(levels=[['2016-07-05', '2016-07-06', '2016-07-07',
'2016-07-08', '2016-07-11'], [1, 3]],
labels=[[0, 1, 1, 2, 2, 3, 3, 4, 4],
[1, 0, 1, 0, 1, 0, 1, 0, 1]],
names=['DateTime', 'INT'])
df = pd.DataFrame({'A': [4700.0, 5906.0, 1090.0, 7969.0, 1971.0,
3296.0, 1179.0, 4993.0, 1325.0],
'B': [4700.0, 6830.0, 1090.0, 5273.0, 1971.0,
2764.0, 1179.0, 5798.0, 1325.0]},
index=midx)
df = df.unstack(level='INT')
df[('Asum', 3)] = df['A'].sum(axis=1, skipna=False)
df = df.stack(level='INT').fillna(0)
print(df)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> A B Asum
DateTime INT
2016-07-05 3 4700.0 4700.0 0.0
2016-07-06 1 5906.0 6830.0 0.0
3 1090.0 1090.0 6996.0
2016-07-07 1 7969.0 5273.0 0.0
3 1971.0 1971.0 9940.0
2016-07-08 1 3296.0 2764.0 0.0
3 1179.0 1179.0 4475.0
2016-07-11 1 4993.0 5798.0 0.0
3 1325.0 1325.0 6318.0
</code></pre>
| 3 | 2016-08-02T19:57:30Z | [
"python",
"pandas",
"multi-index"
] |
Add the count value of occurrences of different values in a table as a new column to the table using pandas | 38,725,483 | <p>In the following table, how can I count the number of occurrences of different User IDs and make a new table that has the User IDs and the count values only.</p>
<p><a href="http://i.stack.imgur.com/DRN6z.png" rel="nofollow"><img src="http://i.stack.imgur.com/DRN6z.png" alt="enter image description here"></a></p>
<p>For example, I want a new table that looks like this:</p>
<pre><code>User count
5173 3
5175 2
5181 1
5183 2
</code></pre>
| 1 | 2016-08-02T16:04:58Z | 38,725,559 | <p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow">value_counts()</a> method:</p>
<pre><code>df.User.value_counts().to_frame('count').reset_index().rename(columns=dict(index='User'))
</code></pre>
<p>or if you want to keep <code>User</code> column as an index:</p>
<pre><code>df.User.value_counts().to_frame('count')
</code></pre>
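As a quick runnable sketch (the user IDs below are made up to mirror the question's table), here is both the standalone count table and the count attached as a new column, which is what the question title asks for:

```python
import pandas as pd

df = pd.DataFrame({"User": [5173, 5175, 5173, 5183, 5181, 5173, 5175, 5183]})

# Standalone table of counts per User
counts = (df["User"].value_counts()
          .rename_axis("User")           # make sure the index is named
          .reset_index(name="count")
          .sort_values("User")
          .reset_index(drop=True))
print(counts)

# Or attach each User's count as a new column on the original table
df["count"] = df.groupby("User")["User"].transform("size")
print(df.head())
```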
| 2 | 2016-08-02T16:08:38Z | [
"python",
"pandas",
"dataframe"
] |
Unable to download nltk data | 38,725,583 | <pre><code>import nltk
nltk.download()
</code></pre>
<p>It shows SSL: Certificate Verify Failed. With requests I would use verify=False, but what can I do here?</p>
| 0 | 2016-08-02T16:09:50Z | 38,755,535 | <p>If you want to download manually (for example, if you need the tokenizers/punkt data), you can download it directly from:</p>
<p>raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/tokenizers/punkt.zip</p>
<p>and place the extracted punkt folder in C:\nltk_data\tokenizers.</p>
| 0 | 2016-08-04T00:05:56Z | [
"python",
"python-2.7",
"ssl",
"nltk"
] |
Python XML element found but evaluates to False - how to check existence pythonically? | 38,725,683 | <p>I am using <code>xml.etree</code> in Python to parse a SOAP response (don't ask...). The response contains a <code><Success /></code> element.</p>
<p>I go and search it, find it and get an <code>xml.etree.ElementTree.Element</code> instance, let's call it <code>my_element</code>.</p>
<p>Yet said instance evaluates to <code>False</code></p>
<ul>
<li><code>bool(my_element)</code> is False </li>
<li><code>my_element.__nonzero__()</code> is <code>False</code> (using Python 2.7, otherwise I'd check <code>__bool__()</code> of course). </li>
</ul>
<p>I assume that is because <code>my_element.text</code> is empty, as <code><Success /></code> is an empty xml element.</p>
<p>I also assume this is a pythonic thing to do, as empty lists and dicts behave similarly - even though I think the meaning of an empty but existing XML element is different: What is the most pythonic way to check whether it is there? </p>
<p>Is it really the following?</p>
<pre><code>from xml.etree.ElementTree import Element
...
if isinstance(my_element, Element):
</code></pre>
| 0 | 2016-08-02T16:16:48Z | 38,726,353 | <p>No, <code>isinstance</code> is not the recommended way to check for the non-existence of an <code>Element</code>.</p>
<p>From <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#element-objects" rel="nofollow">the docs</a>: </p>
<blockquote>
<p>Caution: Elements with no subelements will test as False. This behavior will change in future versions. Use specific <code>len(elem)</code> or <code>elem is None</code> test instead.</p>
</blockquote>
<p>From <a href="https://hg.python.org/cpython/file/2.7/Lib/xml/etree/ElementTree.py#l250" rel="nofollow">the source</a>:</p>
<blockquote>
<pre><code> warnings.warn(
"The behavior of this method will change in future versions. "
"Use specific 'len(elem)' or 'elem is not None' test instead.",
FutureWarning, stacklevel=2
)
</code></pre>
</blockquote>
<p>To test for the non-existence of an element, do something like this:</p>
<pre><code>my_element = tree.find('.//Success')
if my_element is not None:
do_something(my_element)
</code></pre>
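A self-contained sketch of the pitfall and the recommended checks (the XML string here is a stand-in for the SOAP response):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<Response><Success/></Response>")

success = root.find("Success")
print(success is not None)  # True  - the element exists
print(len(success))         # 0     - no children, which is why the element tests falsey

missing = root.find("Failure")
print(missing is None)      # True  - find() returns None when the element is absent
```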
| 0 | 2016-08-02T16:52:57Z | [
"python",
"xml",
"python-2.7"
] |
Are there any examples of anomaly detection algorithms implemented with TensorFlow? | 38,725,851 | <p>I'm fairly new to this subject and I am working on a project that deals with detecting anomalies in time series data. I want to use TensorFlow so that I could potentially deploy the model onto a mobile device. I'm having a difficult time finding relevant material and examples of anomaly detection algorithms implemented in TensorFlow. </p>
<p>Some algorithms I'm looking into are clustering algorithms for classifying windowed samples and Holt winters for streaming data. </p>
<p>Any example would help me tremendously! </p>
| 3 | 2016-08-02T16:25:48Z | 40,004,981 | <p>Here is an example of sequential filtering using Holt-Winters. The same pattern should work for other types of sequential modeling such as the Kalman filter.</p>
<pre><code>from matplotlib import pyplot
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
seasonality = 10
def model_fn(features, targets):
"""Defines a basic Holt-Winters sequential filtering model in TensorFlow.
See http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc435.htm"""
times = features["times"]
values = features["values"]
# Initial estimates
initial_trend = tf.reduce_sum(
(values[seasonality:2*seasonality] - values[:seasonality])
/ seasonality ** 2)
initial_smoothed_observation = values[0]
# Seasonal indices are multiplicative, so having them near 0 leads to
# instability
initial_seasonal_indices = 1. + tf.exp(
tf.get_variable("initial_seasonal_indices", shape=[seasonality]))
with tf.variable_scope("smoothing_parameters",
initializer=tf.zeros_initializer):
# Trained scalars for smoothing, transformed to be in (0, 1)
observation_smoothing = tf.sigmoid(
tf.get_variable(name="observation_smoothing", shape=[]))
trend_smoothing = tf.sigmoid(
tf.get_variable(name="trend_smoothing", shape=[]))
seasonal_smoothing = tf.sigmoid(
tf.get_variable(name="seasonal_smoothing", shape=[]))
def filter_function(
current_index, seasonal_indices, previous_smoothed_observation,
previous_trend, previous_loss_sum):
current_time = tf.gather(times, current_index)
current_observation = tf.gather(values, current_index)
current_season = current_time % seasonality
one_step_ahead_prediction = (
(previous_smoothed_observation + previous_trend)
* tf.gather(seasonal_indices, current_season))
new_loss_sum = previous_loss_sum + (
one_step_ahead_prediction - current_observation) ** 2
new_smoothed_observation = (
(observation_smoothing * current_observation
/ tf.gather(seasonal_indices, current_season))
+ ((1. - observation_smoothing)
* (previous_smoothed_observation + previous_trend)))
new_trend = (
(trend_smoothing
* (new_smoothed_observation - previous_smoothed_observation))
+ (1. - trend_smoothing) * previous_trend)
updated_seasonal_index = (
seasonal_smoothing * current_observation / new_smoothed_observation
+ ((1. - seasonal_smoothing)
* tf.gather(seasonal_indices, current_season)))
new_seasonal_indices = tf.concat(
concat_dim=0,
values=[seasonal_indices[:current_season],
[updated_seasonal_index],
seasonal_indices[current_season + 1:]])
# Preserve shape to keep the while_loop shape invariants happy
new_seasonal_indices.set_shape(seasonal_indices.get_shape())
return (current_index + 1, new_seasonal_indices, new_smoothed_observation,
new_trend, new_loss_sum)
def while_run_condition(current_index, *unused_args):
return current_index < tf.shape(times)[0]
(_, final_seasonal_indices, final_smoothed_observation, final_trend,
sum_squared_errors) = tf.while_loop(
cond=while_run_condition,
body=filter_function,
loop_vars=[0, initial_seasonal_indices, initial_smoothed_observation,
initial_trend, 0.])
normalized_loss = sum_squared_errors / tf.cast(tf.shape(times)[0],
dtype=tf.float32)
train_op = tf.contrib.layers.optimize_loss(
loss=normalized_loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=0.1,
optimizer="Adam")
prediction_times = tf.range(30)
prediction_values = (
(final_smoothed_observation + final_trend * tf.cast(prediction_times,
dtype=tf.float32))
* tf.cast(tf.gather(params=final_seasonal_indices,
indices=prediction_times % seasonality),
dtype=tf.float32))
predictions = {"times": prediction_times,
"values": prediction_values}
return predictions, normalized_loss, train_op
# Create a synthetic time series with seasonality, trend, and a little noise
series_length = 50
times = np.arange(series_length, dtype=np.int32)
values = 5. + (
0.02 * times + np.sin(times * 2 * np.pi / float(seasonality))
+ np.random.normal(size=[series_length], scale=0.2)).astype(np.float32)
# Define an input function to feed the data into our model
input_fn = lambda: ({"times":tf.convert_to_tensor(times, dtype=tf.int32),
"values":tf.convert_to_tensor(values, dtype=tf.float32)},
{})
# Wrap the model in a tf.learn Estimator for training and inference
estimator = tf.contrib.learn.Estimator(model_fn=model_fn)
estimator.fit(input_fn=input_fn, steps=500)
predictions = estimator.predict(input_fn=input_fn, as_iterable=False)
# Plot the training data and predictions
pyplot.plot(range(series_length), values)
pyplot.plot(series_length + predictions["times"], predictions["values"])
pyplot.show()
</code></pre>
<p>(I was using TensorFlow 0.11.0rc0 when writing this)</p>
<p><a href="https://i.stack.imgur.com/051p5.png" rel="nofollow">Output of Holt-Winters on synthetic data: training data followed by predictions.</a></p>
<p>However, this code will be quite slow when scaling up to longer time series. The issue is that TensorFlow (and most other tools for automatic differentiation) do not have great performance on sequential computations (looping). Usually this is ameliorated by batching data and operating on large chunks. It's somewhat tricky to do for sequential models, since there is state which needs to be passed from one timestep to the next.</p>
<p>A much faster (but maybe less satisfying) approach is to use an autoregressive model. This has the added benefit of being very easy to implement in TensorFlow:</p>
<pre><code>import numpy as np
from matplotlib import pyplot
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
seasonality = 10
# Create a synthetic time series with seasonality, trend, and a little noise
series_length = 50
times = np.arange(series_length, dtype=np.int32)
values = 5. + (0.02 * times + np.sin(times * 2 * np.pi / float(seasonality))
+ np.random.normal(size=[series_length], scale=0.2)).astype(
np.float32)
# Parameters for stochastic gradient descent
batch_size = 16
window_size = 10
# Define a column format for the linear regression
input_column = tf.contrib.layers.real_valued_column(column_name="input_window",
dimension=window_size)
def training_input_fn():
window_starts = tf.random_uniform(shape=[batch_size], dtype=tf.int32,
maxval=series_length - window_size - 1)
element_indices = (tf.expand_dims(window_starts, 1)
+ tf.expand_dims(tf.range(window_size), 0))
return ({input_column: tf.gather(values, element_indices)},
tf.gather(values, window_starts + window_size))
estimator = tf.contrib.learn.LinearRegressor(feature_columns=[input_column])
estimator.fit(input_fn=training_input_fn, steps=500)
predictions = list(values[-10:])
def predict_input_fn():
return ({input_column: tf.reshape(predictions[-10:], [1, 10])}, {})
predict_length = 30
for i in xrange(predict_length):
prediction = estimator.predict(input_fn=predict_input_fn, as_iterable=False)
predictions.append(prediction[0])
predictions = predictions[10:]
pyplot.plot(range(series_length), values)
pyplot.plot(series_length + np.arange(predict_length), predictions)
pyplot.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/SmZX6.png" rel="nofollow">Output of the autoregressive model on the same synthetic dataset.</a></p>
<p>Notice that since the model has no state to keep, we can do mini-batch stochastic gradient descent very easily.</p>
<p>For clustering, something like <a href="http://stackoverflow.com/questions/33621643/how-would-i-implement-k-means-with-tensorflow">k-means</a> could work for time series.</p>
| 1 | 2016-10-12T17:38:56Z | [
"python",
"algorithm",
"machine-learning",
"tensorflow"
] |
How can I extract the text outside the <em> tag in BeautifulSoup | 38,725,870 | <p>Can someone help me extract the text that comes after <em>From</em>? I want to extract the sender name. It is situated right outside the em tag. I'm using the python BeautifulSoup package. </p>
<p>Here is a link to the webpage: <a href="http://seclists.org/fulldisclosure/2016/Jan/0" rel="nofollow">http://seclists.org/fulldisclosure/2016/Jan/0</a></p>
<p>I was able to extract the email title successfully since is was in a tag. There are no other div's or classes in the html page.</p>
<p>This is the html code of the page:</p>
<p>Here is what I've tried</p>
<pre><code>def title_spider(max_pages):
page = 0
while page <= max_pages:
url = 'http://seclists.org/fulldisclosure/2016/Jan/' + str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
for email_title in soup.find('b'):
title = email_title.string
print(title)
for date_stamp in soup.em:
date = date_stamp
print(date)
page += 1
title_spider(2)
</code></pre>
<p>` </p>
| 2 | 2016-08-02T16:27:00Z | 38,726,025 | <p>You want the next sibling and if you want the specific em's From and Date you can combine with a regex:</p>
<pre><code>import re
def title_spider(max_pages):
for page in range(max_pages + 1):
url = 'http://seclists.org/fulldisclosure/2016/Jan/{}'.format(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
for email_title in soup.find('b'):
title = email_title.string
print(title)
for em in soup.find_all("em", text=re.compile("From|Date")):
print(em.text, em.next_sibling)
</code></pre>
<p>Which gives you:</p>
<pre><code>In [5]: title_spider(2)
Alcatel Lucent Home Device Manager - Management Console Multiple XSS
From : Uğur Cihan KOÇ <u.cihan.koc () gmail com>
Date : Sun, 3 Jan 2016 13:20:53 +0200
Executable installers/self-extractors are vulnerable^WEVIL (case 17): Kaspersky Labs utilities
From : "Stefan Kanthak" <stefan.kanthak () nexgo de>
Date : Sun, 3 Jan 2016 16:12:50 +0100
Possible vulnerability in F5 BIG-IP LTM - Improper input validation of the HTTP version number of the HTTP reqest allows any payload size and conent to pass through
From : Eitan Caspi <eitanc () yahoo com>
Date : Sun, 3 Jan 2016 21:10:27 +0000 (UTC)
</code></pre>
| 1 | 2016-08-02T16:35:23Z | [
"python",
"beautifulsoup",
"web-crawler"
] |
Why does create() in PayPal's batch payments via API return False? | 38,725,877 | <p><a href="https://developer.paypal.com/docs/api/payments.payouts-batch/#payouts_create">https://developer.paypal.com/docs/api/payments.payouts-batch/#payouts_create</a></p>
<p>Sample code:
<a href="https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payout/create.py">https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payout/create.py</a></p>
<p>Why does <code>create()</code> return False? How do I get an explanation of why?</p>
<p>Update: I was able to get this info, but it's not helpful either:</p>
<pre><code>ForbiddenAccess: Failed. Response status: 403. Response message: Forbidden. Error message: {"name":"AUTHORIZATION_ERROR","message":"Authorization error occurred","debug_id":"60e73559274d3","information_link":"https://developer.paypal.com/webapps/developer/docs/api/#AUTHORIZATION_ERROR"}
</code></pre>
| 8 | 2016-08-02T16:27:21Z | 39,030,774 | <p>The error message pretty clearly says that an authorization error occurred. Without any further information the only thing I can assume is that you either did not include an OAuth bearer token in the request, or the token was invalid (although I believe an invalid or expired token will return HTTP 401, not 403).</p>
<p>See <a href="https://developer.paypal.com/docs/integration/direct/make-your-first-call/" rel="nofollow">Make your first call</a> or <a href="https://developer.paypal.com/docs/integration/direct/paypal-oauth2/" rel="nofollow">How PayPal uses OAuth 2.0</a> for a high level overview</p>
<p>You might be able to get a more definitive answer if you post the request information that is failing.</p>
| 1 | 2016-08-19T03:44:08Z | [
"python",
"paypal",
"paypal-rest-sdk"
] |
Why does create() in PayPal's batch payments via API return False? | 38,725,877 | <p><a href="https://developer.paypal.com/docs/api/payments.payouts-batch/#payouts_create">https://developer.paypal.com/docs/api/payments.payouts-batch/#payouts_create</a></p>
<p>Sample code:
<a href="https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payout/create.py">https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payout/create.py</a></p>
<p>Why does <code>create()</code> return False? How do I get an explanation of why?</p>
<p>Update: I was able to get this info, but it's not helpful either:</p>
<pre><code>ForbiddenAccess: Failed. Response status: 403. Response message: Forbidden. Error message: {"name":"AUTHORIZATION_ERROR","message":"Authorization error occurred","debug_id":"60e73559274d3","information_link":"https://developer.paypal.com/webapps/developer/docs/api/#AUTHORIZATION_ERROR"}
</code></pre>
| 8 | 2016-08-02T16:27:21Z | 39,066,929 | <p>PayPal tech/dev support told me the debug ID said I didn't have Mass Pay enabled on my account, so I had to call them and talk to general support. I did, and they said they cannot enable it on Canadian accounts. I'm going to have to change payment processors to someone who offers the Mass Pay feature. I need to send out 500 micro payments to 500 different people.</p>
<p>They told me to open a US PayPal account. They asked if I had a US residence, and I do have a vacation home in the US. Then they asked me if I had a social security number, and I don't. So that option was not available.</p>
<p>Update: I told PayPal technical support that it could not be enabled in Canada. They told me that it works in Canada on the sandbox, so maybe it's coming soon. However, they said there is a feature called <a href="https://developer.paypal.com/docs/api/payments.payouts-batch/?mark=payouts" rel="nofollow">Payouts</a> that can work for me. They went ahead and enabled it for me. So I'm going with that instead of mass pay.</p>
<p>Moral of the story: PayPal technical support via email sorted it all out. Their phone support is useless and stubborn.</p>
| 7 | 2016-08-21T17:28:08Z | [
"python",
"paypal",
"paypal-rest-sdk"
] |
How to use django-channels | 38,725,955 | <p>I need to use websockets in Django, so I read the Channels docs. Now I know the basic concepts but I am still confused, because the docs contain little detailed code.
<br>
<br>Here are my questions:
<br>How can I make ASGI align with the WSGI in code? In other words, how to configure <code>WSGI Server to ASGI</code> or <code>ASGI to WSGI application</code>?
<br>Does it affect the way of using ajax?
<br>Does it affect anything if Django sends an HTTP request to another website?
<br>Any help will be appreciated.</p>
| 0 | 2016-08-02T16:31:59Z | 38,729,993 | <p>If you want to use the new ASGI spec, define <code>CHANNEL_LAYERS</code> in <code>settings.py</code>. If it is not set, the project just runs and works like a normal WSGI app.</p>
<p>When you set it (switching to ASGI) you have two options: either route all traffic (in this case both <code>HTTP</code> and <code>WebSocket</code>) through the interface server (<code>daphne</code>), or route only the WebSocket and long-polling HTTP connections through the interface server.</p>
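For reference, here is a hedged sketch of what the setting looks like in <code>settings.py</code> for the Channels 1.x releases current at the time; the Redis backend and the <code>myproject.routing</code> path are assumptions you would adapt to your own project:

```python
# settings.py -- switching the project to ASGI by defining a channel layer
CHANNEL_LAYERS = {
    "default": {
        # asgi_redis for production; asgiref.inmemory.ChannelLayer works for development
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        # hypothetical dotted path to your channel routing module
        "ROUTING": "myproject.routing.channel_routing",
    },
}
```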
<p><a href="http://channels.readthedocs.io/en/latest/deploying.html?highlight=wsgi#deploying" rel="nofollow">Deploying</a> document here if you want to take look </p>
| 1 | 2016-08-02T20:36:07Z | [
"python",
"django",
"django-channels"
] |
Issue with saving checkbox value from form | 38,725,967 | <p>I have looked at the other solutions on Stackoverflow but nothing seems to help with the problem I have.</p>
<p>I have an form for filling out information which needs to be saved. I give the user three options which they can tick. However, it won't save the form because it says the value isn't valid.</p>
<p>Here is my model:</p>
<pre><code>from django.db import models
from django.utils import timezone
FLAG_CHOICES = (('Active', 'Active'), ('Inactive', 'Inactive'), )
STATUS_CHOICES=(('Critical', 'Critical'), ('Medium', 'Medium'), ('Low','Low'))
class Event(models.Model):
event_status=models.CharField(max_length=10, choices=STATUS_CHOICES)
event_title=models.CharField(max_length=50)
event_description=models.CharField(max_length=500)
event_flag=models.CharField(max_length=10, choices=FLAG_CHOICES)
date_active=models.DateField(default=timezone.now())
time_active=models.TimeField(default=timezone.now())
def __str__(self):
return self.event_title
</code></pre>
<p>Here is my form:</p>
<pre><code>from django import forms
from server_status.models import Event
FLAG_CHECKBOX = [('active', 'Active'), ('inactive', 'Inactive'), ]
STATUS_CHOICES=[('critical', 'Critical'), ('medium', 'Medium'), ('low','Low'),]
class Add_Event_Form(forms.ModelForm):
event_title = forms.CharField(max_length=50, help_text="Please enter an informative title.")
event_status = forms.MultipleChoiceField(choices=STATUS_CHOICES, widget=forms.CheckboxSelectMultiple,
help_text="Please select the status of the event")
event_description = forms.CharField(max_length=500, initial="",
help_text="Enter a short description of the event here")
event_flag = forms.MultipleChoiceField(choices=FLAG_CHECKBOX, required=True, widget=forms.CheckboxSelectMultiple,
help_text="Please select the status of this event.")
date_active = forms.DateField(required=True, widget=forms.DateInput(attrs={'class': 'datepicker'}),
help_text="Please select a date for this event.")
time_active = forms.TimeField(required=True, widget=forms.TimeInput(format='%HH:%MM'),
help_text="Please select a time for this event in HH:MM format.")
class Meta:
model = Event
fields = '__all__' # I want all fields to be editable
</code></pre>
<p>This error displays on the webpage itself when you press Save:</p>
<pre><code>Select a valid choice. [u'critical'] is not one of the available choices.
</code></pre>
<p>This error comes up when you tick the box that says 'Critical'.</p>
| 0 | 2016-08-02T16:32:34Z | 38,726,814 | <p>In your model the <code>STATUS_CHOICES</code> values (the first element of each tuple) are <code>Critical</code>, <code>Medium</code>, <code>Low</code> with an upper-case first letter, but you used all lower case in the form. You should modify the choices so that the model and the form use the same <code>STATUS_CHOICES</code>.</p>
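The mismatch can be reproduced with plain Python (using the choice tuples from the question); Django validates the submitted value against the first element of each tuple in the model's choices:

```python
MODEL_CHOICES = (("Critical", "Critical"), ("Medium", "Medium"), ("Low", "Low"))
FORM_CHOICES = [("critical", "Critical"), ("medium", "Medium"), ("low", "Low")]

submitted = FORM_CHOICES[0][0]  # the form submits "critical"
valid = [value for value, label in MODEL_CHOICES]
print(submitted in valid)  # False -> "Select a valid choice" error

# Fix: define one shared constant and import it in both the model and the form
STATUS_CHOICES = [("critical", "Critical"), ("medium", "Medium"), ("low", "Low")]
print(submitted in [value for value, label in STATUS_CHOICES])  # True
```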
| 1 | 2016-08-02T17:20:51Z | [
"python",
"django"
] |
Scraping all comments from specific site | 38,726,062 | <p>I would like to scrape all comments from all pages of a specific article. For example this article: <a href="http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html" rel="nofollow">Link</a></p>
<p>I think the comments are loaded with JavaScript; could anyone help me figure out how I can scrape all of the comments?</p>
<p>This is the code I use</p>
<pre><code>def getSpiegelComments(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html, 'html.parser')
comments = bsObj.findAll("div", {"class": "article-comment"})
except AttributeError as e:
return None
for comment in comments:
yield comment
url = 'http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html'
comments = getSpiegelComments(url)
if comments == None:
print("Comments could not be found on " + url)
else:
for comment in comments:
print(comment)
</code></pre>
<p>I need all comments (currently 124) from all pages. Not only the 5 comments from the first page.</p>
| -1 | 2016-08-02T16:37:31Z | 38,726,314 | <p>All the text is available after each <em>div.article-comment-title</em> under the divs with the class <em>js-article-post-full-text</em>:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get("http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html").content)
for comm in soup.select("div.article-comment-title"):
print(comm.a.text)
print(comm.find_next("div","js-article-post-full-text").text)
</code></pre>
<p>To get all 120+, you can use the url <code>http://www.spiegel.de/fragments/community/spon-495132-{}.html</code> passing in the offset number to start from:</p>
<pre><code>import re
import requests
from bs4 import BeautifulSoup
def yield_comments(soup):
for comm in soup.select("div.article-comment-title"):
yield comm.a.text.strip(),comm.find_next("div", "js-article-post-full-text").text.strip()
next_url = "http://www.spiegel.de/fragments/community/spon-495132-{}.html"
url = "http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html"
with requests.Session() as s:
soup = BeautifulSoup(s.get(url).content)
# get total comments
tot = re.search("\d+",soup.select_one("#js-article-comments-box-pager").find_previous_sibling("span").text)
tot = int(tot.group())
# get comments from first page
for title, com in yield_comments(soup):
print(title, com)
# start at the 6th comment until the end
for page in range(6, tot, 5):
soup = BeautifulSoup(s.get(next_url.format(page)).content)
for title, com in yield_comments(soup):
print(title, com)
</code></pre>
<p>That will give you comments all the way to 124:</p>
<pre><code>.......................................
(u'117. Horst John', u'Zitat von Horst JohnAl-Sham, Al-Nusra usw. sind Nachfolger der Al qaida. Diese als Befreier zu sehen ist der blanke Hohn. Schauen sie nur die youtube Videos diese langb\xe4rtigen gekauften Terrorbande sich an. Waffendepots in Krankenh\xe4user und Schulen, Bombenattacken auf Fl\xfcchtlingscamps sind das Werk dieser Terroristen.\r\n\r\nWer sieht die denn hier oder auf SPON als Befreier? das steht ja nur, das das die letzte Hoffnung ist. Und so wird es sich wahrscheinlich auch f\xfcr die dort hungernden Menschen anf\xfchlen.')
(u'118. Verantwortung', u"Die viel naheliegendere Frage ist, warum ein Umsturzversuch, egal aus welcher Motivation heraus, der hunderttausende Tote gefordert, Millionen Menschen vertrieben und ein ganzes Land in eine Tr\xfcmmerlandschaft verwandelt hat, nicht l\xe4ngst vor Jahren abgebrochen wurde. \r\nDie Antwort ist bekannt und hat mit einer weiteren Frage zu tun:\r\n\r\nWieso fand sich die Bundesrepublik in einer Gemeinschaft der 'Freunde Syriens' mit Saudi-Arabien, Katar und der T\xfcrkei wieder, die nun beim besten Willen mit Demokratie in Syrien nichts zu tun hatten, die aber sp\xe4testens seit anfang 2012 flei\xdfig Waffen nach Syrien pumpten?\r\nhttp://www.nytimes.com/interactive/2013/03/25/world/middleeast/an-arms-pipeline-to-the-syrian-rebels.html?ref=middleeast&_r=0")
(u'119. @ Piece', u'Zitat von Pieceich war bisher nicht als Kommentator bei SPON registriert, aber der hier publizierte Artikel ist derart nah an der Grenze zur islamistischen Propaganda, dass ich mich einer scharfen Protestnote nicht erwehren kann:\r\nWie um alles in der Welt ist es m\xf6glich radikalste Islamisten als "letzte Hoffnung f\xfcr Aleppo" derart offen zu glorifizieren? \r\n\r\nUnter gr\xf6\xdftem Wohlwollen verbleiben als Motivation f\xfcr diese Zeilen zwei M\xf6glichkeiten:\r\n\r\n1.Dem Autor dieses Artikels ist nicht bekannt welche Konsequenzen ein Sieg dieser "Freiheitsk\xe4mpfer" f\xfcr die Zivilbev\xf6lkerung nach sich z\xf6ge. \r\n\r\n2. Die Konsequenzen sind bekannt, werden jedoch als weniger schwerwiegend erachtet.\r\n\r\nBeides disqualifiziert den Autor im SPON zu publizieren.\r\n\r\n\r\nWas die wahren Motive der Dschihadisten (Stichwort: menschliche Schutzschilde usw.) anbetrifft, ganz zu schweigen.\r\n\r\nEs gibt noch eine Dritte M\xf6glichkeit: Sie haben den Autor nicht verstanden. Er hat sich keineswegs mit den Terroristen gemein gemacht, sondern lediglich versucht, die Gef\xfchlslage der dort eingeschlossenen Menschen nachzuvollziehen. Aber da m\xfcsste man dann nat\xfcrlich mal genauer lesen und auch noch mitdenken.')
(u'120. Die letzte Hofnung', u'Islamisten die letzte, im wahrsten Sinne des Wortes,\r\nHoffnung. Sind nicht die Islamisten und andere Rebellen der Grund daf\xfcr, dass die Bev\xf6lkerung so tief in der ...\r\nsitzt. Von wem wird dieser Schreiber eigentlich gesponsert.')
(u'121. @jethan', u'Zitat von jethaneigentlich islamische Terroristen, sind der Grund und der Ausl\xf6ser dieser Katastrophe.\r\nSie jetzt als Retter zu pr\xe4sentieren ist blanker Hohn gegen\xfcber der betroffenen Bev\xf6lkerung.\r\nDie h\xe4tte l\xe4ngst durch die Korridore die Stadt verlassen k\xf6nnen, w\xe4re sie nicht von den Terroristen, ja genau, denselben die sich jetzt als Retter feiern lassen, daran gehindert worden.\r\n\r\nDer Ausl\xf6ser ist Assat selbst: Er hat friedliche Demonstrationen mit Panzern niederwalzen lassen. DAS war der Beginn. Weitere Proteste der Bev\xf6lkerung gegen dieses Vorgehen folgte auf dem Fu\xdfe. Daraufhin lie\xdf Assat massiv seine Armee gegen die Demonstranten einsetzen.\r\n\r\nDas schaukelte sich dann soweit hoch, das teils ein machtvakuum entstand. Erst jetzt traten fanatische Islamisten auf den Plan.\r\n\r\nSie ignorieren die ganze Vorgeschichte.')
(u'122.', u'Zitat von jowittIch weis nicht, was Sie haben. In dem Artikel steht nichts davon, das die angreifenden Islamisten "gut" seien. Im gegenteil: Dort wird beschrieben, das sie einen "Gottesstaat" nach der Scharia wollen.\r\n\r\nNehmen Sie eigentlich alles so w\xf6rtlich? Dank zahlreicher Stilmittel l\xe4sst sich ein Text nicht nur interessant gestalten, manche lassen sich auch hervorragend dazu verwenden einen unterbewussten Eindruck beim Leser haften zu lassen. Propaganda egal welcher politischen Seite funktioniert auch oft auf die gleiche Art.\r\n\r\nDie "Rebellen" (stimme f\xfcr Unwort des Jahres in dieser Verwendung) l\xe4sst der Autor im Artikel an vielen Stellen tats\xe4chlich so dastehen als w\xfcrden diese nur den Zivilisten helfen wollen. Das wird so explizit nicht klar genannt, erschlie\xdft sich jedoch aus Wortwahl, Inhalt und Zusammenhang. W\xfcrde man in der Presse das offen so schreiben, w\xfcrde man hinterher definitiv darauf festgenagelt werden. Suchen Sie den Link des Mitkommentators zur ZON Meldung raus, nachdem die Rebellen die Zivilisten nicht aus der Stadt lassen. Dann vergleichen Sie den Inhalt dieses Artikels mit den vorliegenden Fakten, gerne auch auf den BBC News Seiten, die in dem Fall auch wesentlich objektiver sind. Das macht es u. U. leichter nachzuvollziehen, was der Vorposter gemeint hat.\r\n\r\nDie Rebellen, komischerweise tats\xe4chlich nicht Terroristen genannt obwohl die Definition passt, werden von Anfang an auf SPON und einigen anderen Nachrichtenseiten sehr besch\xf6nigend in den Artikeln abgebildet, \xe4hnlich der neuen Kiewer Regierung oder in weiten Teilen auch Erdogan. Da muss man noch nicht einmal auf RT oder \xe4hnliche Seiten, um das als Bl\xf6dsinn auszufiltern, da reichen ganz normale, objektive Nachrichten ohne Meinungsbildung alla BILD.')
(u'123. Guter Bericht', u'Danke f\xfcr diesen Bericht zur Lage in Syrien. \r\n\r\nUnd auch etwas zum Nachdenken: Wenn wir (die freien Demokratien) nicht helfen, aber radikale Islamisten, wird da nicht die H\xe4lfte Bev\xf6lkerung Syriens - immerhin mehr als 10 Millionen Menschen - den Islamisten geradezu in die Arme getrieben? Unterlassene Hilfeleistung wird bestraft. aj')
(u'124. Unterst\xfctzung von Extremisten', u'Zitat von jowittIch weis nicht, was Sie haben. In dem Artikel steht nichts davon, das die angreifenden Islamisten "gut" seien. Im gegenteil: Dort wird beschrieben, das sie einen "Gottesstaat" nach der Scharia wollen.\r\n\r\nWas steht in der \xdcberschrift?\r\n"Belagerte Stadt in Syrien: Die Islamisten sind Aleppos letzte Hoffnung"\r\nund weiter geht\'s:\r\n"W\xe4hrend die USA und Europa zusehen, wie Hunderttausende Menschen in Aleppo ausgehungert werden, kommen Islamisten den Eingeschlossenen zur Hilfe. Angef\xfchrt werden sie von einer Terrormiliz."\r\n\r\nWie w\xfcrden Sie denn diese SPON Einf\xfchrung interpretieren?\r\nEs steht doch w\xf6rtlich da - eine Terrormiliz kommt den Eingeschlossenen zu Hilfe und ist Hoffnung, sogar letzte Hoffnung.\r\n\'Hilfe\' assoziiere ich immer noch mit \'etwas Gutes leisten\'.\r\nEs steht im Text dann auch an keiner Stelle, da\xdf diese Hilfe vom Autor irgendwie ablehnt wird. \r\nWie auch an keiner Stelle belegt ist, da\xdf die Bev\xf6lkerung diese Hilfe \xfcberhaupt haben will, das aber nur nebenbei.\r\n\r\nEntweder der SPON Autor hat sich hier semantisch vertan, oder er hat sich als Unterst\xfctzer von Extremisten geouted.')
</code></pre>
| 3 | 2016-08-02T16:51:13Z | [
"python",
"beautifulsoup"
] |
Scraping all comments from specific site | 38,726,062 | <p>I would like to scrape all comments from all pages of a specific article. For example this article: <a href="http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html" rel="nofollow">Link</a></p>
<p>I think the comments are loaded via JavaScript. Could anyone help me figure out how I can scrape all comments?</p>
<p>This is the code I use</p>
<pre><code>def getSpiegelComments(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html, 'html.parser')
comments = bsObj.findAll("div", {"class": "article-comment"})
except AttributeError as e:
return None
for comment in comments:
yield comment
url = 'http://www.spiegel.de/politik/ausland/syrien-die-islamisten-sind-aleppos-letzte-hoffnung-a-1105806.html'
comments = getSpiegelComments(url)
if comments == None:
print("Comments could not be found on " + url)
else:
for comment in comments:
print(comment)
</code></pre>
<p>I need all comments (currently 124) from all pages. Not only the 5 comments from the first page.</p>
| -1 | 2016-08-02T16:37:31Z | 38,731,364 | <p>You should be scraping from <a href="http://www.spiegel.de/forum/politik/belagerte-stadt-syrien-die-islamisten-sind-aleppos-letzte-hoffnung-thread-495132-1.html" rel="nofollow">this link</a> instead of the one you provided because its where the actual comments are coming from and this way it allows you to escape the javascript on the URL you posted that's hiding the comments you want. </p>
<p>If you scrape from this link, getting ALL the comments should be simple because, conveniently, each of the pages here have the same "base" URL (<code>http://www.spiegel.de/forum/politik/belagerte-stadt-syrien-die-islamisten-sind-aleppos-letzte-hoffnung-thread-495132-</code>) and then the page number is added on at the end as <code>1.html</code>, <code>2.html</code> and so on.</p>
<p>This means in your code you will want to loop through each of these pages. Here is some pseudo code to help you get started. </p>
<pre><code>import requests
from bs4 import BeautifulSoup

def comment_spider(max_pages):
page = 1 #start on page 1
comments_list = []
while page <= max_pages:
url = 'http://www.spiegel.de/forum/politik/belagerte-stadt-syrien-die-islamisten-sind-aleppos-letzte-hoffnung-thread-495132-'+str(page)+'.html'
req = requests.get(url)
plain_text = req.text
soup = BeautifulSoup(plain_text, "lxml")
for items in soup.findAll('div', {'class': 'comment'}): #you need to get the actual information for this yourself
comment = items.get_text()
comments_list.append(comment)
page += 1
return comments_list
</code></pre>
<p>Then to call this function you would give it the total number of pages. E.g.</p>
<pre><code>comment_spider(13)
</code></pre>
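<p>The extraction step inside the loop can be tried offline against a tiny HTML snippet (this assumes BeautifulSoup is installed; the <code>comment</code> class name is only a placeholder, as noted in the pseudo code):</p>

```python
from bs4 import BeautifulSoup

# Minimal stand-in for one page of the forum, just to exercise the selector.
html = """
<div class="comment">First comment</div>
<div class="comment">Second comment</div>
"""

soup = BeautifulSoup(html, "html.parser")
comments_list = [items.get_text()
                 for items in soup.find_all("div", {"class": "comment"})]
```

<p>Once the real class name is filled in, the same two lines drop into the <code>while</code> loop unchanged.</p>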
| 0 | 2016-08-02T22:17:24Z | [
"python",
"beautifulsoup"
] |
Python : Plot heatmap for large matrix | 38,726,087 | <p>I have a large matrix of size 500 X 18904. </p>
<p><strong>Since most of the values are zeros, I'm not able to visualise the pattern clearly as the zeroes dominate in the colorbar.</strong></p>
<p>To look at the data more closely, I need to zoom in on different segments of the image. Is there any reliable way to visualise this data using a colorbar?</p>
<p>Here is my code and output. </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
j = sio.loadmat('UV_matrix.mat')
k = j['UV']
plt.imshow(k, aspect='auto')
plt.show()
</code></pre>
<p>The output
<a href="http://i.stack.imgur.com/K5bsA.png" rel="nofollow"><img src="http://i.stack.imgur.com/K5bsA.png" alt="enter image description here"></a></p>
| 0 | 2016-08-02T16:38:44Z | 39,507,431 | <p>I can think of two options by using numpy arrays.</p>
<ol>
<li><p>Assuming your data is mostly higher than zero but there are a lot of zeros:</p>
<pre><code>vmin = some_value_higher_than_zero
plt.matshow(k, aspect='auto', vmin=vmin)
</code></pre></li>
<li><p>Setting all zeros to NaNs. they are automatically left out.</p>
<pre><code>k[k==0.0]=np.nan
plt.matshow(k,aspect='auto')
</code></pre></li>
</ol>
<p>NB: both <code>imshow</code> and <code>matshow</code> work here.</p>
<p>Another option, when your matrix is really sparse, is to use scatter plots.</p>
<pre><code>x,y = k.nonzero()
plt.scatter(x,y,s=100,c=k[x,y]) #color as the values in k matrix
</code></pre>
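<p>The NaN-masking step from option 2 can be checked without any plotting; one detail worth knowing is that the array must have a float dtype for the NaN assignment to work:</p>

```python
import numpy as np

k = np.array([[0.0, 2.5],
              [1.0, 0.0]])        # float dtype is required for NaN assignment

masked = k.copy()
masked[masked == 0.0] = np.nan    # zeros become NaN and are left out of the plot

nan_count = int(np.isnan(masked).sum())   # how many zeros were masked
```

<p>Plotting <code>masked</code> instead of <code>k</code> then gives a colorbar that is not dominated by the zeros.</p>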
| 1 | 2016-09-15T09:23:19Z | [
"python",
"python-3.x",
"matplotlib",
"heatmap",
"colorbar"
] |
passing list argument to be iterated on -int not iterable | 38,726,091 | <p>I have a python function that needs to write an arbitrary number of bytes to a bus:</p>
<pre><code>def Writebytes(self,values):
for byte in values:
self.writebyte(byte)
</code></pre>
<p>The idea is you pass it a list of bytes and it writes them all out to the bus, e.g.</p>
<pre><code>data=[0x10,0x33,0x14]
object.Writebytes(data)
</code></pre>
<p>I'd like to support a single byte write without the [ ] so that you can enter:</p>
<pre><code>data=0x00
object.Writebytes(data)
</code></pre>
<p>The aim is to ease usage. However, you cannot iterate over the int that is passed to Writebytes. Is there a simple way to coerce the input to always be a list? (preferably in the function definition)</p>
<p>All this iterables stuff is new to me in my first week of Python (which has gone quite well actually, to python's credit). I'm using Python3 mostly on an RPi btw..</p>
<p>Many thanks!</p>
| 0 | 2016-08-02T16:38:56Z | 38,726,198 | <p>I think the easiest way to go about this would be:</p>
<pre><code>def Writebytes(self,values):
if(not isinstance(values, list)):
values = [values]
for byte in values:
self.writebyte(byte)
</code></pre>
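<p>A self-contained sketch of that idea, with a stand-in bus class so it can be run anywhere (the real <code>writebyte</code> talks to hardware, so a plain list collects the writes here):</p>

```python
class FakeBus:
    """Stand-in for the real bus: written bytes are collected in a list."""

    def __init__(self):
        self.written = []

    def writebyte(self, byte):
        self.written.append(byte)

    def Writebytes(self, values):
        if not isinstance(values, list):
            values = [values]        # wrap a single byte so the loop always works
        for byte in values:
            self.writebyte(byte)


bus = FakeBus()
bus.Writebytes([0x10, 0x33, 0x14])   # list of bytes
bus.Writebytes(0x00)                 # single byte, no brackets needed
```

<p>Both call styles end up going through the same loop.</p>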
| 1 | 2016-08-02T16:45:16Z | [
"python",
"list",
"int",
"iterable"
] |
passing list argument to be iterated on -int not iterable | 38,726,091 | <p>I have a python function that needs to write an arbitrary number of bytes to a bus:</p>
<pre><code>def Writebytes(self,values):
for byte in values:
self.writebyte(byte)
</code></pre>
<p>The idea is you pass it a list of bytes and it writes them all out to the bus, e.g.</p>
<pre><code>data=[0x10,0x33,0x14]
object.Writebytes(data)
</code></pre>
<p>I'd like to support a single byte write without the [ ] so that you can enter:</p>
<pre><code>data=0x00
object.Writebytes(data)
</code></pre>
<p>The aim is to ease usage. However, you cannot iterate over the int that is passed to Writebytes. Is there a simple way to coerce the input to always be a list? (preferably in the function definition)</p>
<p>All this iterables stuff is new to me in my first week of Python (which has gone quite well actually, to python's credit). I'm using Python3 mostly on an RPi btw..</p>
<p>Many thanks!</p>
| 0 | 2016-08-02T16:38:56Z | 38,726,266 | <p>You can always use a <code>try except</code> statement</p>
<pre><code>def Writebytes(self,values):
try:
for byte in values:
self.writebyte(byte)
except TypeError:
self.writebyte(values)
</code></pre>
| 1 | 2016-08-02T16:48:51Z | [
"python",
"list",
"int",
"iterable"
] |
Determining whether a request came via socket or URL with Tornado | 38,726,118 | <p>I have a Tornado server that listens on both an address/port, and on a socket. I create the server roughly like so (stripped down heavily):</p>
<pre><code>from tornado import httpserver
from tornado.netutil import bind_unix_socket
server = httpserver.HTTPServer(
request_callback=some_callback,
io_loop=some_loop,
)
unix_socket = bind_unix_socket("mysock.sock")
server.add_socket(unix_socket)
server.listen(address="some_host", port=1234)
</code></pre>
<p>I'd like to be able to differentiate when a request is received via the socket, ie something like:</p>
<pre><code>curl -XGET --unix-socket mysock.sock http:/ping
</code></pre>
<p>As opposed to:</p>
<pre><code>curl http://some_address:1234/ping
</code></pre>
<p>I looked at the <a href="http://www.tornadoweb.org/en/stable/httputil.html#tornado.httputil.HTTPServerRequest" rel="nofollow">HTTPServerRequest</a> that Tornado uses when a request is received, but I'm not sure what the best way is to tell the difference between the two. I can look at <code>remote_ip</code> to see if it comes from localhost, but I don't think that's ideal.</p>
| 0 | 2016-08-02T16:40:28Z | 38,726,675 | <p><code>localhost</code> is an internet-domain interface with a well-known IP address. It's not the same as a unix-domain socket, which has no IP address. </p>
<p>A brief look at the source suggests that the <code>remote_ip</code> attribute will contain <code>'0.0.0.0'</code> for a connection received on the unix socket.</p>
<p>(The <code>remote_ip</code> would presumably be <code>'127.0.0.1'</code> for a connection received via a localhost connection.)</p>
| 1 | 2016-08-02T17:13:16Z | [
"python",
"sockets",
"tornado"
] |
pandas transpose select - do analysis along transposed Series | 38,726,120 | <p><code>transpose</code> works well if just transposing <code>rows</code> and <code>columns</code>, but how does one do a <code>transpose with selection</code>?</p>
<pre><code>df = pd.DataFrame({'year': [2012,2013,2014, 2012,2013,2014], 'barber': ['Sue', 'Sue', 'Sue', 'Mike', 'Mike', 'Mike'], 'num_haircuts': [3,3,1,0,0,6]})
</code></pre>
<p><strong><em>df:</em></strong></p>
<pre><code> barber num_haircuts year
0 Sue 3 2012
1 Sue 3 2013
2 Sue 1 2014
3 Mike 0 2012
4 Mike 0 2013
5 Mike 6 2014
</code></pre>
<p><strong><em>desired df:</em></strong></p>
<pre><code>barber 2012 2013 2014
Sue 3 3 1
Mike 0 0 6
</code></pre>
| -1 | 2016-08-02T16:40:30Z | 38,726,150 | <p>Use pivot:</p>
<pre><code>df.pivot(index='barber', columns='year', values='num_haircuts')
Out:
year 2012 2013 2014
barber
Mike 0 0 6
Sue 3 3 1
</code></pre>
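<p>One caveat worth knowing: plain <code>pivot</code> raises a <code>ValueError</code> if any (barber, year) pair occurs more than once. <code>pivot_table</code> with an aggregation function handles that case (summing is just one reasonable choice for haircut counts):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'year': [2012, 2013, 2014, 2012, 2013, 2014],
    'barber': ['Sue', 'Sue', 'Sue', 'Mike', 'Mike', 'Mike'],
    'num_haircuts': [3, 3, 1, 0, 0, 6],
})

# Aggregates duplicate (barber, year) pairs instead of raising.
result = df.pivot_table(index='barber', columns='year',
                        values='num_haircuts', aggfunc='sum')
```

<p>With this sample data (no duplicates) the result is identical to <code>pivot</code>.</p>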
| 2 | 2016-08-02T16:42:22Z | [
"python",
"pandas"
] |
Slicing a python variable removes the whole string | 38,726,202 | <p>I'm a python newbie, using the <code>cx_Oracle</code> python package to execute a sql query:</p>
<pre><code>cursor.execute("select tablespace_name from user_tablespaces")
</code></pre>
<p>to retrieve a list from an oracle 11g database. The results look like this: </p>
<pre><code>('SYSTEM',)
('SYSAUX',)
('UNDOTBS1',)
('TEMP',)
('USERS',)
</code></pre>
<p>I need to remove the single quotes and the comma from each entry and then put it into an array, and am attempting to use python slicing to do so:</p>
<pre><code>tablespaceNames = []
for result in cursor:
tablespaceNames.append(result[2:-3])
</code></pre>
<p>however, this is just giving me empty strings in my array: </p>
<pre><code>()
()
()
()
</code></pre>
<p>Is it a problem with the object I'm getting from the query result, or am I using python slicing wrong?</p>
| 0 | 2016-08-02T16:45:26Z | 38,726,262 | <pre><code>tableNameSpaces = [item [0][2:-3] for item in cursor]
</code></pre>
<p>The quotes will be in, though, since you're dealing with string literals.
Once you print them, however, the quotes will be gone.</p>
<p>As nicarus pointed out to me, probably you don't want to truncate your string literals at all, though I thought that's what you wanted.</p>
<p>In that case it would simply be:</p>
<pre><code>tableNameSpaces = [item [0] for item in cursor]
</code></pre>
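<p>Simulating the cursor with plain tuples makes the difference easy to see; the quotes in the printed rows are only Python's repr of the strings, not characters that need stripping:</p>

```python
# Stand-in for the cursor's result set: one-element row tuples.
rows = [('SYSTEM',), ('SYSAUX',), ('UNDOTBS1',), ('TEMP',), ('USERS',)]

# Take the first element of each row, rather than slicing the tuple itself.
tablespace_names = [row[0] for row in rows]

first = tablespace_names[0]   # a plain string; no quotes in the actual data
```

<p>Slicing the tuple (<code>row[2:-3]</code>) returns an empty tuple, which is why the original code printed <code>()</code> for every row.</p>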
| 2 | 2016-08-02T16:48:41Z | [
"python",
"substring",
"slice",
"cx-oracle"
] |
Slicing a python variable removes the whole string | 38,726,202 | <p>I'm a python newbie, using the <code>cx_Oracle</code> python package to execute a sql query:</p>
<pre><code>cursor.execute("select tablespace_name from user_tablespaces")
</code></pre>
<p>to retrieve a list from an oracle 11g database. The results look like this: </p>
<pre><code>('SYSTEM',)
('SYSAUX',)
('UNDOTBS1',)
('TEMP',)
('USERS',)
</code></pre>
<p>I need to remove the single quotes and the comma from each entry and then put it into an array, and am attempting to use python slicing to do so:</p>
<pre><code>tablespaceNames = []
for result in cursor:
tablespaceNames.append(result[2:-3])
</code></pre>
<p>however, this is just giving me empty strings in my array: </p>
<pre><code>()
()
()
()
</code></pre>
<p>Is it a problem with the object I'm getting from the query result, or am I using python slicing wrong?</p>
| 0 | 2016-08-02T16:45:26Z | 38,726,406 | <p>You are slicing the tuples that represent each retrieved row rather than the strings that are the first (and only) elements of those rows. Further, you don't need to "get rid of the quotes" - that's just the interpreter doing its best to represent the data structure.</p>
<p>Your database returns the equivalent of the structure below - a list of tuples. Since you only selected a single field, each tuple only contains one element.</p>
<pre><code>data = [
('SYSTEM',),
('SYSAUX',),
('UNDOTBS1',),
('TEMP',),
('USERS',)
]
</code></pre>
<p>So first let's extract those single elements to give ourselves a list of strings instead of a list of tuples.</p>
<pre><code>sdata = [s[0] for s in data]
print(sdata)
</code></pre>
<p>The output you will see is</p>
<pre><code>['SYSTEM', 'SYSAUX', 'UNDOTBS1', 'TEMP', 'USERS']
</code></pre>
<p>Then print out each of the strings in the tuple:</p>
<pre><code>for s in sdata:
print(s)
</code></pre>
<p>The output from this code is</p>
<pre><code>SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
</code></pre>
<p>See - no quotes!</p>
| 2 | 2016-08-02T16:56:44Z | [
"python",
"substring",
"slice",
"cx-oracle"
] |
showing TypeError: sequence item 0: expected string, int found; when converting list to text | 38,726,250 | <p>I am trying to solve the babynames problem in the Google Python Class and executed the following script. </p>
<pre><code>def extract_names(filename):
year=0
f=open(filename,'rU')
contents=f.read()
match = re.search(r'(/w+/s/w+/s)(/d/d/d/d)',contents)
if match:
year=match.group()
names=[]
names.append(year)
names_rank={}
m = re.findall(r'<td>(/d+)</td><td>(/w+)</td><td>(/w+) </td>',contents)
for item in m:
(r,boyname,girlname)=item
if boyname not in names_rank:
names_rank[boyname]=r
if girlname not in names_rank:
names_rank[girlname]=r
sorted_names=sorted(names_rank.keys())
for name in sorted_names:
names.append(name+' '+names_rank[name])
return names
def main():
args = sys.argv[1:]
if not args:
print 'usage: [--summaryfile] file [file ...]'
sys.exit(1)
summary = False
if args[0] == '--summaryfile':
summary = True
del args[0]
for filename in args:
names = extract_names(filename)
t = '\n'.join(names)
if summary:
fileout = open(filename, 'w')
fileout.write(t + '\n')
fileout.close()
else:
print text
if __name__ == '__main__':
</code></pre>
<p>in function main ,it indicates an error </p>
<pre><code>File "babynames.py", line 78, in main
t = '\n'.join(names)
TypeError: sequence item 0: expected string, int found
</code></pre>
<p>Here <code>names</code> is a list, and my aim is to make text out of the whole list by using</p>
<pre><code> t = '\n'.join(names)  # where t is the text
</code></pre>
<p>but it says an int was found. What may be the reason for that, and how do I resolve the problem?</p>
| -1 | 2016-08-02T16:47:56Z | 38,726,273 | <p>It's because the first element of <code>names</code> is an integer and it should be a string if you want to join them together. Try this:</p>
<pre><code>t = '\n'.join(str(n) for n in names)
</code></pre>
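<p>A quick demonstration of both the failure and the fix (the sample values are made up):</p>

```python
names = [0, 'Aaliyah 91', 'Aaron 57']   # first element is still the int 0

try:
    text = '\n'.join(names)             # fails: join accepts strings only
    join_failed = False
except TypeError:
    join_failed = True

text = '\n'.join(str(n) for n in names) # str() makes every element joinable
```

<p>In the original script the int sneaks in because <code>year</code> stays <code>0</code> when the regex never matches, so converting with <code>str()</code> (or ensuring the match succeeds) fixes the join.</p>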
| 1 | 2016-08-02T16:49:09Z | [
"python"
] |
Multiple error messages trying to send file to server using Requests and Django | 38,726,293 | <p>I'm currently using the Requests library to send a file to a remote server from a form(InMemoryUploadedFile). I initially sent the 'file'(<code>file = self.request.FILES.get('file')</code>) as part of my payload, and when I ran the code I received a JSON error response from the server that says: </p>
<blockquote>
<p>{"outcome":"error","message":"string contains null byte"}</p>
</blockquote>
<p>Upon further reading(<a href="https://docs.djangoproject.com/en/1.9/ref/files/uploads/" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/files/uploads/</a>) it seems like it would make sense to read the file. So I decided to read the file using the .chunks() method(in case you have files larger than 2.5MB), but now I'm getting a:</p>
<blockquote>
<p>{"outcome":"error","message":"invalid byte sequence in UTF-8"}</p>
</blockquote>
<p>And if I use <code>.multiple_chunks()</code> I get a server 500 error.</p>
<p>Does anyone have any ideas what steps could be taken to resolve this issue?</p>
<pre><code>class AddDocumentView(LoginRequiredMixin, SuccessMessageMixin, CreateView):
login_url = reverse_lazy('users:login')
form_class = FileUploadForm
template_name = 'docman/forms/add-document.html'
success_message = 'Document was successfully added'
def form_valid(self, form):
pk = self.kwargs['pk']
user = get_object_or_404(User, pk=pk)
file = self.request.FILES.get('file')
if not self.post_to_server(file, user.id):
messages.error(self.request, "Upload failed", extra_tags='alert alert-danger')
return render(self.request, self.template_name, {'form': form})
def post_to_server(self, file, cid):
url = 'https://exampleapi.herokuapp.com/api/files/'
headers = {'token': 'secret-token93409023'}
payload = {'file': file.chunks(), 'client_id': cid}
r = requests.post(url, data=payload, headers=headers)
print(r.text)
if r.status_code == requests.codes.ok:
return True
else:
return False
</code></pre>
| 0 | 2016-08-02T16:50:16Z | 38,726,645 | <p>I figured it out, it had to do with me sending the file as part of the payload, which doesn't multipart encode the upload. So what I had to do was send the file as the correct 'file' parameter which does properly multipart encode the upload.</p>
<pre><code>class AddDocumentView(LoginRequiredMixin, SuccessMessageMixin, CreateView):
login_url = reverse_lazy('users:login')
form_class = FileUploadForm
template_name = 'docman/forms/add-document.html'
success_message = 'Document was successfully added'
def form_valid(self, form):
pk = self.kwargs['pk']
user = get_object_or_404(User, pk=pk)
file = self.request.FILES.get('file')
if not self.post_to_server(file, user.id):
messages.error(self.request, "Upload failed", extra_tags='alert alert-danger')
return render(self.request, self.template_name, {'form': form})
def post_to_server(self, file, cid):
url = 'https://exampleapi.herokuapp.com/api/files/'
headers = {'token': 'secret-token93409023'}
# Remove files as part of payload
payload = {'client_id': cid}
files = {'file': file}
# Place 'files' as file paramter to be properly multipart encoded
r = requests.post(url, files=files, data=payload, headers=headers)
print(r.text)
if r.status_code == requests.codes.ok:
return True
else:
return False
</code></pre>
| 0 | 2016-08-02T17:11:28Z | [
"python",
"django",
"django-views",
"python-requests"
] |
Converting Python to Scala in Spark ML? | 38,726,333 | <p>The question is about
<a href="http://stackoverflow.com/questions/37278999/logistic-regression-with-spark-ml-data-frames">Logistic regression with spark ml (data frames)</a></p>
<p>When I want to change the code Python to Scala</p>
<p>Python:</p>
<pre><code>[stage.coefficients for stage in model.stages
if isinstance(stage, LogisticRegressionModel)]
</code></pre>
<p>Scala (changed):</p>
<pre><code> for (stage <- model.stages) {
   if (stage.isInstanceOf[LogisticRegressionModel]) {
     val a = Array(stage.coefficients)
   }
 }
</code></pre>
<p>I have already checked <code>stage.isInstanceOf[LogisticRegressionModel]</code>, which returned <code>true</code>. However, <code>stage.coefficients</code> gives an error: <code>"value coefficients is not a member of org.apache.spark.ml.Transformer"</code>. </p>
<p>I only check the stage, it will return</p>
<pre><code>org.apache.spark.ml.Transformer= logreg 382456482
</code></pre>
<p>Why the type is different when the isInstanceOf returns true? What should I do? Thanks</p>
| 1 | 2016-08-02T16:52:16Z | 38,726,749 | <blockquote>
<p>Why the type is different when the isInstanceOf returns true?</p>
</blockquote>
<p>Well, Scala is a statically typed language and <code>stages</code> is an <code>Array[Transformer]</code> so each element you access is a <code>Transformer</code>. <code>Transformers</code> in general have no <code>coefficients</code>, hence the error.</p>
<blockquote>
<p>What should I do?</p>
</blockquote>
<p>Be specific about the types. </p>
<pre><code>import org.apache.spark.ml.classification.LogisticRegressionModel
model.stages.collect {
case lr: LogisticRegressionModel => lr.coefficients
}.headOption
</code></pre>
| 2 | 2016-08-02T17:17:16Z | [
"python",
"scala",
"apache-spark",
"apache-spark-ml"
] |
Swapping values in assignment | 38,726,411 | <p>I'm a newbie taking Udacity's Intro to Computer Science course. We had a fairly simple question on our quiz about swapping values and I don't quite understand it. Here is the question:</p>
<p>Which of the following sequence of statements leaves the value of variable X the same as it was before the statements. Assume that both a and x refer to the integer values before this code.</p>
<p>Why is this true?</p>
<pre><code>a,x = x,a
a,x = x,a
</code></pre>
<p>For example if I have:</p>
<pre><code>a,x = 4,5
</code></pre>
<p>then a = 4 and x = 5</p>
<p>For the second part:</p>
<pre><code>a,x = 5, 4
</code></pre>
<p>then a = 5 and x = 4, so x is not equal to what it was before. Can someone explain why this is true?</p>
| -1 | 2016-08-02T16:56:50Z | 38,726,458 | <p>It's simply swapping, then swapping again.</p>
<pre><code>>>> a = 2
>>> x = 1
>>> a,x = x,a # We swapped them, so a = 1 and x = 2 now
# During evaluation, this statement will be equivalent to "a,x = 1,2"
>>> a,x = x,a # And now we swap them again, so they're back to their original values
# During evaluation, this statement will be equivalent to "a,x = 2,1"
>>> a
2
>>> x
1
</code></pre>
<p>What it seems you're missing is that the values change in the middle. You can't go through at the beginning and replace all occurrences of <code>x</code> and <code>a</code> with 4 and 5 in your example because <code>x</code> and <code>a</code> change in the middle of the operation.</p>
| 0 | 2016-08-02T16:59:47Z | [
"python",
"computer-science"
] |
Swapping values in assignment | 38,726,411 | <p>I'm a newbie taking Udacity's Intro to Computer Science course. We had a fairly simple question on our quiz about swapping values and I don't quite understand it. Here is the question:</p>
<p>Which of the following sequence of statements leaves the value of variable X the same as it was before the statements. Assume that both a and x refer to the integer values before this code.</p>
<p>Why is this true?</p>
<pre><code>a,x = x,a
a,x = x,a
</code></pre>
<p>For example if I have:</p>
<pre><code>a,x = 4,5
</code></pre>
<p>then a = 4 and x = 5</p>
<p>For the second part:</p>
<pre><code>a,x = 5, 4
</code></pre>
<p>then a = 5 and x = 4, so x is not equal to what it was before. Can someone explain why this is true?</p>
| -1 | 2016-08-02T16:56:50Z | 38,726,500 | <p>This happens because of the way Python's so-called unpacking assignments_ work. In essence the interpreter creates a list of destinations from the left-hand side and a list of values from the right-hand side of the assignment statement, then assigns the values to the destinations.</p>
<p>So in</p>
<pre><code>a, b = 4, 5
</code></pre>
<p>you are assigning 4 to <code>a</code> and 5 to <code>b</code>. Then</p>
<pre><code>a, b = b, a
</code></pre>
<p>assigns 5 (the current value of <code>b</code>) to <code>a</code> and 4 (the current value of <code>a</code>) to <code>b</code>.</p>
<p>Repeating the last statement therefore switches the values back to the original variables.</p>
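<p>A runnable sketch of that evaluation order (the variable names mirror the example above):</p>

```python
# The whole right-hand side is evaluated into a tuple *before* any name on the
# left is assigned, which is why swapping twice restores the original values.
a, x = 4, 5

rhs = (x, a)   # what Python builds when evaluating the RHS of "a, x = x, a"
a, x = rhs     # now a == 5 and x == 4

rhs = (x, a)   # evaluated again, this time with the swapped values
a, x = rhs     # back to a == 4 and x == 5
```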
| 0 | 2016-08-02T17:02:27Z | [
"python",
"computer-science"
] |
I have a text file that is one long line of text that I want to break up into a dictionary, how would I do that? | 38,726,533 | <p>I have a text file that looks like this, where the line that starts "Distance IQC/E: Distance XY" and the rest that follows is one long line until :END:</p>
<pre><code>:BEGIN
"Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,"Distance IQC/F: Distance XY",0.14603,0.14590,0.00080,0.00080,0.00013,,"Distance IQC/G: Distance XY",0.12074,0.12070,0.00080,0.00080,0.00004,,"Distance IQC/I: Distance XY",0.21476,0.21600,0.00200,0.00200,-0.00124,,"Distance IQC/H: Distance XY",0.12714,0.12760,0.00080,0.00080,-0.00046,,"Distance IQC/N: Distance XY",0.08661,0.08690,0.00080,0.00080,-0.00029,,"Distance IQC/M: Distance XY",0.12997,0.13000,0.00080,0.00080,-0.00003,
:END
</code></pre>
<p>I want to know how to split this text file so that each line starts with a "Distance" and is followed by the remaining float values until the next "Distance".</p>
<p>I can use file.replace(":BEGIN","") to get rid of the Begin and End. </p>
<p>Do I make a dictionary and then rewrite that dictionary to a new text file?</p>
<p>Please help I'm a very new programmer!</p>
<p>Edit: I would expect the output to be:</p>
<pre class="lang-none prettyprint-override"><code>Distance IQC/E: Distance XY 0.09066 0.09060 0.00040 0.00040 0.00006
Distance IQC/F: Distance XY 0.14603 0.14590 0.00080 0.00080 0.00013
Distance IQC/G: Distance XY 0.12074 0.12070 0.00080 0.00080 0.00004
Distance IQC/I: Distance XY 0.21476 0.21600 0.00200 0.00200 -0.00124
Distance IQC/H: Distance XY 0.12714 0.12760 0.00080 0.00080 -0.00046
Distance IQC/N: Distance XY 0.08661 0.08690 0.00080 0.00080 -0.00029
Distance IQC/M: Distance XY 0.12997 0.13000 0.00080 0.00080 -0.00003
</code></pre>
<p>This way I could archive the data cleanly into an excel file or something similar.</p>
<p>edit 2:</p>
<p>Here is the small bit of code I have so far:</p>
<pre><code>with open("file.txt","r") as read_data:
    f = read_data.read().replace(":BEGIN",'').replace(":END",'')
</code></pre>
| -1 | 2016-08-02T17:03:55Z | 38,726,866 | <p>Considering this is your input data:</p>
<pre><code>line = ':BEGIN\n"Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,"Distance IQC/F: Distance XY",0.14603,0.14590,0.00080,0.00080,0.00013,,"Distance IQC/G: Distance XY",0.12074,0.12070,0.00080,0.00080,0.00004,,"Distance IQC/I: Distance XY",0.21476,0.21600,0.00200,0.00200,-0.00124,,"Distance IQC/H: Distance XY",0.12714,0.12760,0.00080,0.00080,-0.00046,,"Distance IQC/N: Distance XY",0.08661,0.08690,0.00080,0.00080,-0.00029,,"Distance IQC/M: Distance XY",0.12997,0.13000,0.00080,0.00080,-0.00003,\n:END'
</code></pre>
<p>Then, as you said, you could first get rid of the BEGIN and END.</p>
<pre><code>data = line.replace(':BEGIN\n', '')
data = data.replace(',\n:END', '')
</code></pre>
<p>And then split the rest of the data into lines using two commas as separator.</p>
<pre><code>data_list = data.split(',,')
</code></pre>
<p>Finally, you can create your dictionary by splitting each line with commas as separators. The first element of the list could be the key (including the quotation marks in this case). For the value, you could group them into a list.</p>
<pre><code>data_dict = dict()
for data_element in data_list:
element_as_list = data_element.split(',')
key = element_as_list[0]
value = element_as_list[1:]
data_dict[key] = value
</code></pre>
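<p>If the goal is the space-separated lines from the question's expected output rather than a dictionary, the same split logic can produce them directly. A sketch (the sample string is a trimmed copy of the question's data):</p>

```python
# Split on the double comma to get one record per "Distance" entry,
# then join the quoted name and its numbers with spaces.
raw = ('"Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,'
       '"Distance IQC/F: Distance XY",0.14603,0.14590,0.00080,0.00080,0.00013,')

lines = []
for record in raw.rstrip(',').split(',,'):
    fields = record.split(',')
    name = fields[0].strip('"')        # drop the surrounding quotes
    lines.append(' '.join([name] + fields[1:]))

print('\n'.join(lines))
```

The resulting lines can then be written to a file or pasted into a spreadsheet.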
| 0 | 2016-08-02T17:24:01Z | [
"python",
"python-3.x"
] |
I have a text file that is one long line of text that I want to break up into a dictionary, how would I do that? | 38,726,533 | <p>I have a text file that looks like this, where the line that starts "Distance IQC/E: Distance XY" and the rest that follows is one long line until :END:</p>
<pre><code>:BEGIN
"Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,"Distance IQC/F: Distance XY",0.14603,0.14590,0.00080,0.00080,0.00013,,"Distance IQC/G: Distance XY",0.12074,0.12070,0.00080,0.00080,0.00004,,"Distance IQC/I: Distance XY",0.21476,0.21600,0.00200,0.00200,-0.00124,,"Distance IQC/H: Distance XY",0.12714,0.12760,0.00080,0.00080,-0.00046,,"Distance IQC/N: Distance XY",0.08661,0.08690,0.00080,0.00080,-0.00029,,"Distance IQC/M: Distance XY",0.12997,0.13000,0.00080,0.00080,-0.00003,
:END
</code></pre>
<p>I want to know how to split this text file so that each line starts with a "Distance" and is followed by the remaining float values until the next "Distance".</p>
<p>I can use file.replace(":BEGIN","") to get rid of the Begin and End. </p>
<p>Do I make a dictionary and then rewrite that dictionary to a new text file?</p>
<p>Please help I'm a very new programmer!</p>
<p>Edit: I would expect the output to be:</p>
<pre class="lang-none prettyprint-override"><code>Distance IQC/E: Distance XY 0.09066 0.09060 0.00040 0.00040 0.00006
Distance IQC/F: Distance XY 0.14603 0.14590 0.00080 0.00080 0.00013
Distance IQC/G: Distance XY 0.12074 0.12070 0.00080 0.00080 0.00004
Distance IQC/I: Distance XY 0.21476 0.21600 0.00200 0.00200 -0.00124
Distance IQC/H: Distance XY 0.12714 0.12760 0.00080 0.00080 -0.00046
Distance IQC/N: Distance XY 0.08661 0.08690 0.00080 0.00080 -0.00029
Distance IQC/M: Distance XY 0.12997 0.13000 0.00080 0.00080 -0.00003
</code></pre>
<p>This way I could archive the data cleanly into an excel file or something similar.</p>
<p>edit 2:</p>
<p>Here is the small bit of code I have so far:</p>
<pre><code>with open("file.txt","r") as read_data:
    f = read_data.read().replace(":BEGIN",'').replace(":END",'')
</code></pre>
| -1 | 2016-08-02T17:03:55Z | 38,727,211 | <pre><code>str = 'Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,"Distance IQC/F: Distance XY",0.14603,0.14590,0.00080,0.00080,0.00013,,"Distance IQC/G: Distance XY",0.12074,0.12070,0.00080,0.00080,0.00004,,"Distance IQC/I: Distance XY",0.21476,0.21600,0.00200,0.00200,-0.00124,,"Distance IQC/H: Distance XY",0.12714,0.12760,0.00080,0.00080,-0.00046,,"Distance IQC/N: Distance XY",0.08661,0.08690,0.00080,0.00080,-0.00029,,"Distance IQC/M: Distance XY",0.12997,0.13000,0.00080,0.00080,-0.00003,'
sub_str = str.split(',,')
temp_arr = []
for i in sub_str:
temp_arr.append(i.split(','))
for i in temp_arr:
str_i = ' '.join(i)
print(str_i)
</code></pre>
<p>This would solve the problem.</p>
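<p>An alternative sketch using a regular expression to grab each quoted name together with its run of numbers in one pass (the sample line is trimmed from the question's data):</p>

```python
import re

line = ('"Distance IQC/E: Distance XY",0.09066,0.09060,0.00040,0.00040,0.00006,,'
        '"Distance IQC/M: Distance XY",0.12997,0.13000,0.00080,0.00080,-0.00003,')

# Each match is (name, ",num,num,..."); turn the commas into spaces when joining.
rows = [name + nums.replace(',', ' ')
        for name, nums in re.findall(r'"([^"]+)"((?:,-?\d+\.\d+)+)', line)]
for row in rows:
    print(row)
```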
| 0 | 2016-08-02T17:46:11Z | [
"python",
"python-3.x"
] |
What framework to use for a java web app | 38,726,542 | <p>I need to develop a single page Web app. The app is currently programmed in java and displays a graph based on the users preferences. But it needs porting to a website. So I need to display it on a page, but not as a java applet. The java app accepts arguments that can be used to make new calculations based on the users requirements and output them to a text file. </p>
<p>The requirements for the project are: not displayed as a java applet, no calculations done on the clients side and the app is placed in line with other html code. </p>
<p>The calculations <em>could</em> be done client side as long as there are no ways that the mathematics is visible. Nor can it be easily found. </p>
<p>I've looked at using node.js as the backend: displaying a web form that sends the inputted data as JSON, which the server converts to XML before running the java app, which loads in the XML. Java does the calculations and saves the data points as a txt file and sends it back to the client. However, this seems long winded and not the best way to do it. I'm also unsure how to then go about the user making changes to the requirements and updating the graph. I'd like the app to be as efficient as possible and stable when handling multiple connections.</p>
<p>Hopefully this makes sense. I'm looking for any kind of guidance on languages, frameworks and some kind of design tips please! </p>
| -5 | 2016-08-02T17:04:48Z | 38,726,608 | <p>You can use Node.js for single page web applications.</p>
| 0 | 2016-08-02T17:09:15Z | [
"javascript",
"java",
"python",
"node.js",
"web-applications"
] |
What framework to use for a java web app | 38,726,542 | <p>I need to develop a single page Web app. The app is currently programmed in java and displays a graph based on the users preferences. But it needs porting to a website. So I need to display it on a page, but not as a java applet. The java app accepts arguments that can be used to make new calculations based on the users requirements and output them to a text file. </p>
<p>The requirements for the project are: not displayed as a java applet, no calculations done on the clients side and the app is placed in line with other html code. </p>
<p>The calculations <em>could</em> be done client side as long as there are no ways that the mathematics is visible. Nor can it be easily found. </p>
<p>I've looked at using node.js as the backend: displaying a web form that sends the inputted data as JSON, which the server converts to XML before running the java app, which loads in the XML. Java does the calculations and saves the data points as a txt file and sends it back to the client. However, this seems long winded and not the best way to do it. I'm also unsure how to then go about the user making changes to the requirements and updating the graph. I'd like the app to be as efficient as possible and stable when handling multiple connections.</p>
<p>Hopefully this makes sense. I'm looking for any kind of guidance on languages, frameworks and some kind of design tips please! </p>
| -5 | 2016-08-02T17:04:48Z | 38,726,833 | <p>I did not read your full description of the question before submitting a mistaken answer. But now I see what you want.</p>
<p>So what I would suggest is to split the frontend from the backend.</p>
<p>About the frontend you want to build a page with HTML + CSS + Javascript. If your app is complex (eg. more than one screen, modals, bunch of different UI components etc) you wanna use some framework like Angular or React, if it's not you can keep with the holy jQuery and you will be fine. Said that, you will post every request to your backend using AJAX.</p>
<p>Now about the backend: actually I'm writing this because you said that you want a backend, but you also said the requirement is to have all the calcs in frontend, well you need to check if you really need to persist data, process business rules etc. If the answer is yes, you will need a backend. For backend you can choose your team favorite language, which can be Java, Python, PHP, Node.js, Ruby, etc. It's up to you. </p>
<p>Forget about XML, you are going to post objects from the frontend in JSON, unless for some reason you are required to have XML. And then your backend will easily reply JSON data. There are some good frameworks for backend: Dropwizard, Springboot and Play for Java, Flask or Django for Python, Laravel for PHP, Express for Node.js, Rails for Ruby. </p>
<p>You can easily achieve what you need using the above directions. All of these frameworks in frontend and backend, are designed to integrate like that.</p>
| 0 | 2016-08-02T17:21:47Z | [
"javascript",
"java",
"python",
"node.js",
"web-applications"
] |
What framework to use for a java web app | 38,726,542 | <p>I need to develop a single page Web app. The app is currently programmed in java and displays a graph based on the users preferences. But it needs porting to a website. So I need to display it on a page, but not as a java applet. The java app accepts arguments that can be used to make new calculations based on the users requirements and output them to a text file. </p>
<p>The requirements for the project are: not displayed as a java applet, no calculations done on the clients side and the app is placed in line with other html code. </p>
<p>The calculations <em>could</em> be done client side as long as there are no ways that the mathematics is visible. Nor can it be easily found. </p>
<p>I've looked at using node.js as the backend: displaying a web form that sends the inputted data as JSON, which the server converts to XML before running the java app, which loads in the XML. Java does the calculations and saves the data points as a txt file and sends it back to the client. However, this seems long winded and not the best way to do it. I'm also unsure how to then go about the user making changes to the requirements and updating the graph. I'd like the app to be as efficient as possible and stable when handling multiple connections.</p>
<p>Hopefully this makes sense. I'm looking for any kind of guidance on languages, frameworks and some kind of design tips please! </p>
| -5 | 2016-08-02T17:04:48Z | 38,726,843 | <p>Node.js and other scripting languages have their own advantages in terms of performance and scalability. If you have a single page application and there is a chance it could be extended in the future, then you should consider MVC frameworks like Struts and Spring. In my opinion Spring is very advanced and has some really good integrations with charts and graphs. You can also serve JSON responses with Spring as a microservice, so that you can separate the UI part from the application part and remove the dependency between technologies.</p>
| 0 | 2016-08-02T17:22:29Z | [
"javascript",
"java",
"python",
"node.js",
"web-applications"
] |
NATIVE_APP context is showing on a hybrid Android app which is using Cordova, when trying to automate using Appium | 38,726,570 | <pre><code> def test_list_content(self):
self.driver.implicitly_wait(600)
time.sleep(10)
allContexts = self.driver.contexts
print allContexts
current = self.driver.current_context
print current
</code></pre>
<p>Appium version is 1.4.16.1, Windows 7, Android device with OS 4.4.2.</p>
<p>I'm working with a hybrid app and Appium only returns the NATIVE_APP context. I can see the app in the Chrome inspector and am able to click on web elements there, but with Appium I am not able to switch to the web view, so Appium cannot find the elements on screen.</p>
<p>Please help: how can I automate this kind of Android app?</p>
| 1 | 2016-08-02T17:06:39Z | 38,761,780 | <p>This is an issue with Crosswalk webviews. They are a little different. This hasn't been fixed in Appium yet. The issue is tracked here:
<a href="https://github.com/appium/appium/issues/4597" rel="nofollow">Appium issue with Crosswalk webviews. </a></p>
<p>But there is a workaround for the issue here. Follow the instructions there and it should work. I did try it out and it worked for me. The webview name was a little bit different than usual.
<a href="https://github.com/ITKarel/ChromeDriver" rel="nofollow">Crosswalk workaround for appium</a></p>
| 0 | 2016-08-04T08:05:26Z | [
"android",
"python",
"automation",
"appium"
] |
Generate custom xml with django | 38,726,602 | <p>As a heads up, I am completely new to Python and Django.</p>
<p>I am trying to move from PHP to Python, and I came to the problem of how to generate a custom XML file with all of the entries from the database. I need to create something like this:</p>
<pre><code><inv>
<invID>1</invID>
<group>Group</group>
<name>Name</name>
<description></description>
</inv>
<inv>
<invID>2</invID>
<group>Group</group>
<name>Name</name>
<description></description>
</inv>
</code></pre>
<p>UPDATE</p>
<p>For those wondering, here is the final code for saving the XML. There has got to be a better way, but this is what I came up with.</p>
<pre><code>def xml(request):
#Getting all of the items in the Database
products = Product.objects.all()
#Putting all of it in to Context to pass to template
context = {
'products': products
}
#calling template with all of the information
content = render_to_string('catalog/xml_template.xml', context)
    #Saving the template to a static folder so it can be accessed without calling the view
with open (os.path.join(BASE_DIR, 'static/test.xml'), 'w') as xmlfile:
xmlfile.write(content.encode('utf8'))
    #Not sure if I actually need the return, but it did not let me run it without one
return render(request, 'catalog/xml_template.xml', context)
</code></pre>
| 0 | 2016-08-02T17:09:06Z | 40,051,563 | <pre><code>import os
from django.shortcuts import render
from django.template.loader import render_to_string
# BASE_DIR and the Product model are assumed to be importable from your project
def xml(request):
#Getting all of the items in the Database
products = Product.objects.all()
#Putting all of it in to Context to pass to template
context = {
'products': products
}
#calling template with all of the information
content = render_to_string('catalog/xml_template.xml', context)
    #Saving the template to a static folder so it can be accessed without calling the view
with open (os.path.join(BASE_DIR, 'static/test.xml'), 'w') as xmlfile:
xmlfile.write(content.encode('utf8'))
    #Not sure if I actually need the return, but it did not let me run it without one
return render(request, 'catalog/xml_template.xml', context)
</code></pre>
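<p>If you would rather build the XML in Python instead of rendering a template, the standard library's ElementTree can produce the structure from the question directly. A sketch (in the real view the dicts would be replaced by values from <code>Product.objects.all()</code>; the field names just follow the question's sample output):</p>

```python
import xml.etree.ElementTree as ET

# Stand-in for the Product queryset; only the field names matter here.
products = [
    {'invID': '1', 'group': 'Group', 'name': 'Name', 'description': ''},
    {'invID': '2', 'group': 'Group', 'name': 'Name', 'description': ''},
]

root = ET.Element('inventory')   # a wrapper element keeps the document well-formed
for product in products:
    inv = ET.SubElement(root, 'inv')
    for field in ('invID', 'group', 'name', 'description'):
        ET.SubElement(inv, field).text = product[field]

xml_bytes = ET.tostring(root, encoding='utf-8')
print(xml_bytes.decode('utf-8'))
```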
| 0 | 2016-10-14T20:33:27Z | [
"php",
"python",
"xml",
"django"
] |
Python 3.4 Pygame My sprite does not appear | 38,726,603 | <p>I have written simple code to get a green block which is my sprite to scroll across the screen. When the game starts the sprite is meant to appear in the centre of the screen, however when I run my code the screen is just black and the green block does not appear unless I click on the x cross on the window to exit the screen, then it appears for a second when the window is closing. Any ideas how I can resolve this.</p>
<pre><code>import pygame, random
WIDTH = 800 #Size of window
HEIGHT = 600 #size of window
FPS = 30
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
class Player(pygame.sprite.Sprite):
#sprite for the player
def __init__(self):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.Surface((50, 50))
self.image.fill(GREEN)
self.rect = self.image.get_rect()
self.rect.center = (WIDTH/2, HEIGHT/2)
def update(self):
self.rect.x += 5
#initialize pygame and create window
pygame.init()
pygame.mixer.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("My Game")
clock = pygame.time.Clock()
all_sprites = pygame.sprite.Group()
player = Player()
all_sprites.add(player)
#Game loop
running = True
while running:
clock.tick(FPS)
for event in pygame.event.get():
#check for closing window
if event.type == pygame.QUIT:
running = False
#update
all_sprites.update()
#Render/Draw
screen.fill(BLACK)
all_sprites.draw(screen)
pygame.display.flip()
pygame.quit()
</code></pre>
| 0 | 2016-08-02T17:09:08Z | 38,726,909 | <p>All your code to update the sprites, fill the screen and draw the sprites is outside your main loop (<code>while running</code>).</p>
<p>You must keep in mind that indentation is part of Python's syntax: your commands are just outside your main loop.</p>
<p>Moreover, I'd strongly advise putting that main loop inside a proper function, instead of just leaving it at the module root.</p>
<pre><code>...
#Game loop
running = True
while running:
clock.tick(FPS)
for event in pygame.event.get():
#check for closing window
if event.type == pygame.QUIT:
running = False
#update
all_sprites.update()
#Render/Draw
screen.fill(BLACK)
all_sprites.draw(screen)
pygame.display.flip()
pygame.quit()
</code></pre>
| 0 | 2016-08-02T17:25:55Z | [
"python",
"pygame",
"sprite"
] |
python json print empty value | 38,726,629 | <p>When I print <code>banner["STATUS"]["Description"]</code>, I get an empty value?</p>
<pre><code>banner = {
"STATUS":[
{
"STATUS": "S",
"When": 1470157636,
"Code": 11,
"Msg": "Summary",
"Description": "nsgminer 0.9.2"
}
],
"SUMMARY": [
{
"Elapsed":1502,
"MHS av":0.00,
"Found Blocks":0,
"Getworks":58,
"Accepted":0,
"Rejected":0,
"Hardware Errors":0,
"Utility":0.00,
"Discarded":116,
"Stale":0,
"Get Failures":2,
"Local Work":180,
"Remote Failures":0,
"Network Blocks":17,
"Total MH":0.0000,
"Work Utility":0.00,
"Difficulty Accepted":0.00000000,
"Difficulty Rejected":0.00000000,
"Difficulty Stale":0.00000000,
"Best Share":0
}
],
"id":1
}
</code></pre>
| 1 | 2016-08-02T17:10:32Z | 38,726,741 | <pre><code>print banner['STATUS'][0]['Description'] # [0] selects the first element of the list
</code></pre>
<p>The 'STATUS' key holds a list. Since 'Description' is in the first dictionary in that list, [0] is needed to access that dictionary.</p>
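<p>Since <code>banner["STATUS"]</code> is a list of dictionaries, you can also loop over it instead of hard-coding the index. A sketch using a trimmed copy of the question's data:</p>

```python
banner = {
    "STATUS": [
        {"STATUS": "S", "When": 1470157636, "Code": 11,
         "Msg": "Summary", "Description": "nsgminer 0.9.2"}
    ],
    "id": 1,
}

# Collect the Description of every entry, however many there are.
descriptions = [entry["Description"] for entry in banner["STATUS"]]
print(descriptions)
```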
| 2 | 2016-08-02T17:16:59Z | [
"python",
"json"
] |
Recenter Text After Input Tkinter (Python) | 38,726,769 | <p>I am using Tkinter for Python right now, and I'm trying to figure out if it's possible to recenter text after a user inputs something into a Text widget.</p>
<p>Currently, my code outputs like this:</p>
<p><a href="http://i.stack.imgur.com/HjubY.png" rel="nofollow"><img src="http://i.stack.imgur.com/HjubY.png" alt="The program when it starts."></a>
<a href="http://i.stack.imgur.com/TqXKh.png" rel="nofollow"><img src="http://i.stack.imgur.com/TqXKh.png" alt="The program after a user inputs information."></a></p>
<p>What I'd like is for the entered text to appear centered within the text field. My code is below. All of the text entries are essentially the same, so most of the code is similar. I only included it for comprehensiveness, but I'm pretty sure it just has to deal with modifying the Text(...) object.</p>
<p>Thank you.</p>
<pre><code>root = Tk()
root.title("Generate Report")
bdFrame = ttk.Frame(root, padding="3 3 12 12", relief="groove", borderwidth=.5)
bdFrame.grid(column=0, row=0, rowspan = 7,sticky=(N, W, E, S))
vrFrame = ttk.Frame(root, padding ="4 4 12 12", relief="groove", borderwidth=.5)
vrFrame.grid(column=1, row=0, rowspan = 7,sticky=(N,W,E,S))
bdFileName = StringVar()
bdRowStart = StringVar()
bdCNCol = StringVar()
bdCNumCol = StringVar()
bdTBCol = StringVar()
bdBRCol = StringVar()
bdLCCol = StringVar()
vrFileName = StringVar()
vrRowStart = StringVar()
vrCNCol = StringVar()
vrCNumCol = StringVar()
vrMSCol = StringVar()
ttk.Label(bdFrame, text="X Variables").grid(column=0, row=0, columnspan=2, padx=2)
ttk.Label(vrFrame, text="Y Report Data").grid(column=0, row=0, columnspan=2, padx=2)
bdFileNameEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdFileNameEntry.tag_configure("tag-center",justify='center')
bdFileNameEntry.insert(INSERT, "bd.xlsx", "tag-center")
bdFileNameEntry.bind("<Tab>", focus_next_window)
bdFileNameEntry.bind("<Return>", focus_next_window)
bdFileNameEntry.bind("<Shift-Tab>", focus_previous_window)
bdFileNameEntry.grid(column=1, row = 1, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="File Name").grid(column=0, row=1, sticky=(N,S,W,E))
bdRowStartEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdRowStartEntry.tag_configure("tag-center",justify='center')
bdRowStartEntry.insert(INSERT, "6", "tag-center")
bdRowStartEntry.bind("<Tab>", focus_next_window)
bdRowStartEntry.bind("<Return>", focus_next_window)
bdRowStartEntry.bind("<Shift-Tab>", focus_previous_window)
bdRowStartEntry.grid(column=1, row = 2, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="Row Start").grid(column=0, row=2, sticky=(N,S,W,E))
bdCNColEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdCNColEntry.tag_configure("tag-center",justify='center')
bdCNColEntry.insert(INSERT, "B", "tag-center")
bdCNColEntry.bind("<Tab>", focus_next_window)
bdCNColEntry.bind("<Return>", focus_next_window)
bdCNColEntry.bind("<Shift-Tab>", focus_previous_window)
bdCNColEntry.grid(column=1,row=3, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="Customer Name Column").grid(column=0, row=3, sticky=(N,S,W,E))
bdCNumColEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdCNumColEntry.tag_configure("tag-center",justify='center')
bdCNumColEntry.insert(INSERT, "C", "tag-center")
bdCNumColEntry.bind("<Tab>", focus_next_window)
bdCNumColEntry.bind("<Return>", focus_next_window)
bdCNumColEntry.bind("<Shift-Tab>", focus_previous_window)
bdCNumColEntry.grid(column=1,row=4, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="Customer Code Column").grid(column=0, row=4, sticky=(N,S,W,E))
bdTBColEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdTBColEntry.tag_configure("tag-center",justify='center')
bdTBColEntry.insert(INSERT, "G", "tag-center")
bdTBColEntry.bind("<Tab>", focus_next_window)
bdTBColEntry.bind("<Return>", focus_next_window)
bdTBColEntry.bind("<Shift-Tab>", focus_previous_window)
bdTBColEntry.grid(column=1,row=5, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="TotOwnBrNetRev Column").grid(column=0, row=5, sticky=(N,S,W,E))
bdBRColEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdBRColEntry.tag_configure("tag-center",justify='center')
bdBRColEntry.insert(INSERT, "J", "tag-center")
bdBRColEntry.bind("<Tab>", focus_next_window)
bdBRColEntry.bind("<Return>", focus_next_window)
bdBRColEntry.bind("<Shift-Tab>", focus_previous_window)
bdBRColEntry.grid(column=1,row=6, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="BrNetRevPcnt Column").grid(column=0, row=6, sticky=(N,S,W,E))
bdLCColEntry = Text(bdFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
bdLCColEntry.tag_configure("tag-center",justify='center')
bdLCColEntry.insert(INSERT, "D", "tag-center")
bdLCColEntry.bind("<Tab>", focus_next_window)
bdLCColEntry.bind("<Return>", focus_next_window)
bdLCColEntry.bind("<Shift-Tab>", focus_previous_window)
bdLCColEntry.grid(column=1,row=7, sticky=(N,S,E,W))
ttk.Label(bdFrame, text="LoadCount Column").grid(column=0, row=7, sticky=(N,S,W,E))
vrFileNameEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
vrFileNameEntry.tag_configure("tag-center",justify='center')
vrFileNameEntry.insert(INSERT, "vr.xlsx", "tag-center")
vrFileNameEntry.bind("<Tab>", focus_next_window)
vrFileNameEntry.bind("<Return>", focus_next_window)
vrFileNameEntry.bind("<Shift-Tab>", focus_previous_window)
vrFileNameEntry.grid(column=1, row = 1, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="File Name").grid(column=0, row=1, sticky=(N,S,W,E))
vrRowStartEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height = 1, wrap="word")
vrRowStartEntry.tag_configure("tag-center",justify='center')
vrRowStartEntry.insert(INSERT, "2", "tag-center")
vrRowStartEntry.bind("<Tab>", focus_next_window)
vrRowStartEntry.bind("<Return>", focus_next_window)
vrRowStartEntry.bind("<Shift-Tab>", focus_previous_window)
vrRowStartEntry.grid(column=1, row = 2, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="Row Start").grid(column=0, row=2, sticky=(N,S,W,E))
vrCNColEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
vrCNColEntry.tag_configure("tag-center",justify='center')
vrCNColEntry.insert(INSERT, "A", "tag-center")
vrCNColEntry.bind("<Tab>", focus_next_window)
vrCNColEntry.bind("<Return>", focus_next_window)
vrCNColEntry.bind("<Shift-Tab>", focus_previous_window)
vrCNColEntry.grid(column=1, row = 3, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="Customer Name Column").grid(column=0, row=3, sticky=(N,S,W,E))
vrCNumColEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
vrCNumColEntry.tag_configure("tag-center",justify='center')
vrCNumColEntry.insert(INSERT, "A", "tag-center")
vrCNumColEntry.bind("<Tab>", focus_next_window)
vrCNumColEntry.bind("<Return>", focus_next_window)
vrCNumColEntry.bind("<Shift-Tab>", focus_previous_window)
vrCNumColEntry.grid(column=1, row = 4, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="Customer Number Column").grid(column=0, row=4, sticky=(N,S,W,E))
vrMSColEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
vrMSColEntry.tag_configure("tag-center",justify='center')
vrMSColEntry.insert(INSERT, "A", "tag-center")
vrMSColEntry.bind("<Tab>", focus_next_window)
vrMSColEntry.bind("<Return>", focus_next_window)
vrMSColEntry.bind("<Shift-Tab>", focus_previous_window)
vrMSColEntry.grid(column=1, row = 5, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="Market Share Column").grid(column=0, row=5, sticky=(N,S,W,E))
vrMSColEntry = Text(vrFrame, background = "LightSteelBlue", width=16, height=1, wrap="word")
vrMSColEntry.tag_configure("tag-center",justify='center')
vrMSColEntry.insert(INSERT, "A", "tag-center")
vrMSColEntry.bind("<Tab>", focus_next_window)
vrMSColEntry.bind("<Return>", focus_next_window)
vrMSColEntry.bind("<Shift-Tab>", focus_previous_window)
vrMSColEntry.grid(column=1, row = 6, sticky=(N,S,E,W))
ttk.Label(vrFrame, text="Market Share Column").grid(column=0, row=6, sticky=(N,S,W,E))
ttk.Button(bdFrame, text="Calculate", command=saveValues).grid(column=0, row=8, columnspan=2, sticky=(N,S,E,W))
ttk.Button(vrFrame, text="Cancel", command=saveValues).grid(column=0, row=8, columnspan=2, sticky=(N,S,E,W))
for y in range (0,7):
root.rowconfigure(y, weight = 1)
bdFrame.rowconfigure(y, weight = 1)
vrFrame.rowconfigure(y, weight = 1)
bdFrame.rowconfigure(7, weight = 1)
for x in range (0, 2):
root.columnconfigure(x, weight = 2)
bdFrame.columnconfigure(x, weight = 1)
vrFrame.columnconfigure(x, weight = 1)
for child in bdFrame.winfo_children(): child.grid_configure(padx=5, pady=5)
for child in vrFrame.winfo_children(): child.grid_configure(padx=5, pady=7)
bdFileNameEntry.focus()
root.mainloop()
</code></pre>
| 0 | 2016-08-02T17:18:11Z | 38,727,577 | <p>Assuming you can't use an <code>Entry</code> widget which has a built-in <code>justify</code> attribute, perhaps the simplest solution is to add a binding on any key release which re-tags the data in the widget:</p>
<pre><code>def retag(event):
event.widget.tag_add("tag-center", "1.0", "end")
...
bdFileNameEntry.bind("<Any-KeyRelease>", retag)
</code></pre>
| 0 | 2016-08-02T18:05:28Z | [
"python",
"tkinter"
] |
Unable to kill Python subprocess using process.kill() or process.terminate() or os.kill() or using psutil | 38,726,804 | <p>Using python, I am starting two subprocesses in parallel. One is an HTTP Server, while other is an execution of another program(CustomSimpleHTTPServer.py, which is a python script generated by selenium IDE plugin to open firefox, navigate to a website and do some interactions). On the other hand, I want to stop execution of the first subprocess(the HTTP Server) when the second subprocess is finished executing.</p>
<p>The logic of my code is that the selenium script will open a website. The website will automatically make a few GET calls to my HTTP Server. After the selenium script is finished executing, the HTTP Server is supposed to be closed so that it can log all the captured requests in a file.</p>
<p>Here is my main Python code:</p>
<pre><code>import subprocess
import threading
import psutil

class Myclass(object):
    HTTPSERVERPROCESS = ""

    def startHTTPServer(self):
        print "********HTTP Server started*********"
        try:
            self.HTTPSERVERPROCESS=subprocess.Popen('python CustomSimpleHTTPServer.py', \
                shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
        except Exception as e:
            print "Exception captured while starting HTTP Server process: %s\n" % e

    def startNavigatingFromBrowser(self):
        print "********Opening firefox to start navigation*********"
        try:
            process=subprocess.Popen('python navigationScript.py', \
                shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            process.communicate()
            process.wait()
        except Exception as e:
            print "Exception captured starting Browser Navigation process : %s\n" % e
        try:
            if process.returncode==0:
                print "HTTPSERVEPROCESS value: %s" % self.HTTPSERVERPROCESS.returncode
                print self.HTTPSERVERPROCESS
                #self.HTTPSERVERPROCESS.kill()
                #self.HTTPSERVERPROCESS.terminate()
                #self.kill(self.HTTPSERVERPROCESS.pid)
        except Exception as e:
            print "Exception captured while killing HTTP Server process : %s\n" % e

    def kill(self,proc_pid):
        process = psutil.Process(proc_pid)
        for proc in process.get_children(recursive=True):
            proc.kill()
        process.kill()

    def startCapture(self):
        print "********Starting Parallel execution of Server initiation and firefox navigation script*********"
        t1 = threading.Thread(target=self.startHTTPServer())
        t2 = threading.Thread(target=self.startNavigatingFromBrowser())
        t1.start()
        t2.start()
        t2.join()
</code></pre>
<p><strong>Note: Execution starts by calling startCapture()</strong></p>
<p>Here is the code for CustomSimpleHTTPServer.py, which is supposed to write the captured requests to logfile.txt upon termination:</p>
<pre><code>import SimpleHTTPServer
import SocketServer

PORT = 5555

class MyHTTPHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    log_file = open('logfile.txt', 'w')

    def log_message(self, format, *args):
        self.log_file.write("%s - - [%s] %s\n" %
                            (self.client_address[0],
                             self.log_date_time_string(),
                             format%args))

Handler = MyHTTPHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()
</code></pre>
<p>When I use self.HTTPSERVERPROCESS.kill() or self.HTTPSERVERPROCESS.terminate() or os.kill(), I get the following in my terminal upon running the main Python code:</p>
<pre><code>********Starting Parallel execution of Server initiation and firefox navigation script*********
********SimpleHTTPServer started*********
********Opening firefox to start navigation*********
HTTPSERVEPROCESS value: <subprocess.Popen object at 0x1080f8410>
2459
Exception captured while killing HTTP Server process : [Errno 3] No such process
Process finished with exit code 0
</code></pre>
<p>And when I use self.kill(self.HTTPSERVERPROCESS.pid), I get the following in my terminal upon running the main Python code:</p>
<pre><code>********Starting Parallel execution of Server initiation and firefox navigation script*********
********SimpleHTTPServer started*********
********Opening firefox to start navigation*********
HTTPSERVEPROCESS value: <subprocess.Popen object at 0x1080f8410>
2459
Exception captured while killing HTTP Server process : 'Process' object has no attribute 'get_children'
Process finished with exit code 0
</code></pre>
<p>None of the following 3 calls is able to kill the HTTPServer process:</p>
<pre><code>self.HTTPSERVERPROCESS.kill()
self.HTTPSERVERPROCESS.terminate()
self.kill(self.HTTPSERVERPROCESS.pid)
</code></pre>
<p>I know that CustomSimpleHTTPServer.py is correct because when I run it separately and manually browse to the website, and then manually terminate the CustomSimpleHTTPServer.py script by hitting CTRL-C in the terminal, the logs are populated in logfile.txt.</p>
<p>What changes do I make to my code so that it works properly and logs are populated?</p>
| 0 | 2016-08-02T17:20:13Z | 38,727,527 | <p>You should just use <code>os.kill()</code> to signal processes:</p>
<pre><code>import os
import signal
...
os.kill(the_pid, signal.SIGTERM) # usually kills processes
os.kill(the_pid, signal.SIGKILL) # should always kill a process
</code></pre>
<p>Also, if you kill the parent process it also usually kills the children.</p>
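<p>One caveat given the question's use of <code>shell=True</code>: there <code>Popen.pid</code> is the pid of the shell, not of the python server it launches, so signalling just that pid can leave the real child running. A POSIX-only sketch is to start the child in its own process group and signal the whole group (<code>start_new_session</code> is Python 3; on Python 2, <code>preexec_fn=os.setsid</code> has the same effect). The <code>sleep</code> child below just stands in for the server:</p>

```python
import os
import signal
import subprocess

# Start the child in its own session/process group, then signal the whole
# group so the child and anything it spawned all receive SIGTERM.
proc = subprocess.Popen(["sleep", "30"], start_new_session=True)
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
print(proc.returncode)  # negative value: terminated by that signal
```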
<p>Update:</p>
<p>I made two small changes to the Server program:</p>
<ul>
<li>Add a call to <code>self.log_file.flush()</code> to make sure log entries
are flushed out to the log file.</li>
<li>Override <code>allow_reuse_address</code> so that you can reuse the
same address shortly after terminating the server.
(See <a href="http://stackoverflow.com/a/35363839/866915">this SO question</a>)</li>
</ul>
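<p>For the second bullet: setting <code>allow_reuse_address</code> just makes <code>TCPServer</code> set <code>SO_REUSEADDR</code> on its listening socket before binding, roughly equivalent to this sketch:</p>

```python
import socket

# What TCPServer.server_bind does when allow_reuse_address is True (sketch).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", 0))  # port 0 here only so the demo binds to any free port
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.close()
print(reuse != 0)
```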
<p>File <code>Server</code>:</p>
<pre><code>#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer

PORT = 5555

class MyServer(SocketServer.TCPServer):
    allow_reuse_address = True

class MyHTTPHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    log_file = open('logfile.txt', 'w')

    def log_message(self, format, *args):
        self.log_file.write("%s - - [%s] %s\n" %
                            (self.client_address[0],
                             self.log_date_time_string(),
                             format%args))
        self.log_file.flush()

Handler = MyHTTPHandler
httpd = MyServer(("", PORT), Handler)
httpd.serve_forever()
</code></pre>
<p>Here is a simple navigation program (file <code>Navigate</code>):</p>
<pre><code>#!/usr/bin/env python
import requests
import time

URL = "http://localhost:5555/"

def main():
    for x in range(5):
        print "x =", x
        r = requests.get(URL + "asd")
        time.sleep(1)
    print "Quitting"

main()
</code></pre>
<p>And here is the main program:</p>
<pre><code>#!/usr/bin/env python
import os
import subprocess
import signal

def main():
    # start web server
    web = subprocess.Popen(["./Server"])
    print "web server pid:", web.pid

    # start navigator
    nav = subprocess.Popen(["./Navigate"])
    print "nav pid: ", nav.pid

    # wait for nav to exit
    nav.wait()
    print "done waiting for nav"

    print "killing web server"
    os.kill(web.pid, signal.SIGTERM )
    web.wait()
    print "server terminated"

main()
</code></pre>
| 1 | 2016-08-02T18:01:49Z | [
"python",
"multithreading",
"selenium"
] |
Using PhotoImage with threading tkinter and matplotlib | 38,726,827 | <p>I am a relatively new user of python and of programming in general, so please keep it very simple.
I have been trying to write a program in python to keep track of the number of pages I read each day for a specific book, using matplotlib. Everything has been working in general, but when I tried to put an image on the canvas with PhotoImage and
canvas.create_image in order to differentiate each book, an error appears. Here's the code:</p>
<pre><code>from random import *
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from tkinter import *
from threading import *

def graficar():
    fig = plt.figure()
    ax1 = fig.add_subplot(1,1,1)
    def animate(i):
        pullData = open('lista_coordenadas.txt','r').read()
        dataArray = pullData.split('\n')
        xar = []
        yar = []
        for e in dataArray:
            if len(e)>1:
                x,y = e.split(',')
                xar.append(int(x))
                yar.append(int(y))
        ax1.clear()
        ax1.plot(xar,yar)
    ani = animation.FuncAnimation(fig,animate,interval=1000)
    plt.show()

def gui():
    root = Tk()
    root.geometry('1000x800+0+0')
    canvas = Canvas(root,width=500,height=500,bg='blue')
    ima = PhotoImage(file='niño.png')
    canvas.create_image(100,100,image=ima)
    def escribir():
        f = open('lista_coordenadas.txt','r')
        lectura = f.read()
        lectura2 = lectura.split('\n')
        for e in lectura2:
            if e!='':
                ele = e.split(',')[0]
        f.close()
        f = open('lista_coordenadas.txt','a')
        f.write(str(int(ele)+1)+','+str(randrange(100))+'\n')
        f.close()
    b = Button(root,text='Press',command=escribir)
    b.pack()
    canvas.pack()
    root.mainloop()

T1 = Thread(target=gui)
T2 = Thread(target=graficar)
T2.start()
T1.start()
</code></pre>
<p>And the error:</p>
<blockquote>
<blockquote>
<blockquote>
<p>Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\Sonia\AppData\Local\Programs\Python\Python35-32\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\Sonia\AppData\Local\Programs\Python\Python35-32\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Sonia\AppData\Local\Programs\Python\Python35-32\thideaslibros.py", line 31, in gui
ima = PhotoImage(file='niño.png')
File "C:\Users\Sonia\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 3393, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "C:\Users\Sonia\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 3349, in __init__
self.tk.call(('image', 'create', imgtype, name,) + options)
RuntimeError: main thread is not in main loop</p>
</blockquote>
</blockquote>
</blockquote>
<p>Please help!!</p>
| 0 | 2016-08-02T17:21:34Z | 38,728,842 | <p>I ran this code on my python and got no errors (I commented out a line that you will need, i didn't need it cos i was working with a gif). Obviously since i had no idea what was in the text file, i didn't get a suitable result, but the main thing was there were no code errors.</p>
<pre><code>import matplotlib #added this
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from random import *
from tkinter import *
from threading import *
import os #added this
#import Image, ImageTk #you'll need this to work with a png/jpg

def graficar():
    fig = plt.figure()
    ax1 = fig.add_subplot(1,1,1)
    def animate(i):
        pullData = open('lista_coordenadas.txt','r').read()
        dataArray = pullData.split('\n')
        xar = []
        yar = []
        for e in dataArray:
            if len(e)>1:
                x,y = e.split(',')
                xar.append(int(x))
                yar.append(int(y))
        ax1.clear()
        ax1.plot(xar,yar)
    ani = animation.FuncAnimation(fig,animate,interval=1000)
    plt.show()

def gui():
    root = Tk()
    root.geometry('1000x800+0+0')
    canvas = Canvas(root,width=500,height=500,bg='blue')
    filepath = os.path.join('C:/Users/Rachel/Desktop/Downloads/', 'cat.gif')
    #defined a filepath
    ima = PhotoImage(file=filepath) #my own image, loaded straight from the filepath
    canvas.create_image(100,100,image=ima)
    def escribir():
        f = open('lista_coordenadas.txt','r')
        lectura = f.read()
        lectura2 = lectura.split('\n')
        for e in lectura2:
            if e!='':
                ele = e.split(',')[0]
        f.close()
        f = open('lista_coordenadas.txt','a')
        f.write(str(int(ele)+1)+','+str(randrange(100))+'\n')
        f.close()
    b = Button(root,text='Press',command=escribir)
    b.pack()
    canvas.pack()
    root.mainloop()

T1 = Thread(target=gui)
T2 = Thread(target=graficar)
T1.start()
T2.start()
</code></pre>
<p>Maybe give it a whirl..</p>
| 0 | 2016-08-02T19:22:05Z | [
"python",
"multithreading",
"matplotlib",
"photoimage"
] |
Python "Expected an indented block" IndentationError? | 38,726,845 | <p>I am running into this error with some code I am writing using Scrapy, and I have no clue why it is happening. I have searched for answers on here and elsewhere, and every example I see is someone who didn't indent after an if statement or something like that. That's not the problem I'm having; I get the error for this code:</p>
<pre><code>filename = "output.txt"
with open(filename, 'rb+') as f:
    f.seek(-1,2)
    last_char = f.read()
</code></pre>
<p>The error appears on the f.seek(-1,2) line, and I am extremely confused, because it is clearly indented. I assume I'm missing something very obvious, since I haven't programmed in Python in about a year and a half, but any help would be great, thanks!</p>
| -2 | 2016-08-02T17:22:32Z | 38,726,912 | <p>A few ideas that could help:</p>
<p>Double check your tabs and spaces. Delete from <code>f.seek(</code> backwards until it appears right after the <code>:</code>, press return once and let your IDE take care of the indentation for you.</p>
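<p>Since the usual culprit is a tab character hiding in indentation that otherwise looks fine, a tiny check like this will point at the offending line (the <code>source</code> string below is a made-up reproduction of the snippet with a stray tab):</p>

```python
# Flag lines whose leading whitespace contains a tab character.
source = ('filename = "output.txt"\n'
          "with open(filename, 'rb+') as f:\n"
          "\tf.seek(-1,2)\n"
          "    last_char = f.read()\n")

bad = [n for n, line in enumerate(source.splitlines(), 1)
       if '\t' in line[:len(line) - len(line.lstrip())]]
print(bad)  # line numbers with a tab in their indentation
```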
| 1 | 2016-08-02T17:26:04Z | [
"python",
"file",
"io"
] |
Pandas count the number of times an event has occurred in last n days by group | 38,726,855 | <p>I have a table of events occurring by id. How would I count the number of times in the last n days that each event type has occurred prior to the current row?</p>
<p>For example with a list of events like:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
                   {'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
                   {'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                   {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'},])
df['event_day'] = pd.to_datetime(df['event_day'])
df = df.sort_values(['id', 'event_day'])
</code></pre>
<p>or: </p>
<pre><code> event_day event_type id
0 2016-01-01 type1 1
1 2016-01-02 type1 1
2 2016-02-01 type2 2
3 2016-02-15 type3 2
4 2016-01-06 type3 3
5 2016-03-11 type3 3
</code></pre>
<p>by <code>id</code> I want to count the number of times each <code>event_type</code> has occurred prior to the current row in the last n days. For example, in row 3 id=2, so how many times up to (but not including) that point in the event history have events types 1, 2, and 3 occurred in the last n days for id 2?</p>
<p>The desired output would look something like below:</p>
<pre><code> event_day event_type event_type1_in_last_30days event_type2_in_last_30days event_type3_in_last_30days id
0 2016-01-01 type1 0 0 0 1
1 2016-01-02 type1 1 0 0 1
2 2016-02-01 type2 0 0 0 2
3 2016-02-15 type3 0 1 0 2
4 2016-01-06 type3 0 0 0 3
5 2016-03-11 type3 0 0 0 3
</code></pre>
| 3 | 2016-08-02T17:23:15Z | 38,727,983 | <pre><code>res = ((((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
         & (df['event_day'].values < df['event_day'].values[:, None]))
        & (df['id'].values == df['id'].values[:, None]))
       .dot(pd.get_dummies(df['event_type'])))
res
Out:
array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 0., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
</code></pre>
<hr>
<p>The first part is to generate a matrix as follows:</p>
<pre><code>(df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
Out:
array([[ True, True, True, True, True, True],
[ True, True, True, True, True, True],
[False, True, True, True, True, True],
[False, False, True, True, False, True],
[ True, True, True, True, True, True],
[False, False, False, True, False, True]], dtype=bool)
</code></pre>
<p>It's a 6x6 matrix and for each row it makes a comparison against the other rows. It makes use of NumPy's broadcasting for pairwise comparison (<code>.values[:, None]</code> adds another axis). To make it complete, we also need to check that the other row occurs strictly before this row: </p>
<pre><code>(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
   & (df['event_day'].values < df['event_day'].values[:, None])))
Out:
array([[False, False, False, False, False, False],
[ True, False, False, False, False, False],
[False, True, False, False, True, False],
[False, False, True, False, False, False],
[ True, True, False, False, False, False],
[False, False, False, True, False, False]], dtype=bool)
</code></pre>
<p>Another condition is about the id's. Using a similar approach, you can construct a pairwise comparison matrix that shows when id's match:</p>
<pre><code>(df['id'].values == df['id'].values[:, None])
Out:
array([[ True, True, False, False, False, False],
[ True, True, False, False, False, False],
[False, False, True, True, False, False],
[False, False, True, True, False, False],
[False, False, False, False, True, True],
[False, False, False, False, True, True]], dtype=bool)
</code></pre>
<p>It becomes:</p>
<pre><code>(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
   & (df['event_day'].values < df['event_day'].values[:, None]))
  & (df['id'].values == df['id'].values[:, None]))
Out:
array([[False, False, False, False, False, False],
[ True, False, False, False, False, False],
[False, False, False, False, False, False],
[False, False, True, False, False, False],
[False, False, False, False, False, False],
[False, False, False, False, False, False]], dtype=bool)
</code></pre>
<p>Lastly, you want to see it for each type so you can use get_dummies:</p>
<pre><code>pd.get_dummies(df['event_type'])
Out:
type1 type2 type3
0 1.0 0.0 0.0
1 1.0 0.0 0.0
2 0.0 1.0 0.0
3 0.0 0.0 1.0
4 0.0 0.0 1.0
5 0.0 0.0 1.0
</code></pre>
<p>If you multiply the resulting matrix with this one, it should give you the number of rows satisfying that condition for each type. You can pass the resulting array to a DataFrame constructor and concat:</p>
<pre><code>pd.concat([df, pd.DataFrame(res, columns = ['e1', 'e2', 'e3'])], axis=1)
Out:
event_day event_type id e1 e2 e3
0 2016-01-01 type1 1 0.0 0.0 0.0
1 2016-01-02 type1 1 1.0 0.0 0.0
2 2016-02-01 type2 2 0.0 0.0 0.0
3 2016-02-15 type3 2 0.0 1.0 0.0
4 2016-01-06 type3 3 0.0 0.0 0.0
5 2016-03-11 type3 3 0.0 0.0 0.0
</code></pre>
| 2 | 2016-08-02T18:30:33Z | [
"python",
"pandas"
] |
Pandas count the number of times an event has occurred in last n days by group | 38,726,855 | <p>I have a table of events occurring by id. How would I count the number of times in the last n days that each event type has occurred prior to the current row?</p>
<p>For example with a list of events like:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
                   {'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
                   {'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                   {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'},])
df['event_day'] = pd.to_datetime(df['event_day'])
df = df.sort_values(['id', 'event_day'])
</code></pre>
<p>or: </p>
<pre><code> event_day event_type id
0 2016-01-01 type1 1
1 2016-01-02 type1 1
2 2016-02-01 type2 2
3 2016-02-15 type3 2
4 2016-01-06 type3 3
5 2016-03-11 type3 3
</code></pre>
<p>by <code>id</code> I want to count the number of times each <code>event_type</code> has occurred prior to the current row in the last n days. For example, in row 3 id=2, so how many times up to (but not including) that point in the event history have events types 1, 2, and 3 occurred in the last n days for id 2?</p>
<p>The desired output would look something like below:</p>
<pre><code> event_day event_type event_type1_in_last_30days event_type2_in_last_30days event_type3_in_last_30days id
0 2016-01-01 type1 0 0 0 1
1 2016-01-02 type1 1 0 0 1
2 2016-02-01 type2 0 0 0 2
3 2016-02-15 type3 0 1 0 2
4 2016-01-06 type3 0 0 0 3
5 2016-03-11 type3 0 0 0 3
</code></pre>
| 3 | 2016-08-02T17:23:15Z | 38,730,063 | <p>Ok, I really enjoyed ayhan's approach. But I have another which is probably slower (just my assumption that <code>apply</code> is usually slow), although I think the logic is more straightforward. If anyone wants to try to compare the two, especially how they scale, I'd be very interested:</p>
<pre><code>In [1]: import pandas as pd, numpy as np
In [2]: df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
{'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
{'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
{'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
{'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
{'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'},])
In [3]: df['event_day'] = pd.to_datetime(df['event_day'])
In [4]: df = df.sort_values(['id', 'event_day'])
In [5]: dummies = pd.get_dummies(df)
In [6]: dummies.set_index('event_day', inplace=True)
In [7]: dummies
Out[7]:
id event_type_type1 event_type_type2 event_type_type3
event_day
2016-01-01 1 1.0 0.0 0.0
2016-01-02 1 1.0 0.0 0.0
2016-02-01 2 0.0 1.0 0.0
2016-02-15 2 0.0 0.0 1.0
2016-01-06 3 0.0 0.0 1.0
2016-03-11 3 0.0 0.0 1.0
In [8]: import datetime
In [9]: delta30 = datetime.timedelta(days=30)
In [10]: delta1 = datetime.timedelta(days=1)
In [11]: dummies.apply(lambda x: dummies[dummies.id == x.id].loc[x.name - delta30:x.name - delta1].sum() ,axis=1)
Out[11]:
id event_type_type1 event_type_type2 event_type_type3
event_day
2016-01-01 0.0 0.0 0.0 0.0
2016-01-02 1.0 1.0 0.0 0.0
2016-02-01 0.0 0.0 0.0 0.0
2016-02-15 2.0 0.0 1.0 0.0
2016-01-06 0.0 0.0 0.0 0.0
2016-03-11 0.0 0.0 0.0 0.0
</code></pre>
<p>Finally, you can <code>merge</code> <code>dummies</code> and your original dataframe after dropping the 'id' column in <code>dummies</code>:</p>
<pre><code>In [12]: dummies.drop('id', inplace = True,axis=1)
In [13]: dummies
Out[13]:
event_day event_type_type1 event_type_type2 event_type_type3
0 2016-01-01 0.0 0.0 0.0
1 2016-01-02 1.0 0.0 0.0
2 2016-02-01 0.0 0.0 0.0
3 2016-02-15 0.0 1.0 0.0
4 2016-01-06 0.0 0.0 0.0
5 2016-03-11 0.0 0.0 0.0
In [14]: pd.merge(df, dummies, on="event_day")
Out[14]:
event_day event_type id event_type_type1 event_type_type2 \
0 2016-01-01 type1 1 0.0 0.0
1 2016-01-02 type1 1 1.0 0.0
2 2016-02-01 type2 2 0.0 0.0
3 2016-02-15 type3 2 0.0 1.0
4 2016-01-06 type3 3 0.0 0.0
5 2016-03-11 type3 3 0.0 0.0
event_type_type3
0 0.0
1 0.0
2 0.0
3 0.0
4 0.0
5 0.0
</code></pre>
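<p>As a sanity check on either answer, the "events for the same id in the 30 days before this row" count can be spelled out in plain Python on the question's toy data (slow, but easy to verify by eye):</p>

```python
from datetime import date, timedelta

# The question's rows as (id, event_day, event_type), sorted by id and day.
rows = [(1, date(2016, 1, 1), 'type1'),
        (1, date(2016, 1, 2), 'type1'),
        (2, date(2016, 2, 1), 'type2'),
        (2, date(2016, 2, 15), 'type3')]

window = timedelta(days=30)
counts = []
for i, (rid, rday, _) in enumerate(rows):
    c = {'type1': 0, 'type2': 0, 'type3': 0}
    for oid, oday, otype in rows[:i]:  # only strictly earlier rows
        if oid == rid and rday - window <= oday < rday:
            c[otype] += 1
    counts.append((c['type1'], c['type2'], c['type3']))
print(counts)
```

<p>This reproduces the question's desired rows: (0,0,0), (1,0,0), (0,0,0), (0,1,0).</p>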
| 2 | 2016-08-02T20:40:34Z | [
"python",
"pandas"
] |
How to use `annot` method of `sns.heatmap` to give custom labels? (Python | Seaborn) | 38,726,886 | <p><strong>How can I use the <code>annot</code> method of <code>sns.heatmap</code> to give it a custom naming scheme?</strong> </p>
<p>Essentially, I want to drop all the labels that are lower than my threshold (0 in this case). I tried doing what @ojy said in <a href="http://stackoverflow.com/questions/33158075/custom-annotation-seaborn-heatmap">Custom Annotation Seaborn Heatmap</a> but I'm getting the following error. I saw an example where somebody iterated through every cell, is that the only way to do it? </p>
<pre><code>Seaborn documentation:
annot : bool or rectangular dataset, optional
If True, write the data value in each cell. If an array-like with the same shape as data, then use this to annotate the heatmap instead of the raw data.
</code></pre>
<p>So I tried the following:</p>
<pre><code># Load Datasets
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_iris
iris = load_iris()
DF_X = pd.DataFrame(iris.data, index = ["%d_%d"%(i,c) for i,c in zip(range(iris.data.shape[0]), iris.target)], columns=iris.feature_names)
# Correlation
DF_corr = DF_X.corr()
# Figure
fig, ax= plt.subplots(ncols=2, figsize=(16,6))
sns.heatmap(DF_corr, annot=True, ax=ax[0])
# Masked Figure
threshold = 0
DF_mask = DF_corr.copy()
DF_mask[DF_mask < threshold] = 0
sns.heatmap(DF_mask, annot=True, ax=ax[1])
# Annotating
Ar_annotation = DF_mask.as_matrix()
Ar_annotation[Ar_annotation == 0] = None
Ar_annotation
# array([[ 1. , nan, 0.87175416, 0.81795363],
# [ nan, 1. , nan, nan],
# [ 0.87175416, nan, 1. , 0.9627571 ],
# [ 0.81795363, nan, 0.9627571 , 1. ]])
print(DF_mask.shape, Ar_annotation.shape)
# (4, 4) (4, 4)
sns.heatmap(DF_mask, annot=Ar_annotation, fmt="")
# ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p><a href="http://i.stack.imgur.com/Apqgb.png" rel="nofollow"><img src="http://i.stack.imgur.com/Apqgb.png" alt="enter image description here"></a></p>
<p>Before mask (left), after mask (right)</p>
| 0 | 2016-08-02T17:24:57Z | 38,778,453 | <p>That's easy, boi! </p>
<p>Update to 0.7.1 and restart Jupyter kernel.</p>
<p><a href="https://github.com/mwaskom/seaborn/issues/981" rel="nofollow">https://github.com/mwaskom/seaborn/issues/981</a></p>
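<p>On 0.7.1+ you can then pass an array of strings as <code>annot</code> together with <code>fmt=""</code> (as in the question's last call). Building such a label array, with an empty string for every cell below the threshold, is just a nested comprehension (the correlation values below are made up):</p>

```python
# Hypothetical correlation matrix standing in for DF_corr.
corr = [[1.0, -0.11, 0.87],
        [-0.11, 1.0, -0.42],
        [0.87, -0.42, 1.0]]

threshold = 0
labels = [['' if v < threshold else '%.2f' % v for v in row] for row in corr]
print(labels)
# then, e.g.: sns.heatmap(corr, annot=np.array(labels), fmt="")
```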
| 1 | 2016-08-04T22:35:27Z | [
"python",
"matplotlib",
"heatmap",
"seaborn",
"data-science"
] |
uint32 vs uint64: What bases do I need for the 'int()' function to work properly | 38,726,956 | <p>If I have two hex-strings and want to convert one to a 32-bit unsigned integer and the other to a 64-bit unsigned integer, what bases would I provide the <code>int()</code> function?</p>
| 2 | 2016-08-02T17:29:03Z | 38,727,097 | <p>Well, python usually decides how much memory to allocate itself. See the following example:</p>
<pre><code>>>> type(int('0x7fffffff', 16))
<type 'int'>
>>> type(int('0x80000000', 16))
<type 'long'>
</code></pre>
<p>Based on the size of the number, Python allocates the right amount of memory. </p>
<p><strong>BUT</strong> if you use the method <code>long()</code> instead of <code>int()</code>, <strong>always</strong> 8 bytes will be allocated, no matter what the number is:</p>
<pre><code>>>> type(long('0x7fffffff', 16))
<type 'long'>
>>> type(long('0x80000000', 16))
<type 'long'>
</code></pre>
<p>*Tested for Python 2.7 (not tested with 3.x)</p>
| 1 | 2016-08-02T17:39:48Z | [
"python",
"int",
"hex"
] |
uint32 vs uint64: What bases do I need for the 'int()' function to work properly | 38,726,956 | <p>If I have two hex-strings and want to convert one to a 32-bit unsigned integer and the other to a 64-bit unsigned integer, what bases would I provide the <code>int()</code> function?</p>
| 2 | 2016-08-02T17:29:03Z | 38,857,013 | <p>So I goofed up and int() does not determine the size or sign of your hex string. </p>
<p>By definition, hex is base 16. So you would pass your string in with base 16:</p>
<pre><code>int('A1B31231', 16)
</code></pre>
<p>The issue between 32 bit and 64 bit was simply the size of the string put in as an argument. </p>
<p>By virtue of their size,</p>
<blockquote>
<p>2 hex characters = 1 byte</p>
</blockquote>
<p>So if I had a 64 bit int, it would be 8 bytes or a 16 character hex string.
If I had a 32 bit int, it would be 4 bytes or 8 character hex string.</p>
<p><a href="http://stackoverflow.com/a/20766900/2592424">Based off Duncan's answer</a>: in order to make your result unsigned, you would need to take your result and <code>&</code> it with the proper mask.</p>
<p>If you're looking to go from hex to a uint32, you would do the aforementioned int() conversion and then</p>
<pre><code>result & 0xffffffff
</code></pre>
<p>If you wanted to go from hex to uint64 you would do the aforementioned int() conversion and then</p>
<pre><code>result & 0xffffffffffffffff
</code></pre>
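<p>The masking step is easy to check in the interpreter; note that <code>&</code> also maps negative Python ints onto their unsigned two's-complement view:</p>

```python
u32 = int('ffffffff', 16) & 0xFFFFFFFF  # 8 hex chars -> largest 32-bit value
u64 = int('ffffffffffffffff', 16) & 0xFFFFFFFFFFFFFFFF
neg = -1 & 0xFFFFFFFF                   # -1 seen as an unsigned 32-bit value
print(u32, u64, neg)
```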
| -1 | 2016-08-09T17:26:05Z | [
"python",
"int",
"hex"
] |
(Long) Removing Single Quotes From Strings in a List | 38,726,986 | <p>This is all a bit vague because the program is rather in-depth, but stick with me as I will try to explain it as best I can. I wrote a program that takes a <code>.csv</code> file and turns it into <code>INSERT INTO</code> statements for a MySQL database. For example:</p>
<pre><code>ID Number Letter Decimal Random
0 1 a 1.8 A9B34
1 4 b 2.4 C8J91
2 7 c 3.7 L9O77
</code></pre>
<p>would result in an insert statement like:</p>
<p><code>INSERT INTO table_name ('ID' int, 'Number' int, 'Letter' varchar(), 'Decimal' float(), 'Random' varchar()) VALUES ('0', '1', 'a', '1.8', 'A9B34');</code></p>
<p>However not all of the <code>.csv</code> files have the same column headers yet they need to be inserted into the same table. For files that do not have certain column headers I would like to insert a <code>NULL</code> value to show this. For example:</p>
<p>Let's say the first <code>.csv</code> file, <strong>A</strong>, has the information:</p>
<pre><code>ID Number Decimal Random
0 1 1.8 A9B34
1 4 2.4 C8J91
</code></pre>
<p>The second <code>.csv</code> file, <strong>B</strong>, has different column headers:</p>
<pre><code>ID Number Letter Decimal
0 3 x 5.6
1 8 y 4.8
</code></pre>
<p>After being converted to an <code>INSERT</code> statement and put in the database it would ideally look like this:</p>
<pre><code>ID TableID Number Decimal Letter Random
0 A 1 1.8 NULL A9B34
1 A 4 2.4 NULL C8J91
2 B 3 5.6 x NULL
3 B 8 4.8 y NULL
</code></pre>
<p><strong>Now this is where I will probably start to lose you.</strong></p>
<p>In order to accomplish what I needed, I first take each file and create a master list of <em>all</em> the column headers from the <code>.csv</code> files:</p>
<pre><code>import csv
import os
from collections import OrderedDict

def createMaster(path):
    global master
    master = []
    for file in os.listdir(path):
        if file.endswith('.csv'):
            with open(path + file) as inFile:
                csvFile = csv.reader(inFile)
                col = next(csvFile) # gets the first line of the file, aka the column headers
                master.extend(col) # adds the column headers from each file to the master list
    masterTemp = OrderedDict.fromkeys(master) # gets rid of duplicates while maintaining order
    masterFinal = list(masterTemp.keys()) # turns from OrderedDict to list
    return masterFinal
</code></pre>
<p>Which would take all the column headers from multiple <code>.csv</code> files and assemble them into a master list in order without duplicates:</p>
<p><code>['ID', 'Number', 'Decimal', 'Letter', 'Random']</code></p>
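<p>The dedup step on its own: <code>OrderedDict.fromkeys</code> keeps the first occurrence of each key in order (the headers below are made up to mimic two files):</p>

```python
from collections import OrderedDict

# Headers from two hypothetical files, concatenated in order of appearance.
master = ['ID', 'Number', 'Decimal', 'Random',   # file A
          'ID', 'Number', 'Letter', 'Decimal']   # file B

masterFinal = list(OrderedDict.fromkeys(master))  # first occurrence wins
print(masterFinal)
```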
<p>This provides me with the first part of the <code>INSERT</code> statement. Now I need to add the <code>VALUES</code> part to the statement, so I take and make a list of all the values in each row of each <code>.csv</code> file one at a time. For each row a temporary list is created, and then the list of column headers for that file is compared to the master list of column headers for all files. It then goes through each thing in the master list and tries to get the index of that same item in the column list. If it finds the item in the column list it inserts the item from the row list at that same index into the temporary list. If it can't find the item it inserts <code>'NULL'</code> into the temporary list instead. Once it has finished the temporary list it then turns the list into a string in the proper MySQL syntax and appends it to a <code>.sql</code> file for insertion. Here is the same idea in code:</p>
<pre><code>def createInsert(inPath, outPath):
    for file in os.listdir(inPath):
        if file.endswith('.csv'):
            with open(inPath + file) as inFile:
                with open(outPath + 'table_name' + '.sql', 'a') as outFile:
                    csvFile = csv.reader(inFile)
                    col = next(csvFile) # gets the first row of column headers
                    for row in csvFile:
                        tempMaster = [] # creates a tempMaster list
                        insert = 'INSERT INTO ' + 'table_name' + ' (' + ','.join(master) + ') VALUES ' # SQL syntax crap
                        for x in master:
                            try:
                                i = col.index(x) # looks for the value in the column list
                                r = row[i] # gets the row value at the same index as the found column
                                tempMaster.append(r) # appends the row value to a temporary list
                            except ValueError:
                                tempMaster.append('NULL') # if the value is not found in the column list it just appends the string to the row master list
                        values = map((lambda x: "'" + x.strip() + "'"), tempMaster) # quotes every value in tempMaster
                        printOut = insert + '(' + ','.join(values) + ');'
                        outFile.write(printOut + '\n') # writes the insert statement to the file
</code></pre>
<p><strong>Finally now time for the question.</strong></p>
<p>The problem with this program is that <code>createInsert()</code> takes all the row values from the tempMaster list and joins them with <code>'</code> marks via the line:</p>
<pre><code>values = map((lambda x: "'" + x.strip() + "'"), tempMaster)
</code></pre>
<p>This is all fine and dandy <em>except</em> that MySQL wants <code>NULL</code> values to be inserted as just <code>NULL</code> instead of the string <code>'NULL'</code>.</p>
<p><strong>How can I take the assembled row list and search for <code>'NULL'</code> strings and change them into just <code>NULL</code>?</strong></p>
<p><em>I have two different ideas:</em></p>
<p>I could do something along these lines: pull the <code>NULL</code> string out of the <code>'</code> marks and replace it in the list.</p>
<pre><code>def findBetween(s, first, last):
    try:
        start = s.index(first) + len(first)
        end = s.index(last, start)
        return s[start:end]
    except ValueError:
        print('ERROR: findBetween function failure.')

def removeNull(aList):
    tempList = []
    for x in aList:
        if x == "'NULL'": # the quoted string produced by the map() call
            norm = findBetween(x, "'", "'")
            tempList.append(norm)
        else:
            tempList.append(x)
    return tempList
</code></pre>
<p>Or I could maybe add the <code>NULL</code> values into the list without <code>'</code> to begin with. <em>This is within the <code>createInsert()</code> function.</em></p>
<pre><code>quoted = []
for x in tempMaster:
    if x == 'NULL':
        quoted.append(x) # NULL goes in without quote marks
    else:
        quoted.append("'" + x.strip() + "'")
printOut = insert + ' (' + ','.join(quoted) + ');'
outFile.write(printOut + '\n')
</code></pre>
<p>However I think neither of these is viable because they would slow the program down significantly (with the larger files these raise a <code>MemoryError</code>). Therefore I am asking your opinion. I apologize if this was confusing or hard to follow. Please let me know what I could fix to make it easier to understand if this is the case, and congratulations for making it to the end!</p>
| 2 | 2016-08-02T17:31:29Z | 38,727,076 | <p>instead of </p>
<pre><code>values = map((lambda x: "'" + x.strip() + "'"), tempMaster)
</code></pre>
<p>put this</p>
<pre><code> values = map((lambda x: "'" + x.strip() + "'" if x!='NULL' else x), tempMaster)
</code></pre>
<p><H3>Edit</H3>
Thanks for accepting/upvoting my simple trick but I'm not sure this is optimal.
On a more global scope, you could have avoided this map/lambda stuff (unless I'm missing something).</p>
<pre><code>for row in csvFile:
    value = [] # creates the final list
    insert = 'INSERT INTO ' + 'table_name' + ' (' + ','.join(master) + ') VALUES ' # SQL syntax crap
    for x in master:
        try:
            i = col.index(x) # looks for the value in the column list
            r = row[i] # gets the row value at the same index as the found column
            value.append("'" + r.strip() + "'") # appends the quoted row value to the final list
        except ValueError:
            value.append('NULL') # if the value is not found in the column list it just appends NULL to the final list
</code></pre>
<p>Then you have <code>value</code> properly populated, saves memory & CPU.</p>
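<p>As a quick sanity check, here is that conditional quoting applied to one sample row (the column names are invented for the example):</p>

```python
tempMaster = ['0', '1', 'NULL', '1.8', 'A9B34']
# quote everything except the literal NULL placeholder
values = map(lambda x: "'" + x.strip() + "'" if x != 'NULL' else x, tempMaster)
print('INSERT INTO table_name (ID,Number,Letter,Decimal,Random) VALUES (' + ','.join(values) + ');')
# INSERT INTO table_name (ID,Number,Letter,Decimal,Random) VALUES ('0','1',NULL,'1.8','A9B34');
```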
| 2 | 2016-08-02T17:38:00Z | [
"python",
"mysql",
"python-3.x",
"csv"
] |
(Long) Removing Single Quotes From Strings in a List | 38,726,986 | <p>This is all a bit vague because the program is rather in-depth, but stick with me as I will try to explain it as best I can. I wrote a program that takes a <code>.csv</code> file and turns it into <code>INSERT INTO</code> statements for a MySQL database. For example:</p>
<pre><code>ID Number Letter Decimal Random
0 1 a 1.8 A9B34
1 4 b 2.4 C8J91
2 7 c 3.7 L9O77
</code></pre>
<p>would result in an insert statement like:</p>
<p><code>INSERT INTO table_name ('ID' int, 'Number' int, 'Letter' varchar(). 'Decimal', float(), 'Random' varchar()) VALUES ('0', '1', 'a', '1.8', 'A9B34');</code></p>
<p>However not all of the <code>.csv</code> files have the same column headers yet they need to be inserted into the same table. For files that do not have certain column headers I would like to insert a <code>NULL</code> value to show this. For example:</p>
<p>Lets say the first <code>.csv</code> file, <strong>A</strong>, has the information:</p>
<pre><code>ID Number Decimal Random
0 1 1.8 A9B34
1 4 2.4 C8J91
</code></pre>
<p>The second <code>.csv</code> file, <strong>B</strong>, has different column headers:</p>
<pre><code>ID Number Letter Decimal
0 3 x 5.6
1 8 y 4.8
</code></pre>
<p>After being converted to an <code>INSERT</code> statement and put in the database it would ideally look like this:</p>
<pre><code>ID TableID Number Decimal Letter Random
0 A 1 1.8 NULL A9B34
1 A 4 2.4 NULL C8J91
2 B 3 5.6 x NULL
3 B 8 4.8 y NULL
</code></pre>
<p><strong>Now this is where I will probably start to lose you.</strong></p>
<p>In order to accomplish what I needed I first take each file and create a master list of the <em>all</em> the column headers that the <code>.csv</code> files:</p>
<pre><code>def createMaster(path):
global master
master = []
for file in os.listdir(path):
if file.endswith('.csv'):
with open(path + file) as inFile:
csvFile = csv.reader(inFile)
col = next(csvFile) # gets the first line of the file, aka the column headers
master.extend(col) # adds the column headers from each file to the master list
masterTemp = OrderedDict.fromkeys(master) # gets rid of duplicates while maintaining order
masterFinal = list(masterTemp.keys()) # turns from OrderedDict to list
return masterFinal
</code></pre>
<p>Which would take all the column headers from multiple <code>.csv</code> files and assemble them into a master list in order without duplicates:</p>
<p><code>['ID', 'Number', 'Decimal', 'Letter', 'Random']</code></p>
<p>This provides me with the first part of the <code>INSERT</code> statement. Now I need to add the <code>VALUES</code> part to the statement, so I take and make a list of all the values in each row of each <code>.csv</code> file one at a time. For each row a temporary list is created, and then the list of column headers for that file is compared to the master list of column headers for all files. It then goes through each thing in the master list and tries to get the index of that same item in the column list. If it finds the item in the column list it inserts the item from the row list at that same index into the temporary list. If it can't find the item it inserts <code>'NULL'</code> into the temporary list instead. Once it has finished the temporary list it then turns the list into a string in the proper MySQL syntax and appends it to a <code>.sql</code> file for insertion. Here is the same idea in code:</p>
<pre><code>def createInsert(inPath, outPath):
for file in os.listdir(inpath):
if file.endswith('.csv'):
with open(inPath + file) as inFile:
with open(outPath + 'table_name' + '.sql', 'a') as outFile:
csvFile = csv.reader(inFile)
col = next(csvFile) # gets the first row of column headers
for row in csvFile:
tempMaster = [] # creates a tempMaster list
insert = 'INSERT INTO ' + 'table_name' + ' (' + ','.join(master)+ ') VALUES ' # SQL syntax crap
for x in master:
try:
i = col.index(x) # looks for the value in the column list
r = row[i] # gets the row value at the same index as the found column
tempMaster.append(r) # appends the row value to a temporary list
except ValueError:
tempMaster.append('NULL') # if the value is not found in the column list it just appends the string to the row master list
values = map((lambda x: "'" + x.strip() + "'"), tempMaster) # converts tempMaster from a list to a string
printOut = insert + ' (' + ','.join(values) + ');'
outFile.write(printOut + '\n') # writes the insert statement to the file
</code></pre>
<p><strong>Finally now time for the question.</strong></p>
<p>The problem with this program is that <code>createInsert()</code> takes all the row values from the tempMaster list and joins them with <code>'</code> marks via the line:</p>
<pre><code>values = map((lambda x: "'" + x.strip() + "'"), tempMaster)
</code></pre>
<p>This is all fine and dandy <em>except</em> that MySQL wants <code>NULL</code> values to be inserted and just <code>NULL</code> instead of <code>'NULL'</code>.</p>
<p><strong>How can I take the assembled row list and search for <code>'NULL'</code> strings and change them into just <code>NULL</code>?</strong></p>
<p><em>I have two different ideas:</em></p>
<p>I could do something along these lines pull the <code>NULL</code> string from the <code>'</code> marks and replace it in the list.</p>
<pre><code>def findBetween(s, first, last):
try:
start = s.index(first) + len(first)
end = s.index(last, start)
return s[start:end]
except ValueError:
print('ERROR: findBetween function failure.')
def removeNull(aList):
tempList = []
for x in aList:
if x == 'NULL':
norm = findBetween(x, "'", "'")
tempList.append(norm)
else:
tempList.append(x)
return tempList
</code></pre>
<p>Or I could maybe add the <code>NULL</code> values into the list without <code>'</code> to begin with. <em>This is within the <code>createInsert()</code> function.</em></p>
<pre><code>for x in tempMaster:
if x == 'NULL':
value = x
tempMaster.append(value)
else:
value = "'" + x + "'"
tempMaster.append(value)
values = map((lambda x: x.strip()), tempMaster)
printOut = insert + ' (' + ','.join(values) + ');'
outFile.write(printOut + '\n')
</code></pre>
<p>However I think neither of these are viable because they would slow the program down significantly (with the larger files these raise a <code>MemoryError</code>). Therefore I am asking your opinion. I apologize if this was confusing or hard to follow. Please let me know what I could fix to make it easier to understand if this is the case and congratulations for making it to the end!</p>
| 2 | 2016-08-02T17:31:29Z | 38,727,505 | <p>I checked your requirement i found that you have multiple CSV in your directory. These csv have dynamic column. My approach is create a static list of all columns</p>
<p><code>staticColumnList = ["ID","TableID","Number","Decimal","Letter","Random"]</code> </p>
<p>Now when reading your file take header row and make a temp list for tuples for corresponding columns like</p>
<p><code>[(ID, column no in csv), (TableID, 'A' - File Name), (Number, column no in csv) etc...]</code> </p>
<p>If you don't have column in csv then put x in correspondence like <code>("Letter", x)</code>. Now with each row make a loop and assign or pick values like this:-</p>
<pre><code>wholeDataList = []
for rowCSV in csvFile:  # loop over the data rows of the csv
    rowList = []
    for column in columnTuples:  # the temp list of (name, source) tuples built above
        if isinstance(column[1], int):  # real column: source is its index in this csv
            rowList.append("'" + str(rowCSV[column[1]]) + "'")
        elif column[1] == 'x':  # column missing from this csv
            rowList.append('null')
        else:  # a constant value, e.g. the file name for TableID
            rowList.append("'" + column[1] + "'")
    wholeDataList.append("(" + ",".join(rowList) + ")")
</code></pre>
<p>At last you have well prepared statements, like this:-</p>
<pre><code>qry = "INSERT into .. ("+",".join(staticColumnList)+") values " + ",".join(wholeDataList)
</code></pre>
| 0 | 2016-08-02T18:01:07Z | [
"python",
"mysql",
"python-3.x",
"csv"
] |
How to turn off matplotlib inline function and install pygtk? | 38,727,035 | <p>I got two questions when I was plotting graph in ipython.</p>
<ol>
<li><p>once, i implement <code>%matplotlib inline</code>, I don't know how to switch back to use floating windows. </p></li>
<li><p>when I search for the method to switch back, people told me to implement
<code>%matplotlib</code> osx or <code>%matplotlib</code>, however, I finally get an error, which is </p>
<blockquote>
<p>Gtk* backend requires pygtk to be installed.</p>
</blockquote></li>
</ol>
<p>Can anyone help me, giving me some idea?</p>
<p>p.s. I am using windows 10 and python 2.7</p>
| 0 | 2016-08-02T17:35:34Z | 38,729,045 | <p>You need to install pyGTK. How to do so depends on what you're using to run Python. You could also not use '%matplotlib inline' and then it'll default to whatever is installed on your system. </p>
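<p>For example (assuming the Tk backend is available, which it normally is with the standard Windows Python installer), you can switch IPython back to floating plot windows without installing pyGTK at all by naming a backend explicitly:</p>

```
%matplotlib tk
```

<p>A plain <code>%matplotlib</code> falls back to whatever default backend your matplotlib is configured with, which is why it went hunting for GTK here; naming one (<code>tk</code>, <code>qt</code>, etc.) sidesteps that.</p>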
| 0 | 2016-08-02T19:34:16Z | [
"python",
"matplotlib",
"ipython"
] |
Read a JSON file containing unicode data | 38,727,049 | <p>Here is the content of my JSON file</p>
<pre><code>cat ./myfile.json
{u'Records': [{u'eventVersion': u'2.0', }]}
</code></pre>
<p>How do I read this JSON file?</p>
<p>I tried reading the file with the following code,</p>
<pre><code>def Read_json_file(jsonFile):
jsonDy = {}
if os.path.exists(jsonFile):
with open(jsonFile, 'rt') as fin:
jsonDy = json.load(fin)
else:
print("JSON file not available ->",
jsonFile)
sys.exit(1)
print("jsonDy -> ", jsonDy)
</code></pre>
<p>But getting the following error,</p>
<pre><code>Traceback (most recent call last):
File "a.py", line 125, in <module>
Main()
File "a.py", line 18, in Main
content = Read_json_file(eventFile)
File "a.py", line 44, in Read_json_file
jsonDy = json.load(fin)
File "/usr/lib64/python2.7/json/__init__.py", line 290, in load
**kw)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 381, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 1 column 2 (char 1)
</code></pre>
<p>What I understand is that <code>u'</code> here is the <code>unicode</code> string prefix, but I am not sure how to read this file.</p>
<p><code>PS</code> : I am using Python 2.7</p>
| 0 | 2016-08-02T17:36:36Z | 38,727,156 | <p>Try this,</p>
<pre><code>import simplejson as json
w = json.dumps({u'Records': [{u'eventVersion': u'2.0', }]})
print json.loads(w)
</code></pre>
<p>or use:</p>
<pre><code>import json
w = json.dumps({u'Records': [{u'eventVersion': u'2.0', }]})
print json.loads(w)
</code></pre>
<p>I have dumped to json to recreate the issue. You can just use <code>json.loads</code></p>
| 1 | 2016-08-02T17:43:11Z | [
"python",
"json",
"python-2.7",
"unicode",
"python-unicode"
] |
Read a JSON file containing unicode data | 38,727,049 | <p>Here is the content of my JSON file</p>
<pre><code>cat ./myfile.json
{u'Records': [{u'eventVersion': u'2.0', }]}
</code></pre>
<p>How do I read this JSON file?</p>
<p>I tried reading the file with the following code,</p>
<pre><code>def Read_json_file(jsonFile):
jsonDy = {}
if os.path.exists(jsonFile):
with open(jsonFile, 'rt') as fin:
jsonDy = json.load(fin)
else:
print("JSON file not available ->",
jsonFile)
sys.exit(1)
print("jsonDy -> ", jsonDy)
</code></pre>
<p>But getting the following error,</p>
<pre><code>Traceback (most recent call last):
File "a.py", line 125, in <module>
Main()
File "a.py", line 18, in Main
content = Read_json_file(eventFile)
File "a.py", line 44, in Read_json_file
jsonDy = json.load(fin)
File "/usr/lib64/python2.7/json/__init__.py", line 290, in load
**kw)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 381, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 1 column 2 (char 1)
</code></pre>
<p>What I understand is that <code>u'</code> here is the <code>unicode</code> string prefix, but I am not sure how to read this file.</p>
<p><code>PS</code> : I am using Python 2.7</p>
| 0 | 2016-08-02T17:36:36Z | 38,727,199 | <p>That's not a valid JSON structure. It's a string representation of Python data structure. The appropriate JSON structure would be:</p>
<pre><code>{"Records": [{"eventVersion": "2.0"}]}
</code></pre>
<p>It looks like something is writing the JSON with the output of <code>json.loads</code> instead of <code>json.dumps</code>.</p>
| 3 | 2016-08-02T17:45:17Z | [
"python",
"json",
"python-2.7",
"unicode",
"python-unicode"
] |
Core Reporting API - How to use multiple dimensionFilterClauses filters? | 38,727,095 | <p>I'm trying to use multiple dimensionFilterClauses into a Core Reporting API V4 query. If I use just a filter on the <code>ga:adwordsCustomerID</code> dimension everything goes fine, but when I add a second filter on the <code>ga:adTargetingType</code> dimension it throws a "Status 400: Bad request" error.</p>
<p>This is the query I wrote:</p>
<pre><code> return analytics.reports().batchGet(
body={"reportRequests": [{
"pageSize": 10000,
"viewId": VIEW_ID,
"dateRanges": [
{"startDate": "31daysAgo", "endDate": "yesterday"}
],
"dimensions": [
{"name": "ga:adwordsCampaignID"},
{"name": "ga:adwordsAdGroupID"},
{"name": "ga:adwordsCriteriaID"}
],
"metrics": [
{"expression": "ga:adClicks"},
{"expression": "ga:adCost"},
{"expression": "ga:uniquePurchases"},
{"expression": "ga:itemRevenue"},
{"expression": "ga:CPC"},
{"expression": "ga:ROAS"}
],
"dimensionFilterClauses": [
{"filters": [
{"dimensionName": "ga:adwordsCustomerID",
"operator": "EXACT",
"expressions": ["2096809090"]},
{"dimensionName": "ga:adTargetingType",
"operator": "EXACT",
"expressions": ["Keyword"]}
]}
],
"metricFilterClauses": [
{"filters": [
{"metricName": "ga:adCost",
"operator": "GREATER_THAN",
"comparisonValue": "0"}
]}
],
"orderBys": [
{"fieldName": "ga:adClicks",
"sortOrder": "DESCENDING"}
]}
]}
).execute()
</code></pre>
<p>Do you know what's wrong with the above query body?</p>
| 0 | 2016-08-02T17:39:46Z | 38,755,771 | <h2>Analytics Reporting API V4 <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/basics#filtering_1" rel="nofollow">Filtering</a></h2>
<p>The <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet#ReportRequest" rel="nofollow">ReportRequest</a> takes an array of <code>DimensionFilterClauses</code>. These clauses are combined with the logical <strong><code>AND</code></strong> operator. I.e., if you had two <code>DimensionFilterClause</code> objects <strong>A</strong> and <strong>B</strong>, the API will only return values that meet the conditions of both A <code>AND</code> B.</p>
<p>Each <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet#DimensionFilterClause" rel="nofollow"><code>DimensionFilterClause</code></a> takes an array of <code>DimensionFilters</code> (called <code>filters</code>). These filters are combined with the logical <strong><code>OR</code></strong> operator. I.e., if you had two <code>DimensionFilter</code> objects <strong>C</strong> and <strong>D</strong> within a <code>DimensionFilterClause</code>, the API would return results that satisfy either C or D.</p>
<h2>Example</h2>
<p>Below is an example request with two <code>DimensionFilterClauses</code>; <code>ga:adWordsCampaignID==8675309</code> <code>AND</code> <code>ga:adwordsAdGroupID==12345</code>
<a href="https://developers.google.com/apis-explorer/#p/analyticsreporting/v4/analyticsreporting.reports.batchGet?_h=8&resource=%257B%250A++%2522reportRequests%2522%253A+%250A++%255B%250A++++%257B%250A++++++%2522viewId%2522%253A+%25221174%2522%252C%250A++++++%2522dimensions%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522name%2522%253A+%2522ga%253AadwordsCampaignID%2522%250A++++++++%257D%252C%250A++++++++%257B%250A++++++++++%2522name%2522%253A+%2522ga%253AadwordsAdGroupID%2522%250A++++++++%257D%252C%250A++++++++%257B%250A++++++++++%2522name%2522%253A+%2522ga%253AadwordsCriteriaID%2522%250A++++++++%257D%250A++++++%255D%252C%250A++++++%2522metrics%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522expression%2522%253A+%2522ga%253AadClicks%2522%250A++++++++%257D%252C%250A++++++++%257B%250A++++++++++%2522expression%2522%253A+%2522ga%253AadCost%2522%250A++++++++%257D%250A++++++%255D%252C%250A++++++%2522metricFilterClauses%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522filters%2522%253A+%250A++++++++++%255B%250A++++++++++++%257B%250A++++++++++++++%2522metricName%2522%253A+%2522ga%253AadCost%2522%252C%250A++++++++++++++%2522operator%2522%253A+%2522GREATER_THAN%2522%252C%250A++++++++++++++%2522comparisonValue%2522%253A+%25220%2522%250A++++++++++++%257D%250A++++++++++%255D%250A++++++++%257D%250A++++++%255D%252C%250A++++++%2522dimensionFilterClauses%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522filters%2522%253A+%250A++++++++++%255B%250A++++++++++++%257B%250A++++++++++++++%2522dimensionName%2522%253A+%2522ga%253AadwordsCampaignID%2522%252C%250A++++++++++++++%2522operator%2522%253A+%2522EXACT%2522%252C%250A++++++++++++++%2522expressions%2522%253A+%250A++++++++++++++%255B%25228675309%2522%250A++++++++++++++%255D%250A++++++++++++%257D%250A++++++++++%255D%250A++++++++%257D%252C%250A++++++++%257B%250A++++++++++%2522filters%2522%253A+%250A++++++++++%255B%250A++++++++++++%257B%250A++++++++++++++%2522dimensionName%2522%2
53A+%2522ga%253AadwordsAdGroupID%2522%252C%250A++++++++++++++%2522expressions%2522%253A+%250A++++++++++++++%255B%252212345%2522%250A++++++++++++++%255D%252C%250A++++++++++++++%2522operator%2522%253A+%2522EXACT%2522%250A++++++++++++%257D%250A++++++++++%255D%250A++++++++%257D%250A++++++%255D%250A++++%257D%250A++%255D%250A%257D&" rel="nofollow">API Explorer example</a>:</p>
<pre><code>{
"reportRequests":
[
{
"viewId": "XXXX",
"dimensions":
[
{"name": "ga:adwordsCampaignID"},
{"name": "ga:adwordsAdGroupID"},
{"name": "ga:adwordsCriteriaID"}
],
"metrics":
[
{"expression": "ga:adClicks"},
{"expression": "ga:adCost"}
],
"metricFilterClauses":
[
{
"filters":
[
{
"metricName": "ga:adCost",
"operator": "GREATER_THAN",
"comparisonValue": "0"
}
]
}
],
"dimensionFilterClauses":
[
{
"filters":
[
{
"dimensionName": "ga:adwordsCampaignID",
"operator": "EXACT",
"expressions": ["8675309"]
}
]
},
{
"filters":
[
{
"dimensionName": "ga:adwordsAdGroupID",
"operator": "EXACT",
"expressions":
["12345"],
}
]
}
]
}
]
}
</code></pre>
<h2>Conclusion</h2>
<p>From the outset there does not appear to be anything wrong with your API request body; if there were, you would get an error message. But it's more likely that you didn't intend to request:</p>
<pre><code> "dimensionFilterClauses": [
{"filters": [
{"dimensionName": "ga:adwordsCustomerID",
"operator": "EXACT",
"expressions": ["2096809090"]},
{"dimensionName": "ga:adTargetingType",
"operator": "EXACT",
"expressions": ["Keyword"]}
]}
],
</code></pre>
<p>But you intended to request for:</p>
<pre><code> "dimensionFilterClauses": [
{"filters": [
{"dimensionName": "ga:adwordsCustomerID",
"operator": "EXACT",
"expressions": ["2096809090"]},
]},{"filters": [
{"dimensionName": "ga:adTargetingType",
"operator": "EXACT",
"expressions": ["Keyword"]}
]}
],
</code></pre>
<p>My recommendation when no data shows up is to first remove the filter, and then verify you have the exact string you are searching for. Also if you do not have the full string you can use a different <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet#Operator" rel="nofollow"><code>filter.operator</code></a> such as <code>PARTIAL</code> or <code>BEGINS_WITH</code>.</p>
<h2>Error handling</h2>
<p>It is also smart to use proper error handling especially when debugging:</p>
<pre><code>try:
response = analyticsreporting.reports().batchGet(
body=requestBody
).execute()
except TypeError, error:
# Handle errors in constructing a query.
print 'There was an error in constructing your query : %s' % error
except HttpError, error:
# Handle API errors.
print ('There was an API error : %s : %s' %
(error.resp.status, error.resp.reason))
</code></pre>
<h2>Updated: <code>ga:adTargetingType==Keyword</code> Example</h2>
<p>From the comment below it was requested to give an example of dimension filter with <code>ga:adTargetingType==Keyword</code>. Use the <a href="https://developers.google.com/apis-explorer/#p/analyticsreporting/v4/analyticsreporting.reports.batchGet?_h=4&resource=%257B%250A++%2522reportRequests%2522%253A+%250A++%255B%250A++++%257B%250A++++++%2522viewId%2522%253A+%2522VIEW_ID%2522%252C%250A++++++%2522metrics%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522expression%2522%253A+%2522ga%253Asessions%2522%250A++++++++%257D%250A++++++%255D%252C%250A++++++%2522dimensions%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522name%2522%253A+%2522ga%253AadTargetingType%2522%250A++++++++%257D%250A++++++%255D%252C%250A++++++%2522dimensionFilterClauses%2522%253A+%250A++++++%255B%250A++++++++%257B%250A++++++++++%2522filters%2522%253A+%250A++++++++++%255B%250A++++++++++++%257B%250A++++++++++++++%2522dimensionName%2522%253A+%2522ga%253AadTargetingType%2522%252C%250A++++++++++++++%2522operator%2522%253A+%2522EXACT%2522%252C%250A++++++++++++++%2522expressions%2522%253A+%250A++++++++++++++%255B%2522Keyword%2522%250A++++++++++++++%255D%250A++++++++++++%257D%250A++++++++++%255D%250A++++++++%257D%250A++++++%255D%250A++++%257D%250A++%255D%250A%257D&" rel="nofollow">API Example here</a> to prove to yourself that it works (just change <code>VIEW_ID</code> to your view view id and hit "Authorize and Execute"). JSON body below:</p>
<pre><code>{
"reportRequests":
[
{
"viewId": "VIEW_ID",
"metrics": [{"expression": "ga:sessions"}],
"dimensions": [{"name": "ga:adTargetingType"}],
"dimensionFilterClauses":
[
{
"filters":
[
{
"dimensionName": "ga:adTargetingType",
"operator": "EXACT",
"expressions": ["Keyword"]
}
]
}
]
}
]
}
</code></pre>
<p>I always like to start small and work up. By removing the other parameters and fields I can prove to myself what is working and what is not. This example is the bare minimum required request that filters for <code>ga:keyword==Keyword</code>.</p>
<h2>Second Update:</h2>
<p>The actual error message you are getting is as follows:</p>
<pre><code>"Selected dimensions and metrics cannot be queried together."
</code></pre>
<p>The dimension <code>ga:adTargetingType</code> cannot be queried with the following metrics:</p>
<ul>
<li><code>ga:impressions</code></li>
<li><code>ga:adClicks</code></li>
<li><code>ga:adCost</code></li>
<li><code>ga:CPM</code></li>
<li><code>ga:CPC</code></li>
<li><code>ga:CTR</code></li>
<li><code>ga:costPerTransaction</code></li>
<li><code>ga:costPerGoalConversion</code></li>
<li><code>ga:costPerConversion</code></li>
<li><code>ga:RPC</code></li>
<li><code>ga:ROAS</code></li>
</ul>
| 0 | 2016-08-04T00:40:22Z | [
"python",
"python-3.x",
"google-analytics",
"google-analytics-api",
"google-analytics-v4"
] |
ImportError: No module named log | 38,727,132 | <p>Isn't log a built in package in python?</p>
<pre><code># /usr/bin/python
Python 2.7.5 (default, Oct 11 2015, 17:47:16)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import log as logging
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named log
>>>
</code></pre>
<p>Do I need to install <code>log</code> with pip ?</p>
| -1 | 2016-08-02T17:42:01Z | 38,727,148 | <p><code>logging</code> is a <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">built-in package</a>, not <code>log</code>:</p>
<pre><code>import logging as log
</code></pre>
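<p>A short sketch of using it under that alias:</p>

```python
import logging as log

log.basicConfig(level=log.INFO)  # configure the root logger once
log.info('the built-in module is called logging, not log')
```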
| 1 | 2016-08-02T17:42:47Z | [
"python"
] |
What is the best way sort 2-D list and transform it to dictionary | 38,727,181 | <p>I have a 2-D <code>[2xn]</code> array with keys in the first column (which repeat) and values in the second. I need to make a dictionary, where keys are unique, and their values are lists collecting every value that appeared for that key. </p>
<p>What is the smartest way to do it? Should I first preprocess (sort keys-values in groups or something) or should I put a <code>set(keys)</code> to dictionary keys and then operate with values there? Or should I put <code>keys</code> and <code>values</code> lists in a dictionary and "squeeze" it somehow? </p>
<p>Input:</p>
<pre><code>[
[Isis, 3],
[Isis, 4],
[Al-Qaeda, 2],
[Isis, 2]
]
</code></pre>
<p>Desired output:</p>
<pre><code>{'Isis':[3,4,2], 'Al-Qaeda':[2]}
</code></pre>
| 0 | 2016-08-02T17:44:42Z | 38,730,213 | <pre><code>output = {}
for i in input:
key = i[0]
value = i[1]
if not key in output:
output[key] = []
output[key].append(value)
</code></pre>
| 1 | 2016-08-02T20:51:19Z | [
"python",
"dictionary"
] |
What is the best way sort 2-D list and transform it to dictionary | 38,727,181 | <p>I have a 2-D <code>[2xn]</code> array with keys in the first column (which repeat) and values in the second. I need to make a dictionary, where keys are unique, and their values are lists collecting every value that appeared for that key. </p>
<p>What is the smartest way to do it? Should I first preprocess (sort keys-values in groups or something) or should I put a <code>set(keys)</code> to dictionary keys and then operate with values there? Or should I put <code>keys</code> and <code>values</code> lists in a dictionary and "squeeze" it somehow? </p>
<p>Input:</p>
<pre><code>[
[Isis, 3],
[Isis, 4],
[Al-Qaeda, 2],
[Isis, 2]
]
</code></pre>
<p>Desired output:</p>
<pre><code>{'Isis':[3,4,2], 'Al-Qaeda':[2]}
</code></pre>
| 0 | 2016-08-02T17:44:42Z | 38,730,322 | <p>Use a <code>defaultdict</code>:</p>
<pre><code>from collections import defaultdict
output = defaultdict(list)
for k,v in input:
output[k].append(v)
</code></pre>
| 3 | 2016-08-02T20:58:45Z | [
"python",
"dictionary"
] |
How would I iterate over a list of files and plot them as subplots on a single figure? | 38,727,364 | <p>I'm trying to plot files onto 8 subplots for 2 figures. I am using a for loop and enumerate operator, along with axarray to do this.
I am almost there with the last step (with axarray) but need guidance as to how to finish it.
Here's my code:</p>
<pre><code>import matplotlib.pyplot as plt
import parse_gctoo
import glob
f, ax1 = plt.subplots()
def histo_plotter(file, plot_title, ax):
# read in file as string
GCT_object = parse_gctoo.parse(file)
# for c in range(9):
# print type(GCT_object.data_df.iloc[0][c])
# computing median of rows in data_df
# gene_medians = GCT_object.data_df.quantile(q=0.5,axis=1)
# plot_title = "Gene expression levels for {}".format(cell)
if plot_title == "ZSPCQNORM":
gene_means = GCT_object.data_df.mean(axis=1)
#making histogram of means
ax.hist(gene_means)
plt.title("MeanGeneExpressionZSPCQNORM")
plt.xlabel("MedianGeneExpression")
plt.ylabel("Count")
elif plot_title == "QNORM":
gene_medians = GCT_object.data_df.median(axis=1)
#making histogram of medians
ax.hist(gene_medians)
plt.title("MedianGeneExpressionQNORM")
plt.xlabel("MedianGeneExpression")
plt.ylabel("Count")
plt.show()
f.savefig("hist_example1.png")
# plt.ylim(-1, 1)
# plt.xlim(-1,1)
# histo_plotter("/Users/eibelman/Desktop/ZSCOREDATA- CXA061_SKL_48H_X1_B29_ZSPCQNORM_n372x978.gct.txt", "ZSPCQNORM", ax1)
# histo_plotter("/Users/eibelman/Desktop/NewLJP005_A375_24H_X2_B19_QNORM_n373x978.gct.txt", "QNORM", ax1)
#########
# Create list of x2 LJP005 cell line files
z_list = glob.glob("/Volumes/cmap_obelix/pod/custom/LJP/roast/LJP005_[A375, A549, BT20, HA1E, HC515, HEPG2, HS578T, HT29]*X2*/zs/*ZSPCQNORM*.gct")
q_list = glob.glob("/Volumes/cmap_obelix/pod/custom/LJP/roast/LJP005_[A375, A549, BT20, HA1E, HC515, HEPG2, HS578T, HT29]*_X2_*/*_QNORM_*.gct")
# for loop which allows plotting multiple files in a single figure
f, axarray = plt.subplots(2, 4)
for n, single_q in enumerate(q_list):
# axarray = plt.subplot(len(q_list), 1, n+1)
axarray = histo_plotter(n, "QNORM", ax1)
# axarray[n].plot()
plt.show()
# f, axarray = plt.subplots(2, 4)
# for n, single_z in enumerate(z_list):
# # ax = plt.subplot(len(z_list), 1, n+1)
# # histo_plotter(single_z, "ZSPCQNORM", ax1)
</code></pre>
| 1 | 2016-08-02T17:54:28Z | 38,727,518 | <p>First, it's suffice to call <code>plt.figure()</code> once at the beginning of the loop.</p>
<p>Second, you need to use <code>subplot</code> correctly. Here is the doc of the <code>subplot</code> function:</p>
<blockquote>
<p>Typical call signature:</p>
<p>subplot(nrows, ncols, plot_number) Where nrows and ncols are used to
notionally split the figure into nrows * ncols sub-axes, and
plot_number is used to identify the particular subplot that this
function is to create within the notional grid. plot_number starts at
1, increments across rows first and has a maximum of nrows * ncols.</p>
<p><strong>EDIT</strong></p>
</blockquote>
<p>If you want a new figure for each file, then on each iteration you should call <code>plt.figure()</code> without arguments.</p>
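<p>A minimal sketch of the single-figure, eight-subplot layout the question is after (the histogram data and titles below are stand-ins for the real <code>histo_plotter</code> calls):</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-GUI backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axarray = plt.subplots(2, 4)      # one figure, eight axes in a 2x4 grid
for n, ax in enumerate(axarray.flat):  # pair each axes object with one file
    ax.hist(range(n + 1))              # stand-in for histo_plotter(q_list[n], "QNORM", ax)
    ax.set_title('file %d' % n)
fig.savefig('hist_grid.png')
```

<p>The key point is that the loop iterates over the axes array and hands each <code>ax</code> to the plotting function, rather than recomputing subplot indices by hand.</p>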
| 1 | 2016-08-02T18:01:34Z | [
"python",
"for-loop",
"matplotlib",
"plot"
] |
How would I iterate over a list of files and plot them as subplots on a single figure? | 38,727,364 | <p>I'm trying to plot files onto 8 subplots for 2 figures. I am using a for loop and enumerate operator, along with axarray to do this.
I am almost there with the last step (with axarray) but need guidance as to how to finish it.
Here's my code:</p>
<pre><code>import matplotlib.pyplot as plt
import parse_gctoo
import glob
f, ax1 = plt.subplots()
def histo_plotter(file, plot_title, ax):
# read in file as string
GCT_object = parse_gctoo.parse(file)
# for c in range(9):
# print type(GCT_object.data_df.iloc[0][c])
# computing median of rows in data_df
# gene_medians = GCT_object.data_df.quantile(q=0.5,axis=1)
# plot_title = "Gene expression levels for {}".format(cell)
if plot_title == "ZSPCQNORM":
gene_means = GCT_object.data_df.mean(axis=1)
#making histogram of means
ax.hist(gene_means)
plt.title("MeanGeneExpressionZSPCQNORM")
plt.xlabel("MedianGeneExpression")
plt.ylabel("Count")
elif plot_title == "QNORM":
gene_medians = GCT_object.data_df.median(axis=1)
#making histogram of medians
ax.hist(gene_medians)
plt.title("MedianGeneExpressionQNORM")
plt.xlabel("MedianGeneExpression")
plt.ylabel("Count")
plt.show()
f.savefig("hist_example1.png")
# plt.ylim(-1, 1)
# plt.xlim(-1,1)
# histo_plotter("/Users/eibelman/Desktop/ZSCOREDATA- CXA061_SKL_48H_X1_B29_ZSPCQNORM_n372x978.gct.txt", "ZSPCQNORM", ax1)
# histo_plotter("/Users/eibelman/Desktop/NewLJP005_A375_24H_X2_B19_QNORM_n373x978.gct.txt", "QNORM", ax1)
#########
# Create list of x2 LJP005 cell line files
z_list = glob.glob("/Volumes/cmap_obelix/pod/custom/LJP/roast/LJP005_[A375, A549, BT20, HA1E, HC515, HEPG2, HS578T, HT29]*X2*/zs/*ZSPCQNORM*.gct")
q_list = glob.glob("/Volumes/cmap_obelix/pod/custom/LJP/roast/LJP005_[A375, A549, BT20, HA1E, HC515, HEPG2, HS578T, HT29]*_X2_*/*_QNORM_*.gct")
# for loop which allows plotting multiple files in a single figure
f, axarray = plt.subplots(2, 4)
for n, single_q in enumerate(q_list):
# axarray = plt.subplot(len(q_list), 1, n+1)
axarray = histo_plotter(n, "QNORM", ax1)
# axarray[n].plot()
plt.show()
# f, axarray = plt.subplots(2, 4)
# for n, single_z in enumerate(z_list):
# # ax = plt.subplot(len(z_list), 1, n+1)
# # histo_plotter(single_z, "ZSPCQNORM", ax1)
</code></pre>
| 1 | 2016-08-02T17:54:28Z | 38,729,716 | <p>You can try this:</p>
<pre><code>import matplotlib.pyplot as plt
plt.figure()
for n, single_q in enumerate(q_list):
    ax = plt.subplot(len(q_list), 1, n+1)
    GCT_object = parse_gctoo.parse(single_q)
    gene_medians = GCT_object.data_df.median(axis=1)
    plt.hist(gene_medians)
    # tweak title, labels, etc.
plt.show()
</code></pre>
<p>Explaining:</p>
<ul>
<li><code>enumerate</code> iterates over the items (<code>single_q</code>) while also returning their indices (<code>n</code>);</li>
<li>the function <code>subplot(nrows, ncols, index)</code> takes the number of rows and columns of the subplot grid, plus the position of the current plot within that grid. <code>n+1</code> is necessary because subplot positions are numbered starting from 1;</li>
<li>I edited the rest of the code with your own data</li>
</ul>
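<p>A quick self-contained check of the index arithmetic (the file names below are placeholders, not real data):</p>

```python
# enumerate yields (index, item) pairs; subplot positions are index + 1
# because subplot numbering starts at 1.
q_list = ["a.gct", "b.gct", "c.gct"]  # placeholder file names
positions = [n + 1 for n, single_q in enumerate(q_list)]
print(positions)  # -> [1, 2, 3]
```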
| 1 | 2016-08-02T20:18:23Z | [
"python",
"for-loop",
"matplotlib",
"plot"
] |
How can I decode a utf-8 byte array to a string in Python2? | 38,727,390 | <p>I have an array of bytes representing a utf-8 encoded string. I want to decode these bytes back into the string in Python2. I am relying on Python2 for my overall program, so I can not switch to Python3. </p>
<pre><code>array = [67, 97, 102, **-61, -87**, 32, 70, 108, 111, 114, 97]
</code></pre>
<p>-> Caf<strong>é</strong> Flora</p>
<p>Since every character in the string I want is not necessarily represented by exactly 1 byte in the array, I can not use a solution like:</p>
<pre><code>"".join(map(chr, array))
</code></pre>
<p>I tried to create a function that would step through the array, and whenever it encounters a number not in the range 0-127 (ASCII), create a new 16 bit int, shift the current bits over 8 to the left, and then add the following byte using a bitwise OR. Finally it would use unichr() to decode it.</p>
<pre><code>result = []
for i in range(len(byte_array)):
    x = byte_array[i]
    if x < 0:
        b16 = x & 0xFFFF # 16 bit
        b16 = b16 << 8
        b16 = b16 | byte_array[i+1]
        result.append(unichr(m16))
    else:
        result.append(chr(x))
return "".join(result)
</code></pre>
<p>However, this was unsuccessful.</p>
<p>The following article explains the issue very well, and includes a nodeJS solution:</p>
<p><a href="http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html" rel="nofollow">http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html</a></p>
| 1 | 2016-08-02T17:55:45Z | 38,727,464 | <p>Use the little-used <a href="https://docs.python.org/2/library/array.html" rel="nofollow"><code>array</code> module</a> to convert your input to a bytestring and then <code>decode</code> it with the UTF-8 codec:</p>
<pre><code>import array
decoded = array.array('b', your_input).tostring().decode('utf-8')
</code></pre>
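<p>The snippet above is Python 2 (<code>tostring</code>); if you ever run the same idea on Python 3, the method is spelled <code>tobytes</code>. A small sketch with the array from the question:</p>

```python
import array

data = [67, 97, 102, -61, -87, 32, 70, 108, 111, 114, 97]
# 'b' means signed char, so the negative values are accepted as-is.
decoded = array.array('b', data).tobytes().decode('utf-8')
print(decoded)  # -> Café Flora
```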
| 2 | 2016-08-02T17:59:29Z | [
"python",
"arrays",
"python-2.7",
"utf-8"
] |
How can I decode a utf-8 byte array to a string in Python2? | 38,727,390 | <p>I have an array of bytes representing a utf-8 encoded string. I want to decode these bytes back into the string in Python2. I am relying on Python2 for my overall program, so I can not switch to Python3. </p>
<pre><code>array = [67, 97, 102, **-61, -87**, 32, 70, 108, 111, 114, 97]
</code></pre>
<p>-> Caf<strong>é</strong> Flora</p>
<p>Since every character in the string I want is not necessarily represented by exactly 1 byte in the array, I can not use a solution like:</p>
<pre><code>"".join(map(chr, array))
</code></pre>
<p>I tried to create a function that would step through the array, and whenever it encounters a number not in the range 0-127 (ASCII), create a new 16 bit int, shift the current bits over 8 to the left, and then add the following byte using a bitwise OR. Finally it would use unichr() to decode it.</p>
<pre><code>result = []
for i in range(len(byte_array)):
    x = byte_array[i]
    if x < 0:
        b16 = x & 0xFFFF # 16 bit
        b16 = b16 << 8
        b16 = b16 | byte_array[i+1]
        result.append(unichr(m16))
    else:
        result.append(chr(x))
return "".join(result)
</code></pre>
<p>However, this was unsuccessful.</p>
<p>The following article explains the issue very well, and includes a nodeJS solution:</p>
<p><a href="http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html" rel="nofollow">http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html</a></p>
| 1 | 2016-08-02T17:55:45Z | 38,727,472 | <p>You have to have in mind that a "string" in Python2 is not proper text, just a sequence of bytes in memory, which happens to map to characters when you "print" them - if the mapping of the intended characters in the byte sequence matches the one in the terminal, you will see properly formatted text.</p>
<p>If your terminal is not UTF-8, even if you get the proper byte-string in memory, just printing it would show you the wrong results. That is why the extra "decode" step is needed at the end of the expression.</p>
<pre><code>text = b''.join(chr(i if i >= 0 else 256 + i) for i in array).decode('utf-8')
</code></pre>
<p>As your source encoded the numbers between 128 and 255 as negative numbers, we have the inline "if" operator to renormalize the value before calling "chr".</p>
<p>Just to be clear - you say "Since every character in the string I want is not necessarily represented by exactly 1 byte in the array". With Python 2.x strings, it is the <em>terminal</em> that takes care of that anyway. If you want to deal with proper text, the way to do it, after joining your numbers into a proper (byte) string, is to use the "decode" method - this is the part that knows about UTF-8 multi-byte encoded characters and gives you back a (text) string object (a 'unicode' object in Python 2) that treats each character as an entity.</p>
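<p>For reference, the same normalization can be written with a modulo; this sketch is Python 3 (where <code>bytes</code> accepts an iterable of ints):</p>

```python
data = [67, 97, 102, -61, -87, 32, 70, 108, 111, 114, 97]
# i % 256 maps -61 -> 195 and -87 -> 169, i.e. the unsigned bytes
# 0xC3 0xA9 of the UTF-8 encoding of 'é'.
text = bytes(i % 256 for i in data).decode('utf-8')
print(text)  # -> Café Flora
```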
| 1 | 2016-08-02T17:59:38Z | [
"python",
"arrays",
"python-2.7",
"utf-8"
] |
How can I decode a utf-8 byte array to a string in Python2? | 38,727,390 | <p>I have an array of bytes representing a utf-8 encoded string. I want to decode these bytes back into the string in Python2. I am relying on Python2 for my overall program, so I can not switch to Python3. </p>
<pre><code>array = [67, 97, 102, **-61, -87**, 32, 70, 108, 111, 114, 97]
</code></pre>
<p>-> Caf<strong>é</strong> Flora</p>
<p>Since every character in the string I want is not necessarily represented by exactly 1 byte in the array, I can not use a solution like:</p>
<pre><code>"".join(map(chr, array))
</code></pre>
<p>I tried to create a function that would step through the array, and whenever it encounters a number not in the range 0-127 (ASCII), create a new 16 bit int, shift the current bits over 8 to the left, and then add the following byte using a bitwise OR. Finally it would use unichr() to decode it.</p>
<pre><code>result = []
for i in range(len(byte_array)):
    x = byte_array[i]
    if x < 0:
        b16 = x & 0xFFFF # 16 bit
        b16 = b16 << 8
        b16 = b16 | byte_array[i+1]
        result.append(unichr(m16))
    else:
        result.append(chr(x))
return "".join(result)
</code></pre>
<p>However, this was unsuccessful.</p>
<p>The following article explains the issue very well, and includes a nodeJS solution:</p>
<p><a href="http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html" rel="nofollow">http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-of-bytes-in-node-js.html</a></p>
| 1 | 2016-08-02T17:55:45Z | 38,727,511 | <p>you can use <code>struct.pack</code> for this</p>
<pre><code>>>> import struct
>>> a = [67, 97, 102, -61, -87, 32, 70, 108, 111, 114, 97]
>>> struct.pack("b"*len(a),*a)
'Caf\xc3\xa9 Flora'
>>> print struct.pack("b"*len(a),*a).decode('utf8')
Café Flora
</code></pre>
| 1 | 2016-08-02T18:01:24Z | [
"python",
"arrays",
"python-2.7",
"utf-8"
] |
Tagging lists within lists for removal | 38,727,418 | <p>First off, I'm doing this in Python.
I've got a complex list that will consist of a varying number of lists. I've tagged each list with a number (occupying the third position of each entry), because I want to cull lists that contain this number beyond the first occurrence, essentially removing duplicates beyond the first, so I might get a list like this:</p>
<pre><code>list = [[[-.3,.9,1],[.2,.1,4]],[[.11,.22,1],[.01,.5,2],[.55,.5,3]],[[.3,.3,3],[.6,.7,4],[.8,.7,5]]]
</code></pre>
<p>I've got a list of varying lists, and those varying lists contain varying lists.
Breaking down the list, I want to look at the third value in each list -
<code>[[[1],[4]],[[1],[2],[3]],[[3],[4],[5]]]</code> and then cull every entry beyond the first to get something like this; </p>
<pre><code>finalList = [[[-.3,.9,1],[.2,.1,4]],[[.01,.5,2],[.55,.5,3]],[[.8,.7,5]]]
</code></pre>
| 0 | 2016-08-02T17:57:30Z | 38,727,803 | <p>So here's a lazy solution (untested). I'm assuming that your original list is a "list of lists of lists" as per your example:</p>
<pre><code>finalList = []
found = []
for i in list:
    miniList = []
    for j in i:
        if not j[2] in found:
            miniList.append(j)
            found.append(j[2])
    finalList.append(miniList)
</code></pre>
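<p>If the lists get long, a <code>set</code> makes the membership test O(1); here is a tested variant of the same cull, using the example list from the question:</p>

```python
data = [[[-.3, .9, 1], [.2, .1, 4]],
        [[.11, .22, 1], [.01, .5, 2], [.55, .5, 3]],
        [[.3, .3, 3], [.6, .7, 4], [.8, .7, 5]]]

seen = set()
finalList = []
for group in data:
    kept = []
    for entry in group:
        if entry[2] not in seen:  # keep only the first entry with this tag
            seen.add(entry[2])
            kept.append(entry)
    finalList.append(kept)
print(finalList)
```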
| 0 | 2016-08-02T18:19:24Z | [
"python",
"list",
"duplicates"
] |
Edit G code Line by Line | 38,727,438 | <p>I want to read a g code file line by line and, based off of a comment at the end, perform an action on it. The g code is coming out of Slic3r. Some example lines in no particular order look like this:</p>
<pre><code>G1 Z0.242 F7800.000 ; move to next layer (0)
G1 E-2.00000 F2400.00000 ; retract
G1 X0.000 Y30.140 F7800.000 ; move to first skirt point
G1 E0.00000 F2400.00000 ; unretract
G1 X-53.493 Y30.140 E2.14998 F1800.000 ; skirt
G1 X57.279 Y-37.776 E22.65617 ; perimeter
G1 X-52.771 Y-38.586 E56.83128 ; infill
</code></pre>
<p>The comment always starts with a semicolon and also uses consistent terms, such as perimeter or infill. Ideally the script would read the line, search for the particular comment case, perform an action based on that, then update the file, and then go to the next line. I am somewhat new to python, so I know that this would be done with a for-loop with nested if statements, however I am not sure how to set up the architecture to be based of those key terms.</p>
| 2 | 2016-08-02T17:58:21Z | 38,739,613 | <p>I am not sure what you want to modify exactly, so I chose to add the word <code>'(up)'</code> in front of a comment that indicates retraction, and <code>'(down)'</code> in front of a comment that indicates unretraction.</p>
<pre><code>file_name = "file.gcode" # put your filename here
with open(file_name, 'r+') as f:
new_code = "" # where the new modified code will be put
content = f.readlines() # gcode as a list where each element is a line
for line in content:
gcode, comment = line.strip('\n').split(";") # seperates the gcode from the comments
if 'unretract' in comment:
comment = ' (down) ' + comment
elif 'retract' in comment:
comment = ' (up)' + comment
new_code += gcode + ';' + comment + '\n' # rebuild the code from the modified pieces
f.seek(0) # set the cursor to the beginning of the file
f.write(new_code) # write the new code over the old one
</code></pre>
<p>The file contents would now be:</p>
<pre><code>G1 Z0.242 F7800.000 ; move to next layer (0)
G1 E-2.00000 F2400.00000 ; (up) retract
G1 X0.000 Y30.140 F7800.000 ; move to first skirt point
G1 E0.00000 F2400.00000 ; (down) unretract
G1 X-53.493 Y30.140 E2.14998 F1800.000 ; skirt
G1 X57.279 Y-37.776 E22.65617 ; perimeter
G1 X-52.771 Y-38.586 E56.83128 ; infill
</code></pre>
<p>If you want to modify the gcode instead, lets say replace the first letter by a <code>'U'</code> if the comment indicates retraction, and <code>'D'</code> if the comment indicates unretraction, you just have to replace this:</p>
<pre><code>if 'unretract' in comment:
    comment = ' (down) ' + comment
elif 'retract' in comment:
    comment = ' (up)' + comment
</code></pre>
<p>By this:</p>
<pre><code>if 'unretract' in comment:
    gcode = 'D' + gcode[1:]
elif 'retract' in comment:
    gcode = 'U' + gcode[1:]
</code></pre>
<p>New file contents:</p>
<pre><code>G1 Z0.242 F7800.000 ; move to next layer (0)
U1 E-2.00000 F2400.00000 ; retract
G1 X0.000 Y30.140 F7800.000 ; move to first skirt point
D1 E0.00000 F2400.00000 ; unretract
G1 X-53.493 Y30.140 E2.14998 F1800.000 ; skirt
G1 X57.279 Y-37.776 E22.65617 ; perimeter
G1 X-52.771 Y-38.586 E56.83128 ; infill
</code></pre>
<p>I hope this helps !</p>
<hr>
<p><strong>EDIT</strong></p>
<p>To answer your request to get the <code>X</code>, <code>Y</code>, and <code>F</code> values, here is the updated script to store these values:</p>
<pre><code>file_name = "file.gcode" # put your filename here
with open(file_name, 'r+') as f:
    coordinates = []
    content = f.readlines() # gcode as a list where each element is a line
    for line in content:
        gcode, comment = line.strip('\n').split(";")
        coordinate_set = {}
        if 'retract' not in comment and 'layer' not in comment:
            for num in gcode.split()[1:]:
                coordinate_set[num[:1]] = float(num[1:])
            coordinates.append(coordinate_set)
</code></pre>
<p>If you <code>print(coordinates)</code> you get:</p>
<pre><code>[{'X': 0.0, 'F': 7800.0, 'Y': 30.14}, {'E': 2.14998, 'X': -53.493, 'F': 1800.0, 'Y': 30.14}, {'E': 22.65617, 'X': 57.279, 'Y': -37.776}, {'E': 56.83128, 'X': -52.771, 'Y': -38.586}]
</code></pre>
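<p>You can check the parsing idea on a single line without touching any file; the sample line is taken from the question:</p>

```python
line = "G1 X-53.493 Y30.140 E2.14998 F1800.000 ; skirt\n"
gcode, comment = line.strip('\n').split(";")  # split code from comment
# the first letter of each word is the axis, the rest is the value
coordinate_set = {num[:1]: float(num[1:]) for num in gcode.split()[1:]}
print(coordinate_set)
```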
<hr>
<p><strong>EDIT 2</strong></p>
<p>Script 1:</p>
<pre><code>file_name = "file.gcode"
with open(file_name, 'r+') as f:
    new_code = ""
    content = f.readlines()
    for line in content:
        if ';' in line:
            try:
                gcode, comment = line.strip('\n').split(";")
            except:
                print('ERROR\n', line)
        else: # when there are no comments
            gcode = line.strip('\n')
            comment = ""
        if 'unretract' in comment:
            comment = ' (down) ' + comment
        elif 'retract' in comment:
            comment = ' (up)' + comment
        if comment != "":
            new_code += gcode + ';' + comment + '\n'
        else: # when there are no comments
            new_code += gcode + '\n'
    f.seek(0)
    f.write(new_code)
</code></pre>
<p>Script 2:</p>
<pre><code>file_name = "file.gcode"
with open(file_name, 'r+') as f:
    coordinates = []
    content = f.readlines()
    for line in content:
        if ';' in line:
            try:
                gcode, comment = line.strip('\n').split(";")
            except:
                print(' ERROR \n', line, '\n')
        else:
            gcode = line.strip('\n')
            comment = ""
        coordinate_set = {}
        if 'retract' not in comment and 'layer' not in comment and gcode:
            for num in gcode.split()[1:]:
                coordinate_set[num[:1]] = float(num[1:])
            coordinates.append(coordinate_set)
</code></pre>
<p>Some lines where Slic3r writes unusual text do not parse; this is what the <code>ERROR</code> messages indicate. Also, I recommend you don't try to <code>print</code> all of the coordinates, as this made Python crash. If you want to see the totality of the coordinates, you can write them to a separate <code>.txt</code> file, as such:</p>
<pre><code>with open('coordinates.txt', 'w') as f2:
    f2.write(str(coordinates))
</code></pre>
<hr>
<p><strong>EDIT 3</strong></p>
<p>Updated scripts to work with comments in parentheses.</p>
<p>Script 1:</p>
<pre><code>file_name = "file.gcode"
with open(file_name, 'r+') as f:
    new_code = ""
    content = f.readlines()
    for line in content:
        if ';' in line:
            try:
                gcode, comment = line.strip('\n').split(";")
            except:
                print('ERROR\n', line)
        else: # when there are no comments
            gcode = line.strip('\n')
            comment = ""
        if 'unretract' in comment:
            comment = ' (down) ' + comment
        elif 'retract' in comment:
            comment = ' (up)' + comment
        if comment != "":
            new_code += gcode + ';' + comment + '\n'
        else: # when there are no comments
            new_code += gcode + '\n'
    f.seek(0)
    f.write(new_code)
</code></pre>
<p>Script 2:</p>
<pre><code>file_name = "3Samples_0skin_3per.gcode" # put your filename here
with open(file_name, 'r+') as f:
    coordinates = []
    content = f.readlines()
    for line in content:
        if ';' in line:
            try:
                gcode, comment = line.strip('\n').split(";")
            except:
                print(' ERROR 1: \n', line, '\n')
        elif '(' in line:
            try:
                gcode, comment = line.strip('\n').strip(')').split("(")
            except:
                print('ERROR 2: \n', line, '\n')
        else:
            gcode = line.strip('\n')
            comment = ""
        coordinate_set = {}
        if 'retract' not in comment and 'layer' not in comment and gcode:
            for num in gcode.split()[1:]:
                if len(num) > 1:
                    try:
                        coordinate_set[num[:1]] = float(num[1:])
                    except:
                        print('ERROR 3: \n', gcode, '\n')
            coordinates.append(coordinate_set)
</code></pre>
| 1 | 2016-08-03T09:26:47Z | [
"python",
"if-statement",
"for-loop",
"g-code"
] |
Installing OpenCV for Python 3 on Fedora 24 | 38,727,447 | <p>I followed this <a href="https://gist.github.com/wuerges/b3c32dfcc6302aa1c105" rel="nofollow">makefile</a> very carefully, and it finishes well.</p>
<p>On CMake output, there's this: <strong>python(for build): 2.7</strong> instead of <strong>3.5</strong></p>
<p>But I read <a href="http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/" rel="nofollow">here</a> and I quote: </p>
<blockquote>
<p>You can ignore the âfor buildâ section, that part of the CMake script
is buggy.</p>
</blockquote>
<p>But after I finish compiling and installing:</p>
<pre><code>$ python3
> import cv2
</code></pre>
<p><strong>ImportError: No module named 'cv2'</strong></p>
<p>What am I doing wrong? Do I need to specify PYTHON3_PACKAGES_PATH, PYTHON3_LIBRARY, PYTHON3_INCLUDE_DIR? I want to install OpenCV system-wide so I can symlink it and use it in any virtualenv that might require opencv. Thanks in advance.</p>
| 0 | 2016-08-02T17:58:37Z | 38,775,801 | <p>After some comments, I think that your problem may be with setting the path of the library. So, after compiling OpenCV (see <a href="http://paste.fedoraproject.org/401387/" rel="nofollow">here</a>) I didn't install it on a default location on my system, I rather installed it in a local folder (it is easier to delete after this test), so I needed to provide the path to the library, that's why I did:</p>
<pre><code>$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/german/Programs/test-install/usr/local/lib
</code></pre>
<p>Then I also need to tell python where the module is, because again it isn't in a default path, so I append the path of OpenCV to the path list:</p>
<pre><code>$ python3
>>> import sys
>>> sys.path.append("/home/german/Programs/test-install/usr/local/lib/python3.5/site-packages/")
</code></pre>
<p>You may want to check your python path after the append:</p>
<pre><code>>>> sys.path
['',
'/usr/bin',
'/usr/lib64/python35.zip',
'/usr/lib64/python3.5',
'/usr/lib64/python3.5/plat-linux',
'/usr/lib64/python3.5/lib-dynload',
'/usr/lib64/python3.5/site-packages',
'/usr/lib/python3.5/site-packages',
'/usr/lib/python3.5/site-packages/IPython/extensions',
'/home/german/.ipython',
'/home/german/Programs/test-install/usr/local/lib/python3.5/site-packages/']
</code></pre>
<p>Hope this helps to figure out your problem!</p>
| 1 | 2016-08-04T19:26:32Z | [
"python",
"opencv",
"fedora"
] |
Applying language filter to Entrez.esearch and Entrez.efetch | 38,727,469 | <p>I'm querying PubMed for some results using <code>Biopython</code>. This is a portion of the code:</p>
<pre><code>def search(query):
    Entrez.email = 'gandalf@rivendell.lotr'
    handle = Entrez.esearch(db = 'pubmed',
                            sort = 'relevance',
                            retmax = '30000',
                            retmode = 'xml',
                            term = query)
    results = Entrez.read(handle)
    return results
</code></pre>
<p>I want the results to contain only papers in English. I checked the documentation at <a href="http://www.ncbi.nlm.nih.gov/books/NBK25499/" rel="nofollow">http://www.ncbi.nlm.nih.gov/books/NBK25499/</a> but found no attributes for this filter.</p>
<p><code>PubMed</code>'s manual search allows filtering by language. How should I modify the code?</p>
| 1 | 2016-08-02T17:59:36Z | 38,728,096 | <p>You can modify the search term as shown below:</p>
<pre><code>query = "{} AND English[Language]".format(query)
handle = Entrez.esearch(db='pubmed',
                        sort='relevance',
                        retmax='30',
                        retmode='xml',
                        term=query)
</code></pre>
| 2 | 2016-08-02T18:37:09Z | [
"python",
"filter",
"biopython",
"pubmed"
] |
Redirecting C# Standard Output and Reading it with Python | 38,727,491 | <p>I'm trying to redirect the standard output from a Command Line Project I wrote in C# and read the data in a Python file. </p>
<p>Currently, the C# application writes data that it reads from a sensor into a CSV file. I have to run the Python file later to get and process the data (has to be done in Python and the data collection has to be done in .NET to use the SDK).</p>
<p>I want to be able to run the C# and the Python projects at the same time and transfer the stream of data directly from the C# to Python project without the use of an intermediate, local file (the CSV). </p>
<p>I've done my own hunting on SO and in the MSDN Documentation. I'm looking at using the <a href="https://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.redirectstandardoutput(v=vs.110).aspx" rel="nofollow">ProcessStartInfo.RedirectStandardOutput Property</a> to redirect the console output of the C# application.</p>
<p>I don't yet know how to pick up on the data in this Stream from a Python file. <a href="http://www.diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html" rel="nofollow">This article</a> was helpful in grasping a better understanding of how to approach it, but I'm still stuck. </p>
<p>I'm also looking at subprocess.Popen.communicate in Python but I'm not yet sure if that would work for what I am asking. </p>
<p>Any help would be appreciated. </p>
| 0 | 2016-08-02T18:00:25Z | 38,728,687 | <p>Thanks to @Quantic for providing some great resources. Using <code>subprocess.Popen</code>, I can run the built .exe of my C# project and redirect the output to my Python file. I am now able to <code>print()</code> in the Python console all the information being output to the C# console (<code>Console.WriteLine()</code>).</p>
<p>Python code: </p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('ConsoleDataImporter.exe', stdout = PIPE, stderr = STDOUT, shell = True)
while True:
    line = p.stdout.readline()
    print(line)
    if not line:
        break
</code></pre>
<p>This gets the console output of the .NET project line by line as it is created and breaks out of the enclosing <code>while</code> loop upon the project's termination. </p>
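<p>The same read-until-EOF loop can be tried without the C# build by letting a Python one-liner stand in for the executable (<code>ConsoleDataImporter.exe</code> is specific to my machine):</p>

```python
import sys
from subprocess import Popen, PIPE, STDOUT

# a child process that prints two lines, as a stand-in for the .exe
child = [sys.executable, "-c", "print('line1'); print('line2')"]
p = Popen(child, stdout=PIPE, stderr=STDOUT)
lines = []
while True:
    line = p.stdout.readline()
    if not line:  # empty bytes: the child closed its stdout
        break
    lines.append(line.decode().strip())
p.wait()
print(lines)  # -> ['line1', 'line2']
```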
| 0 | 2016-08-02T19:11:38Z | [
"c#",
"python",
"stream",
"stdout"
] |
Adding default parameter value with type hint in Python | 38,727,520 | <p>If I have a function like this:</p>
<pre><code>def foo(name, opts={}):
    pass
</code></pre>
<p>And I want to add type hints to the parameters, how do I do it? The way I assumed gives me a syntax error:</p>
<pre><code>def foo(name: str, opts={}: dict) -> str:
    pass
</code></pre>
<p>The following doesn't throw a syntax error but it doesn't seem like the intuitive way to handle this case:</p>
<pre><code>def foo(name: str, opts: dict={}) -> str:
    pass
</code></pre>
<p>I can't find anything in the <a href="https://docs.python.org/3/library/typing.html" rel="nofollow"><code>typing</code> documentation</a> or on a Google search. </p>
<p>Edit: I didn't know how default arguments worked in Python, but for the sake of this question, I will keep the examples above. In general it's much better to do the following:</p>
<pre><code>def foo(name: str, opts: dict=None) -> str:
    if not opts:
        opts={}
    pass
</code></pre>
| 3 | 2016-08-02T18:01:36Z | 38,727,786 | <p>Your second way is correct. </p>
<pre><code>def foo(opts: dict = {}):
    pass

print(foo.__annotations__)
</code></pre>
<p>this outputs</p>
<pre><code>{'opts': <class 'dict'>}
</code></pre>
<p>It's true that it's not listed in <a href="https://www.python.org/dev/peps/pep-0484/" rel="nofollow">PEP 484</a>, but type hints are an application of function annotations, which are documented in PEP 3107. <a href="https://www.python.org/dev/peps/pep-3107/#syntax" rel="nofollow">The syntax section</a> makes it clear that keyword arguments work with function annotations in this way. </p>
<p>I strongly advise against using mutable keyword arguments. More information <a href="http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments" rel="nofollow">here</a>.</p>
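<p>A small demo of that pitfall (a separate function, not the <code>foo</code> above): the default dict is created once and shared by every call that omits the argument.</p>

```python
def track(opts: dict = {}):
    # mutating the default mutates the one shared dict
    opts["calls"] = opts.get("calls", 0) + 1
    return opts

first = track()
second = track()
print(first is second, second["calls"])  # -> True 2
```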
| 3 | 2016-08-02T18:18:22Z | [
"python",
"python-3.x",
"type-hinting"
] |
How to Loop through a CSV and use each Row String to Create a QR Code - PYTHON | 38,727,579 | <p>I am trying to write a code that will loop through a CSV file I have, combine the two pieces of data (in this case "Rep" and "Entry") and then create a QR Code for each returned value... I have figured out how to make the QR code and how to combine the data, but I cannot figure out the loop and how to put all of this together. Thank you for any help!</p>
<pre><code>import csv
import qrcode
with open('SLS_labels.csv') as csvfile:
    fieldnames= ["Rep", "Entry"]
    reader= csv.reader(csvfile)
    for row in reader:
        labeldata = row[0] + row[1]
        print labeldata

qr = qrcode.QRCode(
    version=1,
    error_correction=qrcode.constants.ERROR_CORRECT_L,
    box_size=1,
    border=4,
)
qr.add_data(labeldata)
qr.make(fit=True)
img = qr.make_image()
img.save("test.jpg")
</code></pre>
| 0 | 2016-08-02T18:05:33Z | 38,727,735 | <p>You are going to want to create you label within the for loop:</p>
<pre><code>for row in reader:
    labeldata = row[0] + row[1]
    qr = qrcode.QRCode(
        version=1,
        error_correction=qrcode.constants.ERROR_CORRECT_L,
        box_size=1,
        border=4)
    qr.add_data(labeldata)
    qr.make(fit=True)
    img = qr.make_image()
    img.save(labeldata+".jpg") #this assumes your label data would make a good file name
</code></pre>
<p>Alternatively you could add all of your <code>labeldata</code> to a list and iterate over that...</p>
<pre><code>labeldata = []
for row in reader:
    labeldata += [row[0] + row[1]]
...
for label in labeldata:
    #make labels
</code></pre>
| 0 | 2016-08-02T18:15:47Z | [
"python",
"python-2.7",
"loops",
"csv",
"qr-code"
] |
How to Loop through a CSV and use each Row String to Create a QR Code - PYTHON | 38,727,579 | <p>I am trying to write a code that will loop through a CSV file I have, combine the two pieces of data (in this case "Rep" and "Entry") and then create a QR Code for each returned value... I have figured out how to make the QR code and how to combine the data, but I cannot figure out the loop and how to put all of this together. Thank you for any help!</p>
<pre><code>import csv
import qrcode
with open('SLS_labels.csv') as csvfile:
    fieldnames= ["Rep", "Entry"]
    reader= csv.reader(csvfile)
    for row in reader:
        labeldata = row[0] + row[1]
        print labeldata

qr = qrcode.QRCode(
    version=1,
    error_correction=qrcode.constants.ERROR_CORRECT_L,
    box_size=1,
    border=4,
)
qr.add_data(labeldata)
qr.make(fit=True)
img = qr.make_image()
img.save("test.jpg")
</code></pre>
| 0 | 2016-08-02T18:05:33Z | 38,727,767 | <pre><code>import csv
import qrcode
with open('SLS_labels.csv') as csvfile:
    fieldnames= ["Rep", "Entry"]
    reader= csv.reader(csvfile)
    qr = qrcode.QRCode(
        version=1,
        error_correction=qrcode.constants.ERROR_CORRECT_L,
        box_size=1,
        border=4,
    )
    for i, row in enumerate(reader):
        labeldata = row[0] + row[1]
        print labeldata
        qr.add_data(labeldata)
        qr.make(fit=True)
        img = qr.make_image()
        img.save("test{}.jpg".format(i))
</code></pre>
<p>I added enumerate so you'd also get an index number for your file names instead of having multiple <code>test.jpg</code>.</p>
<p>If you want to clear the data added you can call <code>qr.clear()</code> after <code>img.save()</code></p>
| 0 | 2016-08-02T18:17:21Z | [
"python",
"python-2.7",
"loops",
"csv",
"qr-code"
] |
How does one set different learning rates for different layers or variables in TensorFlow? | 38,727,612 | <p>I know that one can simply do it for all of them using something as in the tutorials:</p>
<pre><code>opt = tf.train.GradientDescentOptimizer(learning_rate)
</code></pre>
<p>however it would be nice it one could pass a dictionary that maps the variable name to its corresponding learning rate. Is that possible?</p>
<p>I know that one could simply use <code>compute_gradients()</code> followed by <code>apply_gradients()</code> and do it manually but that seems silly. Is there a smarter way to assign specific learning rates to specific variables?</p>
<p>Is the only way to do this to create specific optimizer as in:</p>
<pre><code># Create an optimizer with the desired parameters.
opt = GradientDescentOptimizer(learning_rate=0.1)
# Add Ops to the graph to minimize a cost by updating a list of variables.
# "cost" is a Tensor, and the list of variables contains tf.Variable
# objects.
opt_op = opt.minimize(cost, var_list=<list of variables>)
</code></pre>
<p>and simply give the specific learning rate to each optimizer? But that would mean we have a list of optimizers and hence, we would need to apply the learning rule with sess.run to each optimizer. Right?</p>
| 1 | 2016-08-02T18:08:00Z | 38,728,740 | <p>As far as I can tell this is not possible. Mostly because this is not really a valid gradient descent then. There are plenty of optimizers which learn variable-specific scaling factors on their own (like Adam or AdaGrad). Specifying a per-variable learning rate (a constant one) would mean that you do not follow the gradient anymore, and while it makes sense for mathematically well-formulated methods, simply setting them to pre-defined values is just a heuristic, which I believe is a reason for not implementing this in core TF.</p>
<p>As you said - you can always do it on your own, define your own optimizer, iterate over variables between compute gradients and apply them, which would be around 3-4 lines of code (one to compute the gradients, one to iterate and add multiplication ops, and one to apply them back), and as far as I know - this is the simplest solution to achieve your goal.</p>
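<p>Here is a plain-Python sketch of that manual route (no TensorFlow, so the mechanics are easy to verify): compute the gradients, scale each one by a per-variable rate taken from a dict, then apply an ordinary descent step. It minimizes f(w, b) = (w-3)^2 + (b+1)^2.</p>

```python
def gradients(params):
    # analytic gradients of f(w, b) = (w - 3)**2 + (b + 1)**2
    return {"w": 2 * (params["w"] - 3), "b": 2 * (params["b"] + 1)}

def apply_scaled(params, grads, lr_map):
    for name, g in grads.items():
        params[name] -= lr_map[name] * g  # per-variable learning rate

params = {"w": 0.0, "b": 0.0}
lr_map = {"w": 0.1, "b": 0.01}  # variable name -> learning rate

for _ in range(2000):
    apply_scaled(params, gradients(params), lr_map)

print(round(params["w"], 4), round(params["b"], 4))  # -> 3.0 -1.0
```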
| 1 | 2016-08-02T19:15:22Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow",
"conv-neural-network"
] |
How do you plot a line with two slopes using python | 38,727,734 | <p>I am using the below code to plot a line with two slopes as shown in the picture. The slope should decline after a certain limit [limit=5]. I am using a vectorisation method to set the slope values. Is there any other method to set the slope values? Could anyone help me with this?</p>
<pre><code> import matplotlib.pyplot as plt
import numpy as np
#Setting the condition
L=5 #Limit
m=1 #Slope
c=0 #Intercept
x=np.linspace(0,10,1000)
#Calculate the y value
y=m*x+c
#plot the line
plt.plot(x,y)
#Set the slope values using vectorisation
m[(x<L)] = 1.0
m[(x>L)] = 0.75
# plot the line again
plt.plot(x,y)
#Display with grids
plt.grid()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/3dtgA.png" rel="nofollow"><img src="http://i.stack.imgur.com/3dtgA.png" alt="enter image description here"></a></p>
| 2 | 2016-08-02T18:15:45Z | 38,728,070 | <p>Following your code, you should modify the main part like this:</p>
<pre><code>x=np.linspace(0,10,1000)
m = np.empty(x.shape)
c = np.empty(x.shape)
m[(x<L)] = 1.0
c[x<L] = 0
m[(x>L)] = 0.75
c[x>L] = L*(1.0 - 0.75)
y=m*x+c
plt.plot(x,y)
</code></pre>
<p>Note that <code>c</code> needs to change as well for the line to be continuous. This is the result: <a href="http://i.stack.imgur.com/Jd2N6.png" rel="nofollow"><img src="http://i.stack.imgur.com/Jd2N6.png" alt="enter image description here"></a></p>
| 1 | 2016-08-02T18:35:45Z | [
"python",
"numpy",
"matplotlib"
] |
How do you plot a line with two slopes using python | 38,727,734 | <p>I am using the below code to plot a line with two slopes as shown in the picture. The slope should decline after a certain limit [limit=5]. I am using a vectorisation method to set the slope values. Is there any other method to set the slope values? Could anyone help me with this?</p>
<pre><code> import matplotlib.pyplot as plt
import numpy as np
#Setting the condition
L=5 #Limit
m=1 #Slope
c=0 #Intercept
x=np.linspace(0,10,1000)
#Calculate the y value
y=m*x+c
#plot the line
plt.plot(x,y)
#Set the slope values using vectorisation
m[(x<L)] = 1.0
m[(x>L)] = 0.75
# plot the line again
plt.plot(x,y)
#Display with grids
plt.grid()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/3dtgA.png" rel="nofollow"><img src="http://i.stack.imgur.com/3dtgA.png" alt="enter image description here"></a></p>
| 2 | 2016-08-02T18:15:45Z | 38,728,071 | <p>You may be overthinking the problem. There are two line segments in the picture:</p>
<ol>
<li>From (0, 0) to (A, A')</li>
<li>From (A, A') to (B, B')</li>
</ol>
<p>You know that <code>A = 5</code>, <code>m = 1</code>, so <code>A' = 5</code>. You also know that <code>B = 10</code>. Given that <code>(B' - A') / (B - A) = 0.75</code>, we have <code>B' = 8.75</code>. You can therefore make the plot as follows:</p>
<pre><code>from matplotlib import pyplot as plt
m0 = 1
m1 = 0.75
x0 = 0 # Intercept
x1 = 5 # A
x2 = 10 # B
y0 = 0 # Intercept
y1 = y0 + m0 * (x1 - x0) # A'
y2 = y1 + m1 * (x2 - x1) # B'
plt.plot([x0, x1, x2], [y0, y1, y2])
</code></pre>
<p>Hopefully you see the pattern for computing y values for a given set of limits. Here is the result:</p>
<p><a href="http://i.stack.imgur.com/o1iIM.png" rel="nofollow"><img src="http://i.stack.imgur.com/o1iIM.png" alt="enter image description here"></a></p>
<p>Now let's say you really did want to use vectorization for some obscure reason. You would want to compute all the y values up front and plot once, otherwise you will get weird results. Here are some modifications to your original code:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
#Setting the condition
L = 5 #Limit
x = np.linspace(0, 10, 1000)
lMask = (x<=L) # Avoid recomputing this mask
# Compute a vector of slope values for each x
m = np.zeros_like(x)
m[lMask] = 1.0
m[~lMask] = 0.75
# Compute the y-intercept for each segment
b = np.zeros_like(x)
#b[lMask] = 0.0 # Already set to zero, so skip this step
b[~lMask] = L * (m[0] - 0.75)
# Compute the y-vector
y = m * x + b
# plot the line again
plt.plot(x, y)
#Display with grids
plt.grid()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/1afa7.png" rel="nofollow"><img src="http://i.stack.imgur.com/1afa7.png" alt="enter image description here"></a></p>
| 1 | 2016-08-02T18:35:46Z | [
"python",
"numpy",
"matplotlib"
] |