| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Python NameError: name 'encrypt' is not defined | 38,631,474 | <p>When I attempt to run this it says NameError: name 'encrypt' is not defined.</p>
<pre><code>MAX_KEY_SIZE = 26
def getMode():
    while True:
        print('Do you wish to encrypt or decrypt a message?')
        mode = input().lower()
        if mode in "encrypt" 'e' 'decrypt' 'd'.split():
            return mode
        else:
            print('Enter either "encrypt" or "e" or "decrypt" or "d".')
</code></pre>
| 0 | 2016-07-28T08:52:31Z | 38,631,721 | <p>Gotcha! <code>input</code> tries to eval your input (as such, it's named very misleadingly). Use <code>raw_input</code> for capturing user's wishes in string format.</p>
<p>Basically what <code>input</code> does is it takes <code>raw_input</code> and pipes it to <code>eval</code>: now you're trying to evaluate a string "encrypt" as Python code, so it has the same effect as writing "encrypt" to your file. Naturally that would result in an error because no such variable is introduced anywhere. Both <code>eval</code> and <code>input</code> are pretty dangerous stuff so try not to use them, there's very seldom a real use case for them.</p>
<p>More info on this difference around this site:
<a href="http://stackoverflow.com/a/15129556/308668">http://stackoverflow.com/a/15129556/308668</a></p>
| 1 | 2016-07-28T09:03:49Z | [
"python",
"nameerror"
] |
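Editor's note: the behaviour pogo describes can be reproduced directly. Typing `encrypt` at a Python 2 `input()` prompt is effectively `eval("encrypt")`, which raises the very `NameError` from the question. A minimal sketch (it calls `eval` directly rather than `input`, so it runs on Python 2 or 3):

```python
# Simulate what Python 2's input() does with the typed word "encrypt":
# it passes the raw text to eval(), which looks the name up as a variable.
typed = "encrypt"
error_message = None
try:
    eval(typed)  # Python 2 equivalent: input() after the user types: encrypt
except NameError as exc:
    error_message = str(exc)

print(error_message)  # the same NameError the question reports
```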
Python NameError: name 'encrypt' is not defined | 38,631,474 | <p>When I attempt to run this it says NameError: name 'encrypt' is not defined.</p>
<pre><code>MAX_KEY_SIZE = 26
def getMode():
    while True:
        print('Do you wish to encrypt or decrypt a message?')
        mode = input().lower()
        if mode in "encrypt" 'e' 'decrypt' 'd'.split():
            return mode
        else:
            print('Enter either "encrypt" or "e" or "decrypt" or "d".')
</code></pre>
| 0 | 2016-07-28T08:52:31Z | 38,632,362 | <pre><code>MAX_KEY_SIZE = 26
def getMode():
    while True:
        print('Do you wish to encrypt or decrypt a message?')
        mode = input().lower()
        if mode in "encrypt" 'e' 'decrypt' 'd'.split():
            return mode
        else:
            print('Enter either "encrypt" or "e" or "decrypt" or "d".')
</code></pre>
<p>Hope this is your code. If yes, then it should not give any error; however, the way you are trying to get the result will not serve your purpose, because <code>"encrypt" 'e' 'decrypt' 'd'.split()</code> will give you <code>['encryptedecryptd']</code>, so the <code>in</code> check you are using can never match a single word. Either search for the mode like <code>if any(mode in s for s in "encrypt" 'e' 'decrypt' 'd'.split()):</code>, or store <code>"encrypt", 'e', 'decrypt', 'd'</code> in a list and then use the <code>in</code> operator to match the user's input.</p>
<p>Hope it helps..</p>
| 0 | 2016-07-28T09:29:51Z | [
"python",
"nameerror"
] |
Python NameError: name 'encrypt' is not defined | 38,631,474 | <p>When I attempt to run this it says NameError: name 'encrypt' is not defined.</p>
<pre><code>MAX_KEY_SIZE = 26
def getMode():
    while True:
        print('Do you wish to encrypt or decrypt a message?')
        mode = input().lower()
        if mode in "encrypt" 'e' 'decrypt' 'd'.split():
            return mode
        else:
            print('Enter either "encrypt" or "e" or "decrypt" or "d".')
</code></pre>
| 0 | 2016-07-28T08:52:31Z | 38,632,416 | <p>Expanding on <a href="https://stackoverflow.com/questions/38631474/38631721#38631721">pogo's answer</a>, which is correct...</p>
<p>What surprised me (and apparently many others) is that the cluster of strings in the <code>if mode in ...:</code> line is <em>not</em> a syntax error.</p>
<pre><code>if mode in "encrypt" 'e' 'decrypt' 'd'.split():
</code></pre>
<p>Those strings are all compile-time constants, so <a href="https://docs.python.org/2/reference/lexical_analysis.html#string-literal-concatenation" rel="nofollow">string literal concatenation</a> glues them into <em>one</em> string before execution starts:</p>
<pre><code>>>> "encrypt" 'e' 'decrypt' 'd'
'encryptedecryptd'
</code></pre>
<p>The <code>split()</code> method is then called on that string, which by chance does not contain any whitespace. The return value is a list containing a single string:</p>
<pre><code>>>> "encrypt" 'e' 'decrypt' 'd'.split()
['encryptedecryptd']
</code></pre>
<p>The <code>in</code> operator won't complain about being given a string (<code>mode</code>) and a list of strings, but it will return <code>False</code> for every value of <code>mode</code> except one... which no one is ever likely to type:</p>
<pre><code>>>> 'encrypt' in ['encryptedecryptd']
False
>>> 'encryptedecryptd' in ['encryptedecryptd']
True
</code></pre>
| 0 | 2016-07-28T09:31:42Z | [
"python",
"nameerror"
] |
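Editor's note: for completeness, a fixed version of the membership check is sketched below. The helper name `parse_mode` is this editor's invention; the fix itself (one space-separated string split into the four options) follows directly from the answers above:

```python
VALID_MODES = 'encrypt e decrypt d'.split()  # ['encrypt', 'e', 'decrypt', 'd']

def parse_mode(text):
    # Return the normalized mode, or None if the input is not one of the options.
    mode = text.lower()
    return mode if mode in VALID_MODES else None

print(parse_mode('Encrypt'))           # matches a valid mode
print(parse_mode('encryptedecryptd'))  # the accidental concatenation no longer matches
```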
how to paging from top 99 rows in sqlalchemy? | 38,631,475 | <p>select * from
(select * from new_count_goods_details limit 99)
sub limit 10 offset 90;</p>
<p>How can I achieve this MySQL statement with SQLAlchemy?</p>
<p>this is my code:</p>
<pre><code> limit_subquery = q.filter(StyleList.add_time >= yesterday_18).\
    filter(StyleList.add_time <= today_18).\
    order_by(StyleList.rank_num.desc()).\
    limit(99).\
    subquery("limit_subquery")
q = limit_subquery.offset((p-1)*ps).limit(ps)
</code></pre>
<p>This code is wrong; the error is:
AttributeError: 'Alias' object has no attribute 'offset'</p>
| 0 | 2016-07-28T08:52:34Z | 38,632,661 | <p>You need to select from the subquery:</p>
<pre><code>q = session.query(limit_subquery).offset((p-1)*ps).limit(ps)
</code></pre>
| 0 | 2016-07-28T09:41:55Z | [
"python",
"sqlalchemy"
] |
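Editor's note: the nested-LIMIT paging the question asks about can be checked in plain SQL with the standard-library `sqlite3` module (the table name is from the question; the data here is made up). Only 9 rows come back for page 10, because the inner query caps the window at 99 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE new_count_goods_details (id INTEGER, rank_num INTEGER)")
conn.executemany("INSERT INTO new_count_goods_details VALUES (?, ?)",
                 [(i, i) for i in range(200)])

p, ps = 10, 10  # page 10, page size 10 -> offset 90 into the 99-row window
rows = conn.execute(
    "SELECT * FROM (SELECT * FROM new_count_goods_details "
    "ORDER BY rank_num DESC LIMIT 99) sub LIMIT ? OFFSET ?",
    (ps, (p - 1) * ps)).fetchall()

print(len(rows))  # 9: only indices 90..98 of the 99-row window exist
```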
Python third party Module global import | 38,631,493 | <p>I'm currently learning a bit of Python and I want to import the pyperclip third-party module into my Python file.</p>
<p>Yes, I already installed the pyperclip module with
<code>pip install pyperclip</code>.</p>
<p>If I create a file on my Desktop, I get an error which says
<code>
Traceback (most recent call last):
File "test.py", line 1, in <module>
import pyperclip
ImportError: No module named pyperclip
</code></p>
<p>However, if I put test.py in my Python folder, it runs.</p>
<p>The question now is: is there a way to make all my installed modules available globally? I just want to have my file e.g. on my Desktop and run it without import issues.</p>
<p>Thank you.</p>
<p>Greetings</p>
<p>Edit: I'm working on a Mac, maybe this leads to the problem</p>
| 0 | 2016-07-28T08:53:17Z | 38,631,601 | <p>You can do this:</p>
<p><code>pip3.5 install pyperclip</code></p>
<p>pyperclip is installed, but for a different Python than the one running your script.</p>
| 0 | 2016-07-28T08:58:32Z | [
"python",
"import",
"module"
] |
Python third party Module global import | 38,631,493 | <p>I'm currently learning a bit of Python and I want to import the pyperclip third-party module into my Python file.</p>
<p>Yes, I already installed the pyperclip module with
<code>pip install pyperclip</code>.</p>
<p>If I create a file on my Desktop, I get an error which says
<code>
Traceback (most recent call last):
File "test.py", line 1, in <module>
import pyperclip
ImportError: No module named pyperclip
</code></p>
<p>However, if I put test.py in my Python folder, it runs.</p>
<p>The question now is: is there a way to make all my installed modules available globally? I just want to have my file e.g. on my Desktop and run it without import issues.</p>
<p>Thank you.</p>
<p>Greetings</p>
<p>Edit: I'm working on a Mac, maybe this leads to the problem</p>
| 0 | 2016-07-28T08:53:17Z | 38,632,431 | <p>Found the problem. </p>
<p>The <code>pip install</code> automatically used <code>pip3.5 install</code>, <br/>
whereas <code>python test.py</code> didn't use <code>python3.5 test.py</code>.</p>
<p>Thank you @Bakurìu </p>
<p>Is there a way I can define <code>python3.5</code> as <code>python</code>?</p>
| 0 | 2016-07-28T09:32:12Z | [
"python",
"import",
"module"
] |
Python 3.4.4 - pip collection error | 38,631,525 | <p>I would really appreciate some help on the below query - I've been trying to get pip to work for the best of a day and really struggling. </p>
<p>Regardless of which module I try to install, I keep getting a "Could not find a version that satisfies the requirement openpyxl (from versions: )
No matching distribution found for openpyxl" error.</p>
<p>Please see a screenshot below of the error:</p>
<p><a href="http://i.stack.imgur.com/AsaOx.png" rel="nofollow">enter image description here</a></p>
<p>I've looked at around the website, including the following posts, to help with installing pip but still unsure as to why it isn't working.</p>
<p><a href="http://stackoverflow.com/questions/23708898/pip-is-not-recognized-as-an-internal-or-external-command">'pip' is not recognized as an internal or external command</a></p>
<p><a href="http://stackoverflow.com/questions/24627525/fatal-error-in-launcher-unable-to-create-process-using-c-program-files-x86">Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""</a></p>
<p>Any help would be massively appreciated!</p>
| 0 | 2016-07-28T08:54:47Z | 38,631,627 | <p>Have you tried checking your internet connection?
From your screenshot, it seems you cannot access the following URL:<br>
<a href="https://pypi.python.org/simple/openpyxl/" rel="nofollow">https://pypi.python.org/simple/openpyxl/</a></p>
| 0 | 2016-07-28T08:59:58Z | [
"python",
"pip"
] |
How can I create a word that does not contain the previous letter contained in the word? | 38,631,534 | <p>I think the crux of the problem lies in the first <code>elif</code>:</p>
<pre><code>import random as rnd
vowels="aeiou"
consonants="bcdfghlmnpqrstvz"
alphabet=vowels+consonants
vocabulary={}
index=0
word=""
positions=[]
while index<5:
    random_lenght=rnd.randint(2,5)
    while len(word)<random_lenght:
        random_letter=rnd.randint(0,len(alphabet)-1)
        if len(word)==0:
            word+=alphabet[random_letter]
        elif random_letter != positions[-1] and len(word)>0:
            if word[-1] not in vowels:
                word+=alphabet[random_letter]
            if word[-1] not in consonants:
                word+=alphabet[random_letter]
        elif random_letter == positions[-1]:
            break
        if random_letter not in positions:
            positions.append(random_letter)
    if word not in vocabulary:
        vocabulary[index]=word
        index+=1
        word=""
<p>The result doesn't satisfy me, as you can see:</p>
<pre><code>{0: 'in', 1: 'th', 2: 'cuu', 3: 'th', 4: 'vd'}
</code></pre>
<p>Any help would be appreciated.</p>
| -1 | 2016-07-28T08:55:14Z | 38,633,158 | <p>What you want should be something like this (based on your implementation) :</p>
<pre><code>import random as rnd
vowels="aeiou"
consonants="bcdfghlmnpqrstvz"
alphabet=vowels+consonants
vocabulary={}
index=0
word=""
positions=[]
while index<5:
    random_lenght=rnd.randint(2,5)
    while len(word)<random_lenght:
        random_letter=rnd.randint(0,len(alphabet)-1)
        if len(word) == 0:
            word+=alphabet[random_letter]
        elif random_letter != positions[-1] and len(word)>0:
            if word[-1] not in vowels and alphabet[random_letter] not in consonants:
                word+=alphabet[random_letter]
            elif word[-1] not in consonants and alphabet[random_letter] not in vowels:
                word+=alphabet[random_letter]
        if random_letter not in positions:
            positions.append(random_letter)
    if word not in vocabulary:
        vocabulary[index]=word
        index+=1
        word=""
</code></pre>
<p>And another version :</p>
<pre><code>import string
import random

isVowel = lambda letter: letter in "aeiou"

def generateWord(lengthMin, lengthMax):
    word = ""
    wordLength = random.randint(lengthMin, lengthMax)
    while len(word) != wordLength:
        letter = string.ascii_lowercase[random.randint(0,25)]
        if len(word) == 0 or isVowel(word[-1]) != isVowel(letter):
            word = word + letter
    return word

for i in range(0, 5):
    print(generateWord(2, 5))
</code></pre>
| 0 | 2016-07-28T10:02:27Z | [
"python",
"vocabulary"
] |
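Editor's note: the alternation property both answers aim for (no two neighbouring letters of the same kind) can be verified mechanically. The sketch below is a compact restatement of the second answer's `generateWord`, with property checks in mind instead of prints:

```python
import random

VOWELS = set("aeiou")

def generate_word(length_min, length_max):
    # Build a word whose letters alternate between vowels and consonants.
    word = ""
    target = random.randint(length_min, length_max)
    while len(word) != target:
        letter = random.choice("abcdefghijklmnopqrstuvwxyz")
        if not word or (word[-1] in VOWELS) != (letter in VOWELS):
            word += letter
    return word

random.seed(0)  # seeded only so repeated runs print the same sample
words = [generate_word(2, 5) for _ in range(20)]
print(words[:5])
```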
How can I turn csv file row column value into (row, column, value) in Python | 38,631,570 | <p><a href="http://i.stack.imgur.com/Yvgkg.png" rel="nofollow">for example, I read this 3x3 csv file.</a></p>
<pre><code> 01 02 03
01 | 11 | 22 | 33 |
02 | 44 | 55 | 66 |
03 | 77 | 88 | 99 |
</code></pre>
<p><a href="http://i.stack.imgur.com/JzCkm.png" rel="nofollow">Then ,I want to output a new textfile like this photo.</a></p>
<pre><code>→ (row, column, value)
→ (01, 01, 11)
→ (01, 02, 22)
→ (01, 03, 33)
→ (02, 01, 44)
<p>I want to use python by array or for loop ~~</p>
<p>like this ~</p>
<pre><code>for x in range(len(row))
</code></pre>
| 0 | 2016-07-28T08:56:57Z | 38,632,180 | <p>suppose you have example.csv file like this:</p>
<pre><code>11|22|33
44|55|66
77|88|99
</code></pre>
<hr>
<pre><code>with open("example.csv") as handler:
    for r,l in enumerate(handler):
        for col, e in enumerate(l.split('|')):
            print('row: %s, col %s, value: %s' % (r+1, col+1, e))
</code></pre>
| 0 | 2016-07-28T09:22:49Z | [
"python",
"python-2.7",
"csv",
"row"
] |
How can I turn csv file row column value into (row, column, value) in Python | 38,631,570 | <p><a href="http://i.stack.imgur.com/Yvgkg.png" rel="nofollow">for example, I read this 3x3 csv file.</a></p>
<pre><code> 01 02 03
01 | 11 | 22 | 33 |
02 | 44 | 55 | 66 |
03 | 77 | 88 | 99 |
</code></pre>
<p><a href="http://i.stack.imgur.com/JzCkm.png" rel="nofollow">Then ,I want to output a new textfile like this photo.</a></p>
<pre><code>→ (row, column, value)
→ (01, 01, 11)
→ (01, 02, 22)
→ (01, 03, 33)
→ (02, 01, 44)
</code></pre>
<p>I want to use python by array or for loop ~~</p>
<p>like this ~</p>
<pre><code>for x in range(len(row))
</code></pre>
| 0 | 2016-07-28T08:56:57Z | 38,634,200 | <pre><code>with open("t.csv","rb") as open_file:#Read the file
    my_file = open_file.read().decode('utf-8','ignore')

data = my_file.splitlines()
data = [r.split('|') for r in data]
row_len = len(data)
for i,j in enumerate(data):
    col_len = len(data[0])
    start_index = 0
    while start_index<col_len:
        print (str(i).ljust(2,'0'),str(start_index).ljust(2,'0'),str(data[i][start_index]))
        start_index+=1
| 0 | 2016-07-28T10:49:49Z | [
"python",
"python-2.7",
"csv",
"row"
] |
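Editor's note: both answers can be reduced to a pair of `enumerate` loops. The sketch below builds the `(row, column, value)` triples as zero-padded strings, matching the format in the question (the sample data is the 3x3 grid from the question, pipe-separated):

```python
data = "11|22|33\n44|55|66\n77|88|99"

triples = []
for r, line in enumerate(data.splitlines(), start=1):
    for c, value in enumerate(line.split("|"), start=1):
        triples.append(("%02d" % r, "%02d" % c, value))

for t in triples[:4]:
    print(t)  # ('01', '01', '11'), ('01', '02', '22'), ('01', '03', '33'), ('02', '01', '44')
```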
Can you extend SQLAlchemy Query class and use different ones in the same session? | 38,631,651 | <p>I am using SQL Alchemy ORM and have some classes/tables each of which may have some custom queries. Let's say, I want to add to table <code>Fruit</code> the filtering possibility called <code>with_seed</code> giving me only fruits with seeds, and to table <code>Cutlery</code> the filtering method <code>is_sharp</code> giving me only sharp cutlery. I want to define these filters as extensions to the <code>Query</code> object, and I want to use them in the same transaction:</p>
<pre><code>def delete_sharp_cutlery_and_seedy_fruits(session_factory):
    session = session_factory()
    session.query(Fruit).with_seed().delete(synchronize_session='fetch')
    session.query(Cutlery).is_sharp().delete(synchronize_session='fetch')
    session.commit()
</code></pre>
<p>Is this possible?</p>
<hr>
<p>This is related to the question <a href="http://stackoverflow.com/q/15936111/1545579">here</a>. But the solution there requires different sessions to be created for the different query classes.</p>
| 1 | 2016-07-28T09:01:12Z | 38,655,960 | <p>You can pass session to Query constructor</p>
<pre><code>CustomQuery(entities=[Fruit], session=session).with_seed().delete(synchronize_session='fetch')
</code></pre>
| 0 | 2016-07-29T10:01:32Z | [
"python",
"orm",
"sqlalchemy"
] |
while true loop doesn't work inside another in python | 38,631,668 | <p>I have a <code>while true</code> loop (code below) within another loop. I want to check if you clicked on a button and if so, change the cursor into an image that I have imported before. I tried to do that by hiding the cursor and let an image follow it. But when I run this, it hides the cursor draws the image where the it was, but doesn't move with the cursor.</p>
<pre><code>while True:
    for event in pygame.event.get():
        if event.type == MOUSEBUTTONUP:
            mousex, mousey = pygame.mouse.get_pos()
            if mousex > 100 and mousex < 200 and mousey > 50 and mousey < 100: # a button on my screen
                pygame.mouse.set_visible(False)
                while True:
                    mousex, mousey = pygame.mouse.get_pos()
                    DISPLAYSURF.blit(cursorImg, (mousex,mousey))
                    pygame.display.update()
</code></pre>
<p>Can anyone tell me what I am doing wrong please?</p>
| 2 | 2016-07-28T09:01:47Z | 38,633,478 | <p>Change your code to this:</p>
<pre><code>while True:
    for event in pygame.event.get():
        if event.type == MOUSEBUTTONUP:
            mousex, mousey = pygame.mouse.get_pos()
            if mousex > 100 and mousex < 200 and mousey > 50 and mousey < 100: # a button on my screen
                pygame.mouse.set_visible(False)
    mousex, mousey = pygame.mouse.get_pos()
    DISPLAYSURF.blit(cursorImg, (mousex,mousey))
    pygame.display.update()
</code></pre>
| 1 | 2016-07-28T10:15:47Z | [
"python",
"while-loop"
] |
Running a timer for function which will be called later (python) | 38,631,717 | <pre><code>def child_thread(i):
    global lock
    while True:
        try:
            lock.acquire()
            f1()
            f2()
            f3()
        finally:
            lock.release()

thread1 = threading.Thread(target=child_thread, args=(0,))
thread1.start()
</code></pre>
<p>here i need a timer for f2 function which will be called.were the thread should wait for certain time. I dont want to use sleep.</p>
| 0 | 2016-07-28T09:03:40Z | 38,631,841 | <p>The function will be call after 30.0 seconds</p>
<pre><code>from threading import Timer
def hello():
print "hello, world"
t = Timer(30.0, hello)
t.start()
</code></pre>
| 0 | 2016-07-28T09:08:30Z | [
"python"
] |
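Editor's note: the `Timer` answer can be made self-verifying by collecting results instead of printing — the callback fires once after the delay, and `join()` waits for it (0.1 s here instead of 30 s to keep the run short):

```python
import threading

fired = []

def hello():
    fired.append("hello, world")

t = threading.Timer(0.1, hello)  # schedule hello() to run once, 0.1 s from now
t.start()
t.join()                         # block until the timer thread has finished

print(fired)
```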
Python extract variables from an imported file | 38,632,067 | <p>I've 3 files. One is for defining the variables and other two contain the required modules.</p>
<p><strong>variable.py</strong></p>
<pre><code>my_var = ""
</code></pre>
<p><strong>test.py</strong></p>
<pre><code>import variable
def trial():
    variable.my_var = "Hello"
</code></pre>
<p><strong>main.py</strong> (not working)</p>
<pre><code>from variable import *
from test import trial
if __name__ == "__main__":
    trial()
    print my_var
</code></pre>
<p>And when I run <code>main.py</code>, it gives nothing. However, if I change <code>main.py</code> like this,</p>
<p><strong>main.py</strong> (working)</p>
<pre><code>import variable
from test import trial
if __name__ == "__main__":
    trial()
    print variable.my_var
</code></pre>
<p>And it gives me the expected output i.e. <code>Hello</code>.</p>
<p>That <code>variable.py</code>, in my case, contains more variables and I don't want to use <code>variable.<variable name></code> while accessing them. While modifying them, using <code>variable.<variable name></code> is not a problem as I'm gonna modify only once depending on the user's input and then access them across multiple files multiple times rest of the time.</p>
<p>Now my question is, is there a way to extract all the variables in <code>main.py</code> after they are modified by <code>test.py</code> and use them without the prefix <code>variable.</code>?</p>
| 1 | 2016-07-28T09:17:30Z | 38,633,208 | <p>What you want to achieve is not possible. <code>from MODULE import *</code> imports all the variables names and the values into your namespace, but not the variables themselves. The variables are allocated in your local namespace and changing their value is therefore not reflected onto their origin. However, changes on mutable objects are reflected because the values you import are basically the references to instances.</p>
<p>That is one of the many reasons why unqualified imports using the above construct are not recommended. Instead, you should question your program structure and the design decisions you took. Working with a huge amount of module-level variables can become cumbersome and very inconvenient in the long term.</p>
<p>Think about using a more structured approach to your application, for example by using classes.</p>
<p>Again I would like to point out and give credits to <a href="http://stackoverflow.com/q/4758562/6525140">this thread</a> which, in essence, addresses the same problem.</p>
| 2 | 2016-07-28T10:04:09Z | [
"python",
"import"
] |
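Editor's note: the point of the accepted explanation — that `from variable import *` copies values into the local namespace rather than aliasing the variables — can be shown without three files by building the module object in memory (`types.ModuleType` stands in for the real `variable.py`):

```python
import types

# Stand-in for variable.py
variable = types.ModuleType("variable")
variable.my_var = ""

# What "from variable import *" effectively does: copy the current value.
my_var = variable.my_var

# What trial() in test.py does: rebind the attribute on the module.
variable.my_var = "Hello"

print(my_var)           # still "" -- the unqualified copy never sees the update
print(variable.my_var)  # "Hello" -- qualified access does
```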
Hi, I am trying to add weekstart column | 38,632,209 | <p>My current table has a date column, and from that column I am able to find the weekday. Using to_timedelta I have created a week_start column, but it is not giving the correct date.
Here is the code:</p>
<pre><code>final_data['weekday'] = final_data['DateOfInvoice'].dt.weekday
final_data['Weekstart'] = final_data['DateOfInvoice'] - pd.to_timedelta(final_data['weekday'],unit='ns', box=True, coerce=True)
</code></pre>
<p>output is as:</p>
<pre><code> Date weekday weekstart
2016-07-23 5 2016-07-22
</code></pre>
| 1 | 2016-07-28T09:23:57Z | 38,632,260 | <p>IIUC you can construct a TimedeltaIndex and subtract from the other column:</p>
<pre><code>In [152]:
df['weekstart'] = df['Date'] - pd.TimedeltaIndex(df['weekday'], unit='D')
df
Out[152]:
Date weekday weekstart
0 2016-07-23 5 2016-07-18
</code></pre>
<p>in fact the weekday column is unnecessary:</p>
<pre><code>In [153]:
df['weekstart'] = df['Date'] - pd.TimedeltaIndex(df['Date'].dt.dayofweek, unit='D')
df
Out[153]:
Date weekday weekstart
0 2016-07-23 5 2016-07-18
</code></pre>
| 1 | 2016-07-28T09:25:38Z | [
"python",
"pandas"
] |
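Editor's note: the fix in the answer — subtract the day-of-week as days, not nanoseconds — has a plain-`datetime` analogue that makes the arithmetic easy to check (2016-07-23 is a Saturday, weekday 5, so the week starts on Monday 2016-07-18):

```python
from datetime import date, timedelta

def week_start(d):
    # Monday of the week containing d: subtract weekday() *days* (0 = Monday).
    return d - timedelta(days=d.weekday())

print(week_start(date(2016, 7, 23)))  # 2016-07-18
```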
How to set parameters of the Adadelta Algorithm in Tensorflow correctly? | 38,632,536 | <p>I've been using Tensorflow for regression purposes.
My neural net is very small with 10 input neurons, 12 hidden neurons in a single layer and 5 output neurons.</p>
<ul>
<li>activation function is relu</li>
<li>cost is square distance between output and real value</li>
<li>my neural net trains correctly with other optimizers such as GradientDescent, Adam, Adagrad.</li>
</ul>
<p>However when I try to use Adadelta, the neural net simply won't train. Variables stay the same at every step.</p>
<p>I have tried with every initial learning_rate possible (from 1.0e-6 to 10) and with different weights initialization : it does always the same.</p>
<p>Does anyone have a slight idea of what is going on ?</p>
<p>Thanks so much</p>
| 1 | 2016-07-28T09:36:13Z | 38,636,092 | <h3>Short answer: don't use Adadelta</h3>
<p>Very few people use it today, you should instead stick to:</p>
<ul>
<li><code>tf.train.MomentumOptimizer</code> with <code>0.9</code> momentum is very standard and works well. The drawback is that you have to find yourself the best learning rate.</li>
<li><code>tf.train.RMSPropOptimizer</code>: the results are less dependent on a good learning rate. This algorithm is <strong>very similar to Adadelta</strong>, but performs better in my opinion.</li>
</ul>
<p>If you really want to use Adadelta, use the parameters from the paper: <code>learning_rate=1., rho=0.95, epsilon=1e-6</code>. A bigger <code>epsilon</code> will help at the start, but be prepared to wait a bit longer than with other optimizers to see convergence.</p>
<p>Note that in the paper, they don't even use a learning rate, which is the same as keeping it equal to <code>1</code>.</p>
<hr>
<h3>Long answer</h3>
<p>Adadelta has a very slow start. The full algorithm from the <a href="http://www.matthewzeiler.com/pubs/googleTR2012/googleTR2012.pdf" rel="nofollow">paper</a> is:</p>
<p><a href="http://i.stack.imgur.com/aojCe.png" rel="nofollow"><img src="http://i.stack.imgur.com/aojCe.png" alt="Adadelta"></a></p>
<p>The issue is that they accumulate the square of the updates.</p>
<ul>
<li>At step 0, the running average of these updates is zero, so the first update will be very small.</li>
<li>As the first update is very small, the running average of the updates will be very small at the beginning, which is kind of a vicious circle at the beginning</li>
</ul>
<p>I think Adadelta performs better with bigger networks than yours, and after some iterations it should equal the performance of RMSProp or Adam.</p>
<hr>
<p>Here is my code to play a bit with the Adadelta optimizer:</p>
<pre><code>import tensorflow as tf
v = tf.Variable(10.)
loss = v * v
optimizer = tf.train.AdadeltaOptimizer(1., 0.95, 1e-6)
train_op = optimizer.minimize(loss)
accum = optimizer.get_slot(v, "accum") # accumulator of the square gradients
accum_update = optimizer.get_slot(v, "accum_update") # accumulator of the square updates
sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(100):
    sess.run(train_op)
    print "%.3f \t %.3f \t %.6f" % tuple(sess.run([v, accum, accum_update]))
</code></pre>
<p>The first 10 lines:</p>
<pre><code> v accum accum_update
9.994 20.000 0.000001
9.988 38.975 0.000002
9.983 56.979 0.000003
9.978 74.061 0.000004
9.973 90.270 0.000005
9.968 105.648 0.000006
9.963 120.237 0.000006
9.958 134.077 0.000007
9.953 147.205 0.000008
9.948 159.658 0.000009
</code></pre>
| 1 | 2016-07-28T12:15:58Z | [
"python",
"neural-network",
"tensorflow"
] |
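Editor's note: the slow start described in the long answer can be reproduced without TensorFlow by coding the paper's update rule directly for the same toy problem (loss `v**2`, gradient `2v`, paper parameters `rho=0.95`, `eps=1e-6`). The structure follows Algorithm 1 of the Zeiler paper quoted above; exact values differ slightly from the TensorFlow log because of implementation details, but the first step is tiny and 100 steps barely move `v`:

```python
import math

def adadelta(v, steps, rho=0.95, eps=1e-6):
    # Minimize loss = v**2 with Zeiler's Adadelta update rule.
    Eg2 = 0.0   # running average of squared gradients
    Edx2 = 0.0  # running average of squared updates
    history = []
    for _ in range(steps):
        g = 2.0 * v                                          # gradient of v**2
        Eg2 = rho * Eg2 + (1 - rho) * g * g
        dx = -math.sqrt(Edx2 + eps) / math.sqrt(Eg2 + eps) * g
        Edx2 = rho * Edx2 + (1 - rho) * dx * dx
        v += dx
        history.append(v)
    return history

hist = adadelta(10.0, 100)
print(hist[0], hist[-1])  # first step barely moves; 100 steps later still close to 10
```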
Encode IP address using all printable characters in Python 2.7.x | 38,632,580 | <p>I would like to encode an IP address in as short a string as possible using all the printable characters. According to <a href="https://en.wikipedia.org/wiki/ASCII#Printable_characters" rel="nofollow">https://en.wikipedia.org/wiki/ASCII#Printable_characters</a> these are codes 20hex to 7Ehex.</p>
<p>For example:</p>
<pre><code>shorten("172.45.1.33") --> "^.1 9" maybe.
</code></pre>
<p>In order to make decoding easy I also need the length of the encoding always to be the same. I also would like to avoid using the space character in order to make parsing easier in the future.</p>
<blockquote>
<p>How can one do this?</p>
</blockquote>
<p>I am looking for a solution that works in Python 2.7.x.</p>
<hr>
<p>My attempt so far to modify Eloims's answer to work in Python 2:</p>
<p>First I installed the ipaddress backport for Python 2 (<a href="https://pypi.python.org/pypi/ipaddress" rel="nofollow">https://pypi.python.org/pypi/ipaddress</a>) .</p>
<pre><code>#This is needed because ipaddress expects character strings and not byte strings for textual IP address representations
from __future__ import unicode_literals
import ipaddress
import base64
#Taken from http://stackoverflow.com/a/20793663/2179021
def to_bytes(n, length, endianess='big'):
    h = '%x' % n
    s = ('0'*(len(h) % 2) + h).zfill(length*2).decode('hex')
    return s if endianess == 'big' else s[::-1]

def encode(ip):
    ip_as_integer = int(ipaddress.IPv4Address(ip))
    ip_as_bytes = to_bytes(ip_as_integer, 4, endianess="big")
    ip_base85 = base64.a85encode(ip_as_bytes)
    return ip_base85

print(encode("192.168.0.1"))
</code></pre>
<p>This now fails because base64 doesn't have an attribute 'a85encode'.</p>
| 4 | 2016-07-28T09:38:18Z | 38,633,276 | <p>An IP stored in binary is 4 bytes.</p>
<p>You can encode it in 5 printable ASCII characters using Base85.</p>
<p>Using more printable characters won't be able to shorten the resulting string more than that.</p>
<pre><code>import ipaddress
import base64
def encode(ip):
    ip_as_integer = int(ipaddress.IPv4Address(ip))
    ip_as_bytes = ip_as_integer.to_bytes(4, byteorder="big")
    ip_base85 = base64.a85encode(ip_as_bytes)
    return ip_base85
print(encode("192.168.0.1"))
</code></pre>
| 5 | 2016-07-28T10:06:54Z | [
"python"
] |
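Editor's note: on Python 3 the accepted approach round-trips with only the standard library, and the 5-character length claim can be confirmed directly (`IPv4Address.packed` replaces the manual byte conversion; on Python 2 a backport or manual packing is needed, as the question discusses):

```python
import base64
import ipaddress

def encode(ip):
    packed = ipaddress.IPv4Address(ip).packed  # 4 bytes, big-endian
    return base64.a85encode(packed)            # 4 bytes -> 5 ASCII characters

def decode(token):
    return str(ipaddress.IPv4Address(base64.a85decode(token)))

token = encode("192.168.0.1")
print(token, len(token))
print(decode(token))  # 192.168.0.1
```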
Encode IP address using all printable characters in Python 2.7.x | 38,632,580 | <p>I would like to encode an IP address in as short a string as possible using all the printable characters. According to <a href="https://en.wikipedia.org/wiki/ASCII#Printable_characters" rel="nofollow">https://en.wikipedia.org/wiki/ASCII#Printable_characters</a> these are codes 20hex to 7Ehex.</p>
<p>For example:</p>
<pre><code>shorten("172.45.1.33") --> "^.1 9" maybe.
</code></pre>
<p>In order to make decoding easy I also need the length of the encoding always to be the same. I also would like to avoid using the space character in order to make parsing easier in the future.</p>
<blockquote>
<p>How can one do this?</p>
</blockquote>
<p>I am looking for a solution that works in Python 2.7.x.</p>
<hr>
<p>My attempt so far to modify Eloims's answer to work in Python 2:</p>
<p>First I installed the ipaddress backport for Python 2 (<a href="https://pypi.python.org/pypi/ipaddress" rel="nofollow">https://pypi.python.org/pypi/ipaddress</a>) .</p>
<pre><code>#This is needed because ipaddress expects character strings and not byte strings for textual IP address representations
from __future__ import unicode_literals
import ipaddress
import base64
#Taken from http://stackoverflow.com/a/20793663/2179021
def to_bytes(n, length, endianess='big'):
    h = '%x' % n
    s = ('0'*(len(h) % 2) + h).zfill(length*2).decode('hex')
    return s if endianess == 'big' else s[::-1]

def encode(ip):
    ip_as_integer = int(ipaddress.IPv4Address(ip))
    ip_as_bytes = to_bytes(ip_as_integer, 4, endianess="big")
    ip_base85 = base64.a85encode(ip_as_bytes)
    return ip_base85

print(encode("192.168.0.1"))
</code></pre>
<p>This now fails because base64 doesn't have an attribute 'a85encode'.</p>
| 4 | 2016-07-28T09:38:18Z | 39,681,744 | <p>I found this question looking for a way to use base85/ascii85 on python 2. Eventually I discovered a couple of projects available to install via pypi. I settled on one called <code>hackercodecs</code> because the project is specific to encoding/decoding whereas the others I found just offered the implementation as a byproduct of necessity</p>
<pre><code>from __future__ import unicode_literals
import ipaddress
from hackercodecs import ascii85_encode
def encode(ip):
    return ascii85_encode(ipaddress.ip_address(ip).packed)[0]
print(encode("192.168.0.1"))
</code></pre>
<hr>
<ul>
<li><a href="https://pypi.python.org/pypi/hackercodecs" rel="nofollow">https://pypi.python.org/pypi/hackercodecs</a></li>
<li><a href="https://github.com/jdukes/hackercodecs" rel="nofollow">https://github.com/jdukes/hackercodecs</a></li>
</ul>
| 1 | 2016-09-24T23:00:56Z | [
"python"
] |
Can I run multiple threads in a single heroku (python) dyno? | 38,632,621 | <p>Does the <code>threading</code> module work when running a single dyno on heroku?
eg:</p>
<pre><code>import threading
import time
import random
def foo(x, s):
    time.sleep(s)
    print ("%s %s %s" % (threading.current_thread(), x, s))

for x in range(4):
    threading.Thread(target=foo, args=(x, random.random())).start()
</code></pre>
<p>should return something like...</p>
<pre><code>$ python3 mythread.py
<Thread(Thread-3, started 123145318068224)> 2 0.27166873449907303
<Thread(Thread-4, started 123145323323392)> 3 0.5510182055055494
<Thread(Thread-1, started 123145307557888)> 0 0.642366815814484
<Thread(Thread-2, started 123145312813056)> 1 0.8985126103340428
</code></pre>
<p>Does it?</p>
| 1 | 2016-07-28T09:40:06Z | 38,729,726 | <p>Yes. This works fine =) Just tested on the latest Python 3 release. You can easily test this on Heroku yourself.</p>
<p>Heroku dynos use virtual CPU cores, but threading still works fine.</p>
<p><strong>EDIT</strong>: Here's my Heroku logs</p>
<pre><code>2016-08-02T20:18:35.040230+00:00 heroku[test.1]: State changed from starting to up
2016-08-02T20:18:36.871061+00:00 app[test.1]: <Thread(Thread-3, started 140472762279680)> 2 0.10491314677740204
2016-08-02T20:18:36.969173+00:00 app[test.1]: <Thread(Thread-1, started 140472795842304)> 0 0.2034461123977389
2016-08-02T20:18:37.117934+00:00 app[test.1]: <Thread(Thread-2, started 140472779060992)> 1 0.35186381754517004
2016-08-02T20:18:37.476239+00:00 app[test.1]: <Thread(Thread-4, started 140472542557952)> 3 0.7093646481085698
</code></pre>
| 1 | 2016-08-02T20:19:16Z | [
"python",
"multithreading",
"heroku"
] |
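Editor's note: the script from the question can be made self-verifying by joining the threads and collecting results instead of only printing — this is exactly the kind of code that runs fine inside a single dyno (sleeps shortened here so the check is quick):

```python
import random
import threading
import time

results = []
lock = threading.Lock()

def foo(x, s):
    time.sleep(s)
    with lock:  # guard the shared list across threads
        results.append((threading.current_thread().name, x, s))

threads = [threading.Thread(target=foo, args=(x, random.random() * 0.1))
           for x in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```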
Parsing a json nested dictionary in python | 38,632,687 | <p>I need to reach the value "y" of "B" in result.</p>
<pre><code>{
"Response": {
"Result": [2]
0: {
"A": "x"
"B": "y"
"C": "z"
}
1: {
"A": "d"
"B": "e"
"C": "f"
"D": "g"
}
}
}
</code></pre>
<p>my attempt <strong>['Response']['Result'][0]['B']</strong> produces the given error</p>
<blockquote>
<p>IndexError: list index out of range</p>
</blockquote>
<p>Any help will be appreciated. Thanks. </p>
| -2 | 2016-07-28T09:43:02Z | 38,632,782 | <p>The key <code>0</code> is not under <code>"Result"</code> you should use <code>['Response'][0]['B']</code></p>
| 2 | 2016-07-28T09:47:34Z | [
"python",
"json"
] |
Pandas: selecting multiple groups for inset plot | 38,632,724 | <p>I have a group data frame (<code>grouped_df</code>) used for plotting in the following way:</p>
<pre><code>grouped_df[['col1','col2','col3']].sum().plot(kind='bar')
</code></pre>
<p>resulting in the expected plot, which contains a group-wise sum for all three columns. However, for some of the groups these sums are very small compared to the rest and hence not easy to display in the same bar plot (see image below). </p>
<p>I want to have an inset plot for these groups. Trying,</p>
<pre><code>grouped_df[['col1','col2','col3']].sum() < "cut-off"
</code></pre>
<p>returns a boolean "list" of these groups but I cannot use any further for slicing/selection the a subset of groups of the data frame.</p>
<p>Of course, I could generate two lists of groups and then loop through the <code>grouped_df</code> but I do not think this is really a bright solution to the problem.</p>
<p><a href="http://i.stack.imgur.com/ccuqt.png" rel="nofollow"><img src="http://i.stack.imgur.com/ccuqt.png" alt="Example"></a></p>
<p>For clarity and consistence I provide a sample data frame which would be grouped by <code>grpcol</code>:</p>
<pre><code>grpcol col1 col2 col3 comment
A 0.0505 0.0134 0.0534 foo
B 0.0505 0.0134 0.2034 bar
A 0.0505 0.0134 0.0134 bar
C 0.0505 0.0134 0.0331 None
D 0.0505 0.0134 0.0342 foo
E 0.0505 0.0134 0.2134 baz
F 0.0505 0.0134 0.0302 baz
D 0.0302 0.0134 0.2134 foo
D 0.0204 0.0134 0.0400 foo
G 0.0505 0.0134 0.2200 foo
H 0.0505 0.0134 0.1734 None
H 0.0505 0.0134 0.0073 None
</code></pre>
| 0 | 2016-07-28T09:45:03Z | 38,633,133 | <p>Is this what you are looking for?</p>
<pre><code>def apply_cut_off(x1, x2, x3, cut_off):
    # Drop the group as soon as any of the three sums falls below the cut-off
    if x1 < cut_off or x2 < cut_off or x3 < cut_off:
        return False
    return True

grouped_sum = grouped_df[['col1', 'col2', 'col3']].sum()
cutoff_df = grouped_sum[grouped_sum.apply(lambda x: apply_cut_off(x['col1'], x['col2'], x['col3'], YOUR_CUT_OFF), axis=1)]
</code></pre>
<p>This would return a data frame containing only the groups whose three sums are all at or above the cut-off (any group with a sum below it is filtered out), and then you can do whatever you want with it.</p>
<p>Maybe I didn't get the requirement.</p>
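<p>As an alternative sketch that avoids the helper function (assuming the cut-off is a single scalar and a group belongs in the inset when all of its sums fall below it), the boolean frame from the comparison can be collapsed with <code>.all(axis=1)</code> and used directly as a mask:</p>

```python
import pandas as pd

# Toy stand-in for the grouped data; the values are made up for illustration
df = pd.DataFrame({
    "grpcol": ["A", "B", "A", "C", "C"],
    "col1":   [0.06, 0.05, 0.06, 0.01, 0.02],
    "col2":   [0.01, 0.01, 0.01, 0.01, 0.01],
    "col3":   [0.05, 0.20, 0.01, 0.03, 0.02],
})
grouped_sum = df.groupby("grpcol")[["col1", "col2", "col3"]].sum()

cut_off = 0.1                               # hypothetical threshold
mask = (grouped_sum < cut_off).all(axis=1)  # True for groups that belong in the inset
small = grouped_sum[mask]                   # plot these on the inset axes
large = grouped_sum[~mask]                  # plot these on the main axes
```

<p>Each subset can then be passed to its own <code>.plot(kind='bar', ax=...)</code> call, one for the main axes and one for the inset.</p>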
| 2 | 2016-07-28T10:01:35Z | [
"python",
"pandas"
] |
XlsxWriter: add color to cells | 38,632,753 | <p>I am trying to write a dataframe to xlsx and apply colors to it.
I use</p>
<pre><code>worksheet.conditional_format('A1:C1', {'type': '3_color_scale'})
</code></pre>
<p>But it does not apply any color to the cells, and I want a single color for them.
I saw <code>cell_format.set_font_color('#FF0000')</code>,
but that does not specify which cells to format.</p>
<pre><code>sex = pd.concat([df2[["All"]],df3], axis=1)
excel_file = 'example.xlsx'
sheet_name = 'Sheet1'
writer = pd.ExcelWriter(excel_file, engine='xlsxwriter')
sex.to_excel(writer, sheet_name=sheet_name, startrow=1)
workbook = writer.book
worksheet = writer.sheets[sheet_name]
format = workbook.add_format()
format.set_pattern(1)
format.set_bg_color('gray')
worksheet.write('A1:C1', 'Ray', format)
writer.save()
</code></pre>
<p>I need to give color to <code>A1:C1</code>, but <code>write()</code> also requires a value (here <code>'Ray'</code>) for the cell. How can I paint several cells of my df?</p>
| 1 | 2016-07-28T09:46:11Z | 38,633,106 | <p>The problem is that <code>worksheet.write('A1:C1', 'Ray', format)</code> writes only a single cell.
A possible solution to write more cells in a row is to use <code>write_row()</code>.</p>
<pre><code>worksheet.write_row("A1:C1", ['Ray','Ray2','Ray3'], format)
</code></pre>
<p>Remember that <strong>write_row()</strong> takes a list of strings to write into the cells.</p>
<p>If you use <code>worksheet.write_row("A1:C1", 'Ray', format)</code>, you get <strong>R</strong> in the first cell, <strong>a</strong> in the second and <strong>y</strong> in the third, because the string is iterated character by character.</p>
| 1 | 2016-07-28T10:00:27Z | [
"python",
"pandas",
"xlsxwriter"
] |
Python 3: PyQt: Make checkbox disabled + not grayed out + display tooltip | 38,632,825 | <p>The only way I found to do this is <a href="http://stackoverflow.com/questions/35190259/how-to-make-qcheckbox-readonly-but-not-grayed-out">here: How to make QCheckBox readonly, but not grayed-out</a>. This, however, disables mouse interactions with the control. But I need the tooltip to be displayed when mouse is over the control. How can I achieve this?</p>
| 0 | 2016-07-28T09:49:04Z | 38,667,489 | <pre><code># If this is not the answer you were expecting, sorry.
self.checkBox = QtGui.QCheckBox()
self.checkBox.setEnabled (False)
self.checkBox.setToolTip ('my checkBox')
</code></pre>
| 0 | 2016-07-29T21:06:22Z | [
"python",
"checkbox",
"pyqt",
"pyqt4",
"disabled-control"
] |
Python 3: PyQt: Make checkbox disabled + not grayed out + display tooltip | 38,632,825 | <p>The only way I found to do this is <a href="http://stackoverflow.com/questions/35190259/how-to-make-qcheckbox-readonly-but-not-grayed-out">here: How to make QCheckBox readonly, but not grayed-out</a>. This, however, disables mouse interactions with the control. But I need the tooltip to be displayed when mouse is over the control. How can I achieve this?</p>
| 0 | 2016-07-28T09:49:04Z | 38,752,619 | <p>If I've understood correctly, this is what you'd be asking for, a disabled checkbox showing tooltips:</p>
<pre><code> import sys
from PyQt4 import QtGui, QtCore
class Example(QtGui.QWidget):
def __init__(self):
super(Example, self).__init__()
self.initUI()
def initUI(self):
self.cb = QtGui.QCheckBox('Disabled CheckBox showing tooltips', self)
self.cb.move(20, 20)
self.cb.toggle()
# self.cb.setEnabled(False)
# self.cb.setStyleSheet("color: black")
# self.cb.setAttribute(QtCore.Qt.WA_AlwaysShowToolTips)
self.cb.setToolTip ('my checkBox')
self.cb.toggled.connect(self.prevent_toggle)
self.setGeometry(300, 300, 250, 50)
self.setWindowTitle('QtGui.QCheckBox')
self.show()
def prevent_toggle(self):
self.cb.setChecked(QtCore.Qt.Checked)
def main():
app = QtGui.QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 0 | 2016-08-03T20:01:39Z | [
"python",
"checkbox",
"pyqt",
"pyqt4",
"disabled-control"
] |
Shodan. Get all open ports for a net | 38,632,844 | <p>I want to get all the open ports for a network with Shodan (I know I can use <code>nmap</code> but I want to carry this out with Shodan).</p>
<p>The problem is that the website only shows the "TOP Services", and I would like to be given all the services.</p>
<p>For example, for this net: 195.53.102.0/24 I am given the following ports:</p>
<pre><code>TOP SERVICES
HTTP 15
HTTPS 2
DNS 2
FTP 2
IKE-NAT-T 1
</code></pre>
<p>But if I scan this net: 195.53.0.0/16, I am given these ports:</p>
<pre><code>TOP SERVICES
HTTP 1,012
HTTPS 794
179 290
IKE 238
IKE-NAT-T 227
</code></pre>
<p>So I am missing services like <code>dns</code> and <code>ftp</code>.</p>
<p>I am trying with the API, from python:</p>
<pre><code>import shodan
SHODAN_API_KEY = "XXXXXXXXXXXXXXXXXXXXXXx"
api = shodan.Shodan(SHODAN_API_KEY)
# Wrap the request in a try/ except block to catch errors
try:
# Search Shodan
results = api.search('net:195.53.102.0/24')
for service in results['matches']:
print service['ip_str']
print service['port']
except shodan.APIError, e:
print 'Error: %s' % e
</code></pre>
<p>And this is the results I get:</p>
<pre><code>195.53.102.193
80
195.53.102.138
80
195.53.102.148
80
195.53.102.136
80
195.53.102.157
80
195.53.102.226
443
195.53.102.66
500
195.53.102.133
80
195.53.102.142
80
195.53.102.66
4500
195.53.102.141
80
195.53.102.131
21
195.53.102.152
53
195.53.102.153
21
195.53.102.209
80
195.53.102.132
53
195.53.102.226
80
195.53.102.147
80
195.53.102.142
443
195.53.102.178
80
195.53.102.135
143
195.53.102.146
80
195.53.102.143
80
195.53.102.144
80
</code></pre>
<p>Just 1 port per IP, and for example, this IP: 195.53.102.131 has ports 21, 80 and 443 open, my results say just:</p>
<pre><code>195.53.102.131
21
</code></pre>
<p>Instead of:</p>
<pre><code>195.53.102.131
21
80
443
</code></pre>
<p>So I want either to, from the website, be given all the ports/services instead of just the <code>TOP SERVICES</code> or, from the API, being able to get all the ports per IP, not just 1. Or if anyone has a better solution, I would like to hear it too.</p>
<p>As I said, I would like to perform it with Shodan, not nmap. Thank you in advance.</p>
| 0 | 2016-07-28T09:49:58Z | 38,667,275 | <p>When you use <code>api.search()</code>, Shodan searches for service banners, and a service banner can only have one port. </p>
<p>So, if you want to return all the ports that a host can have, you should use <code>api.host()</code>. </p>
<p>For example, </p>
<pre><code>import shodan
SHODAN_API_KEY = "XXXXXXXXXXXXXXXXXXXXXXx"
api = shodan.Shodan(SHODAN_API_KEY)
# Wrap the request in a try/ except block to catch errors
try:
# Search Shodan
results = api.search('net:195.53.102.0/24')
for service in results['matches']:
hostinfo = api.host(service['ip_str'])
print service['ip_str']
        # api.host() returns the full host record; its 'ports' key should
        # list every open port on that IP
        for port in hostinfo['ports']:
            print port
except shodan.APIError, e:
print 'Error: %s' % e
</code></pre>
| 0 | 2016-07-29T20:50:56Z | [
"python",
"cidr",
"port-scanning",
"shodan"
] |
Sublime Text plugin - how to find all regions in selection | 38,632,861 | <p>How can I find all regions in the selection (and the region type too)?
If we call this method:</p>
<pre><code>def chk_links(self,vspace):
url_regions = vspace.find_all("https?://[^\"'\s]+")
i=0
for region in url_regions:
cl = vspace.substr(region)
code = self.get_response(cl)
vspace.add_regions('url'+str(i), [region], "mark", "Packages/User/icons/"+str(code)+".png")
i = i+1
return i
</code></pre>
<p>in view context, e.g.: </p>
<pre><code>chk_links(self.view)
</code></pre>
<p>all works fine, but in this way:</p>
<pre><code>chk_links(self.view.sel()[0])
</code></pre>
<p>I get error: AttributeError: 'Region' object has no attribute 'find_all'</p>
<p>Full code of plugin you can find <a href="https://github.com/ink-ru/sublime-triks/blob/master/linkcheck.py" rel="nofollow">here</a></p>
<p><a href="https://www.sublimetext.com/docs/3/api_reference.html#sublime.View" rel="nofollow">Sublime "View" method documentation</a></p>
| 0 | 2016-07-28T09:50:56Z | 38,644,598 | <p>The <code>Selection</code> class (returned by <code>View.sel()</code>) is essentially just a list of <code>Region</code> objects that represent the current selection. A <code>Region</code> can be empty, so the list always contains at least one region, which has a length of 0 when nothing is selected.</p>
<p>The only <a href="http://www.sublimetext.com/docs/3/api_reference.html#sublime.Selection" rel="nofollow">methods available on the <code>Selection</code> class</a> are to modify and query its extents. Similar <a href="http://www.sublimetext.com/docs/3/api_reference.html#sublime.Region" rel="nofollow">methods are available on the <code>Region</code> class</a>.</p>
<p>What you <em>can</em> do is instead find all of the interesting regions as your code is currently doing, and then as you're iterating them to perform your check, see if they are contained in the selection or not.</p>
<p>Here's a stripped down version of your example above to illustrate this (some of your logic has been removed for clarity). First the entire list of URL's is collected, and then as the list is iterated each region is only considered if there is <strong><em>NO</em></strong> selection or if <strong><em>THERE IS</em></strong> a selection <strong><em>AND</em></strong> the URL region is contained in the selection bounds.</p>
<pre><code>import sublime, sublime_plugin
class ExampleCommand(sublime_plugin.TextCommand):
# Check all links in view
def check_links(self, view):
# The view selection list always has at least one item; if its length is
# 0, then there is no selection; otherwise one or more regions are
# selected.
has_selection = len(view.sel()[0]) > 0
# Find all URL's in the view
url_regions = view.find_all ("https?://[^\"'\s]+")
i = 0
for region in url_regions:
# Skip any URL regions that aren't contained in the selection.
if has_selection and not view.sel ().contains (region):
continue
# Region is either in the selection or there is no selection; process
      # check the URL and add a region marker for it
view.add_regions ('url'+str(i), [region], "mark", "Packages/Default/Icon.png")
i = i + 1
def run(self, edit):
if self.view.is_read_only() or self.view.size () == 0:
return
self.check_links (self.view)
</code></pre>
| 1 | 2016-07-28T18:55:04Z | [
"python",
"sublimetext2",
"sublimetext3",
"sublimetext",
"sublime-text-plugin"
] |
Python function with default argument inside loop | 38,632,891 | <pre><code>for i in range(5):
def test(i=i):
print(i)
test()
test()
test()
test()
test()
</code></pre>
<p>This prints 4 every time. Can someone help me understand this?</p>
| 2 | 2016-07-28T09:52:01Z | 38,632,942 | <p>You redefine the <code>test</code> 4 times:</p>
<p>same as:</p>
<pre><code>#define test
def test(i = 0):
print(i)
#redefine test
def test(i = 1):
print(i)
#redefine test
def test(i = 2):
print(i)
#redefine test
def test(i = 3):
print(i)
#redefine test
def test(i = 4):
print(i)
</code></pre>
<p>so you have only one <code>test</code>, the last one, whose default is <code>i=4</code>.</p>
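<p>If the intent was to keep one function per loop value, the usual fix is to store each function instead of rebinding the same name; the <code>i=i</code> default is evaluated at definition time, so each stored function keeps its own value (a sketch, not code from the question):</p>

```python
funcs = []
for i in range(5):
    def test(i=i):  # the default is evaluated now, freezing the current i
        return i
    funcs.append(test)

print([f() for f in funcs])  # -> [0, 1, 2, 3, 4]
```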
| 6 | 2016-07-28T09:54:09Z | [
"python",
"python-2.7"
] |
Python function with default argument inside loop | 38,632,891 | <pre><code>for i in range(5):
def test(i=i):
print(i)
test()
test()
test()
test()
test()
</code></pre>
<p>This prints 4 every time. Can someone help me understand this?</p>
| 2 | 2016-07-28T09:52:01Z | 38,632,965 | <p>The function <code>test</code> is redefined every iteration of the loop.</p>
<p>By the time the loop is done, <code>test</code> is simply:</p>
<pre><code>def test(i=4):
print(i)
</code></pre>
| 3 | 2016-07-28T09:55:26Z | [
"python",
"python-2.7"
] |
Python function with default argument inside loop | 38,632,891 | <pre><code>for i in range(5):
def test(i=i):
print(i)
test()
test()
test()
test()
test()
</code></pre>
<p>This prints 4 every time. Can someone help me understand this?</p>
| 2 | 2016-07-28T09:52:01Z | 38,633,127 | <p>First your script completes the for loop; by the end of it, the value of <code>i</code> is 4.</p>
<p>Then, however many times you call <code>test()</code>, it will print 4.</p>
<p>I have added a few prints to your code, so that you can understand the flow better:</p>
<pre><code>for i in range(5):
print(i)
def test(i=i):
print("test")
print(i)
test()
test()
test()
test()
test()
</code></pre>
<p>Output will be:</p>
<pre><code>0
1
2
3
4
test
4
test
4
test
4
test
4
test
4
</code></pre>
| 2 | 2016-07-28T10:01:10Z | [
"python",
"python-2.7"
] |
Can not load the json file of urls in python | 38,633,009 | <p>I tried to make a dict with Python. I try to retrieve data via the URL, but I get a JSON error. What is wrong? I use Python 2.7.6.</p>
<pre><code>import json
import urllib
json_string = 'http://localhost/csv/taxo.json'
parsed_json = json.loads(json_string)
print(parsed_json['genus'])
</code></pre>
<p>This error</p>
<blockquote>
<p>Traceback (most recent call last):<br>
File "dic2.py", line 11, in <br>
parsed_json = json.loads(json_string)<br>
File "/usr/lib/python2.7/json/<strong>init</strong>.py", line 338, in loads<br>
return _default_decoder.decode(s)<br>
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode<br>
obj, end = self.raw_decode(s, idx=_w(s, 0).end())<br>
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode<br>
raise ValueError("No JSON object could be decoded")<br>
ValueError: No JSON object could be decoded<br></p>
</blockquote>
| -1 | 2016-07-28T09:57:06Z | 38,633,095 | <p><code>json_string</code> is not a <code>json</code> string. It is simply a URL...</p>
<p>You should get the <strong>content</strong> of this URL with one of the HTTP modules that are available for Python.</p>
<p>You should do it the other way around: fetch the content first, then parse it. The <code>requests</code> module can do the GET request and parse the response into a Python dictionary in one go (given that the response is valid JSON):</p>
<pre><code>import requests
my_dict = requests.get('http://localhost/csv/taxo.json').json()
</code></pre>
<p>If you want to run this code you will need to install the <code>requests</code> module.</p>
| 3 | 2016-07-28T10:00:11Z | [
"python",
"python-2.7"
] |
Can not load the json file of urls in python | 38,633,009 | <p>I tried to make a dict with Python. I try to retrieve data via the URL, but I get a JSON error. What is wrong? I use Python 2.7.6.</p>
<pre><code>import json
import urllib
json_string = 'http://localhost/csv/taxo.json'
parsed_json = json.loads(json_string)
print(parsed_json['genus'])
</code></pre>
<p>This error</p>
<blockquote>
<p>Traceback (most recent call last):<br>
File "dic2.py", line 11, in <br>
parsed_json = json.loads(json_string)<br>
File "/usr/lib/python2.7/json/<strong>init</strong>.py", line 338, in loads<br>
return _default_decoder.decode(s)<br>
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode<br>
obj, end = self.raw_decode(s, idx=_w(s, 0).end())<br>
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode<br>
raise ValueError("No JSON object could be decoded")<br>
ValueError: No JSON object could be decoded<br></p>
</blockquote>
| -1 | 2016-07-28T09:57:06Z | 38,633,173 | <p>First you need to get the content of the URL, for example with <strong>urllib</strong>; try something like this (note that it won't work on Python 3):</p>
<pre><code>import json
import urllib
json_url = 'http://localhost/csv/taxo.json'
parsed_json = json.load(urllib.urlopen(json_url ))
print(parsed_json['genus'])
</code></pre>
<p>You should remember to change <strong>json.loads</strong> to <strong>json.load</strong>, as the latter also executes the <strong>.read()</strong> method behind the scenes on the retrieved object, which is needed to correctly read the data.</p>
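<p>On Python 3, where <code>urllib.urlopen</code> no longer exists, the equivalent (a sketch, assuming the URL really serves valid JSON) lives in <code>urllib.request</code>:</p>

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    # urlopen returns a file-like response, which json.load can read directly
    with urlopen(url) as response:
        return json.load(response)

# parsed_json = fetch_json('http://localhost/csv/taxo.json')
# print(parsed_json['genus'])
```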
| 1 | 2016-07-28T10:02:59Z | [
"python",
"python-2.7"
] |
Can not load the json file of urls in python | 38,633,009 | <p>I tried to make a dict with Python. I try to retrieve data via the URL, but I get a JSON error. What is wrong? I use Python 2.7.6.</p>
<pre><code>import json
import urllib
json_string = 'http://localhost/csv/taxo.json'
parsed_json = json.loads(json_string)
print(parsed_json['genus'])
</code></pre>
<p>This error</p>
<blockquote>
<p>Traceback (most recent call last):<br>
File "dic2.py", line 11, in <br>
parsed_json = json.loads(json_string)<br>
File "/usr/lib/python2.7/json/<strong>init</strong>.py", line 338, in loads<br>
return _default_decoder.decode(s)<br>
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode<br>
obj, end = self.raw_decode(s, idx=_w(s, 0).end())<br>
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode<br>
raise ValueError("No JSON object could be decoded")<br>
ValueError: No JSON object could be decoded<br></p>
</blockquote>
| -1 | 2016-07-28T09:57:06Z | 38,633,416 | <p>I think you want to do this:</p>
<p>First download the webpage:</p>
<pre><code>from urllib import urlopen
response_object = urlopen('http://localhost/csv/taxo.json')
</code></pre>
<p>Then convert it to a string:</p>
<pre><code>response_string = response_object.read()
</code></pre>
<p>Finally, convert it to a dictionary:</p>
<pre><code>from json import loads
json_string = loads(response_string)
</code></pre>
| 0 | 2016-07-28T10:12:53Z | [
"python",
"python-2.7"
] |
Scripting in logstash | 38,633,063 | <p>Is it possible to do Python-like scripting in logstash? I can import the csv data into elasticsearch using logstash. But I need to use the update API instead of simply indexing all rows.</p>
<p>Here is my sample csv file...</p>
<pre><code>vi /tmp/head.txt
"Home","Home-66497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj","919359000000","HMSHOP","916265100000","2016-05-18 08:41:49"
"Home","Home-26497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1V1","919359000001","HMSHOP","916265100000","2016-05-18 18:41:49"
"Home","Home-36497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/SZj1","919359000001","HMSHOP","916265100000","2016-05-18 12:41:49"
"Home","Home-46497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1","919359000000","HMSHOP","916265100000","2016-05-18 14:41:49"
"Home","Home-56497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj1xc","919359000000","HMSHOP","916265100000","2016-05-18 16:41:49"
</code></pre>
<p>Here is logstash config file...</p>
<pre><code>vi logstash.conf
input {
file {
path => "/tmp/head.txt"
type => "csv"
start_position => beginning
}
}
filter {
csv {
columns => ["user", "messageid", "message", "destination", "code", "mobile", "mytimestamp"]
separator => ","
}
}
output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
index => "logstash-%{+YYYY.MM.dd}"
workers => 1
}
}
</code></pre>
<p>I have confirmed that the above configuration is working as expected and all 5 records are stored as 5 separate documents. </p>
<p>here is my docker command...</p>
<pre><code>docker run -d -v "/tmp/logstash.conf":/usr/local/logstash/config/logstash.conf -v /tmp/:/tmp/ logstash -f /usr/local/logstash/config/logstash.conf
</code></pre>
<hr>
<p>The problem is that I need to merge the documents based on destination number. The destination should be the ID of the document. There are some rows with the same destination. For e.g. _id: 919359000001 This document should have both the following records as nested objects.</p>
<pre><code>"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"
"user": "Home", "messageid" "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/SZj1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp": "2016-05-18 12:41:49"
</code></pre>
<p>Elasticsearch is correctly converting the csv data to json as shown above. What I need is to reformat the statement to take advantage of scripting using the update API.
The following code is working correctly.</p>
<pre><code>POST /test_index/doc/_bulk
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}] }}
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}] }}
</code></pre>
<p>How do I code in logstash to convert my csv data to look like the above?</p>
<hr>
<p><strong>Update</strong></p>
<p>I have Python code that works as expected. I would like to know how to modify this code to suit the "output" parameters suggested in the answer.
In the following example, df_json is a Python object that is nothing but a Python dataframe flattened to JSON.</p>
<pre><code>import copy
with open('myfile.txt', 'w') as f:
for doc1 in df_json:
import json
doc = mydict(doc1)
docnew = copy.deepcopy(doc)
del docnew['destination']
action = '{ "update": {"_id": %s }}\n' % doc['destination']
f.write(action)
entry = '{ "script" : { "inline": "ctx._source.parent += [\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]", "lang" : "groovy", "params" : %s}, "upsert": {"parent" : [%s ] }}\n' % (doc, docnew)
f.write(entry)
! curl -s -XPOST XXX.xx.xx.x:9200/test_index222/doc/_bulk --data-binary @myfile.txt; echo
</code></pre>
<hr>
<p><strong>Update 2</strong></p>
<p>I tried the following configuration and it is replacing (not updating as per script) documents.</p>
<pre><code>output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
document_id => "%{destination}"
index => "logstash3-%{+YYYY.MM.dd}"
workers => 1
script => "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]"
script_type => "inline"
script_lang => "groovy"
scripted_upsert => "true"
}
}
</code></pre>
<p>When I changed the action to "update", I get the following error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.07.29", "_type"=>"csv", "_id"=>"919359000000",
"status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script",
"caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run in line script
[ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]]
using lang [groovy]", "caused_by"=>{"type"=>"missing_property_exception", "reason"=>"No such property: user for class: fe1b423dc4966b0f0b511b732474637705bf3bb1"}}}}}, :level=>:warn}
</code></pre>
<hr>
<p><strong>Update 3</strong></p>
<p>As per Val's answer I added event and I get this error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.08.06", "_type"=>"csv", "_id"=>"%{destination}", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script", "caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run inline script [ctx._source.parent += ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]] using lang [groovy]", "caused_by"=>{"type"=>"null_pointer_exception", "reason"=>"Cannot execute null+{user=null, messageid=null, message=, code=null, mobile=null, mytimestamp=null}"}}}}}
</code></pre>
<p><strong>Update 4</strong></p>
<p>As per Val's updated answer I tried this...</p>
<pre><code>script => "ctx._source.parent = (ctx._source.parent ?: []) + ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]"
</code></pre>
<p>And got this error:</p>
<pre><code>{:timestamp=>"2016-08-12T09:40:48.869000+0000", :message=>"Pipeline main started"}
{:timestamp=>"2016-08-12T09:40:49.517000+0000", :message=>"Error parsing csv", :field=>"message", :source=>"", :exception=>#<NoMethodError: undefined method `each_index' for nil:NilClass>, :level=>:warn}
</code></pre>
<p>Only 2 records were added to the database.</p>
| 7 | 2016-07-28T09:59:00Z | 38,648,184 | <p><code>elasticsearch</code> output plugin supports script parameters:</p>
<pre><code>output {
elasticsearch {
action => "update"
hosts => ["172.17.0.1"]
index => "logstash-%{+YYYY.MM.dd}"
workers => 1
script => "<your script here>"
script_type => "inline"
# Set the language of the used script
# script_lang =>
# if enabled, script is in charge of creating non-existent document (scripted update)
# scripted_upsert => (default is false)
}
}
</code></pre>
| 1 | 2016-07-28T23:12:43Z | [
"python",
"elasticsearch",
"groovy",
"logstash"
] |
Scripting in logstash | 38,633,063 | <p>Is it possible to do Python-like scripting in logstash? I can import the csv data into elasticsearch using logstash. But I need to use the update API instead of simply indexing all rows.</p>
<p>Here is my sample csv file...</p>
<pre><code>vi /tmp/head.txt
"Home","Home-66497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj","919359000000","HMSHOP","916265100000","2016-05-18 08:41:49"
"Home","Home-26497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1V1","919359000001","HMSHOP","916265100000","2016-05-18 18:41:49"
"Home","Home-36497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/SZj1","919359000001","HMSHOP","916265100000","2016-05-18 12:41:49"
"Home","Home-46497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1","919359000000","HMSHOP","916265100000","2016-05-18 14:41:49"
"Home","Home-56497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj1xc","919359000000","HMSHOP","916265100000","2016-05-18 16:41:49"
</code></pre>
<p>Here is logstash config file...</p>
<pre><code>vi logstash.conf
input {
file {
path => "/tmp/head.txt"
type => "csv"
start_position => beginning
}
}
filter {
csv {
columns => ["user", "messageid", "message", "destination", "code", "mobile", "mytimestamp"]
separator => ","
}
}
output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
index => "logstash-%{+YYYY.MM.dd}"
workers => 1
}
}
</code></pre>
<p>I have confirmed that the above configuration is working as expected and all 5 records are stored as 5 separate documents. </p>
<p>here is my docker command...</p>
<pre><code>docker run -d -v "/tmp/logstash.conf":/usr/local/logstash/config/logstash.conf -v /tmp/:/tmp/ logstash -f /usr/local/logstash/config/logstash.conf
</code></pre>
<hr>
<p>The problem is that I need to merge the documents based on destination number. The destination should be the ID of the document. There are some rows with the same destination. For e.g. _id: 919359000001 This document should have both the following records as nested objects.</p>
<pre><code>"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"
"user": "Home", "messageid" "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/SZj1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp": "2016-05-18 12:41:49"
</code></pre>
<p>Elasticsearch is correctly converting the csv data to json as shown above. What I need is to reformat the statement to take advantage of scripting using the update API.
The following code is working correctly.</p>
<pre><code>POST /test_index/doc/_bulk
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}] }}
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}] }}
</code></pre>
<p>How do I code in logstash to convert my csv data to look like the above?</p>
<hr>
<p><strong>Update</strong></p>
<p>I have Python code that works as expected. I would like to know how to modify this code to suit the "output" parameters suggested in the answer.
In the following example, df_json is a Python object that is nothing but a Python dataframe flattened to JSON.</p>
<pre><code>import copy
with open('myfile.txt', 'w') as f:
for doc1 in df_json:
import json
doc = mydict(doc1)
docnew = copy.deepcopy(doc)
del docnew['destination']
action = '{ "update": {"_id": %s }}\n' % doc['destination']
f.write(action)
entry = '{ "script" : { "inline": "ctx._source.parent += [\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]", "lang" : "groovy", "params" : %s}, "upsert": {"parent" : [%s ] }}\n' % (doc, docnew)
f.write(entry)
! curl -s -XPOST XXX.xx.xx.x:9200/test_index222/doc/_bulk --data-binary @myfile.txt; echo
</code></pre>
<hr>
<p><strong>Update 2</strong></p>
<p>I tried the following configuration and it is replacing (not updating as per script) documents.</p>
<pre><code>output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
document_id => "%{destination}"
index => "logstash3-%{+YYYY.MM.dd}"
workers => 1
script => "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]"
script_type => "inline"
script_lang => "groovy"
scripted_upsert => "true"
}
}
</code></pre>
<p>When I changed the action to "update", I get the following error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.07.29", "_type"=>"csv", "_id"=>"919359000000",
"status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script",
"caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run in line script
[ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]]
using lang [groovy]", "caused_by"=>{"type"=>"missing_property_exception", "reason"=>"No such property: user for class: fe1b423dc4966b0f0b511b732474637705bf3bb1"}}}}}, :level=>:warn}
</code></pre>
<hr>
<p><strong>Update 3</strong></p>
<p>As per Val's answer I added event and I get this error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.08.06", "_type"=>"csv", "_id"=>"%{destination}", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script", "caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run inline script [ctx._source.parent += ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]] using lang [groovy]", "caused_by"=>{"type"=>"null_pointer_exception", "reason"=>"Cannot execute null+{user=null, messageid=null, message=, code=null, mobile=null, mytimestamp=null}"}}}}}
</code></pre>
<p><strong>Update 4</strong></p>
<p>As per Val's updated answer I tried this...</p>
<pre><code>script => "ctx._source.parent = (ctx._source.parent ?: []) + ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]"
</code></pre>
<p>And got this error:</p>
<pre><code>{:timestamp=>"2016-08-12T09:40:48.869000+0000", :message=>"Pipeline main started"}
{:timestamp=>"2016-08-12T09:40:49.517000+0000", :message=>"Error parsing csv", :field=>"message", :source=>"", :exception=>#<NoMethodError: undefined method `each_index' for nil:NilClass>, :level=>:warn}
</code></pre>
<p>Only 2 records were added to the database.</p>
| 7 | 2016-07-28T09:59:00Z | 38,800,490 | <p>The event is passed to the script in your output using the <code>event</code> variable name (by default, but you can change it using the <code>script_var_name</code> setting).</p>
<p>So the script in your output needs to account for it.</p>
<pre><code> script => "ctx._source.parent = (ctx._source.parent ?: []) + ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]"
</code></pre>
| 1 | 2016-08-06T04:17:30Z | [
"python",
"elasticsearch",
"groovy",
"logstash"
] |
Scripting in logstash | 38,633,063 | <p>Is it possible to do python-like scripting in logstash? I can import the csv data into elasticsearch using logstash. But I need to use the update API instead of simply indexing all rows.</p>
<p>Here is my sample csv file...</p>
<pre><code>vi /tmp/head.txt
"Home","Home-66497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj","919359000000","HMSHOP","916265100000","2016-05-18 08:41:49"
"Home","Home-26497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1V1","919359000001","HMSHOP","916265100000","2016-05-18 18:41:49"
"Home","Home-36497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/SZj1","919359000001","HMSHOP","916265100000","2016-05-18 12:41:49"
"Home","Home-46497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1","919359000000","HMSHOP","916265100000","2016-05-18 14:41:49"
"Home","Home-56497273a5a83c99","Spice Xlife 350, 3.5inch Android, bit.ly/1VSZj1xc","919359000000","HMSHOP","916265100000","2016-05-18 16:41:49"
</code></pre>
<p>Here is logstash config file...</p>
<pre><code>vi logstash.conf
input {
file {
path => "/tmp/head.txt"
type => "csv"
start_position => beginning
}
}
filter {
csv {
columns => ["user", "messageid", "message", "destination", "code", "mobile", "mytimestamp"]
separator => ","
}
}
output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
index => "logstash-%{+YYYY.MM.dd}"
workers => 1
}
}
</code></pre>
<p>I have confirmed that the above configuration is working as expected and all 5 records are stored as 5 separate documents. </p>
<p>here is my docker command...</p>
<pre><code>docker run -d -v "/tmp/logstash.conf":/usr/local/logstash/config/logstash.conf -v /tmp/:/tmp/ logstash -f /usr/local/logstash/config/logstash.conf
</code></pre>
<hr>
<p>The problem is that I need to merge the documents based on destination number. The destination should be the ID of the document. There are some rows with the same destination. For e.g. _id: 919359000001 This document should have both the following records as nested objects.</p>
<pre><code>"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"
"user": "Home", "messageid" "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/SZj1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp": "2016-05-18 12:41:49"
</code></pre>
<p>Elasticsearch is correctly converting the csv data to json as shown above. What I need is to reformat the statement to take advantage of scripting using the update API.
The following code is working correctly.</p>
<pre><code>POST /test_index/doc/_bulk
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-26497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V1", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 18:41:49"}] }}
{ "update" : { "_id" : "919359000001"} }
{ "script" : { "inline": "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]", "lang" : "groovy", "params" : {"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}}, "upsert": {"parent" : [{"user": "Home", "messageid": "Home-36497273a5a83c99", "message": "Spice Xlife 350, 3.5inch Android, bit.ly/1V13343", "code": "HMSHOP", "mobile": "916265100000", "mytimestamp" : "2016-05-18 12:41:49"}] }}
</code></pre>
<p>How do I code in logstash to convert my csv data to look like the above?</p>
<hr>
<p><strong>Update</strong></p>
<p>I have python code that works as expected. I would like to know how to modify this code to suit the "output" parameters suggested as per the answer.
In the following example, df_json is a python object that is nothing but python dataframe flattened to json.</p>
<pre><code>import copy
with open('myfile.txt', 'w') as f:
for doc1 in df_json:
import json
doc = mydict(doc1)
docnew = copy.deepcopy(doc)
del docnew['destination']
action = '{ "update": {"_id": %s }}\n' % doc['destination']
f.write(action)
entry = '{ "script" : { "inline": "ctx._source.parent += [\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]", "lang" : "groovy", "params" : %s}, "upsert": {"parent" : [%s ] }}\n' % (doc, docnew)
f.write(entry)
! curl -s -XPOST XXX.xx.xx.x:9200/test_index222/doc/_bulk --data-binary @myfile.txt; echo
</code></pre>
<hr>
<p><strong>Update 2</strong></p>
<p>I tried the following configuration and it is replacing (not updating as per script) documents.</p>
<pre><code>output {
elasticsearch {
action => "index"
hosts => ["172.17.0.1"]
document_id => "%{destination}"
index => "logstash3-%{+YYYY.MM.dd}"
workers => 1
script => "ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]"
script_type => "inline"
script_lang => "groovy"
scripted_upsert => "true"
}
}
</code></pre>
<p>When I changed the action to "update", I get the following error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.07.29", "_type"=>"csv", "_id"=>"919359000000",
"status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script",
"caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run in line script
[ctx._source.parent += ['user': user, 'messageid': messageid, 'message': message, 'code': code, 'mobile': mobile, 'mytimestamp': mytimestamp]]
using lang [groovy]", "caused_by"=>{"type"=>"missing_property_exception", "reason"=>"No such property: user for class: fe1b423dc4966b0f0b511b732474637705bf3bb1"}}}}}, :level=>:warn}
</code></pre>
<hr>
<p><strong>Update 3</strong></p>
<p>As per Val's answer I added event and I get this error...</p>
<pre><code>:response=>{"update"=>{"_index"=>"logstash4-2016.08.06", "_type"=>"csv", "_id"=>"%{destination}", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to execute script", "caused_by"=>{"type"=>"script_exception", "reason"=>"failed to run inline script [ctx._source.parent += ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]] using lang [groovy]", "caused_by"=>{"type"=>"null_pointer_exception", "reason"=>"Cannot execute null+{user=null, messageid=null, message=, code=null, mobile=null, mytimestamp=null}"}}}}}
</code></pre>
<p><strong>Update 4</strong></p>
<p>As per Val's updated answer I tried this...</p>
<pre><code>script => "ctx._source.parent = (ctx._source.parent ?: []) + ['user': event.user, 'messageid': event.messageid, 'message': event.message, 'code': event.code, 'mobile': event.mobile, 'mytimestamp': event.mytimestamp]"
</code></pre>
<p>And got this error:</p>
<pre><code>{:timestamp=>"2016-08-12T09:40:48.869000+0000", :message=>"Pipeline main started"}
{:timestamp=>"2016-08-12T09:40:49.517000+0000", :message=>"Error parsing csv", :field=>"message", :source=>"", :exception=>#<NoMethodError: undefined method `each_index' for nil:NilClass>, :level=>:warn}
</code></pre>
<p>Only 2 records were added to the database.</p>
| 7 | 2016-07-28T09:59:00Z | 38,912,079 | <p>Since you have working python script, maybe that will be useful ? <a href="https://www.elastic.co/guide/en/elasticsearch/plugins/current/lang-python.html" rel="nofollow">https://www.elastic.co/guide/en/elasticsearch/plugins/current/lang-python.html</a></p>
<p>Regarding update nr 2 - I think the error can be fixed by first checking whether a document has the given field (in this case it's <code>user</code>).</p>
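<p>One way to apply that check before the documents ever reach Elasticsearch is to filter them in the Python generator from the question. This is only a sketch, under the assumption that incomplete CSV rows are the source of the null properties; the helper name is made up:</p>

```python
# Field names follow the question; any row missing one of them would
# surface in the groovy script as "No such property" / a null pointer.
REQUIRED_FIELDS = ('user', 'messageid', 'message', 'code', 'mobile', 'mytimestamp')

def is_complete(doc):
    # reject documents that lack any field the inline script dereferences
    return all(doc.get(field) not in (None, '') for field in REQUIRED_FIELDS)

docs = [
    {'user': 'Home', 'messageid': 'Home-26497273a5a83c99',
     'message': 'Spice Xlife 350', 'code': 'HMSHOP',
     'mobile': '916265100000', 'mytimestamp': '2016-05-18 18:41:49'},
    {'user': None, 'messageid': 'Home-36497273a5a83c99'},  # incomplete row
]

valid_docs = [doc for doc in docs if is_complete(doc)]
```

Only the complete first document survives the filter; the incomplete one is dropped instead of producing a failed bulk update.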
| 0 | 2016-08-12T07:17:37Z | [
"python",
"elasticsearch",
"groovy",
"logstash"
] |
Django sort by Articles with most similar tags | 38,633,110 | <p>I have a model for Articles and Tags, using a filter for suggestions. </p>
<p>I get the article's tags with <code>tags = article.tags.all()</code> and then filter with <code>Article.objects.filter(tags__in=tags)[:5]</code>; what I'd like is to sort the results by how many tags they share.</p>
<p>Model for Article and Tags</p>
<pre><code>class Article(models.Model):
...
tags = models.ForeignKey(Tag, blank=True, null=True)
class Tag(models.Model):
name = models.CharField(max_length=20, blank=True)
</code></pre>
| 0 | 2016-07-28T10:00:35Z | 38,635,533 | <p>I would suggest using the app <a href="http://django-taggit.readthedocs.io/en/latest/index.html" rel="nofollow">django-taggit</a>. <code>TaggableManager</code> has a method <code>similar_objects</code> which does exactly what you want.</p>
| 0 | 2016-07-28T11:51:08Z | [
"python",
"django"
] |
Arrange the multi-similar data efficiently | 38,633,246 | <p>The datafile shown here is the measurement record exported from an instrument. </p>
<p>I uploaded it <a href="https://drive.google.com/file/d/0B7FE0kxAL8kQSmxoMEhHUjV0VUk/view?usp=sharing" rel="nofollow">here</a>; anyone interested can download it. </p>
<h3>Background</h3>
<pre><code>Sample
RECORD-1
FID1, FID2, front_temperature, laser, laserlow, pressure, mode
-925 284 1452 315 143 16653 He -28500
-924 281 1462 322 136 16641 He -28628
-920 281 1455 311 139 16649 He -28756
-923 279 1454 312 139 16636 He -28884
......
Sample
RECORD-2
FID1, FID2, front_temperature, laser, laserlow, pressure, mode
-925 284 1452 315 143 16653 He -28500
......
......
</code></pre>
<p>Generally, there are several records for different samples in the order of the testing routine, and the data records for these samples are all in the same format. </p>
<h3>My attempt</h3>
<p>If there were just one sample in the datafile (in *.txt format), I could arrange the datafile into a pandas DataFrame and then handle the data with further analysis in Python. </p>
<p>My code is shown here: </p>
<pre><code># Whole datafile with several samples record inside
with open("record.txt") as f:
mylist = f.read().splitlines()
## The record for each sample length in 803 lines
lines = mylist[0:803]
### The sample_name was extract from the third line
sample_name = lines[2]
### For each sample, the measure record was saved in several aspects,
### which were regarded as some columns here
columns = lines[22].split()
### Generate an empty columns for saving data record later.
df = {columns[0][:-1]:[],columns[1][:-1]:[],columns[2][:-1]:[],columns[3][:-1]:[],columns[4][:-1]:[],
columns[5][:-1]:[],columns[6][:-1]:[],} #### I only though about this dumb method for now
## Data extracting
### the valid data record of sample 1 was from line 23
for i in range(0, len(lines[23:]),1):
for j in range(0, len(columns),1):
df[columns[j][:-1]].append(lines[23+i].split()[j])
pd.DataFrame(df)
</code></pre>
<p>The result looks like this: </p>
<p><a href="http://i.stack.imgur.com/l2m0X.png" rel="nofollow"><img src="http://i.stack.imgur.com/l2m0X.png" alt="enter image description here"></a></p>
<h3>My target</h3>
<p>From the code above, I can deal with the datafile for one sample. But when there are several samples in the record text, I can't find an efficient way to deal with them. </p>
<p>Here is an illustration of my target: to generate a dict of DataFrames saving all samples' records. </p>
<p><a href="http://i.stack.imgur.com/B1p0m.png" rel="nofollow"><img src="http://i.stack.imgur.com/B1p0m.png" alt="enter image description here"></a> </p>
<p>Any advice would be appreciated!</p>
| 0 | 2016-07-28T10:05:46Z | 38,639,897 | <p>I think you are looking for something like this:</p>
<pre><code>import pandas as pd
# Whole datafile with several samples record inside
with open("record.txt",'r') as f:
mylist = f.read().splitlines()
dataset = []
while True:
try:
## The record for each sample length in 803 lines
lines, mylist = mylist[0:803], mylist[803:] #this split your list!!
### The sample_name was extract from the third line
sample_name = lines[2]
### For each sample, the measure record was saved in several aspects,
### which were regarded as some columns here
columns = lines[22].split()
### Generate an empty columns for saving data record later.
df = {columns[0][:-1]:[],columns[1][:-1]:[],columns[2][:-1]:[],columns[3][:-1]:[],columns[4][:-1]:[],
columns[5][:-1]:[],columns[6][:-1]:[],} #### I only though about this dumb method for now
## Data extracting
### the valid data record of sample 1 was from line 23
for i in range(0, len(lines[23:]),1):
for j in range(0, len(columns),1):
df[columns[j][:-1]].append(lines[23+i].split()[j])
except IndexError:
break
df = pd.DataFrame(df)
dataset.append(df)
</code></pre>
<p>Now <code>dataset[0]</code> should contain the df of Sample 1.</p>
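<p>For what it's worth, the fixed-size slicing in the <code>while</code> loop can also be factored into a small helper. This keeps the answer's assumption that every record block is exactly 803 lines; the function name is made up for illustration:</p>

```python
def split_records(lines, block_size=803):
    # slice the flat line list into per-sample blocks of a fixed length,
    # dropping a trailing partial block (the answer's IndexError case)
    blocks = [lines[i:i + block_size] for i in range(0, len(lines), block_size)]
    return [block for block in blocks if len(block) == block_size]

# tiny demonstration with a 4-line "record" size: two full blocks,
# the partial ['8', '9'] block at the end is discarded
demo_lines = [str(n) for n in range(10)]
chunks = split_records(demo_lines, block_size=4)
```

Each returned chunk can then be fed through the same column-extraction code as before.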
| 1 | 2016-07-28T14:53:12Z | [
"python",
"arrays",
"pandas",
"dataframe"
] |
Error while testing the raise of self-defined exceptions (using assertRaises()) | 38,633,263 | <p>I am creating tests for a python project. The normal tests work just fine; however, I want to test whether, under a certain condition, my function raises a self-defined exception. Therefore I want to use assertRaises(Exception, Function). Any ideas?</p>
<p>The function that raises the exception is:</p>
<pre><code>def connect(comp1, comp2):
if comp1 == comp2:
raise e.InvalidConnectionError(comp1, comp2)
...
</code></pre>
<p>The exception is:</p>
<pre><code>class InvalidConnectionError(Exception):
def __init__(self, connection1, connection2):
self._connection1 = connection1
self._connection2 = connection2
def __str__(self):
string = '...'
return string
</code></pre>
<p>The test method is the following:</p>
<pre><code>class TestConnections(u.TestCase):
def test_connect_error(self):
comp = c.PowerConsumer('Bus', True, 1000)
self.assertRaises(e.InvalidConnectionError, c.connect(comp, comp))
</code></pre>
<p>However I get the following error:</p>
<pre><code>Error
Traceback (most recent call last):
File "C:\Users\t5ycxK\PycharmProjects\ElectricPowerDesign\test_component.py", line 190, in test_connect_error
self.assertRaises(e.InvalidConnectionError, c.connect(comp, comp))
File "C:\Users\t5ycxK\PycharmProjects\ElectricPowerDesign\component.py", line 428, in connect
raise e.InvalidConnectionError(comp1, comp2)
InvalidConnectionError: <unprintable InvalidConnectionError object>
</code></pre>
| 1 | 2016-07-28T10:06:18Z | 38,633,401 | <p><code>assertRaises</code> expects to actually <a href="https://docs.python.org/3.4/library/unittest.html#unittest.TestCase.assertRaises" rel="nofollow"><em>perform</em> the call</a>. Yet, you already perform it by yourself, thereby throwing the error before <code>assertRaises</code> actually executes.</p>
<pre><code>self.assertRaises(e.InvalidConnectionError, c.connect(comp, comp))
# run this ^ with first static argument ^ and second argument ^ from `c.connect(comp, comp)`
</code></pre>
<p>Use either of those instead:</p>
<pre><code>self.assertRaises(e.InvalidConnectionError, c.connect, comp, comp)
with self.assertRaises(e.InvalidConnectionError):
c.connect(comp, comp)
</code></pre>
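<p>For completeness, here is a minimal self-contained sketch of both forms with a stand-in exception and function (names are invented for the example; run it with <code>python -m unittest</code>):</p>

```python
import unittest

class InvalidConnectionError(Exception):
    pass

def connect(comp1, comp2):
    # stand-in for the question's c.connect: same components are invalid
    if comp1 == comp2:
        raise InvalidConnectionError(comp1)

class TestConnect(unittest.TestCase):
    def test_callable_form(self):
        # pass the callable and its arguments separately;
        # assertRaises performs the call itself
        self.assertRaises(InvalidConnectionError, connect, 'bus', 'bus')

    def test_context_manager_form(self):
        # the call happens inside the with-block, after the
        # context manager has started watching for the exception
        with self.assertRaises(InvalidConnectionError):
            connect('bus', 'bus')
```

Both tests pass because the call is deferred until <code>assertRaises</code> is ready to catch the exception.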
| 5 | 2016-07-28T10:12:12Z | [
"python",
"exception"
] |
Use base class's property/attribute as a table column? | 38,633,319 | <p>A game engine provides me with a <code>Player</code> class with a <code>steamid</code> property (coming from C++, this is just a basic example on what it would look like in Python):</p>
<pre><code># game_engine.py
class Player:
def __init__(self, steamid):
self.__steamid = steamid
@property
def steamid(self):
return self.__steamid
</code></pre>
<p>I then proceed to subclass this class while adding a <code>gold</code> attribute:</p>
<pre><code># my_plugin.py
class MyPlayer(game_engine.Player, Base):
gold = Column(Integer)
</code></pre>
<p>Now I need to store the player's <code>gold</code> to a database with the player's <code>steamid</code> as a primary key to identify the player. How do I tell SQLAlchemy to use the base class's <code>steamid</code> property as the primary key?</p>
<p>Here's something silly I tried:</p>
<pre><code>from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
import game_engine
Base = declarative_base()
class Player(game_engine.Player, Base):
__tablename__ = 'player'
_steamid = game_engine.Player.steamid
@hybrid_property
def steamid(self):
return type(self)._steamid.__get__(self)
</code></pre>
<p>But yeah, it was a long shot...</p>
<pre><code>sqlalchemy.exc.ArgumentError: Mapper Mapper|Player|player could not assemble any primary key columns for mapped table 'player'
</code></pre>
| 8 | 2016-07-28T10:08:58Z | 38,675,933 | <p>This could be done using <a href="http://docs.sqlalchemy.org/en/latest/orm/mapping_styles.html#classical-mappings" rel="nofollow">classical mapping</a>:</p>
<pre><code>from sqlalchemy import Column, Integer, Table
from sqlalchemy.orm import mapper
from sqlalchemy.ext.hybrid import hybrid_property
class MyPlayer(Player):
def __init__(self, steamid, gold):
super().__init__(steamid)
self.gold = gold
self._steamid = super().steamid
player = Table('player', Base.metadata,
Column('_steamid', Integer, primary_key=True),
Column('gold', Integer),
)
mapper(MyPlayer, player)
</code></pre>
| 1 | 2016-07-30T16:09:32Z | [
"python",
"python-3.x",
"inheritance",
"properties",
"sqlalchemy"
] |
Use base class's property/attribute as a table column? | 38,633,319 | <p>A game engine provides me with a <code>Player</code> class with a <code>steamid</code> property (coming from C++, this is just a basic example on what it would look like in Python):</p>
<pre><code># game_engine.py
class Player:
def __init__(self, steamid):
self.__steamid = steamid
@property
def steamid(self):
return self.__steamid
</code></pre>
<p>I then proceed to subclass this class while adding a <code>gold</code> attribute:</p>
<pre><code># my_plugin.py
class MyPlayer(game_engine.Player, Base):
gold = Column(Integer)
</code></pre>
<p>Now I need to store the player's <code>gold</code> to a database with the player's <code>steamid</code> as a primary key to identify the player. How do I tell SQLAlchemy to use the base class's <code>steamid</code> property as the primary key?</p>
<p>Here's something silly I tried:</p>
<pre><code>from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
import game_engine
Base = declarative_base()
class Player(game_engine.Player, Base):
__tablename__ = 'player'
_steamid = game_engine.Player.steamid
@hybrid_property
def steamid(self):
return type(self)._steamid.__get__(self)
</code></pre>
<p>But yeah, it was a long shot...</p>
<pre><code>sqlalchemy.exc.ArgumentError: Mapper Mapper|Player|player could not assemble any primary key columns for mapped table 'player'
</code></pre>
| 8 | 2016-07-28T10:08:58Z | 38,678,734 | <p>This is simpler than you might expect. The solution below is roughly equivalent to the one from r-m-n, but more straightforward because it uses modern declarative mapping. There is no need for <code>@hybrid_property</code>, you can just inherit <code>steamid</code> from the parent class.</p>
<pre><code># my_plugin.py
class MyPlayer(game_engine.Player, Base):
def __init__(self, steamid, gold):
super().__init__(steamid)
self._id = self.steamid
self.gold = gold
_id = Column('steamid', Integer, primary_key=True)
gold = Column(Integer)
</code></pre>
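<p>Stripped of the SQLAlchemy parts, the mechanism this relies on can be sketched in plain Python: the subclass reads the parent's read-only property once and copies its value into an ordinary attribute (here <code>_id</code>, matching the mapped column above). Class names follow the question; this is only an illustration:</p>

```python
class Player:
    def __init__(self, steamid):
        self.__steamid = steamid      # name-mangled to _Player__steamid

    @property
    def steamid(self):
        return self.__steamid         # read-only: no setter defined

class MyPlayer(Player):
    def __init__(self, steamid, gold):
        super().__init__(steamid)
        self._id = self.steamid       # copy the property's value once
        self.gold = gold

player = MyPlayer(76561198000000000, gold=100)
```

In the mapped version, assigning to <code>self._id</code> is what hands the inherited <code>steamid</code> value to the primary-key column.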
| 2 | 2016-07-30T21:39:43Z | [
"python",
"python-3.x",
"inheritance",
"properties",
"sqlalchemy"
] |
ZeroMQ: How to prioritise sockets in a .poll() method? | 38,633,359 | <p>Imagine the following code:</p>
<pre><code>import threading, zmq, time
context = zmq.Context()
receivers = []
poller = zmq.Poller()
def thread_fn(number: int):
sender = context.socket(zmq.PUSH)
sender.connect("tcp://localhost:%d" % (6666 + number))
for i in range(10):
sender.send_string("message from thread %d" % number)
for i in range(3):
new_receiver = context.socket(zmq.PULL)
new_receiver.bind("tcp://*:%d" % (6666 + i))
poller.register(new_receiver, zmq.POLLIN)
receivers.append(new_receiver)
threading.Thread(target=lambda: thread_fn(i), daemon=True).start()
while True:
try:
socks = dict(poller.poll())
except KeyboardInterrupt:
break
for i in range(3):
if receivers[i] in socks:
print("%d: process message %s" % (i, receivers[i].recv_string()))
time.sleep(0.2) # 'process' the data
</code></pre>
<p>The threads send some messages without interruption which arrive in some random order at the corresponding <code>PULL</code>-sockets where they get 'processed'.</p>
<p><strong><code>Note:</code></strong> usually you would connect to one <code>PULL</code>-socket but this example intends to provide more than one receiving socket.</p>
<p>Output is:</p>
<pre><code>0: process message message from thread 0
1: process message message from thread 1
0: process message message from thread 0
1: process message message from thread 1
2: process message message from thread 2
0: process message message from thread 0
1: process message message from thread 1
2: process message message from thread 2
....
</code></pre>
<p>Now I want to read from all sockets like in the example but I'd like to <strong>prioritise</strong> one socket.</p>
<p>I.e.: I want the output to be:</p>
<pre><code>0: process message message from thread 0 <-- socket 0 processed first
0: process message message from thread 0
0: process message message from thread 0
0: process message message from thread 0
0: process message message from thread 0
1: process message message from thread 1
1: process message message from thread 1
2: process message message from thread 2
1: process message message from thread 1
2: process message message from thread 2
....
</code></pre>
<p>Of course I can just poll the sockets separately with <code>timeout=0</code> but I want to <strong>be sure ZeroMQ doesn't do this</strong> for me already.</p>
<p><strong><code>So the questions are:</code></strong><br><br><strong><code>Q1:</code></strong><br>Is there another way <sup> ( except the built-in <code>.poll( timeout )</code> ) </sup><br> to make sure I've read messages from one socket <strong>first</strong><br><strong>before</strong> waiting for messages on the other sockets?</p>
<p><strong><code>Q2:</code></strong><br>Is there <strong>a known best practice</strong> to do it manually?</p>
| 4 | 2016-07-28T10:10:37Z | 38,655,009 | <blockquote>
<p>Welcome to the Wild Worlds of (managed)-chaos,</p>
<h2><code>A1:</code> Yes<sup><code>TL;DR</code> check <code>zmq.select()</code> in recent API / python wrapper</sup><br><code>A2:</code> Yes<sub><code>TL;DR</code> a must do part of the rigorous dependable system design</sub></h2>
</blockquote>
<p>For the sake of <code>Q2</code>, the system design ought to remain flexible, not only to handle the said prioritisation segmentation, but also to provide serious means for a robust handling of remote failures to comply with ( just optimistically ) expected modus operandi.</p>
<p><strong>What does that mean?</strong></p>
<p>If the postulated behaviour were implemented with some trivial and naive serial alignment of easy to implement principal syntax-constructs' sections alike this idea:</p>
<pre><code># --------------------------------------------------------
# FIRST scan all HI-PRIO socket(s) for incoming messages:
while true:
# process 'em first, based on a ZeroMQ-socket's behaviour-fixed ordering
...
break
# --------------------------------------------------------
# NEXT scan all LO-PRIO socket(s) for incoming messages:
while true:
# process 'em, again, based on a ZeroMQ-socket's behaviour-fixed ordering
...
break
</code></pre>
<p>any benefits your system architecture strives to create are lost in the very moment you forget to have a <strong>robust Plan B</strong> - how to handle blocking states, lost messages, dead counterparty FSA-process, DoS-attack, just a faulty remote-NIC that suddenly sprays your inbound interface with spurious and massive flow of bytes, all nightmares that may and do appear out of your controls.</p>
<p>This means: carefully plan how to survive the case where the first group of sockets starts to "feed" your receiver with so many messages ( processing tasks ) that you "can" never exit from the HI-PRIO section.</p>
<p>If still not getting the point, let me remind the great system design, introduced for this very purpose into <strong>Apollo Guidance Computer</strong> (AGC) software by an MIT team headed by <a href="https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist)" rel="nofollow">Ms. Margaret HAMILTON,</a><a href="http://i.stack.imgur.com/jCvXK.gif" rel="nofollow"><img src="http://i.stack.imgur.com/jCvXK.gif" alt="enter image description here"></a> <strong>which did survive such "infinite-attack" of events</strong>, that were not anticipated by engineering guys, but which did happen during real-life, the worse, during the landing of the Eagle ( the Lunar Module ) on the Moon - the second most critical phase of the whole journey "there and back".</p>
<p>It is not any overhype to state that the smart design from Ms. Hamilton's team did save both the very moment and the whole glory of the U.S. most prestigeous Apollo Programme.</p>
<h2>Every responsible design has to be planned so as to survive this.</h2>
<p>The current <code>ZeroMQ</code> wrapper for <code>python</code> provides for such purpose a tool, a <strong><code>Poller()</code></strong> class, that -- once due care is taken -- may save both your design targets and provide a space for adding a reliability-motivated functions, incl. a fallback escape strategies to be taken on colliding priorities/resources situations.</p>
<pre><code> # ------------------------------------------------------------
# Initialize separate engines for polling set(s)
HiPRIOpoller = zmq.Poller()
LoPRIOpoller = zmq.Poller()
# ------------------------------------------------------------
# Associate
HiPRIOpoller.register( socket_0_pull, zmq.POLLIN ) # 0:
LoPRIOpoller.register( ... , zmq.POLLIN ) # 1:
LoPRIOpoller.register( ... , zmq.POLLIN ) # 2:
...
# ------------------------------------------------------------
# Detect, who is waiting in front of the closed door
aListOfHiPRIOevents = HiPRIOpoller.poll( timeout = 0.200 ) # 200 [us]
aListOfLoPRIOevents = LoPRIOpoller.poll( timeout = 0 ) # no wait at all
# ------------------------------------------------------------
# Now AFTER you have a COMPLETE view what is waiting there
# one
# CAN & SHALL ADAPT order / scope of event-handling,
# IMMUNE to infinite-PRIO-event-flow.
...
# ------------------------------------------------------------
</code></pre>
<p><strong><code>Poller.poll()</code></strong> method returns a <strong><code>list</code></strong> of events that are ready to be processed. This is a list of tuples of the form <strong><code>( socket, event )</code></strong>, where the first element is <code>{ a-0MQ-Socket-instance | integer-system-native-fd }</code>, and the second is a poll-event mask <code>( POLLIN, POLLOUT )</code>. It is common to call this decorated as <strong><code>aDictOfEVENTs = dict( aPoller.poll() )</code></strong>, which turns the list of tuples into a mapping of <strong><code>{ aSocket : anEvent, ... }</code></strong> if one wishes to.</p>
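<p>The ordering idea — fully service the HI-PRIO event list before touching the LO-PRIO one — can be sketched without any live sockets. Plain deques stand in here for the lists returned by <code>.poll()</code>; a production loop would also bound how long the HI-PRIO branch may run, as argued above:</p>

```python
from collections import deque

def drain_with_priority(hi_events, lo_events, process):
    # handle every pending HI-PRIO event first, then the LO-PRIO backlog;
    # both lists were collected BEFORE processing starts, so a fresh
    # flood of HI-PRIO traffic cannot starve this round's LO-PRIO work
    while hi_events:
        process('HI', hi_events.popleft())
    while lo_events:
        process('LO', lo_events.popleft())

handled = []
drain_with_priority(deque(['h1', 'h2']), deque(['l1']),
                    lambda prio, msg: handled.append((prio, msg)))
```

The HI-PRIO events come out strictly before the LO-PRIO one, which is the behaviour the two-poller layout above makes possible.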
<blockquote>
<p>finally,</p>
<h2>Epilogue: a Tribute to Margaret HAMILTON and her MIT team</h2>
</blockquote>
<p>If all the story were nothing else, it inspires our thoughts: Margaret's brave efforts have taught us a lot about <strong>professional system design</strong>. We may learn a lot from this truly pioneering epoch - more about <a href="http://www.ibiblio.org/apollo/ForDummies.html" rel="nofollow"><strong>what computer science has implemented on penny-scaled resources' footprints</strong> already in the early 60-ies, that today's systems' designs are ( if not worse ) quite too often in debt for ...</a></p>
| 2 | 2016-07-29T09:16:52Z | [
"python",
"select",
"zeromq"
] |
pip install askbot error - Command "python setup.py egg_info" failed with error code 1 | 38,633,376 | <p>I want to install the askbot app (<a href="http://askbot.org/doc/install.html" rel="nofollow">http://askbot.org/doc/install.html</a>), but I encountered an error during installation.</p>
<p>I did the following actions:</p>
<p>1) made a virtual environment under Anaconda (Python 3.5.2 / Ubuntu 14.04)</p>
<p>2) installed django 1.9.8</p>
<p>3) made django project myproject</p>
<p>4) modified settings.py to connect MariaDB</p>
<p>5) installed mysql client</p>
<pre><code># sudo apt-get install libmysqlclient-dev
# pip install mysqlclient
</code></pre>
<p>6) migrated </p>
<pre><code>python manage.py migrate
</code></pre>
<p>7) registered app</p>
<pre><code>INSTALLED_APPS = [
'myproject',
]
</code></pre>
<p>But when I try to install askbot as shown below, I get an error.</p>
<pre><code>(envask)root@localhost:~/vikander# pip install askbot
Collecting askbot
Downloading askbot-0.10.0.tar.gz (8.6MB)
100% |ââââââââââââââââââââââââââââââââ| 8.6MB 116kB/s
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-vppvsnhk/askbot/setup.py", line 135
**************************************************************"""
^
SyntaxError: Missing parentheses in call to 'print'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-vppvsnhk/askbot/
</code></pre>
<p>Is this a Python version problem? Is there no way to install askbot under a Python 3.x environment? Thanks in advance.</p>
| 1 | 2016-07-28T10:11:14Z | 38,633,471 | <p><a href="https://pypi.python.org/pypi/askbot/0.7.56" rel="nofollow">Askbot</a> is not compatible with Python 3, which changed <code>print</code> from a statement like so:</p>
<pre><code>print 'Hello World'
</code></pre>
<p>into a function:</p>
<pre><code>print('Hello world')
</code></pre>
<p>More about this change <a href="https://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function" rel="nofollow">here</a></p>
<p>You'll need to find an alternative, or push a fix to the Askbot repo.</p>
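<p>If you need to patch such code yourself, a hedged sketch of a print call that runs unchanged on both interpreter generations:</p>

```python
# A hedged sketch: with this __future__ import the same print call
# runs unchanged on Python 2.6+ and Python 3.x.
from __future__ import print_function

import io

def greet(stream):
    # print is a function here on both interpreters, so keyword
    # arguments like file= and end= are available
    print('Hello world', file=stream, end='\n')

buf = io.StringIO()
greet(buf)
```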
| 2 | 2016-07-28T10:15:20Z | [
"python",
"django",
"askbot"
] |
Unnamed error using Urllib2 and Beautiful soup | 38,633,439 | <p>This code block always ends up in the "except" branch. No specific error is shown in my terminal. What am I doing wrong?
Any help is appreciated!</p>
<pre><code>from bs4 import BeautifulSoup
import csv
import urllib2
# get page source and create a BeautifulSoup object based on it
try:
print("Fetching page.")
page = urllib2.open("http://siph0n.net")
soup = BeautifulSoup(page, 'lxml')
#specify tags the parameters are stored in
metaData = soup.find_all("a")
except:
print("Error during fetch.")
exit()
</code></pre>
| 0 | 2016-07-28T10:13:50Z | 38,633,567 | <blockquote>
<p>"No specific error is shown in my terminal" </p>
</blockquote>
<p>That's because your <code>except</code> block is shadowing it. Either remove the <code>try/except</code> or print the exception in the <code>except</code> block:</p>
<pre><code>try:
.
.
.
except Exception as ex:
print(ex)
</code></pre>
<p>Note that catching the general type <code>Exception</code> is generally a bad idea. Your <code>except</code> blocks should always catch the most specific exception type possible.</p>
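<p>For example, a small sketch (using <code>ValueError</code> as a stand-in; in the question's code the narrow type would be <code>urllib2.URLError</code>):</p>

```python
# Sketch: catch the narrowest exception type you expect, never a bare `except`.
def parse_port(text):
    try:
        return int(text)
    except ValueError as ex:        # only the failure we anticipate
        return 'invalid port: %s' % ex

print(parse_port('8080'))   # 8080
print(parse_port('http'))   # falls into the ValueError branch
```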
| 0 | 2016-07-28T10:19:47Z | [
"python",
"python-2.7",
"beautifulsoup",
"urllib2",
"scraper"
] |
Unnamed error using Urllib2 and Beautiful soup | 38,633,439 | <p>This code block always ends up in the "except" branch. No specific error is shown in my terminal. What am I doing wrong?
Any help is appreciated!</p>
<pre><code>from bs4 import BeautifulSoup
import csv
import urllib2
# get page source and create a BeautifulSoup object based on it
try:
print("Fetching page.")
page = urllib2.open("http://siph0n.net")
soup = BeautifulSoup(page, 'lxml')
#specify tags the parameters are stored in
metaData = soup.find_all("a")
except:
print("Error during fetch.")
exit()
</code></pre>
| 0 | 2016-07-28T10:13:50Z | 38,634,351 | <p>You can use requests for getting the data.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import csv
import urllib2
# get page source and create a BeautifulSoup object based on it
try:
print("Fetching page.")
page = requests.get("http://siph0n.net")
soup = BeautifulSoup(page, 'lxml')
#specify tags the parameters are stored in
metaData = soup.find_all("a")
except Exception as ex:
print(ex)
</code></pre>
| 0 | 2016-07-28T10:56:14Z | [
"python",
"python-2.7",
"beautifulsoup",
"urllib2",
"scraper"
] |
if form field is None don't find in model in that field | 38,633,484 | <p>I have a model with blank=True fields. In my form, the fields that correspond to blank=True model fields are optional. If I leave such a form field empty and then filter my model objects, I don't want that empty field to be used in the search. Do I need to create a separate query for each case?</p>
<p>forms.py</p>
<pre><code>class searchGoods(forms.Form):
region_from = forms.ModelChoiceField(required=False, queryset = Region.objects.all(), widget = forms.Select())
region_to = forms.ModelChoiceField(required=False, queryset = Region.objects.all(), widget = forms.Select())
</code></pre>
<p>models.py</p>
<pre><code>class Add_good(models.Model):
loading_region = models.ForeignKey(Region, blank=True, related_name="loading_region", null=True)
unloading_region = models.ForeignKey(Region, blank=True, related_name="unloading_region", null=True)
</code></pre>
<p>views.py</p>
<pre><code>if form['region_from'] == None:
if form['region_to'] == None:
data_from_db = Add_good.objects.filter(loading_country=form['country_from'],
unloading_country=form['country_to'],
loading_city=form['city_from'],
unloading_city=form['city_to'],
loading_goods_date_from__gte=form['date_from'],
loading_goods_date_to__lte=form['date_to'],
mass__gte=form["mass_from"],
mass__lte=form["mass_to"],
volume__gte=form['volume_from'],
volume__lte=form['volume_to'],
auto_current_type__in=auto_types,
)
else:
data_from_db = Add_good.objects.filter(loading_country=form['country_from'],
unloading_country=form['country_to'],
loading_city=form['city_from'],
unloading_city=form['city_to'],
unloading_region=form["region_to"],
loading_goods_date_from__gte=form['date_from'],
loading_goods_date_to__lte=form['date_to'],
mass__gte=form["mass_from"],
mass__lte=form["mass_to"],
volume__gte=form['volume_from'],
volume__lte=form['volume_to'],
auto_current_type__in=auto_types,
)
else:
if form['region_to'] == None:
data_from_db = Add_good.objects.filter(loading_country=form['country_from'],
unloading_country=form['country_to'],
loading_city=form['city_from'],
unloading_city=form['city_to'],
loading_region=form["region_from"],
loading_goods_date_from__gte=form['date_from'],
loading_goods_date_to__lte=form['date_to'],
mass__gte=form["mass_from"],
mass__lte=form["mass_to"],
volume__gte=form['volume_from'],
volume__lte=form['volume_to'],
auto_current_type__in=auto_types,
)
else:
data_from_db = Add_good.objects.filter(loading_country=form['country_from'],
unloading_country=form['country_to'],
loading_city=form['city_from'],
unloading_city=form['city_to'],
loading_region=form["region_from"],
unloading_region=form["region_to"],
loading_goods_date_from__gte=form['date_from'],
loading_goods_date_to__lte=form['date_to'],
mass__gte=form["mass_from"],
mass__lte=form["mass_to"],
volume__gte=form['volume_from'],
volume__lte=form['volume_to'],
auto_current_type__in=auto_types,
)
</code></pre>
<p>To be precise, my model has more fields; you can see all of them in the views where I use them.</p>
| 0 | 2016-07-28T10:16:01Z | 39,246,339 | <p>The situation was that I forgot to run <code>makemigrations</code>. After doing that, everything works fine!</p>
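<p>Independently of the migration issue, the long nested <code>if/else</code> in the question can be collapsed by building the filter keyword arguments dynamically, adding the optional fields only when they were actually filled in. A hedged sketch (field names follow the question; only the country/region fields are shown, and the real call would pass <code>form.cleaned_data</code>):</p>

```python
# Hedged sketch: build filter(**kwargs) arguments, skipping empty optionals.
# Field names follow the question; calling Add_good.objects.filter itself
# needs a live Django project, so it is shown only as a comment.
def build_filter_kwargs(cleaned):
    kwargs = {
        'loading_country': cleaned['country_from'],
        'unloading_country': cleaned['country_to'],
    }
    # optional ModelChoiceFields come back as None when left empty
    if cleaned.get('region_from') is not None:
        kwargs['loading_region'] = cleaned['region_from']
    if cleaned.get('region_to') is not None:
        kwargs['unloading_region'] = cleaned['region_to']
    return kwargs

# usage inside the view:
#   data_from_db = Add_good.objects.filter(**build_filter_kwargs(form.cleaned_data))
```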
| 0 | 2016-08-31T09:53:29Z | [
"python",
"django",
"forms"
] |
Must be called from a blob upload callback request | 38,633,490 | <p>I have implemented this GWT example for blobstore API in Java:
<a href="https://cloud.google.com/appengine/docs/java/blobstore/" rel="nofollow">https://cloud.google.com/appengine/docs/java/blobstore/</a></p>
<p>it works fine when POST is made via the client side form (inside a browser).</p>
<p>However, now I am sending files (images) to the same /upload service handler but from a python request inside my offline program (not browser):</p>
<pre><code>r = requests.post(url+'upload', files= {'myFile': open('fig.jpeg', 'rb')})
</code></pre>
<p>and I get the following exception </p>
<blockquote>
<p>Must be called from a blob upload callback request</p>
</blockquote>
<p>in the first line of (server side):</p>
<pre><code>Map<String, List<BlobKey>> blobs = blobstoreService.getUploads(req);
List<BlobKey> blobKeys = blobs.get("myFile");
</code></pre>
<p>What am I doing wrong??</p>
| 0 | 2016-07-28T10:16:20Z | 38,635,109 | <p>That <code>/upload</code> handler is not meant for you to call directly, neither from the browser nor a python application. Instead, your application will need to make two calls: the first to your server to get a temporary URL, then the second to upload to that URL, which will connect with the blobstore directly. Your server should generate that temporary URL using <code>blobstoreService.createUploadUrl</code>, as described in step 1 of <a href="https://cloud.google.com/appengine/docs/java/blobstore/#Java_Uploading_a_blob" rel="nofollow">this section</a> of the documentation you linked.</p>
<p>During the course of the second call (the upload), the blobstore will directly call your upload handler to inform your app about the new blob(s). That is what <code>blobstoreService.getUploads(req)</code> knows how to interpret.</p>
<p>So your python application will make 2 calls, and your server will also handle 2 requests. The first request comes directly from the python application simply requesting the url. The second request will happen <em>during</em> the upload, but will actually come directly from the blobstore.</p>
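<p>To make the two-call flow concrete, here is a hedged Python sketch from the client's side (with <code>requests</code>, call 1 would be <code>requests.get</code> and call 2 <code>requests.post(..., files=...)</code>):</p>

```python
# A hedged sketch of the client's two calls. The HTTP transport is injected
# as `fetch(method, url, **kw)` so the flow is visible without a live backend;
# the /get_upload_url endpoint name is hypothetical -- it is wherever your
# server calls blobstoreService.createUploadUrl() and returns the result.

def upload_via_blobstore(fetch, app_base_url, filename, file_bytes):
    # Call 1: ask our own server for a one-time blobstore upload URL
    upload_url = fetch('GET', app_base_url + '/get_upload_url')
    # Call 2: POST the file to that URL; blobstore stores the bytes and only
    # then invokes our /upload handler, where getUploads(req) works
    return fetch('POST', upload_url, files={'myFile': (filename, file_bytes)})

# a stub transport, just to trace the two calls
def stub_fetch(method, url, **kw):
    if url.endswith('/get_upload_url'):
        return 'https://blobstore.example/one-time-token'
    return (method, url, sorted(kw))
```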
| 1 | 2016-07-28T11:31:53Z | [
"java",
"python",
"google-app-engine",
"gwt"
] |
Train model using queue Tensorflow | 38,633,539 | <p>I designed a neural network in tensorflow for my regression problem by following and adapting the tensorflow tutorial. However, due to the structure of my problem (~300.000 data points and use of the costly FTRLOptimizer), my problem took too long to execute even on my 32-CPU machine (I don't have GPUs).</p>
<p>According to <a href="https://github.com/tensorflow/tensorflow/issues/2919#issuecomment-226660045" rel="nofollow">this comment</a> and a quick confirmation via <em>htop</em>, it appears that I have some single-threaded operations and it should be feed_dict.</p>
<p>Therefore, as advised <a href="https://github.com/yaroslavvb/stuff/blob/master/queues_talk/slides.pdf" rel="nofollow">here</a>, I tried to use queues for multi-threading my program.</p>
<p>I wrote a simple code file with queue to train a model as following:</p>
<pre><code>import numpy as np
import tensorflow as tf
import threading
#Function for enqueueing in parallel my data
def enqueue_thread():
sess.run(enqueue_op, feed_dict={x_batch_enqueue: x, y_batch_enqueue: y})
#Set the number of couples (x, y) I use for "training" my model
BATCH_SIZE = 5
#Generate my data where y=x+1+little_noise
x = np.random.randn(10, 1).astype('float32')
y = x+1+np.random.randn(10, 1)/100
#Create the variables for my model y = x*W+b, then W and b should both converge to 1.
W = tf.get_variable('W', shape=[1, 1], dtype='float32')
b = tf.get_variable('b', shape=[1, 1], dtype='float32')
#Prepare the placeholdeers for enqueueing
x_batch_enqueue = tf.placeholder(tf.float32, shape=[None, 1])
y_batch_enqueue = tf.placeholder(tf.float32, shape=[None, 1])
#Create the queue
q = tf.RandomShuffleQueue(capacity=2**20, min_after_dequeue=BATCH_SIZE, dtypes=[tf.float32, tf.float32], seed=12, shapes=[[1], [1]])
#Enqueue operation
enqueue_op = q.enqueue_many([x_batch_enqueue, y_batch_enqueue])
#Dequeue operation
x_batch, y_batch = q.dequeue_many(BATCH_SIZE)
#Prediction with linear model + bias
y_pred=tf.add(tf.mul(x_batch, W), b)
#MAE cost function
cost = tf.reduce_mean(tf.abs(y_batch-y_pred))
learning_rate = 1e-3
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
available_threads = 1024
#Feed the queue
for i in range(available_threads):
threading.Thread(target=enqueue_thread).start()
#Train the model
for step in range(1000):
_, cost_step = sess.run([train_op, cost])
print(cost_step)
Wf=sess.run(W)
bf=sess.run(b)
</code></pre>
<p>This code doesn't work because each time I call x_batch, one y_batch is also dequeued and vice versa. Then, I do not compare the features with the corresponding "result".</p>
<p>Is there an easy way to avoid this problem ?</p>
| 2 | 2016-07-28T10:18:25Z | 38,715,186 | <p>My mistake, everything worked fine.
I was misled because I estimated the performance at each step of the algorithm on different batches, and also because my model was too complicated for a dummy problem (I should have used something like y=W*x or y=x+b).
Then, when I tried to print to the console, I executed sess.run several times on different variables and obviously got inconsistent results.</p>
| 0 | 2016-08-02T08:17:49Z | [
"python",
"multithreading",
"machine-learning",
"queue",
"tensorflow"
] |
Quick evaluation of many functions at same point in Python | 38,633,584 | <p><strong>Problem</strong>: I need a very fast way in <code>Python3</code> to evaluate many (in the thousands) functions at the same argument. So in a sense, I kind of need the opposite of <code>NumPy's</code> Broadcasting which allows to quickly evaluate <em>one</em> function at <em>multiple</em> points.</p>
<p><strong>My solution</strong>: At the moment I just store my functions in a list and then iterate over the list with a classic for loop to evaluate all functions individually. This however is much too slow. </p>
<p>Examples, ideas and links to packages very much welcome.</p>
<p><em>Edit</em>: People have asked what the functions look like: 1. They are computational in nature. No I/O. 2. They only involve the usual algebraic operations like +, -, *, / and ** and also an indicator function. So no trigonometric functions or other special functions.</p>
| 2 | 2016-07-28T10:20:40Z | 38,633,667 | <p>Evaluate them using threading by running them in multiple threads, as long as they do not have resource conflicts.</p>
<p><a href="http://www.tutorialspoint.com/python/python_multithreading.htm" rel="nofollow">http://www.tutorialspoint.com/python/python_multithreading.htm</a></p>
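<p>A concrete, hedged sketch of this approach using the standard library's <code>concurrent.futures</code>:</p>

```python
# A hedged sketch using the standard library's thread pool. Caveat: under
# CPython's GIL this helps CPU-bound pure-Python functions little; it pays
# off when the functions do I/O or release the GIL (e.g. inside NumPy ops).
from concurrent.futures import ThreadPoolExecutor

def evaluate_all(functions, point, workers=4):
    """Evaluate every function in `functions` at the same `point`."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: f(point), functions))

funcs = [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]
print(evaluate_all(funcs, 3.0))   # [4.0, 6.0, 9.0]
```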
| 0 | 2016-07-28T10:24:43Z | [
"python",
"numpy"
] |
Quick evaluation of many functions at same point in Python | 38,633,584 | <p><strong>Problem</strong>: I need a very fast way in <code>Python3</code> to evaluate many (in the thousands) functions at the same argument. So in a sense, I kind of need the opposite of <code>NumPy's</code> Broadcasting which allows to quickly evaluate <em>one</em> function at <em>multiple</em> points.</p>
<p><strong>My solution</strong>: At the moment I just store my functions in a list and then iterate over the list with a classic for loop to evaluate all functions individually. This however is much too slow. </p>
<p>Examples, ideas and links to packages very much welcome.</p>
<p><em>Edit</em>: People have asked what the functions look like: 1. They are computational in nature. No I/O. 2. They only involve the usual algebraic operations like +, -, *, / and ** and also an indicator function. So no trigonometric functions or other special functions.</p>
| 2 | 2016-07-28T10:20:40Z | 38,634,425 | <p>If your functions are IO bound (meaning they spend most of their time waiting for some IO operation to complete), then using multiple threads may be a fair solution.</p>
<p>If your functions are CPU bound (meaning they spend most of their time doing actual computational work), then multiple threads will not help you, unless you are using a python implementation that does not have a <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">global interpreter lock</a>. </p>
<p>What you can do here, is use multiple python processes. The easiest solution being <code>multiprocessing</code> module. Here is an example:</p>
<pre><code>#!/usr/bin/env python3
from multiprocessing import Pool
from functools import reduce
def a(x):
return reduce(lambda memo, i: memo + i, x)
def b(x):
return reduce(lambda memo, i: memo - i, x)
def c(x):
return reduce(lambda memo, i: memo + i**2, x)
my_funcs = [a, b, c]
#create a process pool of 4 worker processes
pool = Pool(4)
async_results = []
for f in my_funcs:
    #second parameter to apply_async should be a tuple of parameters to pass to the function
async_results.append(pool.apply_async(f, (range(1, 1000000),)))
results = list(map(lambda async_result: async_result.get(), async_results))
print(results)
</code></pre>
<p>This method allows you to utilize all your CPU power in parallel: just pick a pool size that matches the number of CPUs in your environment. The limitation of this approach is that all your functions must be <a href="https://docs.python.org/2/library/pickle.html" rel="nofollow">pickleable</a>. </p>
| 3 | 2016-07-28T11:00:15Z | [
"python",
"numpy"
] |
JSON2HTML: Not a valid JSON list python | 38,633,629 | <p>I have a piece of JSON in a file that I would like to convert to HTML. I have seen online that there is a tool called json2html for Python which takes care of this for me.</p>
<pre><code>[{
"name": "Steve",
"timestampe": "2016-07-28 10:04:15",
"age": 22
},
{
"name": "Dave",
"timestamp": "2016-07-28 10:04:15",
"age": 34
}]
</code></pre>
<p>Above is my JSON, when using the online converter tool - <a href="http://json2html.varunmalhotra.xyz/" rel="nofollow">http://json2html.varunmalhotra.xyz/</a> it works great and produces a nice table for me.</p>
<p>However when I install the library using pip and run the following:</p>
<pre><code>_json = [{
"name": "Steve",
"timestampe": "2016-07-28 10:04:15",
"age": 22
},
{
"name": "Dave",
"timestamp": "2016-07-28 10:04:15",
"age": 34
}]
print json2html.convert(json=_json)
</code></pre>
<p>I get an error</p>
<pre><code> File "/root/.pyenv/versions/venv/lib/python2.7/site-packages/json2html/jsonconv.py", line 162, in iterJson
raise Exception('Not a valid JSON list')
Exception: Not a valid JSON list
</code></pre>
<p>I even ran the json through <a href="http://jsonlint.com/" rel="nofollow">http://jsonlint.com/</a> and it came back as valid JSON. </p>
<p>I was wondering if anyone would have a fix for this, or could point me in the right direction on how to solve this. I can't find much documentation on this library.</p>
<p>For reference this is the link to the pypi library - <a href="https://pypi.python.org/pypi/json2html" rel="nofollow">https://pypi.python.org/pypi/json2html</a></p>
<p>Any help would be appreciated, thanks in advance!</p>
| 0 | 2016-07-28T10:22:52Z | 38,633,762 | <p>Try setting the value of _json using json.loads(), as in this answer - <a href="http://stackoverflow.com/a/31011255/1772475">Converting json to html table in python</a></p>
| 0 | 2016-07-28T10:29:17Z | [
"python",
"json",
"python-2.7",
"json2html"
] |
JSON2HTML: Not a valid JSON list python | 38,633,629 | <p>I have a piece of JSON in a file that I would like to convert to HTML. I have seen online that there is a tool called json2html for Python which takes care of this for me.</p>
<pre><code>[{
"name": "Steve",
"timestampe": "2016-07-28 10:04:15",
"age": 22
},
{
"name": "Dave",
"timestamp": "2016-07-28 10:04:15",
"age": 34
}]
</code></pre>
<p>Above is my JSON, when using the online converter tool - <a href="http://json2html.varunmalhotra.xyz/" rel="nofollow">http://json2html.varunmalhotra.xyz/</a> it works great and produces a nice table for me.</p>
<p>However when I install the library using pip and run the following:</p>
<pre><code>_json = [{
"name": "Steve",
"timestampe": "2016-07-28 10:04:15",
"age": 22
},
{
"name": "Dave",
"timestamp": "2016-07-28 10:04:15",
"age": 34
}]
print json2html.convert(json=_json)
</code></pre>
<p>I get an error</p>
<pre><code> File "/root/.pyenv/versions/venv/lib/python2.7/site-packages/json2html/jsonconv.py", line 162, in iterJson
raise Exception('Not a valid JSON list')
Exception: Not a valid JSON list
</code></pre>
<p>I even ran the json through <a href="http://jsonlint.com/" rel="nofollow">http://jsonlint.com/</a> and it came back as valid JSON. </p>
<p>I was wondering if anyone would have a fix for this, or could point me in the right direction on how to solve this. I can't find much documentation on this library.</p>
<p>For reference this is the link to the pypi library - <a href="https://pypi.python.org/pypi/json2html" rel="nofollow">https://pypi.python.org/pypi/json2html</a></p>
<p>Any help would be appreciated, thanks in advance!</p>
| 0 | 2016-07-28T10:22:52Z | 38,634,133 | <p>The <code>json</code> parameter must be a dictionary object, but you are passing a list.
Try this:</p>
<pre><code>_json = { "data" : [{"name": "Steve",
"timestampe": "2016-07-28 10:04:15",
"age": 22
},
{
"name": "Dave",
"timestamp": "2016-07-28 10:04:15",
"age": 34
}]
}
print json2html.convert(json=_json)
</code></pre>
| 1 | 2016-07-28T10:47:15Z | [
"python",
"json",
"python-2.7",
"json2html"
] |
Python Gmail API 'not JSON serializable' | 38,633,781 | <p>I want to send an Email through Python using the Gmail API. Everything should be fine, but I still get the error "An error occurred: b'Q29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS..." Here is my code:</p>
<pre><code>import base64
import httplib2
from email.mime.text import MIMEText
from apiclient.discovery import build
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import run_flow
# Path to the client_secret.json file downloaded from the Developer Console
CLIENT_SECRET_FILE = 'client_secret.json'
# Check https://developers.google.com/gmail/api/auth/scopes for all available scopes
OAUTH_SCOPE = 'https://www.googleapis.com/auth/gmail.compose'
# Location of the credentials storage file
STORAGE = Storage('gmail.storage')
# Start the OAuth flow to retrieve credentials
flow = flow_from_clientsecrets(CLIENT_SECRET_FILE, scope=OAUTH_SCOPE)
http = httplib2.Http()
# Try to retrieve credentials from storage or run the flow to generate them
credentials = STORAGE.get()
if credentials is None or credentials.invalid:
credentials = run_flow(flow, STORAGE, http=http)
# Authorize the httplib2.Http object with our credentials
http = credentials.authorize(http)
# Build the Gmail service from discovery
gmail_service = build('gmail', 'v1', http=http)
# create a message to send
message = MIMEText("Message")
message['to'] = "myemail@gmail.com"
message['from'] = "python.api123@gmail.com"
message['subject'] = "Subject"
body = {'raw': base64.b64encode(message.as_bytes())}
# send it
try:
message = (gmail_service.users().messages().send(userId="me", body=body).execute())
print('Message Id: %s' % message['id'])
print(message)
except Exception as error:
print('An error occurred: %s' % error)
</code></pre>
| 2 | 2016-07-28T10:30:07Z | 39,693,258 | <p>I had this same issue; I assume you are using Python 3. I found this on another post, and the suggestion was to do the following:</p>
<pre><code>raw = base64.urlsafe_b64encode(message.as_bytes())
raw = raw.decode()
body = {'raw': raw}
</code></pre>
<p>Check out:
<a href="https://github.com/google/google-api-python-client/issues/93" rel="nofollow">https://github.com/google/google-api-python-client/issues/93</a></p>
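<p>A hedged end-to-end sketch of building the <code>body</code> dict on Python 3 (the <code>send()</code> call is left as a comment since it needs live credentials):</p>

```python
# Hedged sketch of the message-encoding path only; the Gmail send() call
# itself is shown as a comment because it needs a live authorized service.
import base64
from email.mime.text import MIMEText

def build_body(text, to, sender, subject):
    message = MIMEText(text)
    message['to'] = to
    message['from'] = sender
    message['subject'] = subject
    raw = base64.urlsafe_b64encode(message.as_bytes()).decode()   # str, not bytes
    return {'raw': raw}

body = build_body('Message', 'myemail@gmail.com', 'python.api123@gmail.com', 'Subject')
# gmail_service.users().messages().send(userId='me', body=body).execute()
print(type(body['raw']).__name__)   # str -> JSON serializable
```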
| 0 | 2016-09-26T00:44:50Z | [
"python",
"json",
"api",
"email",
"gmail"
] |
How to edit lines of all text files in a directory with python | 38,633,787 | <p>I would like to edit and replace lines of all .txt files in a directory with Python. For this purpose I am using the following code:</p>
<pre><code>path = '.../dbfiles'
for filename in os.listdir(path):
for i in os.listdir(path):
if i.endswith(".txt"):
with open(i, 'r') as f_in:
for line in f_in:
line=tweet_to_words(line).encode('utf-8')
open(i, 'w').write(line)
</code></pre>
<p>where <strong><code>tweet_to_words(line)</code></strong> is a predefined function for editing lines of the text file.
Although I am not sure if the logic of the code is right!? I am also facing the following error:</p>
<blockquote>
<p>IOError: [Errno 2] No such file or directory: 'thirdweek.txt'</p>
</blockquote>
<p>but <strong>'thirdweek.txt'</strong> exists in the directory!
So my question is whether the method I am using for editing lines in a file is right, and if so, how can I fix the error?</p>
| 0 | 2016-07-28T10:30:22Z | 38,633,899 | <p>You should add the base path when you use <code>open</code>:</p>
<pre><code> with open(path + '/' + i, 'r') as f_in:
</code></pre>
<p>the same goes for:</p>
<pre><code> open(path + '/' + i, 'w').write(line)
</code></pre>
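<p>Putting it together, a hedged sketch of the whole loop (<code>os.path.join</code> instead of manual concatenation, and reading each file fully before rewriting it, since the question's code opens the same file for writing while still iterating over it; <code>clean_tweet</code> is a stand-in for the asker's <code>tweet_to_words</code>):</p>

```python
import os

def clean_tweet(line):            # stand-in for the asker's tweet_to_words()
    return line.strip().lower() + '\n'

def rewrite_txt_files(path):
    for name in os.listdir(path):
        if not name.endswith('.txt'):
            continue
        full = os.path.join(path, name)        # safer than path + '/' + name
        with open(full, 'r') as f:
            lines = [clean_tweet(line) for line in f]   # read fully first
        with open(full, 'w') as f:             # then rewrite once
            f.writelines(lines)
```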
| 2 | 2016-07-28T10:35:40Z | [
"python",
"text-files",
"ioerror"
] |
How to edit lines of all text files in a directory with python | 38,633,787 | <p>I would like to edit and replace lines of all .txt files in a directory with Python. For this purpose I am using the following code:</p>
<pre><code>path = '.../dbfiles'
for filename in os.listdir(path):
for i in os.listdir(path):
if i.endswith(".txt"):
with open(i, 'r') as f_in:
for line in f_in:
line=tweet_to_words(line).encode('utf-8')
open(i, 'w').write(line)
</code></pre>
<p>where <strong><code>tweet_to_words(line)</code></strong> is a predefined function for editing lines of the text file.
Although I am not sure if the logic of the code is right!? I am also facing the following error:</p>
<blockquote>
<p>IOError: [Errno 2] No such file or directory: 'thirdweek.txt'</p>
</blockquote>
<p>but <strong>'thirdweek.txt'</strong> exists in the directory!
So my question is whether the method I am using for editing lines in a file is right, and if so, how can I fix the error?</p>
| 0 | 2016-07-28T10:30:22Z | 38,634,280 | <p>The glob module is useful for getting files with similar endings:</p>
<pre><code>import glob
print glob.glob("*.txt") # Returns a list of all .txt files, with path info
for item in glob.glob("*.txt"):
temp = [] # Might be useful to use a temp list before overwriting your file
with open(item, "r") as f:
for line in f:
print line # Do something useful here
temp.append(line)
with open(item, "w") as f:
f.writelines(temp)
</code></pre>
| 1 | 2016-07-28T10:53:24Z | [
"python",
"text-files",
"ioerror"
] |
Python Request: Post Images on Facebook using Multipart/form-data | 38,633,791 | <p>I'm using the facebook API to post images on a page, I can post image from web using this :</p>
<pre><code>import requests
data = 'url=' + url + '&caption=' + caption + '&access_token=' + token
status = requests.post('https://graph.facebook.com/v2.7/PAGE_ID/photos',
data=data)
print status
</code></pre>
<p>But when I want to post a local image (using multipart/form-data) I get the error: <code>ValueError: Data must not be a string.</code></p>
<p>I was using this code:</p>
<pre><code>data = 'caption=' + caption + '&access_token=' + token
files = {
'file': open(IMG_PATH, 'rb')
}
status = requests.post('https://graph.facebook.com/v2.7/PAGE_ID/photos',
data=data, files=files)
print status
</code></pre>
<p>I read (<a href="http://stackoverflow.com/questions/19439961/python-requests-post-json-and-file-in-single-request/19440099#19440099">Python Requests: Post JSON and file in single request</a>) that maybe it's not possible to send both data and files in a multipart encoded file so I updated my code :</p>
<pre><code>data = 'caption=' + caption + '&access_token=' + token
files = {
'data': data,
'file': open(IMG_PATH, 'rb')
}
status = requests.post('https://graph.facebook.com/v2.7/PAGE_ID/photos',
files=files)
print status
</code></pre>
<p>But that doesn't seem to work, I get the same error as above.<br>
Do you guys know why it's not working, and maybe a way to fix this.</p>
| 0 | 2016-07-28T10:30:31Z | 38,633,884 | <p>Pass in <code>data</code> as a <em>dictionary</em>:</p>
<pre><code>data = {
    'caption': caption,
    'access_token': token
}
files = {
'file': open(IMG_PATH, 'rb')
}
status = requests.post(
'https://graph.facebook.com/v2.7/PAGE_ID/photos',
data=data, files=files)
</code></pre>
<p><code>requests</code> can't produce <code>multipart/form-data</code> parts (together with the files you are uploading) from a <code>application/x-www-form-urlencoded</code> encoded string.</p>
<p>Using a dictionary for the POST data has the additional advantage that <code>requests</code> takes care of properly encoding the values; <code>caption</code> especially could contain data that you must escape properly.</p>
| 1 | 2016-07-28T10:35:09Z | [
"python",
"facebook",
"python-requests",
"multipartform-data"
] |
How can I open up webcam and process images with docker and OpenCV? | 38,633,799 | <p>I have a Python script that uses OpenCV, and when somebody runs my script I want to process the image from their webcam and give back a result. How can I do this?</p>
<p>This is how I tried:</p>
<p>My simple test python script:</p>
<pre><code>import cv2
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
print ret
</code></pre>
<p>This is in my dockerfile:</p>
<pre><code>FROM gaborvecsei/opencvinstall
ADD testcode.py ./testcode.py
#Start sample app
CMD ["python", "testcode.py"]
</code></pre>
<p>After I build and run this, it always prints <code>False</code>, which means I do not get any image from the webcam.</p>
<p>How can I get the images?</p>
| 0 | 2016-07-28T10:30:46Z | 38,634,175 | <p>You have to show the frame using <code>cv2.imshow('Preview', frame)</code>, and outside the <code>while</code> loop you have to release the camera with <code>cap.release()</code>. </p>
| 0 | 2016-07-28T10:49:06Z | [
"python",
"opencv",
"docker",
"dockerfile"
] |
How to save data from the data stream while not blocking the stream? (PyQt5 signal emit() performance) | 38,633,914 | <p>I'm developing a PyQt5 application.
My application has a data stream, and its rate is about 5~20 items/sec.</p>
<p>Every time data arrives, the following <code>onData()</code> method of class <code>Analyzer</code> is called. (Following code is simplified code of my app)</p>
<pre><code>class Analyzer():
def __init__(self):
self.cnt = 0
        self.dataDeque = deque(maxlen=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
            pickle.dump(self.dataDeque, open(file, 'wb'))
</code></pre>
<p>But the problem is that this dataDeque object is so large (50~150 MB) that dumping the pickle takes about 1~2 seconds.</p>
<p>During that moment (1~2 seconds), requests to call the <code>onData()</code> method get queued, and after the 1~2 seconds the queued requests invoke <code>onData()</code> many times almost simultaneously, which eventually distorts the <code>createdTime</code> of the data.</p>
<p>To solve this problem, I edited my code to use Thread (QThread) to save the pickle.</p>
<p>The following code is the edited code.</p>
<pre><code>from PickleDumpingThread import PickleDumpingThread
pickleDumpingThread = PickleDumpingThread()
pickleDumpingThread.start()
class Analyzer():
def __init__(self):
self.cnt = 0
        self.dataDeque = deque(maxlen=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickleDumpingThread.pickleDumpingSignal.emit({
                "action": "savePickle",
"deque": self.dataDeque
})
            # pickle.dump(self.dataDeque, open(file, 'wb'))
</code></pre>
<p>The following code is <code>PickleDumpingThread</code> class.</p>
<pre><code>class PickleDumpingThread(QThread):
    pickleDumpingSignal = pyqtSignal(dict)  # the signal must be declared as a class attribute
def __init__(self):
super().__init__()
self.daemon = True
self.pickleDumpingSignal[dict].connect(self.savePickle)
def savePickle(self, signal_dict):
pickle.dump(signal_dict["deque"], open(file, 'wb'))
</code></pre>
<p>I expected this newly edited code will dramatically decrease the stream blocking time(1~2 seconds), but this code still blocks the stream about 0.5~2 seconds.</p>
<p>It seems like <code>pickleDumpingThread.pickleDumpingSignal.emit(somedict)</code> takes 0.5~2 seconds. </p>
<p>My question is 3 things.</p>
<ol>
<li><p>Is signal emit() function's performance is not good like this?</p></li>
<li><p>Is there any possible alternatives of emit() function in my case?</p></li>
<li><p>Or is there any way to save pickle while not blocking the data stream?
(any suggestion of modifying my code is highly appreciated)</p></li>
</ol>
<p>Thank you for reading this long question!</p>
| 1 | 2016-07-28T10:36:08Z | 38,635,201 | <p>something like this might work</p>
<pre><code>class PickleDumpingThread(QThread):
def __init__(self, data):
super().__init__()
self.data = data
def run(self):
pickle.dump(self.data["deque"], open(file, 'wb'))
self.emit(QtCore.SIGNAL('threadFinished(int)'), self.currentThreadId())
class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
self.threadHandler = {}
def onData(self, data):
self.dataDeque.append({ "data": data, "createdTime": time.time() })
self.cnt += 1
if self.cnt % 10000 == 0:
thread = PickleDumpingThread(self.dataDeque)
self.connect(thread, QtCore.SIGNAL("threadFinished(int)"), self.threadFinished)
thread.start()
self.threadHandler[thread.currentThreadId()] = thread
@QtCore.pyqtSlot(int)
def threadFinished(id):
del self.threadHandler[id]
</code></pre>
<p><code>self.threadHandler</code> is just to know how many threads are still running, you can get rid of it and <code>threadFinished</code> method</p>
| 0 | 2016-07-28T11:35:58Z | [
"python",
"pyqt",
"python-multithreading",
"pyqt5",
"qthread"
] |
How to save data from the data stream while not blocking the stream? (PyQt5 signal emit() performance) | 38,633,914 | <p>I'm developing a PyQt5 application.
In my application, it has a data stream, and its speed is about 5~20 data/sec.</p>
<p>Every time data arrives, the following <code>onData()</code> method of class <code>Analyzer</code> is called. (Following code is simplified code of my app)</p>
<pre><code>class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickle.dump(dataDeque, open(file, 'wb'))
</code></pre>
<p>But the problem is, this dataDeque object is so large(50~150MB) so that dumping the pickle takes about 1~2 seconds.</p>
<p>During that moment(1~2 seconds), requests for calling <code>onData()</code> method got queued, and after 1~2 seconds, the queued requests call lots of <code>onData()</code> method at simultaneously, eventually distorts the <code>createdTime</code> of data.</p>
<p>To solve this problem, I edited my code to use Thread (QThread) to save the pickle.</p>
<p>The following code is the edited code.</p>
<pre><code>from PickleDumpingThread import PickleDumpingThread
pickleDumpingThread = PickleDumpingThread()
pickleDumpingThread.start()
class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickleDumpingThread.pickleDumpingSignal.emit({
"action": savePickle,
"deque": self.dataDeque
})
# pickle.dump(dataDeque, open(file, 'wb'))
</code></pre>
<p>The following code is <code>PickleDumpingThread</code> class.</p>
<pre><code>class PickleDumpingThread(QThread):
def __init__(self):
super().__init__()
self.daemon = True
self.pickleDumpingSignal[dict].connect(self.savePickle)
def savePickle(self, signal_dict):
pickle.dump(signal_dict["deque"], open(file, 'wb'))
</code></pre>
<p>I expected this newly edited code will dramatically decrease the stream blocking time(1~2 seconds), but this code still blocks the stream about 0.5~2 seconds.</p>
<p>It seems like <code>pickleDumpingThread.pickleDumpingSignal.emit(somedict)</code> takes 0.5~2 seconds. </p>
<p>My question is 3 things.</p>
<ol>
<li><p>Is signal emit() function's performance is not good like this?</p></li>
<li><p>Is there any possible alternatives of emit() function in my case?</p></li>
<li><p>Or is there any way to save pickle while not blocking the data stream?
(any suggestion of modifying my code is highly appreciated)</p></li>
</ol>
<p>Thank you for reading this long question!</p>
| 1 | 2016-07-28T10:36:08Z | 38,649,894 | <p>The problem was that I was not using <code>QThread</code> properly.</p>
<p>The result of printing</p>
<pre><code>print("(Current Thread)", QThread.currentThread(),"\n")
print("(Current Thread)", int(QThread.currentThreadId()),"\n")
</code></pre>
<p>noticed me that the <code>PickleDumpingThread</code> I created was running in the main thread, not in some seperated thread.</p>
<p>The reason of this is <code>run()</code> is the only function in <code>QThread</code> that runs in seperate thread, so method like <code>savePickle</code> in <code>QThread</code> run in main thread.</p>
<hr>
<p><strong>First Solution</strong></p>
<p>The proper usage of using signal was using Worker as following.</p>
<pre><code>from PyQt5.QtCore import QThread
class GenericThread(QThread):
def run(self, *args):
# print("Current Thread: (GenericThread)", QThread.currentThread(),"\n")
self.exec_()
class PickleDumpingWorker(QObject):
pickleDumpingSignal = pyqtSignal(dict)
def __init__(self):
super().__init__()
self.pickleDumpingSignal[dict].connect(self.savePickle)
def savePickle(self, signal_dict)
pickle.dump(signal_dict["deque"], open(file, "wb"))
pickleDumpingThread = GenericThread()
pickleDumpingThread.start()
pickleDumpingWorker = PickleDumpingWorker()
pickleDumpingWorker.moveToThread(pickleDumpingThread)
class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickleDumpingWorker.pickleDumpingSignal.emit({
"action": savePickle,
"deque": self.dataDeque
})
# pickle.dump(dataDeque, open(file, 'wb'))
</code></pre>
<p>This solution worked (pickle was dumped in seperate thread), but drawback of it is the data stream still delays about 0.5~1 seconds because of signal emit() function.</p>
<p>I found the best solution for my case is @PYPL 's code, but the code needs a few modifications to work.</p>
<hr>
<p><strong>Final Solution</strong></p>
<p>Final solution is modifying @PYPL 's following code</p>
<pre><code>thread = PickleDumpingThread(self.dataDeque)
thread.start()
</code></pre>
<p>to</p>
<pre><code>self.thread = PickleDumpingThread(self.dataDeque)
self.thread.start()
</code></pre>
<p>The original code have some runtime error. It seems like thread is being garbage collected before it dumps the pickle because there's no reference to that thread after <code>onData()</code> function is finished.</p>
<p>Referencing the thread by adding <code>self.thread</code> solved this issue.</p>
<p>Also, it seems that the old <code>PickleDumpingThread</code> is being garbage collected after new <code>PickleDumpingThread</code> is being referenced by <code>self.thread</code> (because the old <code>PickleDumpingThread</code> loses its reference).</p>
<p>However, this claim is not verified (as I don't know how to view current active thread).. </p>
<p>Whatever, the problem is solved by this solution. </p>
<hr>
<p><strong>EDIT</strong></p>
<p>My final solution have delay too. It takes some amount of time to call Thread.start()..</p>
<p>The real final solution I choosed is running infinite loop in thread and monitor some variables of that thread to determine when to save pickle. Just using infinite loop in thread takes a lots of cpu, so I added time.sleep(0.1) to decrease the cpu usage.</p>
<hr>
<p><strong>FINAL EDIT</strong></p>
<p>OK..My 'real final solution' also had delay..
Even though I moved dumping job to another QThread, the main thread still have delay about pickle dumping time! That was weird.</p>
<p>But I found the reason. The reason was neither emit() performance nor whatever I thought.</p>
<p>The reason was, embarrassingly, <a href="http://stackoverflow.com/questions/18114285/python-what-are-the-differences-between-the-threading-and-multiprocessing-modul/18114882#18114882">python's Global Interpreter Lock prevents two threads in the same process from running Python code at the same time</a>.</p>
<p>So probably I should use <a href="https://docs.python.org/3.4/library/multiprocessing.html" rel="nofollow">multiprocessing</a> module in this case.</p>
<p>I'll post the result after modifying my code to use <a href="https://docs.python.org/3.4/library/multiprocessing.html" rel="nofollow">multiprocessing</a> module.</p>
<p><strong>Edit after using <code>multiprocessing</code> module and future attempts</strong></p>
<p><strong>Using <code>multiprocessing</code> module</strong></p>
<p>Using <code>multiprocessing</code> module solved the issue of running python code concurrently, but the new essential problem arised. The new problem was 'passing shared memory variables between processes takes considerable amount of time' (in my case, passing <code>deque</code> object to child process took 1~2 seconds). I found that this problem cannot be removed as long as I use <code>multiprocessing</code> module. So I gave up to use `multiprocessing module</p>
<p><strong>Possible future attempts</strong></p>
<p><strong>1. Doing only File I/O in <code>QThread</code></strong></p>
<p>The essential problem of pickle dumping is not writing to file, but serializing before writing to file. Python releases GIL when it writes to file, so disk I/O can be done concurrently in <code>QThread</code>. The problem is, serializing <code>deque</code> object to string before writing to file in <code>pickle.dump</code> method takes some amount of time, and during this moment, main thread is going to be blocked because of GIL.</p>
<p>Hence, following approach will effectively decrease the length of delay.</p>
<ol>
<li><p>We somehow stringify the data object every time when <code>onData()</code> is called and push it to deque object</p></li>
<li><p>In <code>PickleDumpingThread</code>, just <code>join</code> the <code>list(deque)</code> object to stringify the <code>deque</code> object.</p></li>
<li><p><code>file.write(stringified_deque_object)</code>. This can be done concurrently.</p></li>
</ol>
<p>The step 1 takes really small time so it almost non-block the main thread.
The step 2 might take some time, but it obviously takes smaller time than serializing python object in <code>pickle.dump</code> method.
The step 3 doesn't block main thread.</p>
<p><strong>2. Using C extension</strong></p>
<p>We can manually release the GIL and reacquire the GIL in our custom C-extension module. But this might be dirty.</p>
<p><strong>3. Porting CPython to Jython or IronPython</strong></p>
<p>Jython and IronPython are other python implementations using Java and C#, respectively. Hence, they don't use GIL in their implementation, which means that <code>thread</code> really works like thread.
One problem is <code>PyQt</code> is not supported in these implementations..</p>
<p><strong>4. Porting to another language</strong></p>
<p>..</p>
<p>Note: </p>
<ol>
<li><p><code>json.dump</code> also took 1~2 seconds for my data.</p></li>
<li><p>Cython is not an option for this case. Although Cython has <code>with nogil:</code>, only non-python object can be accessed in that block (<code>deque</code> object cannot be accessed in that block) and we can't use <code>pickle.dump</code> method in that block.</p></li>
</ol>
| 0 | 2016-07-29T03:03:30Z | [
"python",
"pyqt",
"python-multithreading",
"pyqt5",
"qthread"
] |
How to save data from the data stream while not blocking the stream? (PyQt5 signal emit() performance) | 38,633,914 | <p>I'm developing a PyQt5 application.
In my application, it has a data stream, and its speed is about 5~20 data/sec.</p>
<p>Every time data arrives, the following <code>onData()</code> method of class <code>Analyzer</code> is called. (Following code is simplified code of my app)</p>
<pre><code>class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickle.dump(dataDeque, open(file, 'wb'))
</code></pre>
<p>But the problem is, this dataDeque object is so large(50~150MB) so that dumping the pickle takes about 1~2 seconds.</p>
<p>During that moment(1~2 seconds), requests for calling <code>onData()</code> method got queued, and after 1~2 seconds, the queued requests call lots of <code>onData()</code> method at simultaneously, eventually distorts the <code>createdTime</code> of data.</p>
<p>To solve this problem, I edited my code to use Thread (QThread) to save the pickle.</p>
<p>The following code is the edited code.</p>
<pre><code>from PickleDumpingThread import PickleDumpingThread
pickleDumpingThread = PickleDumpingThread()
pickleDumpingThread.start()
class Analyzer():
def __init__(self):
self.cnt = 0
self.dataDeque = deque(MAXLENGTH=10000)
def onData(self, data):
self.dataDeque.append({
"data": data,
"createdTime": time.time()
})
self.cnt += 1
if self.cnt % 10000 == 0:
pickleDumpingThread.pickleDumpingSignal.emit({
"action": savePickle,
"deque": self.dataDeque
})
# pickle.dump(dataDeque, open(file, 'wb'))
</code></pre>
<p>The following code is <code>PickleDumpingThread</code> class.</p>
<pre><code>class PickleDumpingThread(QThread):
def __init__(self):
super().__init__()
self.daemon = True
self.pickleDumpingSignal[dict].connect(self.savePickle)
def savePickle(self, signal_dict):
pickle.dump(signal_dict["deque"], open(file, 'wb'))
</code></pre>
<p>I expected this newly edited code will dramatically decrease the stream blocking time(1~2 seconds), but this code still blocks the stream about 0.5~2 seconds.</p>
<p>It seems like <code>pickleDumpingThread.pickleDumpingSignal.emit(somedict)</code> takes 0.5~2 seconds. </p>
<p>My question is 3 things.</p>
<ol>
<li><p>Is signal emit() function's performance is not good like this?</p></li>
<li><p>Is there any possible alternatives of emit() function in my case?</p></li>
<li><p>Or is there any way to save pickle while not blocking the data stream?
(any suggestion of modifying my code is highly appreciated)</p></li>
</ol>
<p>Thank you for reading this long question!</p>
| 1 | 2016-07-28T10:36:08Z | 39,421,279 | <p>When the GIL is a problem the workaround is to subdivide the task into chunks in such a way that you can refresh the GUI between chunks. </p>
<p>E.g say you have one huge list of size S to dump, then you could try defining a class that derives from list and overrides getstate to return N subpickle objects, each one an instance of a class say Subpickle, containing S/N items of your list. Each subpickle exists only while pickling, and defines getstate to do 2 things: </p>
<ul>
<li>call qApp.processEvents() on gui, and </li>
<li>return the sublist of S/N items.</li>
</ul>
<p>While unpickling, each subpickle will refresh GUI and take the list of items; at end the total list is recreated in the original object from all the subpickles it will receive in its setstate. </p>
<p>You should abstract out the call to process events in case you want to unpickle the pickle in a console app (or non-pyqt gui). You would do this by defining a class-wide attribute on Subpickle, say process_events, to be None by default; if not None, the setstate calls it as a function. So by default there is no GUI refreshing between the subpickles, unless the app that unpikles sets this attribute to a callable before unpickling starts. </p>
<p>This strategy will give your GUi a chance to redraw during the unpickling process (and with only one thread, if you want).</p>
<p>Implementation depends on your exact data, but here is an example that demonstrates the principles for a large list: </p>
<pre><code>import pickle
class SubList:
on_pickling = None
def __init__(self, sublist):
print('SubList', sublist)
self.data = sublist
def __getstate__(self):
if SubList.on_pickling is not None:
print('SubList pickle state fetch: calling sub callback')
SubList.on_pickling()
return self.data
def __setstate__(self, obj):
if SubList.on_pickling is not None:
print('SubList pickle state restore: calling sub callback')
SubList.on_pickling()
self.data = obj
class ListSubPickler:
def __init__(self, data: list):
self.data = data
def __getstate__(self):
print('creating SubLists for pickling long list')
num_chunks = 10
span = int(len(self.data) / num_chunks)
SubLists = [SubList(self.data[i:(i + span)]) for i in range(0, len(self.data), span)]
return SubLists
def __setstate__(self, subpickles):
self.data = []
print('restoring Pickleable(list)')
for subpickle in subpickles:
self.data.extend(subpickle.data)
print('final', self.data)
def refresh():
# do something: refresh GUI (for example, qApp.processEvents() for Qt), show progress, etc
print('refreshed')
data = list(range(100)) # your large data object
list_pickler = ListSubPickler(data)
SubList.on_pickling = refresh
print('\ndumping pickle of', list_pickler)
pickled = pickle.dumps(list_pickler)
print('\nloading from pickle')
new_list_pickler = pickle.loads(pickled)
assert new_list_pickler.data == data
print('\nloading from pickle, without on_pickling')
SubList.on_pickling = None
new_list_pickler = pickle.loads(pickled)
assert new_list_pickler.data == data
</code></pre>
<p>Easy to apply to dict, or even to make it adapt to the type of data it receives by using isinstance. </p>
| 0 | 2016-09-10T00:13:39Z | [
"python",
"pyqt",
"python-multithreading",
"pyqt5",
"qthread"
] |
Django URL error when using forms | 38,633,980 | <p>I am fairly new to Django and I am totally stuck on what is causing this error. I have done lots of searching but to no avail! Any help would be super appreciated.</p>
<p>The actual form works fine but when I try and submit the input data I get the error:</p>
<pre><code>Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:
^admin/
^$ [name='home']
^patientlist [name='patient_list']
^patientdetail/(?P<pk>\d+)/$ [name='patient_detail']
^add_patient/$ [name='add_patient']
The current URL, spirit3/add_patient/, didn't match any of these.
</code></pre>
<p>My urls.py in the mysite directory looks like: </p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from django.conf.urls import include
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'', include('spirit3.urls')),
]
</code></pre>
<p>My urls.py in the app looks like:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^patientlist', views.patient_list, name='patient_list'),
url(r'^patientdetail/(?P<pk>\d+)/$', views.patient_detail, name='patient_detail'),
url(r'^add_patient/$', views.add_patient, name='add_patient'),
]
</code></pre>
<p>The relevant part of views.py:</p>
<pre><code>def add_patient(request):
if request.method == 'POST':
form = PatientForm(request.POST)
if form.is_valid():
form.save(commit=True)
return redirect('home')
else:
print form.errors
else:
form = PatientForm()
return render(request, 'spirit3/add_patient.html', {'form':form})
</code></pre>
<p>And the html looks like:</p>
<pre><code>{% extends 'spirit3/base.html' %}
{% block content %}
<body>
<h1> Add a Patient </h>
<form action="/spirit3/add_patient/" method="post">
{% csrf_token %}
{{ form }}
<input type="submit" value="Create Patient" />
</form>
</body>
{% endblock %}
</code></pre>
<p>Thanks in advance! :)</p>
| 0 | 2016-07-28T10:39:14Z | 38,634,126 | <p>the form "action" attribute is wrong... seeing your urls configuration you dont have a <code>/spirit3/add_patient/</code> url, I think It is <code>/add_patient/</code></p>
<p>or you could just use a form tag without an "action" it will post to the current page:</p>
<pre><code><form role="form" method="post">
{% csrf_token %}
{{ form }}
<input type="submit" value="Create Patient" />
</form>
</code></pre>
<p>Hope this helps</p>
| 2 | 2016-07-28T10:46:52Z | [
"python",
"django"
] |
Django URL error when using forms | 38,633,980 | <p>I am fairly new to Django and I am totally stuck on what is causing this error. I have done lots of searching but to no avail! Any help would be super appreciated.</p>
<p>The actual form works fine but when I try and submit the input data I get the error:</p>
<pre><code>Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:
^admin/
^$ [name='home']
^patientlist [name='patient_list']
^patientdetail/(?P<pk>\d+)/$ [name='patient_detail']
^add_patient/$ [name='add_patient']
The current URL, spirit3/add_patient/, didn't match any of these.
</code></pre>
<p>My urls.py in the mysite directory looks like: </p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from django.conf.urls import include
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'', include('spirit3.urls')),
]
</code></pre>
<p>My urls.py in the app looks like:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^patientlist', views.patient_list, name='patient_list'),
url(r'^patientdetail/(?P<pk>\d+)/$', views.patient_detail, name='patient_detail'),
url(r'^add_patient/$', views.add_patient, name='add_patient'),
]
</code></pre>
<p>The relevant part of views.py:</p>
<pre><code>def add_patient(request):
if request.method == 'POST':
form = PatientForm(request.POST)
if form.is_valid():
form.save(commit=True)
return redirect('home')
else:
print form.errors
else:
form = PatientForm()
return render(request, 'spirit3/add_patient.html', {'form':form})
</code></pre>
<p>And the html looks like:</p>
<pre><code>{% extends 'spirit3/base.html' %}
{% block content %}
<body>
<h1> Add a Patient </h>
<form action="/spirit3/add_patient/" method="post">
{% csrf_token %}
{{ form }}
<input type="submit" value="Create Patient" />
</form>
</body>
{% endblock %}
</code></pre>
<p>Thanks in advance! :)</p>
| 0 | 2016-07-28T10:39:14Z | 38,635,003 | <p>As pleasedontbelong mentionned, there's indeed no url matching "/spirit3/add_patient/" in your current url config. What you have in tour root urlconf is:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'', include('spirit3.urls')),
]
</code></pre>
<p>This means that urls with path starting with "/admin/" are routed to <code>admin.site.urls</code>, and all other are routed to <code>spirit3.urls</code>. Note that this does NOT in any way <em>prefixes</em> urls defined in <code>spirit3.urls</code> with '/spirit3/', so in your case, all of these urls:</p>
<pre><code>urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^patientlist', views.patient_list, name='patient_list'),
url(r'^patientdetail/(?P<pk>\d+)/$', views.patient_detail, name='patient_detail'),
url(r'^add_patient/$', views.add_patient, name='add_patient'),
]
</code></pre>
<p>will be served directly under the root path "/" - ie, the <code>add_patient</code> view is served by "/add_patient/", not by "/spirit3/add_patient/".</p>
<p>If you want your <code>spirit3</code> app's urls to be routed under "/spirit3/*", you have to specify this prefix in your root urlconf, ie:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^spirit3/', include('spirit3.urls')),
]
</code></pre>
<p>Note that you can use any prefix, it's totally unrelated to your app name. </p>
<p>As a last note: never hardcode urls anywhere, django knows how to reverse an url from it's name (and args / kwargs if any). In a template you do this with the <a href="https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#url" rel="nofollow"><code>{% url %}</code> templatetag</a>, in code you use <a href="https://docs.djangoproject.com/en/1.9/ref/urlresolvers/#reverse" rel="nofollow"><code>django.core.urlresolvers.reverse()</code></a>. </p>
| 0 | 2016-07-28T11:26:22Z | [
"python",
"django"
] |
Tkinter Show webcam view in second window | 38,633,995 | <p>I am using a webcam view and performing analysis on the images taken in. I wish to introduce a functionality where a window can be summoned and the user can look at the webcam view in a new window, should they desire. However my attempt causes buttons in my main window to swap over to the instance when I open up the new window. What's going wrong?</p>
<p>Here is my (working) example:</p>
<pre><code>import Tkinter as tk
import cv2
from PIL import Image, ImageTk
class CamView():
def __init__(self, parent):
self.parent = parent
self.window = tk.Toplevel(parent)
self.window.protocol("WM_DELETE_WINDOW", self.close)
self.show_frame()
def show_frame(self):
imgtk = ImageTk.PhotoImage(image=self.parent.img)
lmain.imgtk = imgtk
lmain.configure(image=imgtk)
def close(self):
self.parent.test_frame = None
self.window.destroy()
root = tk.Tk()
root.bind('<Escape>', lambda e: root.quit())
lmain = tk.Label(root)
lmain.pack()
class Main(tk.Frame):
def __init__(self, parent):
self.test_frame = None
frame = tk.Frame.__init__(self,parent)
a = tk.Label(text='hello!').pack()
b = tk.Button(frame, text='open', command=self.load_window)
b.pack()
width, height = 800, 600
self.cap = cv2.VideoCapture(0)
self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
self.do_stuff()
def do_stuff(self):
_, frame = self.cap.read()
frame = cv2.flip(frame, 1)
cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
self.img = Image.fromarray(cv2image)
if self.test_frame != None:
self.test_frame.show_frame()
lmain.after(10, self.do_stuff)
def load_window(self):
self.test_frame = CamView(self)
control = Main(root)
root.mainloop()
</code></pre>
<p>In my real code, as well as this working example - it seems that when I load the new window, it places the webcam frame in the first window when I don't want it to!</p>
| 1 | 2016-07-28T10:40:06Z | 38,634,447 | <p>Fixed! I was getting confused because of <code>self.lmain</code>. Here is the working code:</p>
<pre><code>import Tkinter as tk
import cv2
from PIL import Image, ImageTk
class CamView():
def __init__(self, parent):
self.parent = parent
self.window = tk.Toplevel(parent)
self.lmain2 = tk.Label(self.window)
self.lmain2.pack()
self.window.protocol("WM_DELETE_WINDOW", self.close)
self.show_frame()
def show_frame(self):
imgtk = ImageTk.PhotoImage(image=self.parent.img)
self.lmain2.imgtk = imgtk
self.lmain2.configure(image=imgtk)
def close(self):
self.parent.test_frame = None
self.window.destroy()
root = tk.Tk()
root.bind('<Escape>', lambda e: root.quit())
class Main(tk.Frame):
def __init__(self, parent):
self.lmain = tk.Label(parent)
self.lmain.pack()
self.test_frame = None
frame = tk.Frame.__init__(self,parent)
a = tk.Label(text='hello!').pack()
b = tk.Button(frame, text='open', command=self.load_window)
b.pack()
width, height = 800, 600
self.cap = cv2.VideoCapture(0)
self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
self.do_stuff()
def do_stuff(self):
_, frame = self.cap.read()
frame = cv2.flip(frame, 1)
cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
self.img = Image.fromarray(cv2image)
if self.test_frame != None:
self.test_frame.show_frame()
self.lmain.after(10, self.do_stuff)
def load_window(self):
if self.test_frame == None:
self.test_frame = CamView(self)
control = Main(root)
root.mainloop()
</code></pre>
| 1 | 2016-07-28T11:01:08Z | [
"python",
"opencv",
"tkinter"
] |
pandas pivot_table apply aggfunc last instance | 38,634,120 | <p>I have made a pivot table with various columns and have applied aggfunc like np.sum and first and count. I want last instance of corresponding value of a column from a dataframe. Is there any function that could serve this purpose?</p>
| 1 | 2016-07-28T10:46:37Z | 38,634,152 | <p>I think you can use </p>
<pre><code>aggfunc='last'
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({ 'Age':[35, 37, 40, 29, 31, 26, 28],
'City':['B', 'Ch', 'LA', 'Ch', 'B', 'B', 'Ch'],
'Position':['M','M','M','P', 'P','M','M']})
print (df)
Age City Position
0 35 B M
1 37 Ch M
2 40 LA M
3 29 Ch P
4 31 B P
5 26 B M
6 28 Ch M
print (df.pivot_table(index='Position', columns='City', values='Age', aggfunc='last'))
City B Ch LA
Position
M 26.0 28.0 40.0
P 31.0 29.0 NaN
</code></pre>
| 1 | 2016-07-28T10:48:09Z | [
"python",
"pandas"
] |
Django unit test on custom model manager method | 38,634,138 | <p>I am pretty new in python and django.</p>
<p>I have model with custom model manager with a method ,where i am raising
<code>ValidationError</code> on some exceptions.now i want to test this custom manager method.but don't know how to catch <code>ValidationError</code> or anyother error in terms of testing django model's customs manager method.</p>
<p>My scenario is depicted below,</p>
<pre><code>class CustomModelManager(model.Manager):
def custom_method(self):
#for some exception
raise ValidationError('a sample validation error')
class SampleModel(models.Model):
###fields
objects = CustomModelManager()
</code></pre>
<p>i have tried the following unit test,but its not working ,</p>
<pre><code>def test_samle_model(self):
issues = Issues.objects.custom_method(field1='wrong field')###this will raise that validationError
self.assertEqualValidationError, 'a sample validation error')
</code></pre>
<p>is it possible to catch 'any error' to test? or am i missing something?</p>
| 0 | 2016-07-28T10:47:25Z | 38,634,660 | <p>use <a href="https://docs.python.org/2.7/library/unittest.html#unittest.TestCase.assertRaises" rel="nofollow">assertRaises</a></p>
<pre><code>with self.assertRaises(ValidationError):
issues = Issues.objects.custom_method(field1='wrong field')
</code></pre>
| 1 | 2016-07-28T11:10:24Z | [
"python",
"django",
"unit-testing"
] |
Django unit test on custom model manager method | 38,634,138 | <p>I am pretty new in python and django.</p>
<p>I have model with custom model manager with a method ,where i am raising
<code>ValidationError</code> on some exceptions.now i want to test this custom manager method.but don't know how to catch <code>ValidationError</code> or anyother error in terms of testing django model's customs manager method.</p>
<p>My scenario is depicted below,</p>
<pre><code>class CustomModelManager(model.Manager):
def custom_method(self):
#for some exception
raise ValidationError('a sample validation error')
class SampleModel(models.Model):
###fields
objects = CustomModelManager()
</code></pre>
<p>i have tried the following unit test,but its not working ,</p>
<pre><code>def test_samle_model(self):
issues = Issues.objects.custom_method(field1='wrong field')###this will raise that validationError
self.assertEqualValidationError, 'a sample validation error')
</code></pre>
<p>is it possible to catch 'any error' to test? or am i missing something?</p>
| 0 | 2016-07-28T10:47:25Z | 38,634,681 | <p>You want <a href="https://docs.python.org/2.7/library/unittest.html#unittest.TestCase.assertRaises" rel="nofollow">`assertRaises'</a>:</p>
<pre><code>def test_sample_model(self):
with self.assertRaises(ValidationError):
issues = Issues.objects.custom_method(field1='wrong field')
</code></pre>
| 1 | 2016-07-28T11:11:17Z | [
"python",
"django",
"unit-testing"
] |
Pandas: create dataframe using value_counts | 38,634,235 | <p>I have data</p>
<pre><code>age
32
16
39
39
23
36
29
26
43
34
35
50
29
29
31
42
53
</code></pre>
<p>I need to get smth like this
<a href="http://i.stack.imgur.com/3Trd2.png" rel="nofollow"><img src="http://i.stack.imgur.com/3Trd2.png" alt="image"></a>
I can get </p>
<p><code>df.age.value_counts()</code>
and </p>
<pre><code>100. * df.age.value_counts() / len(df.age)
</code></pre>
<p>But how can I union this and give name to columns?</p>
| -1 | 2016-07-28T10:51:04Z | 38,634,767 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow"><code>cut</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow"><code>agg</code></a>:</p>
<pre><code>#helper df with min and max ages, necessary add category Total
df1 = pd.DataFrame({'G':['14 yo and younger','15-19','20-24','25-29','30-34',
'35-39','40-44','45-49','50-54','55-59','60-64','65+','Total'],
'Min':[0, 15,20,25,30,35,40,45,50,55,60,65,np.nan],
'Max':[14,19,24,29,34,39,44,49,54,59,64,120, np.nan]})
print (df1)
G Max Min
0 14 yo and younger 14.0 0.0
1 15-19 19.0 15.0
2 20-24 24.0 20.0
3 25-29 29.0 25.0
4 30-34 34.0 30.0
5 35-39 39.0 35.0
6 40-44 44.0 40.0
7 45-49 49.0 45.0
8 50-54 54.0 50.0
9 55-59 59.0 55.0
10 60-64 64.0 60.0
11 65+ 120.0 65.0
12 Total NaN NaN
</code></pre>
<pre><code>cutoff = np.hstack([np.array(df1.Min[0]), df1.Max.values])
labels = df1.G.values
df['Groups'] = pd.cut(df.age, bins=cutoff, labels=labels, right=True, include_lowest=True)
print (df)
age Groups
0 32 30-34
1 16 15-19
2 39 35-39
3 39 35-39
4 23 20-24
5 36 35-39
6 29 25-29
7 26 25-29
8 43 40-44
9 34 30-34
10 35 35-39
11 50 50-54
12 29 25-29
13 29 25-29
14 31 30-34
15 42 40-44
16 53 50-54
</code></pre>
<pre><code>df = (df.groupby('Groups')['Groups']
        .agg({'Total':[len, lambda x: len(x)/df.shape[0] * 100 ]})
        .rename(columns={'len':'N', '<lambda>':'%'}))
#last Total row
df.ix['Total'] = df.sum()
print (df)
Total
N %
Groups
14 yo and younger 0.0 0.000000
15-19 1.0 5.882353
20-24 1.0 5.882353
25-29 4.0 23.529412
30-34 3.0 17.647059
35-39 4.0 23.529412
40-44 2.0 11.764706
45-49 0.0 0.000000
50-54 2.0 11.764706
55-59 0.0 0.000000
60-64 0.0 0.000000
65+ 0.0 0.000000
Total 17.0 100.000000
</code></pre>
<p>EDIT1:</p>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a> scales better:</p>
<pre><code>df1 = df.groupby('Groups').size().to_frame()
df1.columns = pd.MultiIndex.from_tuples([('Total', 'N')])
df1.ix[:,('Total','%')] = 100 * df1.ix[:,('Total','N')] / df.shape[0]
df1.ix['Total'] = df1.sum()
print (df1)
Total
N %
Groups
14 yo and younger 0.0 0.000000
15-19 1.0 5.882353
20-24 1.0 5.882353
25-29 4.0 23.529412
30-34 3.0 17.647059
35-39 4.0 23.529412
40-44 2.0 11.764706
45-49 0.0 0.000000
50-54 2.0 11.764706
55-59 0.0 0.000000
60-64 0.0 0.000000
65+ 0.0 0.000000
Total 17.0 100.000000
</code></pre>
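<p>If you only need counts and percentages side by side (without the age bins), you can also concat the two Series from the question and name the columns in one go. A short sketch, assuming pandas is available:</p>

```python
import pandas as pd

age = pd.Series([32, 16, 39, 39, 23])

# 'N' holds the raw counts, '%' the share of each age in percent
summary = pd.concat(
    [age.value_counts(), 100 * age.value_counts(normalize=True)],
    axis=1, keys=['N', '%'])
```
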
| 1 | 2016-07-28T11:15:18Z | [
"python",
"pandas"
] |
PySpark Data Frame - give an ID to sequence of same values | 38,634,248 | <p>I have a dataset in a pyspark job that looks a bit like this:</p>
<pre><code>frame_id direction_change
1 False
2 False
3 False
4 True
5 False
</code></pre>
<p>I want to add a "track" counter to each row so that all the frames between direction changes have the same value. For example, the output I want looks like this:</p>
<pre><code>frame_id direction_change track
1 False 1
2 False 1
3 False 1
4 True 2
5 False 2
</code></pre>
<p>I have been able to do this with Pandas with the following action:</p>
<pre><code>frames['track'] = frames['direction_change'].cumsum()
</code></pre>
<p>But I can't find an equivalent way to do it in Spark data frames.
Any help would be really appreciated.</p>
| 0 | 2016-07-28T10:52:02Z | 38,635,139 | <p>Long story short there is no efficient way to do in PySpark with <code>DataFrames</code> alone. One could be tempted to use window functions like this:</p>
<pre><code>from pyspark.sql.functions import col, sum as sum_
from pyspark.sql.window import Window
w = Window().orderBy("frame_id")
df.withColumn("change", 1 + sum_(col("direction_change").cast("long")).over(w))
</code></pre>
<p>but this is inefficient and won't scale. It is possible to use lower-level APIs as shown in <a href="http://stackoverflow.com/q/35154267/1560062">How to compute cumulative sum using Spark</a>, but in Python it requires moving out of the <code>Dataset</code> / <code>DataFrame</code> API and using plain RDDs.</p>
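<p>For reference, the two-pass idea behind such an RDD-based cumulative sum (total the flags per partition, turn those totals into per-partition starting offsets, then do a local running sum) can be sketched in plain Python, with lists of <code>(frame_id, direction_change)</code> tuples standing in for RDD partitions:</p>

```python
def cumulative_track(partitions):
    # pass 1: number of direction changes in each partition
    per_part = [sum(flag for _, flag in part) for part in partitions]

    # exclusive prefix sums give each partition its starting track number
    offsets = [1]  # tracks are 1-based, matching the "1 + sum" in the answer
    for s in per_part[:-1]:
        offsets.append(offsets[-1] + s)

    # pass 2: local running sum inside each partition, shifted by its offset
    result = []
    for part, start in zip(partitions, offsets):
        track = start
        for frame_id, flag in part:
            track += flag  # True counts as 1
            result.append((frame_id, track))
    return result
```

<p>In Spark, the two passes would typically be two <code>mapPartitions</code>-style jobs: one to collect the partial sums on the driver, one to apply the offsets.</p>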
| 2 | 2016-07-28T11:33:13Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
cx_freeze build doesn't allow program quitting | 38,634,265 | <p>I've tried with <code>exit(0)</code> in tkinter, <code>pygame.quit()</code> in pygame and other things in my cx_freeze build, but an error pops up when using <code>pygame.quit()</code> (which stops the whole program, which is not intentional), and when I use <code>exit(0)</code> in tkinter it won't allow me to quit. It simply does nothing. Is there something I'm missing?</p>
| 1 | 2016-07-28T10:52:52Z | 38,636,660 | <p>It is odd, it should be correct but you can try this...</p>
<pre><code>import sys
sys.exit(0)
</code></pre>
| 0 | 2016-07-28T12:40:42Z | [
"python",
"cx-freeze"
] |
XML to pandas: Export to csv and put the children in the same row | 38,634,290 | <p>I am in the "munging stage", trying to convert an XML file to csv with pandas. I finally did it with the code below:</p>
<pre><code>for event, element in etree.iterparse(path):
data.append({element.tag: element.text})
df = pd.DataFrame(data,columns=['NOME_DISTRITO', 'NR_CPE', 'MARCA_EQUIPAMENTO',
'NR_EQUIPAMENTO','VALOR_LEITURA','REGISTADOR',
'TIPO_REGISTADOR','TIPO_DADOS_RECOLHIDOS','FACTOR_MULTIPLICATIVO_FINAL',
'NR_DIGITOS_INTEIRO','UNIDADE_MEDIDA','TIPO_LEITURA','MOTIVO_LEITURA',
'ESTADO_LEITURA','DATA_LEITURA','HORA_LEITURA'])
df.to_csv('/lecture.csv')
</code></pre>
<p>This is the XML file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><DISTRITO xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<NOME_DISTRITO>BRAGANCA</NOME_DISTRITO>
<CPE>
<NR_CPE>PT000200003724</NR_CPE>
<LEITURA>
<MARCA_EQUIPAMENTO>102</MARCA_EQUIPAMENTO>
<NR_EQUIPAMENTO>30806746</NR_EQUIPAMENTO>
<VALOR_LEITURA>16858</VALOR_LEITURA>
<REGISTADOR>001</REGISTADOR>
<TIPO_REGISTADOR>S</TIPO_REGISTADOR>
<TIPO_DADOS_RECOLHIDOS>1</TIPO_DADOS_RECOLHIDOS>
<FACTOR_MULTIPLICATIVO_FINAL>1</FACTOR_MULTIPLICATIVO_FINAL>
<NR_DIGITOS_INTEIRO>5</NR_DIGITOS_INTEIRO>
<UNIDADE_MEDIDA>kWh</UNIDADE_MEDIDA>
<TIPO_LEITURA>2</TIPO_LEITURA>
<MOTIVO_LEITURA>2</MOTIVO_LEITURA>
<ESTADO_LEITURA>A</ESTADO_LEITURA>
<DATA_LEITURA>20151218</DATA_LEITURA>
<HORA_LEITURA>083800</HORA_LEITURA>
</LEITURA>
<LEITURA>
<MARCA_EQUIPAMENTO>102</MARCA_EQUIPAMENTO>
<NR_EQUIPAMENTO>30806746</NR_EQUIPAMENTO>
<VALOR_LEITURA>16925</VALOR_LEITURA>
<REGISTADOR>001</REGISTADOR>
<TIPO_REGISTADOR>S</TIPO_REGISTADOR>
<TIPO_DADOS_RECOLHIDOS>1</TIPO_DADOS_RECOLHIDOS>
<FACTOR_MULTIPLICATIVO_FINAL>1</FACTOR_MULTIPLICATIVO_FINAL>
<NR_DIGITOS_INTEIRO>5</NR_DIGITOS_INTEIRO>
<UNIDADE_MEDIDA>kWh</UNIDADE_MEDIDA>
<TIPO_LEITURA>1</TIPO_LEITURA>
<MOTIVO_LEITURA>1</MOTIVO_LEITURA>
<ESTADO_LEITURA>A</ESTADO_LEITURA>
<DATA_LEITURA>20160119</DATA_LEITURA>
<HORA_LEITURA>203000</HORA_LEITURA>
</LEITURA>
</CPE></code></pre>
</div>
</div>
</p>
<p>And this is the final result in Excel:</p>
<p>NOME_DISTRITO NR_CPE MARCA_EQUIPAMENTO NR_EQUIPAMENTO VALOR_LEITURA REGISTADOR TIPO_REGISTADOR TIPO_DADOS_RECOLHIDOS FACTOR_MULTIPLICATIVO_FINAL NR_DIGITOS_INTEIRO UNIDADE_MEDIDA TIPO_LEITURA MOTIVO_LEITURA ESTADO_LEITURA DATA_LEITURA HORA_LEITURA
BRAGANCA </p>
<pre><code>PT000200003724
102
30806746
16925
1
S
1
1
5
kWh
1
1
A
20160119
203000
</code></pre>
<p>All I want is to have this data in the same row after the column "MARCA_EQUIPAMENTO", but as you can see this is like a "shape staircase row". Is there anything that I can do with pandas or excel to fix and have in a nice manner in excel? </p>
<p>NOME_DISTRITO NR_CPE MARCA_EQUIPAMENTO NR_EQUIPAMENTO VALOR_LEITURA REGISTADOR TIPO_REGISTADOR TIPO_DADOS_RECOLHIDOS FACTOR_MULTIPLICATIVO_FINAL NR_DIGITOS_INTEIRO UNIDADE_MEDIDA TIPO_LEITURA MOTIVO_LEITURA ESTADO_LEITURA DATA_LEITURA HORA_LEITURA
BRAGANCA<br>
PT0002000021673724JE<br>
102 30806746 16858 1 S 1 1 5 kWh 2 2 A 20151218 83800
102 30806746 16925 1 S 1 1 5 kWh 1 1 A 20160119 203000</p>
| 1 | 2016-07-28T10:53:40Z | 38,649,358 | <p>Consider running conditionals in the <code>iterparse()</code>. Because <code><NOME_DISTRITO></code> and <code><NR_CPE></code> lie outside the repeated <code><LEITURA></code> elements, save their values in scalars to be added to the <code>inner{}</code> dictionary for appending to dataframe:</p>
<pre><code>import xml.etree.ElementTree as et
import pandas as pd
path ='/path/to/Input.xml'
data = []
for (ev, el) in et.iterparse(path):
if el.tag == 'NOME_DISTRITO': nome = el.text
if el.tag == 'NR_CPE': nr = el.text
if el.tag == "LEITURA":
inner = {}
inner['NOME_DISTRITO'] = nome
inner['NR_CPE'] = nr
for i in el:
inner[i.tag] = i.text
data.append(inner)
df = pd.DataFrame(data)
print(df)
# DATA_LEITURA ESTADO_LEITURA FACTOR_MULTIPLICATIVO_FINAL HORA_LEITURA \
# 0 20151218 A 1 083800
# 1 20160119 A 1 203000
# MARCA_EQUIPAMENTO MOTIVO_LEITURA NOME_DISTRITO NR_CPE \
# 0 102 2 BRAGANCA PT000200003724
# 1 102 1 BRAGANCA PT000200003724
# NR_DIGITOS_INTEIRO NR_EQUIPAMENTO REGISTADOR TIPO_DADOS_RECOLHIDOS \
# 0 5 30806746 001 1
# 1 5 30806746 001 1
# TIPO_LEITURA TIPO_REGISTADOR UNIDADE_MEDIDA VALOR_LEITURA
# 0 2 S kWh 16858
# 1 1 S kWh 16925
</code></pre>
| 0 | 2016-07-29T01:57:56Z | [
"python",
"excel",
"pandas"
] |
Empty Div return with Xpath or Css Selector Using Scrapy | 38,634,495 | <p>I'm using Scrapy to crawl a web page which contains a specific article.</p>
<p>I'm trying to get the information stored inside the div with the class "return". The problem is that the div always comes back empty when I use Scrapy XPath or CSS selectors.</p>
<p><strong>The Div that i'm trying to extract:</strong> </p>
<pre><code><div class="return">
<p><strong>Conditionnement : </strong></p>
<p class="one-product-detail">2 colis :<br>
L178xl106xH80&nbsp;72kg<br>L178xl112xH80&nbsp;60kg<br>
<span itemprop="weight" alt="3fin" class="hidden" hidden="">132kg</span></p>
</div>
</code></pre>
<p><strong>My Spider Code:</strong></p>
<pre><code>import scrapy
from alinea.items import AlineaItem
class AlineaSpider(scrapy.Spider):
name = "alinea"
start_urls = [
"http://www.alinea.fr/",
]
def parse(self, response):
# ref = input("Enter Item Reference ?\n")
# 25321050
# link = "http://www.alinea.fr/alinea_fredhopper/catalogSearch_result/products/search/" + str(ref)
link = "http://www.alinea.fr/alinea_fredhopper/catalogSearch_result/products/search/" + str(25321050)
print(link)
return scrapy.Request(link,
callback=self.parse_page2)
def parse_page2(self, response):
self.logger.info("Visited %s", response.url)
for sel in response.xpath('//li[contains(@itemprop,"title")]/text()'):
print("**************")
print("Description")
print(sel.extract())
print("**************")
# print("------------------------------------------------------------------")
#
# for sel in response.xpath('//*[@class="delivery"]'):
#
# print("**************")
# print("Details")
# print(sel.extract())
# print("**************")
print("------------------------------------------------------------------")
for sel in response.css('[class="return"]'):
print("**************")
print("Details")
print(sel.extract())
print("**************")
</code></pre>
<p><strong>My Terminal Log:</strong></p>
<pre><code>2016-07-28 12:57:21 [alinea] INFO: Visited http://www.alinea.fr/orca-canape-angle-gauche-droit-convertible-gris.html
**************
Description
Orca - Canapé CONVERTIBLE d'angle gauche ou droit gris
**************
------------------------------------------------------------------
**************
Details
<div class="return">
</div>
**************
</code></pre>
| 0 | 2016-07-28T11:02:45Z | 38,648,774 | <p>The <a href="http://www.alinea.fr/orca-canape-angle-gauche-droit-convertible-gris.html" rel="nofollow">page</a> you visited has no content for that <code>div</code> at all, so the empty result is expected.</p>
<p>If you change to other pages, for example <a href="http://www.alinea.fr/orca-canape-angle-droit-gris-fonce.html" rel="nofollow">http://www.alinea.fr/orca-canape-angle-droit-gris-fonce.html</a>, you will see the <code>div</code> is there and not empty.</p>
<p>Output from the shell: <code>scrapy shell 'http://www.alinea.fr/orca-canape-angle-droit-gris-fonce.html'</code></p>
<pre><code>In [1]: response.xpath('//div[@class="return"]').extract()
Out[1]: [u'<div class="return">\n\n \n<p><strong>Conditionnement : </strong></p>\n<p class="one-product-detail">\n\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t2 colis :<br>\n\t\t\t\t\t\t\t\t\t L178xl106xH80\xa055kg<br>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t L178xl112xH80\xa053kg<br>\t\t\t\t\t\t<span itemprop="weight" alt="3fin" hidden class="hidden">108kg</span></p>\n \n</div>']
</code></pre>
<p>If you want the text, you use <code>//text()</code> instead, as <code>/text()</code> only gives you text directly under <code>div</code>, in your case whitespace.</p>
<pre><code>In [2]: response.xpath('//div[@class="return"]/text()').extract()
Out[2]: [u'\n\n \n', u'\n', u'\n \n']
In [3]: [x.strip() for x in response.xpath('//div[@class="return"]//text()').extract()]
Out[3]:
[u'',
u'Conditionnement :',
u'',
u'2 colis :',
u'L178xl106xH80\xa055kg',
u'L178xl112xH80\xa053kg',
u'',
u'108kg',
u'']
</code></pre>
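<p>The <code>/text()</code> vs <code>//text()</code> distinction also exists outside Scrapy. In the standard library's ElementTree, <code>elem.text</code> only covers text directly under a node, while <code>itertext()</code> walks all descendants. A stdlib-only sketch on a simplified version of the div:</p>

```python
import xml.etree.ElementTree as ET

div = ET.fromstring(
    '<div class="return">'
    '<p><strong>Conditionnement : </strong></p>'
    '<p>2 colis :<span>132kg</span></p>'
    '</div>')

direct = (div.text or '').strip()   # text directly under <div>: nothing here
all_text = [t.strip() for t in div.itertext() if t.strip()]
```
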
| 0 | 2016-07-29T00:33:41Z | [
"python",
"web-scraping",
"scrapy"
] |
Check GitHub credentials validity | 38,634,656 | <p>I am trying to verify GitHub credentials with Python.</p>
<p>I have tried this:</p>
<pre><code>import urllib2, base64
username = "test@example.com"
password = "password"
request = urllib2.Request("https://github.com/")
base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
if result.code == 200:
print "Success"
else:
print "Error"
</code></pre>
<p>But it always returns <code>Success</code>, even with wrong password. What am I doing wrong?</p>
| 1 | 2016-07-28T11:10:17Z | 38,635,980 | <p>I should've been using <code>https://api.github.com/user</code> instead of <code>https://github.com/</code>.</p>
<p>Nevertheless, I will use the <code>requests</code> 3rd-party library, which makes this code concise:</p>
<pre><code>import requests
print requests.get(
'https://api.github.com/user',
auth=('username', 'password')
)
</code></pre>
| 0 | 2016-07-28T12:10:56Z | [
"python",
"github",
"urllib2"
] |
beautifulsoup installation in pyCharm with two versions of python installed | 38,634,685 | <p>I use PyCharm to write Python, at first I configurated PyCharm with Python 2.7.12, and I installed the Beautiful Soup package under the 2.7.12 environment. </p>
<p>However, I now have installed python 3.5.2 in PyCharm and I want to use Beautiful Soup in PyCharm with 3.5.2, but I can't import bs4 because the interpreter cant find the Beautiful Soup package which is in 2.7.12 package folder. </p>
<p>So I tried to <code>pip install bs4</code> in 3.5.2 console, but it tells me that the pkg has already been installed in 2.7.12 folder. So how can I import Beautiful Soup in 3.5.2 now in PyCharm?</p>
<p><a href="http://i.stack.imgur.com/FhxKS.png" rel="nofollow"><img src="http://i.stack.imgur.com/FhxKS.png" alt="enter image description here"></a></p>
| 0 | 2016-07-28T11:11:32Z | 38,634,744 | <p>To be sure to install your package for the right version of python, you can use pip as a module :</p>
<pre><code>python3.5 -m pip install [package]
</code></pre>
<p>So for bs4 :</p>
<pre><code>python3.5 -m pip install beautifulsoup4
</code></pre>
| 0 | 2016-07-28T11:14:04Z | [
"python",
"pycharm"
] |
Catch an error from command line Python | 38,634,746 | <p>I need to catch an error from the command line without printing the error message on the screen. When this occurs I need to run another command.</p>
<p>This is what I do now:</p>
<pre><code>hyst_cmd = "si viewhistory ..."
process = subprocess.Popen(hyst_cmd, stdout=subprocess.PIPE)
hyst = process.stdout.read().splitlines()
</code></pre>
<p>When I do this for some projects I receive an error message, on the screen.</p>
<p>Sorry for my english!</p>
| 1 | 2016-07-28T11:14:07Z | 38,634,986 | <p>According to the official documentation, the most common exception for Popen in subprocess is <strong>OSError</strong>.</p>
<p>To catch the error, you can simply try the following approach:</p>
<pre><code>hyst_cmd = "si viewhistory ..."
try:
process = subprocess.Popen(hyst_cmd, stdout=subprocess.PIPE)
hyst = process.stdout.read().splitlines()
except OSError:
    pass  # write log file or take other action
</code></pre>
<p>For more information, you can check the link below: <br>
<a href="https://docs.python.org/3/library/subprocess.html#exceptions" rel="nofollow">Subprocess Exception</a></p>
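<p>Note that <code>OSError</code> only covers failures to launch the process. If the command starts but then fails, its error text goes to stderr; capturing that stream keeps it off the screen and lets the return code decide whether to run the fallback command. A sketch, using a deliberately failing child process in place of the real <code>si</code> command:</p>

```python
import subprocess
import sys

# hypothetical stand-in for "si viewhistory ...": a child that fails with code 3
cmd = [sys.executable, '-c', 'import sys; sys.stderr.write("boom"); sys.exit(3)']

process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()

if process.returncode != 0:
    # nothing reached the screen; `err` holds the message,
    # and the alternative command could be launched here
    fallback_needed = True
else:
    fallback_needed = False
```
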
| 0 | 2016-07-28T11:25:40Z | [
"python",
"command"
] |
Extracting parameters from astropy.modeling Gaussian2D | 38,634,782 | <p>I have managed to use <code>astropy.modeling</code> to model a 2D Gaussian over my image and the parameters it has produced to fit the image seem reasonable. However, I need to run the 2D Gaussian over thousands of images because we are interested in examining the mean x and y of the model and also the x and y standard deviations over our images. The model output looks like this:</p>
<pre><code>m2
<Gaussian2D(amplitude=0.0009846091239480168, x_mean=30.826676737477573, y_mean=31.004045976953222, x_stddev=2.5046722491074536, y_stddev=3.163048479350727, theta=-0.0070295894129793896)>
</code></pre>
<p>I can also tell you this:</p>
<pre><code>type(m2)
<class 'astropy.modeling.functional_models.Gaussian2D'>
Name: Gaussian2D
Inputs: (u'x', u'y')
Outputs: (u'z',)
Fittable parameters: ('amplitude', 'x_mean', 'y_mean', 'x_stddev', 'y_stddev', 'theta')
</code></pre>
<p>What I need is a method to extract the parameters of the model, namely:</p>
<pre><code>x_mean
y_mean
x_stddev
y_stddev
</code></pre>
<p>I am not familiar with this form of output, so I am really stuck on how to extract the parameters.</p>
| 1 | 2016-07-28T11:15:52Z | 38,635,135 | <p>The models have attributes you can access:</p>
<pre><code>from astropy.modeling import models
g2d = models.Gaussian2D(1,2,3,4,5)
g2d.amplitude.value # 1.0
g2d.x_mean.value # 2.0
g2d.y_mean.value # 3.0
g2d.x_stddev.value # 4.0
g2d.y_stddev.value # 5.0
</code></pre>
<p>You need to extract these values after you fitted the model but you can access them in the same way: <code>.<name>.value</code>.</p>
<p>You can also extract them in one go but then you need to keep track which parameter is in which position:</p>
<pre><code>g2d.parameters # array([ 1., 2., 3., 4., 5., 0.])
# Amplitude
g2d.parameters[0] # 1.0
# x-mean
g2d.parameters[1] # 2.0
# ...
</code></pre>
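<p>When processing thousands of images, it can be handy to collect the named values into a dict. Astropy models also expose <code>param_names</code>, which lines up element-for-element with <code>parameters</code>, so the two can simply be zipped; the zipping itself is plain Python (the tuples below stand in for the fitted model's attributes):</p>

```python
# stand-ins for m2.param_names and m2.parameters of a fitted Gaussian2D
param_names = ('amplitude', 'x_mean', 'y_mean', 'x_stddev', 'y_stddev', 'theta')
parameters = (1.0, 2.0, 3.0, 4.0, 5.0, 0.0)

params = dict(zip(param_names, parameters))
wanted = {k: params[k] for k in ('x_mean', 'y_mean', 'x_stddev', 'y_stddev')}
```
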
| 1 | 2016-07-28T11:33:08Z | [
"python",
"modeling",
"astropy"
] |
Python Regular Expression for stopping in between | 38,634,788 | <p>This is my string:</p>
<blockquote>
<p>age: adult/child gender: male/female <strong>age range: 3 - 5 years/5 - 8 years/8 - 12 yrs/12 years and up</strong> product type: costume character: animals & insects material: polyester theme: animal age start: 3 years age end: adult features: -face is seen through the mouth of the zebra. -zipper closure in the front and a tail in the back. -set includes: jumpsuit and head mask. -animal collection. age: -adult/child. gender: -male/female. age group: -3 - 5 years/5 - 8 years/8 - 12 years/12 yrs and up</p>
</blockquote>
<p>I want to catch only the bold part with a Python regex, but I am not able to do it. The regex I used is not working properly. My regex is:</p>
<pre><code>\bage[a-z]?\b.*\d+\s(?:years[a-z]?|yrs|month[a-z]+)
</code></pre>
<p>It was giving weird results, catching unwanted strings.</p>
| 0 | 2016-07-28T11:16:01Z | 38,634,959 | <p>You could try this pattern using <code>re.search()</code>:</p>
<pre><code>import re
string = 'age: adult/child  gender: male/female  age range: 3 - 5 years/5 - 8 years/8 - 12 years/12 years and up  product type: costume  character: animals &amp;amp; insects  material: polyester  theme: animal  age start: 3 years  age end: adult  features: -face is seen through the mouth of the zebra. -zipper closure in the front and a tail in the back. -set includes: jumpsuit and head mask. -animal collection.  age: -adult/child.  gender: -male/female.  age range: -3 - 5 years/5 - 8 years/8 - 12 years/12 years and up'
match = re.search(r'(age range:.*?)  ', string)
if match:
print(match.group(1))
</code></pre>
<p>Output:</p>
<pre>
age range: 3 - 5 years/5 - 8 years/8 - 12 years/12 years and up
</pre>
<p>This relies on the assumption that each item of data is separated by <em>two</em> spaces as shown in the given string. The pattern says to match the string <code>age match:</code> followed by zero or more characters (non-greedy), followed by exactly 2 spaces.</p>
| 0 | 2016-07-28T11:24:21Z | [
"python",
"regex"
] |
Python Regular Expression for stopping in between | 38,634,788 | <p>This is my string:</p>
<blockquote>
<p>age: adult/child gender: male/female <strong>age range: 3 - 5 years/5 - 8 years/8 - 12 yrs/12 years and up</strong> product type: costume character: animals & insects material: polyester theme: animal age start: 3 years age end: adult features: -face is seen through the mouth of the zebra. -zipper closure in the front and a tail in the back. -set includes: jumpsuit and head mask. -animal collection. age: -adult/child. gender: -male/female. age group: -3 - 5 years/5 - 8 years/8 - 12 years/12 yrs and up</p>
</blockquote>
<p>I want to catch only the bold part with a Python regex, but I am not able to do it. The regex I used is not working properly. My regex is:</p>
<pre><code>\bage[a-z]?\b.*\d+\s(?:years[a-z]?|yrs|month[a-z]+)
</code></pre>
<p>It was giving weird results, catching unwanted strings.</p>
| 0 | 2016-07-28T11:16:01Z | 38,634,993 | <p>You can use the following:</p>
<pre><code>\bage range:\s*(?:\d+\s*-\s*\d+\s*y(?:ea)?rs/)+\d+\s*y(?:ea)?rs and up\b
</code></pre>
<p>See <a href="https://regex101.com/r/eK1nM9/1" rel="nofollow">Demo</a></p>
| 0 | 2016-07-28T11:26:02Z | [
"python",
"regex"
] |
Python Regular Expression for stopping in between | 38,634,788 | <p>This is my string:</p>
<blockquote>
<p>age: adult/child gender: male/female <strong>age range: 3 - 5 years/5 - 8 years/8 - 12 yrs/12 years and up</strong> product type: costume character: animals & insects material: polyester theme: animal age start: 3 years age end: adult features: -face is seen through the mouth of the zebra. -zipper closure in the front and a tail in the back. -set includes: jumpsuit and head mask. -animal collection. age: -adult/child. gender: -male/female. age group: -3 - 5 years/5 - 8 years/8 - 12 years/12 yrs and up</p>
</blockquote>
<p>I want to catch only the bold part with a Python regex, but I am not able to do it. The regex I used is not working properly. My regex is:</p>
<pre><code>\bage[a-z]?\b.*\d+\s(?:years[a-z]?|yrs|month[a-z]+)
</code></pre>
<p>It was giving weird results, catching unwanted strings.</p>
| 0 | 2016-07-28T11:16:01Z | 38,635,150 | <p>If "product type" is always following your desired string, then you can use <a href="http://www.regular-expressions.info/lookaround.html" rel="nofollow">lookahead assertion</a>:</p>
<pre><code>>>> r = re.search(r'(age range:.*?)(?= product type)', s)
>>> r.group(1)
'age range: 3 - 5 years/5 - 8 years/8 - 12 years/12 years and up'
</code></pre>
| 0 | 2016-07-28T11:33:53Z | [
"python",
"regex"
] |
Failing to import itertools in Python 3.5.2 | 38,634,810 | <p>I am new to Python. I am trying to import izip_longest from itertools. But I am not able to find the import "itertools" in the preferences in Python interpreter. I am using Python 3.5.2. It gives me the below error-</p>
<pre><code>from itertools import izip_longest
ImportError: cannot import name 'izip_longest'
</code></pre>
<p>Please let me know what is the right course of action. I have tried Python 2.7 too and ended up with same problem. Do I need to use lower version Python.</p>
| 0 | 2016-07-28T11:16:52Z | 38,634,822 | <p><code>izip_longest</code> was <em>renamed</em> to <a href="https://docs.python.org/3/library/itertools.html#itertools.zip_longest" rel="nofollow"><code>zip_longest</code></a> in Python 3 (note, no <code>i</code> at the start), import that instead:</p>
<pre><code>from itertools import zip_longest
</code></pre>
<p>and use that name in your code.</p>
<p>If you need to write code that works both on Python 2 and 3, catch the <code>ImportError</code> to try the other name, then rename:</p>
<pre><code>try:
# Python 3
from itertools import zip_longest
except ImportError:
# Python 2
from itertools import izip_longest as zip_longest
# use the name zip_longest
</code></pre>
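<p>For reference, a quick demonstration of what <code>zip_longest</code> does once imported under either name: the shorter iterable is padded with <code>fillvalue</code>:</p>

```python
from itertools import zip_longest  # Python 3 name

pairs = list(zip_longest([1, 2, 3], ['a'], fillvalue=None))
# pairs is [(1, 'a'), (2, None), (3, None)]
```
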
| 1 | 2016-07-28T11:17:37Z | [
"python",
"python-2.7",
"pycharm",
"python-3.5",
"itertools"
] |
Use Flask to convert a Pandas dataframe to CSV and serve a download | 38,634,862 | <p>I have a Pandas dataframe in my Flask app that I want to return as a CSV file.</p>
<pre><code>return Response(df.to_csv())
</code></pre>
<p>The problem is that the output appears in the browser instead of downloading as a separate file. How can I change that?</p>
<p>I tried the following as well but it just gave empty output.</p>
<pre><code>response = make_response(df.to_csv())
response.headers['Content-Type'] = 'text/csv'
return Response(response)
</code></pre>
| 2 | 2016-07-28T11:19:45Z | 38,635,222 | <p>Set the <code>Content-Disposition</code> to tell the browser to download the file instead of showing its content on the page.</p>
<pre><code>resp = make_response(df.to_csv())
resp.headers["Content-Disposition"] = "attachment; filename=export.csv"
resp.headers["Content-Type"] = "text/csv"
return resp
</code></pre>
| 3 | 2016-07-28T11:37:01Z | [
"python",
"pandas",
"flask"
] |
Check if program runs in Debug mode | 38,634,988 | <p>I use the PyCharm IDE for Python programming.</p>
<p>Is there a way to check whether I'm in debugging mode or not when I run my program?</p>
<p>I use pyplot as plt and want a figure to be shown only when I debug my program. Yes, I could have a global boolean <em>debug</em> which is set by myself, but I am looking for a more elegant solution.</p>
<p>Thank you for your support!</p>
| 1 | 2016-07-28T11:25:45Z | 38,636,826 | <p>It is better not to use other programming platforms....
use this instead (If you like!):-
<a href="https://www.python.org/downloads/windows/" rel="nofollow">https://www.python.org/downloads/windows/</a> (if you are using Windows)
<a href="https://www.python.org/downloads/mac-osx/" rel="nofollow">https://www.python.org/downloads/mac-osx/</a> (if you are using Mac OS)</p>
| 0 | 2016-07-28T12:47:23Z | [
"python",
"python-2.7",
"debugging",
"pycharm"
] |
Check if program runs in Debug mode | 38,634,988 | <p>I use the PyCharm IDE for Python programming.</p>
<p>Is there a way to check whether I'm in debugging mode or not when I run my program?</p>
<p>I use pyplot as plt and want a figure to be shown only when I debug my program. Yes, I could have a global boolean <em>debug</em> which is set by myself, but I am looking for a more elegant solution.</p>
<p>Thank you for your support!</p>
| 1 | 2016-07-28T11:25:45Z | 38,637,774 | <p>According to the documentation, the <code>settrace</code> / <code>gettrace</code> functions can be used to implement a Python debugger:</p>
<blockquote>
<pre><code>sys.settrace(tracefunc)
</code></pre>
<p>Set the system's trace function, which allows
you to implement a Python source code debugger in Python. The function
is thread-specific; for a debugger to support multiple threads, it
must be registered using <code>settrace()</code> for each thread being debugged.</p>
</blockquote>
<p>However, these methods may not be available in all implementations:</p>
<blockquote>
<p><strong>CPython implementation detail</strong>: The <code>settrace()</code> function is intended
only for implementing debuggers, profilers, coverage tools and the
like. Its behavior is part of the implementation platform, rather than
part of the language definition, and thus may not be available in all
Python implementations.</p>
</blockquote>
<p>You could use the following snippet in order to check if someone is debugging your code:</p>
<pre><code>import sys
gettrace = getattr(sys, 'gettrace', None)
if gettrace is None:
print('No sys.gettrace')
elif gettrace():
print('Hmm, Big Debugger is watching me')
else:
print("Let's do something interesting")
print(1 / 0)
</code></pre>
<p>This one works for pdb:</p>
<pre><code>$ python -m pdb main.py
> /home/soon/Src/Python/main/main.py(3)<module>()
-> import sys
(Pdb) step
> /home/soon/Src/Python/main/main.py(6)<module>()
-> gettrace = getattr(sys, 'gettrace', None)
(Pdb) step
> /home/soon/Src/Python/main/main.py(8)<module>()
-> if gettrace is None:
(Pdb) step
> /home/soon/Src/Python/main/main.py(10)<module>()
-> elif gettrace():
(Pdb) step
> /home/soon/Src/Python/main/main.py(11)<module>()
-> print('Hmm, Big Debugger is watching me')
(Pdb) step
Hmm, Big Debugger is watching me
--Return--
> /home/soon/Src/Python/main/main.py(11)<module>()->None
-> print('Hmm, Big Debugger is watching me')
</code></pre>
<p>And PyCharm:</p>
<pre><code>/usr/bin/python3 /opt/pycharm-professional/helpers/pydev/pydevd.py --multiproc --qt-support --client 127.0.0.1 --port 34192 --file /home/soon/Src/Python/main/main.py
pydev debugger: process 17250 is connecting
Connected to pydev debugger (build 143.1559)
Hmm, Big Debugger is watching me
Process finished with exit code 0
</code></pre>
| 1 | 2016-07-28T13:25:05Z | [
"python",
"python-2.7",
"debugging",
"pycharm"
] |
Finding the optimal combination of algorithms in an sklearn machine learning toolchain | 38,635,075 | <p>In sklearn it is possible to create a pipeline to optimize the complete tool chain of a machine learning setup, as shown in the following sample:</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
estimators = [('reduce_dim', PCA()), ('svm', SVC())]
clf = Pipeline(estimators)
</code></pre>
<p>Now a pipeline represents by definition a serial process. But what if I want to compare different algorithms on the same level of a pipeline? Say I want to try another feature transformation algorithm in addition to PCA and another machine learning algorithm such as trees in addition to SVM, and get the best of the 4 possible combinations? Can this be represented by some kind of parallel pipe or is there a meta algorithm for this in sklearn?</p>
| 0 | 2016-07-28T11:29:35Z | 38,636,748 | <p>A pipeline is something sequential:</p>
<pre><code>Data -> Process input with algorithm A -> Process input with algorithm B -> ...
</code></pre>
<p>Something parallel, and I also think what you're looking for is called an "Ensemble". For example, in a classification context you can train several SVMs but on different features:</p>
<pre><code> |-SVM A gets features x_1, ... x_n -> vote for class 1 -|
DATA -|-SVM B gets features x_{n+1}, ..., x_m -> vote for class 1 -| -> Classify
|-SVM C gets features x_{m+1}, ..., x_p -> vote for class 0 -|
</code></pre>
<p>In this small example 2 of 3 classifiers voted for class 1, the 3rd voted for class 0. So by majority vote, the ensemble classifies the data as class 1. (Here, the classifiers are executed in parallel)</p>
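<p>The hard majority vote from the small example above fits in a few lines of plain Python (for real use, sklearn's <code>VotingClassifier</code> implements this, plus soft voting):</p>

```python
from collections import Counter

def majority_vote(votes):
    """Return the most common class label among the individual votes."""
    return Counter(votes).most_common(1)[0][0]

# SVM A and B vote for class 1, SVM C votes for class 0
prediction = majority_vote([1, 1, 0])
```
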
<p>Of course, you can have several pipelines in an ensemble.</p>
<p>See <a href="http://scikit-learn.org/stable/modules/ensemble.html" rel="nofollow">sklearn's Ensemble methods</a> for a pretty good summary.</p>
<p>A short image summary I made a while ago for different ensemble methods:</p>
<p><img src="https://martin-thoma.com/images/2015/12/ml-ensemble-learning.png" alt=""></p>
| 1 | 2016-07-28T12:44:21Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Finding the optimal combination of algorithms in an sklearn machine learning toolchain | 38,635,075 | <p>In sklearn it is possible to create a pipeline to optimize the complete tool chain of a machine learning setup, as shown in the following sample:</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
estimators = [('reduce_dim', PCA()), ('svm', SVC())]
clf = Pipeline(estimators)
</code></pre>
<p>Now a pipeline is by definition a serial process. But what if I want to compare different algorithms at the same stage of a pipeline? Say I want to try another feature transformation algorithm in addition to PCA, and another machine learning algorithm such as trees in addition to SVM, and get the best of the 4 possible combinations? Can this be represented by some kind of parallel pipe, or is there a meta-algorithm for this in sklearn? </p>
| 0 | 2016-07-28T11:29:35Z | 38,636,833 | <p>The pipeline is not a parallel process. It's rather sequential (Pipe<strong>line</strong>) - see <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow">here</a> the documentation, mentionning : </p>
<blockquote>
<p>Sequentially apply a list of transforms and a final estimator. [...] The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters.</p>
</blockquote>
<p>Thus, you should create two pipelines by changing just one parameter. Then you can compare the results and keep the better one. If you want to, say, compare more estimators, you can automate the process.</p>
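<p>As a side note, in recent scikit-learn versions (the sketch below assumes ≥ 0.18, where <code>sklearn.model_selection</code> exists and pipeline steps can be swapped via parameters) you can hand the whole comparison to <code>GridSearchCV</code>, because each pipeline step is itself a settable parameter:</p>

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([('reduce_dim', PCA()), ('clf', SVC())])
# 'clf' is a step name, so the grid can substitute whole estimators
# in addition to tuning their hyperparameters.
param_grid = {
    'reduce_dim__n_components': [2, 3],
    'clf': [SVC(), RandomForestClassifier(n_estimators=10, random_state=0)],
}
grid = GridSearchCV(pipe, param_grid, cv=3)
grid.fit(X, y)
print(grid.best_params_)
```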
<p>Here is a simple example :</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from sklearn.decomposition import PCA
clf1 = SVC(kernel='rbf')
clf2 = RandomForestClassifier()
feat_selec1 = SelectKBest(f_regression)
feat_selec2 = PCA()
for selec in [('SelectKBest', feat_selec1), ('PCA', feat_selec2)]:
    for clf in [('SVC', clf1), ('RandomForest', clf2)]:
        pipe = Pipeline([selec, clf])
        # Do your training / testing cross-validation here
| 1 | 2016-07-28T12:47:34Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Define range for index for lists in for loops | 38,635,088 | <p>I'm a complete beginner in <em>Python</em>. I was coding the "minimum difference between array elements" problem. The idea was to sort the array and then find the difference between adjacent elements, to find the one with the minimum difference. </p>
<p>However, I wonder how to define the range for the index of the list in for loops so that my index doesn't exceed <code>size-2</code>. </p>
<pre><code>import sys
a=[34,56,78,32,97,123]
a,size=sorted(a),len(a)
min=sys.maxint
for i,x in enumerate(a): # Need a range for index i from 0 to size-2
if(abs(a[i]-a[i+1])<min):
min=abs(a[i]-a[i+1])
print min
</code></pre>
| 3 | 2016-07-28T11:30:29Z | 38,635,124 | <p>You can pass a slice of <code>a</code> with the specified start and stop indices to <code>enumerate</code>:</p>
<pre><code>for i, x in enumerate(a[:size-1]):
...
</code></pre>
<p><code>i</code> will run from <code>0</code> to <code>size-2</code></p>
<hr>
<p>On a side note, comments in Python start with <code>#</code> and not <code>//</code></p>
<hr>
<p>You can achieve the same results by using <code>min</code> on a generator expression created from the <code>zip</code> of <code>a</code> and its <em>advanced slice</em>:</p>
<pre><code>minimum = min(abs(i - j) for i, j in zip(a, a[1:]))
</code></pre>
<p>Also, be careful to not use the name <code>min</code> as this already shadows the builtin <code>min</code>. Something you obviously don't want.</p>
| 3 | 2016-07-28T11:32:51Z | [
"python",
"arrays"
] |
Define range for index for lists in for loops | 38,635,088 | <p>I'm a complete beginner in <em>Python</em>. I was coding the "minimum difference between array elements" problem. The idea was to sort the array and then find the difference between adjacent elements, to find the one with the minimum difference. </p>
<p>However, I wonder how to define the range for the index of the list in for loops so that my index doesn't exceed <code>size-2</code>. </p>
<pre><code>import sys
a=[34,56,78,32,97,123]
a,size=sorted(a),len(a)
min=sys.maxint
for i,x in enumerate(a): # Need a range for index i from 0 to size-2
if(abs(a[i]-a[i+1])<min):
min=abs(a[i]-a[i+1])
print min
</code></pre>
| 3 | 2016-07-28T11:30:29Z | 38,635,288 | <p>If you really want to use manual indexing, then dont use <code>enumerate()</code> and just create a <code>range()</code> (or <code>xrange()</code> if Python 2.x) of the right size, ie:</p>
<pre><code>for i in xrange(len(a) - 2):
# code here
</code></pre>
<p>Now you don't have to manually take care of indexes at all - if you want to iterate over <code>(a[x], a[x+1])</code> pairs all you need is <code>zip()</code>:</p>
<pre><code>for x, y in zip(a, a[1:]):
if abs(x - y) < min:
min = abs(x - y)
</code></pre>
<p><code>zip(seq1, seq2)</code> will build a list of <code>(seq1[i], seq2[i])</code> tuples (stopping when the smallest sequence or iterator is exhausted). Using <code>a[1:]</code> as the second sequence, we will have a list of <code>(a[i], a[i+1])</code> tuples. Then we use tuple unpacking to assign each of the tuple's values to <code>x</code> and <code>y</code>.</p>
<p><strong>But</strong> you can also just use the builtin <code>min(iterable)</code> function instead:</p>
<pre><code>min(abs(x - y) for x, y in zip(a, a[1:]))
</code></pre>
<p>which is the pythonic way to get the smallest value of any sequence or iterable. </p>
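<p>Putting it together with the list from the question as a quick sanity check (once sorted, the smallest gap is between 32 and 34):</p>

```python
a = sorted([34, 56, 78, 32, 97, 123])   # [32, 34, 56, 78, 97, 123]
smallest = min(abs(x - y) for x, y in zip(a, a[1:]))
print(smallest)  # 2, from the adjacent pair (32, 34)
```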
<p>Note that with Python 2.x, if your real list is actually way bigger, you'll benefit from using <code>itertools.izip</code> instead of <code>zip</code></p>
<p>As as side note, using <code>min</code> (actually using any builtin name) as a variable name is possibly not a good idea as it shadows the builtin in the current namespace. If you get a <code>TypeError: 'int' object is not callable</code> message trying this code you'll know why...</p>
| 5 | 2016-07-28T11:39:49Z | [
"python",
"arrays"
] |
Define range for index for lists in for loops | 38,635,088 | <p>I'm a complete beginner in <em>Python</em>. I was coding the "minimum difference between array elements" problem. The idea was to sort the array and then find the difference between adjacent elements, to find the one with the minimum difference. </p>
<p>However, I wonder how to define the range for the index of the list in for loops so that my index doesn't exceed <code>size-2</code>. </p>
<pre><code>import sys
a=[34,56,78,32,97,123]
a,size=sorted(a),len(a)
min=sys.maxint
for i,x in enumerate(a): # Need a range for index i from 0 to size-2
if(abs(a[i]-a[i+1])<min):
min=abs(a[i]-a[i+1])
print min
</code></pre>
| 3 | 2016-07-28T11:30:29Z | 38,635,291 | <p>You could just <a class='doc-link' href="http://stackoverflow.com/documentation/python/289/indexing-and-slicing#t=201607281137123998325">slice</a> <code>a</code>. Then <code>enumerate(a[:-1])</code> will ignore one element at the end of <code>a</code>.</p>
<p>You don't even need to compute <code>size</code> anymore!</p>
<p>More so, as you don't use <code>x</code> in <code>i, x</code>, you don't need <code>enumerate</code>. Just use <code>range</code> or <code>xrange</code>:</p>
<pre><code>for i in xrange(len(a)-1):
....
</code></pre>
<p>See differences between <code>range</code> and <code>xrange</code> <a class='doc-link' href="http://stackoverflow.com/documentation/python/809/compatibility-between-python-3-and-python-2/2840/differences-between-range-and-xrange-functions#t=201607281149531032462">here</a></p>
| 4 | 2016-07-28T11:39:50Z | [
"python",
"arrays"
] |
How to create Json Web Token to User login in Django Rest Framework? | 38,635,092 | <p>I want to add JWT to my User Login API for authentication. What should I do with the code below? I currently create a token manually, but that needs to change. How can I integrate JWT? Thank you.</p>
<p><strong>Serializers</strong></p>
<pre><code>class UserLoginSerializer(ModelSerializer):
token = CharField(allow_blank=True, read_only=True)
class Meta:
model = User
fields = [
'username',
'password',
'token',
]
extra_kwargs = {"password":
{"write_only": True}
}
def validate(self, data):
user_obj = None
username = data.get("username", None)
password = data["password"]
if not username:
raise ValidationError("Kullanıcı adı gerekli.")
user = User.objects.filter(
Q(username=username)
).distinct()
user = user.exclude(email__isnull=True).exclude(email__iexact='')
if user.exists() and user.count() == 1:
user = user.first()
else:
raise ValidationError("Böyle bir Kullanıcı Adı yoktur.")
if user_obj:
if not user_obj.check_password(password):
raise ValidationError("Tekrar deneyiniz.")
data["token"] = "asdasdasdasd"
return data
</code></pre>
<p><strong>Views</strong></p>
<pre><code>class UserLoginAPIView(APIView):
permission_classes = [AllowAny]
serializer_class = UserLoginSerializer
def post(self, request, *args, **kwargs):
data = request.data
serializer = UserLoginSerializer(data=data)
if serializer.is_valid(raise_exception=True):
new_data = serializer.data
return Response(new_data, status=HTTP_200_OK)
return Response(serializer.errors, status=HTTP_400_BAD_REQUEST)
</code></pre>
<p><strong>Settings</strong></p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.BasicAuthentication',
'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
),
</code></pre>
<p>}</p>
<p><strong>urls</strong></p>
<pre><code>urlpatterns = [
url(r'^login/$', UserLoginAPIView.as_view(), name='login'),
url(r'^api-token-auth/', obtain_jwt_token),
url(r'^api-token-refresh/', refresh_jwt_token),
url(r'^api-token-verify/', verify_jwt_token),
url(r'^register/$', UserCreateAPIView.as_view(), name='register'),
</code></pre>
<p>]</p>
| 0 | 2016-07-28T11:30:48Z | 38,640,350 | <p>You can try doing something like this:</p>
<p>First, install <code>pip install djangorestframework-jwt</code>.</p>
<p><strong>settings.py:</strong></p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.BasicAuthentication',
'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
),
}
def jwt_response_payload_handler(token, user, request, *args, **kwargs):
data = {
"token": token,
"user": "{}".format(user.id),
"userid": user.id,
"active": user.is_active
}
return data
JWT_AUTH = {
'JWT_RESPONSE_PAYLOAD_HANDLER': 'jwt_response_payload_handler',
'JWT_EXPIRATION_DELTA': datetime.timedelta(days=180),
'JWT_ALLOW_REFRESH': False,
'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=30),
'JWT_SECRET_KEY': 'generate_a_secret_key',
}
</code></pre>
<p><strong>serializers.py:</strong></p>
<pre><code>from rest_framework import serializers
from rest_framework.authtoken.models import Token
class TokenSerializer(serializers.ModelSerializer):
class Meta:
model = Token
fields = ('key',)
</code></pre>
<p>Anywhere <code>authentication_classes</code> applies in your <strong>views</strong>, you'll want to add:</p>
<pre><code>from rest_framework_jwt.authentication import JSONWebTokenAuthentication
</code></pre>
<p>I hope that helps you!</p>
| 0 | 2016-07-28T15:12:40Z | [
"python",
"django",
"django-rest-framework"
] |
How to create Json Web Token to User login in Django Rest Framework? | 38,635,092 | <p>I want to add JWT to my User Login API for authentication. What should I do with the code below? I currently create a token manually, but that needs to change. How can I integrate JWT? Thank you.</p>
<p><strong>Serializers</strong></p>
<pre><code>class UserLoginSerializer(ModelSerializer):
token = CharField(allow_blank=True, read_only=True)
class Meta:
model = User
fields = [
'username',
'password',
'token',
]
extra_kwargs = {"password":
{"write_only": True}
}
def validate(self, data):
user_obj = None
username = data.get("username", None)
password = data["password"]
if not username:
raise ValidationError("Kullanıcı adı gerekli.")
user = User.objects.filter(
Q(username=username)
).distinct()
user = user.exclude(email__isnull=True).exclude(email__iexact='')
if user.exists() and user.count() == 1:
user = user.first()
else:
raise ValidationError("Böyle bir Kullanıcı Adı yoktur.")
if user_obj:
if not user_obj.check_password(password):
raise ValidationError("Tekrar deneyiniz.")
data["token"] = "asdasdasdasd"
return data
</code></pre>
<p><strong>Views</strong></p>
<pre><code>class UserLoginAPIView(APIView):
permission_classes = [AllowAny]
serializer_class = UserLoginSerializer
def post(self, request, *args, **kwargs):
data = request.data
serializer = UserLoginSerializer(data=data)
if serializer.is_valid(raise_exception=True):
new_data = serializer.data
return Response(new_data, status=HTTP_200_OK)
return Response(serializer.errors, status=HTTP_400_BAD_REQUEST)
</code></pre>
<p><strong>Settings</strong></p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.BasicAuthentication',
'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
),
</code></pre>
<p>}</p>
<p><strong>urls</strong></p>
<pre><code>urlpatterns = [
url(r'^login/$', UserLoginAPIView.as_view(), name='login'),
url(r'^api-token-auth/', obtain_jwt_token),
url(r'^api-token-refresh/', refresh_jwt_token),
url(r'^api-token-verify/', verify_jwt_token),
url(r'^register/$', UserCreateAPIView.as_view(), name='register'),
</code></pre>
<p>]</p>
 | 0 | 2016-07-28T11:30:48Z | 38,735,964 | <p>Out of the box, you can use 'rest_framework_jwt.views.obtain_jwt_token' for user login. It creates a token. Then you go to the RestrictedView and use the token for authentication. Basically, that's all.</p>
| 0 | 2016-08-03T06:29:04Z | [
"python",
"django",
"django-rest-framework"
] |
Exception when training data in Predictionio | 38,635,166 | <p>I am trying to deploy a Recommendation Engine as described in the <a href="http://docs.prediction.io/templates/recommendation/quickstart/" rel="nofollow">quick start guide</a>.
I completed the steps up to building the engine. Now I want to train the Recommendation Engine. I did as the quick start guide says (executing <code>pio train</code>), and got a lengthy error log that I can't paste in full here, so below are the first few rows of the error.</p>
<pre><code>[INFO] [Console$] Using existing engine manifest JSON at /home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/manifest.json
[INFO] [Runner$] Submission command: /home/PredictionIO/PredictionIO-0.9.6/vendors/spark-1.5.1-bin-hadoop2.6/bin/spark-submit --class io.prediction.workflow.CreateWorkflow --jar/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/target/scala-2.10/template-scala-parallel-recommendation_2.10-0.1-SNAPSHOT.jar,file:/home/PredictionIO/PredictionIO-0.9.6/bndation/target/scala-2.10/template-scala-parallel-recommendation-assembly-0.1-SNAPSHOT-deps.jar --files file:/home/PredictionIO/PredictionIO-0.9.6/conf/log4j.properties --driver/home/PredictionIO/PredictionIO-0.9.6/conf:/home/PredictionIO/PredictionIO-0.9.6/lib/postgresql-9.4-1204.jdbc41.jar:/home/PredictionIO/PredictionIO-0.9.6/lib/mysql-connector-jav file:/home/PredictionIO/PredictionIO-0.9.6/lib/pio-assembly-0.9.6.jar --engine-id qokYFr4rwibijNjabXeVSQKKFrACyrYZ --engine-version ed29b3e2074149d483aa85b6b1ea35a52dbbdb9a --et file:/home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/engine.json --verbosity 0 --json-extractor Both --env PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pFS_BASEDIR=/root/.pio_store,PIO_HOME=/home/PredictionIO/PredictionIO-0.9.6,PIO_FS_ENGINESDIR=/root/.pio_store/engines,PIO_STORAGE_SOURCES_PGSQL_URL=jdbc:postgresql://localhost/pGE_REPOSITORIES_METADATA_SOURCE=PGSQL,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=PGSQL,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_PGSQL_PASSWORD=pio,PIURCES_PGSQL_TYPE=jdbc,PIO_FS_TMPDIR=/root/.pio_store/tmp,PIO_STORAGE_SOURCES_PGSQL_USERNAME=pio,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDGSQL,PIO_CONF_DIR=/home/PredictionIO/PredictionIO-0.9.6/conf
[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(MyApp3,None))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[WARN] [Utils] Your hostname, test-digin resolves to a loopback address: 127.0.1.1; using 192.168.2.191 instead (on interface p5p1)
[WARN] [Utils] Set SPARK_LOCAL_IP if you need to bind to another address
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.2.191:56574]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: duo.DataSource@6088451e
[INFO] [Engine$] Preparator: duo.Preparator@1642eeae
[INFO] [Engine$] AlgorithmList: List(duo.ALSAlgorithm@a09303)
[INFO] [Engine$] Data sanity check is on.
[INFO] [Engine$] duo.TrainingData does not support data sanity check. Skipping check.
[INFO] [Engine$] duo.PreparedData does not support data sanity check. Skipping check.
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.StackOverflowError
java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
scala.collection.immutable.$colon$colon.writeObject(List.scala:379)
sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
</code></pre>
<p>what can I do to overcome this isssue?</p>
| 5 | 2016-07-28T11:34:18Z | 38,655,950 | <p>Your error says <code>java.lang.StackOverflowError</code> for that you can reduce the <code>numIterations parameter</code> in <code>engine.json</code> file. Refer <a href="http://stackoverflow.com/questions/34133172/predictionio-engine">this</a>.</p>
| 1 | 2016-07-29T10:01:14Z | [
"python",
"apache-spark",
"recommendation-engine",
"data-science",
"predictionio"
] |
TensorFlow: Implementing Spearman Distance as the Objective Function | 38,635,182 | <p>In order to make my issue reproducible, I have generated the following <code>.csv</code> file using iris flower data set (10 arbitrary rows, all columns standard normalized) and a minimal neural network model (which predicts petal width using sepal length, sepal width and petal length) by modifying an MNIST example that I found on the internet. Scroll down to see my question!</p>
<blockquote>
<p>iris.csv</p>
</blockquote>
<pre><code>"Sepal.Length","Sepal.Width","Petal.Length","Petal.Width","Species"
0.0551224773430978,-0.380319414627833,-0.335895230408602,-0.548226210538025,"versicolor"
1.48830688826362,-1.01418510567422,1.37931445678426,0.614677872421422,"virginica"
0.606347250774068,0.887411967464943,0.450242542888127,0.780807027129915,"virginica"
-0.606347250774067,-1.64805079672061,0.235841331989019,0.44854871771293,"virginica"
1.15757202420504,-1.01418510567422,0.950512034986045,0.44854871771293,"virginica"
-1.92928670700839,0.887411967464943,-2.33697319880027,-2.37564691233144,"setosa"
0.38585734140168,0.253546276418555,0.307308402288722,1.1130653365469,"virginica"
-0.826837160146455,0.253546276418555,-0.478829371008007,-0.548226210538025,"versicolor"
0.0551224773430978,1.52127765851133,-0.192961089809197,-0.21596790112104,"versicolor"
-0.385857341401679,0.253546276418555,0.021440121089911,0.282419563004437,"virginica"
</code></pre>
<blockquote>
<p>nn.py</p>
</blockquote>
<pre><code>import pandas as pd
import numpy as np
import tensorflow as tf
import scipy.stats
# Import iris data
data = pd.read_csv("iris.csv")
input = data[["Sepal.Length", "Sepal.Width", "Petal.Length"]]
target = data[["Petal.Width"]]
# Parameters
learning_rate = 0.001
training_epochs = 6000
# Network Parameters
n_hidden_1 = 5 # 1st layer number of features
n_hidden_2 = 5 # 2nd layer number of features
n_input = 3 # data input
n_output = 1 # data output
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_output])
# Create model
def multilayer_network(x, weights, biases):
# Hidden layer with TanH activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.tanh(layer_1)
# Hidden layer with TanH activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.tanh(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_output]))
}
# Construct model
pred = multilayer_network(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: input, y: target})
# Display logs per epoch step
if epoch % 1000 == 0:
print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c)
print "Optimization Finished!"
</code></pre>
<p>Here is an example training session result:</p>
<pre><code>$ python nn.py
Epoch: 0001 cost= 3.000185966
Epoch: 1001 cost= 0.031734336
Epoch: 2001 cost= 0.000614795
Epoch: 3001 cost= 0.000008422
Epoch: 4001 cost= 0.000000057
Epoch: 5001 cost= 0.000000000
Optimization Finished!
</code></pre>
<hr>
<p>My idea was to replace Mean Square Error with the Spearman distance, which I recently learnt about, as my objective function. Following the definition:</p>
<p><a href="http://i.stack.imgur.com/2Awye.png" rel="nofollow"><img src="http://i.stack.imgur.com/2Awye.png" alt="FORMULA"></a></p>
<p>I wrote a function that returns the ranking of a vector:</p>
<pre><code>import scipy.stats
def rank(vector):
return scipy.stats.rankdata(vector, method="min")
</code></pre>
<p>Using TensorFlow's method <code>py_func</code>, I defined my cost tensor as follows.</p>
<pre><code>pred = tf.to_float(tf.py_func(rank, [pred], [tf.int64])[0])
y = tf.to_float(tf.py_func(rank, [y], [tf.int64])[0])
cost = tf.reduce_mean(tf.square(y-pred))
</code></pre>
<p>However, this gave me the error</p>
<pre><code>ValueError: No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7f67ffe4ee90>), (None, <tensorflow.python.ops.variables.Variable object at 0x7f66ed3c4990>), (None, <tensorflow.python.ops.variables.Variable object at 0x7f66ed357310>), (None, <tensorflow.python.ops.variables.Variable object at 0x7f66ed357190>), (None, <tensorflow.python.ops.variables.Variable object at 0x7f66ed380350>), (None, <tensorflow.python.ops.variables.Variable object at 0x7f66ed3801d0>))
</code></pre>
<p>I do not understand what the underlying problem is. Any direction you could provide me would be greatly appreciated!</p>
| 0 | 2016-07-28T02:56:13Z | 38,636,465 | <p>Your error comes from the fact that <code>tf.py_func</code> has no gradient defined. </p>
<p>Anyway, as @user20160 said in the comments, no gradient even exists for the operation <code>rank</code>, so this is not a loss on which you can train your algorithm directly.</p>
| 2 | 2016-07-28T12:31:37Z | [
"python",
"ranking",
"tensorflow"
] |
Inserting elements from array to the string | 38,635,199 | <p>I have two variables:</p>
<pre><code>query = "String: {} Number: {}"
param = ['text', 1]
</code></pre>
<p>I need to merge these two variables and keep the quote marks in case of string and numbers without quote marks.</p>
<p>result= <code>"String: 'text' Number: 1"</code></p>
<p>I tried to use query.format(param), but it removes the quote marks around the 'text'. How can I solve that?</p>
| 2 | 2016-07-28T11:35:53Z | 38,635,274 | <p>You can use <code>repr</code> on each item in <code>param</code> within a generator expression, then use <code>format</code> to add them to your string.</p>
<pre><code>>>> query = "String: {} Number: {}"
>>> param = ['text', 1]
>>> query.format(*(repr(i) for i in param))
"String: 'text' Number: 1"
</code></pre>
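<p>Note that <code>str.format</code> can also apply <code>repr</code> for you via the <code>!r</code> conversion, if you are free to change the template:</p>

```python
query = "String: {!r} Number: {!r}"
param = ['text', 1]
print(query.format(*param))  # String: 'text' Number: 1
```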
| 7 | 2016-07-28T11:39:10Z | [
"python",
"arrays",
"string"
] |
Matching a number within a number in a python List | 38,635,408 | <p>A queryset returned a list:</p>
<pre><code>list1=[2856,28564,1245,232856]
</code></pre>
<p>When I try to find if a number exists in the above list by writing:</p>
<pre><code>num=2856
if num in list1:
</code></pre>
<p>it matches 2856, 28564, and 232856. How can I make sure it matches only 2856 and not the rest?</p>
<p>Sorry I am new to Python, and could not find the solution. Apologies if I am asking a duplicate question.</p>
| -2 | 2016-07-28T11:45:14Z | 38,635,491 | <p>It does not behave like you suggest. In fact your current syntax gives your desired output.</p>
<pre><code>>>> list1=[2856,28564,1245,232856]
>>> num=2856
>>> if num in list1:
... print(num)
...
2856
>>>
</code></pre>
<p>This is the behavior you are suggesting. Note how it is necessary to convert the integers to strings in this case.</p>
<pre><code>>>> list1 = [2856,28564,1245,232856]
>>> num = 2856
>>> [x for x in list1 if str(num) in str(x)]
[2856, 28564, 232856]
</code></pre>
| 3 | 2016-07-28T11:49:04Z | [
"python"
] |
Matching a number within a number in a python List | 38,635,408 | <p>A queryset returned a list:</p>
<pre><code>list1=[2856,28564,1245,232856]
</code></pre>
<p>When I try to find if a number exists in the above list by writing:</p>
<pre><code>num=2856
if num in list1:
</code></pre>
<p>it matches 2856, 28564, and 232856. How can I make sure it matches only 2856 and not the rest?</p>
<p>Sorry I am new to Python, and could not find the solution. Apologies if I am asking a duplicate question.</p>
| -2 | 2016-07-28T11:45:14Z | 38,635,530 | <p>You are checking if the value of num is inside of the list. Therefore, the expression </p>
<pre><code>num in list1
</code></pre>
<p>is True if num is equal to any item of list1. To make it only True when num is equal to the first (here: 2856) you have to write:</p>
<pre><code>num == list1[0]
</code></pre>
| -1 | 2016-07-28T11:50:54Z | [
"python"
] |
Matching a number within a number in a python List | 38,635,408 | <p>A queryset returned a list:</p>
<pre><code>list1=[2856,28564,1245,232856]
</code></pre>
<p>When I try to find if a number exists in the above list by writing:</p>
<pre><code>num=2856
if num in list1:
</code></pre>
<p>it matches 2856, 28564, and 232856. How can I make sure it matches only 2856 and not the rest?</p>
<p>Sorry I am new to Python, and could not find the solution. Apologies if I am asking a duplicate question.</p>
| -2 | 2016-07-28T11:45:14Z | 38,635,570 | <p>For your specific example:</p>
<p>[x for x in list1 if x==2856][0]</p>
<p>returns a single element list containing only the value you want and you access its only position to get the element.</p>
| 0 | 2016-07-28T11:52:41Z | [
"python"
] |
Searching in Google with Python | 38,635,419 | <p>I want to search a text in Google using a python script and return the name, description and URL for each result. I'm currently using this code:</p>
<pre><code>from google import search
ip=raw_input("What would you like to search for? ")
for url in search(ip, stop=20):
print(url)
</code></pre>
<p>This returns only the URLs; how can I return the name and description for each URL?</p>
<p>Thanks!</p>
| -3 | 2016-07-28T11:45:37Z | 38,635,722 | <p>I assume you are using <a href="https://breakingcode.wordpress.com/2010/06/29/google-search-python/" rel="nofollow">this library by Mario Vilas</a> because of the <code>stop=20</code> argument which appears in his code. It seems like this library is not able to return anything but the URLs, making it horribly undeveloped. As such, what you want to do is not possible with the library you are currently using.</p>
<p>I would suggest you instead use <a href="https://github.com/abenassi/Google-Search-API" rel="nofollow">abenassi/Google-Search-API</a>. Then you can simply do:</p>
<pre><code>from google import google
num_page = 3
search_results = google.search("This is my query", num_page)
for result in search_results:
print(result.description)
</code></pre>
| 3 | 2016-07-28T11:58:57Z | [
"python",
"python-2.7",
"google-search"
] |
Searching in Google with Python | 38,635,419 | <p>I want to search a text in Google using a python script and return the name, description and URL for each result. I'm currently using this code:</p>
<pre><code>from google import search
ip=raw_input("What would you like to search for? ")
for url in search(ip, stop=20):
print(url)
</code></pre>
<p>This returns only the URLs; how can I return the name and description for each URL?</p>
<p>Thanks!</p>
| -3 | 2016-07-28T11:45:37Z | 38,639,228 | <p>Not exatcly what I was looking for, but I found myself a nice solution for now (I might edit this if I will able to make this better). I combined searching in Google like I did (returning only URL) and the Beautiful Soup package for parsing HTML pages:</p>
<pre><code>from google import search
import urllib
from bs4 import BeautifulSoup
def google_scrape(url):
thepage = urllib.urlopen(url)
soup = BeautifulSoup(thepage, "html.parser")
return soup.title.text
i = 1
query = 'search this'
for url in search(query, stop=10):
a = google_scrape(url)
print str(i) + ". " + a
print url
print " "
i += 1
</code></pre>
<p>This gives me a list of page titles together with their links.</p>
<p>And another great solution:</p>
<pre><code>from google import search
import requests

def everything_between(text, begin, end):
    # one possible implementation of the helper used below:
    # return the text between the first begin/end markers
    start = text.find(begin) + len(begin)
    return text[start:text.find(end, start)]

for url in search(ip, stop=10):  # ip is the query string entered above
    r = requests.get(url)
    title = everything_between(r.text, '&lt;title&gt;', '&lt;/title&gt;')
</code></pre>
| 0 | 2016-07-28T14:23:37Z | [
"python",
"python-2.7",
"google-search"
] |
Matrix of polynomial elements | 38,635,580 | <p>I am using NumPy for operations on matrices, to calculate matrixA * matrixB, the trace of the matrix, etc... And elements of my matrices are integers. But what I want to know is if there is possibility to work with matrices of polynomials. So for instance I can work with matrices such as <code>[x,y;a,b]</code>, not <code>[1,1;1,1]</code>, and when I calculate the trace it provides me with the polynomial x + b, and not 2. Is there some polynomial class in NumPy which matrices can work with?</p>
| 4 | 2016-07-28T11:53:01Z | 38,635,743 | <p>One option is to use the <a href="http://docs.sympy.org/dev/modules/matrices/matrices.html" rel="nofollow">SymPy Matrices module</a>. SymPy is a symbolic mathematics library for Python which is quite interoperable with NumPy, especially for simple matrix manipulation tasks such as this. </p>
<pre><code>>>> from sympy import symbols, Matrix
>>> from numpy import trace
>>> x, y, a, b = symbols('x y a b')
>>> M = Matrix(([x, y], [a, b]))
>>> M
Matrix([
[x, y],
[a, b]])
>>> trace(M)
b + x
>>> M.dot(M)
[a*y + x**2, a*b + a*x, b*y + x*y, a*y + b**2]
</code></pre>
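<p>As a side note, SymPy's <code>Matrix</code> also has its own <code>trace</code> method and matrix product, so you can stay inside SymPy without calling NumPy at all. A minimal sketch using the same symbols as above:</p>

```python
from sympy import symbols, Matrix

x, y, a, b = symbols('x y a b')
M = Matrix([[x, y], [a, b]])

# both the trace and the product stay fully symbolic
print(M.trace())   # b + x
print(M * M)       # a 2x2 Matrix whose entries are polynomials in x, y, a, b
```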
| 2 | 2016-07-28T11:59:42Z | [
"python",
"numpy",
"matrix",
"sympy"
] |
Why I output csv file with a blank row | 38,635,669 | <p>This is my code:</p>
<pre><code>with open("testoffset.csv") as handler:
f = open('output_file2.csv', 'w+')
f.write('X,Y,value\n')
for r,l in enumerate(handler):
for col, e in enumerate(l.split(',')):
f.write('{0},{1},{2}\n'.format(r+1,col+1,e))
</code></pre>
<p>This outputs</p>
<pre class="lang-none prettyprint-override"><code>X Y value
1 1 1
1 2 2
1 3 3
----------
blank row
----------
2 1 2
2 2 2
2 3 2
----------
blank row
----------
3 1 1
3 2 2
3 3 3
----------
blank row
----------
</code></pre>
<p>How can I output a CSV file without those blank rows?</p>
| 0 | 2016-07-28T11:56:36Z | 38,635,741 | <p>Your lines from the <code>handler</code> file have a newline at the end, which you write to your output file.</p>
<p>Remove that newline by stripping before you split the line:</p>
<pre><code>for col, e in enumerate(l.rstrip('\n').split(',')):
</code></pre>
<p>You may want to avoid re-inventing the CSV reading and writing wheels; Python comes with the excellent <a href="https://docs.python.org/2/library/csv.html" rel="nofollow"><code>csv</code> module</a> that can do all this work for you, including removing the newline and splitting the lines:</p>
<pre><code>import csv
with open("testoffset.csv", 'rb') as handler, open('output_file2.csv', 'wb') as output:
reader = csv.reader(handler)
writer = csv.writer(output)
writer.writerow(['X', 'Y', 'value'])
for rownum, row in enumerate(reader, 1):
for colnum, col in enumerate(row, 1):
writer.writerow([rownum, colnum, col])
</code></pre>
<p>I also started the <code>enumerate()</code> calls at 1 to avoid having to add one when writing.</p>
| 2 | 2016-07-28T11:59:34Z | [
"python",
"python-2.7",
"csv"
] |
Why I output csv file with a blank row | 38,635,669 | <p>This is my code:</p>
<pre><code>with open("testoffset.csv") as handler:
f = open('output_file2.csv', 'w+')
f.write('X,Y,value\n')
for r,l in enumerate(handler):
for col, e in enumerate(l.split(',')):
f.write('{0},{1},{2}\n'.format(r+1,col+1,e))
</code></pre>
<p>This outputs</p>
<pre class="lang-none prettyprint-override"><code>X Y value
1 1 1
1 2 2
1 3 3
----------
blank row
----------
2 1 2
2 2 2
2 3 2
----------
blank row
----------
3 1 1
3 2 2
3 3 3
----------
blank row
----------
</code></pre>
<p>How can I output a CSV file without those blank rows?</p>
| 0 | 2016-07-28T11:56:36Z | 38,635,885 | <pre><code>with open("testoffset.csv",'wb') as handler:
</code></pre>
<p>Replace the first line of your code.</p>
| 0 | 2016-07-28T12:05:52Z | [
"python",
"python-2.7",
"csv"
] |
Why I output csv file with a blank row | 38,635,669 | <p>This is my code:</p>
<pre><code>with open("testoffset.csv") as handler:
f = open('output_file2.csv', 'w+')
f.write('X,Y,value\n')
for r,l in enumerate(handler):
for col, e in enumerate(l.split(',')):
f.write('{0},{1},{2}\n'.format(r+1,col+1,e))
</code></pre>
<p>This outputs</p>
<pre class="lang-none prettyprint-override"><code>X Y value
1 1 1
1 2 2
1 3 3
----------
blank row
----------
2 1 2
2 2 2
2 3 2
----------
blank row
----------
3 1 1
3 2 2
3 3 3
----------
blank row
----------
</code></pre>
<p>How can I output a CSV file without those blank rows?</p>
| 0 | 2016-07-28T11:56:36Z | 38,635,947 | <p>Just skip empty lines.</p>
<pre><code>...
for r,l in enumerate(handler):
if not l.startswith('\n'): #or if l[0] != '\n':
for col, e in enumerate(l.split(',')):
...
</code></pre>
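<p>A complete, runnable version of that idea, which skips blank input lines and also strips the trailing newline so no empty rows reach the output (the sample data written below is hypothetical, standing in for the question's input file):</p>

```python
# write a small hypothetical sample, including a blank line, to convert
with open('testoffset.csv', 'w') as sample:
    sample.write('1,2,3\n\n2,2,2\n')

with open('testoffset.csv') as handler, open('output_file2.csv', 'w') as f:
    f.write('X,Y,value\n')
    row = 0
    for l in handler:
        if not l.strip():          # skip blank input lines
            continue
        row += 1
        for col, e in enumerate(l.rstrip('\n').split(','), 1):
            f.write('{0},{1},{2}\n'.format(row, col, e))
```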
| 1 | 2016-07-28T12:08:49Z | [
"python",
"python-2.7",
"csv"
] |