Python version of h=area() in Matlab
38,858,506
<p>I asked a related question yesterday and fortunately got my answer from jlarsch quickly. But now I am stuck with the next part, which starts with the <code>h=area()</code> line. I'd like to know the Python version of the <code>area()</code> function, with which I will be able to set the colors. Could someone shed some light on this again? Thanks much in advance.</p> <pre><code>... Subplot (2,1,1); H = plot (rand(100,5)); C = get (H, 'Color') H = area (myX, myY); H(1).FaceColor = C(1); H(2).FaceColor = C(2); Grid on; ... </code></pre>
-1
2016-08-09T19:01:18Z
38,858,647
<p>You probably want <code>plt.fill()</code>. A huge number of graph types is shown in the <a href="http://matplotlib.org/gallery.html" rel="nofollow">Matplotlib Gallery</a>.</p>
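A minimal sketch of the suggested `plt.fill()` approach, with made-up data (this assumes matplotlib and numpy are installed; the Agg backend lets it run without a display):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)

# fill() returns the Polygon patches it draws; facecolor plays the
# role of MATLAB's H(i).FaceColor assignment
patches = plt.fill(x, y, facecolor="steelblue", alpha=0.5)
plt.savefig("filled.png")
```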
0
2016-08-09T19:09:03Z
[ "python" ]
Python version of h=area() in Matlab
38,858,506
<p>I asked a related question yesterday and fortunately got my answer from jlarsch quickly. But now I am stuck with the next part, which starts with the <code>h=area()</code> line. I'd like to know the Python version of the <code>area()</code> function, with which I will be able to set the colors. Could someone shed some light on this again? Thanks much in advance.</p> <pre><code>... Subplot (2,1,1); H = plot (rand(100,5)); C = get (H, 'Color') H = area (myX, myY); H(1).FaceColor = C(1); H(2).FaceColor = C(2); Grid on; ... </code></pre>
-1
2016-08-09T19:01:18Z
38,858,722
<p>The pretty much exact equivalent of MATLAB's <a href="http://www.mathworks.com/help/matlab/ref/area.html" rel="nofollow">Area plot</a> is matplotlib's <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.stackplot" rel="nofollow">stackplot</a>. Here is the first MATLAB example from the above link reproduced using matplotlib:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np x = np.arange(4) y = [[1, 3, 1, 2], [5, 2, 5, 6], [3, 7, 3, 1]] plt.stackplot(x, y) plt.show() </code></pre> <p>And here is the result:</p> <p><a href="http://i.stack.imgur.com/Ivpsd.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ivpsd.png" alt="Stack plot"></a></p>
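Since the original question was specifically about setting colours, it may help that `stackplot` accepts a `colors` keyword, which plays the role of the `H(i).FaceColor = C(i)` lines in the MATLAB snippet (a sketch with the same made-up data):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs headless
import matplotlib.pyplot as plt

x = range(4)
y = [[1, 3, 1, 2], [5, 2, 5, 6], [3, 7, 3, 1]]

# colors= sets the face colour of each stacked band
polys = plt.stackplot(x, y, colors=["C0", "C1", "C2"])
plt.savefig("stacked.png")
```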
0
2016-08-09T19:14:16Z
[ "python" ]
How to use subprocess to interact with a python script
38,858,650
<p>I'm writing an IDE for Python, in Python, and need to use subprocess to interact with a user's script.</p> <p>I am completely new to using subprocess and not sure what I'm doing here. I've created a test snippet representing what I'm trying to do:</p> <pre><code>from subprocess import Popen,PIPE,STDOUT import tkinter as tk t=tk.Text() t.pack() p = Popen(["python","c:/runme.py"],stdout=PIPE,stdin=PIPE,stderr=PIPE,shell=True) p.stdin.write("5".encode()) out=p.stdout.read() t.insert(1.0,out) </code></pre> <p>And here is the test script I'm trying to interact with:</p> <pre><code>print("Hello World") inp=input("Enter a Number: ") print(inp) quit() </code></pre> <p>Unfortunately it just waits (presumably) on line 2. How do I read what has already been printed, and how do I then input the string?</p>
1
2016-08-09T19:09:13Z
38,858,764
<p>You have to flush stdout regularly, because, if the script is not connected to a terminal, the output is not automatically flushed:</p> <pre><code>import sys print("Hello World") print("Enter a Number: ") sys.stdout.flush() inp = input() print(inp) </code></pre> <p>and you have to terminate the input with a newline <code>\n</code>:</p> <pre><code>p = Popen(["python", "c:/runme.py"], stdout=PIPE, stdin=PIPE, stderr=PIPE) p.stdin.write("5\n".encode()) out = p.stdout.read() </code></pre>
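A self-contained Python 3 sketch of the whole round trip (the child script is written to a temporary file just for the example, and `communicate()` is used so a single write/read pair cannot deadlock):

```python
import sys
import tempfile
from subprocess import Popen, PIPE

child_src = b"""\
import sys
print("Hello World")
print("Enter a Number: ")
sys.stdout.flush()   # a pipe is not a terminal, so flush by hand
inp = input()
print(inp)
"""

with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
    f.write(child_src)
    script = f.name

p = Popen([sys.executable, script], stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate(input=b"5\n")  # send the number, collect all output
print(out.decode())
```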
2
2016-08-09T19:18:03Z
[ "python", "python-3.x", "subprocess", "python-3.4", "popen" ]
How to use subprocess to interact with a python script
38,858,650
<p>I'm writing an IDE for Python, in Python, and need to use subprocess to interact with a user's script.</p> <p>I am completely new to using subprocess and not sure what I'm doing here. I've created a test snippet representing what I'm trying to do:</p> <pre><code>from subprocess import Popen,PIPE,STDOUT import tkinter as tk t=tk.Text() t.pack() p = Popen(["python","c:/runme.py"],stdout=PIPE,stdin=PIPE,stderr=PIPE,shell=True) p.stdin.write("5".encode()) out=p.stdout.read() t.insert(1.0,out) </code></pre> <p>And here is the test script I'm trying to interact with:</p> <pre><code>print("Hello World") inp=input("Enter a Number: ") print(inp) quit() </code></pre> <p>Unfortunately it just waits (presumably) on line 2. How do I read what has already been printed, and how do I then input the string?</p>
1
2016-08-09T19:09:13Z
38,858,766
<p><strong>Remove <code>shell=True</code></strong>. Currently you are <em>not</em> executing the script at all, but just launching a <code>python</code> interactive interpreter.</p> <p>The problem is that <a href="http://stackoverflow.com/a/21029310/510937">when you use <code>shell=True</code> the way in which the first argument is interpreted changes.</a> You do not need <code>shell=True</code> and the arguments you provided are correct for the <code>shell=False</code> version.</p> <p>See the difference between:</p> <pre><code>&gt;&gt;&gt; import subprocess &gt;&gt;&gt; subprocess.Popen(['python', 'whatever'], shell=True) &lt;subprocess.Popen object at 0x7ff1bf933d30&gt; &gt;&gt;&gt; Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; KeyboardInterrupt &gt;&gt;&gt; KeyboardInterrupt </code></pre> <p>Which as you may notice launches a python interpreter which gets stuck, and this:</p> <pre><code>&gt;&gt;&gt; import subprocess &gt;&gt;&gt; subprocess.Popen(['python', 'whatever']) &lt;subprocess.Popen object at 0x7f14e1446cf8&gt; &gt;&gt;&gt; python: can't open file 'whatever': [Errno 2] No such file or directory </code></pre> <p>Which tries to execute <code>whatever</code>.</p> <hr> <p>Also you should consider using the <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow"><code>communicate</code></a> method instead of reading and writing directly to/from <code>stdin</code>/<code>stdout</code>.</p>
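The same point can be shown non-interactively on a POSIX system: with `shell=True`, only the first list element is the command string, and the remaining elements become the shell's positional parameters (`$0`, `$1`, ...) rather than arguments (a sketch):

```python
from subprocess import Popen, PIPE

# "extra" is not passed to echo as an argument; the shell sees it as $0
p = Popen(['echo "command saw: $0"', "extra"], shell=True, stdout=PIPE)
out, _ = p.communicate()
print(out.decode().strip())
```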
2
2016-08-09T19:18:06Z
[ "python", "python-3.x", "subprocess", "python-3.4", "popen" ]
use loop to query overpass api
38,858,664
<p>I have a csv file of bounding boxes, that I loaded using pandas in python. The dataframe is <em>df</em> and the column name is <em>coord</em>. Does anyone know how I could create a loop to pass the list to my overpass api query?</p> <p>I tried this:</p> <pre><code>import time import overpass api = overpass.API(timeout=900) coords = [df['coord'][0:10]] for coord in coords: results =[] params =[] params = 'node["power"="generator"]'+coord+';out skel;' results = api.Get(params, responseformat="json") time.sleep(5.0) </code></pre> <p>However, I got a multiple requests error. </p> <p>I also tried:</p> <pre><code>x={} x = (api.Get(param) for param in params) </code></pre> <p>But that returned a python object ( <code>&lt;generator object &lt;genexpr&gt; at 0x11755c0f8&gt;</code>) and I need the data as json.</p> <p>Any help would be much appreciated!</p>
0
2016-08-09T19:09:54Z
38,860,535
<p>There are a couple of things wrong here, mainly with your approach:</p> <ol> <li><code>coords = [df['coord'][0:10]]</code> is not required. You can replace that with a simpler <code>df['coord'].tolist()</code></li> <li>You are initializing <code>params</code> with an empty list and then immediately after with a string, why?</li> <li>Your string to <code>params</code> is what is causing your difficulty. I would use the <a href="https://docs.python.org/3.5/library/string.html#format-string-syntax" rel="nofollow"><code>format</code></a> method of Python strings to insert the coords into the query. Doing this is safer because it handles type conversions and proper placement by itself. You cannot simply add a tuple of coordinates to a string and expect it to work. Try printing the string you are assigning <code>params</code> to in your code, and print the same line from my solution below and you will see the difference. Your code will raise the error <code>TypeError: Can't convert 'tuple' object to str implicitly</code>, which mine will not.</li> </ol> <p>Here is a complete solution:</p> <pre><code>import time import overpass api = overpass.API(timeout=900) for coord in df['coord'].tolist(): params = 'node["power"="generator"]{c};out skel;'.format(c=coord) results = api.Get(params, responseformat="json") time.sleep(5.0) </code></pre> <hr> <p>Note that if you get an error with the request, please provide a full traceback in your question. That will help us narrow down the real cause of the error.</p>
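A quick sketch of point 3 using a made-up bounding-box tuple: concatenating a string and a tuple raises `TypeError`, while `format()` converts the tuple for you:

```python
coord = (50.745, 7.17, 50.75, 7.18)  # hypothetical bounding box

# naive concatenation fails: str + tuple is not defined
try:
    params = 'node["power"="generator"]' + coord + ';out skel;'
except TypeError as e:
    print("concatenation failed:", e)

# format() stringifies the tuple and places it in the query string
params = 'node["power"="generator"]{c};out skel;'.format(c=coord)
print(params)
```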
0
2016-08-09T21:14:03Z
[ "python", "api", "loops", "pandas", "overpass-api" ]
Adding defaultdicts together on key then dividing them
38,858,684
<p>I have a dictionary of 1000+ default dicts, I want to iterate through each default dict, sum them up on key and then divide by the count to get the average value per key.</p> <p>Each default dict has the same keys, i.e.</p> <pre><code>{'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} </code></pre> <p>I want the following to be my output</p> <pre><code>{'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>How do I iteratively add default dicts up, and then divide them, a la row operations in a DataFrame? Or is there a better way to be doing this?</p> <p>Thanks </p>
2
2016-08-09T19:11:20Z
38,858,729
<p>Iterate over one dictionary's keys and values and add the value to the value of the corresponding key in the other dictionary.<br> Example:</p> <pre><code>dict1 = {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} dict2 = {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} for key,value in dict1.iteritems(): dict2[key] = (value + dict2[key]) / 2 print dict2 # prints {'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0 } </code></pre> <p>For simplicity, you may just create a new dictionary as well:</p> <pre><code>dict1 = {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} dict2 = {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} dictAns = dict() for key,value in dict1.iteritems(): dictAns[key] = (value + dict2[key]) / 2 print dictAns # prints {'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0 } </code></pre>
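Since the question involves 1000+ dicts rather than two, the same idea generalises by summing over all of them and dividing once by the count, in Python 3 (`items()` instead of the Python 2 `iteritems()`):

```python
dicts = [
    {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0},
    {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0},
]

# sum every key across all dicts, then divide by the number of dicts
totals = {}
for d in dicts:
    for key, value in d.items():
        totals[key] = totals.get(key, 0.0) + value

averages = {k: v / len(dicts) for k, v in totals.items()}
print(averages)
```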
0
2016-08-09T19:15:09Z
[ "python", "dictionary" ]
Adding defaultdicts together on key then dividing them
38,858,684
<p>I have a dictionary of 1000+ default dicts, I want to iterate through each default dict, sum them up on key and then divide by the count to get the average value per key.</p> <p>Each default dict has the same keys, i.e.</p> <pre><code>{'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} </code></pre> <p>I want the following to be my output</p> <pre><code>{'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>How do I iteratively add default dicts up, and then divide them, a la row operations in a DataFrame? Or is there a better way to be doing this?</p> <p>Thanks </p>
2
2016-08-09T19:11:20Z
38,858,837
<p>You can simplify this a bit using <code>collections.Counter</code>:</p> <pre><code>summed_dict = collections.Counter() for d in partial_dicts: summed_dict.update(d) # Use .viewitems or .iteritems instead of .items on Py2 average_dict = {k: v / len(partial_dicts) for k, v in summed_dict.items()} </code></pre>
3
2016-08-09T19:22:12Z
[ "python", "dictionary" ]
Adding defaultdicts together on key then dividing them
38,858,684
<p>I have a dictionary of 1000+ default dicts, I want to iterate through each default dict, sum them up on key and then divide by the count to get the average value per key.</p> <p>Each default dict has the same keys, i.e.</p> <pre><code>{'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} </code></pre> <p>I want the following to be my output</p> <pre><code>{'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>How do I iteratively add default dicts up, and then divide them, a la row operations in a DataFrame? Or is there a better way to be doing this?</p> <p>Thanks </p>
2
2016-08-09T19:11:20Z
38,858,868
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html" rel="nofollow"><code>pandas.Series</code></a> to perform the <em>averaging</em> of both dictionary values, then convert the series back to a dictionary:</p> <pre><code>import pandas as pd a = pd.Series({'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0}) b = pd.Series({'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0}) c = ((a+b)/2).round(1) print(c.to_dict()) # {'A': 1.0, 'B': 1.1, 'D': 1.0, 'E': 2.0, 'C': 2.0} </code></pre>
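The pandas route also scales to the 1000+ dicts in the question: build one `DataFrame` with a row per dict and take the column-wise mean (a sketch; assumes pandas is installed):

```python
import pandas as pd

dicts = [
    {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0},
    {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0},
]

# one row per dict; mean() averages each key's column
averages = pd.DataFrame(dicts).mean().round(1).to_dict()
print(averages)
```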
3
2016-08-09T19:24:25Z
[ "python", "dictionary" ]
Adding defaultdicts together on key then dividing them
38,858,684
<p>I have a dictionary of 1000+ default dicts, I want to iterate through each default dict, sum them up on key and then divide by the count to get the average value per key.</p> <p>Each default dict has the same keys, i.e.</p> <pre><code>{'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} </code></pre> <p>I want the following to be my output</p> <pre><code>{'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>How do I iteratively add default dicts up, and then divide them, a la row operations in a DataFrame? Or is there a better way to be doing this?</p> <p>Thanks </p>
2
2016-08-09T19:11:20Z
38,858,907
<p>How about using a dictionary comprehension with <code>sum</code>:</p> <pre><code>d1 = {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} d2 = {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} dicts = {"d1": d1, "d2": d2} n = len(dicts) res = {k: sum(d[k] for d in dicts.values()) / n for k in d1} # {'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>Note: This is assuming that, as you wrote in the question, all the dicts have <em>the same</em> keys, or are <code>defaultdicts</code>, so that missing keys don't cause an error.</p>
5
2016-08-09T19:26:37Z
[ "python", "dictionary" ]
Adding defaultdicts together on key then dividing them
38,858,684
<p>I have a dictionary of 1000+ default dicts, I want to iterate through each default dict, sum them up on key and then divide by the count to get the average value per key.</p> <p>Each default dict has the same keys, i.e.</p> <pre><code>{'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0} {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0} </code></pre> <p>I want the following to be my output</p> <pre><code>{'A': 1.0, 'B': 1.1, 'C': 2.0, 'D': 1.0, 'E': 2.0} </code></pre> <p>How do I iteratively add default dicts up, and then divide them, a la row operations in a DataFrame? Or is there a better way to be doing this?</p> <p>Thanks </p>
2
2016-08-09T19:11:20Z
38,858,931
<p>Regarding your data as a dict of dicts, then this is how I would do it (pretty functional approach):</p> <pre><code> d = {1: {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 1.0, 'E': 1.0}, 2: {'A': 2.0, 'B': 1.2, 'C': 3.0, 'D': 1.0, 'E': 3.0}} import functools def add_dicts(d1, d2): return {k:d1[k]+d2[k] for k in d1} dsum = functools.reduce(add_dicts, d.values()) N = len(d.keys()) davg = {k:v/N for k,v in dsum.items()} print(davg) </code></pre> <p>Output:</p> <pre><code>{'C': 2.0, 'E': 2.0, 'A': 1.0, 'B': 1.1, 'D': 1.0} </code></pre>
1
2016-08-09T19:28:40Z
[ "python", "dictionary" ]
Ignore missing file while downloading with Python urllib2
38,858,697
<p><strong>Issue:</strong> As the title states I am downloading data via ftp from NOAA based on the year and the day. I have configured my script to go through a range of years and download data for each day. However the script is getting hung up on days where no file exists. What happens is it just keeps reloading the same line saying that the file does not exist. Without the time.sleep(5) the script prints to the log like crazy.</p> <p><strong>Solution:</strong> Somehow skip the missing day and move onto the next one. I have explored <em>continue</em> (maybe I am placing it in the wrong spot), making an empty directory (not elegant and still will not move past missing day). I am at a loss, what have I overlooked?</p> <p><strong>Here is the script:</strong></p> <pre><code>##Working 24km import urllib2 import time import os import os.path flink = 'ftp://sidads.colorado.edu/DATASETS/NOAA/G02156/24km/{year}/ims{year}{day}_24km_v1.1.asc.gz' days = [str(d).zfill(3) for d in range(1,365,1)] years = range(1998,1999) flinks = [flink.format(year=year,day=day) for year in years for day in days] from urllib2 import Request, urlopen, URLError for fname in flinks: dl = False while dl == False: try: # req = urllib2.Request(fname) req = urllib2.urlopen(fname) with open('/Users/username/Desktop/scripts_hpc/scratch/'+fname.split('/')[-1], 'w') as dfile: dfile.write(req.read()) print 'file downloaded' dl = True except URLError, e: #print 'sleeping' print e.reason #print req.info() print 'skipping day: ', fname.split('/')[-1],' was not processed for ims' continue ''' if not os.path.isfile(fname): f = open('/Users/username/Desktop/scripts_hpc/empty/'+fname.split('/')[-1], 'w') print 'day was skipped' ''' time.sleep(5) else: break #everything is fine </code></pre> <p><strong>Research:</strong> I have browsed through other questions and they get close, but don't seem to hit the nail on the head. 
<a href="http://stackoverflow.com/questions/28386104/ignore-missing-file-while-downloading-with-python-ftplib">Ignore missing files Python ftplib </a>, <a href="http://stackoverflow.com/questions/26703021/how-to-skip-over-lines-of-a-file-if-they-are-empty//">how to skip over lines of a file if they are empty </a> Any help would be greatly appreciated!</p> <p>Thank you!</p>
1
2016-08-09T19:12:22Z
38,858,990
<p>I guess when you stand up, walk away, and get some coffee, things become clear. Apparently something was getting hung up in my while statement (still unsure why). When I took that out and added <em>pass</em> instead of <em>continue</em>, it behaved correctly.</p> <p>Here's what it looks like now:</p> <pre><code>for fname in flinks: try: req = urllib2.urlopen(fname) with open('/Users/username/Desktop/scripts_hpc/scratch/'+fname.split('/')[-1], 'w') as dfile: dfile.write(req.read()) print 'file downloaded' except URLError, e: print e.reason print 'skipping day: ', fname.split('/')[-1],' was not processed for ims' pass time.sleep(5) </code></pre>
1
2016-08-09T19:31:55Z
[ "python", "python-2.7", "urllib2" ]
Ignore missing file while downloading with Python urllib2
38,858,697
<p><strong>Issue:</strong> As the title states I am downloading data via ftp from NOAA based on the year and the day. I have configured my script to go through a range of years and download data for each day. However the script is getting hung up on days where no file exists. What happens is it just keeps reloading the same line saying that the file does not exist. Without the time.sleep(5) the script prints to the log like crazy.</p> <p><strong>Solution:</strong> Somehow skip the missing day and move onto the next one. I have explored <em>continue</em> (maybe I am placing it in the wrong spot), making an empty directory (not elegant and still will not move past missing day). I am at a loss, what have I overlooked?</p> <p><strong>Here is the script:</strong></p> <pre><code>##Working 24km import urllib2 import time import os import os.path flink = 'ftp://sidads.colorado.edu/DATASETS/NOAA/G02156/24km/{year}/ims{year}{day}_24km_v1.1.asc.gz' days = [str(d).zfill(3) for d in range(1,365,1)] years = range(1998,1999) flinks = [flink.format(year=year,day=day) for year in years for day in days] from urllib2 import Request, urlopen, URLError for fname in flinks: dl = False while dl == False: try: # req = urllib2.Request(fname) req = urllib2.urlopen(fname) with open('/Users/username/Desktop/scripts_hpc/scratch/'+fname.split('/')[-1], 'w') as dfile: dfile.write(req.read()) print 'file downloaded' dl = True except URLError, e: #print 'sleeping' print e.reason #print req.info() print 'skipping day: ', fname.split('/')[-1],' was not processed for ims' continue ''' if not os.path.isfile(fname): f = open('/Users/username/Desktop/scripts_hpc/empty/'+fname.split('/')[-1], 'w') print 'day was skipped' ''' time.sleep(5) else: break #everything is fine </code></pre> <p><strong>Research:</strong> I have browsed through other questions and they get close, but don't seem to hit the nail on the head. 
<a href="http://stackoverflow.com/questions/28386104/ignore-missing-file-while-downloading-with-python-ftplib">Ignore missing files Python ftplib </a>, <a href="http://stackoverflow.com/questions/26703021/how-to-skip-over-lines-of-a-file-if-they-are-empty//">how to skip over lines of a file if they are empty </a> Any help would be greatly appreciated!</p> <p>Thank you!</p>
1
2016-08-09T19:12:22Z
38,859,032
<p>On the <code>except</code>, use <code>pass</code> instead of <code>continue</code>, since <code>continue</code> can only be used inside loops (<code>for</code>, <code>while</code>).</p> <p>With that you won't need to handle the missing files yourself, since Python will just ignore the error and keep going.</p>
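A tiny sketch of the resulting pattern with made-up file names: the `except` logs the failure and falls through, and the `for` loop simply moves on to the next item by itself:

```python
names = ["day001", "day002", "day003"]

downloaded = []
for name in names:
    try:
        if name == "day002":              # pretend this day is missing
            raise IOError("550 no such file")
        downloaded.append(name)
    except IOError as e:
        print("skipping", name, "-", e)
        pass  # nothing more to do; the loop continues on its own
print(downloaded)
```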
1
2016-08-09T19:34:08Z
[ "python", "python-2.7", "urllib2" ]
Python - Faster way to load several images?
38,858,704
<p>One of the tasks in my project is to load a dataset (chars74k) and set the label for each image. In this implementation, I already have a matrix with other images and a list with their respective labels. In order to do the task, I wrote the following code:</p> <pre><code>#images: (input/output)matrix of images #labels: (input/output)list of labels #path: (input)path to my root folder of images. It is like this: # path # |-folder1 # |-folder2 # |-folder3 # |-... # |-lastFolder def loadChars74k(images, labels, path): # list of directories dirlist = [ item for item in os.listdir(path) if os.path.isdir(os.path.join(path, item)) ] # for each subfolder, open all files, append to list of images x and set path as label in y for subfolder in dirlist: imagePath = glob.glob(path + '/' + subfolder +'/*.Bmp') print "folder ", subfolder, " has ",len(imagePath), " images and matrix of images is:", images.shape, "labels are:", len(labels) for i in range(len(imagePath)): anImage = numpy.array(Image.open(imagePath[i]).convert('L'), 'f').ravel() images = numpy.vstack((images,anImage)) labels.append(subfolder) </code></pre> <p>It works fine, but it is taking too long (around 20 minutes). I wonder if there is a faster way to load the images and set the labels.</p> <p>Regards.</p>
2
2016-08-09T19:13:08Z
38,862,273
<p>After some research, I was able to improve the code in this way:</p> <pre><code>def loadChars74k(images, labels, path): # list of directories dirlist = [ item for item in os.listdir(path) if os.path.isdir(os.path.join(path, item)) ] # for each subfolder, open all files, append to list of images x and set path as label in y for subfolder in dirlist: imagePath = glob.glob(path + '/' + subfolder +'/*.Bmp') im_array = numpy.array( [numpy.array(Image.open(imagePath[i]).convert('L'), 'f').ravel() for i in range(len(imagePath))] ) images = numpy.vstack((images, im_array)) for i in range(len(imagePath)): labels.append(subfolder) return images, labels </code></pre> <p>I'm pretty sure it's possible to improve even more, but it's OK for now! It now takes 33 seconds!</p>
0
2016-08-10T00:03:27Z
[ "python", "numpy" ]
multiple python UDP socket & UI
38,858,802
<p>I am very new to python and still in reading phase. I have two question here to discuss. I need to develop an python base user interface using <code>tkinter</code> which should run on Raspberry Pi2. On the background process I need to create multiple UDP socket server &amp; clients which is also python based. </p> <p>I need to send data to a c++ application when the user press any button and select any value from combo box. Similarly I need to update the UI with the messages received from the C++ application. </p> <p>I have create the basic UI and UDP socket in python which works as expected. Now I need to extend it so that I could send UI data to UDP Socket script and from there to C++ application. </p> <ol> <li>How to instantiate multiple UDP socket? is there something similar to FD_SET, select() in python?</li> <li>how to start both UI and background UDP socket script from a main.py script?</li> <li>How to specify c# region like implementation like canvas as region one, combobox as region two, buttons as region three and label(labelframe) as region four and position then with different size.</li> </ol> <p>This is my python code:</p> <pre><code>from tkinter import * from tkinter import ttk class MainWindow(Frame): def __init__(self): Frame.__init__(self) self.master.title("Test") self.master.minsize(330, 400) self.grid(sticky=E+W+N+S) modeFrame = Frame(self) actionFrame = Frame(self) msgframe = Frame(self) modeFrame.pack(side="top", fill="x") actionFrame.pack(side="top", fill="x") msgframe.pack(side="top", fill="x") # Mode Frame Label(modeFrame, text="Mode :", font="bold").pack(side="left") modeFrame.canvas1 = Canvas(modeFrame, height=25, width=25) modeFrame.setupled = modeFrame.canvas1.create_oval(5, 5, 20, 20, fill="green") modeFrame.canvas1.pack(side="left") Label(modeFrame, text="Setup Mode").pack(side="left") modeFrame.canvas2 = Canvas(modeFrame, height=25, width=25) modeFrame.setupled = modeFrame.canvas2.create_oval(5, 5, 20, 20, fill="black") 
modeFrame.canvas2.pack(side="left") Label(modeFrame, text="Run Mode").pack(side="left") # Action Frame Label(self, text="Select Coupon").pack(side="left") self.value_of_combo = 'X' self.combo("a,b,c") Button(self, text="Accept", command=acceptCallback).pack(side="left") Button(self, text="Reject", command=rejectCallback).pack(side="left") Button(self, text="EndSession", command=endSessionCallback).pack(fill="both", expand="yes", side="bottom") # Message Frame self.label0frame = LabelFrame(msgframe, text="ID") self.label0frame.pack(fill="both", expand="yes") Label(self.label0frame, text="Waiting for Client ...").pack(side="left") self.label1frame = LabelFrame(msgframe, text="Available Coupons") self.label1frame.pack(fill="both", expand="yes") Label(self.label1frame, text="Waiting for Client ...").pack(side="left") self.label2frame = LabelFrame(msgframe, text="Scanned Code") self.label2frame.pack(fill="both", expand="yes") Label(self.label2frame, text="Scanned Code ...").pack(side="left") self.label3frame = LabelFrame(msgframe, text="Status") self.label3frame.pack(fill="both", expand="yes") Label(self.label3frame, text="Status Message ...").pack(side="left") def newselection(self, event): self.value_of_combo = self.comboBox.get() print(self.value_of_combo) def combo(self,Values): self.box_value = StringVar() self.comboBox = ttk.Combobox(self, state="readonly", values=("a", "b", "c")) self.comboBox.pack(side="left") self.comboBox.set("a") self.comboBox.bind("&lt;&lt;ComboboxSelected&gt;&gt;", self.newselection) def acceptCallback(): print("send Accept Message to C++") def rejectCallback(): print("send Reject Message to C++") def endSessionCallback(): print("send EndSession Message to C++") if __name__ == "__main__": app = MainWindow() app.mainloop() </code></pre> <p>UDP socket code:</p> <pre><code>import time import struct import socket import sys MYPORT = 51506 MYGROUP_4 = '225.0.0.1' MYTTL = 1 # Increase to reach other networks def UDPmain(): udpApp = udpsocket() 
class udpsocket(): def __init__(self): print('UDP Socket started') group = MYGROUP_4 self.receiver('225.0.0.1') def sender(group): addrinfo = socket.getaddrinfo(group, None)[0] s = socket.socket(addrinfo[0], socket.SOCK_DGRAM) # Set Time-to-live (optional) ttl_bin = struct.pack('@i', MYTTL) if addrinfo[0] == socket.AF_INET: # IPv4 s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl_bin) else: s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, ttl_bin) while True: data = repr(time.time()) s.sendto(data + '\0', (addrinfo[4][0], MYPORT)) time.sleep(1) def receiver(self,group): print('Receiver') # Look up multicast group address in name server and find out IP version addrinfo = socket.getaddrinfo(group, None)[0] # Create a socket s = socket.socket(addrinfo[0], socket.SOCK_DGRAM) # Allow multiple copies of this program on one machine # (not strictly needed) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # Bind it to the port s.bind(('', MYPORT)) group_bin = socket.inet_pton(addrinfo[0], addrinfo[4][0]) # Join group if addrinfo[0] == socket.AF_INET: # IPv4 mreq = group_bin + struct.pack('=I', socket.INADDR_ANY) s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq) else: mreq = group_bin + struct.pack('@I', 0) s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq) # Loop, printing any data we receive while True: data, sender = s.recvfrom(1500) while data[-1:] == '\0': data = data[:-1] # Strip trailing \0's print (str(sender) + ' ' + repr(data)) </code></pre> <p>Kindly look the UI and the comments that I have mentioned</p> <p><a href="http://i.stack.imgur.com/w18FS.png" rel="nofollow"><img src="http://i.stack.imgur.com/w18FS.png" alt="enter image description here"></a></p> <p>I appreciate people who goes through the entire question. Sorry for keeping it long</p>
0
2016-08-09T19:20:25Z
38,859,214
<ol> <li><p>Python can use <code>select</code> too: see <a href="https://docs.python.org/3.4/library/select.html?highlight=select.select#select.select" rel="nofollow">https://docs.python.org/3.4/library/select.html?highlight=select.select#select.select</a> The <code>FD_SET</code> stuff is handled for you; you just provide lists of file descriptors (or file objects).</p></li> <li><p>Several options here. One is to simply incorporate the other scripts directly into your main code and use <code>multiprocessing</code> to invoke their entry points. <a href="https://docs.python.org/3.4/library/multiprocessing.html?highlight=multiprocess#the-process-class" rel="nofollow">https://docs.python.org/3.4/library/multiprocessing.html?highlight=multiprocess#the-process-class</a> There are other options if you wish to invoke them separately, depending on your operating system (<code>os.spawn</code>, <code>os.fork</code> + <code>os.execl</code>, <code>subprocess</code>)</p></li> </ol> <p>Leaving (3) to someone else who knows <code>tkinter</code> stuff.</p>
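A minimal sketch of point 1, multiplexing two UDP sockets on loopback with `select.select()` (the ports are arbitrary, made-up values for the example):

```python
import select
import socket

# two UDP receivers; select() will tell us which one has data waiting
receivers = []
for port in (50700, 50701):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    receivers.append(s)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one", ("127.0.0.1", 50700))
sender.sendto(b"two", ("127.0.0.1", 50701))

received = {}
for _ in range(10):  # bounded wait, just for the sketch
    # the three lists mirror FD_SET interest sets: read, write, error
    readable, _, _ = select.select(receivers, [], [], 1.0)
    for s in readable:
        data, addr = s.recvfrom(1500)
        received[s.getsockname()[1]] = data
    if len(received) == 2:
        break
print(received)
```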
1
2016-08-09T19:45:57Z
[ "python", "sockets", "widget", "udp" ]
Django: How to unit test Update Views/Forms
38,859,266
<p>I'm trying to unit test my update forms and views. I'm using Django Crispy Forms for both my Create and Update Forms. UpdateForm inherits CreateForm and makes a small change to the submit button text. The CreateView and UpdateView are very similar. They have the same model, template, and success_url. They differ in that they use their respective forms, and CreateView inherits django.views.generic.CreateView, and UpdateView inherits django.views.generic.edit.UpdateView.</p> <p>The website works fine. I can create and edit an object without a problem. However, my second test shown below fails. How do I test my UpdateForm?</p> <p>Any help would be appreciated. Thanks.</p> <p>This test passes:</p> <pre><code>class CreateFormTest(TestCase): def setUp(self): self.valid_data = { 'x': 'foo', 'y': 'bar', } def test_create_form_valid(self): """ Test CreateForm with valid data """ form = CreateForm(data=self.valid_data) self.assertTrue(form.is_valid()) obj = form.save() self.assertEqual(obj.x, self.valid_data['x']) </code></pre> <p>This test fails:</p> <pre><code>class UpdateFormTest(TestCase): def setUp(self): self.obj = Factories.create_obj() # Creates the object def test_update_form_valid(self): """ Test UpdateForm with valid data """ valid_data = model_to_dict(self.obj) valid_data['x'] = 'new' form = UpdateForm(valid_data) self.assertTrue(form.is_valid()) case = form.save() self.assertEqual(case.defendant, self.valid_data['defendant'] </code></pre>
0
2016-08-09T19:48:51Z
38,860,446
<p>When pre-populating a <code>ModelForm</code> with an object that has already been created you can use the <code>instance</code> keyword argument to pass the object to the form.</p> <pre><code>form = SomeForm(instance=my_obj)
</code></pre> <p>This can be done in a test, such as in the OP, or in a view to edit an object that has already been created. When calling <code>save()</code> the existing object will be updated instead of creating a new one.</p>
1
2016-08-09T21:07:28Z
[ "python", "django", "forms", "unit-testing" ]
Extracting text without tags of HTML with Beautifulsoup Python
38,859,271
<p>I am trying to extract this part of the text, but I can't figure out how to do it. I'm working with several HTML files locally.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;HTML&gt;&lt;HEAD&gt;&lt;STYLE&gt;SOME STYLE CODE&lt;/STYLE&gt;&lt;/HEAD&gt;&lt;META http-equiv=Content-Type content="text/html; charset=utf-8"&gt;
&lt;BODY&gt;
&lt;H1&gt;SOME TEXT I DONT WANT&lt;/H1&gt;
THIS TEXT IS WHICH I WANT
&lt;H1&gt;ANOTHER TEXT I DONT WANT&lt;/H1&gt;
ANOTHER TEXT THAT I WANT
[.. Continues ..]
&lt;/BODY&gt;&lt;/HTML&gt;</code></pre> </div> </div> </p> <p>Thanks for your help</p> <p>EDIT: I have tried with this code, but sometimes it prints the h1 tags:</p> <pre><code>import glob
from bs4 import BeautifulSoup

for file in glob.glob('Logs/Key*.html'):
    with open(file) as f:
        htmlfile = f.read()
    soup = BeautifulSoup(htmlfile, 'html.parser')
    c = soup.find('body').findAll()
    for i in c:
        print i.nextSibling
</code></pre> <p>EDIT 2: Actually the problem is that the html file has only one line, so when I try to run that code with this html, it also prints the h1 tags:</p> <pre><code>from bs4 import BeautifulSoup

htmlfile = '&lt;HTML&gt;&lt;HEAD&gt;&lt;STYLE&gt;SOME STYLE CODE&lt;/STYLE&gt;&lt;/HEAD&gt;&lt;META http-equiv=Content-Type content="text/html; charset=utf-8"&gt;&lt;BODY&gt;&lt;H1&gt;SHIT&lt;/H1&gt;WANTED&lt;H1&gt;SHIT&lt;/H1&gt;&lt;H1&gt;SHIT&lt;/H1&gt;WANTED&lt;H1&gt;SHIT&lt;/H1&gt;WANTED&lt;H1&gt;SHIT&lt;/H1&gt;WANTED&lt;/BODY&gt;&lt;HTML&gt;'
soup = BeautifulSoup(htmlfile, 'html.parser')
c = soup.find('body').findAll()
for i in c:
    print i.nextSibling
</code></pre> <p>Prints:</p> <pre><code>WANTED
&lt;h1&gt;SHIT&lt;/h1&gt;
WANTED
WANTED
WANTED
</code></pre>
0
2016-08-09T19:49:05Z
38,859,382
<p>Now you can put HTML_TEXT as the html you got from scraping the url.</p> <pre><code>y = BeautifulSoup(HTML_TEXT)
c = y.find('body').findAll(text=True, recursive=False)
for i in c:
    print i
</code></pre>
0
2016-08-09T19:56:28Z
[ "python", "web-scraping", "beautifulsoup" ]
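A self-contained version of the answer above, using a reduced copy of the question's markup (the sample strings are illustrative; `string=True` is the modern spelling of the answer's `text=True`):

```python
from bs4 import BeautifulSoup

html = ('<HTML><BODY><H1>SKIP ME</H1>WANTED ONE'
        '<H1>SKIP ME</H1>WANTED TWO</BODY></HTML>')

soup = BeautifulSoup(html, 'html.parser')
# Direct text children of <body> only: the <h1> contents are skipped
# because recursive=False does not descend into child tags.
texts = soup.find('body').find_all(string=True, recursive=False)
print([t.strip() for t in texts])
```

Because only the text nodes themselves are returned, the `<h1>` tags can never leak into the output, which avoids the `nextSibling` problem from the question's single-line files.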
Logistic regression multiclass classification with Python API
38,859,302
<p>currently the Python API does not yet support multi class classification within Spark, but will in the future as it is described on the Spark page <a href="http://spark.apache.org/docs/latest/mllib-linear-methods.html#classification" rel="nofollow">1</a>. </p> <p>Is there any release date or any chance to run it with Python that implements multi class with Logistic regression? I know it does with Scala, but I would like to run it with Python. Thank you. </p>
1
2016-08-09T19:51:19Z
38,860,140
<p>scikit-learn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html" rel="nofollow">LogisticRegression</a> offers a <code>multi_class</code> parameter. From the docs:</p> <blockquote> <p>Multiclass option can be either ‘ovr’ or ‘multinomial’. If the option chosen is ‘ovr’, then a binary problem is fit for each label. Else the loss minimised is the multinomial loss fit across the entire probability distribution. Works only for the ‘lbfgs’ solver.</p> </blockquote> <p>Hence, <code>multi_class='ovr'</code> seems to be the right choice for you. </p> <p>For more information: <a href="http://www.codeproject.com/Articles/821347/MultiClass-Logistic-Classifier-in-Python" rel="nofollow">see this link</a></p> <hr> <p>Added:</p> <p>As per the pyspark documentation, you can still do multi class regression using their API. Using the class <code>pyspark.mllib.classification.LogisticRegressionWithLBFGS</code>, you get the optional parameter <code>numClasses</code> for multi-class classification.</p>
3
2016-08-09T20:47:09Z
[ "python", "classification", "logistic-regression" ]
Python minidom: #text node disappears when appending it to new parent node
38,859,341
<p>I have XML that looks like this:</p> <pre><code>&lt;example&gt;
  &lt;para&gt;
    &lt;phrase&gt;child_0&lt;/phrase&gt;
    child_1
    &lt;phrase&gt;child_2&lt;/phrase&gt;
  &lt;/para&gt;
&lt;/example&gt;
</code></pre> <p>and I want it to look like this:</p> <pre><code>&lt;foo&gt;
  &lt;phrase&gt;child_0&lt;/phrase&gt;
  child_1
  &lt;phrase&gt;child_2&lt;/phrase&gt;
&lt;/foo&gt;
</code></pre> <p>Simple, right? I create a new parent node -- <code>&lt;foo&gt;</code> -- and then iterate through the <code>&lt;para&gt;</code> node and append the children to the new <code>&lt;foo&gt;</code> node.</p> <p>What's strange is that the <code>child_1</code> (a text node) disappears when I try to do so. If I simply iterate through the <code>&lt;para&gt;</code> node, I get this:</p> <pre><code>&gt;&gt;&gt; for p in para.childNodes:
        print p.nodeType

1
3
1
</code></pre> <p>So there are 3 child nodes, and the middle one is the text node. But when I try to append it to the new <code>&lt;foo&gt;</code> node, it doesn't make it. </p> <pre><code>&gt;&gt;&gt; for p in para.childNodes:
        foo_node.appendChild(p)

&gt;&gt;&gt; print foo_node.toprettyxml()
&lt;foo&gt;
  &lt;phrase&gt;child_0&lt;/phrase&gt;
  &lt;phrase&gt;child_2&lt;/phrase&gt;
&lt;/foo&gt;
</code></pre> <p>What the <code>@#$%&amp;*!</code> is going on?</p>
0
2016-08-09T19:54:06Z
39,174,563
<p>Well, here I am, answering my own question.</p> <p>The <code>appendChild()</code> function <strong>removes</strong> the child node from the <code>&lt;para&gt;</code> list of nodes, so you will be effectively skipping every other element as the index gets out of sync with each iteration. The solution is to append a <em>copy</em> of the node:</p> <pre><code>for p in para.childNodes:
    p_copy = p.cloneNode(deep=True)
    foo_node.appendChild(p_copy)
</code></pre>
0
2016-08-26T21:00:55Z
[ "python", "xml", "minidom" ]
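A complete, runnable version of the fix above, built from the question's sample XML (node names follow the question):

```python
from xml.dom.minidom import parseString

doc = parseString(
    "<example><para><phrase>child_0</phrase>child_1"
    "<phrase>child_2</phrase></para></example>")
para = doc.getElementsByTagName("para")[0]
foo_node = doc.createElement("foo")

# cloneNode leaves the original child in place, so the iteration over
# para.childNodes never gets out of sync and the text node survives.
for p in para.childNodes:
    foo_node.appendChild(p.cloneNode(deep=True))

print(foo_node.toxml())
```

Both `<para>` and `<foo>` end up with all three children, including the `child_1` text node.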
Break out of loop if GPIO doesn't toggle in time
38,859,353
<p>I'm changing some inputs to a device with relays, and I'm expecting at some point to break the firmware. The question is, when will this happen?</p> <p>So, to determine if the firmware breaks, I'm monitoring some LEDs that normally blink during normal operation. I know that they will lock up in whatever state they're currently in when the firmware breaks. So, my bright idea was to simple feed that signal back into a Raspberry Pi and watch the that GPIO for a change-state. If I see the state change, then go ahead and flip the relays...Then look at the LEDs and make sure they're still blinking...rinse, repeat.</p> <p>However, I would normally check this with an interrupt or something in C, but I'm writing this in Python...</p> <p>What's the Python way for handling this? I know that if I don't see any blinking for 2 seconds or so, the test is over, but I'm not sure how to do this without invoking something like <code>sleep</code>...to which, I wouldn't be able to watch for pin changes.</p>
0
2016-08-09T19:54:51Z
38,859,605
<p>From the <a href="https://sourceforge.net/p/raspberry-gpio-python/wiki/Inputs/" rel="nofollow">gpio</a> module, create a threaded callback function that fires every time a rising (or falling, etc...) edge is detected.</p> <p>Then using <a href="https://docs.python.org/3/library/signal.html#signal.alarm" rel="nofollow">signal</a> within this function call <code>signal.alarm()</code> to reset an alarm whenever the pin changes.</p> <p>Finally use <code>signal.signal()</code> to register a function for what should happen when the alarm is not reset in time (ie, firmware has broken)</p> <p><em>this won't work if you're using windows unfortunately... you would need to implement your own alarm system with threading if you are</em></p>
0
2016-08-09T20:12:11Z
[ "python", "timer", "raspberry-pi", "interrupt", "gpio" ]
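A rough sketch of the signal-based watchdog the answer describes, with a plain function standing in for the GPIO edge callback (POSIX-only; the 1-second deadline and the fake `edge_detected` callback are illustrative assumptions, not from the original answer):

```python
import signal
import time

timed_out = False

def on_timeout(signum, frame):
    # No edge reset the alarm in time: assume the firmware locked up.
    global timed_out
    timed_out = True

signal.signal(signal.SIGALRM, on_timeout)

def edge_detected(channel=None):
    # Stand-in for the RPi.GPIO threaded callback: push the deadline back.
    signal.alarm(1)

edge_detected()            # arm the watchdog
for _ in range(3):         # "LED still blinking": keep resetting the alarm
    time.sleep(0.2)
    edge_detected()
time.sleep(2)              # blinking "stops": the alarm fires
signal.alarm(0)            # disarm
print("firmware hung:", timed_out)
```

On a Pi, you would register `edge_detected` with `GPIO.add_event_detect(pin, GPIO.BOTH, callback=edge_detected)` so every LED transition pushes the deadline back, and flip the relays from the `SIGALRM` handler (or a flag it sets).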
Tweepy - Multiple User Search
38,859,362
<p>Trying to perform what I thought was a very basic exercise. I have a list of 7000+ customers and I want to import some of their attributes from twitter. I am trying to figure out how to input several user names into my tweepy query, but I am not having any luck with the code below. </p> <pre><code>#!/usr/bin/python
import tweepy
import csv #Import csv
import os

# Consumer keys and access tokens, used for OAuth
consumer_key = 'MINE'
consumer_secret = 'MINE'
access_token = 'MINE'
access_token_secret = 'MINE'

# OAuth process, using the keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Open/Create a file to append data
csvFile = open('ID.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)

users = ('IBM' or 'massmutual')
user = api.get_user(screen_name = users)
csvWriter.writerow([user.screen_name, user.id, user.followers_count, user.description.encode('utf-8')])
print user.id

csvFile.close()
</code></pre>
0
2016-08-09T19:55:16Z
38,859,454
<p>You will have to do it in a loop: make a list of users and get a user instance for each one, one by one, in the loop.</p> <pre><code>#!/usr/bin/python
import tweepy
import csv #Import csv
import os

# Consumer keys and access tokens, used for OAuth
consumer_key = 'MINE'
consumer_secret = 'MINE'
access_token = 'MINE'
access_token_secret = 'MINE'

# OAuth process, using the keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Open/Create a file to append data
csvFile = open('ID.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)

users = ['IBM','massmutual','user3',.......]
for user_name in users:
    user = api.get_user(screen_name = user_name)
    csvWriter.writerow([user.screen_name, user.id, user.followers_count, user.description.encode('utf-8')])
    print user.id

csvFile.close()
</code></pre>
1
2016-08-09T20:02:05Z
[ "python", "python-2.7", "twitter", "tweepy", "api-design" ]
Python groupby apply returning odd dataframe
38,859,397
<p>Here is my function:</p> <pre><code>def calculate_employment_two_digit_industry(df):
    df['intersection'] = df['racEmpProb'] * df['wacEmpProb']
    df['empProb'] = df['intersection'] / df['intersection'].sum()
    df['newEmp'] = df['empProb'] * df['Emp']
    df = df[['h_zcta', 'w_zcta', 'indID', 'newEmp', 'empProb']]
    df.rename(columns = {'newEmp' : 'Emp'}, inplace = True)
    return df
</code></pre> <p>Here is my test:</p> <pre><code>def test_calculate_employment_two_digit_industry():
    testDf = pandas.DataFrame({'h_zcta' : [99163, 99163, 99163, 99163],
                               'w_zcta' : [83843, 83843, 83843, 83843],
                               'indID' : [11, 21, 22, 42],
                               'Emp' : [20, 20, 40, 40],
                               'racEmpProb' : [0.5, 0.5, 0.6, 0.4],
                               'wacEmpProb' : [0.7, 0.3, 0.625, 0.375],
                               '1_digit' : [1, 1, 2, 2]})

    expectedDf = pandas.DataFrame({'h_zcta' : [99163, 99163, 99163, 99163],
                                   'w_zcta' : [83843, 83843, 83843, 83843],
                                   'indID' : [11, 21, 22, 42],
                                   'Emp' : [14, 6, 28.5716, 11.4284],
                                   'empProb' : [0.7, 0.3, 0.71429, 0.28571]})
    expectedDf = expectedDf[['h_zcta', 'w_zcta', 'indID', 'Emp', 'empProb']]

    final = testDf.groupby(['h_zcta', 'w_zcta', '1_digit'])\
                  .apply(calculate_employment_two_digit_industry).reset_index()

    assert expected.equals(final)
</code></pre> <p>As you can see within the test I have what I expect the function to return. Aside from potential mathematical errors within the code which I can fix, here is the dataframe that is returned. How do I have it return a normal dataframe (if normal is the correct term), i.e., without the layers, just columns and rows?</p> <pre class="lang-none prettyprint-override"><code>                         h_zcta  w_zcta  indID   Emp  empProb
h_zcta w_zcta 1_digit
99163  83843  1       0   99163   83843     11  14.0      0.7
                      1   99163   83843     21   6.0      0.3
              2       0   99163   83843     22  28.0      0.7
                      1   99163   83843     42  12.0      0.3
</code></pre> <p>Thank you in advance.</p>
2
2016-08-09T19:57:47Z
38,859,602
<p>You need <code>.reset_index(drop=True)</code></p> <p>That is:</p> <pre><code>final = testDf.groupby(['h_zcta', 'w_zcta', '1_digit']).apply(
    calculate_employment_two_digit_industry).reset_index(drop=True)

&gt;&gt;&gt; final.index
RangeIndex(start=0, stop=4, step=1)
</code></pre>
2
2016-08-09T20:12:05Z
[ "python", "pandas", "group-by" ]
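A small self-contained illustration of the fix (toy data and names are mine; the per-group normalization mirrors the question's `empProb` step):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1, 2, 3, 4]})

def normalize(sub):
    sub = sub.copy()
    sub["p"] = sub["v"] / sub["v"].sum()  # share within each group
    return sub

# When apply() returns a DataFrame per group, the result carries a
# layered (MultiIndex) index: (group key, original row label).
grouped = df.groupby("g").apply(normalize)
# drop=True discards those layers and leaves a plain 0..n-1 RangeIndex.
flat = grouped.reset_index(drop=True)
print(flat)
```

Without `drop=True`, `reset_index()` tries to turn the group-key levels back into columns, which is what produces the duplicated `h_zcta`/`w_zcta` layers seen in the question's output.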
class which reads the file content
38,859,426
<p>I'd like to write a class which reads a *.csv file and parses it using the pandas library. I'm wondering where I should initialize df. </p> <pre><code>#!/usr/bin/env python
import pandas as pd
import os

class ParseDataBase(object):
    def __init__(self, name_file):
        self.name_file = name_file

    def read_file(self):
        """Read the file concent"""
        try:
            self.df = pd.read_csv(self.name_file)
        except IndexError:
            print ("Error: Wrong file name")
            sys.exit(2)
        return self.df

    def dispaly_file(self):
        print self.df

def main():
    x = ParseDataBase('something.csv')
    x.dispaly_file()

if __name__ == '__main__':
    main()
</code></pre> <p>The above code returns the following error: <code>'ParseDataBase' object has no attribute 'df'</code>. </p> <p>I don't want to pass too many variables while creating the object.</p> <p>I'm new to object oriented programming, so any comments and hints are highly appreciated! </p>
2
2016-08-09T19:59:49Z
38,859,481
<p>The attribute <code>df</code> gets assigned in the <code>read_file</code> method. You are trying to access that attribute prior to it existing.</p> <p>I'd do this:</p> <pre><code>#!/usr/bin/env python
import pandas as pd
import os

class ParseDataBase(object):
    def __init__(self, name_file):
        self.name_file = name_file
        # Change I made to initiate in the init method.
        self.df = self.read_file()

    def read_file(self):
        """Read the file concent"""
        try:
            self.df = pd.read_csv(self.name_file)
        except IndexError:
            print ("Error: Wrong file name")
            sys.exit(2)
        return self.df

    def dispaly_file(self):
        print self.df

def main():
    x = ParseDataBase('something.csv')
    x.dispaly_file()

if __name__ == '__main__':
    main()
</code></pre>
2
2016-08-09T20:03:36Z
[ "python", "pandas" ]
class which reads the file content
38,859,426
<p>I'd like to write a class which reads a *.csv file and parses it using the pandas library. I'm wondering where I should initialize df. </p> <pre><code>#!/usr/bin/env python
import pandas as pd
import os

class ParseDataBase(object):
    def __init__(self, name_file):
        self.name_file = name_file

    def read_file(self):
        """Read the file concent"""
        try:
            self.df = pd.read_csv(self.name_file)
        except IndexError:
            print ("Error: Wrong file name")
            sys.exit(2)
        return self.df

    def dispaly_file(self):
        print self.df

def main():
    x = ParseDataBase('something.csv')
    x.dispaly_file()

if __name__ == '__main__':
    main()
</code></pre> <p>The above code returns the following error: <code>'ParseDataBase' object has no attribute 'df'</code>. </p> <p>I don't want to pass too many variables while creating the object.</p> <p>I'm new to object oriented programming, so any comments and hints are highly appreciated! </p>
2
2016-08-09T19:59:49Z
38,859,483
<p>You aren't assigning <code>self.df</code> unless you run <code>read_file()</code>, which you aren't.</p> <pre><code>def main():
    x = ParseDataBase('something.csv')
    x.read_file()
    x.dispaly_file()
</code></pre>
1
2016-08-09T20:03:43Z
[ "python", "pandas" ]
python for ele in list[:] and for ele in list difference?
38,859,472
<p>So there's a problem on LeetCode that's pretty simple, but the second solution below is incorrect: (OG question: Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.</p> <p>For example, given nums = [0, 1, 0, 3, 12], after calling your function, nums should be [1, 3, 12, 0, 0].</p> <p>Note: You must do this in-place without making a copy of the array. Minimize the total number of operations. )</p> <pre><code>class Solution(object):
    def moveZeroes(self, nums):
        """
        :type nums: List[int]
        :rtype: void Do not return anything, modify nums in-place instead.
        """
        k = 0
        for ele in nums[:]:
            if ele == 0:
                nums.remove(0)
                k += 1
        nums.extend([0]*k)


# -------Incorrect solution
class Solution(object):
    def moveZeroes(self, nums):
        """
        :type nums: List[int]
        :rtype: void Do not return anything, modify nums in-place instead.
        """
        k = 0
        for ele in nums:
            if ele == 0:
                nums.remove(0)
                k += 1
        nums.extend([0]*k)
</code></pre> <p>Why does that make a difference please?</p>
-1
2016-08-09T20:02:49Z
38,862,004
<p>You can both not copy the list and not modify the list while it's being looped over:</p> <pre><code>class Solution(object):
    def moveZeroes(self, numbers):
        """
        :type numbers: List[int]
        :rtype: void Do not return anything, modify numbers in-place instead.
        """
        k = 0
        while True:
            try:
                numbers.remove(0)
                k += 1
            except ValueError:
                break
        numbers.extend([0] * k)


numbers = [0, 1, 0, 3, 12]
print(id(numbers), "-&gt;", numbers)
Solution().moveZeroes(numbers)
print(id(numbers), "-&gt;", numbers)
</code></pre> <p><strong>OUTPUT:</strong></p> <pre><code>(4348993904, '-&gt;', [0, 1, 0, 3, 12])
(4348993904, '-&gt;', [1, 3, 12, 0, 0])
</code></pre>
0
2016-08-09T23:29:10Z
[ "python", "list", "for-loop", "data-structures" ]
python for ele in list[:] and for ele in list difference?
38,859,472
<p>So there's a problem on LeetCode that's pretty simple, but the second solution below is incorrect: (OG question: Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.</p> <p>For example, given nums = [0, 1, 0, 3, 12], after calling your function, nums should be [1, 3, 12, 0, 0].</p> <p>Note: You must do this in-place without making a copy of the array. Minimize the total number of operations. )</p> <pre><code>class Solution(object):
    def moveZeroes(self, nums):
        """
        :type nums: List[int]
        :rtype: void Do not return anything, modify nums in-place instead.
        """
        k = 0
        for ele in nums[:]:
            if ele == 0:
                nums.remove(0)
                k += 1
        nums.extend([0]*k)


# -------Incorrect solution
class Solution(object):
    def moveZeroes(self, nums):
        """
        :type nums: List[int]
        :rtype: void Do not return anything, modify nums in-place instead.
        """
        k = 0
        for ele in nums:
            if ele == 0:
                nums.remove(0)
                k += 1
        nums.extend([0]*k)
</code></pre> <p>Why does that make a difference please?</p>
-1
2016-08-09T20:02:49Z
38,862,072
<p>You shouldn't modify a list while looping over it. Python won't raise an error, but removing items shifts the remaining elements so the iterator skips some of them. </p> <p><code>[:]</code> makes an independent copy of the list to loop over.</p> <p>But the question requires you to not make a copy of the list, so both solutions are incorrect. </p> <p>I won't tell you exactly how to fix it because I want you to learn, but you should create a set. Whenever you would use <code>.remove()</code>, add it to the set. Check to see if an element is in the set before starting your for loop. At the end of the loop, actually remove the elements in the set from the list. </p> <p>To create a set, use <code>set()</code>. To add an element to a set, use <code>set_name.add()</code></p> <hr> <p>Don't use leetcode to learn Python. Leetcode is for technical coding interviews, and your python isn't very good. You wouldn't pass the interview. </p> <p>Your code violates PEP 8. Get an editor like PyCharm (it's free) that can warn you of these errors in the future. It's a great IDE, seriously, try it out.</p>
0
2016-08-09T23:37:05Z
[ "python", "list", "for-loop", "data-structures" ]
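To make the difference concrete, here is a minimal stand-alone sketch of both loops (function names are mine, and the zero-padding step is omitted for brevity):

```python
def buggy_drop_zeros(nums):
    for ele in nums:          # iterates the same list it mutates
        if ele == 0:
            nums.remove(0)    # left-shift makes the loop skip an element
    return nums

def fixed_drop_zeros(nums):
    for ele in nums[:]:       # iterate a snapshot; mutate the original
        if ele == 0:
            nums.remove(0)
    return nums

print(buggy_drop_zeros([0, 0, 1, 0, 3, 12]))  # a zero survives
print(fixed_drop_zeros([0, 0, 1, 0, 3, 12]))  # all zeros removed
```

In the buggy version, each `remove(0)` shifts the rest of the list left while the loop's internal index keeps advancing, so the element that slides into the removed slot is never examined.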
Installing guppy with pip3 issues
38,859,593
<p>I am trying to install <a href="https://pypi.python.org/pypi/guppy/" rel="nofollow">guppy</a>. My program uses python3 so I must use pip3 exclusively. When I run:</p> <pre><code>pip3 install guppy
</code></pre> <p>I get:</p> <pre><code>src/sets/sets.c:77:1: error: expected function body after function declarator
INITFUNC (void)
^
src/sets/sets.c:39:18: note: expanded from macro 'INITFUNC'
#define INITFUNC initsetsc
                 ^
1 error generated.
error: command 'clang' failed with exit status 1
</code></pre> <p>I tried doing <a href="https://stackoverflow.com/questions/10238458/cant-install-orange-error-command-clang-failed-with-exit-status-1">this</a>, even though it wasn't the same, and exported gcc and g++:</p> <pre><code>➜  ~ export CC=gcc
➜  ~ export CXX=g++
</code></pre> <p>Running again:</p> <pre><code>src/sets/sets.c:77:1: error: expected function body after function declarator
INITFUNC (void)
^
src/sets/sets.c:39:18: note: expanded from macro 'INITFUNC'
#define INITFUNC initsetsc
                 ^
1 error generated.
error: command 'gcc' failed with exit status 1
</code></pre> <p>Most who had this issue used <code>sudo apt-get install python-dev</code> or something of the like to resolve this issue, I couldn't find an equivalent for Mac. Is there a way to resolve this issue?</p>
1
2016-08-09T20:11:10Z
38,860,927
<p>Unfortunately it seems that <code>guppy</code> library works only for Python 2.x. An alternative could be <a href="http://mg.pov.lt/objgraph/" rel="nofollow">objgraph</a></p>
1
2016-08-09T21:43:51Z
[ "python", "python-3.x" ]
Linking to brew openssl via python virtualenvwrapper
38,859,693
<p>Running <code>openssl version</code> returns the standard openssl on OS X El Capitan, <code>OpenSSL 0.9.8zh</code> in <code>/usr/bin/openssl</code>.</p> <p>I've installed the latest via brew <code>brew install openssl</code>. Various posts/articles recommended manually symlinking to <code>/usr/local/bin/openssl</code> or running <code>brew link --force openssl</code>. Other posts said not to do this; running the latter also gave the following warning.</p> <pre><code>Warning: Refusing to link: openssl
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
  -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib
</code></pre> <p>I'm not sure what that means. :|</p> <p>Also I managed to symlink successfully to the brew version, so <code>which openssl</code> pointed to <code>/usr/local/bin/openssl</code> instead of the system's <code>/usr/bin/openssl</code>, and <code>openssl version</code> returned the latest version too, but when I opened a python shell, inside and outside of a virtualenv, and ran <code>import ssl; ssl.OPENSSL_VERSION</code> it returned the system version.</p> <p>How do I force it to use the brew version in my python code?</p>
0
2016-08-09T20:18:30Z
38,948,507
<p>In the end I used <code>brew install python3 --with-brewed-openssl</code>, then ran <code>brew link python3</code> to symlink it to <code>/usr/local/bin/python3</code>, and then used <code>mkvirtualenv --python=/usr/local/bin/python3 [projectname]</code> to use the brewed python (that uses the brewed openssl). Now when I run <code>import ssl; ssl.OPENSSL_VERSION</code> within my virtualenv I am pointing to my brewed openssl, and I don't have to touch my system openssl or python. This is a similar issue <a href="http://stackoverflow.com/questions/18752409/updating-openssl-in-python-2-7">Updating openssl in python 2.7</a></p>
0
2016-08-15T02:23:12Z
[ "python", "ssl", "openssl", "virtualenv", "homebrew" ]
Joining parts of lists with specific tags and creating a new list in Python
38,859,823
<p>I am new to Python, and I ran into a problem. I used StanfordNER in Python to tag a text; the output named entities look like the following:</p> <pre><code>[('Micheal', 'PERSON'), ('Jaf', 'PERSON'), ('Bin', 'PERSON'), ('Aloo', 'PERSON'),
 ('and', 'O'), ('Purno', 'PERSON'), ('Yusgiantoro', 'PERSON'), ('USA', 'LOCATION'),
 ('Ibrahim', 'PERSON'), ('Baah', 'PERSON'), ('Alolom', 'PERSON'), ('or', 'O'),
 ('Ahmad', 'PERSON'), ('Fahad', 'PERSON'), ('Al', 'PERSON'), ('Ahmad', 'PERSON'),
 ('in', 'O'), ('the', 'O'), ('Sabah', 'PERSON'), ('Purnomo', 'PERSON'),
 ('Khorabi', 'PERSON'), ('Elie', 'PERSON')]
</code></pre> <p>I would like to join the first names and family names of each person and get a list that looks like:</p> <pre><code>persons_names = ['Micheal Jaf Bin Aloo', 'Purno Yusgiantoro', 'Ibrahim Baah Alolom',
                 'Ahmad Fahad Al Ahmad', 'Sabah Purnomo Khorabi Elie']
</code></pre>
1
2016-08-09T20:27:02Z
38,860,014
<p>What you have posted in the question is not a valid python object. It is most probably a <code>str</code> version of something. The snippet below assumes the first element of every word is converted to a string.</p> <p>The idea is to use <code>itertools.groupby</code>. It groups adjacent elements according to a given condition, and returns one group at a time. All that remains is to join them with a space.</p> <pre><code>from itertools import groupby

lst = [("Micheal", 'PERSON'),("Jaf", 'PERSON'), ("Bin", 'PERSON'),("Aloo", 'PERSON'),
       ("and", 'O'),("Purno", 'PERSON'), ("Yusgiantoro", 'PERSON'),("USA", 'LOCATION'),
       ("Ibrahim", 'PERSON'), ("Baah", 'PERSON'), ("Alolom", 'PERSON'),("or", 'O'),
       ("Ahmad", 'PERSON'),("Fahad", 'PERSON'),("Al", 'PERSON'),("Ahmad", 'PERSON')]

print [" ".join(x[0] for x in names)
       for typ, names in groupby(lst, key=lambda x: x[1])
       if typ == "PERSON"]
</code></pre> <p>OUTPUT:</p> <pre><code>['Micheal Jaf Bin Aloo', 'Purno Yusgiantoro', 'Ibrahim Baah Alolom', 'Ahmad Fahad Al Ahmad']
</code></pre>
2
2016-08-09T20:38:51Z
[ "python", "list" ]
Joining parts of lists with specific tags and creating a new list in Python
38,859,823
<p>I am new to Python, and I ran into a problem. I used StanfordNER in Python to tag a text; the output named entities look like the following:</p> <pre><code>[('Micheal', 'PERSON'), ('Jaf', 'PERSON'), ('Bin', 'PERSON'), ('Aloo', 'PERSON'),
 ('and', 'O'), ('Purno', 'PERSON'), ('Yusgiantoro', 'PERSON'), ('USA', 'LOCATION'),
 ('Ibrahim', 'PERSON'), ('Baah', 'PERSON'), ('Alolom', 'PERSON'), ('or', 'O'),
 ('Ahmad', 'PERSON'), ('Fahad', 'PERSON'), ('Al', 'PERSON'), ('Ahmad', 'PERSON'),
 ('in', 'O'), ('the', 'O'), ('Sabah', 'PERSON'), ('Purnomo', 'PERSON'),
 ('Khorabi', 'PERSON'), ('Elie', 'PERSON')]
</code></pre> <p>I would like to join the first names and family names of each person and get a list that looks like:</p> <pre><code>persons_names = ['Micheal Jaf Bin Aloo', 'Purno Yusgiantoro', 'Ibrahim Baah Alolom',
                 'Ahmad Fahad Al Ahmad', 'Sabah Purnomo Khorabi Elie']
</code></pre>
1
2016-08-09T20:27:02Z
38,860,033
<p>you could do</p> <pre><code>last=None
grouped=[]
for word,t in myList:
    if t==last:
        grouped[-1].append(word)
    else:
        grouped.append([t,word])
    last=t

person_names=[" ".join(i[1:]) for i in grouped if i[0]=="PERSON"]
</code></pre>
0
2016-08-09T20:40:16Z
[ "python", "list" ]
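For reference, the `itertools.groupby` approach from the first answer, run over the question's complete tagged list, yields all five name groups:

```python
from itertools import groupby

tagged = [('Micheal', 'PERSON'), ('Jaf', 'PERSON'), ('Bin', 'PERSON'),
          ('Aloo', 'PERSON'), ('and', 'O'), ('Purno', 'PERSON'),
          ('Yusgiantoro', 'PERSON'), ('USA', 'LOCATION'),
          ('Ibrahim', 'PERSON'), ('Baah', 'PERSON'), ('Alolom', 'PERSON'),
          ('or', 'O'), ('Ahmad', 'PERSON'), ('Fahad', 'PERSON'),
          ('Al', 'PERSON'), ('Ahmad', 'PERSON'), ('in', 'O'), ('the', 'O'),
          ('Sabah', 'PERSON'), ('Purnomo', 'PERSON'),
          ('Khorabi', 'PERSON'), ('Elie', 'PERSON')]

# Each run of consecutive PERSON tags becomes one full name.
persons_names = [" ".join(word for word, _ in group)
                 for tag, group in groupby(tagged, key=lambda pair: pair[1])
                 if tag == 'PERSON']
print(persons_names)
```

Note that non-PERSON tokens ('and', 'USA', 'in', 'the', ...) act as separators: any run of words tagged PERSON with no break between them is merged into a single name.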
Multiple forms submitting with single view - Django
38,859,842
<p>I have two ModelForms submitting from a single view. One model is a ForeignKey of another. At this point, the form will add to the DB, but not as expected. Instead of adding a Course and a Section (with course field populated with the newly-created Course), I'm getting a Course and Section that are linked, but with the Section name that was entered as both the Course name and the Section name.</p> <p>models.py</p> <pre><code>class Course(models.Model):
    Name = models.CharField(max_length=30,unique=True)
    Active = models.BooleanField(default=True)

    def __unicode__(self):
        return u'%s' % (self.Name)

class Section(models.Model):
    Name = models.CharField(max_length=30,default='.',unique=True)
    course = models.ForeignKey(Course, on_delete=models.CASCADE)
    assessments = models.ManyToManyField(Assessment)

    def __unicode__(self):
        return u'%s / %s' % (self.Name, self.course)
</code></pre> <p>forms.py</p> <pre><code>class CourseAddForm(forms.ModelForm):
    class Meta:
        model = Course
        fields = ['Name', 'Active']

class SectionAddForm(forms.ModelForm):
    class Meta:
        model = Section
        fields = ['Name']
</code></pre> <p>templates/index.html</p> <pre><code>&lt;!-- COURSE ADD MODAL --&gt;
&lt;div class="modal fade" id="CourseAddModal" role="dialog"&gt;
  &lt;div class="modal-dialog"&gt;
    &lt;!-- Modal content--&gt;
    &lt;div class="modal-content"&gt;
      &lt;div class="modal-header" style="padding:5px 10px;"&gt;
        &lt;button type="button" class="close" data-dismiss="modal"&gt;&amp;times;&lt;/button&gt;
        &lt;h4&gt;Add Course&lt;/h4&gt;
      &lt;/div&gt;
      &lt;div class="modal-body" style="padding:10px 10px;"&gt;
        &lt;form data-parsley-validate method="post" id="coursesecaddform" action="" enctype="multipart/form-data" data-parsley-trigger="focusout"&gt;
          {% csrf_token %}
          Add course
          {{ courseaddform.as_p }}
          Add section
          {{ sectionaddform.as_p }}
          &lt;p id="login-error"&gt;&lt;/p&gt;
          &lt;input type="submit" class="btn btn-info submit" name="AddCourse" value="Add Course" /&gt;
        &lt;/form&gt;
      &lt;/div&gt;
      &lt;div class="modal-footer"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
</code></pre> <p>views.py</p> <pre><code>def IndexView(request,Course_id,Section_id):
    if request.method == "GET":
        this_course = Course.objects.get(pk=Course_id)
        active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id)
        section_list = Section.objects.all().filter(course=this_course)
        if len(section_list) &gt;1:
            multi_section = True
        else:
            multi_section = False
        active_section = Section.objects.get(pk=Section_id)
        roster = Student.objects.all().filter(sections__in=[active_section])
        announcement_list = Announcement.objects.all().filter(sections__in=[active_section])
        courseaddform = CourseAddForm()
        sectionaddform = SectionAddForm()
        context = {'active_courses':active_courses, 'this_course': this_course,
                   'active_section':active_section, 'section_list':section_list,
                   'roster':roster, 'multi_section':multi_section,
                   'announcement_list':announcement_list,
                   'courseaddform':courseaddform, 'sectionaddform':sectionaddform}
        return render(request,'gbook/index.html', context)

    elif request.method == "POST":
        this_course = Course.objects.get(pk=Course_id)
        active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id)
        section_list = Section.objects.all().filter(course=this_course)
        if len(section_list) &gt;1:
            multi_section = True
        else:
            multi_section = False
        active_section = Section.objects.get(pk=Section_id)
        f = CourseAddForm(request.POST, instance=Course())
        g = SectionAddForm(request.POST, instance=Section())
        if f.is_valid() and g.is_valid():
            new_course = f.save()
            new_section = g.save(commit=False)
            new_section.course = new_course
            print new_section.course
            new_section.save()
        return redirect("/gbook/"+str(Course_id)+"/"+str(active_section))
</code></pre>
0
2016-08-09T20:27:58Z
38,860,097
<p>Solution:</p> <p>Instead of using the <code>instance</code> argument, use <code>prefix</code>:</p> <pre><code>def IndexView(request,Course_id,Section_id):
    if request.method == "GET":
        this_course = Course.objects.get(pk=Course_id)
        active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id)
        section_list = Section.objects.all().filter(course=this_course)
        if len(section_list) &gt;1:
            multi_section = True
        else:
            multi_section = False
        active_section = Section.objects.get(pk=Section_id)
        roster = Student.objects.all().filter(sections__in=[active_section])
        announcement_list = Announcement.objects.all().filter(sections__in=[active_section])
        courseaddform = CourseAddForm(prefix='crs')
        sectionaddform = SectionAddForm(prefix='sctn')
        context = {'active_courses':active_courses, 'this_course': this_course,
                   'active_section':active_section, 'section_list':section_list,
                   'roster':roster, 'multi_section':multi_section,
                   'announcement_list':announcement_list,
                   'courseaddform':courseaddform, 'sectionaddform':sectionaddform}
        return render(request,'gbook/index.html', context)

    elif request.method == "POST":
        this_course = Course.objects.get(pk=Course_id)
        active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id)
        section_list = Section.objects.all().filter(course=this_course)
        if len(section_list) &gt;1:
            multi_section = True
        else:
            multi_section = False
        active_section = Section.objects.get(pk=Section_id)
        f = CourseAddForm(request.POST, prefix='crs')
        g = SectionAddForm(request.POST, prefix='sctn')
        if f.is_valid() and g.is_valid():
            new_course = f.save()
            new_section = g.save(commit=False)
            new_section.course = new_course
            print new_section.course
            new_section.save()
        return redirect("/gbook/"+str(Course_id)+"/"+str(active_section))
</code></pre>
1
2016-08-09T20:44:33Z
[ "python", "django", "forms", "django-forms" ]
Multiple forms submitting with single view - Django
38,859,842
<p>I have two ModelForms submitting from a single view. One model is a ForeignKey of another. At this point, the form will add to the DB, but not as expected. Instead of adding a Course and a Section (with course field populated with the newly-created Course), I'm getting a Course and Section that are linked, but with the Section name that was entered as both the Course name and the Section name.</p> <p>models.py</p> <pre><code>class Course(models.Model): Name = models.CharField(max_length=30,unique=True) Active = models.BooleanField(default=True) def __unicode__(self): return u'%s' % (self.Name) class Section(models.Model): Name = models.CharField(max_length=30,default='.',unique=True) course = models.ForeignKey(Course, on_delete=models.CASCADE) assessments = models.ManyToManyField(Assessment) def __unicode__(self): return u'%s / %s' % (self.Name, self.course) </code></pre> <p>forms.py</p> <pre><code>class CourseAddForm(forms.ModelForm): class Meta: model = Course fields = ['Name', 'Active'] class SectionAddForm(forms.ModelForm): class Meta: model = Section fields = ['Name'] </code></pre> <p>templates/index.html</p> <pre><code>&lt;!-- COURSE ADD MODAL --&gt; &lt;div class="modal fade" id="CourseAddModal" role="dialog"&gt; &lt;div class="modal-dialog"&gt; &lt;!-- Modal content--&gt; &lt;div class="modal-content"&gt; &lt;div class="modal-header" style="padding:5px 10px;"&gt; &lt;button type="button" class="close" data-dismiss="modal"&gt;&amp;times;&lt;/button&gt; &lt;h4&gt;Add Course&lt;/h4&gt; &lt;/div&gt; &lt;div class="modal-body" style="padding:10px 10px;"&gt; &lt;form data-parsley-validate method="post" id="coursesecaddform" action="" enctype="multipart/form-data" data-parsley-trigger="focusout"&gt; {% csrf_token %} Add course {{ courseaddform.as_p }} Add section {{ sectionaddform.as_p }} &lt;p id="login-error"&gt;&lt;/p&gt; &lt;input type="submit" class="btn btn-info submit" name="AddCourse" value="Add Course" /&gt; &lt;/form&gt; &lt;/div&gt; &lt;div 
class="modal-footer"&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>views.py</p> <pre><code>def IndexView(request,Course_id,Section_id): if request.method == "GET": this_course = Course.objects.get(pk=Course_id) active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id) section_list = Section.objects.all().filter(course=this_course) if len(section_list) &gt;1: multi_section = True else: multi_section = False active_section = Section.objects.get(pk=Section_id) roster = Student.objects.all().filter(sections__in=[active_section]) announcement_list = Announcement.objects.all().filter(sections__in=[active_section]) courseaddform = CourseAddForm() sectionaddform = SectionAddForm() context = {'active_courses':active_courses, 'this_course': this_course, 'active_section':active_section, 'section_list':section_list, 'roster':roster, 'multi_section':multi_section, 'announcement_list':announcement_list, 'courseaddform':courseaddform, 'sectionaddform':sectionaddform} return render(request,'gbook/index.html', context) elif request.method == "POST": this_course = Course.objects.get(pk=Course_id) active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id) section_list = Section.objects.all().filter(course=this_course) if len(section_list) &gt;1: multi_section = True else: multi_section = False active_section = Section.objects.get(pk=Section_id) f = CourseAddForm(request.POST, instance=Course()) g = SectionAddForm(request.POST, instance=Section()) if f.is_valid() and g.is_valid(): new_course = f.save() new_section = g.save(commit=False) new_section.course = new_course print new_section.course new_section.save() return redirect("/gbook/"+str(Course_id)+"/"+str(active_section)) </code></pre>
0
2016-08-09T20:27:58Z
38,860,098
<p>Both models have a <code>Name</code> field, so both forms contain an input with identical <code>name</code> attribute. When Django encounters two POST parameters with the same name, it binds the last one's value to all fields by that name. Long story short - change field names in your models so they are different.</p>
0
2016-08-09T20:44:39Z
[ "python", "django", "forms", "django-forms" ]
Multiple forms submitting with single view - Django
38,859,842
<p>I have two ModelForms submitting from a single view. One model is a ForeignKey of another. At this point, the form will add to the DB, but not as expected. Instead of adding a Course and a Section (with course field populated with the newly-created Course), I'm getting a Course and Section that are linked, but with the Section name that was entered as both the Course name and the Section name.</p> <p>models.py</p> <pre><code>class Course(models.Model): Name = models.CharField(max_length=30,unique=True) Active = models.BooleanField(default=True) def __unicode__(self): return u'%s' % (self.Name) class Section(models.Model): Name = models.CharField(max_length=30,default='.',unique=True) course = models.ForeignKey(Course, on_delete=models.CASCADE) assessments = models.ManyToManyField(Assessment) def __unicode__(self): return u'%s / %s' % (self.Name, self.course) </code></pre> <p>forms.py</p> <pre><code>class CourseAddForm(forms.ModelForm): class Meta: model = Course fields = ['Name', 'Active'] class SectionAddForm(forms.ModelForm): class Meta: model = Section fields = ['Name'] </code></pre> <p>templates/index.html</p> <pre><code>&lt;!-- COURSE ADD MODAL --&gt; &lt;div class="modal fade" id="CourseAddModal" role="dialog"&gt; &lt;div class="modal-dialog"&gt; &lt;!-- Modal content--&gt; &lt;div class="modal-content"&gt; &lt;div class="modal-header" style="padding:5px 10px;"&gt; &lt;button type="button" class="close" data-dismiss="modal"&gt;&amp;times;&lt;/button&gt; &lt;h4&gt;Add Course&lt;/h4&gt; &lt;/div&gt; &lt;div class="modal-body" style="padding:10px 10px;"&gt; &lt;form data-parsley-validate method="post" id="coursesecaddform" action="" enctype="multipart/form-data" data-parsley-trigger="focusout"&gt; {% csrf_token %} Add course {{ courseaddform.as_p }} Add section {{ sectionaddform.as_p }} &lt;p id="login-error"&gt;&lt;/p&gt; &lt;input type="submit" class="btn btn-info submit" name="AddCourse" value="Add Course" /&gt; &lt;/form&gt; &lt;/div&gt; &lt;div 
class="modal-footer"&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>views.py</p> <pre><code>def IndexView(request,Course_id,Section_id): if request.method == "GET": this_course = Course.objects.get(pk=Course_id) active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id) section_list = Section.objects.all().filter(course=this_course) if len(section_list) &gt;1: multi_section = True else: multi_section = False active_section = Section.objects.get(pk=Section_id) roster = Student.objects.all().filter(sections__in=[active_section]) announcement_list = Announcement.objects.all().filter(sections__in=[active_section]) courseaddform = CourseAddForm() sectionaddform = SectionAddForm() context = {'active_courses':active_courses, 'this_course': this_course, 'active_section':active_section, 'section_list':section_list, 'roster':roster, 'multi_section':multi_section, 'announcement_list':announcement_list, 'courseaddform':courseaddform, 'sectionaddform':sectionaddform} return render(request,'gbook/index.html', context) elif request.method == "POST": this_course = Course.objects.get(pk=Course_id) active_courses = Course.objects.all().filter(Active=True).exclude(pk=Course_id) section_list = Section.objects.all().filter(course=this_course) if len(section_list) &gt;1: multi_section = True else: multi_section = False active_section = Section.objects.get(pk=Section_id) f = CourseAddForm(request.POST, instance=Course()) g = SectionAddForm(request.POST, instance=Section()) if f.is_valid() and g.is_valid(): new_course = f.save() new_section = g.save(commit=False) new_section.course = new_course print new_section.course new_section.save() return redirect("/gbook/"+str(Course_id)+"/"+str(active_section)) </code></pre>
0
2016-08-09T20:27:58Z
38,860,105
<p>Since your <code>Course</code> and <code>Section</code> both have an attribute called <code>Name</code>, I am betting that your form has two fields with the attributes <code>id = "id_Name"</code> and <code>name = "Name"</code>.</p>
0
2016-08-09T20:44:59Z
[ "python", "django", "forms", "django-forms" ]
Python: Read Data file with variable number of rows per observation
38,859,916
<p>I have to work with a dataset that contains multiple lines per observation. The number of rows per observation can vary. The file is structured so that information is not repeated. </p> <p>The file contains a segment ID, which relates the output to a specific piece of information. Here is a sample of the file layout.</p> <p>Segment Id definitions </p> <pre><code>SegementID Table Number of Occurrences 1 Customer Information 1 3 Items bought 1-10 </code></pre> <p>Table layout - Customer Information </p> <pre><code>ID Name </code></pre> <p>Table layout - Items Bought </p> <pre><code>Item Cost Date </code></pre> <p>Here is a sample of how the output file would look.</p> <pre><code>SegementID 1 100 matt 3 ball 3.25 1/16/2016 3 cat 5.55 1/17/2016 1 200 lucy 3 doll 500.35 2/1/2016 3 ball 3.25 2/2/2016 3 dog 5.55 2/3/2016 </code></pre> <p>Notice that segment ID = 1, relates to customer information. Segment ID 3 then shows all the transactions that customer has made. </p> <p>I would like to make the structure that has customer ID available on each transaction line. What is the best way to do this?</p> <pre><code>ID Item Cost Date 100 ball 3.25 1/16/2016 100 cat 5.55 1/17/2016 200 doll 500.35 2/1/2016 200 cat 3.25 2/2/2016 200 dog 5.55 2/3/2016 </code></pre>
0
2016-08-09T20:33:39Z
38,860,698
<p>Here is a quick solution using <code>pandas</code> (note <code>next(f)</code> is used to skip the header line; it works on both Python 2 and 3, unlike <code>f.next()</code>):</p> <pre><code>import pandas as pd df = pd.DataFrame() with open("file.txt", "r") as f: next(f) for row,line in enumerate(f): info = line.split() if info[0] == '1': client = info[1] else: df[row] = [client, info[1],info[2],info[3]] df = df.transpose() df.columns = ["ID","Item","Cost","Date"] </code></pre>
1
2016-08-09T21:24:49Z
[ "python", "data-files" ]
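The carry-the-customer-forward idea in the answer above can also be sketched in plain Python 3 without pandas; the sample layout from the question (segment 1 = customer info, segment 3 = item bought) is assumed, including its "SegementID" header spelling:

```python
# Flatten a segmented record file: a segment-1 line carries the customer
# id; segment-3 lines are purchases that inherit the most recent customer.
sample = """\
SegementID
1 100 matt
3 ball 3.25 1/16/2016
3 cat 5.55 1/17/2016
1 200 lucy
3 doll 500.35 2/1/2016
"""

rows = []
customer_id = None
for line in sample.splitlines()[1:]:   # skip the header line
    fields = line.split()
    if fields[0] == "1":               # customer-information segment
        customer_id = fields[1]
    else:                              # items-bought segment
        rows.append([customer_id] + fields[1:])

for row in rows:
    print(row)
```

The same loop works line by line over an open file handle, so the whole file never needs to fit in memory.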
Skipping 'FILE' which didn't exist, or couldn't be read error
38,860,016
<p>I'm trying to create a workflow where I take a directory that contains a list of input files and runs them through a command line tool and outputs the results into an output directory. It should be really simple, and I have gotten it to work ... mostly.</p> <p>The problem is that whenever I give it a input <strong>directory</strong>, I get an error of "Skipping FILE which didn't exist, or couldn't be read" even though I am 100% certain that the files exist in my input directory. </p> <p>However, if I alter the code just a little bit, and make it so that I only feed it an input <strong>file</strong> and not a directory, the script runs along like it should and completes perfectly.</p> <p><strong>My input files are gzipped.</strong></p> <p>Here is the script:</p> <pre><code>import argparse import subprocess import os parser = argparse.ArgumentParser(description="A RNAseq pipeline for pair-end data") parser.add_argument("-i", "--inputDir", help="A input directory containing your gzipped fastq files", required=True) parser.add_argument("-o", "--outputDir", help="Output directory", required=True) parser.parse_args() ### Define global variables args = parser.parse_args() inputDir = args.inputDir outputDir = args.outputDir ### Grab all fastq files in input directory fastq_directory = os.listdir("{}".format(inputDir)) fastq_files = [] for file in fastq_directory: fastq_files.append(file) ### Run FastQC for file in fastq_files: fastqc_command = "fastqc --extract -o {} {}".format(outputDir, file) subprocess.check_output(['bash', '-c', fastqc_command]) </code></pre> <p>The error:</p> <pre><code>Skipping 'KO1_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'KO1_R2.fastq.gz' which didn't exist, or couldn't be read Skipping 'KO2_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'KO2_R2.fastq.gz' which didn't exist, or couldn't be read Skipping 'KO3_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'KO3_R2.fastq.gz' which didn't exist, or 
couldn't be read Skipping 'WT1_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'WT1_R2.fastq.gz' which didn't exist, or couldn't be read Skipping 'WT2_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'WT2_R2.fastq.gz' which didn't exist, or couldn't be read Skipping 'WT3_R1.fastq.gz' which didn't exist, or couldn't be read Skipping 'WT3_R2.fastq.gz' which didn't exist, or couldn't be read </code></pre> <p>Any recommendations?</p> <p>PS: I know the script is terrible, but i'm learning :). Though suggestions definitely welcomed!</p>
0
2016-08-09T20:38:58Z
38,860,885
<p>Try changing this:</p> <pre><code>fastq_directory = os.listdir("{}".format(inputDir)) fastq_files = [] for file in fastq_directory: fastq_files.append(file) </code></pre> <p>To this:</p> <pre><code>fastq_directory = os.listdir("{}".format(inputDir)) fastq_files = [] for file in fastq_directory: fastq_files.append(os.path.join(inputDir, file)) </code></pre> <p>This is because <code>os.listdir()</code> will only return filenames, not full paths.</p>
2
2016-08-09T21:40:20Z
[ "python", "workflow", "bioinformatics" ]
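The fix in the answer above rests on one fact: `os.listdir()` returns bare filenames, not paths. A small throwaway-directory sketch makes the difference visible:

```python
import os
import tempfile

# os.listdir() yields bare filenames; os.path.join() turns them into
# paths that stay valid regardless of the current working directory.
tmpdir = tempfile.mkdtemp()
open(os.path.join(tmpdir, "sample_R1.fastq.gz"), "w").close()

bare = os.listdir(tmpdir)
full = [os.path.join(tmpdir, name) for name in bare]

print(bare)  # just the filename, no directory part
print(full)  # the same name, prefixed with tmpdir
```

Passing `bare` names to an external tool only works when the process happens to run inside that directory, which is why the original script failed for any other input directory.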
python can't use a property defined in the super (father) class
38,860,053
<p>This is my code</p> <pre><code>class ElasticsearchController(object): def __init__(self): self.es = Elasticsearch(['blabla'], port=9200) class MasterDataIndexController(ElasticsearchController): def __init__(self): self.indexName = "bbbbb" def search(self, query): return super.es.search(index=self.indexName, docuemntType = self.documentType, query = query) </code></pre> <p>I got this error:</p> <pre><code>AttributeError: type object 'super' has no attribute 'es' </code></pre> <p>though the super does have it.</p> <p>Any idea please?</p>
1
2016-08-09T20:41:29Z
38,860,085
<p>You are not initialising your super classes.</p> <pre><code>class ElasticsearchController(object): def __init__(self): self.es = Elasticsearch(['a.b.c.d'], port=1234) class MasterDataIndexController(ElasticsearchController): def __init__(self): super(MasterDataIndexController, self).__init__() #^^^^^^^^^^^ self.indexName = "bbbbb" def search(self, query): return self.es.search(index=self.indexName, docuemntType = self.documentType, query = query) # ^^^^^ self should be fine. </code></pre>
4
2016-08-09T20:43:33Z
[ "python" ]
python can't use a property defined in the super (father) class
38,860,053
<p>This is my code</p> <pre><code>class ElasticsearchController(object): def __init__(self): self.es = Elasticsearch(['blabla'], port=9200) class MasterDataIndexController(ElasticsearchController): def __init__(self): self.indexName = "bbbbb" def search(self, query): return super.es.search(index=self.indexName, docuemntType = self.documentType, query = query) </code></pre> <p>I got this error:</p> <pre><code>AttributeError: type object 'super' has no attribute 'es' </code></pre> <p>though the super does have it.</p> <p>Any idea please?</p>
1
2016-08-09T20:41:29Z
38,860,090
<p>You can't use <code>super</code> like that. <code>super</code> will give you access to the super class, but you should not use it as a shortcut for <code>self</code>. Depending on whether you use Python 2 or Python 3, you can call <code>super(MyClass, self)</code> or just <code>super()</code>. You can use this during initialization to call the <code>__init__</code> method of your superclass. </p> <p>However, in most simple class hierarchies, it is not necessary to call super and your code will be clearer if you just call <code>SuperClass.__init__(self)</code>.</p> <p>After this you should be able to just use <code>self</code> and attribute access.</p>
3
2016-08-09T20:43:51Z
[ "python" ]
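Both answers above reduce to two rules: initialise the parent class first, then reach inherited attributes through `self`, never through `super`. A minimal sketch with stand-in names:

```python
class Base:
    def __init__(self):
        # Stand-in for the Elasticsearch connection in the question.
        self.resource = "connected"

class Child(Base):
    def __init__(self):
        super().__init__()   # Python 3; in Python 2: super(Child, self).__init__()
        self.index_name = "bbbbb"

    def describe(self):
        # Attributes set by the parent are reached through self, not super.
        return self.resource + "/" + self.index_name

c = Child()
print(c.describe())  # connected/bbbbb
```

Skipping the `super().__init__()` call would leave `self.resource` unset, so `describe()` would raise `AttributeError`, the same class of failure as in the question.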
Create random numpy matrix of same size as another.
38,860,095
<p>This question <a href="http://stackoverflow.com/questions/35967907/how-to-make-a-new-numpy-array-same-size-as-a-given-array-and-fill-it-with-a-scal">here</a> was useful, but mine is slightly different. </p> <p>I am trying to do something simple here, I have a numpy matrix <strong>A</strong>, and I simply want to create another numpy matrix <strong>B</strong>, of the same shape as <strong>A</strong>, but I want <strong>B</strong> to be created from <code>numpy.random.randn()</code> How can this be done? Thanks. </p>
1
2016-08-09T20:44:18Z
38,860,153
<p><code>np.random.randn</code> takes the shape of the array as its input which you can get directly from the <code>shape</code> property of the first array. You have to unpack <code>a.shape</code> with the <a class='doc-link' href="http://stackoverflow.com/documentation/python/4282/list-destructuring-aka-packing-and-unpacking/14983/unpacking-function-arguments#t=201608101202010597851"><code>*</code> operator</a> in order to get the proper input for <code>np.random.randn</code>.</p> <pre><code>a = np.zeros([2, 3]) print(a.shape) # outputs: (2, 3) b = np.random.randn(*a.shape) print(b.shape) # outputs: (2, 3) </code></pre>
2
2016-08-09T20:47:44Z
[ "python", "arrays", "numpy", "random" ]
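The unpacking step in the answer is worth seeing end-to-end; this assumes NumPy is installed:

```python
import numpy as np

a = np.zeros((2, 3))
# a.shape is the tuple (2, 3); the * operator unpacks it into the
# separate positional arguments that np.random.randn expects.
b = np.random.randn(*a.shape)

print(b.shape)  # (2, 3)
```

Alternatively, `np.random.standard_normal(a.shape)` accepts the shape tuple directly, with no unpacking needed.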
How to assign multiple values to an array of strings in python
38,860,135
<p>I'm reading a file line by line and am trying to parse part of each line and do stuff with it. The info that I'm trying to parse is 25 strings.</p> <p>I was trying to do </p> <pre><code>for i in info: Consequence=i[0] IMPACT=i[1] . . HGNC_ID = i[24] </code></pre> <p>but obviously there's a better way of doing this. I tried making a list of all the strings, initializing them as empty strings, and then did:</p> <pre><code> for counter,val in enumerate(info_list): try: val=i[counter] break except: val="" </code></pre> <p>where</p> <pre><code>info_list=(Allele,Consequence...) </code></pre> <p>That doesn't work though, it prints empty strings and counter is always zero, even though the length of info_list is 25. </p> <p>What would be the best way to assign those values? (keep in mind that some "infos" might have 23 or 24 values in the array, in that case I would want to assign an empty string to the missing values, the missing values would only be at the end so there is no confusion as to which variables are missing)</p> <p>Let me know if I can add more information! </p> <p>Thanks! :) </p>
-2
2016-08-09T20:47:02Z
38,860,253
<p>There does not seem to be much wrong with your first version of the code. I don't know why you think there "obviously" should be a better way.</p> <p>One thing you can do to make this a little clearer is:</p> <pre><code>if len(info) == 25: Consequence, IMPACT , . . ., HGNC_ID = info elif len(info) == 24: Consequence , IMPACT , . . ., HGNC_ID = info + [''] </code></pre> <p>(Note the filler must be a list, <code>['']</code>; concatenating a list with a bare string would raise a <code>TypeError</code>.) But I doubt if that will make it much better.</p> <p>What you might want to consider is not using variables but a hash table or a named tuple for the values you read from the line.</p>
0
2016-08-09T20:54:44Z
[ "python", "arrays", "for-loop" ]
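The padding idea from the answer generalises: pad the parsed list to the expected length once, then unpack it. The field names and the size of 5 below are illustrative stand-ins for the question's 25 fields:

```python
EXPECTED = 5  # stand-in for the 25 fields in the question

def pad(fields, size=EXPECTED, filler=""):
    # Extend a possibly-short record with empty strings; the filler is
    # wrapped in a list so it can be concatenated with a list.
    return fields + [filler] * (size - len(fields))

record = ["A", "missense", "HIGH"]          # only 3 of 5 values present
allele, consequence, impact, gene, hgnc_id = pad(record)
print((gene, hgnc_id))  # ('', '')
```

Because the missing values are always at the end, padding on the right keeps every field aligned with its name.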
Python can't import WMI under special circumstance
38,860,185
<p>I've created a standalone exe Windows service written in Python and built with pyInstaller. When I try to import wmi, an exception is thrown. </p> <p>What's really baffling is that I can do it without a problem if running the code in a foreground exe, or a foreground python script, or a python script running as a background service via pythonservice.exe! </p> <p>Why does it fail under this special circumstance of running as a service exe?</p> <pre><code>import wmi </code></pre> <p>Produces this error for me:</p> <pre><code>com_error: (-2147221020, 'Invalid syntax', None, None) </code></pre> <p>Here's the traceback:</p> <pre><code>Traceback (most recent call last): File "&lt;string&gt;", line 43, in onRequest File "C:\XXX\XXX\XXX.pyz", line 98, in XXX File "C:\XXX\XXX\XXX.pyz", line 31, in XXX File "C:\XXX\XXX\XXX.pyz", line 24, in XXX File "C:\XXX\XXX\XXX.pyz", line 34, in XXX File "C:\Program Files (x86)\PyInstaller-2.1\PyInstaller\loader\pyi_importers.py", line 270, in load_module File "C:\XXX\XXX\out00-PYZ.pyz\wmi", line 157, in &lt;module&gt; File "C:\XXX\XXX\out00-PYZ.pyz\win32com.client", line 72, in GetObject File "C:\XXX\XXX\out00-PYZ.pyz\win32com.client", line 87, in Moniker </code></pre> <p>wmi.py line 157 has a global call to GetObject:</p> <pre><code>obj = GetObject ("winmgmts:") </code></pre> <p>win32com\client__init.py__ contains GetObject(), which ends up calling Moniker():</p> <pre><code>def GetObject(Pathname = None, Class = None, clsctx = None): """ Mimic VB's GetObject() function. ob = GetObject(Class = "ProgID") or GetObject(Class = clsid) will connect to an already running instance of the COM object. ob = GetObject(r"c:\blah\blah\foo.xls") (aka the COM moniker syntax) will return a ready to use Python wrapping of the required COM object. Note: You must specifiy one or the other of these arguments. I know this isn't pretty, but it is what VB does. Blech. If you don't I'll throw ValueError at you. 
:) This will most likely throw pythoncom.com_error if anything fails. """ if clsctx is None: clsctx = pythoncom.CLSCTX_ALL if (Pathname is None and Class is None) or \ (Pathname is not None and Class is not None): raise ValueError("You must specify a value for Pathname or Class, but not both.") if Class is not None: return GetActiveObject(Class, clsctx) else: return Moniker(Pathname, clsctx) </code></pre> <p>The first line in Moniker(), i.e. MkParseDisplayName() is where the exception is encountered:</p> <pre><code>def Moniker(Pathname, clsctx = pythoncom.CLSCTX_ALL): """ Python friendly version of GetObject's moniker functionality. """ moniker, i, bindCtx = pythoncom.MkParseDisplayName(Pathname) dispatch = moniker.BindToObject(bindCtx, None, pythoncom.IID_IDispatch) return __WrapDispatch(dispatch, Pathname, clsctx=clsctx) </code></pre> <p>Note: I tried using </p> <pre><code>pythoncom.CoInitialize() </code></pre> <p>which apparently solves this import problem within a thread, but that didn't work...</p>
1
2016-08-09T20:50:20Z
38,879,523
<p>I tried solving this countless ways. In the end, I threw in the towel and had to just find a different means of achieving the same goals I had with wmi. </p> <p>Apparently that invalid syntax error is thrown when trying to create an object with an invalid "moniker name", which can simply mean the service, application, etc. doesn't exist on the system. Under this circumstance, it seems "winmgmts" simply can't be found at all! And yes, I tried numerous variations on that moniker with additional specs, and I tried running the service under a different user account, etc. </p>
0
2016-08-10T17:03:13Z
[ "python", "service", "wmi", "pyinstaller" ]
Which package to install based only on import line
38,860,246
<p>I am very new to Python, and I am attempting to reproduce an <a href="https://www.quantstart.com/articles/Forex-Trading-Diary-1-Automated-Forex-Trading-with-the-OANDA-API" rel="nofollow">example</a> (not necessary to answer the question). If all I have is <code>import threading</code> from within the code I assumed I could just run <code>pip install threading</code> however the module is not found. When I searched for a different package name in the Python package manager I came across hundreds. Why doesn't the pip command work, and how do I know which package to install?</p> <p><strong>My exact error</strong></p> <pre><code>:\Users\king\Desktop\_REPOS\misc\stock_analysis\forex\python\pythonv2&gt;python trading.py Traceback (most recent call last): File "trading.py", line 1, in &lt;module&gt; import Queue #pip install queuelib ImportError: No module named 'Queue' </code></pre> <p><strong>Version info</strong></p> <p>Python 3.5 32bit (64 bit OS) </p>
0
2016-08-09T20:54:23Z
38,860,324
<p>The first hit on google (search: python threading) actually gave me:</p> <p><a href="https://docs.python.org/2/library/threading.html" rel="nofollow">https://docs.python.org/2/library/threading.html</a> (the URL itself already indicates it)</p> <p>This means it's a standard library module, so it should already be available to you without extra installs. </p> <p>In case your Python is limited in a way and doesn't have it by default, please update your question with your Python version and the way it was installed.</p> <p>For future reference, you were mostly doing the right thing: a lot of modules have the same name as their import statements, and otherwise, in almost all cases, a simple Google search will suffice.</p>
1
2016-08-09T20:59:39Z
[ "python" ]
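`threading` needs no `pip install` because it ships with the interpreter. The traceback in the question is a related standard-library pitfall rather than a missing package: Python 3 renamed Python 2's `Queue` module to lowercase `queue`:

```python
import threading
import queue  # Python 3 name; this module was spelled "Queue" in Python 2

# Both are standard-library modules: no "pip install" exists or is needed.
print(threading.Thread)
print(queue.Queue)
```

So on the asker's Python 3.5, `import Queue` fails while `import queue` succeeds, with no package manager involved.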
How to set chunk size of netCDF4 in python?
38,860,344
<p>I can see the default chunking setting in netCDF4 library, but I have no idea how to change the chunk size.</p> <pre><code>from netCDF4 import Dataset volcgrp = Dataset('datasets/volcano.nc', 'r') data = volcgrp.variables['abso4'] print data.shape print data.chunking() &gt;(8, 96, 192) &gt;[1, 96, 192] </code></pre> <p>Is there anyone who can help with the setting?</p>
1
2016-08-09T21:00:54Z
38,873,786
<p>You can use <a href="http://xarray.pydata.org/en/stable/dask.html" rel="nofollow">xarray</a> to read the netcdf file and set chunks, e.g. </p> <pre><code>import xarray as xr ds = xr.open_dataset('/datasets/volcano.nc', chunks={'time': 10}) </code></pre>
2
2016-08-10T12:42:56Z
[ "python", "netcdf", "chunks", "chunking", "netcdf4" ]
pandas plot a specific column to be used as both x and y axis?
38,860,428
<p>I have a dataframe that looks like the below:</p> <p>I want to plot <code>company_id</code> as the x axis and <code>count(company_id)</code> for the y axis. I also want to stack it by category of <code>open</code> and <code>close</code>.</p> <p>The code doesn't show any output and runs indefinitely.</p> <pre><code>df person_id company_id time event type date 0 1 255 1379312026 open A 2013-09-16 02:13:46 1 1 255 1379312086 close A 2013-09-16 02:14:46 2 1 182 1379312926 open B 2013-09-16 02:28:46 3 1 182 1379313046 close B 2013-09-16 02:30:46 4 1 81 1379314006 open A 2013-09-16 02:46:46 df2=df[['company_id','event']] df2.plot(kind='bar',stacked=True) </code></pre>
1
2016-08-09T21:06:16Z
38,861,142
<p>If you're able to use <code>seaborn</code> as suggested in a comment you could do this (note the data must be a <code>pandas.DataFrame</code>, not a plain dict, for the later <code>groupby</code> call to work):</p> <pre><code>import pandas as pd import seaborn as sns, matplotlib.pyplot as plt df = pd.DataFrame({'company_id': {0: 255, 1: 255, 2: 182, 3: 182, 4: 81}, 'time': {0: 1379312026, 1: 1379312086, 2: 1379312926, 3: 1379313046, 4: 1379314006}, 'person_id': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}, 'counts': {0: 2, 1: 2, 2: 2, 3: 2, 4: 1}, 'type': {0: 'A', 1: 'A', 2: 'B', 3: 'B', 4: 'A'}, 'event': {0: 'open', 1: 'close', 2: 'open', 3: 'close', 4: 'open'}}) # add counts column counts = df.groupby('company_id').size().rename('counts') df['count'] = df['company_id'].map(counts) g = sns.factorplot(y='count',x='company_id',hue='event',data=df,kind='bar', palette='muted',legend=False,ci=None) plt.legend(loc='best') plt.show() </code></pre> <p>Result:</p> <p><a href="http://i.stack.imgur.com/Oux2W.png" rel="nofollow"><img src="http://i.stack.imgur.com/Oux2W.png" alt="enter image description here"></a></p>
2
2016-08-09T22:03:24Z
[ "python", "pandas", "plot" ]
Python math.pow() losing calculation percision
38,860,496
<p>My code:</p> <pre><code>import math def calculate(operator, firstValue, secondValue): if operator == '^': toReturn = math.pow(firstValue, secondValue) . . . return toReturn . . . new = calculate('^', 19, 19) print(' is ' + str(new)) workingStack.append(int(new)) print('New stack is ' + str(workingStack)) </code></pre> <p>The results are</p> <pre><code>" is 1.9784196556603136e+24" New stack is [16, 14, 1978419655660313627328512] </code></pre> <p>which is fine for the formatted string, but when I actually use the variable, it shows it's losing the precision of the number, as you can see it calculates math.pow(19, 19) as 1978419655660313627328512, but it should be 1978419655660313589123979.</p> <p>Here is a better way to compare them</p> <ul> <li>1978419655660313627328512 &lt;- Calculated value</li> <li>1978419655660313589123979 &lt;- True value</li> </ul> <p>You can see the error occurs where the scientific notation loses the precision in the printed results above. I need to be able to use the true value of the variable in other calculations.</p> <p>I have read many things about Python 3 automatically converting int to bignum, but bignum doesn't seem to be enough. I tried 19 ** 19 too. It calculates at the same wrong number.</p> <p>Can someone help me?</p>
-2
2016-08-09T21:10:50Z
38,860,544
<p><code>math.pow</code> does a floating point calculation with finite precision, use the power-operator instead:</p> <pre><code>def calculate(operator, firstValue, secondValue): if operator == '^': toReturn = firstValue ** secondValue return toReturn </code></pre> <p>With two integer operands <code>**</code> uses Python's arbitrary-precision integers, so the result is exact; if either operand is a float, <code>**</code> falls back to floating point as well.</p>
3
2016-08-09T21:14:43Z
[ "python", "precision" ]
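The difference between the two power operations can be checked directly; the exact value below matches the "true value" quoted in the question:

```python
import math

exact = 19 ** 19           # int ** int: arbitrary-precision, exact
approx = math.pow(19, 19)  # always computed as a 64-bit float

print(exact)               # 1978419655660313589123979
print(int(approx))         # a nearby value: the float lost the low digits
```

A 64-bit float carries about 53 bits (roughly 16 decimal digits) of precision, while 19**19 needs 25 digits, so `math.pow` cannot represent it exactly.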
How to install a custom, local Python Package with Anaconda?
38,860,547
<p>I have Anaconda2 running smoothly on Eclipse's PyDev environment. </p> <p>I have received a custom package from a colleague in the form of a folder with a "library" sub-directory that contains many ".pyc" files (which I presume are the function files) and a "<code>__init__.py</code>" file. But no matter what I do, I cannot seem to install the folder as a package. </p> <p>I have tried everything posted here in the Anaconda Prompt (which I'm assuming was the correct way of implementing the instructions) <a href="http://conda.pydata.org/docs/using/pkgs.html#install-non-conda-packages" rel="nofollow">http://conda.pydata.org/docs/using/pkgs.html#install-non-conda-packages</a> but nothing worked.</p> <p>I am very new to really working with Anaconda, Python, Eclipse, and PyDev (I have only written simple scripts with the default IDLE IDE in the past). </p> <p>All I really want to be able to do is to use the package of functions given to me - even if they are not properly "installed", although that would be ideal. If anyone out there can help me with this I would be very grateful!</p>
0
2016-08-09T21:14:52Z
38,860,631
<p>Pyc files are precompiled bytecode files; you don't need to install them separately. Since the folder contains an <code>__init__.py</code>, it is already a package: make sure its parent directory is on <code>sys.path</code> (or on your PyDev project's PYTHONPATH) and simply import the package folder with</p> <pre><code>import folder_name </code></pre> <p>(hyphens are not valid in Python module names, so the folder name must be a valid identifier).</p>
0
2016-08-09T21:19:41Z
[ "python", "eclipse", "local", "pydev", "anaconda" ]
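The import-without-installing route can be sketched end-to-end: build a package directory (a folder holding `__init__.py`), put its parent on `sys.path`, and import it. The temp directory and the package name are stand-ins for the colleague's real folder:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package: a directory that contains __init__.py.
parent = tempfile.mkdtemp()
pkg_dir = os.path.join(parent, "colleague_pkg")
os.mkdir(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Any directory on sys.path can supply packages, installed or not.
sys.path.insert(0, parent)
importlib.invalidate_caches()
colleague_pkg = importlib.import_module("colleague_pkg")
print(colleague_pkg.ANSWER)  # 42
```

In PyDev the same effect is achieved by adding the parent folder under Project Properties, PYTHONPATH, instead of editing `sys.path` in code.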
Openpyxl Copy Time, returns -1
38,860,565
<p>I am trying to create an Excel file that is the combination of multiple Excel files. However, when I copy a cell with a value of 00:00 and append it to the master Excel file, Excel thinks the time is from the year 1899.</p> <p>Here is my code:</p> <pre><code>def excel_graphs_all(day, users): chart_wb = Workbook(write_only=True) graph_ws = chart_wb.create_sheet(day + ' Graphs', 0) chart_wb_filename = 'graphs_' + day + '.xlsx' columnNum = ['A', 'H'] rowNum = 1 i = 0 for user in users: filename = user[1] + '_' + day + '.xlsx' iter_wb = load_workbook(filename=filename,read_only=True) ws = iter_wb.active chart_ws = chart_wb.create_sheet(user[1]) for row in ws.rows: chart_ws.append([row[0].value, row[1].value]) chart = ScatterChart() chart.title = user[1] + ' ' + day + ' Heartrate Data' chart.x_axis.title = 'Time' chart.y_axis.title = 'Heartrate' chart.x_axis.scaling.min = 0 chart.x_axis.scaling.max = 1 xvalues = Reference(chart_ws, min_col=1, min_row=1, max_row= ws.max_row) yvalues = Reference(chart_ws, min_col=2, min_row=1, max_row= ws.max_row) series = Series(yvalues, xvalues, title='Heartrate') chart.series.append(series) spot = columnNum[i % 2]+str(rowNum) graph_ws.add_chart(chart, spot) if ((i+1)%2)== 0: rowNum += 16 i += 1 chart_wb.save(chart_wb_filename) return chart_wb_filename </code></pre> <p>Thanks!</p>
-1
2016-08-09T21:16:02Z
38,865,669
<p>What do you mean by value <code>00:00</code>? Excel uses formatting and not typing for dates and times. From the specification: </p> <blockquote> <p>When using the 1900 date system, which has a base date of 30th December 1899, a serial date- time of 1.5 represents midday on the 31st December 1899</p> </blockquote> <p>It sounds like you just need to check the formatting for the relevant cells.</p>
0
2016-08-10T06:20:52Z
[ "python", "excel", "python-2.7", "datetime", "openpyxl" ]
Why not serving static files with Django in a production environment?
38,860,601
<p>I came across the following example for <code>settings.py</code>:</p> <pre><code>if settings.DEBUG: urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>and was told:</p> <blockquote> <p>The static() helper function is suitable for development but not for production use. Never serve your static files with Django in a production environment.</p> </blockquote> <p>Can anyone explain why and how to use it the better way?</p> <p>EDIT:</p> <p>Can I use static() with Apache?</p>
2
2016-08-09T21:17:59Z
38,861,145
<p>Django is not very fast or efficient for serving static files. To quote the Django docs, "This method is grossly inefficient and probably insecure, so it is unsuitable for production." It is better to use tools that are specifically designed for serving static content. There are extensive instructions for how to setup a static server in the Django documentation on <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/deployment/" rel="nofollow">deploying static files</a>.</p> <p>The basic idea is to not unnecessarily involve Django in the serving of static files. Let your production server, which from your comment it sounds like is apache, serve the static files directly. Here are instructions for editing your httpd.conf file to get apache to serve the static files <a href="https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/#serving-files" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/#serving-files</a>. The static() function in django should not be involved at all. Make sure to use the collectstatic management command in django to copy all your static files to the STATIC_ROOT so apache can find them.</p>
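For reference, a minimal Apache sketch of the kind of configuration those deployment docs describe — the filesystem paths and the WSGI script location below are placeholders, not values taken from the question:

```apache
# Serve everything under STATIC_URL straight from the collected files,
# so static requests never reach Django at all.
Alias /static/ /var/www/example.com/static/

<Directory /var/www/example.com/static>
    Require all granted
</Directory>

# All remaining requests go to the Django WSGI application (mod_wsgi).
WSGIScriptAlias / /var/www/example.com/mysite/wsgi.py
```

Run `manage.py collectstatic` first so that STATIC_ROOT actually contains the files Apache will serve.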
3
2016-08-09T22:03:30Z
[ "python", "django", "apache", "static", "settings" ]
Python CSV File writing contents to file
38,860,662
<p>I am trying to write and append values to a CSV file. Whenever I execute the script, the header values are printed multiple times. When I open the excel sheet I see</p> <p><img src="http://i.stack.imgur.com/Hsg0l.png" alt="enter image description here"></p> <p>Instead of</p> <p><img src="http://i.stack.imgur.com/2fy3P.png" alt="enter image description here"></p> <p>My code:</p> <pre><code>with open('names.csv', 'a') as csvfile: fieldnames = ['Header1', 'Header2', 'Header3'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() writer.writerow({'Header1':'%s'%(a),'Header2':'%s'%(b),'Header3':'%s'% (c)}) </code></pre>
1
2016-08-09T21:22:42Z
38,861,000
<p>If you're executing the same code multiple times, you'll end up with multiple calls to <code>writer.writeheader()</code>. This is the cause of the header being repeated in your resulting file.</p> <p>Instead of calling the code multiple times (and opening the same file over and over, appending each time), you could pass the contents you want to write in an iterable (e.g. a <code>list</code> of <code>dict</code>s):</p> <pre><code>import csv def write_to_file(rows): with open('names.csv', 'w') as csvfile: fieldnames = ['Header1', 'Header2', 'Header3'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() for row in rows: writer.writerow({'Header1': row['a'], 'Header2': row['b'], 'Header3': row['c']}) </code></pre> <p>There's also no reason to use string formatting to define a <code>dict</code>.</p>
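A self-contained sketch of that pattern (Python 3 here; the file name and row data are invented for the demo), confirming the header appears exactly once when the file is opened a single time:

```python
import csv
import os
import tempfile

def write_rows(path, rows):
    # Open once in "w" mode: the header is written exactly once,
    # followed by every data row.
    with open(path, "w", newline="") as csvfile:
        fieldnames = ["Header1", "Header2", "Header3"]
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

path = os.path.join(tempfile.gettempdir(), "names_demo.csv")
write_rows(path, [
    {"Header1": "a1", "Header2": "b1", "Header3": "c1"},
    {"Header1": "a2", "Header2": "b2", "Header3": "c2"},
])

with open(path, newline="") as f:
    lines = [line.strip() for line in f if line.strip()]

print(lines[0])  # Header1,Header2,Header3
```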
0
2016-08-09T21:49:22Z
[ "python", "csv" ]
Multiple kwargs in a function call?
38,860,676
<p>I have a simple function which is called like this:</p> <pre><code>arbitrary_function(**kwargs1, **kwargs2, **kwargs3) </code></pre> <p>It seems to compile fine on my local installation (python 3.5.1) but throws a SyntaxError when I compile it on a docker with python 3.4.5.</p> <p>I'm not too sure why this behavior is present. Are multiple kwargs not allowed? Should I combine them before passing to function? It is more convenient to pass them individually, for example:</p> <pre><code>plot(**x_axis_params, **y_axis_params, **plot_params) </code></pre> <p>instead of </p> <pre><code>params = dict() for specific_param in [x_axis_params, y_axis_params, plot_params]: params.update(specific_param) plot(**params) </code></pre>
4
2016-08-09T21:23:43Z
38,860,834
<p>That's a new feature introduced in Python 3.5. If you have to support Python 3.4, you're basically stuck with the <code>update</code> loop.</p> <p>People have their own favored variations on how to combine multiple dicts into one, but the only one that's really a major improvement over the <code>update</code> loop is 3.5+ exclusive, so it doesn't help with this. (For reference, the new dict-merging syntax is <code>{**kwargs1, **kwargs2, **kwargs3}</code>.)</p>
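A tiny self-contained sketch of the 3.4-compatible merge (the plot function and parameter names here are stand-ins, not matplotlib's API):

```python
def fake_plot(**kwargs):
    # Stand-in for the real call; it just returns what it received.
    return kwargs

x_axis_params = {"xlabel": "time"}
y_axis_params = {"ylabel": "volts"}
plot_params = {"color": "red"}

# Python 3.4-compatible merge: build one dict, then unpack it once.
params = {}
for specific in (x_axis_params, y_axis_params, plot_params):
    params.update(specific)

result = fake_plot(**params)
print(result == {"xlabel": "time", "ylabel": "volts", "color": "red"})  # True
```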
1
2016-08-09T21:35:58Z
[ "python" ]
Multiple kwargs in a function call?
38,860,676
<p>I have a simple function which is called like this:</p> <pre><code>arbitrary_function(**kwargs1, **kwargs2, **kwargs3) </code></pre> <p>It seems to compile fine on my local installation (python 3.5.1) but throws a SyntaxError when I compile it on a docker with python 3.4.5.</p> <p>I'm not too sure why this behavior is present. Are multiple kwargs not allowed? Should I combine them before passing to function? It is more convenient to pass them individually, for example:</p> <pre><code>plot(**x_axis_params, **y_axis_params, **plot_params) </code></pre> <p>instead of </p> <pre><code>params = dict() for specific_param in [x_axis_params, y_axis_params, plot_params]: params.update(specific_param) plot(**params) </code></pre>
4
2016-08-09T21:23:43Z
38,861,724
<p>One workaround <a href="https://www.python.org/dev/peps/pep-0448/#rationale" rel="nofollow">mentioned in the rationale for PEP448</a> (which introduced that Python feature) is to use <a href="https://docs.python.org/3/library/collections.html#collections.ChainMap" rel="nofollow"><code>collections.ChainMap</code></a>:</p> <pre><code>from collections import ChainMap plot(**ChainMap(x_axis_params, y_axis_params, plot_params)) </code></pre> <p><code>ChainMap</code> was introduced in Python 3.3, so it should work in your docker instance.</p>
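One caveat worth knowing: on duplicate keys, `ChainMap` gives the *first* mapping priority, which is the reverse of the 3.5+ `{**a, **b}` syntax where the *last* dict wins. A small sketch (parameter names invented for the demo):

```python
from collections import ChainMap

defaults = {"color": "blue", "width": 1}
overrides = {"color": "red"}

# In a ChainMap, the first mapping wins on duplicate keys,
# so put the overriding dict first.
merged = dict(ChainMap(overrides, defaults))
print(merged == {"color": "red", "width": 1})  # True

# The update-loop equivalent: later update() calls win,
# so the overriding dict goes last instead.
same = {}
for d in (defaults, overrides):
    same.update(d)
print(same == merged)  # True
```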
0
2016-08-09T22:58:28Z
[ "python" ]
What is Pandas doing here that my indexes [0] and [1] refer to the same value?
38,860,772
<p>I have a dataframe with these indices and values:</p> <pre><code>df[df.columns[0]] 1 example 2 example1 3 example2 </code></pre> <p>When I access df[df.columns[0]][2], I get "example1". Makes sense. That's how indices work. </p> <p>When I access df[df.columns[0]][0], however, I get "example", and I get "example" when I access df[df.columns[0]][1] as well. So for</p> <pre><code>df[df.columns[0]][0] df[df.columns[0]][1] </code></pre> <p>I get "example". </p> <p>Strangely, I can delete "row" 0, and the result is that 1 is deleted:</p> <pre><code>gf = df.drop(df.index[[0]]) gf exampleDF 2 example1 3 example2 </code></pre> <p>But when I delete row 1, then</p> <pre><code>2 example1 </code></pre> <p>is deleted, as opposed to example.</p> <p>This is a bit confusing to me; are there inconsistent standards in Pandas regarding row indices, or am I missing something / made an error? </p>
3
2016-08-09T21:31:02Z
38,860,909
<p>You are probably causing pandas to switch between <code>.iloc</code> (index based) and <code>.loc</code> (label based) indexing.</p> <p>All arrays in Python are 0 indexed, and I notice that the indexes in your DataFrame start from 1. So when you run <code>df[df.columns[0]][0]</code> pandas realizes that there is no index named 0, and falls back to <code>.iloc</code> which locates things by array indexing. Therefore it returns what it finds at the first location of the array, which is <code>'example'</code>.</p> <p>When you run <code>df[df.columns[0]][1]</code> however, pandas realizes that there is an index label 1, and uses <code>.loc</code> which returns what it finds at that label, which again happens to be <code>'example'</code>.</p> <p>When you delete the first row, your DataFrame does not have index labels 0 and 1. So when you go to locate elements at those places in the way you are, it does not return <code>None</code> to you, but instead falls back on array based indexing and returns elements from the 0th and 1st places in the array.</p> <p>To force pandas to use one of the two indexing techniques, use <code>.iloc</code> or <code>.loc</code>. <code>.loc</code> is label based, and will raise <code>KeyError</code> if you try <code>df[df.columns[0]].loc[0]</code>. <code>.iloc</code> is index based and will return <code>'example'</code> when you try <code>df[df.columns[0]].iloc[0]</code>.</p> <hr> <p>Additional note</p> <p>These commands are bad practice: <code>df[col_label].iloc[row_index]</code>; <code>df[col_label].loc[row_label]</code>.</p> <p>Please use <code>df.loc[row_label, col_label]</code>; or <code>df.iloc[row_index, col_index]</code>; or <code>df.ix[row_label_or_index, col_label_or_index]</code></p> <p>See <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing" rel="nofollow">Different Choices for Indexing</a> for more information.</p>
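A small sketch of the two indexers on a Series shaped like the one in the question (using `.loc`/`.iloc` explicitly — the bare `s[0]` fallback described above is version-dependent behaviour, so the explicit forms are the reliable way to see the difference):

```python
import pandas as pd

# Index labels start at 1, like the question's frame.
s = pd.Series(["example", "example1", "example2"], index=[1, 2, 3])

# Positional: the first element of the underlying array.
print(s.iloc[0])  # example

# Label-based: the element whose index *label* is 1.
print(s.loc[1])   # example

# Dropping label 1 shifts positions: .iloc[0] now lands on label 2.
t = s.drop(1)
print(t.iloc[0])  # example1
```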
5
2016-08-09T21:42:20Z
[ "python", "pandas" ]
Tkinter mysterious binding issue
38,860,828
<p>I have a bound key combination :</p> <pre><code>self.parent.bind_all('&lt;Control-n&gt;', self.next_marked) </code></pre> <p>It is supposed to take me to the next tag in a text widget whose parent is a frame.</p> <pre><code>def next_marked(self, skip=False): print (len(self.text.tag_ranges('definition'))) print('next_marked()') self.text.focus_set() print (self.text.index(INSERT)) next_tag = str(self.text.tag_nextrange('definition', 'insert+1c')[0]) print (self.text.index(INSERT)) spl = next_tag.split('.') line = int(spl[0]) col = int(spl[1]) self.text.mark_set('insert', '%d.%d' % ( line, col )) </code></pre> <p>It does this when I do not use the hotkey, however when I do use the hotkey, it always moves the position of the cursor down one line and then performs the function. Is this my operating system at work? (Windows 7) Any recommendation on how to handle this?</p> <p>I am using Python 2.7 and Tkinter 8.5</p>
2
2016-08-09T21:35:05Z
38,861,328
<p>The problem seems to be that <code>&lt;Control-n&gt;</code> is already bound to "go to next line" on the <code>Text</code> class, and if there are multiple bindings, <a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">they will all be executed, in a specific order</a>:</p> <blockquote> <p>Tkinter first calls the best binding on the instance level, then the best binding on the toplevel window level, then the best binding on the class level (which is often a standard binding), and finally the best available binding on the application level.</p> </blockquote> <p>So you could <em>either</em> overwrite the existing class-level binding of <code>&lt;Control-n&gt;</code> for all the <code>Text</code> widgets:</p> <pre><code>self.parent.bind_class("Text", '&lt;Control-n&gt;', lambda e: None) </code></pre> <p>Or bind your function to the instance (so it is scheduled before the class-level binding) and make it <code>return "break"</code> to cancel all subsequent bindings:</p> <pre><code>def next_marked(self, skip=False): ... return "break" self.text.bind('&lt;Control-n&gt;', self.next_marked) </code></pre> <p>Also, note that when used as a callback to <code>bind</code>, the first parameter (after <code>self</code>), i.e. <code>skip</code> in your case, will always be the <code>Event</code>.</p>
4
2016-08-09T22:18:52Z
[ "python", "python-2.7", "tkinter" ]
compatible Android AudioEncoder and pydub decoder
38,861,012
<p>I am recording audio in Android in mp3 format; to do that I am using <code>MPEG_4</code> as the output format and <code>AAC</code> as the audio encoder, and it does record the audio.</p> <p>The problem is that when I use that file for further processing in Python's pydub, it cannot decode the audio and gives me an error something like this:</p> <pre><code>CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1 </code></pre> <p>This happens even though I have ffmpeg installed. I have also tried a different audio encoder such as <code>AMR_NB</code>, but the problem is still the same: it couldn't decode the audio.</p> <p>Here is what I am doing in pydub:</p> <pre><code>sound = AudioSegment.from_mp3("test.mp3") da = np.fromstring(sound.raw_data, dtype=np.int16) </code></pre> <p>Does anyone have an idea what the proper AudioEncoder for recording mp3 audio could be?</p>
0
2016-08-09T21:50:33Z
38,918,790
<p>ffmpeg returning an error means something went wrong while ffmpeg was running (so it's not an issue with finding the ffmpeg executable).</p> <p>pydub creates temporary files and passes the paths to those files to ffmpeg, so it's possible that some kind of file system restriction is the reason.</p> <p>You can <a href="https://github.com/jiaaro/pydub#debugging" rel="nofollow">enable logging as described in the docs</a> to see the ffmpeg call and try to reproduce the error in a terminal.</p>
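For reference, a sketch of turning that logging on — the logger name `"pydub.converter"` is the one given in the pydub README's debugging section:

```python
import logging

# pydub logs the exact ffmpeg command line it runs on this logger,
# so you can copy/paste the command into a terminal to reproduce
# the failure outside of Python.
converter_logger = logging.getLogger("pydub.converter")
converter_logger.setLevel(logging.DEBUG)
converter_logger.addHandler(logging.StreamHandler())
```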
0
2016-08-12T13:07:31Z
[ "android", "python", "audio", "ffmpeg", "pydub" ]
How can I test the standard input and standard output in Python Script with a Unittest test?
38,861,101
<p>I'm trying to test a Python script (2.7) where I work with the standard input (read with raw_input() and written with a simple print) but I can't find how to do this, and I'm sure that this issue is very simple.</p> <p>This is a very, very condensed version of my script:</p> <pre><code>def example(): number = raw_input() print number if __name__ == '__main__': example() </code></pre> <p>I want to write a unittest test to check this, but I can't find how. I've tried StringIO and other things but I haven't found the solution to do this really simple thing.</p> <p>Does somebody have an idea?</p> <p>PS: Of course in the real script I use data blocks with several lines and other kinds of data.</p> <p>Thank you so much.</p> <p><strong>EDIT:</strong></p> <p>Thank you so much for the first really specific answer, it works perfectly. I only had a little problem importing <code>StringIO</code>, because I was doing import StringIO and I needed to import it like <code>from StringIO import StringIO</code> (I don't really understand why), but be that as it may, it works.</p> <p>But I've found another problem using this approach. In my project I need to test scripts this way (which works perfectly thanks to your support), but I want to do this: I have a file with a lot of tests to run over a script, so I open the file and read blocks of info with their result blocks, and I would like the code to be able to process a block, check its result, and then do the same with the next one and the next...</p> <p>Something like this:</p> <pre><code>class Test(unittest.TestCase): ... #open file and process saving data like datablocks and results ... allTest = True for test in tests: stub_stdin(self, test.dataBlock) stub_stdouts(self) runScrip() if sys.stdout.getvalue() != test.expectResult: allTest = False self.assertEqual(allTest, True) </code></pre> <p>I know that maybe the unittest doesn't make sense as written, but you can get an idea of what I want. So, this approach fails and I don't know why.</p>
3
2016-08-09T21:59:04Z
38,861,365
<p>Typical techniques involve mocking the standard <code>sys.stdin</code> and <code>sys.stdout</code> with your desired items. If you do not care about Python 3 compatibility you can just use the <code>StringIO</code> module; however, if you are forward-thinking and willing to restrict yourself to Python 2.7 and 3.3+, supporting both Python 2 and 3 this way becomes possible without too much work through the <a href="https://docs.python.org/library/io.html" rel="nofollow"><code>io</code></a> module (it requires a bit of modification, but put that thought on hold for now).</p> <p>Assuming you already have a <code>unittest.TestCase</code> going, you can create a utility function (or method in the same class) that will replace <code>sys.stdin</code>/<code>sys.stdout</code> as outlined. First the imports:</p> <pre><code>import sys import io import unittest </code></pre> <p>In one of my recent projects I've done this for stdin, where it takes a <code>str</code> for the inputs that the user (or another program through pipes) will enter into your program as stdin:</p> <pre><code>def stub_stdin(testcase_inst, inputs): stdin = sys.stdin def cleanup(): sys.stdin = stdin testcase_inst.addCleanup(cleanup) sys.stdin = StringIO(inputs) </code></pre> <p>As for stdout and stderr:</p> <pre><code>def stub_stdouts(testcase_inst): stderr = sys.stderr stdout = sys.stdout def cleanup(): sys.stderr = stderr sys.stdout = stdout testcase_inst.addCleanup(cleanup) sys.stderr = StringIO() sys.stdout = StringIO() </code></pre> <p>Note that in both cases, the function accepts a testcase instance, and calls its <a href="https://docs.python.org/library/unittest.html#unittest.TestCase.addCleanup" rel="nofollow"><code>addCleanup</code></a> method to register a <code>cleanup</code> call that resets the streams back to what they were once the test method concludes. 
The effect is that for the duration from when this was invoked in the test case until the end, <code>sys.stdout</code> and friends will be replaced with the <code>io.StringIO</code> version, meaning you can check its value easily, and don't have to worry about leaving a mess behind.</p> <p>Better to show this as an example. To use this, you can simply create a test case like so:</p> <pre><code>class ExampleTestCase(unittest.TestCase): def test_example(self): stub_stdin(self, '42') stub_stdouts(self) example() self.assertEqual(sys.stdout.getvalue(), '42\n') </code></pre> <p>Now, in Python 2, this test will only pass if the <code>StringIO</code> class is from the <code>StringIO</code> module, and in Python 3 no such module exists. What you can do is use the version from the <code>io</code> module with a modification that makes it slightly more lenient in terms of what input it accepts, so that the unicode encoding/decoding will be done automatically rather than triggering an exception (such as <code>print</code> statements in Python 2 will not work nicely without the following). 
I typically do this for cross compatibility between Python 2 and 3:</p> <pre><code>class StringIO(io.StringIO): """ A "safely" wrapped version """ def __init__(self, value=''): value = value.encode('utf8', 'backslashreplace').decode('utf8') io.StringIO.__init__(self, value) def write(self, msg): io.StringIO.write(self, msg.encode( 'utf8', 'backslashreplace').decode('utf8')) </code></pre> <p>Now plug your example function plus every code fragment in this answer into one file, you will get your self contained unittest that works in both Python 2 and 3 (although you need to call <code>print</code> as a function in Python 3) for doing testing against stdio.</p> <p>One more note: you can always put the <code>stub_</code> function calls in the <code>setUp</code> method of the <code>TestCase</code> if every single test method requires that.</p> <p>Of course, if you want to use various mocks related libraries out there to stub out stdin/stdout, you are free to do so, but this way relies on no external dependencies if this is your goal.</p> <hr> <p>For your second issue, test cases have to be written in a certain way, where they must be encapsulated within a method and not at the class level, your original example will fail. 
However you might want to do something like this:</p> <pre><code>class Test(unittest.TestCase): def helper(self, data, answer, runner): stub_stdin(self, data) stub_stdouts(self) runner() self.assertEqual(sys.stdout.getvalue(), answer) self.doCleanups() # optional, see comments below def test_various_inputs(self): data_and_answers = [ ('hello', 'HELLOhello'), ('goodbye', 'GOODBYEgoodbye'), ] runScript = upperlower # the function I want to test for data, answer in data_and_answers: self.helper(data, answer, runScript) </code></pre> <p>The reason why you might want to call <code>doCleanups</code> is to prevent the cleanup stack from getting as deep as all the <code>data_and_answers</code> pairs are there, but that will pop everything off the cleanup stack so if you had any other things that need to be cleaned up at the end this might end up being problematic - you are free to leave that there as all of the stdio related objects will be restored at the end in the same order, so the real one will always be there. Now the function I wanted to test:</p> <pre><code>def upperlower(): raw = raw_input() print (raw.upper() + raw), </code></pre> <p>So yes, a bit of explanation for what I did might help: remember within a <code>TestCase</code> class, the framework relies strictly on the instance's <code>assertEqual</code> and friends for it to function. So to ensure testing being done at the right level you really want to call those asserts all the time so that helpful error messages will be shown at the moment the error occurred with the inputs/answers that didn't quite show up right, rather than until the very end like what you did with the for loop (that will tell you something was wrong, but not exactly where out of the hundreds and now you are mad). Also the <code>helper</code> method - you can call it anything you want, as long as it doesn't start with <code>test</code> because then the framework will try to run it as one and it will fail terribly. 
So just follow this convention and you can basically have templates within your test case to run your test with - you can then use it in a loop with a bunch of inputs/outputs like what I did.</p> <p>As for your other question:</p> <blockquote> <p>only I've had a little problem importing StringIO, because I was doing import StringIO and I needed to import like from StringIO import StringIO (I don't understand really why), but be that as It may, it works.</p> </blockquote> <p>Well, if you look at my original code I did show you how did <code>import io</code> and then overrode the <code>io.StringIO</code> class by defining <code>class StringIO(io.StringIO)</code>. However it works for you because you are doing this strictly from Python 2, whereas I do try to target my answers to Python 3 whenever possible given that Python 2 will (probably definitely this time) not be supported in less than 5 years. Think of the future users that might be reading this post who had similar problem as you. Anyway, yes, the original <code>from StringIO import StringIO</code> works, as that's the <code>StringIO</code> class from the <code>StringIO</code> module. Though <code>from cStringIO import StringIO</code> should work as that imports the <code>C</code> version of the <code>StringIO</code> module. It works because they all offer close enough interfaces, and so they will basically work as intended (until of course you try to run this under Python 3).</p> <p>Again, putting all this together along with my code should result in a <a href="https://gist.github.com/metatoaster/64139971b53ad728dba636e34b8a5558" rel="nofollow">self-contained working test script</a>. 
Do remember to look at documentation and follow the form of the code, and not invent your own syntax and hoping things to work (and as for exactly why your code didn't work, because the "test" code was defined at where the class was being constructed, so all of that was executed while Python was importing your module, and since none of the things that are needed for the test to run are even available (namely the class itself doesn't even exist yet), the whole thing just dies in fits of twitching agony). Asking questions here help too, even though the issue you face is something really common, not having a quick and simple name to search for your exact problem does make it difficult to figure out where you went wrong, I supposed? :) Anyway good luck, and good on you for taking the effort to test your code.</p> <hr> <p>There are other methods, but given that the other questions/answers I looked at here at SO doesn't seem to help, I hope this one this. Other ones for reference:</p> <ul> <li><a href="http://stackoverflow.com/questions/2617057/how-to-supply-stdin-files-and-environment-variable-inputs-to-python-unit-tests">How to supply stdin, files and environment variable inputs to Python unit tests?</a></li> <li><a href="http://stackoverflow.com/questions/21046717/python-mocking-raw-input-in-unittests">python mocking raw input in unittests</a></li> </ul> <p>Naturally, it bares repeating that all of this <em>can</em> be done using <a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow"><code>unittest.mock</code></a> available in Python 3.3+ or the <a href="https://pypi.python.org/pypi/mock" rel="nofollow">original/rolling backport version on pypi</a>, but given that those libraries hides some of the intricacies on what actually happens, they may end up hiding some of the details on what actually happens (or need to happen) or how the redirection actually happens. 
If you want, you can read up on <a href="https://docs.python.org/3/library/unittest.mock.html#patch" rel="nofollow"><code>unittest.mock.patch</code></a> and go down slightly to the <code>StringIO</code> patching <code>sys.stdout</code> section.</p>
3
2016-08-09T22:21:38Z
[ "python", "io", "python-unittest" ]
Regular expressions in python to match Twitter handles
38,861,170
<p>I'm trying to use regular expressions to capture all Twitter handles within a tweet body. The challenge is that I'm trying to get handles that</p> <ol> <li>Contain a specific string</li> <li>Are of unknown length</li> <li>May be followed by either <ul> <li>punctuation</li> <li>whitespace</li> <li>or the end of string.</li> </ul></li> </ol> <p>For example, for each of these strings, I've marked <em>in italics</em> what I'd like to return.</p> <blockquote> <p>"@handle what is your problem?" <em>[RETURN '@handle']</em></p> <p>"what is your problem @handle?" <em>[RETURN '@handle']</em></p> <p>"@123handle what is your problem @handle123?" <em>[RETURN '@123handle', '@handle123']</em></p> </blockquote> <p>This is what I have so far:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.findall(r'(@.*handle.*?)\W','hi @123handle, hello @handle123') ['@123handle'] # This misses the handles that are followed by end-of-string </code></pre> <p>I tried modifying to include an <code>or</code> character allowing the end-of-string character. Instead, it just returns the whole string.</p> <pre><code>&gt;&gt;&gt; re.findall(r'(@.*handle.*?)(?=\W|$)','hi @123handle, hello @handle123') ['@123handle, hello @handle123'] # This looks like it is too greedy and ends up returning too much </code></pre> <p>How can I write an expression that will satisfy both conditions?</p> <p>I've looked at a <a href="http://stackoverflow.com/questions/16932012/regex-how-to-match-any-string-until-whitespace-or-until-punctuation-followed-b">couple</a> <a href="http://stackoverflow.com/questions/6713310/how-to-specify-space-or-end-of-string-and-space-or-start-of-string">other</a> places, but am still stuck. </p>
1
2016-08-09T22:05:25Z
38,861,225
<p>It seems you are trying to match strings starting with <code>@</code>, then having 0+ word chars, then <code>handle</code>, and then again 0+ word chars.</p> <p>Use</p> <pre><code>r'@\w*handle\w*' </code></pre> <p>or - to avoid matching <code>@</code>+word chars in emails:</p> <pre><code>r'\B@\w*handle\w*' </code></pre> <p>See the <a href="https://regex101.com/r/jW4xL1/1" rel="nofollow">Regex 1 demo</a> and the <a href="https://regex101.com/r/jW4xL1/2" rel="nofollow">Regex 2 demo</a> (the <code>\B</code> non-word boundary requires a non-word char or start of string to be right before the <code>@</code>).</p> <p>Note that the <code>.*</code> is a greedy dot matching pattern that matches any characters other than newline, as many as possible. <code>\w*</code> only matches 0+ characters (also as many as possible) but from the <code>[a-zA-Z0-9_]</code> set if the <code>re.UNICODE</code> flag is not used (and it is not used in your code).</p> <p><a href="http://ideone.com/T1bZx4" rel="nofollow">Python demo</a>:</p> <pre><code>import re p = re.compile(r'@\w*handle\w*') test_str = "@handle what is your problem?\nwhat is your problem @handle?\n@123handle what is your problem @handle123?\n" print(p.findall(test_str)) # =&gt; ['@handle', '@handle', '@123handle', '@handle123'] </code></pre>
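A quick self-contained check of that pattern against the question's strings, plus an e-mail address to show what the `\B` is buying you:

```python
import re

pattern = re.compile(r'\B@\w*handle\w*')

# Both handles are found; the trailing comma and end-of-string
# both terminate the match naturally because \w stops there.
print(pattern.findall("hi @123handle, hello @handle123"))
# ['@123handle', '@handle123']

# A word character right before the @ makes \B fail, so the
# domain part of an e-mail address is not picked up:
print(pattern.findall("mail me at user@handle.com"))
# []
```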
2
2016-08-09T22:10:36Z
[ "python", "regex", "twitter" ]
Regular expressions in python to match Twitter handles
38,861,170
<p>I'm trying to use regular expressions to capture all Twitter handles within a tweet body. The challenge is that I'm trying to get handles that</p> <ol> <li>Contain a specific string</li> <li>Are of unknown length</li> <li>May be followed by either <ul> <li>punctuation</li> <li>whitespace</li> <li>or the end of string.</li> </ul></li> </ol> <p>For example, for each of these strings, I've marked <em>in italics</em> what I'd like to return.</p> <blockquote> <p>"@handle what is your problem?" <em>[RETURN '@handle']</em></p> <p>"what is your problem @handle?" <em>[RETURN '@handle']</em></p> <p>"@123handle what is your problem @handle123?" <em>[RETURN '@123handle', '@handle123']</em></p> </blockquote> <p>This is what I have so far:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.findall(r'(@.*handle.*?)\W','hi @123handle, hello @handle123') ['@123handle'] # This misses the handles that are followed by end-of-string </code></pre> <p>I tried modifying to include an <code>or</code> character allowing the end-of-string character. Instead, it just returns the whole string.</p> <pre><code>&gt;&gt;&gt; re.findall(r'(@.*handle.*?)(?=\W|$)','hi @123handle, hello @handle123') ['@123handle, hello @handle123'] # This looks like it is too greedy and ends up returning too much </code></pre> <p>How can I write an expression that will satisfy both conditions?</p> <p>I've looked at a <a href="http://stackoverflow.com/questions/16932012/regex-how-to-match-any-string-until-whitespace-or-until-punctuation-followed-b">couple</a> <a href="http://stackoverflow.com/questions/6713310/how-to-specify-space-or-end-of-string-and-space-or-start-of-string">other</a> places, but am still stuck. </p>
1
2016-08-09T22:05:25Z
38,861,247
<p>Matches only handles that contain this range of characters -> <code>/[a-zA-Z0-9_]/</code>.</p> <pre><code>s = "@123handle what is your problem @handle123?" print re.findall(r'\B(@[\w\d_]+)', s) &gt;&gt;&gt; ['@123handle', '@handle123'] s = '@The quick brown fox@jumped over the LAAZY @_dog.' &gt;&gt;&gt; ['@The', '@_dog'] </code></pre>
1
2016-08-09T22:11:58Z
[ "python", "regex", "twitter" ]
Find the index of certain values in a data frame and put it as a separate column
38,861,180
<p>In the following data frame DF, users have different numbers of values in the Movie and Exist columns. For example, user 2 has 10 values and user 5 has 9 values. I want the position of the first 'True' value in the Exist column (relative to the start of that user's vector), divided by the user's vector length, to be put in a separate data frame along with the User ID. Imagine this is the data frame:</p> <pre><code> User Movie Exist 0 2 172 False 1 2 2717 False 2 2 150 False 3 2 2700 False 4 2 2699 True 5 2 2616 False 6 2 112 False 7 2 2571 True 8 2 2657 True 9 2 2561 False 10 5 3471 False 11 5 187 False 12 5 2985 False 13 5 3388 False 14 5 3418 False 15 5 32 False 16 5 1673 False 17 5 3740 True 18 5 1693 False </code></pre> <p>So the target data frame should look like this:</p> <pre><code>5/10 =0.5 8/9= 0.88 User Location 2 0.5 5 0.88 </code></pre> <p>The first True value for user 2 is at relative index 5 (the 5th value in user 2's vector) and the first True value for user 5 is at relative index 8 (the 8th value in user 5's vector). Note that I don't want the real indices, which are 4 and 17.</p>
2
2016-08-09T22:06:07Z
38,861,235
<p><strong><em>Option 1</em></strong></p> <pre><code>def first_ratio(x): x = x.reset_index(drop=True) i = x.any() * (x.idxmax() + 1.) l = len(x) return i / l df.groupby('User').Exist.apply(first_ratio).rename('Location').to_frame() User 2 0.500000 5 0.888889 Name: Exist, dtype: float64 </code></pre> <p><strong><em>Option 2</em></strong></p> <pre><code>def first_ratio(x): v = x.values i = v.any() * (v.argmax() + 1.) l = v.shape[0] return i / l df.groupby('User').Exist.apply(first_ratio).rename('Location').to_frame() </code></pre> <hr> <h3>Timing</h3> <p><a href="http://i.stack.imgur.com/mVncE.png" rel="nofollow"><img src="http://i.stack.imgur.com/mVncE.png" alt="enter image description here"></a></p>
3
2016-08-09T22:10:57Z
[ "python", "pandas", "dataframe" ]
Comparing two CSV files and searching for similar items
38,861,232
<p>I'm still new to Python and I'm trying to adapt this code to work for me from <a href="http://stackoverflow.com/questions/5268929/python-comparing-two-csv-files-and-searching-for-similar-items">this post</a>. </p> <p>The difference between that post and what I'm looking for is that I am looking to concatenate the entire contents of the matching rows from both hosts.csv and masterlist.csv when a matching 'signature' is found in both files. </p> <p>So if hosts.csv looked like this:</p> <pre><code>Path Filename Size Signature C:\ a.txt 14kb 012345 D:\ b.txt 99kb 678910 C:\ c.txt 44kb 111213 </code></pre> <p>And masterlist.csv looked like this:</p> <pre><code>Signature Name State 012345 Joe CT 567890 Sue MA 111222 Dan MD </code></pre> <p>Tinkering with the code posted by Martijn Pieters in his response to Serk's post gets me most of the way there. </p> <pre><code>import time, csv timestr = time.strftime("%Y%m%d_%H%M") outputfile = "Results_" + (timestr) + ".csv" with open('masterlist.csv', 'rb') as master: master_indices = dict((r[0], i) for i, r in enumerate(csv.reader(master))) with open('hosts.csv', 'rb') as hosts: with open('results.csv', 'wb') as results: reader = csv.reader(hosts) writer = csv.writer(results) writer.writerow(next(reader, []) + ['RESULTS']) for row in reader: index = master_indices.get(row[3]) if index is not None: message = 'FOUND in (row {})'.format(index) else: message = 'NOT FOUND' writer.writerow(row + [message]) </code></pre> <p>Instead of just adding the RESULTS column as Serk was looking for that indicates the matching signature, how can I pull in the corresponding rows from the masterlist.csv and hosts.csv files and concatenate the two together in the results.csv file? The desired output file would look like this:</p> <pre><code>Path Filename Size RESULTS Signature Name State C:\ a.txt 14kb FOUND in Row 1 012345 Joe CT D:\ b.txt 99kb FOUND in Row 2 678910 Sue MA C:\ c.txt 44kb NOT FOUND 111213 </code></pre> <p>Thanks in advance, responses on here have already helped me out with most of the solutions I have been looking for!</p>
2
2016-08-09T22:10:54Z
38,861,450
<p>Use pandas.read_csv and merge on the "Signature" column: </p> <pre><code>import pandas as pd hosts_df = pd.read_csv("hosts.csv") masterlist_df = pd.read_csv("masterlist.csv") results = masterlist_df.merge(hosts_df, on="Signature", how="outer") results.to_csv("results.csv") </code></pre>
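The same idea without the intermediate files, building the frames in memory (column names and values assumed from the question's sample data):

```python
import pandas as pd

hosts = pd.DataFrame({
    'Path': ['C:\\', 'D:\\', 'C:\\'],
    'Filename': ['a.txt', 'b.txt', 'c.txt'],
    'Size': ['14kb', '99kb', '44kb'],
    'Signature': ['012345', '678910', '111213'],
})
masterlist = pd.DataFrame({
    'Signature': ['012345', '567890', '111222'],
    'Name': ['Joe', 'Sue', 'Dan'],
    'State': ['CT', 'MA', 'MD'],
})

# how='outer' keeps rows from both sides even when the signature
# appears in only one file; unmatched cells come back as NaN
results = hosts.merge(masterlist, on='Signature', how='outer')

# rows that matched in both files have neither side NaN
matched = results.dropna(subset=['Filename', 'Name'])
```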
2
2016-08-09T22:30:09Z
[ "python", "csv", "concatenation" ]
How to join integers intervals in python?
38,861,290
<p>I have used the module intervals (<a href="http://pyinterval.readthedocs.io/en/latest/index.html" rel="nofollow">http://pyinterval.readthedocs.io/en/latest/index.html</a>)</p> <p>And created an interval from a set of (start, end) tuples:</p> <pre><code>intervals = interval.interval([1,8], [7,10], [15,20]) </code></pre> <p>Which results in interval([1.0, 10.0], [15.0, 20.0]), as [1,8] and [7,10] overlap.</p> <p>But this module interprets the values of the pairs as real numbers, so two contiguous integer intervals will not be joined together. </p> <p>Example:</p> <pre><code>intervals = interval.interval([1,8], [9,10], [11,20]) </code></pre> <p>results in: interval([1.0, 8.0], [9.0, 10.0], [11.0, 20.0])</p> <p>My question is how can I join these intervals as integers and not as real numbers? And in the last example the result would be interval([1.0, 20.0])</p>
2
2016-08-09T22:15:38Z
38,861,520
<p>I came up with the following program:</p> <pre><code>ls = [[1,8], [7,10], [15,20]] ls2 = [] prevList = ls[0] for lists in ls[1:]: if lists[0] &lt;= prevList[1]+1: prevList = [prevList[0], max(prevList[1], lists[1])] else: ls2.append(prevList) prevList = lists ls2.append(prevList) print ls2 # prints [[1, 10], [15, 20]] </code></pre> <p>It iterates through the intervals (assumed to be sorted by start) and checks whether each one starts at or before the previous interval's end + 1. If so, it merges the two.</p>
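A variant that sorts first, so the input order doesn't matter (the loop above assumes the intervals already come sorted by start):

```python
def merge_int_intervals(intervals):
    # sort by start, then fold overlapping or adjacent integer intervals together;
    # "adjacent" means the next start is at most previous end + 1
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

joined = merge_int_intervals([[1, 8], [9, 10], [11, 20]])
overlapping = merge_int_intervals([[15, 20], [1, 8], [7, 10]])
```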
0
2016-08-09T22:36:30Z
[ "python", "intervals" ]
How to join integers intervals in python?
38,861,290
<p>I have used the module intervals (<a href="http://pyinterval.readthedocs.io/en/latest/index.html" rel="nofollow">http://pyinterval.readthedocs.io/en/latest/index.html</a>)</p> <p>And created an interval from a set of (start, end) tuples:</p> <pre><code>intervals = interval.interval([1,8], [7,10], [15,20]) </code></pre> <p>Which results in interval([1.0, 10.0], [15.0, 20.0]), as [1,8] and [7,10] overlap.</p> <p>But this module interprets the values of the pairs as real numbers, so two contiguous integer intervals will not be joined together. </p> <p>Example:</p> <pre><code>intervals = interval.interval([1,8], [9,10], [11,20]) </code></pre> <p>results in: interval([1.0, 8.0], [9.0, 10.0], [11.0, 20.0])</p> <p>My question is how can I join these intervals as integers and not as real numbers? And in the last example the result would be interval([1.0, 20.0])</p>
2
2016-08-09T22:15:38Z
38,861,631
<p>The intervals module <a href="http://pyinterval.readthedocs.io/en/latest/index.html" rel="nofollow">pyinterval</a> is used for real numbers, not for integers. If you want to use objects, you can create an integer interval class, or you can code a helper that joins integer intervals using the interval module:</p> <pre><code>def join_int_intervals(int1, int2): if int(int1[-1][-1])+1 &gt;= int(int2[-1][0]): return interval.interval([int1[-1][0], int2[-1][-1]]) else: return interval.interval() </code></pre>
1
2016-08-09T22:48:14Z
[ "python", "intervals" ]
Pandas : in case of duplicate values, remove the row with a particular value in another column
38,861,323
<p>I have a dataset :</p> <pre><code>id url keep_anyway field 1 A.com Yes X 2 A.com Yes Y 3 B.com No Y 4 B.com No X 5 C.com No X </code></pre> <p>I want to <strong>remove "url" duplicates with conditions</strong> :</p> <ol> <li>Keep duplicates if "keep_anyway" = "Yes".</li> <li>For duplicates with "keep_anyway" = "No", I want to keep the row with "X" value in "field" column.</li> </ol> <p>Expected output is :</p> <pre><code>id url keep_anyway field 1 A.com Yes X 2 A.com Yes Y 4 B.com No X 5 C.com No X </code></pre> <p>I have been able to manage condition 1 with :</p> <pre><code>df.loc[(df['keep_anyway'] =='Yes') | ~df['url'].duplicated()] </code></pre> <p>But how to set up Condition 2 ?</p> <p>Note that possible values of "field" column are either X or Y, and if I have duplicates, I know FOR SURE that I have one "X" and one "Y" value.</p> <p>I thought maybe I could sort from A to Z in "field" column then pass <code>keep='first'</code> to df.duplicated, but I think it is deprecated, isn't it ?</p>
0
2016-08-09T22:18:32Z
38,861,379
<p>Try this:</p> <pre><code>import numpy as np duplicates = df.duplicated(subset='url', keep=False) # marks every occurrence of a duplicated url keep_anyway_bool = df['keep_anyway'] == 'Yes' # (credit @acushner for pointing this out) field_bool = df['field'] == 'X' # (credit @acushner for pointing this out) df[np.invert(duplicates) | keep_anyway_bool | field_bool] </code></pre>
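A runnable check of the keep-rule on the question's sample data. Note `duplicated(subset='url', keep=False)` marks *every* occurrence of a duplicated url, so non-duplicated rows survive regardless of their field value while the unwanted "No"/"Y" duplicate rows drop:

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'url': ['A.com', 'A.com', 'B.com', 'B.com', 'C.com'],
    'keep_anyway': ['Yes', 'Yes', 'No', 'No', 'No'],
    'field': ['X', 'Y', 'Y', 'X', 'X'],
})

# True for every row whose url appears more than once
dup_all = df.duplicated(subset='url', keep=False)

# keep a row if it is unique, marked keep_anyway, or the X member of a pair
keep = ~dup_all | (df['keep_anyway'] == 'Yes') | (df['field'] == 'X')
filtered = df[keep]
```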
1
2016-08-09T22:22:58Z
[ "python", "pandas" ]
Converting string into datetime: ValueError in python
38,861,395
<p>I checked many <strong>StackOverflow</strong> questions, but can't solve this problem...</p> <pre><code>import pandas as pd from datetime import datetime import csv username = input("enter name: ") with open('../data/%s_tweets.csv' % (username), 'rU') as f: reader = csv.reader(f) your_list = list(reader) for x in your_list: date = x[1] # is the date index dateOb = datetime.strptime(date, '%Y-%m-%d %H:%M:%S') # i also used "%d-%m-%Y %H:%M:%S" format # i also used "%d-%m-%Y %I:%M:%S" format # i also used "%d-%m-%Y %I:%M:%S%p" format # but the same error shows for every format print(dateOb) </code></pre> <p>I am getting the error</p> <pre><code>ValueError: time data 'date' does not match format '%d-%m-%Y %I:%M:%S' </code></pre> <p><a href="http://i.stack.imgur.com/n0Q41.png" rel="nofollow">in my csv file</a> </p>
0
2016-08-09T22:24:48Z
39,178,372
<p><code>ValueError: time data 'date' does not match format '%d-%m-%Y %I:%M:%S'</code></p> <p><em>'date'</em> is not a date string, which is why Python cannot convert it to a datetime. Checking my .csv file, I found that the first line of the date column is not a date string at all; it is the column header. I removed the first line of my CSV file, and then it works in <em>Python 3.5.1</em>. But the same problem still occurs in <em>Python 2.7</em>.</p>
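In other words, skip the header row before parsing, e.g. with `next(reader)` (the file contents are simulated here with `io.StringIO` and hypothetical rows):

```python
import csv
import io
from datetime import datetime

# simulate a CSV whose first line is the column-head line, not a date
f = io.StringIO("tweet,date\nhello,2016-08-09 22:24:48\nworld,2016-08-10 01:02:03\n")

reader = csv.reader(f)
header = next(reader)  # consume the header so strptime never sees the word 'date'
parsed = [datetime.strptime(row[1], '%Y-%m-%d %H:%M:%S') for row in reader]
```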
0
2016-08-27T06:56:01Z
[ "python", "python-2.7", "datetime" ]
What exactly did I do wrong with this script?
38,861,407
<p>I am trying to run this script</p> <pre><code>from schoolclass import School import elasticsearch import elasticsearch_dsl as srch import mysql.connector as mdb es = elasticsearch.Elasticsearch() cnx = mdb.connect(user= 'root', password= '*****', host= '127.0.0.1', database= 'sync-helper') cursor = cnx.cursor(), query = "SELECT Zip FROM school" cursor.execute(query) schools = list(cursor.fetchall()) zips = [] for z in schools: zips.append(str(z[0]) school = School(3, "Crystal", "Hillsborough", 94010) print school.search(zips) </code></pre> <p>but python is saying that there is a syntax error, highlighting the school variable where it is being defined. How do I fix the syntax?</p>
-2
2016-08-09T22:26:15Z
38,861,438
<p>A <code>SyntaxError</code> means that somewhere in your code a statement hasn't been constructed properly. In your case, <code>zips.append(str(z[0])</code> is missing a closing parenthesis. A good place to look when you have a <code>SyntaxError</code> is the line before the one indicated. (Note that <code>cursor = cnx.cursor(),</code> also has a stray trailing comma, which makes <code>cursor</code> a tuple and will cause a different error once the <code>SyntaxError</code> is fixed.)</p>
1
2016-08-09T22:29:24Z
[ "python", "python-2.7" ]
Pandas: Multi-index apply function between column and index
38,861,415
<p>I have a multi-index dataframe that looks like this:</p> <pre><code>In[13]: df Out[13]: Last Trade Date Ticker 1983-03-30 CLM83 1983-05-18 CLN83 1983-06-17 CLQ83 1983-07-18 CLU83 1983-08-19 CLV83 1983-09-16 CLX83 1983-10-18 CLZ83 1983-11-18 1983-04-04 CLM83 1983-05-18 CLN83 1983-06-17 CLQ83 1983-07-18 CLU83 1983-08-19 CLV83 1983-09-16 CLX83 1983-10-18 CLZ83 1983-11-18 </code></pre> <p>With two levels for indexes (namely 'Date' and 'Ticker'). I would like to apply a function to the column 'Last Trade' that would let me know how many months separate this 'Last Trade' date from the index 'Date'. I found a function that does the calculation:</p> <pre><code>from calendar import monthrange def monthdelta(d1, d2): delta = 0 while True: mdays = monthrange(d1.year, d1.month)[1] d1 += datetime.timedelta(days=mdays) if d1 &lt;= d2: delta += 1 else: break return delta </code></pre> <p>I tried to apply the following function h, but it raises an AttributeError: 'Timestamp' object has no attribute 'index':</p> <pre><code>In[14]: h = lambda x: monthdelta(x.index.get_level_values(0),x) In[15]: df['Last Trade'] = df['Last Trade'].apply(h) </code></pre> <p>How can I apply a function that would use both a column and an index value?</p> <p>Thank you for your tips,</p>
2
2016-08-09T22:26:45Z
38,861,500
<p>Use <code>df.index.to_series().str.get(0)</code> to get at first level of index.</p> <pre><code>(df['Last Trade'].dt.month - df.index.to_series().str.get(0).dt.month) + \ (df['Last Trade'].dt.year - df.index.to_series().str.get(0).dt.year) * 12 Date Ticker 1983-03-30 CLM83 2 CLN83 3 CLQ83 4 CLU83 5 CLV83 6 CLX83 7 CLZ83 8 1983-04-04 CLM83 1 CLN83 2 CLQ83 3 CLU83 4 CLV83 5 CLX83 6 CLZ83 7 dtype: int64 </code></pre> <hr> <h3>Timing</h3> <p><strong><em>Given <code>df</code></em></strong></p> <p><a href="http://i.stack.imgur.com/3SbcA.png" rel="nofollow"><img src="http://i.stack.imgur.com/3SbcA.png" alt="enter image description here"></a></p> <p><strong><em><code>pd.concat([df for _ in range(10000)])</code></em></strong></p> <p><a href="http://i.stack.imgur.com/qxL0K.png" rel="nofollow"><img src="http://i.stack.imgur.com/qxL0K.png" alt="enter image description here"></a></p>
3
2016-08-09T22:34:23Z
[ "python", "pandas", "apply", "attributeerror", "multi-index" ]
Pandas: Multi-index apply function between column and index
38,861,415
<p>I have a multi-index dataframe that looks like this:</p> <pre><code>In[13]: df Out[13]: Last Trade Date Ticker 1983-03-30 CLM83 1983-05-18 CLN83 1983-06-17 CLQ83 1983-07-18 CLU83 1983-08-19 CLV83 1983-09-16 CLX83 1983-10-18 CLZ83 1983-11-18 1983-04-04 CLM83 1983-05-18 CLN83 1983-06-17 CLQ83 1983-07-18 CLU83 1983-08-19 CLV83 1983-09-16 CLX83 1983-10-18 CLZ83 1983-11-18 </code></pre> <p>With two levels for indexes (namely 'Date' and 'Ticker'). I would like to apply a function to the column 'Last Trade' that would let me know how many months separate this 'Last Trade' date from the index 'Date'. I found a function that does the calculation:</p> <pre><code>from calendar import monthrange def monthdelta(d1, d2): delta = 0 while True: mdays = monthrange(d1.year, d1.month)[1] d1 += datetime.timedelta(days=mdays) if d1 &lt;= d2: delta += 1 else: break return delta </code></pre> <p>I tried to apply the following function h, but it raises an AttributeError: 'Timestamp' object has no attribute 'index':</p> <pre><code>In[14]: h = lambda x: monthdelta(x.index.get_level_values(0),x) In[15]: df['Last Trade'] = df['Last Trade'].apply(h) </code></pre> <p>How can I apply a function that would use both a column and an index value?</p> <p>Thank you for your tips,</p>
2
2016-08-09T22:26:45Z
38,861,514
<p>Try this instead of your function:</p> <h1>Option 1</h1> <h2>You get an integer number</h2> <pre><code>def monthdelta(row): trade = row['Last Trade'].year*12 + row['Last Trade'].month date = row['Date'].year*12 + row['Date'].month return trade - date df.reset_index().apply(monthdelta, axis=1) </code></pre> <hr> <p>Inspired by PiRsquared:</p> <pre><code>df = df.reset_index() (df['Last Trade'].dt.year*12 + df['Last Trade'].dt.month) -\ (df['Date'].dt.year*12 + df['Date'].dt.month) </code></pre> <h1>Option 2</h1> <h2>You get a <a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow"><code>numpy.timedelta64</code></a></h2> <p>Which can be directly used for other date computations. However, this will be in the form of days, not months, because the number of <a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html#datetime-and-timedelta-arithmetic" rel="nofollow">days in a month are not constant</a>.</p> <pre><code>def monthdelta(row): return row['Last Trade'] - row['Date'] df.reset_index().apply(monthdelta, axis=1) </code></pre> <hr> <p>Inspired by PiRsquared:</p> <pre><code>df = df.reset_index() df['Last Trade'] - df['Date'] </code></pre> <p>Option 2 will of course be faster, because it involves fewer computations. Pick what you like!</p> <hr> <p>To get your index back: <code>df = df.set_index(['Date', 'Ticker'])</code></p>
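For reference, the year*12 + month arithmetic used by both options works the same way as a plain function on stdlib dates (it counts calendar-month differences and ignores the day component):

```python
from datetime import date

def monthdelta(d1, d2):
    # whole calendar months between d1 and d2, ignoring the day of month
    return (d2.year * 12 + d2.month) - (d1.year * 12 + d1.month)

# values from the question's first index date and first two 'Last Trade' dates
delta = monthdelta(date(1983, 3, 30), date(1983, 5, 18))
```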
3
2016-08-09T22:35:58Z
[ "python", "pandas", "apply", "attributeerror", "multi-index" ]
Regex match inner '.'
38,861,419
<p>I would like to remove an inner sentence based on one word. So instead of just 'start' I would like the regex statement to return 'start.stop.'.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; s = 'start.stop.do nice.' &gt;&gt;&gt; re.sub(r'\..*nice.*', '', s) 'start' </code></pre>
1
2016-08-09T22:27:07Z
38,861,465
<p>You need a <a href="http://www.regular-expressions.info/charclass.html" rel="nofollow"><em>negated character class</em></a> instead of <code>.*</code> to avoid matching the dots in other sentences. And in order to preserve the last dot, you can use a <a href="http://www.regular-expressions.info/lookaround.html" rel="nofollow">positive lookahead</a> for it, so the regex engine doesn't consume that dot (it just checks that it is there).</p> <pre><code>&gt;&gt;&gt; re.sub(r'\.[^.]*nice[^.]*(?=\.)', '', s) 'start.stop.' </code></pre> <p>Another good example by @bfontaine:</p> <pre><code>&gt;&gt;&gt; s = "foo.bar.nice.qux" &gt;&gt;&gt; re.sub(r'\.[^.]*nice[^.]*(?=\.)', '', s) 'foo.bar.qux' </code></pre>
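Both examples, runnable end to end:

```python
import re

# [^.] keeps the match inside one dot-delimited segment;
# the lookahead (?=\.) asserts the closing dot without consuming it
pattern = r'\.[^.]*nice[^.]*(?=\.)'

first = re.sub(pattern, '', 'start.stop.do nice.')
second = re.sub(pattern, '', 'foo.bar.nice.qux')
```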
1
2016-08-09T22:31:05Z
[ "python", "regex", "string", "python-3.x" ]
Unix timestamp to iso 8601 time format
38,861,426
<blockquote> <p>When i convert unix time 1463288494 to isoformat i get 2016-05-14T22:01:34. How can I get the output including the -07:00. In this format 2016-05-14T22:01:34-07:00</p> </blockquote> <pre><code>from datetime import datetime t = int("1463288494") print(datetime.fromtimestamp(t).isoformat()) </code></pre>
0
2016-08-09T22:27:48Z
38,861,784
<p>You can pass a <code>tzinfo</code> instance representing your timezone offset to <code>fromtimestamp()</code>. The problem then is how to get the <code>tzinfo</code> object. The easiest way is to use the <a href="https://pypi.python.org/pypi/pytz" rel="nofollow"><code>pytz</code></a> module which provides a <code>tzinfo</code> compatible object:</p> <pre><code>import pytz from datetime import datetime tz = pytz.timezone('America/Los_Angeles') print(datetime.fromtimestamp(1463288494, tz).isoformat()) #2016-05-14T22:01:34-07:00 </code></pre>
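If a fixed offset is all you need and you're on Python 3, the stdlib `timezone` class avoids the pytz dependency entirely. This hard-codes -07:00 rather than handling DST transitions the way a named zone does:

```python
from datetime import datetime, timezone, timedelta

tz = timezone(timedelta(hours=-7))  # fixed UTC-7 offset; no DST rules
iso = datetime.fromtimestamp(1463288494, tz).isoformat()
```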
0
2016-08-09T23:03:50Z
[ "python", "python-2.7", "python-3.x" ]
Splitting a list into uneven groups?
38,861,457
<p>I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. </p> <p>Essentially here is what I have: some list, let's call it <code>mylist</code>, that contains x elements.</p> <p>I also have another file, lets call it second_list, that looks something like this:</p> <pre><code>{2, 4, 5, 9, etc.} </code></pre> <p>Now what I want to do is divide <code>mylist</code> into uneven groups by the spacing in second_list. So, I want my first group to be the first 2 elements of <code>mylist</code>, the second group to be the next 4 elements of <code>mylist</code>, the third group to be the next 5 elements of <code>mylist</code>, the fourth group to be the next 9 elements of `mylist, and so on.</p> <p>Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:</p> <pre><code>for j in range(0, len(second_list)): for i in range(0, len(mylist), second_list[j]): chunk_mylist = mylist[i:i+second_list[j]] </code></pre> <p>However this doesn't split it like I want it to. I want to end up with my # of sublists being <code>len(second_list)</code>, and also split correctly, and this gives a lot more than that (and also splits incorrectly).</p>
11
2016-08-09T22:30:37Z
38,861,547
<p>This solution keeps track of how many items you've written. Note that slicing past the end of a list does not raise an error, so if the sum of the numbers in the <code>second_list</code> exceeds the length of <code>mylist</code>, the trailing chunks will silently come out shorter (or empty).</p> <pre><code>total = 0 listChunks = [] for j in range(len(second_list)): chunk_mylist = mylist[total:total+second_list[j]] listChunks.append(chunk_mylist) total += second_list[j] </code></pre> <p>After running this, <code>listChunks</code> is a list containing sublists with the lengths found in <code>second_list</code>.</p>
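A quick run with the question's sizes (using a hypothetical 20-item list chosen so the sizes add up exactly):

```python
mylist = list(range(1, 21))       # 20 items
second_list = [2, 4, 5, 9]        # sizes from the question; 2+4+5+9 == 20

total = 0
listChunks = []
for size in second_list:
    listChunks.append(mylist[total:total + size])  # slice the next chunk
    total += size                                  # advance past what was taken
```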
1
2016-08-09T22:39:18Z
[ "python", "list", "python-2.7", "split", "sublist" ]
Splitting a list into uneven groups?
38,861,457
<p>I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. </p> <p>Essentially here is what I have: some list, let's call it <code>mylist</code>, that contains x elements.</p> <p>I also have another file, lets call it second_list, that looks something like this:</p> <pre><code>{2, 4, 5, 9, etc.} </code></pre> <p>Now what I want to do is divide <code>mylist</code> into uneven groups by the spacing in second_list. So, I want my first group to be the first 2 elements of <code>mylist</code>, the second group to be the next 4 elements of <code>mylist</code>, the third group to be the next 5 elements of <code>mylist</code>, the fourth group to be the next 9 elements of `mylist, and so on.</p> <p>Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:</p> <pre><code>for j in range(0, len(second_list)): for i in range(0, len(mylist), second_list[j]): chunk_mylist = mylist[i:i+second_list[j]] </code></pre> <p>However this doesn't split it like I want it to. I want to end up with my # of sublists being <code>len(second_list)</code>, and also split correctly, and this gives a lot more than that (and also splits incorrectly).</p>
11
2016-08-09T22:30:37Z
38,861,562
<p>Using <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=201608092238264469378">list-comprehensions</a> together with <a class='doc-link' href="http://stackoverflow.com/documentation/python/1494/list-slicing-selecting-parts-of-lists#t=20160809223929327569">slicing</a> and <a href="https://docs.python.org/2/library/functions.html#sum" rel="nofollow"><code>sum()</code></a> function (all <em>basic and built-in</em> tools of python): </p> <pre><code>mylist = [1,2,3,4,5,6,7,8,9,10] seclist = [2,4,6] [mylist[sum(seclist[:i]):sum(seclist[:i+1])] for i in range(len(seclist))] #output: [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10]] </code></pre> <hr> <p>If <code>seclist</code> is very long and you wish to be more efficient use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html" rel="nofollow"><code>numpy.cumsum()</code></a> first:</p> <pre><code>import numpy as np cumlist = np.hstack((0, np.cumsum(seclist))) [mylist[cumlist[i]:cumlist[i+1]] for i in range(len(cumlist)-1)] </code></pre> <p>and get the same results</p>
4
2016-08-09T22:40:32Z
[ "python", "list", "python-2.7", "split", "sublist" ]
Splitting a list into uneven groups?
38,861,457
<p>I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. </p> <p>Essentially here is what I have: some list, let's call it <code>mylist</code>, that contains x elements.</p> <p>I also have another file, lets call it second_list, that looks something like this:</p> <pre><code>{2, 4, 5, 9, etc.} </code></pre> <p>Now what I want to do is divide <code>mylist</code> into uneven groups by the spacing in second_list. So, I want my first group to be the first 2 elements of <code>mylist</code>, the second group to be the next 4 elements of <code>mylist</code>, the third group to be the next 5 elements of <code>mylist</code>, the fourth group to be the next 9 elements of `mylist, and so on.</p> <p>Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:</p> <pre><code>for j in range(0, len(second_list)): for i in range(0, len(mylist), second_list[j]): chunk_mylist = mylist[i:i+second_list[j]] </code></pre> <p>However this doesn't split it like I want it to. I want to end up with my # of sublists being <code>len(second_list)</code>, and also split correctly, and this gives a lot more than that (and also splits incorrectly).</p>
11
2016-08-09T22:30:37Z
38,861,604
<pre><code>subgroups = [] start=0 for i in second_list: subgroups.append(mylist[start:start + i]) start = i + start </code></pre> <p>At the end <code>subgroups</code> will contain the desired lists</p> <p>Example run:</p> <pre><code>&gt;&gt;&gt; mylist = [1,2,3,4,5,6,7,8,9,10,11,12] &gt;&gt;&gt; second_list = [2,4,5,9] &gt;&gt;&gt; subgroups = [] &gt;&gt;&gt; start=0 &gt;&gt;&gt; for i in second_list: ... subgroups.append(mylist[start:start + i]) ... start = i + start ... &gt;&gt;&gt; subgroups [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11], [12]] </code></pre>
1
2016-08-09T22:44:22Z
[ "python", "list", "python-2.7", "split", "sublist" ]
Splitting a list into uneven groups?
38,861,457
<p>I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. </p> <p>Essentially here is what I have: some list, let's call it <code>mylist</code>, that contains x elements.</p> <p>I also have another file, lets call it second_list, that looks something like this:</p> <pre><code>{2, 4, 5, 9, etc.} </code></pre> <p>Now what I want to do is divide <code>mylist</code> into uneven groups by the spacing in second_list. So, I want my first group to be the first 2 elements of <code>mylist</code>, the second group to be the next 4 elements of <code>mylist</code>, the third group to be the next 5 elements of <code>mylist</code>, the fourth group to be the next 9 elements of `mylist, and so on.</p> <p>Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:</p> <pre><code>for j in range(0, len(second_list)): for i in range(0, len(mylist), second_list[j]): chunk_mylist = mylist[i:i+second_list[j]] </code></pre> <p>However this doesn't split it like I want it to. I want to end up with my # of sublists being <code>len(second_list)</code>, and also split correctly, and this gives a lot more than that (and also splits incorrectly).</p>
11
2016-08-09T22:30:37Z
38,861,665
<p>You can create an iterator and <a href="https://docs.python.org/dev/library/itertools.html#itertools.islice"><em>itertools.islice</em></a>:</p> <pre><code>mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] seclist = [2,4,6] from itertools import islice it = iter(mylist) sliced =[list(islice(it, 0, i)) for i in seclist] </code></pre> <p>Which would give you:</p> <pre><code>[[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12]] </code></pre> <p>Once <em>i</em> elements are consumed they are gone so we keep getting the next <em>i</em> elements.</p> <p>Not sure what should happen with any remaining elements, if you want them added, you could add something like:</p> <pre><code>mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ,14] seclist = [2, 4, 6] from itertools import islice it = iter(mylist) slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist)] remaining = list(it) if remaining: slices.append(remaining) print(slices) </code></pre> <p>Which would give you:</p> <pre><code> [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14]] </code></pre> <p>Or in contrast if there were not enough, you could use a couple of approaches to remove empty lists, one an inner generator expression:</p> <pre><code>from itertools import islice it = iter(mylist) slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist) if sli] </code></pre> <p>Or combine with <a href="https://docs.python.org/dev/library/itertools.html#itertools.takewhile">itertools.takewhile</a>:</p> <pre><code> from itertools import islice, takewhile it = iter(mylist) slices = list(takewhile(bool, (list(islice(it, 0, i)) for i in seclist))) </code></pre> <p>Which for:</p> <pre><code>mylist = [1, 2, 3, 4, 5, 6] seclist = [2, 4, 6,8] </code></pre> <p>would give you:</p> <pre><code>[[1, 2], [3, 4, 5, 6]] </code></pre> <p>As opposed to:</p> <pre><code>[[1, 2], [3, 4, 5, 6], [], []] </code></pre> <p>What you use completely depends on your possible inputs and how you would like to handle the various possibilities.</p>
12
2016-08-09T22:51:36Z
[ "python", "list", "python-2.7", "split", "sublist" ]
Splitting a list into uneven groups?
38,861,457
<p>I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups. </p> <p>Essentially here is what I have: some list, let's call it <code>mylist</code>, that contains x elements.</p> <p>I also have another file, lets call it second_list, that looks something like this:</p> <pre><code>{2, 4, 5, 9, etc.} </code></pre> <p>Now what I want to do is divide <code>mylist</code> into uneven groups by the spacing in second_list. So, I want my first group to be the first 2 elements of <code>mylist</code>, the second group to be the next 4 elements of <code>mylist</code>, the third group to be the next 5 elements of <code>mylist</code>, the fourth group to be the next 9 elements of `mylist, and so on.</p> <p>Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:</p> <pre><code>for j in range(0, len(second_list)): for i in range(0, len(mylist), second_list[j]): chunk_mylist = mylist[i:i+second_list[j]] </code></pre> <p>However this doesn't split it like I want it to. I want to end up with my # of sublists being <code>len(second_list)</code>, and also split correctly, and this gives a lot more than that (and also splits incorrectly).</p>
11
2016-08-09T22:30:37Z
38,861,811
<p>A numpythonic approach:</p> <pre><code>&gt;&gt;&gt; lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] &gt;&gt;&gt; sec = [2, 4, 5] &gt;&gt;&gt; np.split(lst, np.cumsum(sec)) [array([0, 1]), array([2, 3, 4, 5]), array([ 6, 7, 8, 9, 10]), array([11])] </code></pre> <p>And here is a Python3.X approach using <code>itertools.accumulate()</code>:</p> <pre><code>&gt;&gt;&gt; lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] &gt;&gt;&gt; sec = [2, 4, 5] &gt;&gt;&gt; from itertools import accumulate &gt;&gt;&gt; sec = list(accumulate(sec)) &gt;&gt;&gt; sec = [0] + sec + [None] if sec[0] != 0 else sec + [None] &gt;&gt;&gt; &gt;&gt;&gt; [lst[i:j] for i, j in zip(sec, sec[1:])] [[0, 1], [2, 3, 4, 5], [6, 7, 8, 9, 10], [11]] </code></pre>
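The accumulate variant boiled down to a runnable sketch; prepending 0 and appending None means any leftover tail past the listed sizes forms a final chunk:

```python
from itertools import accumulate

lst = list(range(12))
sizes = [2, 4, 5]

# running totals give the slice boundaries: [0, 2, 6, 11, None]
bounds = [0] + list(accumulate(sizes)) + [None]
chunks = [lst[i:j] for i, j in zip(bounds, bounds[1:])]
```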
5
2016-08-09T23:06:17Z
[ "python", "list", "python-2.7", "split", "sublist" ]
Loop Through Folder - Extract Heading 1 From Documents - Extract to New Document
38,861,532
<p>I am trying to extract heading 1 from documents stored in a directory.</p> <p>I am extremely new to python, so my experience is extremely limited.</p> <p>My code below does not work, it has syntax and structural errors. </p> <p>The code returns an error document not defined.</p> <pre><code>import os from docx import Document #document = Document('C:\\Users\\Work\\Desktop\\Docs') mydir ="C:\\Users\\Work\\Desktop\\Docs\\" for arch in os.listdir(mydir): archpath = os.path.join(mydir, arch) with open(archpath) as f: for paragraph in document.paragraphs: if paragraph.style.name == 'Heading 1': print(paragraph.text) document.save = Document('headings.docx') </code></pre> <p>I have researched both on stack and on the internet, but I have not found anything that shows how to loop through documents in a folder.</p> <p>Have I set the code up in the correct manner? How can I loop through documents in a directory and extract the headings 1 to a new document.</p>
0
2016-08-09T22:37:59Z
38,864,245
<p>To get a list of files that you can iterate over, you could use:</p> <pre><code>import os os.chdir("path/to/files") list_of_files = os.listdir(os.getcwd()) </code></pre> <p>and then</p> <pre><code>for i in list_of_files: #extract heading from file i </code></pre> <p>For extracting the headers, you can use python's native <a href="http://stackoverflow.com/questions/125222/extracting-text-from-ms-word-files-in-python">docx module</a>. The link points to a SO answer where you can find a way of getting the entire data from the doc file. In this manner, you could get the heading. Haven't tried those methods though. </p>
0
2016-08-10T04:27:14Z
[ "python", "docx", "python-docx" ]
Subset chains within powerset in python
38,861,589
<p>If I've enumerated the powerset of the alphabet, for example, as 0,...,1&lt;&lt;26-1.</p> <p>For a given number in that range, I want to know what all of its subsets are. I can do something a bit inefficient like:</p> <pre><code>def find_subset_chain(subset): return [i for i in range(subset + 1) if i &amp; subset == i] </code></pre> <p>In the event that I'm doing this for <em>every</em> element of the powerset, I can proceed backward from the element in question until I hit a subset, and then attach the stuff I've already figured out, but it happens to be the case that I want to do this for some select elements of the power set and not all of them.</p> <p>Perhaps there exists a more number theoretic way to produce the list of subsets of a given element, a, of the powerset without having to iterate through every element up to a? </p>
0
2016-08-09T22:42:44Z
38,861,833
<p>We can find the least significant set bit of a number as</p> <pre><code>n &amp; -n </code></pre> <p>and we can use that to "count down" the subsets of a set represented as a bitmask by repeatedly clearing the least significant set bit and restoring all less-significant set bits from the original number:</p> <pre><code>def subsets(bitmask): current = bitmask while current: yield current lssb = current &amp; -current # find least significant set bit current &amp;= ~lssb # clear least significant set bit current |= bitmask &amp; (lssb - 1) # restore less significant bits from original yield current </code></pre>
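A self-contained sanity check of the bit trick, rewritten slightly so the update is one statement and the final yield explicitly emits the empty subset (drop it if you don't want the empty set):

```python
def subsets(bitmask):
    # descending enumeration: clear the least significant set bit, then
    # restore every less-significant bit that was set in the original mask
    current = bitmask
    while current:
        yield current
        lssb = current & -current
        current = (current & ~lssb) | (bitmask & (lssb - 1))
    yield 0  # the empty subset

found = sorted(subsets(0b10101))
```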
1
2016-08-09T23:08:36Z
[ "python", "performance", "optimization" ]
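An equivalent and widely used formulation of the same countdown, not taken from the answer above, replaces the explicit bit surgery with `(sub - 1) & mask`:

```python
def subsets_desc(bitmask):
    """Yield every subset of bitmask, including bitmask itself and 0,
    in decreasing numeric order."""
    sub = bitmask
    while True:
        yield sub
        if sub == 0:
            return
        # Clearing the lowest set bit and restoring the lower bits of
        # bitmask is exactly "subtract 1, then mask".
        sub = (sub - 1) & bitmask
```

For `bitmask = 0b101` this yields `0b101, 0b100, 0b001, 0`; a mask with k set bits has 2**k subsets, so the cost is proportional to the number of subsets rather than to the mask's numeric value.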
How to assign a sequential label to pandas groupby?
38,861,600
<p>I start with the following pandas dataframe, I wish to group each day, and make a new column called 'label', which labels the group with a sequential number. How do I do this?</p> <pre><code>df = pd.DataFrame({'val': [10,40,30,10,11,13]}, index=pd.date_range('2016-01-01 00:00:00', periods=6, freq='12H' ) ) # df['label'] = df.groupby(pd.TimeGrouper('D')) # what do i do here??? print df </code></pre> <p>output: </p> <pre><code> val 2016-01-01 00:00:00 10 2016-01-01 12:00:00 40 2016-01-02 00:00:00 30 2016-01-02 12:00:00 10 2016-01-03 00:00:00 11 2016-01-03 12:00:00 13 </code></pre> <p>desired output:</p> <pre><code> val label 2016-01-01 00:00:00 10 1 2016-01-01 12:00:00 40 1 2016-01-02 00:00:00 30 2 2016-01-02 12:00:00 10 2 2016-01-03 00:00:00 11 3 2016-01-03 12:00:00 13 3 </code></pre>
1
2016-08-09T22:43:42Z
38,861,691
<p>Try this:</p> <pre><code>df = pd.DataFrame({'val': [10,40,30,10,11,13]}, index=pd.date_range('2016-01-01 00:00:00', periods=6, freq='12H' ) ) </code></pre> <p>If you just want to group by date:</p> <pre><code>df['label'] = df.groupby(df.index.date).grouper.group_info[0] + 1 print(df) </code></pre> <p>To group by time more generally, you can use TimeGrouper:</p> <pre><code>df['label'] = df.groupby(pd.TimeGrouper('D')).grouper.group_info[0] + 1 print(df) </code></pre> <p>Both of the above should give you the following:</p> <pre><code> val label 2016-01-01 00:00:00 10 1 2016-01-01 12:00:00 40 1 2016-01-02 00:00:00 30 2 2016-01-02 12:00:00 10 2 2016-01-03 00:00:00 11 3 2016-01-03 12:00:00 13 3 </code></pre> <p>I think this is undocumented (or hard to find, at least). Check out:</p> <p><a href="http://stackoverflow.com/questions/15072626/get-group-id-back-into-pandas-dataframe">Get group id back into pandas dataframe</a></p> <p>for more discussion.</p>
3
2016-08-09T22:53:37Z
[ "python", "pandas" ]
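What `grouper.group_info[0]` produces is essentially "a sequential id per distinct key". A pure-Python sketch of that labeling idea (illustrative only; it uses first-appearance order, and the pandas call in the answer remains the practical route):

```python
def sequential_labels(keys):
    """Give each distinct key a 1-based id, in order of first appearance."""
    ids = {}
    return [ids.setdefault(k, len(ids) + 1) for k in keys]
```

For timestamps that arrive in chronological order, first-appearance order coincides with sorted order, which matches the desired output in the question.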
How to assign a sequential label to pandas groupby?
38,861,600
<p>I start with the following pandas dataframe, I wish to group each day, and make a new column called 'label', which labels the group with a sequential number. How do I do this?</p> <pre><code>df = pd.DataFrame({'val': [10,40,30,10,11,13]}, index=pd.date_range('2016-01-01 00:00:00', periods=6, freq='12H' ) ) # df['label'] = df.groupby(pd.TimeGrouper('D')) # what do i do here??? print df </code></pre> <p>output: </p> <pre><code> val 2016-01-01 00:00:00 10 2016-01-01 12:00:00 40 2016-01-02 00:00:00 30 2016-01-02 12:00:00 10 2016-01-03 00:00:00 11 2016-01-03 12:00:00 13 </code></pre> <p>desired output:</p> <pre><code> val label 2016-01-01 00:00:00 10 1 2016-01-01 12:00:00 40 1 2016-01-02 00:00:00 30 2 2016-01-02 12:00:00 10 2 2016-01-03 00:00:00 11 3 2016-01-03 12:00:00 13 3 </code></pre>
1
2016-08-09T22:43:42Z
38,883,582
<p>maybe a more simpler and intuitive approach is this:</p> <pre><code>df['label'] = df.groupby(df.index.day).keys </code></pre>
0
2016-08-10T21:08:45Z
[ "python", "pandas" ]
Numpy .shuffle gives the same results each time
38,861,686
<p>I am attempting to take a pandas DataFrame, take out 1 column, shuffle the contents of that column, then place it back into the DataFrame and return it. This is the code used:</p> <pre><code>def randomize(self, data, column): '''Takes in a pandas database and randomizes the values in column. data is the pandas dataframe to be altered. column is the column in the dataframe to be randomized. returns the altered dataframe. ''' df1 = data df1.drop(column, 1) newcol = list(data[column]) np.random.shuffle(newcol) df1[column] = newcol return df1 </code></pre> <p>It gives the same output every time I run it. Why is that?</p> <p>Note: I am using the same dataframe every time.</p>
4
2016-08-09T22:53:20Z
38,862,368
<p><strong><em>Your code</em></strong></p> <pre><code>def randomize(data, column): df1 = data.copy() newcol = list(data[column]) np.random.shuffle(newcol) df1[column] = newcol return df1 </code></pre> <p><strong><em>My <code>df</code></em></strong></p> <pre><code>df = pd.DataFrame(np.arange(25).reshape(5, 5), list('abcde'), list('ABCDE')) </code></pre> <p><strong><em>Your code + My <code>df</code></em></strong></p> <pre><code>np.random.seed([3,1415]) randomize(df, 'A') </code></pre> <p><a href="http://i.stack.imgur.com/xQdRB.png" rel="nofollow"><img src="http://i.stack.imgur.com/xQdRB.png" alt="enter image description here"></a></p> <p>And again</p> <pre><code>randomize(df, 'A') </code></pre> <p><a href="http://i.stack.imgur.com/yH29z.png" rel="nofollow"><img src="http://i.stack.imgur.com/yH29z.png" alt="enter image description here"></a></p> <p>Looks like it works!</p>
1
2016-08-10T00:18:19Z
[ "python", "pandas", "numpy", "random" ]
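The two pitfalls in this thread, reproducibility of the shuffle and mutating the caller's data, can be illustrated with the stdlib `random` module (a sketch; the question itself uses numpy, where `np.random.seed` and `arr.copy()` play the same roles):

```python
import random

def shuffled_copy(seq, seed=None):
    """Return a shuffled copy of seq, leaving seq untouched.
    Passing the same seed reproduces the same order."""
    rng = random.Random(seed)   # private RNG; global random state is untouched
    out = list(seq)
    rng.shuffle(out)
    return out
```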
Send css over python socket server
38,861,763
<p>I have created a small python html server, but I am having issues sending external css and javascript. The html transfers as it should and inline css works fine. The chrome developer tool responds with this error: </p> <blockquote> <p>Resource interpreted as Stylesheet but transferred with MIME type text/plain: "<a href="http://localhost:8888/style.css" rel="nofollow">http://localhost:8888/style.css</a>".</p> </blockquote> <p>Unfortunately I have no knowledge on what a "MIME type" is.</p> <p>Here is the python code:</p> <pre><code># server.py import socket file = open('website/index.html', 'r') def start_server(HOST, PORT): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((HOST, PORT)) s.listen(1) print('Serving HTTP on port %s ...' % PORT) while True: client_connection, client_address = s.accept() request = client_connection.recv(1024) print(request.decode('utf-8')) http_response = """\ http/1.1 200 OK """ + file.read() + """ """ client_connection.sendall(bytes(http_response, 'utf-8')) client_connection.close() </code></pre>
3
2016-08-09T23:01:55Z
38,861,808
<p>Add this line to your response string right beneath the 200 OK line:</p> <pre><code>Content-Type: text/css </code></pre> <p>What's happening is that Chrome is attempting to interpret the HTML you sent as as stylesheet, which you want. But, when you send it, you're sending with a content header that's telling chrome "I'm just plain text, nothing special here!" So Chrome is like, well something is wrong with that, I was expecting a stylesheet, and throws the error you see. If you tell Chrome that you're sending it a stylesheet, the error should be resolved.</p> <p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types" rel="nofollow">This is from Mozilla rather than Chrome, but it gives a good overview of MIME types.</a></p>
2
2016-08-09T23:06:14Z
[ "python", "html", "sockets" ]
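A way to avoid hard-coding the header per file type is the stdlib `mimetypes` module. A minimal sketch (the response-building helper here is illustrative, not the asker's exact server code):

```python
import mimetypes

def response_header(path):
    """Build an HTTP/1.1 200 header whose Content-Type is guessed from
    the file extension, falling back to text/plain."""
    mime, _encoding = mimetypes.guess_type(path)
    return "HTTP/1.1 200 OK\r\nContent-Type: %s\r\n\r\n" % (mime or "text/plain")
```

Sending `response_header("style.css")` before the stylesheet body yields `Content-Type: text/css`, which is exactly what Chrome was missing in the question.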
Need to write a string with exponent to file
38,861,771
<p>I need to write feet cubed (ft**3) to a file that will be read by another program, where the 3 is a superscript. When I cut and paste the text I need from an example input file (like this: ft³) and try to save the script it says:</p> <p>"Encoding file [filename] using "ascii" encoding will result in information loss. Do you want to continue?"</p> <p>The command I am using is:</p> <pre><code>f.write('Units ft³\n') </code></pre> <p>What kind of information will I be losing? How do I write 'ft³' (I assume in ascii format?) from my script to my input file? I'm not even sure where to begin so any information is appreciated.</p> <p>I am using PyScripter IDE if that makes any difference.</p>
0
2016-08-09T23:02:27Z
38,861,876
<pre><code>with open('some_file.txt','wb') as f: # save non-ascii bytes (superscript 3 might even work with just 'w') f.write('Units ft\xb3\n') #\xb3 is the code point for superscript three (Latin-1; in UTF-8 it becomes the two bytes \xc2\xb3) </code></pre> <p>you can do this, which will be recognized by <em>most</em> text editors</p>
0
2016-08-09T23:13:39Z
[ "python", "ascii" ]
Need to write a string with exponent to file
38,861,771
<p>I need to write feet cubed (ft**3) to a file that will be read by another program, where the 3 is a superscript. When I cut and paste the text I need from an example input file (like this: ft³) and try to save the script it says:</p> <p>"Encoding file [filename] using "ascii" encoding will result in information loss. Do you want to continue?"</p> <p>The command I am using is:</p> <pre><code>f.write('Units ft³\n') </code></pre> <p>What kind of information will I be losing? How do I write 'ft³' (I assume in ascii format?) from my script to my input file? I'm not even sure where to begin so any information is appreciated.</p> <p>I am using PyScripter IDE if that makes any difference.</p>
0
2016-08-09T23:02:27Z
39,089,769
<p>Joran's option works for my text editors, but unfortunately comes across as a replacement character (�) when I try to open the input file with the program. The program uses Micr$oft <a href="https://msdn.microsoft.com/en-us/library/s2tte0y1(v=vs.90).aspx" rel="nofollow">File.ReadAllLines</a> method, so it should be able to read the text if it is encoded in utf8, right?</p> <pre><code>file.write('units ft'+'\xb3'+'/day\n'.encode('utf8')) </code></pre> <p>Unfortunately, I am still getting the replacement character when I try to open the file in the program. The input file looks fine in a text editor and when I cut and paste the line from the text editor:</p> <pre><code>Units ft³/day </code></pre> <p>To answer Mephy's question: the error regarding the ASCII character came when I copied the ft³/day string from an existing input file and simply pasted it into my python script, hoping python would write it the same way the program reads it... but that didn't work either. When I do a file compare in UltraEdit the lines are reported as being the same, yet the program will read the one it produced but not the one my python script produced. If I change the units to something without an exponent (i.e. acre-feet) it works fine.</p> <p>(For clarification, I am the user who posted the original. However, I have lost my password for that account and cannot merge the two accounts with out it.)</p>
0
2016-08-22T22:39:56Z
[ "python", "ascii" ]
Need to write a string with exponent to file
38,861,771
<p>I need to write feet cubed (ft**3) to a file that will be read by another program, where the 3 is a superscript. When I cut and paste the text I need from an example input file (like this: ft³) and try to save the script it says:</p> <p>"Encoding file [filename] using "ascii" encoding will result in information loss. Do you want to continue?"</p> <p>The command I am using is:</p> <pre><code>f.write('Units ft³\n') </code></pre> <p>What kind of information will I be losing? How do I write 'ft³' (I assume in ascii format?) from my script to my input file? I'm not even sure where to begin so any information is appreciated.</p> <p>I am using PyScripter IDE if that makes any difference.</p>
0
2016-08-09T23:02:27Z
39,131,304
<p>Okay, problem solved. Thanks for pointing me in the right direction Joran. Although I'm not sure why I have to <code>.encode('utf8')</code> the string, since as you state <code>\xb3</code> is already in utf8. But I tried and tried and it won't work without the <code>.encode('utf8')</code>.</p> <p>In the second post I was encoding the wrong part of the string. Either of these works:</p> <pre><code>f.write('Units ft'+u'\xb3'.encode('utf8')+'/day\n') </code></pre> <p>or <code>encode('utf8')</code> the whole line:</p> <pre><code>f.write(('Units ft'+u'\xb3'+'/day\n').encode('utf8')) </code></pre> <p>Is one preferred over the other?</p>
0
2016-08-24T19:11:00Z
[ "python", "ascii" ]
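The confusion running through this thread is between the Unicode code point U+00B3 (³) and its encoded bytes: `\xb3` is the single Latin-1 byte, while UTF-8 encodes the same character as two bytes, `\xc2\xb3`. A short Python 3 sketch making that concrete (the file-writing part is shown commented, as just one possible usage):

```python
s = "Units ft\u00b3/day"          # superscript three, code point U+00B3

utf8_bytes = s.encode("utf-8")     # two bytes for the superscript
latin1_bytes = s.encode("latin-1") # one byte; only correct if the reader expects Latin-1

# Opening the file with an explicit encoding avoids manual .encode() calls:
# with open("units.txt", "w", encoding="utf-8") as f:
#     f.write(s + "\n")
```

If the consuming program shows a replacement character (�), it is decoding with a different encoding than the one used to write, so the two sides must agree.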
finding the max per id and creating a new column in pandas
38,861,804
<p>So, I have a pandas dataframe like this:</p> <pre><code>id, counts 1, 20 1, 21 1,15 1, 24 2,12 2,42 2,9 3,43 ... id, counts, label 1, 20, 0 1, 21, 0 1,15, 0 1, 24, 1 # because 24 is the highest count for id 1 2,12, 0 2,42, 1 # because 42 is the highest count for id 2 2,9, 0 3,43, ... </code></pre> <p>How do i do this in using pandas</p>
-3
2016-08-09T23:05:55Z
38,861,867
<pre><code>maxes = df.groupby('id').counts.max().rename('Max').reset_index() df1 = df.merge(maxes, how='left') df['Max'] = (df1.counts == df1.Max) * 1 df </code></pre> <p><a href="http://i.stack.imgur.com/7kDx0.png" rel="nofollow"><img src="http://i.stack.imgur.com/7kDx0.png" alt="enter image description here"></a></p>
4
2016-08-09T23:12:34Z
[ "python", "pandas" ]
finding the max per id and creating a new column in pandas
38,861,804
<p>So, I have a pandas dataframe like this:</p> <pre><code>id, counts 1, 20 1, 21 1,15 1, 24 2,12 2,42 2,9 3,43 ... id, counts, label 1, 20, 0 1, 21, 0 1,15, 0 1, 24, 1 # because 24 is the highest count for id 1 2,12, 0 2,42, 1 # because 42 is the highest count for id 2 2,9, 0 3,43, ... </code></pre> <p>How do i do this in using pandas</p>
-3
2016-08-09T23:05:55Z
38,861,911
<p>This seems to work:</p> <pre><code>df['label'] = 0 df['label'].iloc[df.groupby('id').apply(lambda x: x['counts'].argmax()).values] = 1 </code></pre> <p>But it is so ugly! And does not follow good coding practices... I'll try to improve it.</p> <hr> <p>If you like the below line, upvote <a href="http://stackoverflow.com/a/38862031/3765319">this answer</a> (Merlin's answer to this question) to say thanks.</p> <pre><code>df['label'] = np.where(df.index.isin((df.groupby('id')['counts'].idxmax())), 1, 0) </code></pre> <p>IMHO, you should use Merlin's answer to solve this problem. Mine is not good coding practice and will scale poorly compared to Merlin's</p>
3
2016-08-09T23:19:45Z
[ "python", "pandas" ]
finding the max per id and creating a new column in pandas
38,861,804
<p>So, I have a pandas dataframe like this:</p> <pre><code>id, counts 1, 20 1, 21 1,15 1, 24 2,12 2,42 2,9 3,43 ... id, counts, label 1, 20, 0 1, 21, 0 1,15, 0 1, 24, 1 # because 24 is the highest count for id 1 2,12, 0 2,42, 1 # because 42 is the highest count for id 2 2,9, 0 3,43, ... </code></pre> <p>How do i do this in using pandas</p>
-3
2016-08-09T23:05:55Z
38,862,031
<p>Try this: </p> <pre><code> df["label"] = np.where( df.index.isin((df.groupby("id")["counts"].idxmax())),1,0) id counts label 0 1 20 0 1 1 21 0 2 1 15 0 3 1 24 1 4 2 12 0 5 2 42 1 6 2 9 0 7 3 43 1 </code></pre>
1
2016-08-09T23:32:47Z
[ "python", "pandas" ]
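The per-group `idxmax` logic the answers rely on can be written out in plain Python, which makes the intent explicit. A sketch using the question's sample data (ties keep the first maximum, matching `idxmax`):

```python
def label_group_max(ids, counts):
    """Return a 0/1 list with 1 at the row holding each id's highest count."""
    best = {}   # id -> index of its current maximum
    for i, (g, c) in enumerate(zip(ids, counts)):
        if g not in best or c > counts[best[g]]:
            best[g] = i
    winners = set(best.values())
    return [int(i in winners) for i in range(len(ids))]
```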
tkinter debugging with Spyder
38,861,824
<p>I am using tkinter to build a GUI app with anaconda python and spyder on OSX. I can't get a prompt in the ipython console while my tkinter window is open. I can set a breakpoint and get into the debugger, but after that Spyder will shortly freeze or crash. </p> <p>Here is sample code that fails: </p> <pre><code>from tkinter import * def toggle(): i = 1 b = 2 print(i, b) pass root = Tk() frame = Frame(root, width=100, height=100) button = Button(frame,text="Press", command=toggle).grid(column=1, row=1) frame.pack() root.mainloop() </code></pre> <p>I run the debugger and set a breakpoint in the toggle() function. At the ipdb> prompt I can get the state of frame but nothing for button as below:</p> <pre><code>ipdb&gt; frame &lt;tkinter.Frame object .4709317264&gt; ipdb&gt; button </code></pre> <p>I click on the button in the tkinter window and back to the ipdb> prompt and soon spyder crashes. </p> <p>Question 1: Can this be fixed? Question 2: Is there a way to get a Spyder ipython console and variable explorer pane to function when a tkinter window open?</p>
1
2016-08-09T23:07:15Z
38,866,871
<p>Try changing frame.pack() to frame.grid(). In this GUI you don't really need to use grid for geometry, though, so you might just want to change Button().grid() to Button().pack() instead.</p>
0
2016-08-10T07:29:38Z
[ "python", "debugging", "tkinter", "spyder" ]
Python and Ctypes: Not getting expected offsets
38,861,906
<p>I am converting an app from VB.NET to Python 3.4 and am running into problems calling a function within a DLL file using ctypes. For that particular function, two structures are passed in byref. For one structure, the field offsets are not working out as needed. The last field's offset ends up being off. That makes all of the fields in that structure have values that are not correct.</p> <p>For the PPNChartList structure below, the field offsets should be 0, 8, 12, and 76 (based on the working VB.NET code) but end up being 0, 8, 12, and 72. So, how can I shift that last field over or otherwise get the correct values for that structure?</p> <p>Any guidance would be appreciated.</p> <p>Below is the Python code:</p> <pre><code>import os from ctypes import * _sFile = 'ppn.dll' _sPath = os.path.join(*(os.path.split(__file__)[:-1] + (_sFile,))) _ppn = windll.LoadLibrary(_sPath) class PPNChartSpec(Structure): _fields_ = [("dStructVer", c_double), ("iNumLanes", c_long), ("iNumCars", c_long), ("iNumRounds", c_long), ("iOptHeatCountEven", c_long), ("iOptAvoidConsecRaces", c_long), ("iOptAvoidRepeatLanes", c_long)] class PPNChartList(Structure): # This is the problem structure _fields_ = [("dStructVer", c_double), ("iUsedAlts", c_long), ("aiChartType", c_long * 15), ("audtChartSpec", PPNChartSpec * 15)] # The offset for this last field is off by 4 class PPNChart(Structure): _fields_ = [("dStructVer", c_double), ("udtSpec", PPNChartSpec), ("iChartType", c_long), ("iNumLanes", c_long), ("iNumHeats", c_long), ("aiCar", c_long * 2399)] makePPNChart = _ppn.makePPNChart ptrChartSpec = POINTER(PPNChartSpec) ptrChart = POINTER(PPNChart) makePPNChart.argtypes = [ptrChartSpec, ptrChart] makePPNChart.restype = c_int altCharts = _ppn.altCharts ptrChartSpec = POINTER(PPNChartSpec) ptrChartList = POINTER(PPNChartList) altCharts.argtypes = [ptrChartSpec, ptrChartList] altCharts.restype = c_int </code></pre> <p>EDIT: If it helps any, below are the applicable portions of the working VB.NET code showing the structure and function definitions. This shows the appropriate offsets for the structures. In Python, I am able to call the makePPNChart() function and pass it the two structures byref that it needs. The structures come back as they should. So, I have it partially working in Python.</p> <pre><code>&lt;StructLayout(LayoutKind.Explicit)&gt; _ Friend Structure PPNChartSpec &lt;FieldOffset(0)&gt; Dim dStructVer As Double &lt;FieldOffset(8)&gt; Dim iNumLanes As Integer &lt;FieldOffset(12)&gt; Dim iNumCars As Integer &lt;FieldOffset(16)&gt; Dim iNumRounds As Integer &lt;FieldOffset(20)&gt; Dim iOptHeatCountEven As Integer &lt;FieldOffset(24)&gt; Dim iOptAvoidConsecRaces As Integer &lt;FieldOffset(28)&gt; Dim iOptAvoidRepeatLanes As Integer End Structure &lt;StructLayout(LayoutKind.Explicit)&gt; _ Friend Structure PPNChartList &lt;FieldOffset(0)&gt; Dim dStructVer As Double &lt;FieldOffset(8)&gt; Dim iUsedAlts As Integer &lt;FieldOffset(12)&gt; &lt;MarshalAs(UnmanagedType.ByValArray, SizeConst:=15)&gt; _ Dim aiChartType() As Integer &lt;FieldOffset(76)&gt; &lt;MarshalAs(UnmanagedType.ByValArray, SizeConst:=15)&gt; _ Dim audtChartSpec() As PPNChartSpec End Structure &lt;StructLayout(LayoutKind.Explicit)&gt; _ Friend Structure PPNChart &lt;FieldOffset(0)&gt; Dim dStructVer As Double &lt;FieldOffset(8)&gt; Dim udtSpec As PPNChartSpec &lt;FieldOffset(40)&gt; Dim iChartType As Integer &lt;FieldOffset(44)&gt; Dim iNumLanes As Integer &lt;FieldOffset(48)&gt; Dim iNumHeats As Integer &lt;FieldOffset(52)&gt; &lt;MarshalAs(UnmanagedType.ByValArray, SizeConst:=2399)&gt; _ Dim aiCar() As Integer End Structure &lt;DllImport("ppn.dll")&gt; _ Friend Shared Function makePPNChart(&lt;MarshalAs(UnmanagedType.Struct)&gt; ByRef spec As PPNChartSpec, &lt;MarshalAs(UnmanagedType.Struct)&gt; ByRef Chart As PPNChart) As Short End Function &lt;DllImport("ppn.dll")&gt; _ Friend Shared Function altCharts(&lt;MarshalAs(UnmanagedType.Struct)&gt; ByRef spec As PPNChartSpec, &lt;MarshalAs(UnmanagedType.Struct)&gt; ByRef List As PPNChartList) As Short End Function </code></pre> <p>EDIT #2: I used the loop below to check the offsets. Adding in an extra c_long field into that structure's definition, I would think that would shift things over just right. However, as you can see below, the offset for the audtChartSpec structure jumps to 80, instead of the expected 76.</p> <pre><code>for f, t in ppn.PPNChartList._fields_: a = getattr(ppn.PPNChartList, f) print(f, a) dStructVer &lt;Field type=c_double, ofs=0, size=8&gt; iUsedAlts &lt;Field type=c_long, ofs=8, size=4&gt; aiChartType &lt;Field type=c_long_Array_15, ofs=12, size=60&gt; unknown &lt;Field type=c_long, ofs=72, size=4&gt; audtChartSpec &lt;Field type=PPNChartSpec_Array_15, ofs=80, size=480&gt; </code></pre>
0
2016-08-09T23:18:44Z
38,905,355
<p>12+15*4 <em>is</em> 72, so the VB.NET structure skips those 4 bytes, maybe to guarantee that the array is null terminated. Without knowing the actual struct definition from the called library it's hard to say. You could just add an additional 4 byte padding field.</p> <p>To control the alignment of the members you can set <code>_pack_ = 4</code> to make them 4-bytes aligned (<a href="https://docs.python.org/3/library/ctypes.html#structure-union-alignment-and-byte-order" rel="nofollow">the equivalent of <code>#pragma pack(4)</code></a>):</p> <pre><code>class PPNChartList(Structure): _pack_ = 4 _fields_ = [("dStructVer", c_double), ("iUsedAlts", c_long), ("aiChartType", c_long * 15), ("padding", c_long), ("audtChartSpec", PPNChartSpec * 15)] </code></pre> <p>Now you should get the expected result, at least on systems where c_long is 32bit.</p>
0
2016-08-11T20:16:11Z
[ "python", "ctypes" ]
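The offset jump described in EDIT #2 can be reproduced without the DLL, since the alignment rules live in ctypes itself. A standalone sketch (it uses `c_int` instead of `c_long` so the fields are 4 bytes on every platform, and a trailing `c_double` stands in for the nested structure array, whose first member is a double and therefore carries 8-byte alignment):

```python
import ctypes

fields = [("dStructVer", ctypes.c_double),
          ("iUsedAlts", ctypes.c_int),
          ("aiChartType", ctypes.c_int * 15),
          ("padding", ctypes.c_int),
          ("tail", ctypes.c_double)]  # stand-in for the 8-byte-aligned array

class Natural(ctypes.Structure):      # default (natural) alignment
    _fields_ = fields

class Packed(ctypes.Structure):       # 4-byte packing, like #pragma pack(4)
    _pack_ = 4
    _fields_ = fields
```

With `_pack_ = 4` the tail lands at offset 76, matching the VB.NET layout; under natural alignment the 8-byte member is typically pushed to 80, which is exactly the jump seen in the question.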
tkinter canvas color not changing
38,861,916
<p>I have two tkinter canvases which need to change color based on the data which I receive from another module, basically 0 or 1. Ex: canvas1 to black and canvas2 to green if 1 is received and vice versa if 0 is received.</p> <p>I have used the multiprocessing queue technique to receive the data, but when I try to apply the changes it is not updating. I assume there is something wrong with my use of <code>self</code>.</p> <p>Here is my code snippet: main.py</p> <pre><code>import multiprocessing from udpsocket import * from ui import * if __name__ == "__main__": queue = multiprocessing.Queue() ui = multiprocessing.Process(target=uiMain, args=(queue,)) ui.daemon = True ui.start() udpMain(queue) </code></pre> <p>udpsocket.py:</p> <pre><code>import time import struct import socket import ui MYPORT = 51506 MYGROUP_4 = '225.0.0.1' MYTTL = 1 # Increase to reach other networks def udpMain(queue): app = udpsocket(queue) class udpsocket(): def __init__(self,queue): print('UDP Socket started') group = MYGROUP_4 self.receiver('225.0.0.1',queue) def receiver(self,group,queue): print('Receiver') addrinfo = socket.getaddrinfo(group, None)[0] # Create a socket s = socket.socket(addrinfo[0], socket.SOCK_DGRAM) #.... reuse address, binding, add membership # loop, send data to ui while True: data, sender = s.recvfrom(1500) while data[-1:] == '\0': data = data[:-1] # Strip trailing \0's print (str(sender) + ' ' + str(ord(data))) queue.put(ord(data)) ui.do_something(queue) </code></pre> <p>ui.py:</p> <pre><code>from tkinter import * from tkinter import ttk import multiprocessing def uiMain(queue): app = MainWindow() app.mainloop() class MainWindow(Frame): def __init__(self): Frame.__init__(self) self.master.title("Test") self.master.minsize(330, 400) self.grid(sticky=E+W+N+S) modeFrame = Frame(self) modeFrame.pack(side="top", fill="x") self.canvas1 = Canvas(modeFrame, height=25, width=25) self.canvas1.create_oval(5, 5, 20, 20, fill="black", tags="SetupLight") self.canvas1.pack(side="left") self.canvas2 = Canvas(modeFrame, height=25, width=25) self.canvas2.create_oval(5, 5, 20, 20, fill="black", tags="RunLight") self.canvas2.pack(side="left") def changeLight(self,value): print('change light ' + str(value)) if(value): self.canvas1.itemconfig("SetupLight", fill="black") self.canvas2.itemconfig("RunLight", fill="green") else: self.canvas1.itemconfig("SetupLight", fill="green") self.canvas2.itemconfig("RunLight", fill="black") def do_something(queue): t = MainWindow() MainWindow.changeLight(t,queue.get()) # is this way of calling correct?? </code></pre> <p><strong>Note:</strong> I tried modifying <code>modeFrame</code> to <code>self</code> while creating <code>Canvas</code>, but nothing happened</p> <p>I understood from the below link <a href="http://stackoverflow.com/questions/13683395/tkinter-canvas-not-updating-color">tkinter canvas not updating color</a> that I was creating MainWindow() again and again and that is the reason that the canvas was not changing the color. I need an implementation which could help me in changing the colors with the use-case scenario above</p>
0
2016-08-09T23:20:05Z
38,868,730
<p>As you mentioned yourself already, you are creating instances of the <code>MainWindow</code> class all over in your <code>do_something</code>.</p> <p>Second, the <code>do_something</code> function call is a bit strange for me. I would prefer </p> <pre><code>def do_something(queue): t = MainWindow() t.changeLight(queue.get()) </code></pre> <p>It might be arguable whether this is the more Pythonic way or not, but in almost all tutorials, how-tos and example code you will see it written this way.</p> <p>Last time I implemented something like this, I took a different approach.</p> <p>I started threads from the GUI, passed a queue to them and let the threads handle traffic.</p> <p>The GUI updated cyclically (every 100ms), where it checked the queue for items and based on what was inside the queue it updated the GUI.</p> <p>The threads were started again every time the update finished. (The application itself was a Session Watchdog for a Server including localization of the Users inside the threads)</p> <p>So as some implementation advice I would start like the following:</p> <pre><code>import Tkinter as tk import Queue class MainWindow(tk.Frame): """ This is the GUI It starts threads and updates the GUI this is needed as tkinter GUI updates must happen inside the Main Thread """ def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) # Here some initial action takes place # ... self.queue = Queue.Queue() self.start_threads() self.__running = True self.update_gui() def start_threads(self): """ Here we start the threads for ease of reading only one right now thread_handler_class is a class performing tasks and appending results to a queue """ thread = thread_handler_class(self.queue) def update_gui(self): """ Update the UI with Information from the queue """ if not self.queue.empty(): # we need to check for emptiness first # otherwise we get exceptions if empty while self.queue.qsize() > 0: # handle the data # self.queue.get() automatically reduces # qsize return value next round so if no # new elements are added to the queue, # it will go to zero data = self.queue.get() # do stuff here # ... # Set up the cyclic task if self.__running: self.after(100, self.update_gui) # if necessary we can also restart the threads # if they have a determined runtime # self.after(100, self.start_threads) if __name__ == "__main__": APP = MainWindow() APP.mainloop() </code></pre>
1
2016-08-10T09:02:33Z
[ "python", "canvas", "tkinter" ]
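The polling pattern in the answer, checking the queue inside a `self.after()` callback, can be isolated from tkinter. A small sketch of a non-blocking drain that sidesteps the `empty()`/`qsize()` race by relying on `get_nowait()` (the `handler` name is illustrative):

```python
import queue

def drain(q, handler):
    """Pass every item currently queued to handler, without ever blocking."""
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return
        handler(item)
```

Inside an update callback this would be something like `drain(self.queue, ...)` followed by re-scheduling with `self.after(100, self.update_gui)`.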
Pandas - Propagating variance
38,861,934
<p>For data of the form</p> <pre><code>mean var count 31.5910645161 747.570011484 310 45.7 350.0658 2 77.2548205128 4968.46005809 195 166.830361446 13755.5734253 166 40.29 208.8968 2 254.35 15204.1922 2 4.81 0.0 1 56.0124200913 962.697805171 1533 114.25 0.0 1 24.12 422.257129412 18 </code></pre> <p>Where there are many more repetitions of count later. I need to <code>groupby('count').agg('mean','var')</code> in order to propagate the variance properly. However, that code does not work (mean and var don't know what to do with the 2 columns), and of course just using mean is out of the question (the mean of the variance is not the variance of the mean). How do you do this such that the variance gets sent forward properly?</p>
2
2016-08-09T23:22:34Z
38,861,989
<blockquote> <pre><code>Parameters ---------- arg : function or dict Function to use for aggregating groups. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. If passed a dict, the keys must be DataFrame column names. Accepted Combinations are: - string cythonized function name - function - list of functions - dict of columns -&gt; functions - nested dict of names -&gt; dicts of functions </code></pre> </blockquote> <p>You passed two strings when you needed to pass a list of strings.</p> <pre><code>df.groupby('count').agg(['mean','var']) </code></pre> <p><a href="http://i.stack.imgur.com/JjCUe.png" rel="nofollow"><img src="http://i.stack.imgur.com/JjCUe.png" alt="enter image description here"></a></p>
1
2016-08-09T23:28:01Z
[ "python", "pandas" ]
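`agg(['mean','var'])` groups the rows, but the propagation itself, combining per-group (mean, var, count) triples into the statistics of the pooled data, can also be written directly. A sketch assuming the variances are sample variances (ddof=1), which is what pandas' `var` computes by default:

```python
def pooled(groups):
    """Combine (mean, var, count) triples into the pooled (mean, var).
    Variances are sample variances (ddof=1); singleton groups can pass var=0."""
    groups = list(groups)
    n_total = sum(n for _m, _v, n in groups)
    grand_mean = sum(n * m for m, _v, n in groups) / n_total
    # Total sum of squares = within-group part + between-group part.
    ss = sum((n - 1) * v + n * (m - grand_mean) ** 2 for m, v, n in groups)
    return grand_mean, ss / (n_total - 1)
```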
Attempting to read from two serial ports at once
38,861,980
<p>I am trying to read from two serial ports at once. Each connected device spits out a line of data. I read the data from each port as a list and then concatenate the list and print it out as one line.</p> <p>If I read each port individually, the data updates fine. But the second I attempt to read from both, it lags up and the data stops changing in the print output. The timestamp updates fine, but the data itself is what starts to lag.</p> <p>Below is my code, should I be doing some sort of threading? I am reading from an Arduino and a Teensy.</p> <pre><code>import serial import time serA = serial.Serial('/dev/arduino', 230400) serT = serial.Serial('/dev/teensy', 9600) while 1 : timestamp = "%f" % time.time() print(timestamp) arduino = serA.readline().rstrip('\n') data_listA = arduino.split('$') teensy = serT.readline().rstrip('\n') data_listT = teensy.split('$') data_list = data_listA + data_listT print(data_list) </code></pre>
0
2016-08-09T23:27:14Z
38,862,051
<p>just check to see if your serial port has bytes to read before you try to read it ...</p> <pre><code>while 1 : timestamp = "%f" % time.time() print(timestamp) if serA.inWaiting(): # only read if there is something waiting to be read arduino = serA.readline().rstrip('\n') data_listA = arduino.split('$') print("GOT ARDUINO:",data_listA) if serT.inWaiting(): teensy = serT.readline().rstrip('\n') data_listT = teensy.split('$') print("GOT TEENSY:",data_listT) </code></pre>
1
2016-08-09T23:35:01Z
[ "python" ]
Attempting to read from two serial ports at once
38,861,980
<p>I am trying to read from two serial ports at once. Each connected device spits out a line of data. I read the data from each port as a list and then concatenate the list and print it out as one line.</p> <p>If I read each port individually, the data updates fine. But the second I attempt to read from both, it lags up and the data stops changing in the print output. The timestamp updates fine, but the data itself is what starts to lag.</p> <p>Below is my code, should I be doing some sort of threading? I am reading from an Arduino and a Teensy.</p> <pre><code>import serial import time serA = serial.Serial('/dev/arduino', 230400) serT = serial.Serial('/dev/teensy', 9600) while 1 : timestamp = "%f" % time.time() print(timestamp) arduino = serA.readline().rstrip('\n') data_listA = arduino.split('$') teensy = serT.readline().rstrip('\n') data_listT = teensy.split('$') data_list = data_listA + data_listT print(data_list) </code></pre>
0
2016-08-09T23:27:14Z
38,874,247
<p>Using <code>inWaiting()</code> unfortunately did not work for me. I ended up having to use threading. A basic example for people who might encounter my problem is shown below.</p> <pre><code>import serial import Queue import threading queue = Queue.Queue(1000) serA = serial.Serial('/dev/arduino', 230400) serT = serial.Serial('/dev/teensy', 9600) def serial_read(s): while 1: line = s.readline() queue.put(line) threadA = threading.Thread(target=serial_read, args=(serA,)) threadA.start() threadT = threading.Thread(target=serial_read, args=(serT,)) threadT.start() while 1: line = queue.get(True, 1) print line </code></pre> <p>I based my code on the last answer from <a href="http://stackoverflow.com/questions/37505062/read-from-two-serial-ports-asynchronously">this</a> question.</p>
0
2016-08-10T13:03:08Z
[ "python" ]
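The producer/consumer structure in the answer above can be demonstrated without serial hardware. This is a Python 3 sketch of the same idea (the in-memory line sources replace the two serial ports, so the names and data are placeholders, not real devices):

```python
import threading
import queue  # Python 3 name of the Queue module used in the answer

q = queue.Queue()

def reader(lines):
    # Each reader thread pushes the lines from its own source onto the
    # shared queue, just as serial_read() does with s.readline().
    for line in lines:
        q.put(line)

sources = (["a1\n", "a2\n"], ["t1\n"])  # stand-ins for serA and serT
threads = [threading.Thread(target=reader, args=(src,)) for src in sources]
for t in threads:
    t.start()
for t in threads:
    t.join()

collected = []
while not q.empty():
    collected.append(q.get())

print(sorted(collected))
```

The key point is that each blocking read lives in its own thread, so a slow or silent port never stalls the other one; the main loop only drains the queue.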
Raw sql to json in Django, with Datetime and Decimal MySql columns
38,862,016
<p>I am using Ajax to make some requests from client to server. I am using Django and have used some raw SQL queries before, but all of my fields were Int, varchar and a Decimal; for the last one I had an encoding problem, but I overrode the "default" property of Json and everything worked. </p> <p>But that was before. Now I have a query which gives me Decimal and DateTime fields, and both of them gave me encoding errors. The overridden "default" doesn't work now, which is why this time I used <code>DjangoJSONEncoder</code>. But now I have another problem, and it's not an encoding one. I am using the <code>dictfetchall(cursor)</code> method, recommended in the Django docs, to return a dictionary from the SQL query, because <code>cursor.fetchall()</code> gives me this error: <code>'tuple' object has no attribute '_meta'</code>.</p> <p>Before, I just sent that dictionary to <code>json.dumps(response_data,default=default)</code> and everything was fine, but now for the encoding I have to use the following: <code>json.dumps(response_data,cls=DjangoJSONEncoder)</code>, and if I send the dictionary in that way, I get this error:</p> <pre><code>SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
</code></pre> <p>And if I try to use the serializers, like this:</p> <pre><code>response_data2= serializers.serialize('json', list(response_data))
</code></pre> <p>And later send <code>response_data2</code> to <code>dumps</code>, I get this error:</p> <pre><code>'dict' object has no attribute '_meta'
</code></pre> <p>This is the code for the MySQL query:</p> <pre><code>def consulta_sql_personalizada(nombres,apellidos,puesto):
    from django.db import connection, transaction
    cursor = connection.cursor()
    cursor.execute("""select E.idEmpleado as id,CONCAT(Per.nombres_persona,' ',Per.apellidos_persona) as nombre,P.nombre_puesto as puesto,E.motivo_baja_empleado as motivo_baja,E.fecha_contratacion_empleado AS fecha_contratacion,E.fecha_baja_empleado as fecha_baja,
SUM(V.total_venta) AS ventas_mes,E.fotografia_empleado as ruta_fotografia from Empleado as E inner join Puesto as P on E.Puesto_idPuesto=P.idPuesto inner join Venta as V on V.vendedor_venta=E.idEmpleado inner join Persona as Per on E.Persona_idPersona=Per.idPersona where (Per.nombres_persona like %s OR Per.apellidos_persona like %s OR E.Puesto_idPuesto=%s) AND E.estado_empleado=1 AND V.estado_venta=1 AND (YEAR(V.fecha_venta) = YEAR(Now()) AND MONTH(V.fecha_venta) = MONTH(Now()))""",[nombres,apellidos,puesto]) row = dictfetchall(cursor) return row </code></pre> <p>And this is the last part of the view that makes the query and send it to ajax using json:</p> <pre><code> response_data=consulta_sql_personalizada(rec_nombres,rec_apellidos,rec_puesto) return HttpResponse( json.dumps(response_data,cls=DjangoJSONEncoder), content_type="application/json" ) else: return HttpResponse( json.dumps({"nothing to see": "this isn't happening"}), content_type="application/json" ) </code></pre> <p>What I want to know is, how can I parse the raw sql result to Json using that enconding?</p>
2
2016-08-09T23:30:13Z
38,862,204
<p>Sorry, it was my mistake. I'm using the jQuery ajax method, and in the "success" callback I forgot to stop using <code>JSON.parse</code> to print the data in the console; the data was JSON already, and that's why I got that <code>line 1 column 1</code> error. My code worked exactly as posted here. If someone wants to know how to make asynchronous requests, I followed this tutorial: <a href="https://realpython.com/blog/python/django-and-ajax-form-submissions/" rel="nofollow">Django form submissions using ajax</a> </p>
0
2016-08-09T23:53:36Z
[ "python", "mysql", "json", "django" ]
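The `Decimal`/`datetime` encoding problem from the question above can be handled without Django at all, via the `default=` hook of `json.dumps`. This is a framework-free sketch (the row contents are made-up sample data, not the asker's query results):

```python
import datetime
import decimal
import json

def default(obj):
    # Fallback for types the stdlib json encoder can't serialize.
    if isinstance(obj, decimal.Decimal):
        return float(obj)
    if isinstance(obj, (datetime.date, datetime.datetime)):
        return obj.isoformat()
    raise TypeError("Not serializable: %r" % (obj,))

row = {"total": decimal.Decimal("12.50"),
       "fecha": datetime.date(2016, 8, 9)}

payload = json.dumps(row, default=default, sort_keys=True)
print(payload)
```

`DjangoJSONEncoder` does essentially this (ISO-format dates, stringified decimals), which is why passing `cls=DjangoJSONEncoder` works once the payload is a plain dict of rows.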
Convert unicode base notation to string in python
38,862,034
<p>I have data in the form of <code>2\u2070iPSC</code>, which is actually <code>2⁰iPSC</code>. How do I convert <code>2\u2070iPSC</code> to <code>2⁰iPSC</code> using Python? </p>
1
2016-08-09T23:32:52Z
38,862,071
<p>You need to add the <code>u</code> prefix in order to mark it as a unicode string.</p> <pre><code>unicode_string = u'2\u2070iPSC'
print(unicode_string)
&gt;&gt; 2⁰iPSC
</code></pre>
0
2016-08-09T23:37:02Z
[ "python", "unicode" ]
Convert unicode base notation to string in python
38,862,034
<p>I have data in the form of <code>2\u2070iPSC</code>, which is actually <code>2⁰iPSC</code>. How do I convert <code>2\u2070iPSC</code> to <code>2⁰iPSC</code> using Python? </p>
1
2016-08-09T23:32:52Z
38,862,192
<p>As a unicode string the data already is <code>2⁰iPSC</code>. I think that you are concerned about its display.</p> <p>The code point <code>\u2070</code> <em>is</em> <code>⁰</code>:</p> <pre><code>&gt;&gt;&gt; import unicodedata &gt;&gt;&gt; unicodedata.name(u'\u2070') 'SUPERSCRIPT ZERO' </code></pre> <p>If you are using Python 2 you need to prefix the string with <code>u</code> to indicate that the unicode escape sequences are to be interpreted:</p> <pre><code>&gt;&gt;&gt; type('2\u2070iPSC') &lt;type 'str'&gt; &gt;&gt;&gt; type(u'2\u2070iPSC') # note `u` prefix &lt;type 'unicode'&gt; </code></pre> <p>In Python 3 strings are unicode by default, so the <code>u</code> prefix is not required:</p> <pre><code>&gt;&gt;&gt; type('2\u2070iPSC') &lt;class 'str'&gt; </code></pre> <p>To <em>display</em> the string you can simply print it:</p> <pre><code>&gt;&gt;&gt; print(u'2\u2070iPSC') 2⁰iPSC </code></pre> <p>This works if the default encoding of your interpreter can represent <code>u'\u2070'</code>, e.g. UTF-8. </p>
2
2016-08-09T23:51:57Z
[ "python", "unicode" ]
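The distinction drawn in the answer above (the code point already is `⁰`, only the display matters) can be checked directly. A Python 3 sketch, also covering the case where the data arrives as a literal backslash-u sequence rather than an interpreted escape:

```python
import unicodedata

s = "2\u2070iPSC"     # Python 3: the \u escape is interpreted at parse time
raw = r"2\u2070iPSC"  # raw string: a literal backslash, 'u', '2070' sequence

print(len(s), len(raw))         # 6 characters vs 11 characters
print(unicodedata.name(s[1]))   # SUPERSCRIPT ZERO

# If the data arrived as a literal "\u2070" sequence (e.g. read from a file),
# the unicode_escape codec turns it into the real code point:
decoded = raw.encode("ascii").decode("unicode_escape")
print(decoded == s)
```

So if `print` already shows `2⁰iPSC`, no conversion is needed; only literal escape sequences in the data require the `unicode_escape` decode step.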
Xlwings fails to import UDFs
38,862,066
<p>So, I have an Excel workbook that I originally created on Windows 7 running Anaconda 4.1.1 (Python 3.5) and Excel 2013. Everything was working great in that environment. I am now trying to work on it remotely using a computer running Windows 10, Anaconda 4.1.1, and Excel 2016. I downloaded the workbook and its corresponding .py file, but when trying to import the UDFs in the .py file I get the following error: <a href="http://i.stack.imgur.com/QGGt5.png" rel="nofollow">(link to image)</a></p> <p>If for some reason the link is broken, the error text is basically </p> <pre><code>ImportError: No module named 'C:\\Anaconda3\\custom_scripts\\loop_parameters' </code></pre> <p>The file's name is "loop_parameters.py" and it is found in the path "C:\Anaconda3\custom_scripts". It seems to me that somehow the name of the module (<code>loop_parameters</code>) is getting conflated with the path associated with it. But I have no idea how to fix that. I tried changing the UDF_PATH variable with no change in the error message at all. Thoughts?</p>
0
2016-08-09T23:36:18Z
38,901,756
<p>So I ended up just starting a new project in xlwings on the new machine, copying the Python code into the new .py file, importing the UDFs, then copying the entire workbook from the old (non-functioning) book to the new one, where the UDFs worked fine. I'm still not sure what the error was or why it was happening. The workaround does seem clunky, but it only took a few minutes.</p>
0
2016-08-11T16:30:14Z
[ "python", "xlwings" ]
How do I add more python modules to my yocto/openembedded project?
38,862,088
<p>I wish to add more Python modules to my Yocto/OpenEmbedded project, but I am unsure how to do so. I wish to add Flask and its dependencies.</p> <p>Anyone able to help me on this?</p> <p>Thanks,</p> <p>Tim</p>
0
2016-08-09T23:38:56Z
38,865,389
<p>In your image recipe you can add a Python module by adding it to the <code>IMAGE_INSTALL</code> variable:</p> <pre><code>IMAGE_INSTALL += "python-numpy"
</code></pre> <p>You can find possible modules, for example, by searching for them with wildcards:</p> <pre><code>find -name "*python*numpy*bb"
</code></pre> <p>Running that in the Yocto folder yields:</p> <pre><code>./poky/meta/recipes-devtools/python/python-numpy_1.7.0.bb
</code></pre>
0
2016-08-10T06:03:18Z
[ "python", "linux", "yocto", "bitbake", "openembedded" ]
How do I add more python modules to my yocto/openembedded project?
38,862,088
<p>I wish to add more Python modules to my Yocto/OpenEmbedded project, but I am unsure how to do so. I wish to add Flask and its dependencies.</p> <p>Anyone able to help me on this?</p> <p>Thanks,</p> <p>Tim</p>
0
2016-08-09T23:38:56Z
38,865,576
<p>The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from.</p>
1
2016-08-10T06:15:56Z
[ "python", "linux", "yocto", "bitbake", "openembedded" ]
How do I share Protocol Buffer .proto files between multiple repositories
38,862,101
<p>We are considering using <a href="https://developers.google.com/protocol-buffers/" rel="nofollow">Protocol Buffers</a> for communicating between a python &amp; a node.js service that each live in their own repos. </p> <p>Since the <code>.proto</code> files must be accessible to both repos, how should we share the <code>.proto</code> files? </p> <p>We are currently considering:</p> <ol> <li>Creating a repo for all our <code>.proto</code> files, and making it a git subtree of all our services</li> <li>Creating a repo for all our <code>.proto</code> files, publishing both a private python module and private node module on push, and requiring the modules from the respective services</li> <li>Creating a repo for all our <code>.proto</code> files, and specifying the repository as the destination of a <code>pip</code> / <code>npm</code> package</li> </ol> <p>What is the standard way to share <code>.proto</code> files between repositories?</p>
2
2016-08-09T23:41:03Z
38,866,138
<p>This depends on your development process.</p> <p>A git subtree / submodule seems like a sensible solution for most purposes. If you had more downstream projects, publishing a ready-made module would make sense, as then the protobuf generator wouldn't be needed for every project.</p>
2
2016-08-10T06:48:06Z
[ "python", "node.js", "git", "protocol-buffers" ]
How do I share Protocol Buffer .proto files between multiple repositories
38,862,101
<p>We are considering using <a href="https://developers.google.com/protocol-buffers/" rel="nofollow">Protocol Buffers</a> for communicating between a python &amp; a node.js service that each live in their own repos. </p> <p>Since the <code>.proto</code> files must be accessible to both repos, how should we share the <code>.proto</code> files? </p> <p>We are currently considering:</p> <ol> <li>Creating a repo for all our <code>.proto</code> files, and making it a git subtree of all our services</li> <li>Creating a repo for all our <code>.proto</code> files, publishing both a private python module and private node module on push, and requiring the modules from the respective services</li> <li>Creating a repo for all our <code>.proto</code> files, and specifying the repository as the destination of a <code>pip</code> / <code>npm</code> package</li> </ol> <p>What is the standard way to share <code>.proto</code> files between repositories?</p>
2
2016-08-09T23:41:03Z
38,881,998
<p>We were in the same situation and used three repos: the server side was written in C++, the client side in ActionScript 3, and the protobufs were in the third repo, which was used by both of the others. For a big team and a big project, I think it was a good choice.</p>
0
2016-08-10T19:30:06Z
[ "python", "node.js", "git", "protocol-buffers" ]
Read-write lock with only one underlying lock?
38,862,104
<p>I've written a read-write lock using Python's concurrency primitives (I think!). Every implementation I've read on SO or elsewhere seems to use 2 locks -- one for reads, and another for writes. My implementation contains only one monitor for reads, but I may be missing something crucial -- can anyone confirm that this will work? If so, what is the benefit to using an additional write lock?</p> <p>This is the classic read-write lock with preference for readers (may starve writers). I use a dummy cache to demonstrate the reading and writing.</p> <pre class="lang-py prettyprint-override"><code> import threading as t class ReadWriteCache(object): def __init__(self): self.cache = {} self.reads = 0 self.read_cond = t.Condition(t.Lock()) def read(self, key): with self.read_cond: # Register the read, so writes will wait() self.reads += 1 result = self.cache[key] with self.read_cond: self.reads -= 1 if not self.reads: self.read_cond.notify_all() return result def update(self, key, value): with self.read_cond: while self.reads: self.read_cond.wait() # Wait for reads to clear self.cache[key] = value # With read lock, update value </code></pre>
1
2016-08-09T23:41:26Z
38,867,835
<p>You are not using a single lock. <br>You are using a <strong>lock and a condition variable</strong>.</p> <pre><code>self.read_lock = t.Condition(t.Lock())
</code></pre> <p>A condition variable is a concurrency primitive too, and a more complex one than a lock.</p> <p><strong>Note:</strong> please do not call a condition variable object <code>read_lock</code>.</p> <p><strong>Edit:</strong> Your code seems correct to me, as it solves the <a href="https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem" rel="nofollow"><em>First readers-writers problem</em></a>. As you said, it may starve writers. This is not a small issue; the logic behind a reader-writer lock is that there may be many more reads than writes. <br> An additional lock allows solving the <em>Second readers-writers problem</em>, where a writer doesn't starve: readers have to wait when there is a writer waiting for the resource.</p>
1
2016-08-10T08:19:12Z
[ "python", "multithreading", "concurrency", "locking" ]
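The question's cache can be exercised directly. This is a runnable Python 3 restatement of the class above, driven by several concurrent readers; it is a smoke test of the structure under stated assumptions, not a proof of correctness (concurrency bugs rarely show up in a single run):

```python
import threading

class ReadWriteCache:
    """Reader-preference read-write cache, as in the question (Python 3)."""
    def __init__(self):
        self.cache = {}
        self.reads = 0
        self.read_cond = threading.Condition(threading.Lock())

    def read(self, key):
        with self.read_cond:
            self.reads += 1      # register the read, so writes will wait()
        result = self.cache[key]
        with self.read_cond:
            self.reads -= 1
            if not self.reads:
                self.read_cond.notify_all()
        return result

    def update(self, key, value):
        with self.read_cond:
            while self.reads:            # wait for in-flight reads to clear
                self.read_cond.wait()
            self.cache[key] = value      # holding the monitor, update value

cache = ReadWriteCache()
cache.update("x", 1)

results = []
readers = [threading.Thread(target=lambda: results.append(cache.read("x")))
           for _ in range(5)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(results)
```

All five readers can hold the monitor-protected counter concurrently, while `update` blocks until `reads` drops to zero, which is exactly the reader-preference behaviour (and the writer-starvation risk) the answer describes.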
Flask cannot raise HTTP exception after try catching Runtime error
38,862,144
<p>When I try to raise an HTTP exception with status code 400, it only prints the JSON error message in the browser but does not state <code>HTTP/1.1 400 BAD REQUEST</code> in the console like it is supposed to. The exception raising works for all other parts of my program, but it doesn't work when I do it in a try-except for a runtime error.</p>

<p>My exception handler is exactly this: <a href="http://flask.pocoo.org/docs/0.11/patterns/apierrors/" rel="nofollow">http://flask.pocoo.org/docs/0.11/patterns/apierrors/</a></p>

<p>my try-except:</p>

<pre><code>try:
    # run some program
except RuntimeError as e:
    raise InvalidUsage(e.message, status_code=400)
</code></pre>
0
2016-08-09T23:46:21Z
38,863,378
<p>You should use Flask's <code>abort</code> function. Note that <code>abort</code> takes the status code first, then the message:</p> <pre><code>from flask import abort

@app.route("/some_route")
def some_route():
    try:
        pass  # do something
    except SomeException:
        abort(400, "Some message")
</code></pre>
0
2016-08-10T02:34:23Z
[ "python", "flask", "exception-handling" ]
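The `InvalidUsage` pattern from the Flask docs the asker links to is plain Python underneath. This framework-free sketch shows the exception class and a hypothetical stand-in for Flask's error handler (the `handle` function is invented for illustration; it only mimics the (body, status) pair Flask would build):

```python
class InvalidUsage(Exception):
    """API error carrying an HTTP status code, per the Flask docs pattern."""
    status_code = 400

    def __init__(self, message, status_code=None, payload=None):
        super().__init__(message)
        self.message = message
        if status_code is not None:
            self.status_code = status_code
        self.payload = payload

    def to_dict(self):
        rv = dict(self.payload or ())
        rv["message"] = self.message
        return rv

def handle(view):
    # Hypothetical stand-in for an @app.errorhandler(InvalidUsage) setup:
    # return the JSON-able body plus the status code the exception carries.
    try:
        return view(), 200
    except InvalidUsage as e:
        return e.to_dict(), e.status_code

def view():
    raise InvalidUsage("bad input", status_code=400)

body, status = handle(view)
print(body, status)
```

The asker's symptom (body arrives but the status stays 200) typically means the registered handler did not return `e.status_code` as the second element of the response tuple.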
Determine dependencies
38,862,163
<p>I have a large file with several lines, where each line is a <code>&lt;project name&gt;: &lt;dependencies&gt;, &lt;dependencies1&gt;</code> list. </p>

<p><code>
AAA: BBB, CCC, DDD
BBB: EEE, FFF
DDD: CCC
CCC:
GGG: DDD
EEE: FFF
FFF
</code></p>

<p>The projects need to be ordered such that each project's dependencies are met before it is selected. The ordered file would then look like the following. </p>

<p><code>
1. CCC // CCC and FFF since they have no dependencies
1. FFF<br>
2. DDD // since the dependencies (CCC) are met
2. EEE // since the dependencies (FFF) are met
3. GGG // since the dependencies (DDD) are met
4. BBB // since the dependencies (EEE and FFF) are met
5. AAA // since the dependencies (BBB, CCC, DDD) are met
</code></p>

<p>Even better if the output is like the one below:
<code>
1. CCC, FFF // CCC and FFF since they have no dependencies
2. DDD, EEE // since the dependencies (CCC, FFF) are met
3. GGG // since the dependencies (DDD) are met
4. BBB // since the dependencies (EEE and FFF) are met
5. AAA // since the dependencies (BBB, CCC, DDD) are met
</code>
Sorry if this has already been asked/solved earlier. </p>
-1
2016-08-09T23:49:02Z
38,863,172
<p>The trick with this is recursively finding all dependencies in order to establish their priorities. Try the code below, you can simply run it providing the file containing the dependencies:</p> <pre><code>#!/usr/bin/env python2.7 import re import string import sys import unittest from collections import OrderedDict class Solver(object): def __init__(self, input): self.input = input self.pattern = r'(\w+)[\:,]{0,1}\s{0,1}' self.cached = {} def get_dependencies(self, project): """Recursively get a project's dependencies.""" dependencies = set() for line in self.input.split('\n'): matches = re.findall(self.pattern, line) try: if matches[0] == project: dependencies = set(matches[1:]) for dependency in matches[1:]: if dependency not in self.cached: for r_dependency in self.get_dependencies(dependency): dependencies.add(r_dependency) else: for r_dependency in self.cached[dependency]: dependencies.add(r_dependency) except IndexError: pass self.cached[project] = dependencies return dependencies def get_solved_dependencies(self): """ Get all solved dependencies. Least dependant projects are returned first. 
        """
        counted = {}
        for line in self.input.split('\n'):
            matches = re.findall(self.pattern, line)
            try:
                project = matches[0]
                dependencies = self.get_dependencies(project)
                try:
                    counted[len(dependencies)].append(project)
                except KeyError:
                    counted[len(dependencies)] = [project]
            except IndexError:
                pass

        solved = []
        for projects in OrderedDict(sorted(counted.items())).values():
            solved.append(tuple(projects))

        return tuple(solved)


class TestSolver(unittest.TestCase):

    def test_get_dependencies(self):
        input = """
        AAA: BBB, CCC, DDD
        BBB: EEE, FFF
        DDD: CCC
        CCC:
        GGG: DDD
        EEE: FFF
        FFF
        """
        solver = Solver(input=input)
        self.assertEqual(solver.get_dependencies('AAA'), set(['BBB', 'CCC', 'DDD', 'EEE', 'FFF']))
        self.assertEqual(solver.get_dependencies('GGG'), set(['DDD', 'CCC']))
        self.assertEqual(solver.get_dependencies('FFF'), set())

    def test_get_solved_dependencies(self):
        input = """
        A: B, C
        B: C, D
        C: D, E
        D: E, F
        E: F, G
        F: G, H
        G: H, I
        H: I, J
        I: J, K
        J: K, L
        K: L, M
        L: M, N
        M: N, O
        N: O, P
        O: P, Q
        P: Q, R
        Q: R, S
        R: S, T
        S: T, U
        T: U, V
        U: V, W
        V: W, X
        W: X, Y
        X: Y, Z
        Y: Z
        Z:
        """
        solved_dependencies = tuple([tuple(letter) for letter in reversed(string.ascii_uppercase)])
        solver = Solver(input=input)
        self.assertEqual(solver.get_solved_dependencies(), solved_dependencies)


if __name__ == '__main__':
    try:
        input_file = sys.argv[1]
    except IndexError:
        print('You must specify a file listing all dependencies.')
        sys.exit(1)
    try:
        with open(input_file, 'r') as handle:
            input = handle.read()
    except IOError:
        print("File does not exist: {0}".format(input_file))
        sys.exit(1)

    solved_dependencies = Solver(input=input).get_solved_dependencies()
    counter = 1
    for dependencies in solved_dependencies:
        print('{0}. {1}'.format(counter, ', '.join(dependencies)))
        counter += 1
</code></pre> <p>I realise using recursion may raise issues if your dependency matrix is too deep, but I think it should solve your problem.</p> <p>Also, from your examples, it looks like BBB and GGG may be installed at the same time, since both of their dependencies are solved by the time they are reached.</p>
-1
2016-08-10T02:10:58Z
[ "python" ]
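As an alternative to the recursive solver above, the "install rounds" can also be computed iteratively by repeatedly taking every project whose dependencies are already satisfied. A stdlib-only sketch (the graph literal below encodes the question's example input; note that, as the answer points out, BBB and GGG land in the same round because both have their dependencies met by then):

```python
def dependency_levels(deps):
    """deps: {project: set of direct dependencies} -> list of install rounds."""
    deps = {p: set(d) for p, d in deps.items()}  # work on a copy
    done, rounds = set(), []
    while deps:
        # Everything whose remaining dependencies are all already satisfied.
        ready = sorted(p for p, d in deps.items() if d <= done)
        if not ready:
            raise ValueError("circular dependency among %s" % sorted(deps))
        rounds.append(ready)
        done.update(ready)
        for p in ready:
            del deps[p]
    return rounds

graph = {
    "AAA": {"BBB", "CCC", "DDD"},
    "BBB": {"EEE", "FFF"},
    "DDD": {"CCC"},
    "CCC": set(),
    "GGG": {"DDD"},
    "EEE": {"FFF"},
    "FFF": set(),
}

for i, level in enumerate(dependency_levels(graph), 1):
    print("{0}. {1}".format(i, ", ".join(level)))
```

Each pass over the dict is O(n), so the whole thing is O(n²) in the worst case, but it avoids recursion-depth limits on deep dependency chains and detects cycles for free.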