title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Removing some of the duplicates from a list in Python | 38,599,066 | <p>I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list <code>[1,2,3,4,4,4,4,4]</code> and I want to remove 3 of the 4's, so that I am left with <code>[1,2,3,4,4]</code>. A naive way to do it would probably be</p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
</code></pre>
<p>Is there a way to remove the three 4's in one pass through the list, but keep the other two?</p>
| 5 | 2016-07-26T20:13:25Z | 38,599,466 | <p>You can build a list of lists from the set of values and then flatten it. The resulting list will be [1, 2, 3, 4, 4].</p>
<pre><code>x = [1,2,3,4,4,4,4,4]
x2 = [val for sublist in [[item]*max(1, x.count(item)-3) for item in set(x)] for val in sublist]
</code></pre>
<p>As a function you would have the following.</p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    return [val for sublist in [[item]*max(1, remove_from.count(item)-how_many) if item == what else [item]*remove_from.count(item) for item in set(remove_from)] for val in sublist]
</code></pre>
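<p>The nested comprehension above works, but the same result can also be had in a single pass with a plain counter. This is a sketch I am adding for illustration, not part of the original answer; it preserves the original order and only touches the requested value:</p>

```python
def remove_n_duplicates(remove_from, what, how_many):
    """Remove up to `how_many` occurrences of `what` in one pass."""
    kept, to_remove = [], how_many
    for item in remove_from:
        if item == what and to_remove > 0:
            to_remove -= 1  # drop this occurrence
        else:
            kept.append(item)
    return kept

print(remove_n_duplicates([1, 2, 3, 4, 4, 4, 4, 4], 4, 3))  # [1, 2, 3, 4, 4]
```

<p>Unlike the set-based version, this keeps the list in its original order and runs in O(n) regardless of how many distinct values the list contains.</p>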
| 0 | 2016-07-26T20:39:17Z | [
"python",
"list",
"duplicates"
] |
Removing some of the duplicates from a list in Python | 38,599,066 | <p>I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list <code>[1,2,3,4,4,4,4,4]</code> and I want to remove 3 of the 4's, so that I am left with <code>[1,2,3,4,4]</code>. A naive way to do it would probably be</p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
</code></pre>
<p>Is there a way to remove the three 4's in one pass through the list, but keep the other two?</p>
| 5 | 2016-07-26T20:13:25Z | 38,600,022 | <p>Here is another trick which might be useful sometimes. Not to be taken as the recommended recipe. </p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    exec('remove_from.remove(what);'*how_many)
</code></pre>
| -1 | 2016-07-26T21:17:10Z | [
"python",
"list",
"duplicates"
] |
Removing some of the duplicates from a list in Python | 38,599,066 | <p>I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list <code>[1,2,3,4,4,4,4,4]</code> and I want to remove 3 of the 4's, so that I am left with <code>[1,2,3,4,4]</code>. A naive way to do it would probably be</p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
</code></pre>
<p>Is there a way to remove the three 4's in one pass through the list, but keep the other two?</p>
| 5 | 2016-07-26T20:13:25Z | 38,600,220 | <p>If the list is sorted, there's the fast solution:</p>
<pre><code>def remove_n_duplicates(remove_from, what, how_many):
    index = 0
    for i in range(len(remove_from)):
        if remove_from[i] == what:
            index = i
            break
    if index + how_many > len(remove_from):
        #There aren't enough things to remove.
        return
    for i in range(index, index + how_many):
        if remove_from[i] != what:
            #Again, there aren't enough things to remove
            return
    endIndex = index + how_many
    return remove_from[:index] + remove_from[endIndex:]
</code></pre>
<p>Note that this returns a new list, so you want to do <code>arr = remove_n_duplicates(arr, 4, 3)</code></p>
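<p>Since this answer assumes the list is sorted, the run of duplicates can also be located with the standard-library <code>bisect</code> module instead of a manual scan. This is a sketch I am adding for illustration, not part of the original answer:</p>

```python
from bisect import bisect_left, bisect_right

def remove_n_duplicates_sorted(remove_from, what, how_many):
    lo = bisect_left(remove_from, what)   # first index of `what`
    hi = bisect_right(remove_from, what)  # one past the last index of `what`
    if hi - lo < how_many:
        # There aren't enough occurrences to remove; return the list unchanged.
        return remove_from
    return remove_from[:lo] + remove_from[lo + how_many:]

print(remove_n_duplicates_sorted([1, 2, 3, 4, 4, 4, 4, 4], 4, 3))  # [1, 2, 3, 4, 4]
```

<p>On a sorted list this finds the run in O(log n) rather than scanning element by element.</p>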
| 0 | 2016-07-26T21:31:09Z | [
"python",
"list",
"duplicates"
] |
How do Django's user permissions work? | 38,599,174 | <p>I lack a basic understanding of Django's user permission model.
What I basically want is for a user to be able to delete their own account with a button. But I fear granting the user more delete permissions than they should have, such as deleting other users.</p>
<p>So I guess I have to use something like</p>
<pre><code>myuser.user_permissions.add(permission, permission, ...)
</code></pre>
<p>I read <a href="https://docs.djangoproject.com/en/1.9/topics/auth/default/#topic-authorization" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/auth/default/#topic-authorization</a></p>
<p>So do I need to write model methods and grant the user permission to execute them?</p>
| 0 | 2016-07-26T20:19:30Z | 38,599,394 | <p>Just to let a user delete their own account, you don't really need the permission system. You can do it in a view like this:</p>
<pre><code>from django.contrib.auth import logout

def delete_my_account(request):
    user = request.user
    if user.is_authenticated():
        logout(request)
        user.delete()
    # (then redirect, render a template or whatever)
</code></pre>
| 1 | 2016-07-26T20:34:18Z | [
"python",
"django",
"python-3.x",
"permissions"
] |
Changing or Turning off _FillValues | 38,599,252 | <p>I want to either turn off the filling or change the _FillValue to None/NaN in the NetCDF file. How do you do this? I have tried looking it up and nobody talks about it. When I output a variable such as longitude, this is what I get:</p>
<p><strong>float32 lons(lons)
units: degree_east
unlimited dimensions:
current shape = (720,)
filling on, default _FillValue of 9.969209968386869e+36 used</strong></p>
<p>I have also tried masking, but it still gives me the information above.</p>
<p>Here is some code I have: </p>
<pre><code> lati = numpy.arange(-89.75,90.25,0.5)
long = numpy.arange(-179.75,180.25,0.5)
row = 360
column = 720
dataset = netCDF4.Dataset(r'Y://Projects//ToriW//NC Files//April2.nc', 'w', format = 'NETCDF4_CLASSIC')
dataset.missingValue = None
dataset.filling = "off"
dataset.createDimension('lats',row)
dataset.createDimension('lons',column)
lats = dataset.createVariable('lats', 'f4',('lats'))
lats.units = 'degree_north'
lons = dataset.createVariable('lons','f4',('lons'))
lons.units = 'degree_east'
print (lons)
lats[:] = lati
lons[:] = long
Pre = dataset.createVariable ('Pre',numpy.float64, ('lats','lons'))
Pre[:,:] = total
dataset.close()
</code></pre>
| 0 | 2016-07-26T20:25:25Z | 38,685,109 | <p><a href="http://unidata.github.io/netcdf4-python/#netCDF4.Dataset.set_fill_off" rel="nofollow" title="_Fill_Value">_Fill_Value</a> is the proper name for the attribute you want.</p>
<pre><code>dataset.set_fill_off()
</code></pre>
<p>should work.</p>
<pre><code>dataset.setncattr("_FillValue", None)
</code></pre>
<p>might work. I'm not sure, but it will at least change the <code>_FillValue</code> attribute.</p>
| 0 | 2016-07-31T14:44:28Z | [
"python",
"missing-data",
"netcdf"
] |
Iterate through columns to compare each one with a specific column in Python | 38,599,261 | <p>I'm trying to use pandas in python to solve this problem. I have a data frame with nearly 1000 columns. For each column, I'd like to return a boolean value for a mathematical operation - specifically <code>Column A</code> - <code>Column n</code> >= 0.</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 200 300 50
"B" 75 20 74 500
</code></pre>
<p>Let's assume <code>Column A</code> is the column I'd like to use for the comparison. I would like the result to be a data frame that looks like this:</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 False False True
"B" 75 True True False
</code></pre>
<p>Thanks for your help.</p>
| 2 | 2016-07-26T20:25:51Z | 38,599,495 | <p>This should do it:</p>
<pre><code>c = 'Column A'
d = df.set_index('ID')
lt = d.drop(c, axis=1).lt(d[c], axis=0)
pd.concat([d[c], lt], axis=1).reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/bEKvB.png" rel="nofollow"><img src="http://i.stack.imgur.com/bEKvB.png" alt="enter image description here"></a></p>
| 1 | 2016-07-26T20:41:38Z | [
"python",
"loops",
"pandas"
] |
Iterate through columns to compare each one with a specific column in Python | 38,599,261 | <p>I'm trying to use pandas in python to solve this problem. I have a data frame with nearly 1000 columns. For each column, I'd like to return a boolean value for a mathematical operation - specifically <code>Column A</code> - <code>Column n</code> >= 0.</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 200 300 50
"B" 75 20 74 500
</code></pre>
<p>Let's assume <code>Column A</code> is the column I'd like to use for the comparison. I would like the result to be a data frame that looks like this:</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 False False True
"B" 75 True True False
</code></pre>
<p>Thanks for your help.</p>
| 2 | 2016-07-26T20:25:51Z | 38,599,523 | <p>You can apply a <code>lambda</code> function that subtracts each column series from the target column and then tests if the result is greater than or equal to zero (<code>ge(0)</code>).</p>
<pre><code>d = {'Column A': {'A': 100, 'B': 75},
'Column B': {'A': 200, 'B': 20},
'Column C': {'A': 300, 'B': 74},
'Column D': {'A': 50, 'B': 500}}
df = pd.DataFrame(d)
col = "Column A"
other_cols = [c for c in df if c != col]
>>> pd.concat([df[[col]],
...            df[other_cols].apply(lambda series: df[col].sub(series).ge(0))], axis=1)
Column A Column B Column C Column D
ID
A 100 False False True
B 75 True True False
</code></pre>
| 1 | 2016-07-26T20:43:28Z | [
"python",
"loops",
"pandas"
] |
Iterate through columns to compare each one with a specific column in Python | 38,599,261 | <p>I'm trying to use pandas in python to solve this problem. I have a data frame with nearly 1000 columns. For each column, I'd like to return a boolean value for a mathematical operation - specifically <code>Column A</code> - <code>Column n</code> >= 0.</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 200 300 50
"B" 75 20 74 500
</code></pre>
<p>Let's assume <code>Column A</code> is the column I'd like to use for the comparison. I would like the result to be a data frame that looks like this:</p>
<pre><code>"ID" "Column A" "Column B" "Column C" "Column D"
"A" 100 False False True
"B" 75 True True False
</code></pre>
<p>Thanks for your help.</p>
| 2 | 2016-07-26T20:25:51Z | 38,602,339 | <pre><code>df = df.set_index("ID")
dd = df.apply(lambda x: x.lt(df["Column A"]))
dd["Column A"] = df["Column A"]
dd
Column A Column B Column C Column D
ID
A 100 False False True
B 75 True True False
</code></pre>
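<p>Since the question asks for <code>Column A</code> - <code>Column n</code> >= 0 across nearly 1000 columns, a fully vectorised comparison (using <code>le</code>, so that ties count as <code>True</code>) avoids the per-column <code>apply</code>. This is a sketch I am adding for illustration, assuming pandas is available; it is not part of the original answers:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": ["A", "B"],
                   "Column A": [100, 75],
                   "Column B": [200, 20],
                   "Column C": [300, 74],
                   "Column D": [50, 500]})

d = df.set_index("ID")
# Column A - Column n >= 0  is equivalent to  Column n <= Column A
result = d.drop(columns="Column A").le(d["Column A"], axis=0)
result.insert(0, "Column A", d["Column A"])  # keep the reference column up front
print(result)
```

<p>With a single broadcasted comparison per column, this scales well to the ~1000 columns mentioned in the question.</p>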
| 0 | 2016-07-27T01:37:08Z | [
"python",
"loops",
"pandas"
] |
When making a program that draws forever, how do I keep the drawings on screen? | 38,599,295 | <p>It's a fairly simple program I made in my free time, but since it's random, the pointer always has a chance of going off screen. I've watched my child use Scratch before, and their drawing sprites can bounce back in if they hit the edge, so they are never off screen. I am looking for code like that, but for Python. </p>
<pre><code>from turtle import *
from random import *

while True:
    forward(randint(1, 360))
    right(randint(1, 360))
</code></pre>
| 0 | 2016-07-26T20:28:25Z | 38,600,243 | <pre><code>from turtle import *
from random import*
while True:
forward(randint(1, 360))
right(randint(1, 360))
if xcor() < 0:
setx(20)
if ycor() < 0:
sety(20)
if xcor() > window_width():
setx(window_width()-20)
if ycor() > window_height():
sety(window_height()-20)
</code></pre>
<p>Should work, where 20 is up to your choice.</p>
<p>Edit: The Solutions in one of the duplicate posts are probably better</p>
| 0 | 2016-07-26T21:32:16Z | [
"python",
"turtle-graphics"
] |
Accessing a matrix within a .mat file with Python | 38,599,329 | <p>I'm translating MATLAB code to Python. I have a few matrices within a .mat file called 'AK_1'. I only want to access the data in one of these matrices. The MATLAB code accesses it this way, where .response1 is the desired matrix:</p>
<blockquote>
<p>numtrials1 = subject_data1.response1(1,:);</p>
</blockquote>
<p>I tried loading all the data into a dict so I could then loop through it to the desired matrix, but that did not produce a workable result.</p>
<blockquote>
<p>subject_data1_dict = {}</p>
<p>subject_data1 = scipy.io.loadmat('./MAT_Data_Full_AAAD_V2/AK_1.mat', subject_data1_dict)</p>
</blockquote>
<p>How can I access only the matrix 'response1' within the file AK_1.mat?</p>
| 0 | 2016-07-26T20:30:45Z | 38,599,482 | <p>create and save a structure containing 3 matrices in matlab:</p>
<pre><code>a = 1:5
b.aa = a
b.bb = a
b.cc = a
save('struct.mat', 'b')
</code></pre>
<p>Load the .mat file in Python:</p>
<pre><code>from scipy.io import loadmat
matfile = loadmat('d:/struct.mat')
</code></pre>
<p>You can now access, for example, b.aa and b.bb via:</p>
<pre><code>matfile[('b')][0][0][0]
matfile[('b')][0][0][1]
</code></pre>
<p>Is that what you wanted?</p>
| 0 | 2016-07-26T20:40:15Z | [
"python",
"matlab",
"matrix"
] |
Accessing a matrix within a .mat file with Python | 38,599,329 | <p>I'm translating MATLAB code to Python. I have a few matrices within a .mat file called 'AK_1'. I only want to access the data in one of these matrices. The MATLAB code accesses it this way, where .response1 is the desired matrix:</p>
<blockquote>
<p>numtrials1 = subject_data1.response1(1,:);</p>
</blockquote>
<p>I tried loading all the data into a dict so I could then loop through it to the desired matrix, but that did not produce a workable result.</p>
<blockquote>
<p>subject_data1_dict = {}</p>
<p>subject_data1 = scipy.io.loadmat('./MAT_Data_Full_AAAD_V2/AK_1.mat', subject_data1_dict)</p>
</blockquote>
<p>How can I access only the matrix 'response1' within the file AK_1.mat?</p>
| 0 | 2016-07-26T20:30:45Z | 38,600,257 | <p>Say you have a <code>myfile.mat</code> with the following struct S:</p>
<pre><code>S =
response1: [5x5 double]
response2: [5x5 double]
response3: [5x5 double]
</code></pre>
<p>And you want to access <code>response1</code> from python. Then:</p>
<pre><code>>>> from scipy.io import loadmat
>>> D = loadmat("myfile.mat", variable_names = ("S",) )
>>> D["S"]["response1"] # returns matlab's S.response1
</code></pre>
<p>If you wanted to select more variables contained in the file than just S, you just add them in the tuple, i.e. <code>variable_names=("S","otherVar")</code> </p>
<p>Obviously, if all you're interested in is the <code>response1</code> array, you can bypass collecting the dictionary altogether, i.e.:</p>
<pre><code>>>> response1 = loadmat("myfile.mat", variable_names = ("S",) )["S"]["response1"]
>>> response1
array([[ array([[ 9, 1, 2, 2, 7],
[10, 3, 10, 5, 1],
[ 2, 6, 10, 10, 9],
[10, 10, 5, 8, 10],
[ 7, 10, 9, 10, 7]], dtype=uint8)]], dtype=object)
</code></pre>
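<p>As a side note (these <code>loadmat</code> options are my addition, not something the answer used): passing <code>squeeze_me=True</code> and <code>struct_as_record=False</code> makes struct fields reachable with attribute access, much closer to the MATLAB <code>S.response1</code> syntax. The round-trip below uses <code>savemat</code> to build a small stand-in file to read back:</p>

```python
import os
import tempfile

import numpy as np
from scipy.io import savemat, loadmat

path = os.path.join(tempfile.gettempdir(), "struct_demo.mat")

# Build a struct-like entry and write it out (a stand-in for myfile.mat).
savemat(path, {"S": {"response1": np.arange(5)}})

D = loadmat(path, squeeze_me=True, struct_as_record=False)
print(D["S"].response1)   # MATLAB-style attribute access: S.response1
```

<p><code>squeeze_me</code> collapses the 1x1 wrapper arrays MATLAB structs come back in, and <code>struct_as_record=False</code> returns objects whose fields are plain attributes.</p>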
| 1 | 2016-07-26T21:33:29Z | [
"python",
"matlab",
"matrix"
] |
How to set SAS program macro variables through a python script | 38,599,404 | <p>I have multiple SAS program files, each of which has various macro variables which need to be set dynamically from an Excel file. I was wondering if I can pass the values from the Excel file to the SAS program through a Python script (or shell script). I wish to automate the process of setting parameters for each SAS program, rather than doing it manually. </p>
<p>Please suggest.</p>
| 0 | 2016-07-26T20:35:25Z | 38,734,883 | <p>You can pass a string to SAS using the <code>-sysparm</code> command line option, and this string will be available in SAS as the <code>&sysparm</code> automatic variable.</p>
<p>E.g. (from the command line:)</p>
<pre>sas myprogram.sas -sysparm myparam</pre>
<p>If you need to parse the string inside SAS, the <code>%scan()</code> macro function will probably be useful.</p>
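<p>Tying this back to the question's Python angle: the command line above can be assembled from Python with the standard <code>subprocess</code> module, packing the Excel-derived values into the single <code>-sysparm</code> string. This is a sketch of my own; the program name, the <code>|</code> delimiter, and the parameter values are all assumptions:</p>

```python
import subprocess

def build_sas_command(program, params, sas_exe="sas"):
    """Pack parameter values into one -sysparm string, '|'-delimited,
    so they can be split apart inside SAS with %scan(&sysparm, n, |)."""
    sysparm = "|".join(str(p) for p in params)
    return [sas_exe, program, "-sysparm", sysparm]

cmd = build_sas_command("myprogram.sas", ["2016-02-20", "2016-02-25", 42])
print(cmd)  # ['sas', 'myprogram.sas', '-sysparm', '2016-02-20|2016-02-25|42']
# subprocess.call(cmd)  # would actually launch SAS; left commented out here
```

<p>The values themselves could come from reading the Excel file first; only the final list needs to be handed to <code>subprocess.call</code>.</p>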
| 0 | 2016-08-03T05:13:43Z | [
"python",
"parameters",
"automation",
"sas"
] |
How to set SAS program macro variables through a python script | 38,599,404 | <p>I have multiple SAS program files, each of which has various macro variables which need to be set dynamically from an Excel file. I was wondering if I can pass the values from the Excel file to the SAS program through a Python script (or shell script). I wish to automate the process of setting parameters for each SAS program, rather than doing it manually. </p>
<p>Please suggest.</p>
| 0 | 2016-07-26T20:35:25Z | 40,027,782 | <p>So I did figure out how I can solve this:</p>
<pre><code> subprocess.call(['C:\\Program Files\\SASHome\\SASFoundation\\9.4\\sas.exe', '-config', config_path, '-autoexec', autoexec_path,'-sasinitialfolder',sasinitialfolder,'-sysin', sas_script_path,'-log',ori_log])
</code></pre>
<p>I made a subprocess call from Python and send different parameters to it depending on the SAS program I am calling. The parameters passed from the subprocess call are captured in the SAS program through a %sysget command. Thank you.</p>
| 0 | 2016-10-13T17:52:11Z | [
"python",
"parameters",
"automation",
"sas"
] |
An equivalent to R's %in% operator in Python | 38,599,455 | <p>I'm starting to learn a bit more about Python. One function I often want but don't know how to program/do in Python is the <code>x %in% y</code> operator in R. It works like so:</p>
<pre><code>1:3 %in% 2:4
##[1] FALSE TRUE TRUE
</code></pre>
<p>The first element in <code>x</code> (1) has no match in <code>y</code>, so it is <code>FALSE</code>, whereas the last two elements of <code>x</code> (2 & 3) do have a match in <code>y</code>, so they get <code>TRUE</code>.</p>
<p>Not that I expected this to work, as it wouldn't have worked in R either, but using <code>==</code> in Python I get:</p>
<pre><code>[1, 2, 3] == [2, 3, 4]
# False
</code></pre>
<p>How could I get the same type of operator in Python? I'm not tied to base Python if such an operator already exists elsewhere.</p>
| 2 | 2016-07-26T20:38:46Z | 38,599,487 | <p><code>%in%</code> is simply <code>in</code> in Python. But, like most other things in Python outside numpy, it's not vectorised:</p>
<pre><code>In [1]: 1 in [1, 2, 3]
Out[1]: True
In [2]: [1, 2] in [1, 2, 3]
Out[2]: False
</code></pre>
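<p>For a genuinely vectorised, R-like elementwise version, numpy provides this directly. This is an addition of mine to the answer above, assuming numpy is available (<code>np.isin</code> is the current name; older numpy spells it <code>np.in1d</code>):</p>

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([2, 3, 4])

# Elementwise membership test, like R's x %in% y
print(np.isin(x, y))  # a boolean array: False, True, True
```

<p>pandas offers the same behaviour via <code>pd.Series(x).isin(y)</code>.</p>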
| 1 | 2016-07-26T20:40:48Z | [
"python"
] |
An equivalent to R's %in% operator in Python | 38,599,455 | <p>I'm starting to learn a bit more about Python. One function I often want but don't know how to program/do in Python is the <code>x %in% y</code> operator in R. It works like so:</p>
<pre><code>1:3 %in% 2:4
##[1] FALSE TRUE TRUE
</code></pre>
<p>The first element in <code>x</code> (1) has no match in <code>y</code>, so it is <code>FALSE</code>, whereas the last two elements of <code>x</code> (2 & 3) do have a match in <code>y</code>, so they get <code>TRUE</code>.</p>
<p>Not that I expected this to work, as it wouldn't have worked in R either, but using <code>==</code> in Python I get:</p>
<pre><code>[1, 2, 3] == [2, 3, 4]
# False
</code></pre>
<p>How could I get the same type of operator in Python? I'm not tied to base Python if such an operator already exists elsewhere.</p>
| 2 | 2016-07-26T20:38:46Z | 38,599,505 | <p>It's actually very simple. </p>
<pre><code>if/for element in list-like object.
</code></pre>
<p>Specifically, when iterating over a list it will compare against every element in that list. When using a dict as the iterable, it will iterate over the dict's keys. </p>
<p>Cheers </p>
<p>Edit:
For your case you should make use of "List Comprehensions". </p>
<pre><code>[element in list2 for element in list1]
</code></pre>
| 2 | 2016-07-26T20:42:13Z | [
"python"
] |
while loop not working in tkinter animation | 38,599,605 | <p>I am trying to create an animation of a turning wheel and want to have a small delay in a while loop, then update the wheel each time. I have tried both the "after" function in tkinter and the "sleep" function in Python, but it either crashes or finishes the computation and only shows me the final position, without the actual animation of the wheel turning. </p>
<p>The function I created for the turning wheel:</p>
<pre><code>def turning():
    #initial wheel position
    global position
    pos(position)
    #infinite loop turning the wheel
    while(1):
        root.after(1000, spin)

def spin():
    global position
    global speed
    delspike()         #delete current wheel
    position += speed  #calculate next position
    if position > 360:
        position -= 360
    pos(position)      #draw new wheel
</code></pre>
<p>Why is this not working?</p>
| 0 | 2016-07-26T20:48:19Z | 38,599,751 | <p>This code:</p>
<pre><code>while (1):
    root.after(1000, spin)
</code></pre>
<p>.. is going to schedule the <code>spin</code> function to run in 1 second. And it's going to do that thousands of times in the blink of an eye. Even though you're asking <code>spin</code> to run in one second, the while loop itself is going to run as fast as it possibly can and never stop. It will schedule hundreds of thousands of spins before the first spin, and then they will all run one after the other since they are all going to try to run in one second. </p>
<p>The correct way to do animation is to have the function schedule itself again in one second:</p>
<pre><code>def spin():
    ...
    root.after(1000, spin)
</code></pre>
<p>Then, you call <code>spin</code> exactly once at the start of your program, and it runs indefinitely.</p>
| 0 | 2016-07-26T20:56:59Z | [
"python",
"loops",
"animation",
"tkinter",
"delay"
] |
while loop not working in tkinter animation | 38,599,605 | <p>I am trying to create an animation of a turning wheel and want to have a small delay in a while loop, then update the wheel each time. I have tried both the "after" function in tkinter and the "sleep" function in Python, but it either crashes or finishes the computation and only shows me the final position, without the actual animation of the wheel turning. </p>
<p>The function I created for the turning wheel:</p>
<pre><code>def turning():
    #initial wheel position
    global position
    pos(position)
    #infinite loop turning the wheel
    while(1):
        root.after(1000, spin)

def spin():
    global position
    global speed
    delspike()         #delete current wheel
    position += speed  #calculate next position
    if position > 360:
        position -= 360
    pos(position)      #draw new wheel
</code></pre>
<p>Why is this not working?</p>
| 0 | 2016-07-26T20:48:19Z | 38,599,822 | <p><code>after</code> is used inside the function you are trying to put in a loop:</p>
<pre><code>def f():
    ...
    root.after(1000, f)
</code></pre>
<p>(1000 means 1000 milliseconds, i.e. 1 second, so the program will perform the operation every second; you can change it to any number you wish.)
Also, keep in mind that using an infinite <code>while</code> loop (<code>while True</code>, <code>while 1</code>, etc.) in Tkinter will make the window <b>unresponsive</b>. We have discussed this here a lot; you could have found this by searching <a href="http://stackoverflow.com/search?q=tkinter+while+loop">SO</a>.</p>
| 0 | 2016-07-26T21:02:28Z | [
"python",
"loops",
"animation",
"tkinter",
"delay"
] |
while loop not working in tkinter animation | 38,599,605 | <p>I am trying to create an animation of a turning wheel and want to have a small delay in a while loop, then update the wheel each time. I have tried both the "after" function in tkinter and the "sleep" function in Python, but it either crashes or finishes the computation and only shows me the final position, without the actual animation of the wheel turning. </p>
<p>The function I created for the turning wheel:</p>
<pre><code>def turning():
    #initial wheel position
    global position
    pos(position)
    #infinite loop turning the wheel
    while(1):
        root.after(1000, spin)

def spin():
    global position
    global speed
    delspike()         #delete current wheel
    position += speed  #calculate next position
    if position > 360:
        position -= 360
    pos(position)      #draw new wheel
</code></pre>
<p>Why is this not working?</p>
| 0 | 2016-07-26T20:48:19Z | 38,604,020 | <p>I have noticed that many beginners 'who don't know better' try to animate tkinter with a while loop. It turns out that this is not a silly idea. I recently figured out how to make this work using asyncio and the new in 3.5 async-await syntax. I also worked out a generic template for a simple app. I happen to like this style more than using after loops. If you have 3.5.2 or 3.6.0a3 installed (or install either), you can run this code and replace my rotator with yours.</p>
<pre><code>import asyncio
import tkinter as tk


class App(tk.Tk):
    def __init__(self, loop, interval=1/120):
        super().__init__()
        self.loop = loop
        self.protocol("WM_DELETE_WINDOW", self.close)
        self.tasks = []
        self.tasks.append(loop.create_task(
            self.rotator(1/60, 1)))
        self.updater(interval)

    async def rotator(self, interval, d_per_int):
        canvas = tk.Canvas(self, height=600, width=600)
        canvas.pack()
        deg = 0
        arc = canvas.create_arc(100, 100, 500, 500,
                                start=0, extent=deg, fill='blue')
        while True:
            await asyncio.sleep(interval)
            deg = (deg + d_per_int) % 360
            canvas.itemconfigure(arc, extent=deg)

    def updater(self, interval):
        self.update()
        self.loop.call_later(interval, self.updater, interval)

    def close(self):
        for task in self.tasks:
            task.cancel()
        self.loop.stop()
        self.destroy()


loop = asyncio.get_event_loop()
app = App(loop)
loop.run_forever()
loop.close()
</code></pre>
| 0 | 2016-07-27T04:55:38Z | [
"python",
"loops",
"animation",
"tkinter",
"delay"
] |
Parallel calls of sleep don't add up | 38,599,607 | <p>Using the subprocess module, I'm running 1000 calls to sleep(1) in parallel:</p>
<pre><code>import subprocess
import time
start = time.clock()
procs = []
for _ in range(1000):
    proc = subprocess.Popen(["sleep.exe", "1"])
    procs.append(proc)

for proc in procs:
    proc.communicate()
end = time.clock()
print("Executed in %.2f seconds" % (end - start))
</code></pre>
<p>On my 4-core machine, this results in an execution time of a couple of seconds, far less than I expected (~ 1000s / 4).</p>
<p>How does it get optimized away? Does it depend on the sleep implementation (this one is taken from the Windows-Git-executables)?</p>
| 0 | 2016-07-26T20:48:30Z | 38,599,670 | <p>This is because, <code>subprocess.Popen(..)</code> is <strong>not a blocking call</strong>. The thread just triggers the child process creation and moves on. It does not wait for it to finish.</p>
<p>In other words, you are spawning 1000 asynchronous processes in a loop, and then waiting on them one by one later on. This asynchronous behavior results in your overall run time of a few seconds. </p>
<hr>
<p>Calling <code>proc.communicate()</code> waits until the child process is complete (has exited). Now, if you want the sleep times to add up (minus the process creation/destruction) overhead, you'd do:</p>
<pre><code>import subprocess
import time
start = time.clock()
procs = []
#Get the start time
for _ in range(10):
    proc = subprocess.Popen(["sleep.exe", "1"])
    procs.append(proc)
    proc.communicate()
#Get the end time
</code></pre>
<hr>
<blockquote>
<p>Does it depend on the sleep implementation (this one is taken from the Windows-Git-executables)?</p>
</blockquote>
<p>As I've outlined above, this has nothing to do with implementation of sleep.</p>
| 1 | 2016-07-26T20:51:49Z | [
"python",
"multithreading",
"optimization"
] |
Parallel calls of sleep don't add up | 38,599,607 | <p>Using the subprocess module, I'm running 1000 calls to sleep(1) in parallel:</p>
<pre><code>import subprocess
import time
start = time.clock()
procs = []
for _ in range(1000):
    proc = subprocess.Popen(["sleep.exe", "1"])
    procs.append(proc)

for proc in procs:
    proc.communicate()
end = time.clock()
print("Executed in %.2f seconds" % (end - start))
</code></pre>
<p>On my 4-core machine, this results in an execution time of a couple of seconds, far less than I expected (~ 1000s / 4).</p>
<p>How does it get optimized away? Does it depend on the sleep implementation (this one is taken from the Windows-Git-executables)?</p>
| 0 | 2016-07-26T20:48:30Z | 38,599,715 | <p>Sleeping doesn't require any processor time, so your OS can run far more than 4 sleep requests at a time, even though it has only 4 cores. Ideally it would be able to process the entire batch of 1000 in only 1 second, but there's lots of overhead in the creation and teardown of the individual processes.</p>
| 2 | 2016-07-26T20:54:55Z | [
"python",
"multithreading",
"optimization"
] |
Why is my code looping at just one line in the while loop instead of over the whole block? | 38,599,634 | <p>Sorry for the unsophisticated question title but I need help desperately: </p>
<p>My objective at work is to create a script that pulls all the records from the ExactTarget Salesforce Marketing Cloud API. I have successfully set up the API calls and successfully imported the data into DataFrames. </p>
<p><strong>The problem I am running into is two-fold that I need to keep pulling records till "Results_Message" in my code stops reading "MoreDataAvailable" and I need to setup logic which allows me to control the date from either within the API call or from parsing the DataFrame.</strong> </p>
<p>My code is getting stuck at line 44 where "print Results_Message" is looping around the string "MoreDataAvailable"</p>
<p>Here is my code so far; on lines 94 and 95 you will see my attempt at parsing the date directly from the DataFrame, but no luck, and likewise no luck on line 32 where I have specified the date:</p>
<pre><code>import ET_Client
import pandas as pd

AggreateDF = pd.DataFrame()
Data_Aggregator = pd.DataFrame()
#Start_Date = "2016-02-20"
#End_Date = "2016-02-25"
#retrieveDate = '2016-07-25T13:00:00.000'
Export_Dir = 'C:/temp/'

try:
    debug = False
    stubObj = ET_Client.ET_Client(False, debug)

    print '>>>BounceEvents'
    getBounceEvent = ET_Client.ET_BounceEvent()
    getBounceEvent.auth_stub = stubObj
    getBounceEvent.search_filter = {'Property' : 'EventDate','SimpleOperator' : 'greaterThan','Value' : '2016-02-22T13:00:00.000'}
    getResponse1 = getBounceEvent.get()
    ResponseResultsBounces = getResponse1.results
    Results_Message = getResponse1.message
    print(Results_Message)
    #EventDate = "2016-05-09"
    print "This is orginial " + str(Results_Message)
    #print ResponseResultsBounces

    i = 1
    while (Results_Message == 'MoreDataAvailable'):
        #if i > 5: break
        print Results_Message
        results1 = getResponse1.results
        #print(results1)
        i = i + 1
        ClientIDBounces = []
        partner_keys1 = []
        created_dates1 = []
        modified_date1 = []
        ID1 = []
        ObjectID1 = []
        SendID1 = []
        SubscriberKey1 = []
        EventDate1 = []
        EventType1 = []
        TriggeredSendDefinitionObjectID1 = []
        BatchID1 = []
        SMTPCode = []
        BounceCategory = []
        SMTPReason = []
        BounceType = []
        for BounceEvent in ResponseResultsBounces:
            ClientIDBounces.append(str(BounceEvent['Client']['ID']))
            partner_keys1.append(BounceEvent['PartnerKey'])
            created_dates1.append(BounceEvent['CreatedDate'])
            modified_date1.append(BounceEvent['ModifiedDate'])
            ID1.append(BounceEvent['ID'])
            ObjectID1.append(BounceEvent['ObjectID'])
            SendID1.append(BounceEvent['SendID'])
            SubscriberKey1.append(BounceEvent['SubscriberKey'])
            EventDate1.append(BounceEvent['EventDate'])
            EventType1.append(BounceEvent['EventType'])
            TriggeredSendDefinitionObjectID1.append(BounceEvent['TriggeredSendDefinitionObjectID'])
            BatchID1.append(BounceEvent['BatchID'])
            SMTPCode.append(BounceEvent['SMTPCode'])
            BounceCategory.append(BounceEvent['BounceCategory'])
            SMTPReason.append(BounceEvent['SMTPReason'])
            BounceType.append(BounceEvent['BounceType'])
        df1 = pd.DataFrame({'ClientID': ClientIDBounces, 'PartnerKey': partner_keys1,
                            'CreatedDate' : created_dates1, 'ModifiedDate': modified_date1,
                            'ID':ID1, 'ObjectID': ObjectID1,'SendID':SendID1,'SubscriberKey':SubscriberKey1,
                            'EventDate':EventDate1,'EventType':EventType1,'TriggeredSendDefinitionObjectID':TriggeredSendDefinitionObjectID1,
                            'BatchID':BatchID1,'SMTPCode':SMTPCode,'BounceCategory':BounceCategory,'SMTPReason':SMTPReason,'BounceType':BounceType})
        #print df1
        #df1 = df1[(df1.EventDate > "2016-02-20") & (df1.EventDate < "2016-02-25")]
        #AggreateDF = AggreateDF[(AggreateDF.EventDate > Start_Date) and (AggreateDF.EventDate < End_Date)]
        print(df1['ID'].max())
        AggreateDF = AggreateDF.append(df1)
        print(AggreateDF.shape)

    #df1 = df1[(df1.EventDate > "2016-02-20") and (df1.EventDate < "2016-03-25")]
    #AggreateDF = AggreateDF[(AggreateDF.EventDate > Start_Date) and (AggreateDF.EventDate < End_Date)]
    print("Final Aggregate DF is: " + str(AggreateDF.shape))

    #EXPORT TO CSV
    AggreateDF.to_csv(Export_Dir +'DataTest1.csv')
    #with pd.option_context('display.max_rows',10000):
    #print (df_masked1.shape)
    #print df_masked1

except Exception as e:
    print 'Caught exception: ' + str(e.message)
    print e
</code></pre>
<p>Before my code parses the data, the original format I get of the data is a SOAP response, which looks like this (below). <strong>Is it possible to directly parse records based on EventDate from the SOAP response?</strong> </p>
<pre><code>}, (BounceEvent){
Client =
(ClientID){
ID = 1111111
}
PartnerKey = None
CreatedDate = 2016-05-12 07:32:20.000937
ModifiedDate = 2016-05-12 07:32:20.000937
ID = 1111111
ObjectID = "1111111"
SendID = 1111111
SubscriberKey = "aaa@aaaa.com"
EventDate = 2016-05-12 07:32:20.000937
EventType = "HardBounce"
TriggeredSendDefinitionObjectID = "aa111aaa"
BatchID = 1111111
SMTPCode = "1111111"
BounceCategory = "Hard bounce - User Unknown"
SMTPReason = "aaaa"
BounceType = "immediate"
</code></pre>
<p>Hope this makes sense; this is my desperate plea for help. </p>
<p>Thank you in advance!</p>
| 0 | 2016-07-26T20:49:43Z | 38,599,896 | <p>You don't seem to be updating <code>Results_Message</code> in your loop, so it's always going to have the value it gets in line 29: <code>Results_Message = getResponse1.message</code>. Unless there's code involved that you didn't share, that is.</p>
| 0 | 2016-07-26T21:07:44Z | [
"python",
"parsing",
"soap",
"dataframe"
] |
Exception printing only one word | 38,599,734 | <p>I have part of a class that looks like this:</p>
<pre><code>def set_new_mode(self,mode):
try:
#this will fail, since self.keithley is never initialized
print self.keithley
self.keithley.setzerocheck(on=True)
self.keithley.selectmode(mode,nplc=6)
self.keithley.setzerocheck(on=False) #keithcontrol class will
#automatically turn on zero correction when zchk is disabled
self.mode = mode
self.print_to_log('\nMode set to %s' % self.mode)
except Exception as e:
self.print_to_log('\nERROR:set_new_mode: %s' % e)
print e
</code></pre>
<p>As part of some testing of error handling, I've tried calling the <code>set_new_mode</code> function without first initializing the class variable <code>self.keithley</code>. In this case, I would expect that the <code>print self.keithley</code> statement would raise an <code>AttributeError: keithgui instance has no attribute 'keithley'</code>. However, the <code>print e</code> and <code>self.print_to_log('\nERROR:set_new_mode: %s' % e)</code> indicate that <code>e</code> contains only the word "keithley".</p>
<p>Changing <code>print e</code> to <code>print type(e)</code> reveals that <code>e</code> still has the type AttributeError, but the variable no longer contains any useful information about the exception. Why? And how do I return <code>e</code> to its expected form?</p>
<p>Edit: Here is an MWE (minimal working example) to reproduce the error. <strong>To reproduce the error, start the GUI, change the mode to something other than VOLT and click the update button.</strong></p>
<pre><code>import Tkinter
import numpy as np
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
class keithgui(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self,parent)
self.parent = parent
self.initialize()
def initialize(self):
#we are not initially connected to the keithley
self.connected = False
self.pauseupdate = False
#set up frames to distribute widgets
#MASTER FRAME
self.mframe = Tkinter.Frame(self,bg='green')
self.mframe.pack(side=Tkinter.TOP,fill='both',expand=True)
#LEFT AND RIGHT FRAMES
self.Lframe = Tkinter.Frame(self.mframe,bg='red',borderwidth=2,relief='raised')
self.Lframe.pack(side='left',fill='both',expand=True)
self.Rframe = Tkinter.Frame(self.mframe,bg='blue',borderwidth=2,relief='raised')
self.Rframe.pack(side='right',fill='both',expand=False)
#create the log text widget to keep track of what we did last
#also give it a scrollbar...
scrollbar = Tkinter.Scrollbar(master=self.Lframe)
scrollbar.pack(side=Tkinter.RIGHT,anchor='n')
self.logtext = Tkinter.Text(master=self.Lframe,height=3,yscrollcommand=scrollbar.set)
scrollbar.config(command=self.logtext.yview)
self.logtext.pack(side=Tkinter.TOP,anchor='w',fill='both')
#Button to update all settings
updatebutton = Tkinter.Button(master=self.Rframe,text='Update',command=self.update_all_params)
updatebutton.grid(column=2,row=0)
#Option menu & label to select mode of the Keithley
modes = ['VOLT','CHAR','CURR']
modelabel = Tkinter.Label(master=self.Rframe,text='Select Mode:')
modelabel.grid(column=0,row=2,sticky='W')
self.mode = 'VOLT'
self.modevar = Tkinter.StringVar()
self.modevar.set(self.mode)
modeselectmenu = Tkinter.OptionMenu(self.Rframe,self.modevar,*modes)
modeselectmenu.grid(column=1,row=2,sticky='W')
def print_to_log(self,text,loc=Tkinter.END):
self.logtext.insert(loc,text)
self.logtext.see(Tkinter.END)
def update_all_params(self):
self.set_refresh_rate()
if self.modevar.get() != self.mode:
self.set_new_mode(self.modevar.get())
else:
self.print_to_log('\nAlready in mode %s' % self.mode)
def set_refresh_rate(self):
try:
self.refreshrate = np.float(self.refreshrateentryvar.get())
self.print_to_log('\nRefresh rate set to %06.3fs' % self.refreshrate)
except Exception as e:
self.print_to_log('\nERROR:set_referesh_rate: %s' % e)
def set_new_mode(self,mode):
try:
print self.keithley
self.keithley.setzerocheck(on=True)
self.keithley.selectmode(mode,nplc=6)
self.keithley.setzerocheck(on=False) #keithcontrol class will
#automatically turn on zero correction when zchk is disabled
self.mode = mode
self.print_to_log('\nMode set to %s' % self.mode)
except Exception as e:
self.print_to_log('\nERROR:set_new_mode: %s' % e)
print e
print type(e)
if __name__ == "__main__":
app = keithgui(None)
app.title('Keithley GUI')
app.mainloop()
</code></pre>
| 1 | 2016-07-26T20:55:57Z | 38,599,937 | <p>If you modify your code:</p>
<pre><code>import Tkinter as tk
class Fnord(tk.Tk):
def set_new_mode(self,mode):
try:
import pdb; pdb.set_trace()
#this will fail, since self.keithley is never initialized
print self.keithley
Fnord().set_new_mode('whatever')
</code></pre>
<p>And then start stepping through with <code>s</code>, you'll see that there's a <code>__getattr__</code> function on your window. I'm looking through to see what causes the problem now, but that's effectively gonna be your answer.</p>
<hr>
<p>Following the call stack, it led me to a call <code>self.tk = _tkinter.create</code>, which eventually led me get <a href="https://github.com/python/cpython/blob/eb8294f4412f3488ae4804d67ae5bda3031306ac/Modules/clinic/_tkinter.c.h#L555" rel="nofollow">here</a>. Ultimately what this boils down to is that the exception is happening in C-territory, so it's producing a different <code>AttributeError</code> message.</p>
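<p>A minimal sketch (hypothetical class name) of how such a <code>__getattr__</code> hook can produce an <code>AttributeError</code> whose message is only the attribute name:</p>

```python
class Widget(object):
    """Stand-in for a Tkinter widget whose attribute lookups are delegated."""
    def __getattr__(self, name):
        # Tkinter's Misc.__getattr__ forwards unknown names to the C-level
        # tkapp object, whose getattr raises with only the bare name as the
        # message; mimicked here by raising AttributeError(name) directly.
        raise AttributeError(name)

try:
    Widget().keithley
except AttributeError as e:
    print(str(e))  # keithley
```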
| 1 | 2016-07-26T21:10:52Z | [
"python",
"exception"
] |
python dictionary to csv where each key is in seperate row and value in separate columns | 38,599,741 | <p>I am having trouble trying to output my dictionary to CSV file. I have a Dictionary that contains as keys time dates and as values, companies pertaining to those dates which are in string format. I tried looking through the website for identical question, but it doesn't really help my case. I tried the following code and managed to get the key in first row, values in second column but thats not what i want. </p>
<pre><code>import csv
with open('dict1.csv','w') as f:
w = csv.writer(f,delimiter=',')
for key,values in sorted(a.items()):
w.writerow([key,values])
</code></pre>
<p>But this gives me a CSV file in following format:</p>
<pre><code>2009/01/02 ['AA' 'BB' 'AAPL'] etc
2009/01/03 ['AA' 'CC' 'DD' 'FF']
</code></pre>
<p>Hence I only have two columns. But I want:</p>
<pre><code>2009/01/02 'AA' 'BB' 'AAPL'
2009/01/02 'AA' 'CC' 'DD' 'FF'
</code></pre>
<p>in 4 separate columns for first row and 5 for the second row respectively.<br>
I even tried </p>
<pre><code>for dates in sorted(a):
w.writerow([date] + my_dict[date] )
</code></pre>
<p>But this gives me error saying unsupported operand types for + 'timestamp' and 'str'. </p>
<p>Any help will be appreciated. Thanks</p>
| 0 | 2016-07-26T20:56:24Z | 38,600,089 | <p>You may need to put something like this:</p>
<pre><code>for key,values in sorted(a.items()):
    w.writerow(str(key) + "," + ",".join(values))
</code></pre>
<p><code>",".join(values)</code> will split the list of values into a string delimited by commas. I'm assuming you want commas separating your columns because you are writing a csv file, even though in your example the columns are separated by tab.</p>
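<p>A runnable variant (sample data assumed): note that <code>csv.writer.writerow</code> expects a sequence of fields, and passing it one pre-joined string would write each character as its own column, so it is safer to hand it a list:</p>

```python
import csv
import io

a = {'2009/01/02': ['AA', 'BB', 'AAPL'], '2009/01/03': ['AA', 'CC', 'DD', 'FF']}

buf = io.StringIO()  # any open file object works the same way
w = csv.writer(buf)
for key, values in sorted(a.items()):
    # one row: the date, followed by each value in its own column
    w.writerow([key] + list(values))
print(buf.getvalue())
```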
| 0 | 2016-07-26T21:22:37Z | [
"python",
"dictionary",
"export-to-csv"
] |
python dictionary to csv where each key is in seperate row and value in separate columns | 38,599,741 | <p>I am having trouble trying to output my dictionary to CSV file. I have a Dictionary that contains as keys time dates and as values, companies pertaining to those dates which are in string format. I tried looking through the website for identical question, but it doesn't really help my case. I tried the following code and managed to get the key in first row, values in second column but thats not what i want. </p>
<pre><code>import csv
with open('dict1.csv','w') as f:
w = csv.writer(f,delimiter=',')
for key,values in sorted(a.items()):
w.writerow([key,values])
</code></pre>
<p>But this gives me a CSV file in following format:</p>
<pre><code>2009/01/02 ['AA' 'BB' 'AAPL'] etc
2009/01/03 ['AA' 'CC' 'DD' 'FF']
</code></pre>
<p>Hence I only have two columns. But I want:</p>
<pre><code>2009/01/02 'AA' 'BB' 'AAPL'
2009/01/02 'AA' 'CC' 'DD' 'FF'
</code></pre>
<p>in 4 separate columns for first row and 5 for the second row respectively.<br>
I even tried </p>
<pre><code>for dates in sorted(a):
w.writerow([date] + my_dict[date] )
</code></pre>
<p>But this gives me error saying unsupported operand types for + 'timestamp' and 'str'. </p>
<p>Any help will be appreciated. Thanks</p>
| 0 | 2016-07-26T20:56:24Z | 38,600,270 | <p>This line is putting the key (the date) in the key variable, and the values, as a <em>list</em> in values. So values will indeed contain something like ['AA' 'BB' 'AAPL'].</p>
<pre><code>for key,values in sorted(a.items()):
</code></pre>
<p>Next, you're telling writerow "write a row with two elements: the first is the key, the second is whatever is in values" (which is a list so it's just converted to a string representation and output like that).</p>
<pre><code> w.writerow([key,values])
</code></pre>
<p>so [key, values] looks like this:</p>
<pre><code>[2009/01/02, ['AA','BB','AAPL']]
^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
the key this whole thing
is the single value
</code></pre>
<p>What you'd want, I think, is to create a single list containing the key and strings, not one containing the key and a list of strings. You could extend a list with the extra values like so:</p>
<pre><code> the_row = [key] # Create the initial row with just key
the_row.extend(values) # Add the values, one by one, to the row
w.writerow(the_row) # Write the full row
</code></pre>
<p>list.extend(L) does: Extend the list by appending all the items in the given list.</p>
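<p>Putting it together as a runnable sketch (sample data assumed):</p>

```python
import csv
import io

my_dict = {'2009/01/02': ['AA', 'BB', 'AAPL'], '2009/01/03': ['AA', 'CC', 'DD', 'FF']}

out = io.StringIO()  # stands in for the real output file
w = csv.writer(out)
for key in sorted(my_dict):
    the_row = [key]               # create the initial row with just the key
    the_row.extend(my_dict[key])  # add the values, one by one, to the row
    w.writerow(the_row)           # write the full row
print(out.getvalue())
```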
| 0 | 2016-07-26T21:34:12Z | [
"python",
"dictionary",
"export-to-csv"
] |
python dictionary to csv where each key is in seperate row and value in separate columns | 38,599,741 | <p>I am having trouble trying to output my dictionary to CSV file. I have a Dictionary that contains as keys time dates and as values, companies pertaining to those dates which are in string format. I tried looking through the website for identical question, but it doesn't really help my case. I tried the following code and managed to get the key in first row, values in second column but thats not what i want. </p>
<pre><code>import csv
with open('dict1.csv','w') as f:
w = csv.writer(f,delimiter=',')
for key,values in sorted(a.items()):
w.writerow([key,values])
</code></pre>
<p>But this gives me a CSV file in following format:</p>
<pre><code>2009/01/02 ['AA' 'BB' 'AAPL'] etc
2009/01/03 ['AA' 'CC' 'DD' 'FF']
</code></pre>
<p>Hence I only have two columns. But I want:</p>
<pre><code>2009/01/02 'AA' 'BB' 'AAPL'
2009/01/02 'AA' 'CC' 'DD' 'FF'
</code></pre>
<p>in 4 separate columns for first row and 5 for the second row respectively.<br>
I even tried </p>
<pre><code>for dates in sorted(a):
w.writerow([date] + my_dict[date] )
</code></pre>
<p>But this gives me error saying unsupported operand types for + 'timestamp' and 'str'. </p>
<p>Any help will be appreciated. Thanks</p>
| 0 | 2016-07-26T20:56:24Z | 38,600,420 | <p>Am sorry if I read this wrong, but are you using python pandas?</p>
<blockquote>
  <p>"I have a Dictionary that contains as keys pandas time dates and as values, companies pertaining to those dates which are in string format."</p>
</blockquote>
<p>in that case something like this might work</p>
<pre><code>import pandas as pd
df = pd.DataFrame(mydict)
df = df.transpose()
df.to_csv('dict1.csv',encoding='utf-8')
</code></pre>
<p>The <code>to_csv</code> method by default uses ',' as a delimiter, which you can change if needed.</p>
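<p>One caveat: <code>pd.DataFrame(mydict)</code> requires all value lists to have equal length. A hedged sketch (sample data assumed) using <code>DataFrame.from_dict</code> with <code>orient='index'</code>, which pads shorter rows with NaN; an in-memory buffer is used here for illustration, but any file path works:</p>

```python
import io

import pandas as pd

mydict = {'2009/01/02': ['AA', 'BB', 'AAPL'], '2009/01/03': ['AA', 'CC', 'DD', 'FF']}

# orient='index' turns each key into a row, so dates may have a
# different number of companies (missing cells become NaN)
df = pd.DataFrame.from_dict(mydict, orient='index')

buf = io.StringIO()  # e.g. df.to_csv('dict1.csv', header=False) for a real file
df.to_csv(buf, header=False)
print(buf.getvalue())
```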
| 0 | 2016-07-26T21:46:47Z | [
"python",
"dictionary",
"export-to-csv"
] |
how to replace values in columns by new values in pandas | 38,599,818 | <p>I have this code:</p>
<pre><code>def chgFormat(x):
newFormat = 0
if x[-1] == 'K':
if len(x)==1:newFormat=1000
else:newFormat = float(x[:-1])*1000
elif x[-1] == 'M':
        newFormat = float(x[:-1]) * 1000000
    elif x[-1] == 'B':
        newFormat = float(x[:-1]) * 1000000000
elif x[-1] == 'H':
newFormat = float(x[:-1]) * 100
elif x[-1] == 'h':
newFormat = float(x[:-1]) * 100
else:
newFormat = float(x)
return newFormat
frame=pd.read_csv(folderpath+"/StormEvents_details-ftp_v1.0_d1951_c20160223.csv",index_col=None,encoding = "ISO-8859-1",header=0,low_memory=False)
frame['DAMAGE_PROPERTY'].fillna('0K',inplace=True)
frame['DAMAGE_PROPERTY'].apply(chgFormat)
</code></pre>
<p>This code converts data entries like <code>2K, 3K, 3H, 3B</code> into <code>2000.0, 3000.0, 300.0</code> etc. </p>
<p>What I want is for this code to replace the column entries with the values calculated by the function above. If I do <code>print(frame['DAMAGE_PROPERTY'])</code> after the written code, I still see the original values. </p>
| 0 | 2016-07-26T21:02:03Z | 38,599,914 | <p>You may try this </p>
<pre><code>frame['DAMAGE_PROPERTY'] = frame['DAMAGE_PROPERTY'].apply(chgFormat)
</code></pre>
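<p>A self-contained sketch (simplified, hypothetical converter and sample data assumed). Note that <code>Series.apply</code> accepts no <code>axis</code> argument, and that the result must be assigned back to the column for the replacement to stick:</p>

```python
import pandas as pd

def chg_format(x):
    """Simplified version of the question's suffix-to-multiplier converter."""
    multipliers = {'K': 1e3, 'M': 1e6, 'B': 1e9, 'H': 1e2, 'h': 1e2}
    if x and x[-1] in multipliers:
        return float(x[:-1] or 1) * multipliers[x[-1]]
    return float(x)

frame = pd.DataFrame({'DAMAGE_PROPERTY': ['2K', '3H', '0K', '1.5M']})
# assigning the result back is what actually replaces the column contents
frame['DAMAGE_PROPERTY'] = frame['DAMAGE_PROPERTY'].apply(chg_format)
print(frame['DAMAGE_PROPERTY'].tolist())  # [2000.0, 300.0, 0.0, 1500000.0]
```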
| 2 | 2016-07-26T21:09:13Z | [
"python",
"pandas",
"dataframe"
] |
python xml.etree - how to search on more than one attribute | 38,599,880 | <p>I have an XML file with this line:</p>
<pre><code><op type="create" file="C:/Users/mureadr/Desktop/A/HMI_FORGF/bld/armle-v7/release/SimpleNetwork/Makefile" found="0"/>
</code></pre>
<p>I want to use <code>xml.etree</code> to search on more than one attribute:</p>
<pre><code>result = tree.search('.//op[@type="create" @file="c:/Users/mureadr/Desktop/A/HMI_FORGF/bld/armle-v7/release/HmiLogging/Makefile"]')
</code></pre>
<p>But I get an error</p>
<blockquote>
<p>raise SyntaxError("invalid predicate")</p>
</blockquote>
<p>I tried this (added <code>and</code>), still got same error</p>
<pre><code>'.//op[@type="create" and @file="c:/Users/mureadr/Desktop/A/HMI_FORGF/bld/armle-v7/release/HmiLogging/Makefile"]'
</code></pre>
<p>Tried adding <code>&&</code>, still got same error</p>
<pre><code>'.//op[@type="create" && @file="c:/Users/mureadr/Desktop/A/HMI_FORGF/bld/armle-v7/release/HmiLogging/Makefile"]'
</code></pre>
<p>Finally, tried <code>&</code>, still got same error</p>
<pre><code>'.//op[@type="create" & @file="c:/Users/mureadr/Desktop/A/HMI_FORGF/bld/armle-v7/release/HmiLogging/Makefile"]'
</code></pre>
<p>I'm guessing that this is a limitation of <code>xml.etree</code>.</p>
<p>Probably I shouldn't use it in the future, but I'm almost done with my project.</p>
<p>For <strong><code>N</code></strong> attributes, how do I use <code>xml.etree</code> to be able to search on all <strong><code>N</code></strong> attributes?</p>
| 1 | 2016-07-26T21:06:19Z | 38,599,908 | <p>You can use multiple square brackets in succession</p>
<pre><code>'.//op[@type="create"][@file="/some/path"]'
</code></pre>
<p><strong>UPDATE</strong>: I see that you are using python's <code>xml.etree</code> module. I am not sure if the above answer is valid for that module (It has extremely limited support for XPath). I'd suggest using the go-to library for all XML tasks -- <a href="https://pypi.python.org/pypi/lxml" rel="nofollow">LXML</a>. If you'd use <code>lxml</code>, it would be simply <code>doc.xpath(".//op[..][..]")</code></p>
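<p>For what it's worth, chaining <code>[@attr='value']</code> predicates is supported by <code>xml.etree.ElementTree</code>'s limited XPath as well (Python 2.7+/3.2+). A runnable sketch with a made-up document:</p>

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<ops>
  <op type="create" file="a/Makefile" found="0"/>
  <op type="create" file="b/Makefile" found="0"/>
  <op type="delete" file="a/Makefile" found="1"/>
</ops>
""")

# chained predicates: each [@attr='value'] further filters the matches
hits = doc.findall('.//op[@type="create"][@file="a/Makefile"]')
print(len(hits))  # 1
```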
| 1 | 2016-07-26T21:08:51Z | [
"python",
"xml"
] |
Custom wrapper for containers implementing __iter__ and __getitem__ | 38,599,902 | <p>I'm trying to write a custom wrapper class for containers. To implement the iterator-prototocol I provide <code>__iter__</code> and <code>__next__</code> and to access individual items I provide <code>__getitem__</code>:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals, with_statement
from future_builtins import *
import numpy as np
class MyContainer(object):
def __init__(self, value):
self._position = 0
self.value = value
def __str__(self):
return str(self.value)
def __len__(self):
return len(self.value)
def __getitem__(self, key):
return self.value[key]
def __iter__(self):
return self
def next(self):
if (self._position >= len(self.value)):
raise StopIteration
else:
self._position += 1
return self.value[self._position - 1]
</code></pre>
<p>So far, everything works as expected, e. g. when trying things like:</p>
<pre><code>if __name__ == '__main__':
a = MyContainer([1,2,3,4,5])
print(a)
iter(a) is iter(a)
for i in a:
print(i)
print(a[2])
</code></pre>
<p>But I run into problems when trying to use <code>numpy.maximum</code>:</p>
<pre><code>b= MyContainer([2,3,4,5,6])
np.maximum(a,b)
</code></pre>
<p>Raises "<code>ValueError: cannot copy sequence with size 5 to array axis with dimension 0</code>".</p>
<p>When commenting out the <code>__iter__</code> method, I get back a NumPy array with the correct results (while no longer conforming to the iterator protocol):</p>
<pre><code>print(np.maximum(a,b)) # results in [2 3 4 5 6]
</code></pre>
<p>And when commenting out <code>__getitem__</code>, I get back an instance of <code>MyContainer</code></p>
<pre><code>print(np.maximum(a,b)) # results in [2 3 4 5 6]
</code></pre>
<p>But I lose the access to individual items.</p>
<p>Is there any way to achieve all three goals together (Iterator-Protocol, <code>__getitem__</code> and <code>numpy.maximum</code> working)? Is there anything I'm doing fundamentally wrong?</p>
<p>To note: The actual wrapper class has more functionality but this is the minimal example where I could reproduce the behaviour.</p>
<p>(Python 2.7.12, NumPy 1.11.1)</p>
| 1 | 2016-07-26T21:08:20Z | 38,600,431 | <p>Your container is its own iterator, which limits it greatly. You can only iterate each container once, after that, it is considered "empty" as far as the iteration protocol goes.</p>
<p>Try this with your code to see:</p>
<pre><code>c = MyContainer([1,2,3])
l1 = list(c) # the list constructor will call iter on its argument, then consume the iterator
l2 = list(c) # this one will be empty, since the container has no more items to iterate on
</code></pre>
<p>When you don't provide an <code>__iter__</code> method but do implement a <code>__len__</code> method and a <code>__getitem__</code> method that accepts small integer indexes, Python will use <code>__getitem__</code> to iterate. Such iteration can be done multiple times, since the iterator objects that are created are all distinct from each other.</p>
<p>If you try the above code after taking the <code>__iter__</code> method out of your class, both lists will be <code>[1, 2, 3]</code>, as expected. You could also fix up your own <code>__iter__</code> method so that it returns independent iterators. For instance, you could return an iterator from your internal sequence:</p>
<pre><code>def __iter__(self):
return iter(self.value)
</code></pre>
<p>Or, as suggested in a comment by <a href="http://stackoverflow.com/users/510937/bakuriu">Bakuriu</a>:</p>
<pre><code>def __iter__(self):
    return (self[i] for i in range(len(self)))
</code></pre>
<p>This latter version is essentially what Python will provide for you if you have a <code>__getitem__</code> method but no <code>__iter__</code> method.</p>
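<p>A condensed, runnable sketch showing that the generator-based <code>__iter__</code> yields independent iterators while item access keeps working (iteration behaviour only):</p>

```python
class MyContainer(object):
    def __init__(self, value):
        self.value = value

    def __len__(self):
        return len(self.value)

    def __getitem__(self, key):
        return self.value[key]

    def __iter__(self):
        # a fresh generator per call -> every iter(c) is independent
        return (self[i] for i in range(len(self)))

c = MyContainer([1, 2, 3])
print(list(c) == list(c))  # True: each list() call gets its own iterator
```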
| 1 | 2016-07-26T21:48:03Z | [
"python",
"numpy"
] |
Index must be called with a collection of some kind: assign column name to dataframe | 38,599,912 | <p>I have <code>reweightTarget</code> as follows and I want to convert it to a pandas Dataframe. However, I got following error:</p>
<blockquote>
<p>TypeError: Index(...) must be called with a collection of some kind,
't' was passed</p>
</blockquote>
<p>If I remove <code>columns='t'</code>, it works fine. Can anyone please explain what's going on?</p>
<pre><code>reweightTarget
Trading dates
2004-01-31 4.35
2004-02-29 4.46
2004-03-31 4.44
2004-04-30 4.39
2004-05-31 4.50
2004-06-30 4.53
2004-07-31 4.63
2004-08-31 4.58
dtype: float64
pd.DataFrame(reweightTarget, columns='t')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-334-bf438351aaf2> in <module>()
----> 1 pd.DataFrame(reweightTarget, columns='t')
C:\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy)
253 else:
254 mgr = self._init_ndarray(data, index, columns, dtype=dtype,
--> 255 copy=copy)
256 elif isinstance(data, (list, types.GeneratorType)):
257 if isinstance(data, types.GeneratorType):
C:\Anaconda3\lib\site-packages\pandas\core\frame.py in _init_ndarray(self, values, index, columns, dtype, copy)
421 raise_with_traceback(e)
422
--> 423 index, columns = _get_axes(*values.shape)
424 values = values.T
425
C:\Anaconda3\lib\site-packages\pandas\core\frame.py in _get_axes(N, K, index, columns)
388 columns = _default_index(K)
389 else:
--> 390 columns = _ensure_index(columns)
391 return index, columns
392
C:\Anaconda3\lib\site-packages\pandas\indexes\base.py in _ensure_index(index_like, copy)
3407 index_like = copy(index_like)
3408
-> 3409 return Index(index_like)
3410
3411
C:\Anaconda3\lib\site-packages\pandas\indexes\base.py in __new__(cls, data, dtype, copy, name, fastpath, tupleize_cols, **kwargs)
266 **kwargs)
267 elif data is None or lib.isscalar(data):
--> 268 cls._scalar_data_error(data)
269 else:
270 if (tupleize_cols and isinstance(data, list) and data and
C:\Anaconda3\lib\site-packages\pandas\indexes\base.py in _scalar_data_error(cls, data)
481 raise TypeError('{0}(...) must be called with a collection of some '
482 'kind, {1} was passed'.format(cls.__name__,
--> 483 repr(data)))
484
485 @classmethod
TypeError: Index(...) must be called with a collection of some kind, 't' was passed
</code></pre>
| 0 | 2016-07-26T21:09:12Z | 38,614,231 | <p>Documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html</a></p>
<blockquote>
<p>columns : <strong>Index or array-like</strong> </p>
<p>Column labels to use for resulting frame. Will default to np.arange(n) if no column labels are provided</p>
</blockquote>
<p>Example:</p>
<blockquote>
<p>df3 = DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])</p>
</blockquote>
<p>Try to use:</p>
<pre><code>pd.DataFrame(reweightTarget, columns=['t'])
</code></pre>
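<p>A minimal illustration of the difference (plain list data assumed): <code>columns</code> must be a collection such as a list or an Index, while a bare string is treated as scalar data for <code>Index(...)</code> and raises the TypeError above:</p>

```python
import pandas as pd

# a list of labels works
df = pd.DataFrame([4.35, 4.46, 4.44], columns=['t'])
print(list(df.columns))  # ['t']

# a bare string does not
try:
    pd.DataFrame([4.35, 4.46, 4.44], columns='t')
except TypeError as e:
    print('raised: %s' % type(e).__name__)
```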
| 1 | 2016-07-27T13:22:19Z | [
"python",
"pandas",
"dataframe"
] |
when I pip install, my packages are going in my macs local library | 38,599,918 | <p>When I pip install a package it gets installed in my Mac's library. I am using PyCharm, which allows me to click on a package like a hyperlink. And instead of going to the site-packages in my virtualenv it goes to my Mac's library, which is</p>
<pre><code>/Library/Frameworks/Python.Framework/Versions/3.5/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>when it should be</p>
<pre><code> myproject/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>Why is that?</p>
| 0 | 2016-07-26T21:09:36Z | 38,600,050 | <p>I think you want to create a virtual environment for your project.</p>
<p>Install this tool virtualenv.</p>
<p><code>$ pip install virtualenv</code></p>
<p>Then create your project's folder</p>
<p><code>$ cd my_project_folder</code></p>
<p><code>$ virtualenv venv</code></p>
<p>This creates a copy of Python in whichever directory you ran the command in, placing it in a folder named venv.</p>
<p>Source</p>
<p><a href="https://github.com/pypa/virtualenv" rel="nofollow">https://github.com/pypa/virtualenv</a></p>
<p>For further knowledge read</p>
<p><a href="https://realpython.com/blog/python/python-virtual-environments-a-primer/" rel="nofollow">https://realpython.com/blog/python/python-virtual-environments-a-primer/</a> </p>
| 0 | 2016-07-26T21:19:43Z | [
"python",
"osx",
"pip"
] |
when I pip install, my packages are going in my macs local library | 38,599,918 | <p>When I pip install a package it gets installed in my Mac's library. I am using PyCharm, which allows me to click on a package like a hyperlink. And instead of going to the site-packages in my virtualenv it goes to my Mac's library, which is</p>
<pre><code>/Library/Frameworks/Python.Framework/Versions/3.5/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>when it should be</p>
<pre><code> myproject/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>Why is that?</p>
| 0 | 2016-07-26T21:09:36Z | 38,600,167 | <p>You should install your virtual environment and then run pip within that environment. So, for example, I use Anaconda (which I thoroughly recommend if you are installing alot of scientific libraries). </p>
<p>To activate the environment "hle" I type in:</p>
<pre><code>source /Users/admin/.conda/envs/hle/bin/activate hle
</code></pre>
<p>Once I've done this the pip command will reference the virtual environment location and not the standard mac location. So, when I install "mypackage" as follows:</p>
<pre><code>pip install mypackage
</code></pre>
<p>It subsequently installs files in the virtual folder and not in the usual mac system folders.</p>
<p>You can find out about the Anaconda virtual environment (and download it) over here: <a href="http://conda.pydata.org/docs/install/quick.html" rel="nofollow">http://conda.pydata.org/docs/install/quick.html</a> but other virtual environments (like Virtualenv) work in the same way.</p>
| 0 | 2016-07-26T21:27:48Z | [
"python",
"osx",
"pip"
] |
when I pip install, my packages are going in my macs local library | 38,599,918 | <p>When I pip install a package it gets installed in my Mac's library. I am using PyCharm, which allows me to click on a package like a hyperlink. And instead of going to the site-packages in my virtualenv it goes to my Mac's library, which is</p>
<pre><code>/Library/Frameworks/Python.Framework/Versions/3.5/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>when it should be</p>
<pre><code> myproject/lib/python3.5/site-packages/gdata/youtube/
</code></pre>
<p>Why is that?</p>
| 0 | 2016-07-26T21:09:36Z | 38,600,235 | <p>You should activate your virtual environment to install packages on that. In Pycharm you can do it like this:</p>
<p>Go to <code>File</code> > <code>Settings</code> > <code>Project</code> > <code>Project Interpreter</code></p>
<p>Now you have to select the interpreter for this project. Browse or select the interpreter from drop-down if available. In your case this should be:</p>
<pre><code>myproject/lib/python3.5
</code></pre>
<blockquote>
<p>I am using Pycharm community edition on Ubuntu. But the
process should be similar in Mac.</p>
</blockquote>
| 1 | 2016-07-26T21:31:52Z | [
"python",
"osx",
"pip"
] |
Iterate through pandas rows efficiently | 38,599,952 | <p>I have a list that looks like this:</p>
<pre><code>lst = ['a','b','c']
</code></pre>
<p>and a dataframe that looks like this:</p>
<pre><code>id col1
1 ['a','c']
2 ['b']
3 ['b', 'a']
</code></pre>
<p>I am looking to create a new column in the dataframe that has the length of the intersection of the lst and the individual lists from col1</p>
<pre><code>id col1 intersect
1 ['a','c'] 2
2 ['b'] 1
3 ['d', 'a'] 1
</code></pre>
<p>Currently my code looks like this:</p>
<pre><code>df['intersection'] = np.nan
for i, r in df.iterrows():
## If-Statement to deal with Nans in col1
if r['col1'] == r['col1']:
df['intersection'][i] = len(set(r['col1']).intersection(set(lst)))
</code></pre>
<p>The problem is that this code is extremely time-consuming on my dataset of 200K rows and intersecting with a list of 200 elements. Is there any way to do this more efficiently?</p>
<p>Thanks!</p>
| 2 | 2016-07-26T21:11:48Z | 38,600,073 | <p>Have you tried this?</p>
<pre><code>lstset = set(lst)
df['intersection'] = df['col1'].apply(lambda x: len(set(x).intersection(lstset)))
</code></pre>
<p>Another possibility is</p>
<pre><code>df['intersection'] = df['col1'].apply(lambda x: len([1 for item in x if item in lst]))
</code></pre>
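<p>A runnable sketch with sample data. Rows containing NaN would still need the question's guard or a <code>fillna</code> first; also note the two variants differ on duplicates, since the set-based version counts distinct matches while the list comprehension counts every occurrence:</p>

```python
import pandas as pd

lst = ['a', 'b', 'c']
df = pd.DataFrame({'id': [1, 2, 3],
                   'col1': [['a', 'c'], ['b'], ['d', 'a']]})

lstset = set(lst)  # build the set once, outside the lambda
df['intersection'] = df['col1'].apply(lambda x: len(lstset.intersection(x)))
print(df['intersection'].tolist())  # [2, 1, 1]
```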
| 3 | 2016-07-26T21:21:09Z | [
"python",
"pandas",
"set"
] |
How to extract index/column/data from Pandas DataFrame Based on logic operation? | 38,599,987 | <p>I have the following dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
data = np.random.rand(5,5)
df = pd.DataFrame(data, index = list('abcde'), columns = list('ABCDE'))
df = df[df>0]
df
A B C D E
a NaN 2.038740 1.371158 NaN NaN
b 0.575567 NaN 0.462007 NaN NaN
c 0.984802 0.049818 0.129836 NaN NaN
d NaN NaN NaN NaN NaN
e 0.789563 1.846402 NaN 0.340902 NaN
</code></pre>
<p>I want to get all the (index, col_name, value) of the non-NAN data. How do I do it?</p>
<p>My expected result is:</p>
<pre><code>[('b','A', 0.575567), ('c', 'A', 0.984802), ('e', 'A', 0.789563),...]
</code></pre>
| 2 | 2016-07-26T21:13:52Z | 38,600,180 | <p>You can stack the data frame, which will drop NA values automatically and then reset the index to be columns, after which it will be easy to convert to a list of tuples:</p>
<pre><code>[tuple(r) for r in df.stack().reset_index().values]
# [('a', 'B', 2.03874),
# ('a', 'C', 1.371158),
# ('b', 'A', 0.575567),
# ('b', 'C', 0.46200699999999995),
# ('c', 'A', 0.9848020000000001),
# ('c', 'B', 0.049818),
# ('c', 'C', 0.12983599999999998),
# ('e', 'A', 0.789563),
# ('e', 'B', 1.846402),
# ('e', 'D', 0.340902)]
</code></pre>
<p>Or use the data frames' <code>to_records()</code> method:</p>
<pre><code>list(df.stack().reset_index().to_records(index = False))
</code></pre>
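<p>A runnable sketch with a small sample frame. An explicit <code>dropna()</code> is added for portability, since newer <code>stack</code> implementations may keep NaN rows:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 2.0], [0.5, np.nan]],
                  index=['a', 'b'], columns=['A', 'B'])

# stack() yields a Series with a (row, column) MultiIndex; after dropping
# NaN, reset_index() turns each entry into one (index, column, value) row
triples = [tuple(r) for r in df.stack().dropna().reset_index().values]
print(triples)  # [('a', 'B', 2.0), ('b', 'A', 0.5)]
```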
| 4 | 2016-07-26T21:28:29Z | [
"python",
"pandas",
"dataframe"
] |
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet launching debug on pycharm | 38,600,097 | <p>I have a Django (version 1.9) application running with Python 2.7.10, and I'm using Virtualenv.
Running the application with <code>./manage.py runserver</code> I get no error, but when I try to run it in debug I get <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code> </p>
<p>Here is the PyCharm debugging configuration:</p>
<p><a href="http://i.stack.imgur.com/4V5nr.png" rel="nofollow"><img src="http://i.stack.imgur.com/4V5nr.png" alt="enter image description here"></a>
Any idea why?
Here is the complete stack trace:</p>
<pre><code> /Users/matteobetti/Progetti/Enydros/enysoft/bin/python ./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line
utility.execute()
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/core/management/__init__.py", line 342, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/core/management/__init__.py", line 176, in fetch_command
commands = get_commands()
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/utils/lru_cache.py", line 100, in wrapper
result = user_function(*args, **kwds)
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/core/management/__init__.py", line 71, in get_commands
for app_config in reversed(list(apps.get_app_configs())):
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/apps/registry.py", line 137, in get_app_configs
self.check_apps_ready()
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>I added <code>import django</code> and <code>django.setup()</code> before <code>execute_from_command_line(sys.argv)</code></p>
<p>and I get this stack trace: </p>
<pre><code>Traceback (most recent call last):
File "/Users/matteobetti/Progetti/Enydros/enysoft/manage.py", line 10, in <module>
django.setup()
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/matteobetti/Progetti/Enydros/enysoft/frontend/apps.py", line 2, in <module>
from frontend.services.container import app_context
File "/Users/matteobetti/Progetti/Enydros/enysoft/frontend/services/container.py", line 1, in <module>
from frontend.services.enysoft_services import RService, FixedSizeDirectoryCache, TubeSectionService, CsvSanifier
File "/Users/matteobetti/Progetti/Enydros/enysoft/frontend/services/enysoft_services.py", line 5, in <module>
import rpy2.robjects as ro
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/rpy2/robjects/__init__.py", line 15, in <module>
import rpy2.rinterface as rinterface
File "/Users/matteobetti/Progetti/Enydros/enysoft/lib/python2.7/site-packages/rpy2/rinterface/__init__.py", line 16, in <module>
tmp = subprocess.check_output(("R", "RHOME"), universal_newlines=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 566, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
</code></pre>
<p>Another thing to note is that my colleague on Linux has no problem with the same virtualenv configuration; the only one having the problem is me, on macOS. </p>
| 0 | 2016-07-26T21:23:05Z | 38,615,663 | <p>Try adding your settings module to the run configuration's environment variables. In my projects the settings are in the folder /project_name/project_name/settings/development.py (which is slightly different from the standard layout), and in my case the Environment variables string is </p>
<p><code>DJANGO_SETTINGS_MODULE=project_name.settings.development;PYTHONUNBUFFERED=1</code></p>
<p>So change it according to your parameters and then try to run the server again. I hope this helps.</p>
| 0 | 2016-07-27T14:20:51Z | [
"python",
"django",
"virtualenv"
] |
Error while Running POX controller script using Python 3.5.1 | 38,600,144 | <p>I have written a POX controller script for my research and it is working great under <code>Python 2.7</code> interpreter (Using PyCharm IDE). However, when I choose <code>Python 3.5.1</code>, I get the following error:</p>
<pre><code>/usr/bin/python3.5 /home/XXX/pox/pox.py openflow.discovery my_controller
Traceback (most recent call last):
File "/home/XXX/pox/pox.py", line 42, in <module>
import pox.boot
File "/home/XXX/pox/pox/boot.py", line 55, in <module>
import pox.core
File "/home/XXX/pox/pox/core.py", line 155, in <module>
import pox.lib.recoco as recoco
File "/home/XXX/pox/pox/lib/recoco/__init__.py", line 1, in <module>
import recoco
ImportError: No module named 'recoco'
Process finished with exit code 1
</code></pre>
<p>Has someone encountered such an error before? </p>
<p>Thank you</p>
| 0 | 2016-07-26T21:26:22Z | 38,676,610 | <p>I have the same problem with Python 3.X.</p>
<p>POX requires Python 2.7 according to the documentation <a href="https://github.com/noxrepo/pox" rel="nofollow">POX Readme</a>, <a href="https://openflow.stanford.edu/display/ONL/POX+Wiki#POXWiki-DoesPOXsupportPython3%3F" rel="nofollow">POX Wiki: Does POX support Python 3</a>.</p>
<p>To run POX with Python 3, it would first be necessary to port POX to Python 3.</p>
<p>Use Python 2.7 and everything should work fine.</p>
| 0 | 2016-07-30T17:26:53Z | [
"python",
"python-3.x",
"pycharm",
"sdn",
"pox"
] |
Python inheriting docstring errors 'read only' | 38,600,186 | <p>I have read all the other posts here about this topic but since most of them are quite old I feel better to open a new one with my own problem, since the solutions proposed there don't work for me but I have not seen any complaint in the comments.</p>
<p>For instance the solution proposed <a href="http://stackoverflow.com/a/8101598/551045">here</a> gives me the error:</p>
<pre><code>AttributeError: 'NoneType' object attribute '__doc__' is read-only
</code></pre>
<p>The second proposed solution <a href="http://stackoverflow.com/a/810111/551045">there</a> gives me:</p>
<pre><code>TypeError: Error when calling the metaclass bases
readonly attribute
</code></pre>
<p>The only reference I found about the docstring being read-only was <a href="http://www.jesshamrick.com/2013/04/17/rewriting-python-docstrings-with-a-metaclass/" rel="nofollow">here</a></p>
<p>I'm using Python 2.7.9. Am I doing something wrong or are <code>__doc__</code> attributes really read-only?</p>
<p>Here is the SSCCE with one of the solutions of the other post:</p>
<pre><code># decorator proposed in http://stackoverflow.com/a/8101598/551045
def fix_docs(cls):
for name, func in vars(cls).items():
if not func.__doc__:
print func, 'needs doc'
for parent in cls.__bases__:
parfunc = getattr(parent, name)
if parfunc and getattr(parfunc, '__doc__', None):
func.__doc__ = parfunc.__doc__
break
return cls
class X(object):
"""
some doc
"""
def please_implement(self):
"""
I have a very thorough documentation
:return:
"""
raise NotImplementedError
@fix_docs
class SpecialX(X):
def please_implement(self):
return True
print help(SpecialX.please_implement)
</code></pre>
<p>This outputs:</p>
<pre><code>Traceback (most recent call last):
None needs doc
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 29, in <module>
class SpecialX(X):
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 9, in fix_docs
func.__doc__ = parfunc.__doc__
AttributeError: 'NoneType' object attribute '__doc__' is read-only
</code></pre>
<p>With the fixed decorator:</p>
<pre><code>import types
def fix_docs(cls):
for name, func in vars(cls).items():
if isinstance(func, types.FunctionType) and not func.__doc__:
print func, 'needs doc'
for parent in cls.__bases__:
parfunc = getattr(parent, name, None)
if parfunc and getattr(parfunc, '__doc__', None):
func.__doc__ = parfunc.__doc__
break
return cls
class DocStringInheritor(type):
"""
A variation on
http://groups.google.com/group/comp.lang.python/msg/26f7b4fcb4d66c95
by Paul McGuire
"""
def __new__(meta, name, bases, clsdict):
if not('__doc__' in clsdict and clsdict['__doc__']):
for mro_cls in (mro_cls for base in bases for mro_cls in base.mro()):
doc=mro_cls.__doc__
if doc:
clsdict['__doc__']=doc
break
for attr, attribute in clsdict.items():
if not attribute.__doc__:
for mro_cls in (mro_cls for base in bases for mro_cls in base.mro()
if hasattr(mro_cls, attr)):
doc=getattr(getattr(mro_cls,attr),'__doc__')
if doc:
attribute.__doc__=doc
break
return type.__new__(meta, name, bases, clsdict)
class X(object):
"""
some doc
"""
#__metaclass__ = DocStringInheritor
x = 20
def please_implement(self):
"""
I have a very thorough documentation
:return:
"""
raise NotImplementedError
@property
def speed(self):
"""
Current speed in knots/hour.
:return:
"""
return 0
@speed.setter
def speed(self, value):
"""
:param value:
:return:
"""
pass
@fix_docs
class SpecialX(X):
def please_implement(self):
return True
@property
def speed(self):
return 10
@speed.setter
def speed(self, value):
self.sp = value
class VerySpecial(X):
p = 0
"""doc"""
def please_implement(self):
"""
:return bool: Always false.
"""
return False
def not_inherited(self):
"""
Look at all these words!
:return:
"""
help(X.speed)
help(SpecialX.speed)
help(SpecialX.please_implement)
help(VerySpecial.please_implement)
help(VerySpecial.not_inherited)
</code></pre>
<p>Output:</p>
<pre><code><function please_implement at 0x026E4AB0> needs doc
Help on property:
Current speed in knots/hour.
:return:
Help on property:
Help on method please_implement in module __main__:
please_implement(self) unbound __main__.SpecialX method
I have a very thorough documentation
:return:
Help on method please_implement in module __main__:
please_implement(self) unbound __main__.VerySpecial method
:return bool: Always false.
Help on method not_inherited in module __main__:
not_inherited(self) unbound __main__.VerySpecial method
Look at all these words!
:return:
</code></pre>
| 1 | 2016-07-26T21:28:58Z | 38,602,787 | <p>Using the now fixed version of the <a href="http://stackoverflow.com/a/8101118/190597">DocStringInheritor</a> metaclass,</p>
<pre><code>class DocStringInheritor(type):
"""
A variation on
http://groups.google.com/group/comp.lang.python/msg/26f7b4fcb4d66c95
by Paul McGuire
"""
def __new__(meta, name, bases, clsdict):
if not('__doc__' in clsdict and clsdict['__doc__']):
for mro_cls in (mro_cls for base in bases for mro_cls in base.mro()):
doc=mro_cls.__doc__
if doc:
clsdict['__doc__']=doc
break
for attr, attribute in clsdict.items():
if not attribute.__doc__:
for mro_cls in (mro_cls for base in bases for mro_cls in base.mro()
if hasattr(mro_cls, attr)):
doc=getattr(getattr(mro_cls,attr),'__doc__')
if doc:
if isinstance(attribute, property):
clsdict[attr] = property(attribute.fget, attribute.fset, attribute.fdel, doc)
else:
attribute.__doc__=doc
break
return type.__new__(meta, name, bases, clsdict)
class X(object):
    """
    some doc
    """
    __metaclass__ = DocStringInheritor
x = 20
def please_implement(self):
"""
I have a very thorough documentation
:return:
"""
raise NotImplementedError
@property
def speed(self):
"""
Current speed in knots/hour.
:return:
"""
return 0
@speed.setter
def speed(self, value):
"""
:param value:
:return:
"""
pass
class SpecialX(X):
def please_implement(self):
return True
@property
def speed(self):
return 10
@speed.setter
def speed(self, value):
self.sp = value
class VerySpecial(X):
p = 0
"""doc"""
def please_implement(self):
"""
:return bool: Always false.
"""
return False
def not_inherited(self):
"""
Look at all these words!
:return:
"""
help(X.speed)
help(SpecialX.speed)
help(SpecialX.please_implement)
help(VerySpecial.please_implement)
help(VerySpecial.not_inherited)
</code></pre>
<p>yields</p>
<pre><code>Help on property:
Current speed in knots/hour.
:return:
Help on property:
Current speed in knots/hour.
:return:
Help on function please_implement in module __main__:
please_implement(self)
I have a very thorough documentation
:return:
Help on function please_implement in module __main__:
please_implement(self)
:return bool: Always false.
Help on function not_inherited in module __main__:
not_inherited(self)
Look at all these words!
:return:
</code></pre>
| 1 | 2016-07-27T02:35:51Z | [
"python",
"python-2.7",
"python-decorators"
] |
Line printing in Python | 38,600,216 | <p>This exercise is from chapter 20 of Zed Shaw's book.</p>
<p>I am trying to understand a behavior of line number.</p>
<p>When I use the following code, the line number from the text file gets printed as 4, which is wrong. It is in the 3rd line.</p>
<pre><code>current_line += current_line
</code></pre>
<p>However, the line number shows correct when I use the following</p>
<pre><code>current_line = current_line + 1
</code></pre>
<p>Can someone kindly explain what is the difference in the above two lines, which looks same to me and why it is making a difference.</p>
<p>Following is the full code:</p>
<pre><code>from sys import argv
script, input_file = argv
def print_all(f):
print f.read()
def rewind(f):
f.seek(0)
def print_a_line(line_count, f):
print line_count, f.readline()
current_file = open(input_file)
print "First let's print the whole file:\n"
print_all(current_file)
print "Now let's rewind, kind of like a tape."
rewind(current_file)
print "Let's print three lines:"
current_line = 1
print_a_line(current_line, current_file)
current_line += current_line
print_a_line(current_line, current_file)
#current_line = current_line + 1
current_line += current_line
print_a_line(current_line, current_file)
</code></pre>
| 3 | 2016-07-26T21:30:34Z | 38,600,272 | <p>current_line += current_line expands out to</p>
<pre><code>current_line = current_line + current_line
</code></pre>
<p>So let's take a look at what you did by expanding it out (we will ignore the print statements).</p>
<pre><code>current_line = 1
current_line = current_line + current_line # (1 + 1 = 2)
#current_line = 2
current_line = current_line + current_line # (2 + 2 = 4)
#current_line = 4
</code></pre>
<p>I think you meant to use</p>
<pre><code>current_line += 1
</code></pre>
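<p>A quick runnable check of the difference between the two statements (illustrative only):</p>

```python
a = b = 1
for _ in range(3):
    a += a   # doubles each pass: 2, 4, 8
    b += 1   # counts each pass: 2, 3, 4

print(a, b)  # 8 4
```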
| 2 | 2016-07-26T21:34:15Z | [
"python",
"line-numbers"
] |
Line printing in Python | 38,600,216 | <p>This exercise is from chapter 20 of Zed Shaw's book.</p>
<p>I am trying to understand a behavior of line number.</p>
<p>When I use the following code, the line number from the text file gets printed as 4, which is wrong. It is in the 3rd line.</p>
<pre><code>current_line += current_line
</code></pre>
<p>However, the line number shows correct when I use the following</p>
<pre><code>current_line = current_line + 1
</code></pre>
<p>Can someone kindly explain what is the difference in the above two lines, which looks same to me and why it is making a difference.</p>
<p>Following is the full code:</p>
<pre><code>from sys import argv
script, input_file = argv
def print_all(f):
print f.read()
def rewind(f):
f.seek(0)
def print_a_line(line_count, f):
print line_count, f.readline()
current_file = open(input_file)
print "First let's print the whole file:\n"
print_all(current_file)
print "Now let's rewind, kind of like a tape."
rewind(current_file)
print "Let's print three lines:"
current_line = 1
print_a_line(current_line, current_file)
current_line += current_line
print_a_line(current_line, current_file)
#current_line = current_line + 1
current_line += current_line
print_a_line(current_line, current_file)
</code></pre>
| 3 | 2016-07-26T21:30:34Z | 38,600,314 | <p>You're not increasing the value of <code>current_line</code> by a constant 1; instead you're growing it in a <em>geometric progression</em>.</p>
<p><code>current_line += current_line</code> assigns the value of <code>current_line</code> to be itself <code>+</code> itself:</p>
<pre><code>current_line = 5
current_line = current_line + current_line
>>> current_line
>>> 10
</code></pre>
<p><code>current_line = current_line + 1</code> or <code>current_line += 1</code> (<code>+=1</code> <em>is syntactic sugar for increasing a value by 1</em>) increases the value of <code>current_line</code> by 1.</p>
<pre><code>current_line = 5
current_line = current_line + 1
current_line += 1
>>> current_line
>>> 7
</code></pre>
<p>Since <code>current_line</code> is a <strong>counter</strong> for the line number, <code>+= 1</code> should be used in this case. </p>
| 1 | 2016-07-26T21:37:21Z | [
"python",
"line-numbers"
] |
Formatting string into datetime using Pandas - trouble with directives | 38,600,300 | <p>I have a string that is the full year followed by the ISO week of the year (so some years have 53 weeks, because the week counting starts at the first full week of the year). I want to convert it to a <code>datetime</code> object using <code>pandas.to_datetime()</code>. So I do:</p>
<pre><code>pandas.to_datetime('201145', format='%Y%W')
</code></pre>
<p>and it returns:</p>
<pre><code>Timestamp('2011-01-01 00:00:00')
</code></pre>
<p>which is not right. Or if I try:</p>
<pre><code>pandas.to_datetime('201145', format='%Y%V')
</code></pre>
<p>it tells me that <code>%V</code> is a bad directive.</p>
<p>What am I doing wrong?</p>
| 4 | 2016-07-26T21:36:31Z | 38,600,540 | <p>I think that the following question would be useful to you: <a href="http://stackoverflow.com/questions/304256/whats-the-best-way-to-find-the-inverse-of-datetime-isocalendar">Reversing date.isocalender()</a></p>
<p>Using the functions provided in that question this is how I would proceed:</p>
<pre><code>import datetime
import pandas as pd
def iso_year_start(iso_year):
"The gregorian calendar date of the first day of the given ISO year"
fourth_jan = datetime.date(iso_year, 1, 4)
delta = datetime.timedelta(fourth_jan.isoweekday()-1)
return fourth_jan - delta
def iso_to_gregorian(iso_year, iso_week, iso_day):
"Gregorian calendar date for the given ISO year, week and day"
year_start = iso_year_start(iso_year)
return year_start + datetime.timedelta(days=iso_day-1, weeks=iso_week-1)
def time_stamp(yourString):
year = int(yourString[0:4])
week = int(yourString[-2:])
day = 1
return year, week, day
yourTimeStamp = iso_to_gregorian( time_stamp('201145')[0] , time_stamp('201145')[1], time_stamp('201145')[2] )
print yourTimeStamp
</code></pre>
<p>Then run that function for your values and append them as date time objects to the dataframe.</p>
<p>The result I got from your specified string was:</p>
<pre><code>2011-11-07
</code></pre>
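<p>For what it's worth, on Python 3.6+ the standard library's <code>strptime</code> understands the ISO directives directly (<code>%G</code> for the ISO year, <code>%V</code> for the ISO week, <code>%u</code> for the ISO weekday), so the helpers above can be replaced by a one-liner (a sketch; the appended <code>'1'</code> pins the result to the Monday of that week):</p>

```python
from datetime import datetime

# %G = ISO year, %V = ISO week, %u = ISO weekday (1 = Monday); Python 3.6+
dt = datetime.strptime('201145' + '1', '%G%V%u')
print(dt.date())  # 2011-11-07
```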
| 2 | 2016-07-26T21:57:31Z | [
"python",
"string",
"date",
"datetime",
"pandas"
] |
Python Power set , can't figure out my error | 38,600,315 | <p>My code will crash and run forever:</p>
<pre><code>def subsets(nums):
"""
:type nums: List[int]
:rtype: List[List[int]]
"""
results = [[]]
for num in nums:
for result in results:
results.extend([result + [num]])
return results
</code></pre>
<p>While I googled, and find similar solution:</p>
<pre><code>def subsets(nums):
"""
:type nums: List[int]
:rtype: List[List[int]]
"""
results = [[]]
for num in nums:
results.extend([result + [num] for result in results])
return results
</code></pre>
<p>What's the difference here? </p>
| 0 | 2016-07-26T21:37:22Z | 38,600,410 | <p>The critical part is this:</p>
<pre><code>for result in results:
results.extend([result + [num]])
</code></pre>
<p>Here, you are iterating over the <code>results</code> list. An iterator is always something living, that does not finish until you actually reached the end. For lists, you can simply imagine this as a pointer that starts at the first element, and then keeps going to the next until it reaches the end.</p>
<p>Except that in your case, you are adding an element (since <code>[result + [num]]</code> is a one-element list) to the <code>results</code> list on every iteration. So as the iterator keeps going forward, you keep adding one element to the end making sure the iterator can never reach the end.</p>
<p>As a general rule, you should never modify the collection you are currently iterating over. So in this case, you shouldn't modify <code>results</code> while you are iterating over it.</p>
<p>And that's exactly what the following line in the other solution avoids:</p>
<pre><code>results.extend([result + [num] for result in results])
</code></pre>
<p>This uses a list comprehension and is essentially equivalent to this:</p>
<pre><code>tmp = []
for result in results:
tmp.append(result + [num])
results.extend(tmp)
</code></pre>
<p>As you can see, <code>results</code> is not modified <em>while</em> iterating over it. The <code>tmp</code> list is first created, and then once that's done, the <code>results</code> list is modified by extending it with the whole <code>tmp</code> list.</p>
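<p>For completeness, a third variant (a sketch, not from either snippet above): iterating over a snapshot copy also avoids the infinite loop, because the live list can then be grown safely:</p>

```python
def subsets(nums):
    results = [[]]
    for num in nums:
        for result in list(results):  # snapshot copy; safe to grow results
            results.append(result + [num])
    return results

print(subsets([1, 2]))  # [[], [1], [2], [1, 2]]
```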
| 4 | 2016-07-26T21:46:01Z | [
"python",
"powerset"
] |
How to URL encode Chinese characters? | 38,600,436 | <p>I used the following code to encode parameters list:</p>
<pre><code>params['username'] = user
params['q'] = q
params = urllib.quote(params)
</code></pre>
<p>But it doesn't work when <code>q</code> is equal to <code>香港</code>. The following error is returned:</p>
<pre><code>'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
</code></pre>
<p>How should I fix it?</p>
| 1 | 2016-07-26T21:48:41Z | 38,604,877 | <p>It seems that you're working on Python 2.</p>
<p>Since your question isn't entirely clear, I'll offer a general way to solve it.</p>
<p>Here are two suggestions to fix it:</p>
<ul>
<li>add <code># -*- coding: utf-8 -*-</code> before your file</li>
<li>encode Chinese characters to utf-8</li>
</ul>
<p>Here's an example:</p>
<pre><code># -*- coding: utf-8 -*-
import urllib
def fix_codecs(s):
if isinstance(s, unicode):
return s.encode('utf-8')
else:
try:
return s.decode('gbk').encode('utf-8')
except:
return s
s = '香港'
s = fix_codecs(s)
print urllib.quote(s)
</code></pre>
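<p>On Python 3 the equivalent functions live in <code>urllib.parse</code> and accept <code>str</code> directly, percent-encoding it as UTF-8 by default (a sketch; the sample value is assumed to be the string 香港):</p>

```python
from urllib.parse import quote, urlencode

print(quote('香港'))  # %E9%A6%99%E6%B8%AF

# urlencode quotes every value in a parameter mapping:
print(urlencode({'username': 'user', 'q': '香港'}))  # username=user&q=%E9%A6%99%E6%B8%AF
```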
| 1 | 2016-07-27T06:02:23Z | [
"python",
"urlencode",
"chinese-locale"
] |
Combination of two lists while keeping the order | 38,600,453 | <p>I'm trying to join two lists and output all possible combinations of the merged list that maintains the ordering of the original two lists. For example:</p>
<pre><code>list_1 = [9,8]
list_2 = [2,1]
#output
combo= [9821,9281,2981,2918,2198,9218]
</code></pre>
<p>where in each element in the list "combo", <strong>2</strong> always comes before <strong>1</strong> and <strong>9</strong> always comes before <strong>8</strong>.</p>
<p>So far I've used permutations from itertools to loop over all possible permutations, but it is not fast enough.</p>
<p>Here's what I got:</p>
<pre><code>from itertools import permutations
seq = [5, 9, 8, 2, 1]
plist = []
root = seq[0]
left = filter(lambda x: x > root, seq)
right = filter(lambda x: x < root, seq)
for pseq in permutations(seq[1:]):
pseq = (root,) + pseq
if list(filter(lambda x: x > root, pseq)) == left and list(filter(lambda x: x < root, pseq)) == right:
plist.append(pseq)
print plist
</code></pre>
<p>Thanks!</p>
| 9 | 2016-07-26T21:50:11Z | 38,600,652 | <p>Give this a try:</p>
<pre><code>import itertools
lst1 = ['a', 'b']
lst2 = [1, 2]
for locations in itertools.combinations(range(len(lst1) + len(lst2)), len(lst2)):
result = lst1[:]
for location, element in zip(locations, lst2):
result.insert(location, element)
print(''.join(map(str, result)))
# Output:
# 12ab
# 1a2b
# 1ab2
# a12b
# a1b2
# ab12
</code></pre>
<p>The way I think of the problem, you start with the first sequence (<code>ab</code> in this case), and then look for all the possible places you can insert the elements of the second sequence (in this case, a <code>1</code> and then a <code>2</code>).</p>
<p>The <code>itertools.combinations</code> call gives you those combinations. In the above example, it iterates through the positions <code>(0, 1)</code>, <code>(0, 2)</code>, <code>(0, 3)</code>, <code>(1, 2)</code>, <code>(1, 3)</code>, <code>(2, 3)</code>.</p>
<p>For each of those sets of coordinates, we just insert the elements from the second list at the specified indexes.</p>
<p><strong>UPDATE</strong></p>
<p>Here's a recursive solution that handles any number of lists, based on @Đặng Xuân Thành's suggestion in his answer:</p>
<pre><code>import itertools
def in_order_combinations(*lists):
lists = list(filter(len, lists))
if len(lists) == 0:
yield []
for lst in lists:
element = lst.pop()
for combination in in_order_combinations(*lists):
yield combination + [element]
lst.append(element)
for combo in in_order_combinations(['a', 'b'], [1, 2]):
print(''.join(map(str, combo)))
</code></pre>
<p>The basic idea is that, starting with <code>ab</code> and <code>12</code>, you know that all possible solutions will either end with <code>b</code> or <code>2</code>. The ones that end with <code>b</code> will all start with a solution for (<code>a</code>, <code>12</code>). The ones that end with <code>2</code> will all start with a solution for (<code>ab</code>, <code>1</code>).</p>
<p>The base case for the recursion is simply when there are no lists left. (Empty lists are pruned as we go.)</p>
| 4 | 2016-07-26T22:07:51Z | [
"python",
"combinations",
"permutation",
"itertools"
] |
Combination of two lists while keeping the order | 38,600,453 | <p>I'm trying to join two lists and output all possible combinations of the merged list that maintains the ordering of the original two lists. For example:</p>
<pre><code>list_1 = [9,8]
list_2 = [2,1]
#output
combo= [9821,9281,2981,2918,2198,9218]
</code></pre>
<p>where in each element in the list "combo", <strong>2</strong> always comes before <strong>1</strong> and <strong>9</strong> always comes before <strong>8</strong>.</p>
<p>So far I've used permutations from itertools to loop over all possible permutations, but it is not fast enough.</p>
<p>Here's what I got:</p>
<pre><code>from itertools import permutations
seq = [5, 9, 8, 2, 1]
plist = []
root = seq[0]
left = filter(lambda x: x > root, seq)
right = filter(lambda x: x < root, seq)
for pseq in permutations(seq[1:]):
pseq = (root,) + pseq
if list(filter(lambda x: x > root, pseq)) == left and list(filter(lambda x: x < root, pseq)) == right:
plist.append(pseq)
print plist
</code></pre>
<p>Thanks!</p>
| 9 | 2016-07-26T21:50:11Z | 38,600,753 | <p>I don't know much about Python, but I have an idea that may help.<br>
The idea is to use recursion:<br>
To join two lists of n & m items, there are two cases:</p>
<ul>
<li>Join two lists of n-1 & m items, then put the n-th item at the end.</li>
<li>Join two lists of n & m-1 items, then put the m-th item at the end.</li>
</ul>
<p>Using this recursion, you just need to handle the simplest case: joining two lists where one of them is empty.
It will be faster than using permutations.
Hope it helps.</p>
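<p>A minimal Python sketch of the recursion described above (my own wording of the idea; names are illustrative):</p>

```python
def merge_orders(a, b):
    # Base case: once either list is empty there is only one way to finish.
    if not a or not b:
        return [a + b]
    # Either the last item of a or the last item of b ends the merged list.
    return ([combo + [a[-1]] for combo in merge_orders(a[:-1], b)] +
            [combo + [b[-1]] for combo in merge_orders(a, b[:-1])])

print(merge_orders([9, 8], [2, 1]))  # 6 orderings, each keeping 9 before 8 and 2 before 1
```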
| 1 | 2016-07-26T22:15:22Z | [
"python",
"combinations",
"permutation",
"itertools"
] |
Combination of two lists while keeping the order | 38,600,453 | <p>I'm trying to join two lists and output all possible combinations of the merged list that maintains the ordering of the original two lists. For example:</p>
<pre><code>list_1 = [9,8]
list_2 = [2,1]
#output
combo= [9821,9281,2981,2918,2198,9218]
</code></pre>
<p>where in each element in the list "combo", <strong>2</strong> always comes before <strong>1</strong> and <strong>9</strong> always comes before <strong>8</strong>.</p>
<p>So far I've used permutations from itertools to loop over all possible permutations, but it is not fast enough.</p>
<p>Here's what I got:</p>
<pre><code>from itertools import permutations
seq = [5, 9, 8, 2, 1]
plist = []
root = seq[0]
left = filter(lambda x: x > root, seq)
right = filter(lambda x: x < root, seq)
for pseq in permutations(seq[1:]):
pseq = (root,) + pseq
if list(filter(lambda x: x > root, pseq)) == left and list(filter(lambda x: x < root, pseq)) == right:
plist.append(pseq)
print plist
</code></pre>
<p>Thanks!</p>
| 9 | 2016-07-26T21:50:11Z | 38,601,070 | <p>A solution using a recursive generator (requires Python 3 for <code>yield from ...</code>):</p>
<pre><code>def f(a,b,p=[]):
if len(a)==0 or len(b)==0:
yield p+a+b
else:
yield from f(a[1:],b,p+[a[0]])
yield from f(a,b[1:],p+[b[0]])
</code></pre>
<p>At each step, you can pick the first character of <code>a</code> or the first character of <code>b</code>, and recursively build the rest of the list(s). If one of the two becomes empty, there are no more choice points.</p>
<pre><code>>>> list(f([9,8],[2,1]))
[[9, 8, 2, 1], [9, 2, 8, 1], [9, 2, 1, 8], [2, 9, 8, 1], [2, 9, 1, 8], [2, 1, 9, 8]]
</code></pre>
<p>Update: starting from the above solution, here's an implementation that handles any number of lists:</p>
<pre><code>def f(*args,p=[]):
if any(len(arg)==0 for arg in args):
yield p+[el for arg in args for el in arg]
else:
for i,arg in enumerate(args):
args1=list(args)
args1[i]=arg[1:]
yield from f(*args1,p=p+[arg[0]])
</code></pre>
| 2 | 2016-07-26T22:46:53Z | [
"python",
"combinations",
"permutation",
"itertools"
] |
Combination of two lists while keeping the order | 38,600,453 | <p>I'm trying to join two lists and output all possible combinations of the merged list that maintains the ordering of the original two lists. For example:</p>
<pre><code>list_1 = [9,8]
list_2 = [2,1]
#output
combo= [9821,9281,2981,2918,2198,9218]
</code></pre>
<p>where in each element in the list "combo", <strong>2</strong> always comes before <strong>1</strong> and <strong>9</strong> always comes before <strong>8</strong>.</p>
<p>So far I've used permutations from itertools to loop over all possible permutations, but it is not fast enough.</p>
<p>Here's what I got:</p>
<pre><code>from itertools import permutations
seq = [5, 9, 8, 2, 1]
plist = []
root = seq[0]
left = filter(lambda x: x > root, seq)
right = filter(lambda x: x < root, seq)
for pseq in permutations(seq[1:]):
pseq = (root,) + pseq
if list(filter(lambda x: x > root, pseq)) == left and list(filter(lambda x: x < root, pseq)) == right:
plist.append(pseq)
print plist
</code></pre>
<p>Thanks!</p>
| 9 | 2016-07-26T21:50:11Z | 38,601,112 | <p>Long(ish) one-liner</p>
<pre><code>from itertools import *
from copy import deepcopy
list({''.join(str(l.pop(0)) for l in deepcopy(p)) for p in permutations(chain(repeat(list_1, len(list_1)), repeat(list_2, len(list_2))))})
</code></pre>
<p>See my answer to the similar problem <a href="http://stackoverflow.com/questions/36260956/all-possible-ways-to-interleave-two-strings/36264336#36264336">All possible ways to interleave two strings</a> for an explanation.</p>
| 1 | 2016-07-26T22:50:40Z | [
"python",
"combinations",
"permutation",
"itertools"
] |
Combination of two lists while keeping the order | 38,600,453 | <p>I'm trying to join two lists and output all possible combinations of the merged list that maintains the ordering of the original two lists. For example:</p>
<pre><code>list_1 = [9,8]
list_2 = [2,1]
#output
combo= [9821,9281,2981,2918,2198,9218]
</code></pre>
<p>where in each element in the list "combo", <strong>2</strong> always comes before <strong>1</strong> and <strong>9</strong> always comes before <strong>8</strong>.</p>
<p>So far I've used permutations from itertools to loop over all possible permutations, but it is not fast enough.</p>
<p>Here's what I got:</p>
<pre><code>from itertools import permutations
seq = [5, 9, 8, 2, 1]
plist = []
root = seq[0]
left = filter(lambda x: x > root, seq)
right = filter(lambda x: x < root, seq)
for pseq in permutations(seq[1:]):
pseq = (root,) + pseq
if list(filter(lambda x: x > root, pseq)) == left and list(filter(lambda x: x < root, pseq)) == right:
plist.append(pseq)
print plist
</code></pre>
<p>Thanks!</p>
| 9 | 2016-07-26T21:50:11Z | 38,601,116 | <p>It would be a bit cleaner if your output was a list of lists instead of concatenated digits, but it doesn't matter. Here's a simple recursive solution in python3 (but you can trivially convert it to python2).</p>
<pre><code>def combine(xs, ys):
if xs == []: return [ys]
if ys == []: return [xs]
x, *xs_tail = xs
y, *ys_tail = ys
return [ [x] + l for l in combine(xs_tail, ys) ] + \
[ [y] + l for l in combine(ys_tail, xs) ]
</code></pre>
<p>This will return a list of lists:</p>
<pre><code>>>> combine([9, 8], [2, 1])
[[9, 8, 2, 1], [9, 2, 1, 8], [9, 2, 8, 1], [2, 1, 9, 8], [2, 9, 8, 1], [2, 9, 1, 8]]
</code></pre>
<p>Here's how to convert it to your desired output:</p>
<pre><code>def list_to_int(digits):
return int(''.join(map(str, digits)))
def combine_flat(xs, ys):
return [list_to_int(l) for l in combine(xs, ys)]
</code></pre>
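<p>A quick sanity check (Python 3; the function above is restated so the snippet runs standalone): the number of interleavings of lists of lengths <em>m</em> and <em>n</em> should be C(m+n, n), and every result must keep each input list's internal order.</p>

```python
from math import factorial

def combine(xs, ys):
    # all interleavings of xs and ys that preserve each list's own order
    if xs == []: return [ys]
    if ys == []: return [xs]
    x, *xs_tail = xs
    y, *ys_tail = ys
    return [[x] + l for l in combine(xs_tail, ys)] + \
           [[y] + l for l in combine(ys_tail, xs)]

def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

result = combine([9, 8], [2, 1])
assert len(result) == choose(4, 2)  # C(2+2, 2) = 6 interleavings
# 9 always precedes 8, and 2 always precedes 1, in every result
assert all(r.index(9) < r.index(8) and r.index(2) < r.index(1) for r in result)
```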
| 3 | 2016-07-26T22:51:00Z | [
"python",
"combinations",
"permutation",
"itertools"
] |
code to regenerate a dataframe in pandas for Stackoverflow/SO questions | 38,600,508 | <p>Let's say I have the following dataframe. I want to ask a question on Stackoverflow/SO regarding a type of manipulation I am trying to do. Now, to help users on SO it's generally best practice to supply the code to regenerate the dataframe in question.</p>
<pre><code> sunlight
sum count
city date
SFO 2014-05-31 -1805.04 31
SFO 2014-06-30 -579.52 30
SFO 2014-07-31 1025.51 31
SFO 2014-08-31 -705.18 31
SFO 2014-09-30 -1214.33 30
</code></pre>
<p>I don't want to manually type in all the text required to supply the code that generates the above dataframe. Is there a pandas function/command I can invoke that would output the dataframe in some sort of structure that someone can easily copy and paste into their python/ipython command line in order to generate the dataframe object? Something like <code>df.head().to_clipboard()</code>, but instead of copying the display of the df, copy the code required to produce the df.</p>
<p>The above dataframe is fairly simple but for complicated dataframes its extremely cumbersome to manually type in the code required to generate the dataframe in a SO question. </p>
| 2 | 2016-07-26T21:54:42Z | 38,600,809 | <p>Use <code>to_dict()</code></p>
<p>Let's say you have this <code>df</code></p>
<pre><code>df = pd.DataFrame(np.arange(16).reshape(4, 4), list('abcd'),
pd.MultiIndex.from_product([list('AB'), ['One', 'Two']]))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/0Jsjf.png" rel="nofollow"><img src="http://i.stack.imgur.com/0Jsjf.png" alt="enter image description here"></a></p>
<pre><code>print df
A B
One Two One Two
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
</code></pre>
<p>I'd first print <code>df.to_dict()</code></p>
<pre><code>print df.to_dict()
{('B', 'One'): {'a': 2, 'c': 10, 'b': 6, 'd': 14}, ('A', 'Two'): {'a': 1, 'c': 9, 'b': 5, 'd': 13}, ('A', 'One'): {'a': 0, 'c': 8, 'b': 4, 'd': 12}, ('B', 'Two'): {'a': 3, 'c': 11, 'b': 7, 'd': 15}}
</code></pre>
<p>Then I'd copy that and paste it into a <code>pd.DataFrame()</code>. You can slightly format the copied text for readability.</p>
<pre><code>df = pd.DataFrame({('B', 'One'): {'a': 2, 'c': 10, 'b': 6, 'd': 14},
('A', 'Two'): {'a': 1, 'c': 9, 'b': 5, 'd': 13},
('A', 'One'): {'a': 0, 'c': 8, 'b': 4, 'd': 12},
('B', 'Two'): {'a': 3, 'c': 11, 'b': 7, 'd': 15}})
df
</code></pre>
<p><a href="http://i.stack.imgur.com/0Jsjf.png" rel="nofollow"><img src="http://i.stack.imgur.com/0Jsjf.png" alt="enter image description here"></a></p>
<pre><code>print df
A B
One Two One Two
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
</code></pre>
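<p>If you want to be sure the pasted dict really reproduces the original frame, a small round-trip check works (a sketch, assuming the same imports as above; row and column order of the rebuilt frame can differ, so reindex before comparing):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(16).reshape(4, 4), list('abcd'),
                  pd.MultiIndex.from_product([list('AB'), ['One', 'Two']]))

# rebuild from the dict, then align row/column order with the original
rebuilt = pd.DataFrame(df.to_dict()).reindex(index=df.index, columns=df.columns)
assert rebuilt.equals(df)
```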
| 4 | 2016-07-26T22:21:57Z | [
"python",
"pandas"
] |
How to open a Popup window which contains a graph written in javascript? | 38,600,514 | <p>I am using Django to implement a website and I would like to add a button such that when someone clicks it, it opens a popup window containing a graph written in javascript. Since I am writing the website in Django, I need to call a function in views.py to get the updated data and then draw the graph based on that. I originally wanted to update the graph on the same page, but now I would like to open a popup window. Could someone help me modify the code so that the button pops up a smaller window containing the graph I implemented? Thanks!
Here is my code in main.html:</p>
<pre><code># I first have a button that could be clicked
<div class="col-lg-4">
<p><button class="btn btn-primary" type="button" name="display_list" id="display_list">Display List</button></p>
</div>
# here is the script I used to open up a Popup window such that the returned result would be displayed on that separate window
<script>
$(document).ready(function(){
$("#display_list").click(function(){
$.get("display_list/", function(ret){
$('#result').bPopup(); #I think probably I did this wrong?
});
});
});
</script>
</code></pre>
<p>And here is the code I used to draw the graph in a separate html file(show_result.html):</p>
<pre><code><div id="result">
<script> javascript that draws a graph on html webpage. I pass the updated variable from the corresponding function in views.py to here by calling render(). </script>
</div>
</code></pre>
<p>Here is my function in the views.py:</p>
<pre><code>def display_list(request):
#some function implementation to get the result and put it in context
return render(request, "show_result.html",context)
</code></pre>
<p>And this is the code in my url file:</p>
<pre><code>url(r'^display_list/$', views.display_list, name='display_list'),
</code></pre>
<p>Is it possible to Popup a div in html? And what should I do in my case?</p>
<p>Thanks a lot.</p>
 | 0 | 2016-07-26T21:55:11Z | 38,600,858 | <p>It's completely possible using Bootstrap modals; here's the documentation:
<a href="http://www.w3schools.com/bootstrap/bootstrap_modal.asp" rel="nofollow">http://www.w3schools.com/bootstrap/bootstrap_modal.asp</a></p>
<p>And a sample code :</p>
<pre><code><div class="modal fade" id="id_you_want_for_modal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button>
<h4 class="modal-title" id="myModalLabel"> Title you want </h4>
</div>
<div class="modal-body">
<!-- Your Graph here -->
</div>
</div>
</div>
</div>
</code></pre>
<p>Do not forget to include the bootstrap js and css files in your project :</p>
<p><a href="http://getbootstrap.com/getting-started/" rel="nofollow">http://getbootstrap.com/getting-started/</a></p>
<p><a href="http://www.w3schools.com/bootstrap/bootstrap_get_started.asp" rel="nofollow">http://www.w3schools.com/bootstrap/bootstrap_get_started.asp</a></p>
<p>Hope it helps !</p>
| 0 | 2016-07-26T22:26:09Z | [
"javascript",
"jquery",
"python",
"html",
"django"
] |
How to open a Popup window which contains a graph written in javascript? | 38,600,514 | <p>I am using Django to implement a website and I would like to add a button such that when someone clicks it, it opens a popup window containing a graph written in javascript. Since I am writing the website in Django, I need to call a function in views.py to get the updated data and then draw the graph based on that. I originally wanted to update the graph on the same page, but now I would like to open a popup window. Could someone help me modify the code so that the button pops up a smaller window containing the graph I implemented? Thanks!
Here is my code in main.html:</p>
<pre><code># I first have a button that could be clicked
<div class="col-lg-4">
<p><button class="btn btn-primary" type="button" name="display_list" id="display_list">Display List</button></p>
</div>
# here is the script I used to open up a Popup window such that the returned result would be displayed on that separate window
<script>
$(document).ready(function(){
$("#display_list").click(function(){
$.get("display_list/", function(ret){
$('#result').bPopup(); #I think probably I did this wrong?
});
});
});
</script>
</code></pre>
<p>And here is the code I used to draw the graph in a separate html file(show_result.html):</p>
<pre><code><div id="result">
<script> javascript that draws a graph on html webpage. I pass the updated variable from the corresponding function in views.py to here by calling render(). </script>
</div>
</code></pre>
<p>Here is my function in the views.py:</p>
<pre><code>def display_list(request):
#some function implementation to get the result and put it in context
return render(request, "show_result.html",context)
</code></pre>
<p>And this is the code in my url file:</p>
<pre><code>url(r'^display_list/$', views.display_list, name='display_list'),
</code></pre>
<p>Is it possible to Popup a div in html? And what should I do in my case?</p>
<p>Thanks a lot.</p>
 | 0 | 2016-07-26T21:55:11Z | 38,605,945 | <p>There are two ways to perform the task you want. Here are both methods.</p>
<p><strong>Version 1 (The synchronous way)</strong><br>
Suppose you have a url say <code>/x/</code> which opens <code>main.html</code>. So you can add whatever data, graph needs to the <code>context</code> on a <code>GET</code> call. Example:</p>
<pre><code>def x(request):
context = {}
# Add data that is needed to draw the graph, in your context
return render(request, "main.html",context)
</code></pre>
<p>Now you have the data that is needed to draw your graph, in your <code>main.html</code>'s <code>context</code>. Now you can simply use a Bootstrap modal to draw your graph in a pop up.</p>
<pre><code><div class="col-lg-4">
<p><button class="btn btn-primary" type="button" data-toggle="modal" data-target="#myModal"id="display_list">Display List</button></p>
</div>
<!-- Modal -->
<div id="myModal" class="modal fade" role="dialog">
<div class="modal-dialog">
<!-- Modal content-->
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal">&times;</button>
<h4 class="modal-title">Modal Header</h4>
</div>
<div class="modal-body" id="modal-body">
<p>Some text in the modal.</p>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
</code></pre>
<p>You don't need the click event listener on <code>#display_list</code> since Bootstrap handles that.</p>
<pre><code><script>
// Put your graph logic here and use '#modal-body' to render the graph
</script>
</code></pre>
<p><strong>Version 2 (The Async way)</strong> <br>
In this case we have already opened the page on <code>/x/</code> and we will get data from <code>/display_list/</code> via an <code>AJAX</code> <code>GET</code> call.</p>
<pre><code>def display_list(request):
''' Function to return data in json format to ajax call '''
context = {}
#Get all data required to draw the graph and put it in context
return JsonResponse(context)
</code></pre>
<p>Since you want to send the <code>AJAX</code> request and then open the modal when the button is clicked, you need to remove <code>data-toggle="modal" data-target="#myModal"</code> from the button to prevent it from opening immediately. Change the button to:</p>
<pre><code><p><button class="btn btn-primary" type="button" id="display_list">Display List</button></p>
</code></pre>
<p>Now you can hit the url <code>/display_list/</code> to get your data. In your <code>main.html</code> add the Bootstrap modal element as in <strong>version 1</strong>. Now add the following Javascript to <code>main.html</code> to get the data.</p>
<pre><code><script>
$(document).ready(function(){
$("#display_list").click(function(e){
e.preventDefault();
var modalBody = $("#modal-body");
// AJAX call to get the data
$.ajax({
url: '/display_list/',
type: 'GET',
success: function(data, status, xhr) {
console.log(data);
// Add your graph logic here and use modalBody to draw on it
}
});
//Now display the modal
$("#myModal").modal('show');
});
});
</script>
</code></pre>
<p><strong>NOTE</strong> <br>
Remember to add Bootstrap's CSS and JS files.</p>
<pre><code><link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
</code></pre>
<p><strong>Example</strong> <br>
So I will be going by <strong>version 1</strong> here in the example. I am using the graph provided <a href="http://bl.ocks.org/mbostock/1153292" rel="nofollow">here</a>
<strong>Step 1:</strong> Your <code>view x</code> should look like this:</p>
<pre><code>def x(request):
context = {}
links = [
{'source': "Microsoft", 'target': "Amazon", 'type': "licensing"},
{'source': "Microsoft", 'target': "HTC", 'type': "licensing"},
{'source': "Samsung", 'target': "Apple", 'type': "suit"},
{'source': "Motorola", 'target': "Apple", 'type': "suit"},
{'source': "Nokia", 'target': "Apple", 'type': "resolved"},
{'source': "HTC", 'target': "Apple", 'type': "suit"},
{'source': "Kodak", 'target': "Apple", 'type': "suit"},
{'source': "Microsoft", 'target': "Barnes & Noble", 'type': "suit"},
{'source': "Microsoft", 'target': "Foxconn", 'type': "suit"},
{'source': "Oracle", 'target': "Google", 'type': "suit"},
{'source': "Apple", 'target': "HTC", 'type': "suit"},
{'source': "Microsoft", 'target': "Inventec", 'type': "suit"},
{'source': "Samsung", 'target': "Kodak", 'type': "resolved"},
{'source': "LG", 'target': "Kodak", 'type': "resolved"},
{'source': "RIM", 'target': "Kodak", 'type': "suit"},
{'source': "Sony", 'target': "LG", 'type': "suit"},
{'source': "Kodak", 'target': "LG", 'type': "resolved"},
{'source': "Apple", 'target': "Nokia", 'type': "resolved"},
{'source': "Qualcomm", 'target': "Nokia", 'type': "resolved"},
{'source': "Apple", 'target': "Motorola", 'type': "suit"},
{'source': "Microsoft", 'target': "Motorola", 'type': "suit"},
{'source': "Motorola", 'target': "Microsoft", 'type': "suit"},
{'source': "Huawei", 'target': "ZTE", 'type': "suit"},
{'source': "Ericsson", 'target': "ZTE", 'type': "suit"},
{'source': "Kodak", 'target': "Samsung", 'type': "resolved"},
{'source': "Apple", 'target': "Samsung", 'type': "suit"},
{'source': "Kodak", 'target': "RIM", 'type': "suit"},
{'source': "Nokia", 'target': "Qualcomm", 'type': "suit"}
]
context['links'] = links
return render(request, 'main.html', context)
</code></pre>
<p><strong>Step 2:</strong> In your <code>main.html</code> add the following to your <code><head></code> tag.</p>
<pre><code><script src="https://code.jquery.com/jquery-2.2.4.min.js" integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" crossorigin="anonymous"></script>
<script src="//d3js.org/d3.v3.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<style>
.link {
fill: none;
stroke: #666;
stroke-width: 1.5px;
}
#licensing {
fill: green;
}
.link.licensing {
stroke: green;
}
.link.resolved {
stroke-dasharray: 0,2 1;
}
circle {
fill: #ccc;
stroke: #333;
stroke-width: 1.5px;
}
text {
font: 10px sans-serif;
pointer-events: none;
text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;
}
.modal-dialog {
width: 63% !important;
}
</style>
</code></pre>
<p><strong>Step 3:</strong> This will be your <code><body></code> tag in <code>main.html</code>; in it you will need a global <code>var links = {{ links|safe }}</code> variable. We can't move the script to a separate file since the Django template tag won't work there.</p>
<pre><code><body>
<div class="col-lg-4">
<p><button class="btn btn-primary" type="button" data-toggle="modal" data-target="#myModal" id="display_list">Display List</button></p>
</div>
<!-- Modal -->
<div id="myModal" class="modal fade" role="dialog">
<div class="modal-dialog">
<!-- Modal content-->
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal">&times;</button>
<h4 class="modal-title">Modal Header</h4>
</div>
<div class="modal-body" id="modal-body">
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
<script>
var links = {{ links|safe}};
var nodes = {};
// Use elliptical arc path segments to doubly-encode directionality.
function tick() {
path.attr("d", linkArc);
circle.attr("transform", transform);
text.attr("transform", transform);
}
function linkArc(d) {
var dx = d.target.x - d.source.x,
dy = d.target.y - d.source.y,
dr = Math.sqrt(dx * dx + dy * dy);
return "M" + d.source.x + "," + d.source.y + "A" + dr + "," + dr + " 0 0,1 " + d.target.x + "," + d.target.y;
}
function transform(d) {
return "translate(" + d.x + "," + d.y + ")";
}
// Compute the distinct nodes from the links.
links.forEach(function(link) {
link.source = nodes[link.source] || (nodes[link.source] = {name: link.source});
link.target = nodes[link.target] || (nodes[link.target] = {name: link.target});
});
var width = 860,
height = 500;
var force = d3.layout.force()
.nodes(d3.values(nodes))
.links(links)
.size([width, height])
.linkDistance(60)
.charge(-300)
.on("tick", tick)
.start();
var svg = d3.select("#modal-body").append("svg")
.attr("width", width)
.attr("height", height);
// Per-type markers, as they don't inherit styles.
svg.append("defs").selectAll("marker")
.data(["suit", "licensing", "resolved"])
.enter().append("marker")
.attr("id", function(d) { return d; })
.attr("viewBox", "0 -5 10 10")
.attr("refX", 15)
.attr("refY", -1.5)
.attr("markerWidth", 6)
.attr("markerHeight", 6)
.attr("orient", "auto")
.append("path")
.attr("d", "M0,-5L10,0L0,5");
var path = svg.append("g").selectAll("path")
.data(force.links())
.enter().append("path")
.attr("class", function(d) { return "link " + d.type; })
.attr("marker-end", function(d) { return "url(#" + d.type + ")"; });
var circle = svg.append("g").selectAll("circle")
.data(force.nodes())
.enter().append("circle")
.attr("r", 6)
.call(force.drag);
var text = svg.append("g").selectAll("text")
.data(force.nodes())
.enter().append("text")
.attr("x", 8)
.attr("y", ".31em")
.text(function(d) { return d.name; });
</script>
</body>
</code></pre>
<p>That's it, you are ready to go. Note that you don't need to add a click event to <code>#display_list</code>, as Bootstrap handles all of that.</p>
<p>Here's a <a href="https://jsbin.com/rohoba/edit?html,css" rel="nofollow">JSBin demo</a></p>
| 1 | 2016-07-27T07:03:43Z | [
"javascript",
"jquery",
"python",
"html",
"django"
] |
Loop over list of dictionaries and within list values | 38,600,591 | <p>Suppose I have a list of two dictionaries:</p>
<pre><code>iter_list = [{0: [2, 1, 3],
1: [3, 2, 1],
2: [1, 2, 3],
3: [2, 3, 1],
4: [1, 2, 3],
5: [2, 3, 1]},
{0: [2, 3, 1],
1: [1, 2, 3],
2: [1, 2, 3],
3: [1, 2, 3],
4: [2, 3, 1],
5: [1, 3, 2]}]
</code></pre>
<p>Each dictionary has 6 keys numbered 0 through 5.
I would like to loop through each dictionary one at a time and output pairs of key and value in key order: first each key (sorted) paired with the first element of its list, then each key with the second element, and so on. Hopefully the example output will clarify:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2 //2nd dictionary iteration
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
<p>I've only been able to figure out how to loop through the first position of the first dictionary, but can't figure out how to use the [0] as an iteration variable to go through all positions in the value lists before moving on to the next dictionary. </p>
<pre><code>for i in iter_list:
for key, value in i.iteritems():
print key, value[0]
</code></pre>
<p>Any help is greatly appreciated! Thanks!</p>
 | 0 | 2016-07-26T22:02:03Z | 38,600,663 | <p>If I understand you correctly, you need to iterate over each dictionary 3 times, once per list position:</p>
<pre><code>>>> for i in iter_list:
... for x in range(3):
... for key, value in i.iteritems():
... print key, value[x]
...
</code></pre>
<p>Output:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
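<p>If the value lists are not guaranteed to all have length 3, the hard-coded <code>range(3)</code> can be derived from the data instead. A Python 3 sketch of that variation (<code>sorted()</code> also makes the key order explicit rather than relying on dict ordering):</p>

```python
iter_list = [
    {0: [2, 1, 3], 1: [3, 2, 1], 2: [1, 2, 3],
     3: [2, 3, 1], 4: [1, 2, 3], 5: [2, 3, 1]},
    {0: [2, 3, 1], 1: [1, 2, 3], 2: [1, 2, 3],
     3: [1, 2, 3], 4: [2, 3, 1], 5: [1, 3, 2]},
]

pairs = []
for d in iter_list:
    # derive the inner range from the shortest list instead of hardcoding 3
    n = min(len(v) for v in d.values())
    for i in range(n):
        for key in sorted(d):          # sorted() guarantees key order
            pairs.append((key, d[key][i]))

assert pairs[:6] == [(0, 2), (1, 3), (2, 1), (3, 2), (4, 1), (5, 2)]
```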
| 5 | 2016-07-26T22:08:48Z | [
"python",
"python-2.7"
] |
Loop over list of dictionaries and within list values | 38,600,591 | <p>Suppose I have a list of two dictionaries:</p>
<pre><code>iter_list = [{0: [2, 1, 3],
1: [3, 2, 1],
2: [1, 2, 3],
3: [2, 3, 1],
4: [1, 2, 3],
5: [2, 3, 1]},
{0: [2, 3, 1],
1: [1, 2, 3],
2: [1, 2, 3],
3: [1, 2, 3],
4: [2, 3, 1],
5: [1, 3, 2]}]
</code></pre>
<p>Each dictionary has 6 keys numbered 0 through 5.
I would like to loop through each dictionary one at a time and output pairs of key and value in key order: first each key (sorted) paired with the first element of its list, then each key with the second element, and so on. Hopefully the example output will clarify:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2 //2nd dictionary iteration
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
<p>I've only been able to figure out how to loop through the first position of the first dictionary, but can't figure out how to use the [0] as an iteration variable to go through all positions in the value lists before moving on to the next dictionary. </p>
<pre><code>for i in iter_list:
for key, value in i.iteritems():
print key, value[0]
</code></pre>
<p>Any help is greatly appreciated! Thanks!</p>
| 0 | 2016-07-26T22:02:03Z | 38,600,694 | <p>A list comprehension should do:</p>
<pre><code>>>> output_list = [[k, d[k][i]] for d in iter_list for i in range(3) for k in sorted(d)]
>>> for k, dki in output_list:
... print k, dki
...
0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
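<p>For completeness, here is the comprehension with the question's data inlined so it can be run directly as a quick check (Python 3):</p>

```python
iter_list = [
    {0: [2, 1, 3], 1: [3, 2, 1], 2: [1, 2, 3],
     3: [2, 3, 1], 4: [1, 2, 3], 5: [2, 3, 1]},
    {0: [2, 3, 1], 1: [1, 2, 3], 2: [1, 2, 3],
     3: [1, 2, 3], 4: [2, 3, 1], 5: [1, 3, 2]},
]

# per dictionary: per list position i, emit each sorted key with its i-th value
output_list = [[k, d[k][i]] for d in iter_list for i in range(3) for k in sorted(d)]

assert output_list[:6] == [[0, 2], [1, 3], [2, 1], [3, 2], [4, 1], [5, 2]]
```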
| 1 | 2016-07-26T22:10:18Z | [
"python",
"python-2.7"
] |
Loop over list of dictionaries and within list values | 38,600,591 | <p>Suppose I have a list of two dictionaries:</p>
<pre><code>iter_list = [{0: [2, 1, 3],
1: [3, 2, 1],
2: [1, 2, 3],
3: [2, 3, 1],
4: [1, 2, 3],
5: [2, 3, 1]},
{0: [2, 3, 1],
1: [1, 2, 3],
2: [1, 2, 3],
3: [1, 2, 3],
4: [2, 3, 1],
5: [1, 3, 2]}]
</code></pre>
<p>Each dictionary has 6 keys numbered 0 through 5.
I would like to loop through each dictionary one at a time and output pairs of key and value in key order: first each key (sorted) paired with the first element of its list, then each key with the second element, and so on. Hopefully the example output will clarify:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2 //2nd dictionary iteration
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
<p>I've only been able to figure out how to loop through the first position of the first dictionary, but can't figure out how to use the [0] as an iteration variable to go through all positions in the value lists before moving on to the next dictionary. </p>
<pre><code>for i in iter_list:
for key, value in i.iteritems():
print key, value[0]
</code></pre>
<p>Any help is greatly appreciated! Thanks!</p>
| 0 | 2016-07-26T22:02:03Z | 38,600,789 | <p>perhaps you want to do this:</p>
<pre><code>for i in iter_list:
for j in range(3):
for key, value in i.iteritems():
print key, value[j]
</code></pre>
<p>assuming the values in the dictionaries have a fixed length of 3, otherwise you have to figure out where 3 comes from.
For example, it could be the <em>minimum</em> value length across all values of all dictionaries:</p>
<pre><code>>>> min(len(v) for i in iter_list for k, v in i.iteritems())
3
</code></pre>
| 1 | 2016-07-26T22:19:03Z | [
"python",
"python-2.7"
] |
Loop over list of dictionaries and within list values | 38,600,591 | <p>Suppose I have a list of two dictionaries:</p>
<pre><code>iter_list = [{0: [2, 1, 3],
1: [3, 2, 1],
2: [1, 2, 3],
3: [2, 3, 1],
4: [1, 2, 3],
5: [2, 3, 1]},
{0: [2, 3, 1],
1: [1, 2, 3],
2: [1, 2, 3],
3: [1, 2, 3],
4: [2, 3, 1],
5: [1, 3, 2]}]
</code></pre>
<p>Each dictionary has 6 keys numbered 0 through 5.
I would like to loop through each dictionary one at a time and output pairs of key and value in key order: first each key (sorted) paired with the first element of its list, then each key with the second element, and so on. Hopefully the example output will clarify:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 2 //2nd dictionary iteration
1 1
2 1
3 1
4 2
5 1
0 3
1 2
2 2
3 2
4 3
5 3
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
<p>I've only been able to figure out how to loop through the first position of the first dictionary, but can't figure out how to use the [0] as an iteration variable to go through all positions in the value lists before moving on to the next dictionary. </p>
<pre><code>for i in iter_list:
for key, value in i.iteritems():
print key, value[0]
</code></pre>
<p>Any help is greatly appreciated! Thanks!</p>
| 0 | 2016-07-26T22:02:03Z | 38,600,868 | <p>A different approach using <a href="https://docs.python.org/2.7/library/functions.html#zip" rel="nofollow"><code>zip</code></a> and <a href="https://docs.python.org/2.7/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools.chain</code></a> and <a href="https://docs.python.org/2.7/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">unpacking</a> tricks:</p>
<pre><code>from itertools import chain
packed_pairs = zip(*[[(k, v) for v in vs] for ds in iter_list for k, vs in ds.items()])
pairs = chain(*packed_pairs)
for pair in pairs:
print pair[0], pair[1]
</code></pre>
<p>Output:</p>
<pre><code>0 2
1 3
2 1
3 2
4 1
5 2
0 2
1 1
2 1
3 1
4 2
5 1
0 1
1 2
2 2
3 3
4 2
5 3
0 3
1 2
2 2
3 2
4 3
5 3
0 3
1 1
2 3
3 1
4 3
5 1
0 1
1 3
2 3
3 3
4 1
5 2
</code></pre>
<p><strong>Note:</strong> make sure to use <code>pairs = list(pairs)</code> if you want to re-use <code>pairs</code>.</p>
| 2 | 2016-07-26T22:27:10Z | [
"python",
"python-2.7"
] |
Names features importance plot after preprocessing | 38,600,813 | <p>Before building a model I scale the features like this:</p>
<pre><code>X = StandardScaler(with_mean = 0, with_std = 1).fit_transform(X)
</code></pre>
<p>and afterwards build a feature importance plot:</p>
<pre><code>xgb.plot_importance(bst, color='red')
plt.title('importance', fontsize = 20)
plt.yticks(fontsize = 10)
plt.ylabel('features', fontsize = 20)
</code></pre>
<p><a href="http://i.stack.imgur.com/8ighl.png" rel="nofollow"><img src="http://i.stack.imgur.com/8ighl.png" alt="enter image description here"></a></p>
<p>The problem is that instead of the feature names we get f0, f1, f2, f3, etc.
How can I return the feature names?</p>
<p>thanks</p>
 | 0 | 2016-07-26T22:22:14Z | 38,616,018 | <p>First we get the list of feature names before preprocessing:</p>
<pre><code>dtrain = xgb.DMatrix( X, label=y)
dtrain.feature_names
</code></pre>
<p>Then</p>
<pre><code># map xgboost's default names (f0, f1, ...) back to the real feature names
mapper = {'f{0}'.format(i): name for i, name in enumerate(dtrain.feature_names)}
# rename the keys of the importance dict returned by get_fscore()
mapped = {mapper[k]: v for k, v in bst.get_fscore().items()}
xgb.plot_importance(mapped, color='red')
</code></pre>
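<p>The renaming step itself needs nothing from xgboost, so it can be checked in isolation. In this sketch the feature names and scores are hypothetical placeholders, not values from the model above:</p>

```python
feature_names = ['age', 'income', 'city']   # stand-in for dtrain.feature_names
fscore = {'f0': 12, 'f2': 5}                # stand-in for bst.get_fscore()

# build the f0/f1/... -> real-name mapping and rename the score keys
mapper = {'f{0}'.format(i): name for i, name in enumerate(feature_names)}
mapped = {mapper[k]: v for k, v in fscore.items()}

assert mapped == {'age': 12, 'city': 5}
```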
<p>That's all.</p>
| 0 | 2016-07-27T14:34:24Z | [
"python",
"xgboost"
] |
incorrect camera pose from two-view SFM | 38,600,871 | <p>In order to validate the results of a two-view SFM approach for estimating the camera pose [R|t], I made use of the chessboard patterns which I used for calibration, especially since the "calibrateCamera" function in OpenCV returns rotation and translation vectors for each pattern. Hence, the relative pose between, say, the first two patterns can be calculated easily.</p>
<p>Yet I don't get the correct camera pose, and I have been struggling hard to figure out the problem, but to no avail.</p>
<p>I would really appreciate your contributions to solve my problem. </p>
<p><strong>MY CODE Description:</strong> </p>
<ul>
<li>undistort images</li>
<li>find chessboard corners in two images</li>
<li>match points (verified by plotting side to side the two images and the lines)</li>
<li>estimate fundamental matrix (verified : x'T * F * x = 0)</li>
<li>Essential Matrix (E) = KT * F * K (verified : X'T * E * X = 0)</li>
<li>SVD of E = U * S * VT</li>
<li><p>R = U * W * VT or U * WT * VT such that WT = [0,-1,0; 1,0,0; 0,0,1]</p>
<pre><code>FundMat, mask = cv2.findFundamentalMat(imgpoints1, imgpoints2, cv2.FM_LMEDS)
# is the fundamental matrix really a fundamental matrix?  x'^T F x = 0 ?
# verification of the fundamental matrix
for i in range(len(imgpoints1)):
X = np.array([imgpoints1[i][0],imgpoints1[i][1],1])
X_prime = np.array([imgpoints2[i][0],imgpoints2[i][1],1])
err = np.dot(np.dot(X_prime.T,FundMat),X)
if mask[i] == True:
print(err)
# E = [t]R = (K_-T)_-1 * F * K = K_T*F*K
term1 = np.dot(np.transpose(mtx), FundMat) # newcameramtx , mtx
E = np.dot(term1, mtx) # newcameramtx , mtx
# verification of the essential matrix
for i in range(len(imgpoints1)):
X_norm = np.dot(np.linalg.inv(mtx), np.array([imgpoints1[i][0],imgpoints1[i][1],1]).T)
X_prime_norm = np.dot(np.linalg.inv(mtx), np.array([imgpoints2[i][0],imgpoints2[i][1],1]).T)
err_Ess = np.dot(np.dot(X_prime_norm.T,E),X_norm)
if mask[i] == True:
print(err_Ess)
# SVD of E
U,S,V_T = np.linalg.svd(E)
# computation of Rotation and Translation without enforcement
W = np.array([[0,-1,0],[1,0,0],[0,0,1]])
Rot1 = np.dot(np.dot(U, W), V_T)
Rot2 = np.dot(np.dot(U, W.T), V_T)
</code></pre></li>
</ul>
| 2 | 2016-07-26T22:27:29Z | 38,615,754 | <p>Your problem is that you are using the points from the chessboard: you cannot estimate the Fundamental matrix from coplanar points. One way to fix this is to match scene points using a generic approach, like SIFT or SURF. The other way is to estimate the Essential matrix directly using the 5-point algorithm, because the Essential matrix can be calculated from coplanar points.</p>
<p>Also, keep in mind that you can only calculate the camera pose up to scale from the Essential matrix. In other words, your translation will end up being a unit vector. One way to calculate the scale factor to get the actual length of the translation is to use your chessboard.</p>
| 0 | 2016-07-27T14:24:14Z | [
"python",
"opencv",
"computer-vision",
"structure-from-motion"
] |
How to serialize pyspark GroupedData object? | 38,600,908 | <p>I am running a <code>groupBy()</code> on a dataset having several millions of records and want to save the resulting output (a pyspark <code>GroupedData</code> object) so that I can de-serialize it later and resume from that point (running aggregations on top of that as needed).</p>
<pre><code>df.groupBy("geo_city")
<pyspark.sql.group.GroupedData at 0x10503c5d0>
</code></pre>
<p>I want to avoid converting the GroupedData object into DataFrames or RDDs in order to save it to text file or parquet/avro format (as the conversion operation is expensive). Is there some other efficient way to store the <code>GroupedData</code> object into some binary format for faster read/write? Possibly some equivalent of pickle in Spark?</p>
| 1 | 2016-07-26T22:31:59Z | 38,601,071 | <p>There is none because <code>GroupedData</code> is not really a thing. It doesn't perform any operations on data at all. It only describes how actual aggregation should proceed when you execute an action on the results of a subsequent <code>agg</code>.</p>
<p>You could probably serialize underlaying JVM object and restore it later but it is a waste of time. Since <code>groupBy</code> only describes what has to be done the cost of recreating <code>GroupedData</code> object from scratch should be negligible. </p>
| 2 | 2016-07-26T22:47:01Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe",
"pyspark-sql"
] |
In Django, how to get all instances of a model where an instance of another model related to the first by fk does not exist? | 38,600,932 | <p>This is really tricky for me. I have the following models:</p>
<pre><code>class Item(models.Model):
    title = models.CharField(max_length=150)
    item_type = models.CharField(choices=item_choices)

class Suggestion(models.Model):
    user = models.ForeignKey(User)
    item = models.ForeignKey(Item)
<p>I need to get all the items of a given item_type that don't have a suggestion related to it and to a given user. Can this be achieved in one query?</p>
<p>Many thanks!</p>
| 0 | 2016-07-26T22:34:35Z | 38,600,990 | <p>One approach is to create a queryset of items with suggestions related to the user, then use that as a subquery.</p>
<pre><code>items_suggested_by_user = Item.objects.filter(suggestion__user=user)
item_types = Item.objects.filter(item_type=item_type)
items = item_types.exclude(pk__in=items_suggested_by_user)
</code></pre>
| 1 | 2016-07-26T22:39:58Z | [
"python",
"django"
] |
In Django, how to get all instances of a model where an instance of another model related to the first by fk does not exist? | 38,600,932 | <p>This is really tricky to me. I have the following models:</p>
<pre><code>class Item(models.Model):
    title = models.CharField(max_length=150)
    item_type = models.CharField(choices=item_choices)

class Suggestion(models.Model):
    user = models.ForeignKey(User)
    item = models.ForeignKey(Item)
<p>I need to get all the items of a given item_type that don't have a suggestion related to it and to a given user. Can this be achieved in one query?</p>
<p>Many thanks!</p>
| 0 | 2016-07-26T22:34:35Z | 38,600,999 | <p>This can be done with <code>.exclude()</code>:</p>
<pre><code>items = Item.objects.filter(item_type=item_type).exclude(suggestion__user=user)
</code></pre>
<p>This follows the backwards relation to <code>Suggestion</code>, and the forward relation to <code>User</code>, and excludes any <code>Item</code>s where the related user matches the given user. </p>
| 2 | 2016-07-26T22:40:43Z | [
"python",
"django"
] |
Inherit property getter documentation | 38,600,933 | <p>The decorator proposed <a href="http://stackoverflow.com/a/8101598/551045">here</a> is able to inherit the docstring for methods but not for properties and getters.</p>
<p>I have tried to naively expand it but it seems that docstrings of properties are read-only. Is there any way to inherit those?</p>
<pre><code>import types

def fix_docs(cls):
    for name, func in vars(cls).items():
        if isinstance(func, (types.FunctionType, property)) and not func.__doc__:
            print func, 'needs doc'
            for parent in cls.__bases__:
                parfunc = getattr(parent, name, None)
                if parfunc and getattr(parfunc, '__doc__', None):
                    func.__doc__ = parfunc.__doc__
                    break
    return cls

class X(object):
    """
    some doc
    """
    angle = 10
    """Not too steep."""

    def please_implement(self):
        """
        I have a very thorough documentation
        :return:
        """
        raise NotImplementedError

    @property
    def speed(self):
        """
        Current speed in knots/hour.
        :return:
        """
        return 0

    @speed.setter
    def speed(self, value):
        """
        :param value:
        :return:
        """
        pass

@fix_docs
class SpecialX(X):
    angle = 30

    def please_implement(self):
        return True

    @property
    def speed(self):
        return 10

    @speed.setter
    def speed(self, value):
        self.sp = value

help(X.speed)
help(X.angle)
help(SpecialX.speed)
help(SpecialX.angle)
</code></pre>
<p>This only gets me</p>
<pre><code>Traceback (most recent call last):
<function please_implement at 0x036101B0> needs doc
<property object at 0x035BE930> needs doc
File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2\helpers\pydev\pydevd.py", line 1556, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2\helpers\pydev\pydevd.py", line 940, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 48, in <module>
class SpecialX(X):
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 10, in fix_docs
func.__doc__ = parfunc.__doc__
TypeError: readonly attribute
</code></pre>
| 1 | 2016-07-26T22:34:37Z | 38,600,995 | <p>Yeah, property docstrings are read-only. You'd have to make a new property:</p>
<pre><code>replacement = property(fget=original.fget,
                       fset=original.fset,
                       fdel=original.fdel,
                       doc=parentprop.__doc__)
</code></pre>
<p>and replace the original with that.</p>
<p>It might be slightly better to replace the original function's docstring and then regenerate the property to automatically pass that through:</p>
<pre><code>original.fget.__doc__ = parentprop.__doc__
replacement = property(fget=original.fget,
                       fset=original.fset,
                       fdel=original.fdel)
</code></pre>
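As an illustration (a minimal Python 3 sketch that is not part of the original answer; the class names are made up), the docstring of a parent's property can be attached to a subclass property by rebuilding it and passing the parent's docstring as the fourth (`doc`) argument of `property()`:

```python
class Base:
    @property
    def speed(self):
        """Current speed in knots/hour."""
        return 0

class Child(Base):
    @property
    def speed(self):
        return 10

# Rebuild Child's property, reusing its accessors but taking
# the docstring from the parent's property object.
orig = vars(Child)['speed']
parent_prop = vars(Base)['speed']
Child.speed = property(orig.fget, orig.fset, orig.fdel, parent_prop.__doc__)

print(Child.speed.__doc__)  # Current speed in knots/hour.
print(Child().speed)        # 10
```

Note that the property object is replaced wholesale on the class; the original `fget` keeps its own (empty) docstring.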
| 0 | 2016-07-26T22:40:22Z | [
"python",
"python-decorators"
] |
Inherit property getter documentation | 38,600,933 | <p>The decorator proposed <a href="http://stackoverflow.com/a/8101598/551045">here</a> is able to inherit the docstring for methods but not for properties and getters.</p>
<p>I have tried to naively expand it but it seems that docstrings of properties are read-only. Is there any way to inherit those?</p>
<pre><code>import types

def fix_docs(cls):
    for name, func in vars(cls).items():
        if isinstance(func, (types.FunctionType, property)) and not func.__doc__:
            print func, 'needs doc'
            for parent in cls.__bases__:
                parfunc = getattr(parent, name, None)
                if parfunc and getattr(parfunc, '__doc__', None):
                    func.__doc__ = parfunc.__doc__
                    break
    return cls

class X(object):
    """
    some doc
    """
    angle = 10
    """Not too steep."""

    def please_implement(self):
        """
        I have a very thorough documentation
        :return:
        """
        raise NotImplementedError

    @property
    def speed(self):
        """
        Current speed in knots/hour.
        :return:
        """
        return 0

    @speed.setter
    def speed(self, value):
        """
        :param value:
        :return:
        """
        pass

@fix_docs
class SpecialX(X):
    angle = 30

    def please_implement(self):
        return True

    @property
    def speed(self):
        return 10

    @speed.setter
    def speed(self, value):
        self.sp = value

help(X.speed)
help(X.angle)
help(SpecialX.speed)
help(SpecialX.angle)
</code></pre>
<p>This only gets me</p>
<pre><code>Traceback (most recent call last):
<function please_implement at 0x036101B0> needs doc
<property object at 0x035BE930> needs doc
File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2\helpers\pydev\pydevd.py", line 1556, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2\helpers\pydev\pydevd.py", line 940, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 48, in <module>
class SpecialX(X):
File "C:/Users/RedX/.PyCharm2016.2/config/scratches/scratch.py", line 10, in fix_docs
func.__doc__ = parfunc.__doc__
TypeError: readonly attribute
</code></pre>
| 1 | 2016-07-26T22:34:37Z | 38,601,305 | <p>This version supports multiple inheritance and copying the documentation from a base's base by using <code>__mro__</code> instead of <code>__bases__</code>.</p>
<pre><code>import types

def fix_docs(cls):
    """
    This will copy all the missing documentation for methods from the parent classes.
    :param type cls: class to fix up.
    :return type: the fixed class.
    """
    for name, func in list(vars(cls).items()):
        if isinstance(func, types.FunctionType) and not func.__doc__:
            for parent in cls.__mro__[1:]:
                parfunc = getattr(parent, name, None)
                if parfunc and getattr(parfunc, '__doc__', None):
                    func.__doc__ = parfunc.__doc__
                    break
        elif isinstance(func, property) and not func.fget.__doc__:
            for parent in cls.__mro__[1:]:
                parprop = getattr(parent, name, None)
                if parprop and getattr(parprop.fget, '__doc__', None):
                    newprop = property(fget=func.fget,
                                       fset=func.fset,
                                       fdel=func.fdel,
                                       doc=parprop.fget.__doc__)
                    setattr(cls, name, newprop)
                    break
    return cls
</code></pre>
<p>Tests:</p>
<pre><code>import pytest

class X(object):
    def please_implement(self):
        """
        I have a very thorough documentation
        :return:
        """
        raise NotImplementedError

    @property
    def speed(self):
        """
        Current speed in knots/hour.
        :return:
        """
        return 0

    @speed.setter
    def speed(self, value):
        """
        :param value:
        :return:
        """
        pass

class SpecialX(X):
    def please_implement(self):
        return True

    @property
    def speed(self):
        return 10

    @speed.setter
    def speed(self, value):
        self.sp = value

class VerySpecial(X):
    def speed(self):
        """
        The fastest speed in knots/hour.
        :return: 100
        """
        return 100

    def please_implement(self):
        """
        I have my own words!
        :return bool: Always false.
        """
        return False

    def not_inherited(self):
        """
        Look at all these words!
        :return:
        """

class A(object):
    def please_implement(self):
        """
        This doc is not used because X is resolved first in the MRO.
        :return:
        """
        pass

class B(A):
    pass

class HasNoWords(SpecialX, B):
    def please_implement(self):
        return True

    @property
    def speed(self):
        return 10

    @speed.setter
    def speed(self, value):
        self.sp = value

def test_class_does_not_inhirit_works():
    fix_docs(X)

@pytest.mark.parametrize('clazz', [
    SpecialX,
    HasNoWords
])
def test_property_and_method_inherit(clazz):
    x = fix_docs(clazz)
    assert x.please_implement.__doc__ == """
        I have a very thorough documentation
        :return:
        """
    assert x.speed.__doc__ == """
        Current speed in knots/hour.
        :return:
        """

def test_inherited_class_with_own_doc_is_not_overwritten():
    x = fix_docs(VerySpecial)
    assert x.please_implement.__doc__ == """
        I have my own words!
        :return bool: Always false.
        """
    assert x.speed.__doc__ == """
        The fastest speed in knots/hour.
        :return: 100
        """
| 0 | 2016-07-26T23:10:51Z | [
"python",
"python-decorators"
] |
How can i run deepdream on android | 38,600,982 | <p>How can I run Google Deep Dream on Android? Can I execute the Python script or do I need to port it to Java for performance reasons?</p>
| 0 | 2016-07-26T22:38:50Z | 39,757,493 | <p>Short answer is no.</p>
<p>Google Deep Dream is an IPython notebook with dependencies on Caffe, which itself has several dependencies.</p>
<p>There is however no reason why someone couldn't develop a similar tool for Android. There is an app called dreamscope for producing these kinds of images that is available for android, but I would presume they do all of their image computation in the cloud.</p>
| 0 | 2016-09-28T20:55:50Z | [
"java",
"android",
"python"
] |
Python: I have pandas dataframe which has the same column name. How to change one of those? | 38,601,014 | <p>I have a pandas dataframe with duplicated column names (the column names are a, b, a, a, a).
Below is an example.</p>
<p><a href="http://i.stack.imgur.com/wYLzJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/wYLzJ.png" alt="enter image description here"></a></p>
<p>Is there any way I can change the column name for only the 3rd column from the left by specifying the column location? I found that there is a way to change column names by making a new list, but I wanted to see if I can specify a column location and change just that name.
Below is what I want.
<a href="http://i.stack.imgur.com/ZORzf.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZORzf.png" alt="enter image description here"></a></p>
<p>Since I am new to programming, I would appreciate any of your help!</p>
| 2 | 2016-07-26T22:42:33Z | 38,601,198 | <p>Does this work?:</p>
<pre><code>column_names = df.columns.values
column_names[2] = 'Changed'
df.columns = column_names
</code></pre>
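A variant of the same idea (a sketch assuming the same a, b, a, a, a columns from the question) that copies the labels into a plain Python list instead of mutating the array returned by <code>df.columns.values</code> in place:

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4, 5]], columns=['a', 'b', 'a', 'a', 'a'])

# Copy the labels into a plain list, change one position, reassign.
cols = list(df.columns)
cols[2] = 'Changed'
df.columns = cols

print(list(df.columns))  # ['a', 'b', 'Changed', 'a', 'a']
```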
| 1 | 2016-07-26T22:58:28Z | [
"python",
"pandas",
"dataframe",
"rename"
] |
Python: I have pandas dataframe which has the same column name. How to change one of those? | 38,601,014 | <p>I have a pandas dataframe with duplicated column names (the column names are a, b, a, a, a).
Below is an example.</p>
<p><a href="http://i.stack.imgur.com/wYLzJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/wYLzJ.png" alt="enter image description here"></a></p>
<p>Is there any way I can change the column name for only the 3rd column from the left by specifying the column location? I found that there is a way to change column names by making a new list, but I wanted to see if I can specify a column location and change just that name.
Below is what I want.
<a href="http://i.stack.imgur.com/ZORzf.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZORzf.png" alt="enter image description here"></a></p>
<p>Since I am new to programming, I would appreciate any of your help!</p>
 | 2 | 2016-07-26T22:42:33Z | 38,602,453 | <p>You can also use <code>rename</code>:</p>
<pre><code>df.rename(inplace=True, columns={'3col': 'Changed'})
</code></pre>
<p>Note that <code>rename</code> matches columns by label, so with duplicated names it renames every matching column, not just the one at a given position.</p>
| 0 | 2016-07-27T01:52:34Z | [
"python",
"pandas",
"dataframe",
"rename"
] |
Easy way to use parallel options of scikit-learn functions on HPC | 38,601,026 | <p>Many functions in scikit-learn implement user-friendly parallelization. For example, in
<code>sklearn.cross_validation.cross_val_score</code> you just pass the desired number of computational jobs in the <code>n_jobs</code> argument, and on a PC with a multi-core processor it works very nicely. But what if I want to use such an option on a high-performance cluster (with the OpenMPI package installed and SLURM for resource management)? As I understand it, <code>sklearn</code> uses <code>joblib</code> for parallelization, which in turn uses <code>multiprocessing</code>. And, as I know (from, for example, <a href="http://stackoverflow.com/questions/25772289/python-multiprocessing-within-mpi">Python multiprocessing within mpi</a>), Python programs parallelized with <code>multiprocessing</code> are easy to scale over a whole MPI architecture with the <code>mpirun</code> utility. Can I spread the computation of <code>sklearn</code> functions over several computational nodes just by using <code>mpirun</code> and the <code>n_jobs</code> argument?</p>
| 9 | 2016-07-26T22:43:34Z | 38,814,491 | <p>SKLearn manages its parallelism with <a href="https://pythonhosted.org/joblib/" rel="nofollow">Joblib</a>. Joblib can swap out the multiprocessing backend for other distributed systems like <a href="http://distributed.readthedocs.io/en/latest/" rel="nofollow">dask.distributed</a> or <a href="https://ipython.org/ipython-doc/3/parallel/" rel="nofollow">IPython Parallel</a>. See <a href="https://github.com/scikit-learn/scikit-learn/issues/7168" rel="nofollow">this issue</a> on the <code>sklearn</code> github page for details.</p>
<h3>Example using Joblib with Dask.distributed</h3>
<p>Code taken from the issue page linked above.</p>
<pre><code>from distributed.joblib import DistributedBackend
# it is important to import joblib from sklearn if we want the distributed features to work with sklearn!
from sklearn.externals.joblib import Parallel, parallel_backend, register_parallel_backend
...
search = RandomizedSearchCV(model, param_space, cv=10, n_iter=1000, verbose=1)
register_parallel_backend('distributed', DistributedBackend)
with parallel_backend('distributed', scheduler_host='your_scheduler_host:your_port'):
search.fit(digits.data, digits.target)
</code></pre>
<p>This requires that you set up a <code>dask.distributed</code> scheduler and workers on your cluster. General instructions are available here: <a href="http://distributed.readthedocs.io/en/latest/setup.html" rel="nofollow">http://distributed.readthedocs.io/en/latest/setup.html</a></p>
<h3>Example using Joblib with <code>ipyparallel</code></h3>
<p>Code taken from the same issue page.</p>
<pre><code>from sklearn.externals.joblib import Parallel, parallel_backend, register_parallel_backend
from ipyparallel import Client
from ipyparallel.joblib import IPythonParallelBackend
digits = load_digits()
c = Client(profile='myprofile')
print(c.ids)
bview = c.load_balanced_view()
# this is taken from the ipyparallel source code
register_parallel_backend('ipyparallel', lambda : IPythonParallelBackend(view=bview))
...
with parallel_backend('ipyparallel'):
search.fit(digits.data, digits.target)
</code></pre>
<p><strong>Note:</strong> in both the above examples, the <code>n_jobs</code> parameter seems to not matter anymore.</p>
<h3>Set up dask.distributed with SLURM</h3>
<p>For SLURM the easiest way to do this is probably to run a <code>dask-scheduler</code> locally</p>
<pre><code>$ dask-scheduler
Scheduler running at 192.168.12.201:8786
</code></pre>
<p>And then use SLURM to submit many <code>dask-worker</code> jobs pointing to this process.</p>
<pre><code>$ sbatch --array=0-200 dask-worker 192.168.201:8786 --nthreads 1
</code></pre>
<p>(I don't actually know SLURM well, so the syntax above could be incorrect, hopefully the intention is clear)</p>
<h3>Use dask.distributed directly</h3>
<p>Alternatively you can set up a dask.distributed or IPyParallel cluster and then use these interfaces directly to parallelize your SKLearn code. Here is an example video of SKLearn and Joblib developer Olivier Grisel, doing exactly that at PyData Berlin: <a href="https://youtu.be/Ll6qWDbRTD0?t=1561" rel="nofollow">https://youtu.be/Ll6qWDbRTD0?t=1561</a></p>
<h3>Try <code>dklearn</code></h3>
<p>You could also try the <em>experimental</em> <code>dklearn</code> package, which has a <code>RandomizedSearchCV</code> object that is API compatible with scikit-learn but computationally implemented on top of Dask</p>
<p><a href="https://github.com/dask/dask-learn" rel="nofollow">https://github.com/dask/dask-learn</a></p>
<pre><code>pip install git+https://github.com/dask/dask-learn
</code></pre>
| 3 | 2016-08-07T13:11:15Z | [
"python",
"parallel-processing",
"scikit-learn",
"multiprocessing",
"cluster-computing"
] |
Class that takes another class as argument, copies behavior | 38,601,052 | <p>I'd like to create a class in Python that takes a single argument in the constructor, another Python class. The instance of the Copy class should have all the attributes and methods of the original class, without knowing what they should be beforehand. Here's some code that almost works:</p>
<pre><code>import copy

class A():
    l = 'a'

class Copy():
    def __init__(self, original_class):
        self = copy.deepcopy(original_class)
        print(self.l)

c = Copy(A)
print(c.l)
</code></pre>
<p>The print statement in the constructor prints 'a', but the final one gives the error <code>AttributeError: Copy instance has no attribute 'l'</code>.</p>
| 0 | 2016-07-26T22:45:13Z | 38,601,169 | <p>I am unsure why you would wish to do this, but you probably have your reasons.</p>
<p>You can leverage off normal inheritance:</p>
<p>eg:</p>
<pre><code>class A(object):
    l = 'a'

class C(object):
    l = 'c'

def createClass(cls):
    class B(cls):
        pass
    return B

cls = createClass(C)  # or A or whatever
print cls.l
# => result: 'c'
</code></pre>
| 1 | 2016-07-26T22:56:28Z | [
"python",
"class",
"oop",
"inheritance",
"metaclass"
] |
Class that takes another class as argument, copies behavior | 38,601,052 | <p>I'd like to create a class in Python that takes a single argument in the constructor, another Python class. The instance of the Copy class should have all the attributes and methods of the original class, without knowing what they should be beforehand. Here's some code that almost works:</p>
<pre><code>import copy

class A():
    l = 'a'

class Copy():
    def __init__(self, original_class):
        self = copy.deepcopy(original_class)
        print(self.l)

c = Copy(A)
print(c.l)
<p>The print statement in the constructor prints 'a', but the final one gives the error <code>AttributeError: Copy instance has no attribute 'l'</code>.</p>
| 0 | 2016-07-26T22:45:13Z | 38,601,263 | <p>You need to copy the <code>__dict__</code>:</p>
<pre><code>import copy

class A():
    l = 'a'

class Copy():
    def __init__(self, original_class):
        self.__dict__ = copy.deepcopy(original_class.__dict__)
        print(self.l)

c = Copy(A)   # -> a
print(c.l)    # -> a
</code></pre>
| 1 | 2016-07-26T23:05:51Z | [
"python",
"class",
"oop",
"inheritance",
"metaclass"
] |
Class that takes another class as argument, copies behavior | 38,601,052 | <p>I'd like to create a class in Python that takes a single argument in the constructor, another Python class. The instance of the Copy class should have all the attributes and methods of the original class, without knowing what they should be beforehand. Here's some code that almost works:</p>
<pre><code>import copy

class A():
    l = 'a'

class Copy():
    def __init__(self, original_class):
        self = copy.deepcopy(original_class)
        print(self.l)

c = Copy(A)
print(c.l)
</code></pre>
<p>The print statement in the constructor prints 'a', but the final one gives the error <code>AttributeError: Copy instance has no attribute 'l'</code>.</p>
| 0 | 2016-07-26T22:45:13Z | 38,601,499 | <p>This is an interesting question to point out a pretty cool feature of Python's pass-by-value semantics, as it is intimately tied to why your original code doesn't work correctly and why @martineau's solution works well.</p>
<h2>Why Your Code As Written Doesn't Work</h2>
<p>Python doesn't support pure pass-by-reference or pass-by-value semantics - instead, it does the following:</p>
<pre><code># Assume x is an object
def f(x):
    # doing the following modifies `x` globally
    x.attribute = 5
    # but doing an assignment only modifies x locally!
    x = 10
    print(x)
</code></pre>
<p>To see this in action,</p>
<pre><code># example
class Example(object):
def __init__(self):
pass
x = Example()
print(x)
>>> <__main__.Example instance at 0x020DC4E0>
f(e) # will print the value of x inside `f` after assignment
>>> 10
print(x) # This is unchanged
>>> <__main__.Example instance at 0x020DC4E0>
e.attribute # But somehow this exists!
>>> 5
</code></pre>
<p>What happens? Assignment creates a <em>local</em> <code>x</code> which is then assigned a value. Once this happens, the original parameter that was passed in as an argument is inaccessible.</p>
<p>However, so long as the <em>name</em> <code>x</code> is bound to the object that's passed in, you can modify attributes and it will be reflected in the object you passed in. The minute you 'give away' the name <code>x</code> to something else, however, that name is no longer bound to the original parameter you passed in.</p>
<hr>
<p>Why is this relevant here? </p>
<p>If you pay careful attention to the signature for <code>__init__</code>, you'll notice it takes <code>self</code> as a parameter. What is <code>self</code>? </p>
<p>Ordinarily, <code>self</code> refers to the object instance. So the name <code>self</code> is bound to the object instance. </p>
<p>This is where the fun starts. <strong>By assigning to <code>self</code> in your code, this property no longer holds true!</strong></p>
<pre><code>def __init__(self, original_class):
    # The name `self` is no longer bound to the object instance,
    # but is now a local variable!
    self = copy.deepcopy(original_class)
    print(self.l)  # this is why this works!
</code></pre>
<p>The minute you leave <code>__init__</code>, this new local variable <code>self</code> goes out of scope. That is why doing <code>c.l</code> yields an error outside of the constructor - you never actually assigned to the object in the first place!</p>
<h2>Why @martineau's Solution Works</h2>
<p>@martineau simply took advantage of this behaviour to note that the <code>__dict__</code> attribute exists on the <code>self</code> object, and assigns to it:</p>
<pre><code>class Copy():
    def __init__(self, original_class):
        # modifying attributes modifies the object self refers to!
        self.__dict__ = copy.deepcopy(original_class.__dict__)
        print(self.l)
</code></pre>
<p>This now works because the <code>__dict__</code> attribute is what Python calls when Python needs to lookup a method signature or attribute when it sees the namespace operator <code>.</code>, and also because <code>self</code> has not been changed but still refers to the object instance. By assigning to <code>self.__dict__</code>, you obtain an almost exact copy of the original class ('almost exact' because even <code>deepcopy</code> has limits).</p>
<hr>
<p>The moral of the story should be clear: never assign anything to <code>self</code> directly. Instead, <em>only</em> assign to attributes of <code>self</code> if you ever need to. Python's metaprogramming permits a wide degree of flexibility in this regard, and you should always consult <a href="https://docs.python.org/3/library/stdtypes.html#special-attributes" rel="nofollow">the documentation</a> in this regard.</p>
| 1 | 2016-07-26T23:35:31Z | [
"python",
"class",
"oop",
"inheritance",
"metaclass"
] |
Is it possible to get_attribute() from several elements with the same name? | 38,601,055 | <p>I have a lot of list elements with the same class name but with different id. </p>
<p>Example:</p>
<pre><code><li class="test class" id="111-11-111"> pass </li>
<li class="test class" id="222-22-222"> pass </li>
<li class="test class" id="333-33-333"> pass </li>
</code></pre>
<p>And I need to extract those id's.
For a single list element it is not a problem:</p>
<pre><code>driver.find_element_by_css_selector(".test.class").get_attribute("id")
</code></pre>
<p>But I need to somehow reach the next ids.
If I try to <code>find_elements_by...</code> I receive the following exception:</p>
<blockquote>
<p>'list' object has no attribute <code>'get_attribute'</code>.</p>
</blockquote>
<p>Is there a way to extract them?</p>
| 2 | 2016-07-26T22:45:53Z | 38,601,128 | <p>you can use xpath:</p>
<pre><code>listOfLi = driver.find_elements_by_xpath("//li[@class='test class']")
</code></pre>
<p>or css selector:</p>
<pre><code>listOfLi = driver.find_elements_by_css_selector(".test.class")
</code></pre>
<p>you can access each li element by indexing them one by one:</p>
<pre><code>for eachLiElement in listOfLi:
    string = eachLiElement.get_attribute("id")
</code></pre>
<p><code>string</code> will give you each element's id.</p>
<p>If you only want to get the second id, you can do it by </p>
<p><code>secondId = listOfLi[1].get_attribute("id")</code></p>
<p><code>secondId</code> will have <code>222-22-222</code></p>
| 3 | 2016-07-26T22:51:56Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Append to array for each item of a loop | 38,601,075 | <p>With the following code:</p>
<pre><code>class Calendar_Data(Resource):
    def get(self):
        result = []
        details_array = []
        # Times are converted to seconds
        for day in life.days:
            for span in day.spans:
                if type(span.place) is str:
                    details = {
                        'name': span.place,
                        'date': 0,
                        'value': (span.length() * 60),
                    }
                    details_array.append(details)
            data = {
                'date': datetime.datetime.strptime(day.date, '%Y_%m_%d').strftime('%Y-%m-%d'),
                'total': (day.somewhere() * 60),
                'details': details_array
            }
            result.append(data)
        return result
</code></pre>
<p>What I'm trying to do is for each day that is present in a list of days, get the corresponding spans for that day, and fill the array with the <code>details</code>. Then pass that <code>details</code> to the <code>data</code> array in order to have it for each day of that list of days.</p>
<p>The problem here is that when I use the nested loops above, <code>details</code> ends up filled with the spans from all days, instead of just each single day.</p>
<p>I don't think using a <code>zip</code> would work in this case. Maybe some list comprehension would, but I still don't understand those fully.</p>
<p>Example input:</p>
<pre><code>--2016_01_15
@UTC
0000-0915: home
0924-0930: seixalinho station
1000-1008: cais do sodre station
1009-1024: cais do sodre station->saldanha station
1025-1027: saldanha station
1030-1743: INESC
1746-1750: saldanha station
1751-1815: saldanha station->cais do sodre station
1815-1834: cais do sodre station {Waiting for the boat trip back. The boat was late}
1920-2359: home [dinner]
--2016_01_16
0000-2136: home
2147-2200: fabio's house
2237-2258: bar [drinks]
</code></pre>
<p>For the sixteenth of January, the details array is supposed to have 3 items, but each day constantly shows all of the items of all days.</p>
 | 0 | 2016-07-26T22:47:26Z | 38,601,361 | <p>You aren't redeclaring your list (Python has lists, not arrays) in between each loop. You need to move the creation of your <code>details_array</code> inside the outer loop so it is recreated as empty. You'll likely want it to look like this:</p>
<pre><code>for day in life.days:
    details_array = []
    for span in day.spans:
</code></pre>
<p>This way for each new iteration of a <code>day</code> you'll have a new empty list.</p>
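The difference can be seen in a minimal stand-alone sketch (hypothetical data standing in for the asker's `life` object):

```python
days = {"mon": ["span-a", "span-b"], "tue": ["span-c"]}

# Wrong: one shared list keeps accumulating spans across all days.
shared_details = []
wrong = []
for day, spans in days.items():
    for span in spans:
        shared_details.append(span)
    wrong.append({"date": day, "details": shared_details})

# Right: a fresh list per day keeps each day's details separate.
right = []
for day, spans in days.items():
    details = []
    for span in spans:
        details.append(span)
    right.append({"date": day, "details": details})

print(wrong[1]["details"])  # ['span-a', 'span-b', 'span-c'] -- tue wrongly includes mon's spans
print(right[1]["details"])  # ['span-c']
```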
| 1 | 2016-07-26T23:18:11Z | [
"python",
"arrays",
"loops"
] |
Python does not recognize subclass unless it's imported in __init__.py | 38,601,088 | <p>I have a project with multiple sub-folders, most of which are Python packages. One of them contains an abstract class called BaseStep (created using the <code>abc</code> module), which during runtime looks for subclasses of itself using: <code>for subclass in cls.__subclasses__(): ...</code>. <code>BaseStep</code> is located in the <code>pipeline</code> directory, in a python file named <code>base_step.py</code>, and is thus accessed by doing <code>pipeline.base_step.BaseStep</code>. </p>
<p>This package looks like: </p>
<pre><code>pipeline/
    __init__.py
    base_step.py
</code></pre>
<p>In another Python package, I would like to create some examples of how to use BaseStep. This package is called <code>examples</code> and I have a python file there called <code>sample_step.py</code>. Within <code>sample_step.py</code> I have created a class that extends the <code>BaseStep</code> class called <code>SampleStep</code>. Thus it is accessed by doing <code>examples.sample_step.SampleStep</code>. </p>
<p>This package looks like: </p>
<pre><code>examples/
    __init__.py
    sample_step.py
</code></pre>
<p>When I try to access the <code>__subclasses__()</code> during runtime, however, I cannot see <code>SampleStep</code> listed as one of them. </p>
<p>The only way <code>SampleStep</code> shows up as a subclass of <code>BaseStep</code> is if the <code>__init__.py</code> of the <code>pipeline</code> directory includes an import of the <code>SampleStep</code>: </p>
<pre><code>from examples.sample_step import SampleStep
</code></pre>
<p>Why is this the case? Why do I have to import my sample step inside the <em>pipeline</em> package? Why can't <code>BaseStep</code> identify subclasses in other packages? Any help understanding inheritance and importing would be deeply appreciated. </p>
<hr>
<p><strong>EDIT</strong></p>
<p>Thanks for the comments. From the comments, I realized I did not explain how SampleStep is being imported, and deleted some of my responses to the comments. </p>
<p>The <code>examples</code> module actually looks like this: </p>
<pre><code>examples/
    __init__.py
    runner.py
    sample_step.py
</code></pre>
<p>Within <code>__init__.py</code>, I have <code>from examples.sample_step import SampleStep</code>. Then I call <code>runner.py</code>, which instantiates <code>SampleStep</code> by calling a function in <code>BaseStep</code> that looks at its subclasses. At least that is what it tries to do -- it fails because <code>BaseStep</code> does not realize that <code>SampleStep</code> is a subclass.</p>
<p>@Blckknght said "You can import the module from anywhere (as long as that "anywhere" is getting loaded itself)". So the more specific question is: why doesn't importing <code>SampleStep</code> within the <code>examples/__init__.py</code> at runtime get <code>BaseStep</code> to recognize that it's a subclass? </p>
| 0 | 2016-07-26T22:47:55Z | 38,601,109 | <p>The <code>SampleStep</code> subclass <strong>does not exist</strong> until its file is imported.</p>
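A minimal stand-alone sketch (not from the answer itself; the class names mirror the question) shows the same thing without any packages: the subclass is only registered with <code>__subclasses__()</code> once its class statement has actually executed, which is exactly what importing its module does:

```python
class BaseStep:
    @classmethod
    def known_steps(cls):
        return cls.__subclasses__()

print(BaseStep.known_steps())  # [] -- the subclass statement hasn't run yet

# Executing the class statement (what importing the defining module does)
# is what registers the subclass.
class SampleStep(BaseStep):
    pass

print(BaseStep.known_steps())  # [<class '__main__.SampleStep'>]
```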
| 2 | 2016-07-26T22:50:24Z | [
"python",
"oop",
"python-import"
] |
Python does not recognize subclass unless it's imported in __init__.py | 38,601,088 | <p>I have a project with multiple sub-folders, most of which are Python packages. One of them contains an abstract class called BaseStep (created using the <code>abc</code> module), which during runtime looks for subclasses of itself using: <code>for subclass in cls.__subclasses__(): ...</code>. <code>BaseStep</code> is located in the <code>pipeline</code> directory, in a python file named <code>base_step.py</code>, and is thus accessed by doing <code>pipeline.base_step.BaseStep</code>. </p>
<p>This package looks like: </p>
<pre><code>pipeline/
    __init__.py
    base_step.py
</code></pre>
<p>In another Python package, I would like to create some examples of how to use BaseStep. This package is called <code>examples</code> and I have a python file there called <code>sample_step.py</code>. Within <code>sample_step.py</code> I have created a class that extends the <code>BaseStep</code> class called <code>SampleStep</code>. Thus it is accessed by doing <code>examples.sample_step.SampleStep</code>. </p>
<p>This package looks like: </p>
<pre><code>examples/
__init__.py
sample_step.py
</code></pre>
<p>When I try to access the <code>__subclasses__()</code> during runtime, however, I cannot see <code>SampleStep</code> listed as one of them. </p>
<p>The only way <code>SampleStep</code> shows up as a subclass of <code>BaseStep</code> is if the <code>__init__.py</code> of the <code>pipeline</code> directory includes an import of the <code>SampleStep</code>: </p>
<pre><code>from examples.sample_step import SampleStep
</code></pre>
<p>Why is this the case? Why do I have to import my sample step inside the <em>pipeline</em> package? Why can't <code>BaseStep</code> identify subclasses in other packages? Any help understanding inheritance and importing would be deeply appreciated. </p>
<hr>
<p><strong>EDIT</strong></p>
<p>Thanks for the comments. From the comments, I realized I did not explain how SampleStep is being imported, and deleted some of my responses to the comments. </p>
<p>The <code>examples</code> module actually looks like this: </p>
<pre><code>examples/
__init__.py
runner.py
sample_step.py
</code></pre>
<p>Within <strong>init</strong>.py, I have <code>from examples.sample_step import SampleStep</code>. Then I call <code>runner.py</code>, which instantiates <code>SampleStep</code> by calling a function in <code>BaseStep</code> that looks at its subclasses. At least that is what it tries to do -- it fails because <code>BaseStep</code> does not realize that <code>SampleStep</code> is a subclass.</p>
<p>@Blckknght said "You can import the module from anywhere (as long as that "anywhere" is getting loaded itself)". So the more specific question is: why doesn't importing <code>SampleStep</code> within the <code>examples/__init__.py</code> at runtime get <code>BaseStep</code> to recognize that it's a subclass? </p>
| 0 | 2016-07-26T22:47:55Z | 38,601,149 | <p>Python is a dynamic language. Things like class definitions technically happen at runtime, not at some earlier compile time as in other languages like C and Java. This means that until you import your <code>sample_step</code> module and run the definition of its <code>SampleStep</code> class, that class doesn't exist as far as the Python interpreter is concerned.</p>
<p>You can import the module from anywhere (as long as that "anywhere" is getting loaded itself). It just needs to be loaded once for the subclass to show up in the <code>BaseStep.__subclasses__</code> list.</p>
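<p>A minimal, self-contained illustration of this (plain classes standing in for the question's <code>BaseStep</code> and <code>SampleStep</code>; running the subclass definition plays the role of importing <code>examples.sample_step</code>):</p>

```python
# __subclasses__() only reflects class definitions that have actually
# been executed by the interpreter.
class BaseStep(object):
    @classmethod
    def known_steps(cls):
        return [sub.__name__ for sub in cls.__subclasses__()]

before = BaseStep.known_steps()   # nothing defined/imported yet -> []

# Executing the subclass definition (e.g. via an import) registers it.
class SampleStep(BaseStep):
    pass

after = BaseStep.known_steps()
print(before, after)  # [] ['SampleStep']
```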
| 3 | 2016-07-26T22:53:57Z | [
"python",
"oop",
"python-import"
] |
How to improve scapy performance reading large files | 38,601,091 | <p>I have to read and parse .pcap files that are too large to load into memory. I am currently using sniff in offline mode</p>
<pre><code>sniff(offline=file_in, prn=customAction, store=0)
</code></pre>
<p>with a customAction function that looks roughly like this:</p>
<pre><code>def customAction(packet):
    global COUNT
    COUNT = COUNT + 1
    # do some other stuff that takes practically 0 time
</code></pre>
<p>Currently this processes packets too slowly. I am already using subprocess in a 'driver' program to run this script on multiple files simultaneously on different cores but I really need to improve single core performance.</p>
<p>I tried using pypy and was disappointed that performance using pypy was less than 10% better than using python3 (anaconda).</p>
<p>Average time to run 50k packets using pypy is 52.54 seconds</p>
<p>Average time to run 50k packets using python3 is 56.93 seconds</p>
<p>Is there any way to speed things up?</p>
<p>EDIT: Below is the result of cProfile. As you can see, the code is a bit slower while being profiled, but all of the time is spent doing things in scapy.</p>
<pre><code>66054791 function calls (61851423 primitive calls) in 85.482 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
957/1 0.017 0.000 85.483 85.483 {built-in method builtins.exec}
1 0.001 0.001 85.483 85.483 parser-3.py:1(<module>)
1 0.336 0.336 83.039 83.039 sendrecv.py:542(sniff)
50001 0.075 0.000 81.693 0.002 utils.py:817(recv)
50001 0.379 0.000 81.618 0.002 utils.py:794(read_packet)
795097/50003 3.937 0.000 80.140 0.002 base_classes.py:195(__call__)
397549/50003 6.467 0.000 79.543 0.002 packet.py:70(__init__)
397545/50000 1.475 0.000 76.451 0.002 packet.py:616(dissect)
397397/50000 0.817 0.000 74.002 0.001 packet.py:598(do_dissect_payload)
397545/200039 6.908 0.000 49.511 0.000 packet.py:580(do_dissect)
199083 0.806 0.000 32.319 0.000 dns.py:144(getfield)
104043 1.023 0.000 22.996 0.000 dns.py:127(decodeRR)
397548 0.343 0.000 15.059 0.000 packet.py:99(init_fields)
397549 6.043 0.000 14.716 0.000 packet.py:102(do_init_fields)
6673299/6311213 6.832 0.000 13.259 0.000 packet.py:215(__setattr__)
3099782/3095902 5.785 0.000 8.197 0.000 copy.py:137(deepcopy)
3746538/2335718 4.181 0.000 6.980 0.000 packet.py:199(setfieldval)
149866 1.885 0.000 6.678 0.000 packet.py:629(guess_payload_class)
738212 5.730 0.000 6.311 0.000 fields.py:675(getfield)
1756450 3.393 0.000 5.521 0.000 fields.py:78(getfield)
49775 0.200 0.000 5.401 0.000 dns.py:170(decodeRR)
1632614 2.275 0.000 4.591 0.000 packet.py:191(__getattr__)
985050/985037 1.720 0.000 4.229 0.000 {built-in method builtins.hasattr}
326681/194989 0.965 0.000 2.876 0.000 packet.py:122(add_payload)
...
</code></pre>
<p>EDIT 2: Full code example:</p>
<pre><code>from scapy.all import *
from scapy.utils import PcapReader
import time, sys, logging
COUNT = 0
def customAction(packet):
    global COUNT
    COUNT = COUNT + 1
file_temp = sys.argv[1]
path = '/'.join(file_temp.split('/')[:-2])
file_in = '/'.join(file_temp.split('/')[-2:])
name = file_temp.split('/')[-1:][0].split('.')[0]
os.chdir(path)
q_output_file = 'processed/q_' + name + '.csv'
a_output_file = 'processed/a_' + name + '.csv'
log_file = 'log/' + name + '.log'
logging.basicConfig(filename=log_file, level=logging.DEBUG)
t0=time.time()
sniff(offline=file_in, prn=customAction, lfilter=lambda x:x.haslayer(DNS), store=0)
t1=time.time()
logging.info("File '{}' took {:.2f} seconds to parse {} packets.".format(name, t1-t0, COUNT))
</code></pre>
| 2 | 2016-07-26T22:48:30Z | 38,654,715 | <p>It seems that scapy causes PyPy's JIT warm-up times to be high, but the JIT is still working if you run for long enough. Here are the results I got (on Linux 64):</p>
<pre><code>size of .pcap CPython time PyPy time
2MB 4.9s 7.3s
5MB 15.3s 9.1s
15MB 1m15s 21s
</code></pre>
| 0 | 2016-07-29T09:02:05Z | [
"python",
"performance",
"scapy",
"pcap",
"pypy"
] |
How to improve scapy performance reading large files | 38,601,091 | <p>I have to read and parse .pcap files that are too large to load into memory. I am currently using sniff in offline mode</p>
<pre><code>sniff(offline=file_in, prn=customAction, store=0)
</code></pre>
<p>with a customAction function that looks roughly like this:</p>
<pre><code>def customAction(packet):
    global COUNT
    COUNT = COUNT + 1
    # do some other stuff that takes practically 0 time
</code></pre>
<p>Currently this processes packets too slowly. I am already using subprocess in a 'driver' program to run this script on multiple files simultaneously on different cores but I really need to improve single core performance.</p>
<p>I tried using pypy and was disappointed that performance using pypy was less than 10% better than using python3 (anaconda).</p>
<p>Average time to run 50k packets using pypy is 52.54 seconds</p>
<p>Average time to run 50k packets using python3 is 56.93 seconds</p>
<p>Is there any way to speed things up?</p>
<p>EDIT: Below is the result of cProfile. As you can see, the code is a bit slower while being profiled, but all of the time is spent doing things in scapy.</p>
<pre><code>66054791 function calls (61851423 primitive calls) in 85.482 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
957/1 0.017 0.000 85.483 85.483 {built-in method builtins.exec}
1 0.001 0.001 85.483 85.483 parser-3.py:1(<module>)
1 0.336 0.336 83.039 83.039 sendrecv.py:542(sniff)
50001 0.075 0.000 81.693 0.002 utils.py:817(recv)
50001 0.379 0.000 81.618 0.002 utils.py:794(read_packet)
795097/50003 3.937 0.000 80.140 0.002 base_classes.py:195(__call__)
397549/50003 6.467 0.000 79.543 0.002 packet.py:70(__init__)
397545/50000 1.475 0.000 76.451 0.002 packet.py:616(dissect)
397397/50000 0.817 0.000 74.002 0.001 packet.py:598(do_dissect_payload)
397545/200039 6.908 0.000 49.511 0.000 packet.py:580(do_dissect)
199083 0.806 0.000 32.319 0.000 dns.py:144(getfield)
104043 1.023 0.000 22.996 0.000 dns.py:127(decodeRR)
397548 0.343 0.000 15.059 0.000 packet.py:99(init_fields)
397549 6.043 0.000 14.716 0.000 packet.py:102(do_init_fields)
6673299/6311213 6.832 0.000 13.259 0.000 packet.py:215(__setattr__)
3099782/3095902 5.785 0.000 8.197 0.000 copy.py:137(deepcopy)
3746538/2335718 4.181 0.000 6.980 0.000 packet.py:199(setfieldval)
149866 1.885 0.000 6.678 0.000 packet.py:629(guess_payload_class)
738212 5.730 0.000 6.311 0.000 fields.py:675(getfield)
1756450 3.393 0.000 5.521 0.000 fields.py:78(getfield)
49775 0.200 0.000 5.401 0.000 dns.py:170(decodeRR)
1632614 2.275 0.000 4.591 0.000 packet.py:191(__getattr__)
985050/985037 1.720 0.000 4.229 0.000 {built-in method builtins.hasattr}
326681/194989 0.965 0.000 2.876 0.000 packet.py:122(add_payload)
...
</code></pre>
<p>EDIT 2: Full code example:</p>
<pre><code>from scapy.all import *
from scapy.utils import PcapReader
import time, sys, logging
COUNT = 0
def customAction(packet):
    global COUNT
    COUNT = COUNT + 1
file_temp = sys.argv[1]
path = '/'.join(file_temp.split('/')[:-2])
file_in = '/'.join(file_temp.split('/')[-2:])
name = file_temp.split('/')[-1:][0].split('.')[0]
os.chdir(path)
q_output_file = 'processed/q_' + name + '.csv'
a_output_file = 'processed/a_' + name + '.csv'
log_file = 'log/' + name + '.log'
logging.basicConfig(filename=log_file, level=logging.DEBUG)
t0=time.time()
sniff(offline=file_in, prn=customAction, lfilter=lambda x:x.haslayer(DNS), store=0)
t1=time.time()
logging.info("File '{}' took {:.2f} seconds to parse {} packets.".format(name, t1-t0, COUNT))
</code></pre>
| 2 | 2016-07-26T22:48:30Z | 38,660,456 | <p>I think that the short answer is that Scapy is just slow as hell. I tried just scanning a pcap file with sniff() or PcapReader, and not doing anything with the packets. The process was reading less than 3MB/s from my SSD, and the CPU usage was 100%. There are other pcap reader libraries for Python out there. I'd suggest experimenting with one of those. </p>
| 0 | 2016-07-29T13:43:58Z | [
"python",
"performance",
"scapy",
"pcap",
"pypy"
] |
'str' object does not support item assignment (Python) | 38,601,139 | <pre><code>def remove_char(text):
    for letter in text[1]:
        text[0] = text[0].replace(letter," ")
    return text[0]
</code></pre>
<p>This is returning:</p>
<pre><code>'str' object does not support item assignment
</code></pre>
<p>Why? And how can I make this work?</p>
| 0 | 2016-07-26T22:52:46Z | 38,601,176 | <p>In Python, strings are not mutable, which means they cannot be changed. You can, however, replace the whole variable with the new version of the string.</p>
<p>Example:</p>
<pre><code>text = ' ' + text[1:] # replaces first character with space
</code></pre>
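<p>Applied to the function from the question, that means rebinding the name each time <code>replace</code> hands back a new string (a sketch assuming the string and the letters to blank out are passed separately):</p>

```python
def remove_chars(text, letters):
    # str.replace returns a *new* string; rebind the name instead of
    # trying to assign into the immutable original.
    for letter in letters:
        text = text.replace(letter, " ")
    return text

print(remove_chars("banana", "an"))  # 'b' followed by five spaces
```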
| 2 | 2016-07-26T22:56:51Z | [
"python"
] |
'str' object does not support item assignment (Python) | 38,601,139 | <pre><code>def remove_char(text):
    for letter in text[1]:
        text[0] = text[0].replace(letter," ")
    return text[0]
</code></pre>
<p>This is returning:</p>
<pre><code>'str' object does not support item assignment
</code></pre>
<p>Why? And how can I make this work?</p>
| 0 | 2016-07-26T22:52:46Z | 38,601,180 | <p>Strings are immutable. By trying:</p>
<pre><code>text[0] = text[0].replace(letter," ")
</code></pre>
<p>you are trying to access the string and change it, which is disallowed due to the string's immutability. Instead, you can use a <code>for</code> loop and some slicing:</p>
<pre><code>y = "prefix,the rest of the string"
for i in range(0, len(y)):
    if y[i] == ",":
        print y[i+1:len(y)]  # a new string built by slicing
        break
</code></pre>
<p>You can change the string a variable is assigned to (second piece of code) rather than the string itself (your piece of code).</p>
| 1 | 2016-07-26T22:56:58Z | [
"python"
] |
'str' object does not support item assignment (Python) | 38,601,139 | <pre><code>def remove_char(text):
    for letter in text[1]:
        text[0] = text[0].replace(letter," ")
    return text[0]
</code></pre>
<p>This is returning:</p>
<pre><code>'str' object does not support item assignment
</code></pre>
<p>Why? And how can I make this work?</p>
| 0 | 2016-07-26T22:52:46Z | 38,602,456 | <p>Assuming that the parameter <code>text</code> is a string, the line <code>for letter in text[1]:</code> doesn't make much sense to me since <code>text[1]</code> is a single character. What's the point of iterating over a one-letter string?</p>
<p>However, if <code>text</code> is a <strong>list of strings</strong>, then your function doesn't throw any exceptions, it simply returns the string that results from replacing in the first string (<code>text[0]</code>) all the letters of the second string (<code>text[1]</code>) by a blank space (<code>" "</code>).</p>
<p>The following examples show how <code>remove_char</code> works when its argument is a list of strings:</p>
<pre><code>In [462]: remove_char(['spam', 'parrot', 'eggs'])
Out[462]: 's m'
In [463]: remove_char(['bar', 'baz', 'foo'])
Out[463]: ' r'
In [464]: remove_char(['CASE', 'sensitive'])
Out[464]: 'CASE'
</code></pre>
<p>Perhaps this is not the intended behaviour...</p>
| 0 | 2016-07-27T01:53:01Z | [
"python"
] |
Show multiple paths of equal length in NetworkX | 38,601,205 | <p>If there are multiple paths from a source to destination, how do I get these ALL of these paths using NetworkX? Note that this is a simplified example, I want to actually be using the nx.all_pairs_shortest_path() function and get all shortest paths between any two nodes.</p>
<p>Code:</p>
<pre><code>import networkx as nx
G = nx.Graph([(0, 1), (0, 2), (1, 3), (2, 3)])
nx.draw(G)
print(nx.shortest_path(G,0,3))
</code></pre>
<p>Output I Get:</p>
<pre><code>[0, 1, 3]
</code></pre>
<p>Output I Want:</p>
<pre><code>[[0, 1, 3], [0, 2, 3]]
</code></pre>
| 0 | 2016-07-26T22:59:14Z | 38,601,709 | <p><code>all_shortest_paths</code> does what you're after, but it is a generator. If you want a list, then put <code>list</code> around it</p>
<pre><code>shortest_path_generator = nx.all_shortest_paths(G,0,3)
list(shortest_path_generator)
</code></pre>
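<p>For reference, the same enumeration can be done by hand with a breadth-first search that records every shortest-distance predecessor (a stdlib-only sketch using the question's graph as an adjacency dict; in practice <code>nx.all_shortest_paths</code> is the right tool):</p>

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    # BFS that keeps *every* predecessor at the shortest distance,
    # then walks the predecessor DAG back from dst to src.
    dist, preds = {src: 0}, {src: []}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                preds[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)

    def walk(v):
        if v == src:
            yield [src]
        for p in preds.get(v, []):
            for path in walk(p):
                yield path + [v]

    return sorted(walk(dst))

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(all_shortest_paths(adj, 0, 3))  # [[0, 1, 3], [0, 2, 3]]
```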
| 0 | 2016-07-27T00:00:54Z | [
"python",
"networkx",
"shortest-path"
] |
Run script with arguments via ssh with at command from python script | 38,601,256 | <p>I have a python program which needs to call a script on a remote system via ssh. </p>
<p>This ssh call needs to happen (once) at a specified date which can be done via the linux at command.</p>
<p>I am able to call both of these external bash commands using either the <code>os</code> module or the <code>subprocess</code> module from my python program. The issue comes when passing certain arguments to the remote script.</p>
<p>In addition to being run remotely and at a later date, the (bash) script I wish to call requires several arguments to be passed to it, these arguments are python variables which I wish to pass on to the script.</p>
<pre><code>user="user@remote"
arg1="argument with spaces"
arg2="two"
cmd="ssh "+user+"' /home/user/path/script.sh "+arg1+" "+arg2+"'"
os.system(cmd)
</code></pre>
<p>One of these arguments is a string which contains spaces but would ideally be passed as a single argument; </p>
<p>for example:</p>
<p><code>./script.sh "Argument with Spaces"</code>
where $1 is equal to <code>"Argument with Spaces"</code></p>
<p>I have tried various combinations of escaping double and single quotes in both python and the string itself and the use of grave accents around the entire ssh command. The most successful version calls the script with the arguments as desired, but ignores the at command and runs immediately. </p>
<p>Is there a clean way within python to accomplish this?</p>
| 0 | 2016-07-26T23:05:12Z | 38,601,355 | <p><strong>new answer</strong></p>
<p>now that you edited your question you should probably be using format strings</p>
<pre><code>cmd = '''ssh {user} "{cmd} '{arg0}' '{arg1}'"'''.format(user="user@remote",cmd="somescript",arg0="hello",arg1="hello world")
print cmd
</code></pre>
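<p>If hand-placing the quotes gets fiddly (as with the "argument with spaces" case in the question), the stdlib can do the escaping for you. A sketch using <code>shlex.quote</code> (the Python 3 spelling; <code>pipes.quote</code> on Python 2), quoting once per shell the string passes through:</p>

```python
import shlex

def build_remote_cmd(user, script, *args):
    # Quote each argument for the remote shell, then quote the assembled
    # remote command once more for the local shell that runs ssh.
    remote = " ".join([script] + [shlex.quote(a) for a in args])
    return "ssh {0} {1}".format(user, shlex.quote(remote))

cmd = build_remote_cmd("user@remote", "/home/user/path/script.sh",
                       "argument with spaces", "two")
# shlex.split undoes exactly one level of quoting, like the local shell would:
print(shlex.split(cmd)[2])  # /home/user/path/script.sh 'argument with spaces' two
```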
<hr>
<p><strong>old answer</strong></p>
<p>You can pass the command to run as a final argument to <code>ssh</code> to execute it on the remote machine (<code>ssh user@host.net "python myscript.py arg1 arg2"</code>; note that ssh's <code>-c</code> switch actually selects the cipher, not a command)</p>
<p>alternatively I needed more than that so I use this paramiko wrapper class (you will need to install paramiko)</p>
<pre><code>from contextlib import contextmanager
import os
import re
import paramiko
import time

class SshClient:
    """A wrapper of paramiko.SSHClient"""
    TIMEOUT = 10

    def __init__(self, connection_string,**kwargs):
        self.key = kwargs.pop("key",None)
        self.client = kwargs.pop("client",None)
        self.connection_string = connection_string
        try:
            self.username,self.password,self.host = re.search("(\w+):(\w+)@(.*)",connection_string).groups()
        except (TypeError,ValueError):
            raise Exception("Invalid connection string; should be 'user:pass@ip'")
        try:
            self.host,self.port = self.host.split(":",1)
        except (TypeError,ValueError):
            self.port = "22"
        self.connect(self.host,int(self.port),self.username,self.password,self.key)

    def reconnect(self):
        self.connect(self.host,int(self.port),self.username,self.password,self.key)

    def connect(self, host, port, username, password, key=None):
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect(host, port, username=username, password=password, pkey=key, timeout=self.TIMEOUT)

    def close(self):
        if self.client is not None:
            self.client.close()
            self.client = None

    def execute(self, command, sudo=False,**kwargs):
        should_close=False
        if not self.is_connected():
            self.reconnect()
            should_close = True
        feed_password = False
        if sudo and self.username != "root":
            command = "sudo -S -p '' %s" % command
            feed_password = self.password is not None and len(self.password) > 0
        stdin, stdout, stderr = self.client.exec_command(command,**kwargs)
        if feed_password:
            stdin.write(self.password + "\n")
            stdin.flush()
        result = {'out': stdout.readlines(),
                  'err': stderr.readlines(),
                  'retval': stdout.channel.recv_exit_status()}
        if should_close:
            self.close()
        return result

    @contextmanager
    def _get_sftp(self):
        yield paramiko.SFTPClient.from_transport(self.client.get_transport())

    def put_in_dir(self, src, dst):
        if not isinstance(src,(list,tuple)):
            src = [src]
        print self.execute('''python -c "import os;os.makedirs('%s')"'''%dst)
        with self._get_sftp() as sftp:
            for s in src:
                sftp.put(s, dst+os.path.basename(s))

    def get(self, src, dst):
        with self._get_sftp() as sftp:
            sftp.get(src, dst)

    def rm(self,*remote_paths):
        for p in remote_paths:
            self.execute("rm -rf {0}".format(p),sudo=True)

    def mkdir(self,dirname):
        print self.execute("mkdir {0}".format(dirname))

    def remote_open(self,remote_file_path,open_mode):
        with self._get_sftp() as sftp:
            return sftp.open(remote_file_path,open_mode)

    def is_connected(self):
        transport = self.client.get_transport() if self.client else None
        return transport and transport.is_active()
</code></pre>
<p>you can then use it as follows</p>
<pre><code>client = SshClient("username:password@host.net")
result = client.execute("python something.py cmd1 cmd2")
print result
result2 = client.execute("cp some_file /etc/some_file",sudo=True)
print result2
</code></pre>
| 2 | 2016-07-26T23:17:44Z | [
"python",
"bash",
"ssh",
"at-job"
] |
Display a sequence of images with next and previous buttons in python matplotlib | 38,601,307 | <p>I have a sequence of images that I want to display them one by one and allow the user to navigate between the images using next and previous buttons in matplotlib.
Here is my code:</p>
<pre><code>fig, ax = plt.subplots()
class Index(object):
    ind = 0

    def next(self, event):
        self.ind += 1
        print self.ind
        orig_frame = cv2.imread(FFMPEG_PATH + all_frames[self.ind])
        ax.imshow(orig_frame)

    def prev(self, event):
        self.ind -= 1
        orig_frame = cv2.imread(FFMPEG_PATH + all_frames[self.ind])
        ax.imshow(orig_frame)
callback = Index()
axnext = plt.axes([0.8, 0.7, 0.1, 0.075])
axprev = plt.axes([0.8, 0.6, 0.1, 0.075])
next_button = Button(axnext, 'Next')
next_button.on_clicked(callback.next)
prev_button = Button(axprev, 'Previous')
prev_button.on_clicked(callback.prev)
plt.show()
plt.waitforbuttonpress()
</code></pre>
<p>The problem is that as soon as I click the next button, it does not display anything and the program terminates immediately. What should I do?</p>
| 1 | 2016-07-26T23:11:05Z | 38,601,396 | <p>Try to call <code>fig.canvas.draw()</code> in the end of functions <code>prev</code> and <code>next</code>, because you need to redraw a plot after loading a new picture.</p>
<p>Another problem: your index <code>self.ind</code> may go out of range; you do not check its value before <code>imread</code>.</p>
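<p>A sketch of both fixes together, the redraw call and a bounds check (the <code>clamp</code> helper is plain Python; the class assumes the <code>fig</code>, <code>ax</code> and frame list from the question, with the frames already loaded):</p>

```python
def clamp(index, length):
    # Keep the frame index inside [0, length - 1] so the list lookup
    # (or cv2.imread over a filename list) never goes out of range.
    return max(0, min(index, length - 1))

class Index(object):
    def __init__(self, fig, ax, frames):
        self.fig, self.ax, self.frames, self.ind = fig, ax, frames, 0

    def show(self):
        self.ax.imshow(self.frames[self.ind])
        self.fig.canvas.draw()          # redraw after loading the new image

    def next(self, event):
        self.ind = clamp(self.ind + 1, len(self.frames))
        self.show()

    def prev(self, event):
        self.ind = clamp(self.ind - 1, len(self.frames))
        self.show()

print(clamp(5, 3), clamp(-1, 3))  # 2 0
```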
| 0 | 2016-07-26T23:21:21Z | [
"python",
"matplotlib"
] |
run_async_query in Python gcloud BigQuery using Standard SQL instead of Legacy SQL | 38,601,331 | <p>I need to run an async query using the gcloud python BigQuery library. Furthermore I need to run the query using the beta <a href="https://cloud.google.com/bigquery/sql-reference/" rel="nofollow">standard sql</a> instead of the default <a href="https://cloud.google.com/bigquery/query-reference" rel="nofollow">legacy sql</a>.<br><br>According to the documentation <a href="https://googlecloudplatform.github.io/gcloud-python/latest/bigquery-client.html#gcloud.bigquery.client.Client.run_async_query" rel="nofollow">here</a>, <a href="https://googlecloudplatform.github.io/gcloud-python/latest/bigquery-job.html#gcloud.bigquery.job.QueryJob" rel="nofollow">here</a>, and <a href="https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.query.useLegacySql" rel="nofollow">here</a> I believe I should be able to just set the <code>use_legacy_sql</code> property on the job to <code>False</code>. However, this still results in an error due to the query being processed against Legacy SQL. <strong>How do I successfully use this property to indicate which SQL standard I want the query to be processed with?</strong></p>
<p>Example Python code below:</p>
<pre><code>stdz_table = stdz_dataset.table('standardized_table1')
job_name = 'asyncjob-test'
query = """
SELECT TIMESTAMP('2016-03-30 10:32:15', 'America/Chicago') AS special_date
FROM my_dataset.my_table_20160331;
"""
stdz_job = bq_client.run_async_query(job_name,query)
stdz_job.use_legacy_sql = False
stdz_job.allow_large_results = True
stdz_job.create_disposition = 'CREATE_IF_NEEDED'
stdz_job.destination = stdz_table
stdz_job.write_disposition = 'WRITE_TRUNCATE'
stdz_job.begin()
# wait for job to finish
while True:
    stdz_job.reload()
    if stdz_job.state == 'DONE':
        # print use_legacy_sql value, and any errors (will be None if job executed successfully)
        print stdz_job.use_legacy_sql
        print json.dumps(stdz_job.errors)
        break
    time.sleep(1)
</code></pre>
<p>This outputs:</p>
<pre><code>False
[{"reason": "invalidQuery", "message": "2.20 - 2.64: Bad number of arguments. Expected 1 arguments.", "location": "query"}]
</code></pre>
<p>which is the same error you'd get if you ran it in the BigQuery console using Legacy SQL. When I copy paste the query in BigQuery console and run it using Standard SQL, it executes fine. Note: The error location (2.20 - 2.64) might not be exactly correct for the query above since it is a sample and I have obfuscated some of my personal info in it.</p>
| 0 | 2016-07-26T23:14:02Z | 38,798,887 | <p>The <a href="https://github.com/GoogleCloudPlatform/gcloud-python/blob/master/gcloud/bigquery/job.py#L997" rel="nofollow">use_legacy_sql property</a> did not exist as of version 0.17.0, so you'd have needed to check out the current master branch. However it does now exist of as release 0.18.0 so after upgrading gcloud-python via pip you should be good to go.</p>
| 1 | 2016-08-05T23:08:24Z | [
"python",
"google-bigquery",
"gcloud-python",
"google-cloud-python"
] |
Hibernate a BeagleBone Black | 38,601,376 | <p>I'm a student and a beginner with BeagleBones. I have a project in which a BeagleBone Black is connected to a battery and solar panels.
It will work autonomously, and the board will send data over the 3G network through a 3G USB dongle.
What I want to do is save as much energy as possible. My first thought was to put the BeagleBone into hibernation or sleep mode, and then wake it up every x seconds or minutes or on some other schedule.
So I want to know if that is possible, and whether there is an OS better suited to this use.
I have already managed to disable the USB chipset and then reactivate it several minutes later.</p>
<p>Thank you if you can help me !</p>
| 0 | 2016-07-26T23:19:23Z | 38,619,053 | <p>It looks like it might:</p>
<p><a href="http://processors.wiki.ti.com/index.php/AM335x_Linux_Power_Management_User_Guide#Suspend_.26_Resume" rel="nofollow">http://processors.wiki.ti.com/index.php/AM335x_Linux_Power_Management_User_Guide#Suspend_.26_Resume</a></p>
<p>Although I'm not sure if all of this was properly mainlined, so it would need to be tested on a current image. I <em>might</em> do that later today and amend this.</p>
<p>This will still leave you with waking up the BBB. You'll have to see what's the best option for that. Maybe the PMIC has a suitable input for that.</p>
<p>Another thing is that you should exercise extreme caution when it comes to IO connections between the BBB and other components of your setup, while the BBB is off or in suspend. The SoC tends to self destruct if voltage is applied to IO pins while it's off.</p>
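<p>For reference, the suspend path in that guide boils down to the standard Linux power-management interface. A hypothetical sketch (run as root on the board; it only works if the kernel image actually supports suspend and RTC wakeup, which is exactly the mainlining caveat above):</p>

```sh
# Arm the RTC to wake the board in ~300 seconds, then suspend to RAM.
# Both paths are generic kernel interfaces; whether the AM335x RTC can
# really wake the board depends on the kernel/board support.
echo $(( $(date +%s) + 300 )) > /sys/class/rtc/rtc0/wakealarm
echo mem > /sys/power/state
```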
| 0 | 2016-07-27T16:56:11Z | [
"python",
"sleep",
"beagleboneblack"
] |
Violent Python - Having trouble with OptParse outputting the correct information | 38,601,395 | <p>I am working directly from the Violent Python PDF, page 147.</p>
<p>I am currently using the pygeoip module to find the location of IP adresses. I was able to do this first step fairly easily and it is represented by the #1 hash in the code.</p>
<p>The second step includes taking data from a pcap file and matching the corresponding ip addresses (both destination and origin ip's) to their pyschial locations. For some reason, I can't get the program to return this information. Instead I get a printed string from my optParse instance.</p>
<p>my current code is:</p>
<pre><code>#1. pg 136 of Violent Python by TJ O'Connor
#We are using the imported pygeoip module to search the database from
#http://dev.maxmind.com/geoip/legacy/geolite/ and match it with an ip address
import pygeoip
GI = pygeoip.GeoIP('/home/cody/workspace/violent_python/opt/GeoIP/GeoLiteCity.dat')
#output should be the location of the given ip; NOTE: does not work for IPV6
gi = pygeoip.GeoIP('/home/cody/workspace/violent_python/opt/GeoIP/GeoLiteCity.dat')
def printRecord(tgt):
rec = gi.record_by_name(tgt)
city = rec['city']
region = rec['region_code']
country = rec['country_name']
long = rec['longitude']
lat = rec['latitude']
print '[*] Target: ' + tgt + ' Geo-located.'
print '[+] ' +str(city)+','+str(lat)+ ',longitude: '+str(long)
tgt = '173.255.226.98'
printRecord(tgt)
#reading a pcap capture; NOTE: it would be useful to learn how to view live
#traffic via studying pypcap
import dpkt
import socket
def printPcap(pcap):
for (ts,buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
print '[+] Src: ' + src + ' --> Dst: ' + dst
except:
pass
def main():
f = open('geotest.pcap')
pcap = dpkt.pcap.Reader(f)
printPcap(pcap)
if __name__ == '__main__':
main()
#create a new function that returns a pyschial location for an IP address
import dpkt, socket, pygeoip, optparse
gi = pygeoip.GeoIP("/home/cody/workspace/violent_python/opt/GeoIP/GeoLiteCity.dat")
def retGeoStr(ip):
try:
rec = gi.record_by_name(ip)
city = rec['city']
country = rec['country_code3']
if (city != ''):
geoLoc = city+' , '+country
else:
geoLoc = country
return geoLoc
except:
return 'Unregistered'
#2. this is the entire set up put together
import dpkt,socket,pygeoip,optparse
gi = pygeoip.GeoIP("/home/cody/workspace/violent_python/opt/GeoIP/GeoLiteCity.dat")
def retGeoStr(ip):
try:
rec = gi.record_by_name(ip)
city = rec['city']
country = rec['country_code3']
if city != '':
geoLoc = city + ',' + country
else:
geoLoc = country
return geoLoc
except:
return 'Unregistered'
def printPcap(pcap):
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
print '[+] Src: ' + src + '----> Dst: ' + dst
print '[+] Src: ' +retGeoStr(src) + '----> Dst: ' + retGeoStr(dst)
except:
pass
def main():
parser = optparse.OptionParser('usage%prog -p <pcap file>')
parser.add_option('-p',dest='pcapFile',type='string',\
help='specify pcap filename')
(options,args) = parser.parse_args()
if options.pcapFile == None:
print parser.usage
exit(0)
pcapFile = options.pcapFile
f = open(pcapFile)
pcap = dpkt.pcap.Reader(f)
if __name__ == '__main__':
main()
'''
Desiered output:
analyst# python geoPrint.py -p geotest.pcap
[+] Src: 110.8.88.36 --> Dst: 188.39.7.79
[+] Src: KOR --> Dst: London, GBR
[+] Src: 28.38.166.8 --> Dst: 21.133.59.224
[+] Src: Columbus, USA --> Dst: Columbus, USA
[+] Src: 153.117.22.211 --> Dst: 138.88.201.132
[+] Src: Wichita, USA --> Dst: Hollywood, USA
[+] Src: 1.103.102.104 --> Dst: 5.246.3.148
[+] Src: KOR --> Dst: Unregistered
[+] Src: 166.123.95.157 --> Dst: 219.173.149.77
[+] Src: Washington, USA --> Dst: Kawabe, JPN
[+] Src: 8.155.194.116 --> Dst: 215.60.119.128
[+] Src: USA --> Dst: Columbus, USA
[+] Src: 133.115.139.226 --> Dst: 137.153.2.196
[+] Src: JPN --> Dst: Tokyo, JPN
[+] Src: 217.30.118.1 --> Dst: 63.77.163.212
[+] Src: Edinburgh, GBR --> Dst: USA
[+] Src: 57.70.59.157 --> Dst: 89.233.181.180
[+] Src: Endeavour Hills, AUS --> Dst: Prague, CZE
'''
#3. we are going to build the kml document to map to google maps
</code></pre>
<p>My actual output:</p>
<pre><code>[*] Target: 173.255.226.98 Geo-located.
[+] Newark,40.7357,longitude: -74.1724
[+] Src: 110.8.88.36 --> Dst: 188.39.7.79
[+] Src: 28.38.166.8 --> Dst: 21.133.59.224
[+] Src: 153.117.22.211 --> Dst: 138.88.201.132
[+] Src: 1.103.102.104 --> Dst: 5.246.3.148
[+] Src: 166.123.95.157 --> Dst: 219.173.149.77
[+] Src: 8.155.194.116 --> Dst: 215.60.119.128
[+] Src: 133.115.139.226 --> Dst: 137.153.2.196
[+] Src: 217.30.118.1 --> Dst: 63.77.163.212
[+] Src: 57.70.59.157 --> Dst: 89.233.181.180
usage%prog -p <pcap file>
</code></pre>
<p>Please help me! I can't figure this out but I think it has something to do with my parser</p>
| 0 | 2016-07-26T23:21:16Z | 38,601,608 | <p>Like viraptor said, separate the exercises into different script files and then try again. Not only were the exercises written to be in separate scripts, it is far easier to find a bug if you have less code to look through (and less code that can potentially go wrong).</p>
| 1 | 2016-07-26T23:48:34Z | [
"python",
"optparse"
] |
What is the point of uWSGI? | 38,601,440 | <p>I'm looking at the <a href="https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface" rel="nofollow">WSGI specification</a> and I'm trying to figure out how servers like <a href="https://uwsgi-docs.readthedocs.io/en/latest" rel="nofollow">uWSGI</a> fit into the picture. I understand the point of the WSGI spec is to separate web servers like nginx from web applications like something you'd write using <a href="http://flask.pocoo.org" rel="nofollow">Flask</a>. What I don't understand is what uWSGI is for. Why can't nginx directly call my Flask application? Can't flask speak WSGI directly to it? Why does uWSGI need to get in between them?</p>
<p>There are two sides in the WSGI spec: the server and the web app. Which side is uWSGI on?</p>
| 0 | 2016-07-26T23:27:07Z | 38,604,388 | <p>A traditional web server does not understand or have any way to run Python applications. That's why WSGI server come in. On the other hand Nginx supports reverse proxy to handle requests and pass back responses for Python WSGI servers.</p>
<p>This link might help you: <a href="https://www.fullstackpython.com/wsgi-servers.html" rel="nofollow">https://www.fullstackpython.com/wsgi-servers.html</a></p>
| 2 | 2016-07-27T05:27:37Z | [
"python",
"nginx",
"flask",
"wsgi",
"uwsgi"
] |
What is the point of uWSGI? | 38,601,440 | <p>I'm looking at the <a href="https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface" rel="nofollow">WSGI specification</a> and I'm trying to figure out how servers like <a href="https://uwsgi-docs.readthedocs.io/en/latest" rel="nofollow">uWSGI</a> fit into the picture. I understand the point of the WSGI spec is to separate web servers like nginx from web applications like something you'd write using <a href="http://flask.pocoo.org" rel="nofollow">Flask</a>. What I don't understand is what uWSGI is for. Why can't nginx directly call my Flask application? Can't flask speak WSGI directly to it? Why does uWSGI need to get in between them?</p>
<p>There are two sides in the WSGI spec: the server and the web app. Which side is uWSGI on?</p>
| 0 | 2016-07-26T23:27:07Z | 38,685,758 | <p>Okay, I think I get this now. I read the following description in a <a href="https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface" rel="nofollow">Wikipedia artile</a>:</p>
<blockquote>
<p>Between the server and the application, there may be a WSGI
middleware, which implements both sides of the API. The server
receives a request from a client and forwards it to the middleware.
After processing, it sends a request to the application. The
application's response is forwarded by the middleware to the server
and ultimately to the client.</p>
</blockquote>
<p>It doesn't specifically say, but I'm guessing that uWSGI is one of these middlewares. My understanding is that uWSGI acts as an adapter around your Flask app so that Flask (or any other framework you want to use) doesn't have to know specifically how to implement the app side of the WSGI specification.</p>
<p>So to answer my own question, uWSGI and Flask together form the app side of WSGI and nginx is the web server side.</p>
<p>I believe you can also have uWSGI run as the web server in which case it would play both roles, but I doubt most people would do it that way.</p>
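<p>For reference, the "app side" of the spec is just a callable taking <code>environ</code> and <code>start_response</code> — that callable is what Flask builds for you, and uWSGI sits on the server side of the spec, invoking it for each request. A minimal sketch of both halves, using only the standard library:</p>

```python
from wsgiref.util import setup_testing_defaults

# The application side of WSGI: any callable with this signature.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI\n']

# A toy stand-in for what the server side (uWSGI, wsgiref, ...) does per request.
environ = {}
setup_testing_defaults(environ)
captured = []
body = b''.join(application(environ,
                            lambda status, headers: captured.append((status, headers))))
```

<p>Flask's <code>app</code> object is exactly such a callable, which is why a uWSGI config can point straight at it.</p>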
| 0 | 2016-07-31T15:54:03Z | [
"python",
"nginx",
"flask",
"wsgi",
"uwsgi"
] |
Return list of columns in Dataframe that have a specific value as new column | 38,601,477 | <p>I have a number of columns that have values of either "Yes" or "No" in them. I am hoping to create a function that adds an additional column listing the columns for a specific row that are equal to "Yes". If they are all equal to "No" it would simply return nothing. </p>
<p>Example </p>
<pre><code>Column 1 Column 2 Column 3 Column 4 Column 5 New Column
Yes No No Yes No Column 1, Column 4
</code></pre>
| 2 | 2016-07-26T23:32:14Z | 38,601,556 | <p>Assuming that all of your column names are strings:</p>
<pre><code>df['New Column'] = df.apply(lambda row: ', '.join(row.index[row == 'Yes']), axis=1)
</code></pre>
<p>If you have non-string column names (e.g. an integer) you can do essentially the same thing, but cast the type to string first:</p>
<pre><code>df['New Column'] = df.apply(lambda row: ', '.join(row.index.astype(str)[row == 'Yes']), axis=1)
</code></pre>
<p>For each row, I'm using Boolean indexing on the row's index (i.e. the columns) to only select the locations that are <code>'Yes'</code>. Then I'm simply doing a string join on all of the <code>'Yes'</code> column names.</p>
<p>Sample Output (with two additional sample rows):</p>
<pre><code> Column 1 Column 2 Column 3 Column 4 Column 5 New Column
0 Yes No No Yes No Column 1, Column 4
1 No No No No No
2 No Yes No No No Column 2
</code></pre>
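<p>On large frames a vectorized alternative avoids the per-row <code>apply</code>. This is a sketch; it assumes plain string column names that don't end in a comma or space (since <code>rstrip</code> strips characters, not a suffix):</p>

```python
import pandas as pd

df = pd.DataFrame({'Column 1': ['Yes', 'No'],
                   'Column 2': ['No', 'Yes'],
                   'Column 3': ['Yes', 'No']})

# True * 'name, ' == 'name, ' and False * 'name, ' == '', and summing object
# dtype concatenates, so the matrix product builds the joined column names.
df['New Column'] = df.eq('Yes').dot(df.columns + ', ').str.rstrip(', ')
```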
| 1 | 2016-07-26T23:42:22Z | [
"python",
"pandas"
] |
Return list of columns in Dataframe that have a specific value as new column | 38,601,477 | <p>I have a number of columns that have values of either "Yes" or "No" in them. I am hoping to create a function that adds an additional column listing the columns for a specific row that are equal to "Yes". If they are all equal to "No" it would simply return nothing. </p>
<p>Example </p>
<pre><code>Column 1 Column 2 Column 3 Column 4 Column 5 New Column
Yes No No Yes No Column 1, Column 4
</code></pre>
| 2 | 2016-07-26T23:32:14Z | 38,601,985 | <p>I'd do this:</p>
<pre><code>df['New'] = df.apply(lambda x: df.columns[x == 'Yes'].tolist(), axis=1)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/TeA8a.png" rel="nofollow"><img src="http://i.stack.imgur.com/TeA8a.png" alt="enter image description here"></a></p>
| 1 | 2016-07-27T00:39:16Z | [
"python",
"pandas"
] |
How to access kml/xml attributes after filtering in python(lxml)? | 38,601,538 | <p>I have looked around a bit and can't seem to find a solution to my problem. My underlying problem is that I need to find the name of all KML elements whose child polygons contain points with a given lat/lon. </p>
<p>Looking around I found that using keytree, shapely, and lxml I can filter all of the KML elements down to the polygons in question and then access their parents. When I try to access the parent's attributes, however, I keep getting an empty list. I have tried the following:</p>
<pre><code>def __init__(self):
root=etree.fromstring(open("Example.kml", "r").read())
kmlns = root.tag.split("}")[0][1:]
polygons=root.findall(".//{%s}Polygon"%kmlns)
p = Point(-128.1605,52.474) #this point exists in one of the polygons
hits = filter(
lambda e: shape(keytree.geometry(e)).contains(p),
polygons)
print hits
hit_parent=hits[0].getparent()
print hit_parent.attrib#this prints {}
</code></pre>
<p>I was able to find the row where the polygon was by using the debugger in pycharm; according to that, hits[0] had a sourceline attribute and when I went to that line number in my KML document the polygon did indeed contain the point. Scrolling up to the polygon's parent I found that it had attributes(ie not an empty list). I am new to xml and kml parsing; am I looking in the wrong place? Here is the polygon and its parent from the kml:</p>
<pre><code><Placemark>
<name>THIS IS THE NAME</name>
<visibility>0</visibility>
<styleUrl>#falseColor184010</styleUrl>
<ExtendedData>
<SchemaData schemaUrl="#S_AL_TA_BC_2_41_eng_SSSSISSSSSSSSSSSSSSSSSSSSS10">
<SimpleData name="ACQTECH">Computed</SimpleData>
<SimpleData name="METACOVER">Partial</SimpleData>
<SimpleData name="CREDATE">20030416</SimpleData>
<SimpleData name="REVDATE">20130504</SimpleData>
<SimpleData name="ACCURACY">-1</SimpleData>
<SimpleData name="PROVIDER">Federal</SimpleData>
<SimpleData name="DATASETNAM">BC</SimpleData>
<SimpleData name="SPECVERS">1.1</SimpleData>
<SimpleData name="NID">7103157bba3511d892e2080020a0f4c9</SimpleData>
<SimpleData name="ALCODE">07876</SimpleData>
<SimpleData name="LANGUAGE1">English</SimpleData>
<SimpleData name="NAME1">NEEKAS 4</SimpleData>
<SimpleData name="LANGUAGE2">French</SimpleData>
<SimpleData name="NAME2">NEEKAS NO 4</SimpleData>
<SimpleData name="LANGUAGE3">No Language</SimpleData>
<SimpleData name="NAME3">NULL</SimpleData>
<SimpleData name="LANGUAGE4">No Language</SimpleData>
<SimpleData name="NAME4">NULL</SimpleData>
<SimpleData name="LANGUAGE5">No Language</SimpleData>
<SimpleData name="NAME5">NULL</SimpleData>
<SimpleData name="JUR1">BC</SimpleData>
<SimpleData name="JUR2"></SimpleData>
<SimpleData name="JUR3"></SimpleData>
<SimpleData name="JUR4"></SimpleData>
<SimpleData name="ALTYPE">Indian Reserve</SimpleData>
<SimpleData name="WEBREF">http://clss.nrcan.gc.ca/map-carte/mapbrowser-navigateurcartographique-eng.php?cancode=07876</SimpleData>
</SchemaData>
</ExtendedData>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>
-128.1615722,52.47385589999999,0 -128.1618475,52.47338730000003,0 -128.1623126999999,52.47275560000004,0 -128.1622705,52.47253640000001,0 -128.162017,52.47243320000002,0 -128.1619326,52.4722527,0 -128.1618904,52.4721108,0 -128.161827,52.47202060000003,0 -128.1615523,52.47204629999998,0 -128.1613199,52.47211069999996,0 -128.1607705,52.47205899999999,0 -128.1604538,52.47172369999999,0 -128.1600750999999,52.47149440000001,0 -128.1600821,52.47510580000001,0 -128.1615621,52.47510469999996,0 -128.1615294999999,52.474926,0 -128.1615508,52.47452629999999,0 -128.1615298,52.47416529999997,0 -128.1615722,52.47385589999999,0
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</code></pre>
<p>I want to get "THIS IS THE NAME" from the parent of the polygon.</p>
| 0 | 2016-07-26T23:40:55Z | 38,601,943 | <p>Your target text is not attribute of any element. Given <code><Polygon></code> as context element, you want to go to parent element <code><Placemark></code> and then get its child element <code><name></code>. This can be done in a single line using XPath :</p>
<pre><code>....
print hits
hit_parent = hits[0].find("./../{%s}name"%kmlns)
print hit_parent.text
</code></pre>
| 1 | 2016-07-27T00:31:49Z | [
"python",
"python-2.7",
"xml-parsing",
"lxml",
"kml"
] |
Set specific cell of multi-indexed Pandas DataFrame | 38,601,573 | <p>I have a DataFrame (df_test) with row labels ('letters') and column names ('numbers') which can be grouped by row labels.</p>
<pre><code>>>> letters = ['a','a','a','a','a','b','b','b','c','c','c','c']
>>> n = {'numbers': [0,1,2,3,4,0,1,2,0,1,2,3]}
>>> df_test = pd.DataFrame(n, index=letters)
>>> print df_test
numbers
a 0
a 1
a 2
a 3
a 4
b 0
b 1
b 2
c 0
c 1
c 2
c 3
</code></pre>
<p>I want to create a new column called 'Position'. The first row of each group (i.e. group a, group b, group c) should be 'S', the last row should be 'E', and the intervening rows should be 'M'. <em>(For start, middle and end. ;))</em> It would look like this:</p>
<pre><code> numbers Position
a 0 S
a 1 M
a 2 M
a 3 M
a 4 E
b 0 S
b 1 M
b 2 E
c 0 S
c 1 M
c 2 M
c 3 E
</code></pre>
<p>I have tried using a combination of .loc and .iloc to assign my new value to the correct cell but get an error message.</p>
<pre><code>>>> df_test['Position'] = 'M'
>>> for idxName,frame in df_test.groupby(level=0):
df_test.loc[idxName,('Position')].iloc[0] = 'S'
df_test.loc[idxName,('Position')].iloc[-1] = 'E'
__main__:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
__main__:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
</code></pre>
<p>I imagine that the problem arises from trying to use .iloc[ ] after having used .loc[ ] but I don't know Pandas well enough to have a different solution and haven't found anything online despite hours of searching. Any help with (a) understanding why I am getting the warning and (b) setting my cells to the correct values would be much appreciated!</p>
| 2 | 2016-07-26T23:44:20Z | 38,601,750 | <p>write a function to <code>apply</code> with in a <code>groupby</code></p>
<pre><code>def first_last_me(df, c='Position'):
df[c] = 'M'
df.iloc[0, -1] = 'S'
df.iloc[-1, -1] = 'E'
return df
df_test.groupby(level=0).apply(first_last_me)
</code></pre>
<p><a href="http://i.stack.imgur.com/tvozQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/tvozQ.png" alt="enter image description here"></a></p>
| 1 | 2016-07-27T00:06:09Z | [
"python",
"pandas",
"dataframe"
] |
Set specific cell of multi-indexed Pandas DataFrame | 38,601,573 | <p>I have a DataFrame (df_test) with row labels ('letters') and column names ('numbers') which can be grouped by row labels.</p>
<pre><code>>>> letters = ['a','a','a','a','a','b','b','b','c','c','c','c']
>>> n = {'numbers': [0,1,2,3,4,0,1,2,0,1,2,3]}
>>> df_test = pd.DataFrame(n, index=letters)
>>> print df_test
numbers
a 0
a 1
a 2
a 3
a 4
b 0
b 1
b 2
c 0
c 1
c 2
c 3
</code></pre>
<p>I want to create a new column called 'Position'. The first row of each group (i.e. group a, group b, group c) should be 'S', the last row should be 'E', and the intervening rows should be 'M'. <em>(For start, middle and end. ;))</em> It would look like this:</p>
<pre><code> numbers Position
a 0 S
a 1 M
a 2 M
a 3 M
a 4 E
b 0 S
b 1 M
b 2 E
c 0 S
c 1 M
c 2 M
c 3 E
</code></pre>
<p>I have tried using a combination of .loc and .iloc to assign my new value to the correct cell but get an error message.</p>
<pre><code>>>> df_test['Position'] = 'M'
>>> for idxName,frame in df_test.groupby(level=0):
df_test.loc[idxName,('Position')].iloc[0] = 'S'
df_test.loc[idxName,('Position')].iloc[-1] = 'E'
__main__:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
__main__:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
</code></pre>
<p>I imagine that the problem arises from trying to use .iloc[ ] after having used .loc[ ] but I don't know Pandas well enough to have a different solution and haven't found anything online despite hours of searching. Any help with (a) understanding why I am getting the warning and (b) setting my cells to the correct values would be much appreciated!</p>
| 2 | 2016-07-26T23:44:20Z | 38,621,283 | <p>Because @piRSquared's answer didn't work with my DataFrame for reasons still unknown, this is what I ended up going with.</p>
<pre><code>>>> letters = ['a','a','a','a','a','b','b','b','c','c','c','c']
>>> n = {'numbers': [0,1,2,3,4,0,1,2,0,1,2,3]}
>>> df_test = pd.DataFrame(n, index=letters)
>>> df_test['Position'] = 'M'
>>> df_test2 = pd.DataFrame()
>>> for idxName,frame in df_test.groupby(level=0):
frameLen = len(df_test.ix[idxName])
df_s = df_test.ix[idxName].iloc[0:1].copy()
df_e = df_test.ix[idxName].iloc[-1:frameLen].copy()
df_s['Position'] = 'S'
df_e['Position'] = 'E'
df_test2 = df_test2.append([df_s,df_test.loc[idxName].ix[1:-1],df_e],ignore_index=False)
>>> df_test2
numbers Position
a 0 S
a 1 M
a 2 M
a 3 M
a 4 E
b 0 S
b 1 M
b 2 E
c 0 S
c 1 M
c 2 M
c 3 E
</code></pre>
<p>I will try to figure out how to do this using 'apply' if possible, but for now this hack works.</p>
| 0 | 2016-07-27T19:09:11Z | [
"python",
"pandas",
"dataframe"
] |
What is the standard docstring for a django model metaclass? | 38,601,591 | <p>Models in django can come with a meta class like so:</p>
<pre><code>class Address(models.Model):
"""Address model."""
class Meta:
"""Meta McMetaface."""
verbose_name = "Address"
verbose_name_plural = "Addresses"
address_line = models.CharField(max_length=256)
postcode = models.CharField(max_length=10)
def __str__(self):
"""Return address without post code."""
return self.address_line
</code></pre>
<p>My metaclass is whimsical at best. Does Python or Django have a standard text for meta classes?</p>
| 0 | 2016-07-26T23:46:23Z | 38,602,102 | <p>There's no point in writing a docstring for Meta. It's a standard name that every model defines, and means the same thing in every model. Just don't write a docstring.</p>
| 1 | 2016-07-27T00:58:29Z | [
"python",
"django",
"metaclass",
"pep"
] |
Butterworth filter applied on a column of a pandas dataframe | 38,601,628 | <p>I have a dataframe like this (just much bigger, with smaller step of x):</p>
<pre><code> x val1 val2 val3
0 0.0 10.0 NaN NaN
1 0.5 10.5 NaN NaN
2 1.0 11.0 NaN NaN
3 1.5 11.5 NaN 11.60
4 2.0 12.0 NaN 12.08
5 2.5 12.5 12.2 12.56
6 3.0 13.0 19.8 13.04
7 3.5 13.5 13.3 13.52
8 4.0 14.0 19.8 14.00
9 4.5 14.5 14.4 14.48
10 5.0 NaN 19.8 14.96
11 5.5 15.5 15.5 15.44
12 6.0 16.0 19.8 15.92
13 6.5 16.5 16.6 16.40
14 7.0 17.0 19.8 18.00
15 7.5 17.5 17.7 NaN
16 8.0 18.0 19.8 NaN
17 8.5 18.5 18.8 NaN
18 9.0 19.0 19.8 NaN
19 9.5 19.5 19.9 NaN
20 10.0 20.0 19.8 NaN
</code></pre>
<p>My original issue was calculating derivatives for each of the columns and it was resolved in this question: <a href="http://stackoverflow.com/questions/38580073/how-to-get-indexes-of-values-in-a-pandas-dataframe/38580158?noredirect=1#comment64575213_38580158">How to get indexes of values in a Pandas DataFrame?</a>
The solution posted by Alexander was with my previous code as follows:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('H:/DocumentsRedir/pokus/dataframe.csv', delimiter=',')
vals = list(df.columns.values)[1:]
dVal = df.iloc[:, 1:].diff() # `x` is in column 0.
dX = df['x'].diff()
dVal.apply(lambda series: series / dX)
</code></pre>
<p>However, I need to do some smoothing (let's say to 2 m here, from the original 0.5 m spacing of x), because the values of the derivatives just get crazy at the fine scale.
I have tried the <strong>scipy</strong> functions <strong>filtfilt</strong> and <strong>butter</strong> (I want to use the Butterworth filter, which is a common practice in my discipline), but probably I am not using them correctly. UPDATE: Also tried savgol_filter.</p>
<p><strong>How should I implement these functions in this code?</strong></p>
<p>(This is how I modified the code:</p>
<pre><code>step = 0.5
relevant_scale = 2
order_butterworth = 4
b, a = butter(order_butterworth, step/relevant_scale, btype='low', analog=False)
smoothed=filtfilt(b,a,data.iloc[:, 1:]) # the first column is x
dVal = smoothed.diff()
dz = data['Depth'].diff()
derivative = (dVal.apply(lambda series: series / dz))*1000
</code></pre>
<p>But my resulting smoothed was an array of NaNs and got an error <code>AttributeError: 'numpy.ndarray' object has no attribute 'diff'</code>)</p>
<p>This problem was solved by the answer - <a href="http://stackoverflow.com/a/38691551/5553319">http://stackoverflow.com/a/38691551/5553319</a> and the code really works on continuous data. However, what happens with the hardly noticeable change which I made in the source data? (A NaN value in the middle.)
<a href="http://i.stack.imgur.com/z48Dh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/z48Dh.jpg" alt="enter image description here"></a></p>
<p>So how can we make this solution stable even in the case we miss a datapoint in an otherwise continuous array of data?
Ok, also answered in the comments. Such missing datapoints need to be interpolated.</p>
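<p>A minimal sketch of that interpolation step, assuming <code>x</code> is monotonic; <code>limit_area='inside'</code> fills only interior gaps and leaves leading/trailing NaNs untouched:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x':    [0.0, 0.5, 1.0, 1.5, 2.0],
                   'val1': [10.0, 10.5, np.nan, 11.5, np.nan]})

# Interpolate against the x values themselves, but only between known points.
filled = df.set_index('x').interpolate(method='index', limit_area='inside')
```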
| 1 | 2016-07-26T23:50:59Z | 38,691,551 | <p>The error you are seeing is because you are trying to call the method <code>.diff()</code> on the result of <code>filtfilt</code>, which is a numpy array which doesn't have that method. If you really want to use a first order difference, you can just use <code>np.gradient(smoothed)</code></p>
<p>Now, it appears that your real goal is to obtain a lag-free estimate of the derivative of a noisy signal. I would recommend that you rather use something like the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html" rel="nofollow">Savitzky Golay filter</a> which will allow you to get the derivative estimate in one application of the filter. You can see an example of derivative estimation on a noisy signal <a href="http://nbviewer.jupyter.org/github/alchemyst/chemengcookbook/blob/master/Common%20problems%20solved%20in%20Python.ipynb#Given-noisy-data" rel="nofollow">here</a></p>
<p>You will also need to accommodate the <code>NaN</code>s in your data. Here is how I would do it with your data:</p>
<pre><code>import scipy.signal
import matplotlib.pyplot as plt
# Intelligent use of the index allows us to keep track of the x for the data.
df = df.set_index('x')
dx = df.index[1]
for col in df:
# Get rid of nans
# NOTE: If you have nans in between your data points, this does the wrong thing,
# but for the data you show for contiguous data this is fine.
nonans = df[col].dropna()
smoothed = scipy.signal.savgol_filter(nonans, 5, 2, deriv=1, delta=dx)
plt.plot(nonans.index, smoothed, label=col)
plt.legend()
</code></pre>
<p>This results in the following figure:</p>
<p><a href="http://i.stack.imgur.com/8erWS.png" rel="nofollow"><img src="http://i.stack.imgur.com/8erWS.png" alt="Sample plot"></a></p>
| 1 | 2016-08-01T05:25:47Z | [
"python",
"numpy",
"pandas",
"scipy"
] |
Match up/overlap arrays in python | 38,601,653 | <p>I have two lists, one of integers and one of tuples containing integers as follows (this is an example)</p>
<pre><code>firstList = [4, 8, 20, 25, 60, 123]
secondList = [(0, 3), (4, 5), (7, 14), (19, 22), (40, 90), (100, 140)]
</code></pre>
<p>I am wanting to match up (overlap) the first array onto the second array as well as possible. For example:</p>
<p>The first section of the array <code>[4, 8, 20]</code> would match up with <code>[(4, 5), (7, 14), (19, 22)]</code> as those first numbers fall on or between the numbers in their respective tuples. </p>
<p><code>25</code> would be ignored as there is no match for it</p>
<p><code>[60, 123]</code> would then match up with <code>[(40, 90), (100, 140)]</code></p>
<p>The above is fairly simple to implement however what I am getting stuck with is where there is an offset that throws everything off. Say <code>4</code> is subtracted from all the numbers in the first array so we are now left with</p>
<pre><code>firstList = [0, 4, 16, 21, 56, 119]
secondList = [(0, 3), (4, 5), (7, 14), (19, 22), (40, 90), (100, 140)]
</code></pre>
<p>Some numbers still match up, however not all of them and now <code>21</code> falls within the range of <code>(19, 22)</code> when it should be ignored and a number of other numbers fall within the wrong ranges now. How can these new numbers still be matched up in the "best possible way" in other words in a fashion in which there are the most matches?</p>
<p>Ideally the method would figure out to add an offset back <code>4</code> and then match up the numbers again.</p>
<hr>
<p><strong>Clarification edit</strong>
The numbers and tuples will be ordered in ascending sequence (as seen above), any given number can't be matched with multiple tuples and any given tuple can't be matched with multiple numbers. It's 1 to 1.</p>
<p>The three things I am wanting returned from this are 1: The offset (4) in this case that gets things matched up the best. 2: What are the numbers that match up with a tuple and which tuple do they match up with. 3: What numbers don't match up in the best case scenerio (with the most matches).</p>
<p>Here's an illustration, I don't know if this will help though. Imagine a long piece of paper along a table with the number line. The sections indicated by the tuples have stripes of colour painted along them (each number section a different colours) painted along them (e.g. red strip from 0 to 3, purple strip from 4 to 5, etc). The areas that don't have any number section are left blank (or white)</p>
<p>Then you have a second paper (also with a printed numberline) placed above this first one, with holes cut out at the given numbers in <code>firstList</code>. Where should you slide this top numberline over the stationary bottom numberline so as to see colours in as many of the holes as possible, ensuring that the same colour does not appear twice?</p>
<p>I don't know if that makes things any clearer - I hope it does!</p>
<p>Is there a library/algorithm or method for performing such a task apart from brute forcing it?</p>
| -1 | 2016-07-26T23:54:46Z | 38,601,792 | <p>Your question isn't fully defined, but here's something that makes sense to me:</p>
<pre><code>def binSearch(x, L, low=None, high=None):
    if low is None: low = 0
    if high is None: high = len(L)
    if low == high: return None
    mid = (low + high) // 2
    lower, upper = L[mid]
    if lower <= x <= upper: return mid
    if x > upper: return binSearch(x, L, mid+1, high)
    return binSearch(x, L, low, mid)

def match(L, T):
    T = list(T)  # work on a copy so each tuple is matched at most once
    answer = {}
    for num in L:
        m = binSearch(num, T)
        if m is None: continue
        answer[num] = T[m]
        T.pop(m)
    return answer
def main():
firstList = [4, 8, 20, 25, 60, 123]
secondList = [(0, 3), (4, 5), (7, 14), (19, 22), (40, 90), (100, 140)]
matches = match(firstList, secondList)
for k in sorted(matches):
print(k, "matches with", matches[k])
</code></pre>
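<p>For the offset part of the question, a brute-force sketch: the match count can only change when some number lands exactly on an interval endpoint, so it suffices to try those candidate offsets. This assumes the numbers are sorted ascending and the tuples are sorted and non-overlapping, as stated:</p>

```python
import bisect

def count_matches(nums, intervals, offset):
    """Greedily match shifted numbers to sorted, disjoint intervals, 1-to-1."""
    starts = [s for s, _ in intervals]
    count, used = 0, -1
    for n in nums:
        v = n + offset
        i = bisect.bisect_right(starts, v) - 1
        if i > used and intervals[i][0] <= v <= intervals[i][1]:
            count += 1
            used = i
    return count

def best_offset(nums, intervals):
    # Candidate offsets: shifts that put some number on some interval endpoint.
    candidates = {e - n for n in nums for iv in intervals for e in iv}
    return max(candidates, key=lambda off: count_matches(nums, intervals, off))
```

<p>On the shifted example from the question, <code>best_offset([0, 4, 16, 21, 56, 119], secondList)</code> recovers an alignment with five matches.</p>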
| 0 | 2016-07-27T00:11:50Z | [
"python",
"arrays",
"python-3.x"
] |
Is there a better way to structure these function calls? | 38,601,671 | <p>I have a class that needs one of a number of helper classes. It will only ever use one of these helpers, and all of them have the same interface. So I'm writing code like this:</p>
<pre><code>if self.type == Class.Type.a:
helper = A()
elif self.type == Class.Type.b:
helper = B()
elif self.type == Class.Type.c:
helper = C()
helper.do_stuff()
</code></pre>
<p>Currently there are about 5 different types, but I can see that expanding, and this structure is already in my class about four or five times, and I can see the need for this logic at least a dozen times.</p>
<p>Is there a better way to perform this logic? I think that Python often uses dictionaries to perform this type of logic, but I don't yet see how that idiom can work.</p>
| 0 | 2016-07-26T23:56:12Z | 38,601,690 | <p>You can indeed use a dictionary:</p>
<pre><code>my_dict = {Class.Type.a:A,
Class.Type.b:B,
Class.Type.c:C}
helper = my_dict[self.type]()
helper.do_stuff()
</code></pre>
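<p>If <code>self.type</code> might hold a value with no registered helper, <code>dict.get</code> lets you fail with a clearer error than a bare <code>KeyError</code>. A sketch with stand-in helper classes (in the real code the keys would be the <code>Class.Type</code> members):</p>

```python
class A:
    def do_stuff(self):
        return 'A did stuff'

class B:
    def do_stuff(self):
        return 'B did stuff'

dispatch = {'a': A, 'b': B}

def make_helper(type_key):
    helper_cls = dispatch.get(type_key)
    if helper_cls is None:
        raise ValueError('no helper registered for type %r' % (type_key,))
    return helper_cls()
```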
| 1 | 2016-07-26T23:58:24Z | [
"python"
] |
parsing through dict to separate keys and values | 38,601,673 | <p>I have code that looks through a folder full of tsv files and grabs the name of the file as the key and the column headers as the values in the dict.</p>
<pre><code>row1 =[]
listdict=[]
for f in files:
with open(dir_path +'/'+f, 'rU') as file:
reader = csv.reader(file)
row1 = next(reader)
dict = {f: row1}
listdict.append(dict)
</code></pre>
<p>when trying to access the dict <code>listdict['file_name.tsv']</code>, I get an error</p>
<pre><code>*** TypeError: list indices must be integers, not str
</code></pre>
<p>when using ints <code>listdict[0]</code>, I can't seem to access values separately as they're all clumped together as 1 value.</p>
<pre><code>{'file_name.tsv': ['header1\theader2\theader3\theader4']}
</code></pre>
<p>How can I access each header separately. My goal is to create a csv output that lists the filename and all of the associated headers.</p>
| 0 | 2016-07-26T23:56:23Z | 38,601,722 | <p>You've created a list by doing <code>listdicts = []</code>.</p>
<p>Create a dictionary instead: <code>listdicts = {}</code>.</p>
<pre><code> listdict={}
for f in files:
with open(dir_path +'/'+f, 'r') as file:
reader = csv.reader(file)
listdict[f] = next(reader).split() # split will split the headers by `\t`
</code></pre>
<p>and access the corresponding first header with <code>listdict['file_name.tsv'][0]</code>.</p>
| 0 | 2016-07-27T00:02:21Z | [
"python"
] |
parsing through dict to separate keys and values | 38,601,673 | <p>I have code that looks through a folder full of tsv files and grabs the name of the file as the key and the column headers as the values in the dict.</p>
<pre><code>row1 =[]
listdict=[]
for f in files:
with open(dir_path +'/'+f, 'rU') as file:
reader = csv.reader(file)
row1 = next(reader)
dict = {f: row1}
listdict.append(dict)
</code></pre>
<p>when trying to access the dict <code>listdict['file_name.tsv']</code>, I get an error</p>
<pre><code>*** TypeError: list indices must be integers, not str
</code></pre>
<p>when using ints <code>listdict[0]</code>, I can't seem to access values separately as they're all clumped together as 1 value.</p>
<pre><code>{'file_name.tsv': ['header1\theader2\theader3\theader4']}
</code></pre>
<p>How can I access each header separately. My goal is to create a csv output that lists the filename and all of the associated headers.</p>
| 0 | 2016-07-26T23:56:23Z | 38,601,726 | <p>You probably want this</p>
<pre><code>filedict={}
for f in files:
with open(dir_path +'/'+f, 'rU') as file:
reader = csv.reader(file)
row1 = next(reader)
filedict[f] = row1
</code></pre>
| 0 | 2016-07-27T00:02:51Z | [
"python"
] |
parsing through dict to separate keys and values | 38,601,673 | <p>I have code that looks through a folder full of tsv files and grabs the name of the file as the key and the column headers as the values in the dict.</p>
<pre><code>row1 =[]
listdict=[]
for f in files:
with open(dir_path +'/'+f, 'rU') as file:
reader = csv.reader(file)
row1 = next(reader)
dict = {f: row1}
listdict.append(dict)
</code></pre>
<p>when trying to access the dict <code>listdict['file_name.tsv']</code>, I get an error</p>
<pre><code>*** TypeError: list indices must be integers, not str
</code></pre>
<p>when using ints <code>listdict[0]</code>, I can't seem to access values separately as they're all clumped together as 1 value.</p>
<pre><code>{'file_name.tsv': ['header1\theader2\theader3\theader4']}
</code></pre>
<p>How can I access each header separately. My goal is to create a csv output that lists the filename and all of the associated headers.</p>
| 0 | 2016-07-26T23:56:23Z | 38,602,036 | <p>A <code>list</code> is used when you want an ordered list of elements.</p>
<p>A <code>dict</code> is used when you want an unordered set of key-value pairs.</p>
<p>If you want to query up file headers by filename (ex. <code>listdict['file_name.tsv']</code>) you will need to use a <code>dict</code>. Additionally, if you want to query up individual file headers in a file, you will need to use a <code>list</code> to retain the order:</p>
<pre><code>listdict={}
for f in files:
with open(dir_path +'/'+f, 'r') as file:
reader = csv.reader(file, delimiter='\t')
row1 = next(reader) # stores first row of tsv file in a list
listdict[f] = row1
</code></pre>
<p>An entry in <code>listdict</code> will look like:</p>
<pre><code>{'file_name.tsv': ['header1', 'header2', 'header3', 'header4']}
</code></pre>
<p><code>listdict['file_name.tsv']</code> will give you <code>['header1', 'header2', 'header3', 'header4']</code>.</p>
<p><code>listdict['file_name.tsv'][0]</code> will give you the value <code>'header1'</code>.</p>
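<p>For the last step — writing a CSV that lists each filename with its headers — a sketch (the output filename is a placeholder):</p>

```python
import csv

listdict = {'file_name.tsv': ['header1', 'header2', 'header3', 'header4'],
            'other.tsv': ['colA', 'colB']}

with open('headers_summary.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for fname in sorted(listdict):
        writer.writerow([fname] + listdict[fname])
```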
| 4 | 2016-07-27T00:47:46Z | [
"python"
] |
Maya: Defer a script until after VRay is registered? | 38,601,706 | <p>I'm trying to delay a part of my pipeline tool (which runs during the startup of Maya) to run after VRay has been registered. </p>
<p>I'm currently delaying the initialization of the tool in a userSetup.py like so:</p>
<pre><code>def run_my_tool():
import my_tool
reload(my_tool)
mc.evalDeferred("run_my_tool()")
</code></pre>
<p>I've tried using evalDeferred within the tool to delay the execution of the render_settings script, but it keeps running before VRay has been registered. Any thoughts on how to create a listener for the VRay register event, or what event that is? Thanks!</p>
<p><strong>EDIT</strong>:</p>
<p>Made a new topic to figure out how to correctly use theodox's condition/scriptJob commands suggestion <a href="http://stackoverflow.com/questions/39252664/maya-python-running-condition-command-and-scriptjob-command-from-within-a-mod">here</a>.</p>
| 0 | 2016-07-27T00:00:32Z | 38,601,804 | <p>Uiron over at tech-artists.com showed me how to do this properly. Here's a <a href="http://tech-artists.org/forum/showthread.php?3203-need-help-passing-second-argument-to-event-for-scriptJob-in-maya" rel="nofollow">link to the thread</a></p>
<p>Here's the post by uiron:</p>
<p>"don't pass the python code as string unless you have to. Wherever a python callback is accepted (that's not everywhere in Maya's api, but mostly everywhere), try one of these:</p>
<pre><code># notice that we're passing a function, not a function call
mc.scriptJob(runOnce=True, e=["idle", myObject.myMethod], permanent=True)
mc.scriptJob(runOnce=True, e=["idle", myGlobalFunction], permanent=True)

# when in doubt, wrap it in a temporary function; remember that in Python you can
# declare functions anywhere in the code, even inside other functions
open_file_path = '...'

def idle_handler(*args):
    # here's where you solve the 'how to pass the argument into the handler'
    # problem - use a variable from the outer scope
    file_manip_open_fn(open_file_path)

mc.scriptJob(runOnce=True, e=["idle", idle_handler], permanent=True)
</code></pre>
<p>"</p>
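<p>The closure trick uiron describes is plain Python and can be tried outside Maya. Below is a minimal, runnable sketch: <code>register_idle_callback</code> and <code>fire_idle</code> are hypothetical stand-ins for <code>mc.scriptJob</code> and Maya's idle event, used only to show that the handler picks its "argument" up from the enclosing scope rather than from a string of code.</p>

```python
# Hypothetical stand-in for mc.scriptJob: we store the function object itself,
# so there is no string of code to eval later.
callbacks = []

def register_idle_callback(fn):
    callbacks.append(fn)

def fire_idle():
    # Simulate Maya firing the idle event once.
    for fn in callbacks:
        fn()
    del callbacks[:]

results = []
open_file_path = '/projects/shot_010.ma'  # made-up path for the demo

def idle_handler(*args):
    # The handler's "argument" comes from the enclosing scope,
    # not from the event system.
    results.append(open_file_path)

register_idle_callback(idle_handler)
fire_idle()
print(results)
```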
| 0 | 2016-07-27T00:13:48Z | [
"python",
"maya"
] |
Flask - Cannot commit changes on view | 38,601,812 | <p>I have this Flask app deployed on a server.</p>
<p>Everything works fine except for the views where I need to db.session.commit() anything. I've been looking for circular imports and db engine issues, but I can't seem to understand why it doesn't work.</p>
<p>Below I post my <code>__init__.py</code> and one of the views I can't seem to make work.</p>
<pre><code># __init__.py
from flask import Flask
from flask.ext.bootstrap import Bootstrap
from flask.ext.moment import Moment
from flask.ext.sqlalchemy import SQLAlchemy
from config import config
from flask.ext.login import LoginManager
import os

basedir = os.path.abspath(os.path.dirname(__file__))

bootstrap = Bootstrap()
moment = Moment()
db = SQLAlchemy()

login_manager = LoginManager()
login_manager.session_protection = 'strong'
login_manager.login_view = 'auth.login'

def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(config[config_name])
    config[config_name].init_app(app)

    bootstrap.init_app(app)
    moment.init_app(app)
    db.app = app
    db.init_app(app)
    db.create_all()
    login_manager.init_app(app)

    # BLUEPRINTS
    from main import main as main_blueprint
    app.register_blueprint(main_blueprint)

    from auth import auth as auth_blueprint
    app.register_blueprint(auth_blueprint, url_prefix='/auth')

    from api import api as api_blueprint
    app.register_blueprint(api_blueprint, url_prefix='/api')

    return app

app = create_app(os.getenv('FLASK_CONFIG') or 'default')

if __name__ == "__main__":
    app.run()
</code></pre>
<p>The config.py file where the db settings are defined</p>
<pre><code># -*- coding: utf-8 -*-
import os

basedir = os.path.abspath(os.path.dirname(__file__))

class Config:
    SECRET_KEY = os.environ.get('SECRET_KEY') or 'hard to guess string'
    SQLALCHEMY_COMMIT_ON_TEARDOWN = True
    SQLALCHEMY_TRACK_MODIFICATIONS = True
    # MAIL_SERVER = 'smtp.googlemail.com'
    MAIL_SERVER = 'smtp.live.com'
    MAIL_PORT = 587
    MAIL_USE_TLS = True
    # Configuration variables set on the command line for security reasons
    MAIL_USERNAME = os.environ.get('MAIL_USERNAME')
    MAIL_PASSWORD = os.environ.get('MAIL_PASSWORD')
    ORGANITE_MAIL_SUBJECT_PREFIX = '[Organite]'
    ORGANITE_MAIL_SENDER = '*****@hotmail.com'
    ORGANITE_ADMIN = '*****@hotmail.com'

    @staticmethod
    def init_app(app):
        pass

class DevelopmentConfig(Config):
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///data-dev.sqlite'

class TestConfig(Config):
    TESTING = True
    SQLALCHEMY_DATABASE_URI = os.environ.get('TEST_DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'data-test.sqlite')

class ProductionConfig(Config):
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'data.sqlite')

config = {
    'development': DevelopmentConfig,
    'testing': TestConfig,
    'production': ProductionConfig,
    'default': DevelopmentConfig
}
</code></pre>
<p>Then on my api blueprint one of the views that don't work:</p>
<pre><code># -*- coding: utf-8 -*-
from flask import jsonify, request, session, redirect, url_for, current_app
from flask.ext.login import login_user, logout_user, login_required, \
    current_user
from .. import db
from ..models import User
from ..decorators import admin_required, permission_required
from . import api
import cx_Oracle
import datetime
import json
import os

os.environ["NLS_LANG"] = ".UTF8"

# Remove user > JSON
@api.route('/delete/<id>', methods=['GET'])
@login_required
@admin_required
def del_user(id):
    user = User.query.filter_by(num_mec=id).first()
    if user is not None:
        try:
            db.session.delete(user)
            db.session.commit()
            status = 'Sucesso'
        except:
            status = 'Falhou'
    else:
        status = 'Falhou'
    db.session.close()
    return jsonify({'result': status})
</code></pre>
<p>No matter what changes I make, the result is always 'Falhou', meaning the db.session.commit() failed.</p>
<p>I don't even know how to see log errors for this kind of thing, and I can't understand why it doesn't work.</p>
<p>Please, help, I am running out of time to finish this project.</p>
| 0 | 2016-07-27T00:14:38Z | 38,628,001 | <p>In this case, unfortunately, the real cause of the error is being obscured by the <code>try: except</code> block which suppresses the error and just handles it by returning the failure result.</p>
<p>Remove the <code>try: except</code> and you will see the cause of the session commit failure in the crash log that Flask generates (in this case, as you mentioned in the comment, there was not sufficient permissions on the database file).</p>
<p>As a general rule, unless an exception is expected and must be handled gracefully, it's better to let it crash and burn with maximum error reporting (at least in debug mode) so that bugs don't slip by silently unnoticed. Since there is no general use-case where you would expect a failed commit to be normal behaviour, you should not be wrapping it in a <code>try: except</code> block.</p>
| 0 | 2016-07-28T05:49:23Z | [
"python",
"flask",
"flask-sqlalchemy"
] |
What is the quickest way to ensure a specific column is last (or first) in a dataframe | 38,601,841 | <p>given <code>df</code></p>
<pre><code>df = pd.DataFrame(np.arange(8).reshape(2, 4), columns=list('abcd'))
</code></pre>
<p><a href="http://i.stack.imgur.com/UXlIN.png" rel="nofollow"><img src="http://i.stack.imgur.com/UXlIN.png" alt="enter image description here"></a></p>
<p>Suppose I need column <code>'b'</code> to be at the end. I could do:</p>
<pre><code>df[['a', 'c', 'd', 'b']]
</code></pre>
<p><a href="http://i.stack.imgur.com/miBss.png" rel="nofollow"><img src="http://i.stack.imgur.com/miBss.png" alt="enter image description here"></a></p>
<p>But what is the most efficient way to ensure that a given column is at the end?</p>
<p>This is what I've been going with. What would others do?</p>
<pre><code>def put_me_last(df, column):
return pd.concat([df.drop(column, axis=1), df[column]], axis=1)
put_me_last(df, 'b')
</code></pre>
<p><a href="http://i.stack.imgur.com/miBss.png" rel="nofollow"><img src="http://i.stack.imgur.com/miBss.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing Results</h3>
<p><strong>conclusion</strong>
mfripp is the winner. Seems as if <code>reindex_axis</code> is a large efficiency gain over <code>[]</code>. That is really good info.</p>
<p><a href="http://i.stack.imgur.com/C8T6N.png" rel="nofollow"><img src="http://i.stack.imgur.com/C8T6N.png" alt="enter image description here"></a></p>
<p><strong>code</strong></p>
<pre><code>from timeit import timeit
from string import lowercase

import numpy as np
import pandas as pd

df_small = pd.DataFrame(np.arange(8).reshape(2, 4), columns=list('abcd'))
df_large = pd.DataFrame(np.arange(1000000).reshape(10000, 100),
                        columns=pd.MultiIndex.from_product(
                            [list(lowercase[:-1]), ['One', 'Two', 'Three', 'Four']]))

def pir1(df, column):
    return pd.concat([df.drop(column, axis=1), df[column]], axis=1)

def pir2(df, column):
    if df.columns[-1] == column:
        return df
    else:
        pos = df.columns.values.__eq__(column).argmax()
        return df[np.roll(df.columns, len(df.columns) - 1 - pos)]

def pir3(df, column):
    if df.columns[-1] == column:
        return df
    else:
        pos = df.columns.values.__eq__(column).argmax()
        cols = df.columns.values
        return df[np.concatenate([cols[:pos], cols[1 + pos:], cols[[pos]]])]

def pir4(df, column):
    if df.columns[-1] == column:
        return df
    else:
        return df[np.roll(df.columns.drop(column).insert(0, column), -1)]

def carsten1(df, column):
    cols = list(df)
    if cols[-1] == column:
        return df
    else:
        return pd.concat([df.drop(column, axis=1), df[column]], axis=1)

def carsten2(df, column):
    cols = list(df)
    if cols[-1] == column:
        return df
    else:
        idx = cols.index(column)
        new_cols = cols[:idx] + cols[idx + 1:] + [column]
        return df[new_cols]

def mfripp1(df, column):
    new_cols = [c for c in df.columns if c != column] + [column]
    return df[new_cols]

def mfripp2(df, column):
    new_cols = [c for c in df.columns if c != column] + [column]
    return df.reindex_axis(new_cols, axis='columns', copy=False)

def ptrj1(df, column):
    return df.reindex(columns=df.columns.drop(column).append(pd.Index([column])))

def shivsn1(df, column):
    column_list = list(df)
    column_list.remove(column)
    column_list.append(column)
    return df[column_list]

def merlin1(df, column):
    return df[df.columns.drop(["b"]).insert(99999, 'b')]

list_of_funcs = [pir1, pir2, pir3, pir4, carsten1, carsten2,
                 mfripp1, mfripp2, ptrj1, shivsn1]

def test_pml(df, pml):
    for c in df.columns:
        pml(df, c)

summary = pd.DataFrame([], [f.__name__ for f in list_of_funcs], ['Small', 'Large'])

for f in list_of_funcs:
    summary.at[f.__name__, 'Small'] = timeit(lambda: test_pml(df_small, f), number=100)
    summary.at[f.__name__, 'Large'] = timeit(lambda: test_pml(df_large, f), number=10)
</code></pre>
| 3 | 2016-07-27T00:18:38Z | 38,601,975 | <p>Well, the first (and, depending on your use case, most efficient) optimization is to first ensure that you don't have to rearrange the DataFrame at all. If the column you want to be last is already in place, you can just return the df unchanged. Try this one:</p>
<pre><code>def put_me_last2(df, column):
    if list(df)[-1] == column:
        return df
    else:
        return pd.concat([df.drop(column, axis=1), df[column]], axis=1)
</code></pre>
<p>I've tried it with 8 million entries instead of the 8 from your example, and the speed was about the same when I demanded column <code>b</code> as the last one, and 300 times faster (500us vs 150ms) when I wanted the last column to be <code>d</code> (i.e. the case without reordering).</p>
<p>This won't help you if you have lots of columns or usually want to rearrange columns, but it doesn't hurt, either.</p>
<p><strong>Update:</strong></p>
<p>I've found a faster method: don't drop and re-add a column, but use <code>df[cols]</code> with the desired list of columns. Gives me about a 40% speedup (90ms vs 150ms with 8 million entries).</p>
<pre><code>def put_me_last3(df, column):
    cols = list(df)
    if cols[-1] == column:
        return df
    else:
        idx = cols.index(column)
        new_cols = cols[:idx] + cols[idx + 1:] + [column]
        return df[new_cols]
</code></pre>
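<p>The slicing step is ordinary list surgery and can be sanity-checked without pandas. Here is a small sketch of the same reordering logic applied to a plain list of column labels (<code>move_to_end</code> is just a demo helper, not part of the answer's API):</p>

```python
def move_to_end(cols, column):
    # Same logic as put_me_last3, on a plain list of labels.
    if cols[-1] == column:
        return cols  # already last: nothing to do
    idx = cols.index(column)
    return cols[:idx] + cols[idx + 1:] + [column]

print(move_to_end(['a', 'b', 'c', 'd'], 'b'))  # ['a', 'c', 'd', 'b']
print(move_to_end(['a', 'b', 'c', 'd'], 'd'))  # unchanged
```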
| 3 | 2016-07-27T00:37:59Z | [
"python",
"pandas"
] |