title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
python error:list.remove(x) x not in list | 38,593,206 | <p>I was trying to compare integers from a list, remove the largest from the list, and append it to an array in Python. Every time I run my code an error occurs:
"list.remove(x): x not in list". I can't figure out what's causing this error. Can anybody give me some advice? Thank you.</p>
<pre><code>def maxcompare(n):
    lis = map(int, n.split(','))
    threeans = []
    for i in range(3):
        maxnum = [0]
        for j in n[1:]:
            if j > maxnum:
                maxnum = j
        lis.remove(maxnum)
        threeans.append(maxnum)
    return maxnum
</code></pre>
<p>Comparing integers: print out the three biggest integers.
maxcompare('2,8,9,7,6,10,5')</p>
| -1 | 2016-07-26T14:50:57Z | 38,593,724 | <ul><li>If you're trying to map to a 2D (or greater dimensional) array, it won't work, because some of what you split will necessarily contain brackets. You can't take the <code>int</code> of brackets.</li>
<li>So, given that <code>lis</code> is necessarily 1D, <code>lis</code> will never contain <code>maxnum</code>, because that would imply that <code>lis</code> is a 2D array. <code>[0, 1]</code> does not contain <code>[0]</code>; <code>[[0], [1]]</code> does.</li>
<li>Also, no integer is greater than <code>[0]</code> in Python 2: <code>sys.maxint &gt; [0]</code> returns <code>False</code>.</li></ul>
<p>That's why your code doesn't work.
See galaxyan's code for a pythonic and quick solution.</p>
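<p>As a sketch of such a pythonic approach (the helper name here is mine, and the input is the comma-separated string from the question), <code>heapq.nlargest</code> returns the three biggest integers directly:</p>

```python
import heapq

def three_biggest(n):
    # n is a comma-separated string of integers, as in the question
    lis = [int(x) for x in n.split(',')]
    return heapq.nlargest(3, lis)

print(three_biggest('2,8,9,7,6,10,5'))  # [10, 9, 8]
```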
| 0 | 2016-07-26T15:13:31Z | [
"python",
"python-2.7"
] |
Clustering with DBSCAN is surprisingly slow | 38,593,322 | <p>I am experimenting with clustering and I am surprised how slow it seems to be. I have produced a random graph with 30 communities each containing 30 nodes. Nodes within a community have a 90% chance of being connected, and edges between nodes not in the same community have a 10% chance of existing. I am measuring the similarity between two nodes as the <a href="https://en.wikipedia.org/wiki/Jaccard_index" rel="nofollow">Jaccard similarity</a> between their sets of neighbors.</p>
<p>This toy example spends about 15 seconds just on the dbscan part and this increases very rapidly if I increase the number of nodes. As there are only 900 nodes in total this seems very slow.</p>
<pre><code>from __future__ import division
import numpy as np
from sklearn.cluster import dbscan
import networkx as nx
import matplotlib.pyplot as plt
import time
#Define the Jaccard distance. Following the example for clustering with Levenshtein distance from http://scikit-learn.org/stable/faq.html
def jaccard_distance(x, y):
    return 1 - len(neighbors[x].intersection(neighbors[y]))/len(neighbors[x].union(neighbors[y]))
def jaccard_metric(x, y):
    i, j = int(x[0]), int(y[0])  # extract indices
    return jaccard_distance(i, j)
#Simulate a planted partition graph. The simplest form of community detection benchmark.
num_communities = 30
size_of_communities = 30
print "planted partition"
G = nx.planted_partition_graph(num_communities, size_of_communities, 0.9, 0.1, seed=42)
#Make a hash table of sets of neighbors for each node.
neighbors = {}
for n in G:
    for nbr in G[n]:
        if not (n in neighbors):
            neighbors[n] = set()
        neighbors[n].add(nbr)
print "Made data"
X = np.arange(len(G)).reshape(-1, 1)
t = time.time()
db = dbscan(X, metric=jaccard_metric, eps=0.85, min_samples=2)
print db
print "Clustering took ", time.time()-t, "seconds"
</code></pre>
<blockquote>
<p>How can I make this more scalable to larger numbers of nodes?</p>
</blockquote>
| 1 | 2016-07-26T14:56:31Z | 38,682,426 | <p>Here a solution that speeds up the DBSCAN call about 1890-fold on my machine:</p>
<pre><code># the following code should be added to the question's code (it uses G and db)
import igraph
# use igraph to calculate Jaccard distances quickly
edges = zip(*nx.to_edgelist(G))
G1 = igraph.Graph(len(G), zip(*edges[:2]))
D = 1 - np.array(G1.similarity_jaccard(loops=False))
# DBSCAN is much faster with metric='precomputed'
t = time.time()
db1 = dbscan(D, metric='precomputed', eps=0.85, min_samples=2)
print "clustering took %.5f seconds" %(time.time()-t)
assert np.array_equal(db, db1)
</code></pre>
<p>Here the output:</p>
<pre class="lang-none prettyprint-override"><code>...
Clustering took 8.41049790382 seconds
clustering took 0.00445 seconds
</code></pre>
| 2 | 2016-07-31T09:05:25Z | [
"python",
"scikit-learn"
] |
How to read video stream as input in OpenCV python? | 38,593,332 | <p>I have a Nikon D90 camera. I want to get real-time video to my PC, and I want to read that video stream using OpenCV in Python to process the frames. How can I do that? Please help me.</p>
| -6 | 2016-07-26T14:57:08Z | 38,602,117 | <p>I agree with the comments above, more details are needed to know exactly how you are planning connect your camera. Here's a working example for a webcam, notice that you should replace the <code>input_id</code> with your camera's. You would work on <code>frame</code> for further processing.</p>
<pre><code>import cv2

def get_video(input_id):
    camera = cv2.VideoCapture(input_id)
    while True:
        okay, frame = camera.read()
        if not okay:
            break
        cv2.imshow('video', frame)
        cv2.waitKey(1)

if __name__ == '__main__':
    get_video(0)
</code></pre>
| 0 | 2016-07-27T01:00:45Z | [
"python",
"opencv"
] |
Django - create a class instance in AppConfig.ready() only once | 38,593,444 | <p>I need to create a class instance (let's say a backend requests session) on app startup (runserver), and I don't want this session to be recreated after running other management commands. How can I achieve this? I tried several approaches and I'm not sure why something like this doesn't work. </p>
<pre><code># app/apps.py
class MyConfig(AppConfig):
    ....
    requests_session = None
    ....
    def ready(self):
        if MyConfig.requests_session is None:
            MyConfig.requests_session = requests.Session()
</code></pre>
<p>Unfortunately, the condition is always met and the session is recreated. This approach is recommended in the <a href="https://docs.djangoproject.com/es/1.9/ref/applications/#django.apps.AppConfig.ready" rel="nofollow">documentation</a> though. </p>
<p>Other solution for me would be to run MyConfig.ready() only after using selected subset of management commands, is that possible? </p>
<p>Is there completely different better way for me to store requests session? </p>
<p>TIA</p>
| 0 | 2016-07-26T15:01:17Z | 38,608,883 | <p>I <em>think</em> it should work if you use an instance variable instead of a class variable:</p>
<pre><code># app/apps.py
class MyConfig(AppConfig):
    def __init__(self, app_name, app_module):
        super(MyConfig, self).__init__(app_name, app_module)
        self.requests_session = None

    def ready(self):
        if self.requests_session is None:
            self.requests_session = requests.Session()
</code></pre>
<p>The question now is how to access this instance variable elsewhere. You can do that like so:</p>
<pre><code>from django.apps import apps
# Here myapp is the label of your app - change it as required
# This returns the instance of your app config that was initialised
# at startup.
my_app_config = apps.get_app_config('myapp')
# Use the stored request session
req = my_app_config.requests_session
</code></pre>
<p>Note that this instance variable only exists in the context of the current process. If you run a management command in a separate process (e.g., <code>manage.py ...</code>) then that will create a new instance of each app.</p>
| 0 | 2016-07-27T09:23:44Z | [
"python",
"django"
] |
Initialize instance with mother class attributes | 38,593,456 | <p>I am working with FITS files that I read with fits.open() from the astropy lib. I get an HDU (header data unit), which is an instance of astropy.io.fits.hdu.image.PrimaryHDU.</p>
<p>Now, for a specific project, I want to work on the data in this HDU by writing specific methods. A good way of doing it, I thought, is writing my own class as a subclass of PrimaryHDU. My new object would have all the attributes and methods of the PrimaryHDU instance, plus attributes and methods that I will write. But I cannot get it to work properly. How can my new object get all the attributes and methods of the parent object? The closest I have come is with the following piece of code (with, for example, a new method called "subtract"):</p>
<pre><code>from astropy.io.fits.hdu.image import PrimaryHDU

class MyHDU(PrimaryHDU):
    def __init__(self, hdu):
        PrimaryHDU.__init__(self, data=hdu.data, header=hdu.header)

    def subtract(self, val):
        self.data = self.data - val
</code></pre>
<p>It is kind of OK, but I can see that my new object doesn't have all its attributes set to the same values as the original object (hdu), which seems normal when I look at my code. But how can I initialize my new object with all the attributes of the parent object? And am I correct to make my new class inherit from the PrimaryHDU class?
Thanks</p>
| 0 | 2016-07-26T15:01:47Z | 38,593,608 | <blockquote>
<p>how can I initialize my new object with all the attribute of the parent object?</p>
</blockquote>
<p>You can't. You don't inherit from an instance (i.e. an object), you inherit from a class.</p>
<p>What you should be doing is pass all the arguments you need in order to init both the parent and the subclass. In the child class's <code>__init__</code> method call <code>super().__init__</code> (the <code>__init__</code> method of the parent class) and then initialize the rest of the child class:</p>
<pre><code>from astropy.io.fits.hdu.image import PrimaryHDU

class MyHDU(PrimaryHDU):
    def __init__(self, args_to_init_PrimaryHDU_obj, hdu):
        super().__init__(args_to_init_PrimaryHDU_obj)
        # if using Python 2 the above line should be
        # super(MyHDU, self).__init__(args_to_init_PrimaryHDU_obj)
        self.data = hdu.data
        self.header = hdu.header

    def subtract(self, val):
        self.data = self.data - val
</code></pre>
| 0 | 2016-07-26T15:08:27Z | [
"python",
"class",
"inheritance"
] |
Unable to submit python files to HTCondor- placed in 'held' | 38,593,488 | <p>I am attempting to run a Python 2.7 program on HTCondor; however, after submitting the job and using 'condor_q' to assess the job status, I see that the job is put in 'held'. </p>
<p>After querying using 'condor_q -analyse jobNo.' the error message is "Hold reason: Error from Ubuntu: Failed to execute '/var/lib/condor/execute/dir_12033/condor_exec.exe': (errno=8: 'Exec format error').</p>
<p>I am unsure how to resolve this error; any help would be much appreciated. As I am relatively new to HTCondor and Ubuntu, it would help if any guidance were step-by-step and easy to follow.</p>
<p>I am running Ubuntu 16.04 and the latest release of HTCondor</p>
| 0 | 2016-07-26T15:03:36Z | 38,618,691 | <p>Update: managed to solve my problem. I needed to make sure that all directory paths were correct, as I found that HTCondor was looking within its own execute directory for the resources my submitted program used. I therefore needed to define a variable in the .py file that contains the directory of the resources.</p>
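<p>For illustration, a minimal sketch of what that looks like in the submitted script (the path and file name here are hypothetical):</p>

```python
import os

# Hypothetical absolute path to the job's resources; HTCondor runs the
# executable from its own scratch directory, so relative paths break.
RESOURCE_DIR = "/home/user/project/resources"

input_file = os.path.join(RESOURCE_DIR, "input.txt")
print(input_file)  # /home/user/project/resources/input.txt
```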
| 0 | 2016-07-27T16:36:48Z | [
"python",
"ubuntu"
] |
Remove duplicates from list of dictionaries within list of dictionaries | 38,593,527 | <p>I have a list:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'},
                        {'name': 'b'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'}]}]
</code></pre>
<p>I want to remove duplicates from the list of dictionaries in <code>'account'</code>:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'}]}]
</code></pre>
<p>When using <code>set</code>, I get the following error:</p>
<blockquote>
<p>TypeError: unhashable type: 'dict'</p>
</blockquote>
<p>Can anybody help me with this problem?</p>
| 1 | 2016-07-26T15:05:19Z | 38,593,726 | <p>Sets can only have <a href="https://docs.python.org/2/glossary.html#term-hashable" rel="nofollow">hashable</a> members and neither lists nor dicts are - but they can be checked for equality.</p>
<p>you can do</p>
<pre><code>def without_duplicates(inlist):
    outlist = []
    for e in inlist:
        if e not in outlist:
            outlist.append(e)
    return outlist
</code></pre>
<p>this can be slow for really big lists</p>
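<p>If the inner dicts only hold hashable values, a sketch of a faster variant keeps a set of already-seen items (the helper name is mine):</p>

```python
def without_duplicates_fast(inlist):
    seen = set()
    outlist = []
    for d in inlist:
        key = tuple(sorted(d.items()))  # hashable stand-in for the dict
        if key not in seen:
            seen.add(key)
            outlist.append(d)
    return outlist

print(without_duplicates_fast([{'name': 'a'}, {'name': 'a'}, {'name': 'b'}]))
# [{'name': 'a'}, {'name': 'b'}]
```

This does a set lookup per element instead of a linear scan of the output list, so it stays fast on big lists.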
| 0 | 2016-07-26T15:13:42Z | [
"python",
"python-2.7",
"dictionary"
] |
Remove duplicates from list of dictionaries within list of dictionaries | 38,593,527 | <p>I have a list:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'},
                        {'name': 'b'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'}]}]
</code></pre>
<p>I want to remove duplicates from the list of dictionaries in <code>'account'</code>:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'}]}]
</code></pre>
<p>When using <code>set</code>, I get the following error:</p>
<blockquote>
<p>TypeError: unhashable type: 'dict'</p>
</blockquote>
<p>Can anybody help me with this problem?</p>
| 1 | 2016-07-26T15:05:19Z | 38,594,797 | <p>This structure is probably over complicated, but it gets the job done.</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'},
                        {'name': 'b'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'}]}]

>>> [{'date': date,
      'account': [{'name': name} for name in group]
     } for group, date in zip([set(account.get('name')
                                   for account in item.get('account'))
                               for item in my_list],
                              [d.get('date') for d in my_list])]
[{'account': [{'name': 'a'}, {'name': 'b'}], 'date': '10.06.2016'},
 {'account': [{'name': 'a'}], 'date': '22.06.2016'}]
</code></pre>
| 1 | 2016-07-26T16:03:29Z | [
"python",
"python-2.7",
"dictionary"
] |
Remove duplicates from list of dictionaries within list of dictionaries | 38,593,527 | <p>I have a list:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'},
                        {'name': 'b'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'}]}]
</code></pre>
<p>I want to remove duplicates from the list of dictionaries in <code>'account'</code>:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'}]}]
</code></pre>
<p>When using <code>set</code>, I get the following error:</p>
<blockquote>
<p>TypeError: unhashable type: 'dict'</p>
</blockquote>
<p>Can anybody help me with this problem?</p>
| 1 | 2016-07-26T15:05:19Z | 38,594,855 | <p>Give this code a try:</p>
<pre><code>for d in my_list:
    for k in d:
        if k == 'account':
            v = []
            for d2 in d[k]:
                if d2 not in v:
                    v.append(d2)
            d[k] = v
</code></pre>
<p>This is what you get after running the snippet above:</p>
<pre><code>In [347]: my_list
Out[347]:
[{'account': [{'name': 'a'}, {'name': 'b'}], 'date': '10.06.2016'},
{'account': [{'name': 'a'}], 'date': '22.06.2016'}]
</code></pre>
| 0 | 2016-07-26T16:07:05Z | [
"python",
"python-2.7",
"dictionary"
] |
Remove duplicates from list of dictionaries within list of dictionaries | 38,593,527 | <p>I have a list:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'},
                        {'name': 'b'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'a'}]}]
</code></pre>
<p>I want to remove duplicates from the list of dictionaries in <code>'account'</code>:</p>
<pre><code>my_list = [{'date': '10.06.2016',
            'account': [{'name': 'a'},
                        {'name': 'b'}]},
           {'date': '22.06.2016',
            'account': [{'name': 'a'}]}]
</code></pre>
<p>When using <code>set</code>, I get the following error:</p>
<blockquote>
<p>TypeError: unhashable type: 'dict'</p>
</blockquote>
<p>Can anybody help me with this problem?</p>
| 1 | 2016-07-26T15:05:19Z | 38,595,456 | <pre><code>def deduplicate_account_names(l):
for d in l:
names = set(map(lambda d: d.get('name'), d['account']))
d['account'] = [{'name': name} for name in names]
# even shorter:
# def deduplicate_account_names(l):
# for d in l:
# d['account'] = [{'name': name} for name in set(map(lambda d: d.get('name'), d['account']))]
my_list = [{'date': '10.06.2016',
'account': [{'name': 'a'},
{'name': 'a'},
{'name': 'b'},
{'name': 'b'}]},
{'date': '22.06.2016',
'account': [{'name': 'a'},
{'name': 'a'}]}]
deduplicate_account_names(my_list)
print(my_list)
# [ {'date': '10.06.2016',
# 'account': [ {'name': 'a'},
# {'name': 'b'} ] },
# {'date': '22.06.2016',
# 'account': [ {'name': 'a'} ] } ]
</code></pre>
| 0 | 2016-07-26T16:38:40Z | [
"python",
"python-2.7",
"dictionary"
] |
Link to Django Admin | 38,593,722 | <p>Being relatively new to Django and despite studying the documentation I am somewhat stuck and would greatly appreciate any help.</p>
<p>I have a template for a list view that is only available for staff.</p>
<p>I want to be able to click on the individual link in the list and be taken through to the individual item in the admin.</p>
<p>At the moment I have the following in the template which works fine although only as an 'experiment'</p>
<pre><code><a href='{% url 'admin: contacts_contact_changelist' %}' {{ contact.id }}
</code></pre>
<p>How do I go about implementing the link and is this the right approach? </p>
| 0 | 2016-07-26T15:13:29Z | 38,595,366 | <p>To go to a particular contact's edit page use:</p>
<pre><code><a href="{% url 'admin:contacts_contact_change' contact.id %}">link name</a>
</code></pre>
<p>This is described in the <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#admin-reverse-urls" rel="nofollow">documentation</a>.</p>
| 0 | 2016-07-26T16:34:13Z | [
"python",
"django"
] |
Unexpected socket.getaddrinfo behavior in Python using SOCK_STREAM | 38,593,744 | <ul>
<li>I'm busy trying to use <code>socket.getaddrinfo()</code> to resolve a domain name. When I pass in:</li>
</ul>
<p><code>host = 'www.google.com', port = 80, family = socket.AF_INET, type = 0, proto = 0, flags = 0</code> </p>
<p>I get a pair of socket infos like you'd expect, one with SocketKind.SOCK_DGRAM (for UDP) and the other with SocketKind.SOCK_STREAM (TCP). </p>
<ul>
<li><p>When I set proto to <code>socket.IPPROTO_TCP</code> I narrow it to only TCP as expected.</p></li>
<li><p>However, when I use <code>proto = socket.SOCK_STREAM</code> (which shouldn't work) I get back a SocketKind.SOCK_RAW. </p></li>
<li><p>Also, Python won't let me use <code>proto = socket.IPPROTO_RAW</code> - I get 'Bad hints'. </p></li>
</ul>
<p>Any thoughts on what's going on here?</p>
| 0 | 2016-07-26T15:14:39Z | 38,660,201 | <p><code>socket.SOCK_STREAM</code> should be passed in the <code>type</code> field. Using it in the <code>proto</code> field probably has a very random effect, which is what you're seeing. Proto only takes the <code>IPPROTO</code> constants. For a raw socket, you should use <code>type = socket.SOCK_RAW</code>. I'm not sure <code>getaddrinfo</code> supports that though, it's mostly for TCP and UDP.</p>
<p>It's probably better to have some actual code in your questions. It's much easier to see what's going on then.</p>
| 0 | 2016-07-29T13:30:54Z | [
"python",
"sockets",
"python-3.x",
"getaddrinfo"
] |
Numerically Representing Mathematica's Root Object in Open-Source Language | 38,593,771 | <h2>Question</h2>
<p>I would like to reproduce a <code>Root[]</code> object, <em>ideally</em> in a <code>python</code> function.</p>
<p>Is there any particular library that would be suited for this process?</p>
<h2>Attempts</h2>
<p>If I understand the <code>Root[]</code> function properly, it is simply finding the nth degree root of a polynomial and so I and taking a stab that <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html" rel="nofollow"><code>numpy.roots</code></a> would suffice by taking <code>#1</code> as the argument <code>x</code> in the <code>numpy</code> docs.</p>
<h2>Background</h2>
<p>I have a number of 5th order polynomial roots which cannot be reduced with <code>ToRadicals</code> due to their order; they were obtained from a particularly nasty inverse Laplace transform.</p>
<h2>Minimal Mathematica example</h2>
<pre><code>r = 1/ρ;
ct = Cos[θ];
r2 = r^2;
r3 = r^3;
r4 = r2^2;
ct2 = ct^2;
ct3 = ct^3;
ct4 = ct2^2;
ct5 = ct^5;
ct6 = ct2^3;
p2 = ρ^2;
fn := Root[2*ρ*p2*r^5 + 2*ρ*p2*r^5*ct - 2*ρ^2*p2*r^5*ct - 2*ρ*p2*r^5*ct2 - 2*ρ*p2*r^5*ct3 + 2*ρ^2*p2*r^5*ct3 + (r4 + 4*p2*r4 + 4*ρ*p2*r4 + r4*ct - 2*ρ*r4*ct + 4*p2*r4*ct - 2*ρ*p2*r4*ct - 2*ρ^2*p2*r4*ct - r4*ct2 - 4*p2*r4*ct2 - r4*ct3 + 2*ρ*r4*ct3 - 4*p2*r4*ct3 + 6*ρ*p2*r4*ct3 - 2*ρ^2*p2*r4*ct3)*#1 + (4*r3 + 8*p2*r3 + 2*ρ*p2*r3 + 3*r3*ct - 6*ρ*r3*ct + 4*p2*r3*ct - 4*ρ*p2*r3*ct - 3*r3*ct2 - 4*p2*r3*ct2 + 2*ρ*p2*r3*ct2 - 2*r3*ct3 + 4*ρ*r3*ct3)*#1^2 + (6*r2 + 4*p2*r2 + 3*r2*ct - 6*ρ*r2*ct - 3*r2*ct2 - r2*ct3 + 2*ρ*r2*ct3)*#1^3 + (4*r + r*ct - 2*ρ*r*ct - r*ct2)*#1^4 + #1^5 &, 5]
</code></pre>
| 0 | 2016-07-26T15:15:48Z | 38,596,170 | <p>If you are interested in symbolic calculations, you can use <a href="http://www.sympy.org/en/index.html" rel="nofollow">SymPy</a>. In particular, SymPy has polynomial objects and the classes <code>RootOf</code> and <code>CRootOf</code> to represent the roots of polynomials.</p>
<p>For example,</p>
<pre><code>In [103]: from sympy import Symbol, poly
In [104]: x = Symbol('x')
In [105]: p = poly(x**4 - 3*x**2 + x - 1)
In [106]: p
Out[106]: Poly(x**4 - 3*x**2 + x - 1, x, domain='ZZ')
In [107]: p.root(0)
Out[107]: CRootOf(x**4 - 3*x**2 + x - 1, 0)
</code></pre>
<p><code>CRootOf(poly, k)</code> is a placeholder for the kth root of the polynomial. To find its numerical value, use the <code>.evalf()</code> method:</p>
<pre><code>In [109]: p.root(0).evalf()
Out[109]: -1.94397243715073
</code></pre>
<p>Here are the numerical values of all the roots:</p>
<pre><code>In [110]: [p.root(k).evalf() for k in range(p.degree())]
Out[110]:
[-1.94397243715073,
1.66143946800762,
0.141266484571554 - 0.538201812325831*I,
0.141266484571554 + 0.538201812325831*I]
</code></pre>
| 2 | 2016-07-26T17:17:52Z | [
"python",
"numpy",
"scipy",
"wolfram-mathematica",
"solver"
] |
Rosalind consensus and profile python | 38,593,923 | <p>I'm working on the problem "Consensus and Profile" on the Rosalind Bioinformatics website (<a href="http://rosalind.info/problems/cons/" rel="nofollow">http://rosalind.info/problems/cons/</a>). I tried my code using the sample input on the website and my output matches the sample output. But when I tried the larger dataset the website said my output is wrong. Could someone help me identify where my problem is? Thank you!</p>
<p>Sample input:</p>
<pre><code>>Rosalind_1
ATCCAGCT
>Rosalind_2
GGGCAACT
>Rosalind_3
ATGGATCT
>Rosalind_4
AAGCAACC
>Rosalind_5
TTGGAACT
>Rosalind_6
ATGCCATT
>Rosalind_7
ATGGCACT
</code></pre>
<p>I've extracted the dna strings and stored them in a list called strings (my trial with the larger dataset is correct at this step so I omitted my code here):</p>
<pre><code>['ATCCAGCT', 'GGGCAACT', 'ATGGATCT', 'AAGCAACC', 'TTGGAACT', 'ATGCCATT', 'ATGGCACT']
</code></pre>
<p>My code afterwards:</p>
<pre><code>#convert strings into matrix
matrix = []
for i in strings:
    matrix.append([j for j in i])
M = np.array(matrix).reshape(len(matrix), len(matrix[0]))
</code></pre>
<p>M looks like this for sample input:</p>
<pre><code>[['A' 'T' 'C' 'C' 'A' 'G' 'C' 'T']
['G' 'G' 'G' 'C' 'A' 'A' 'C' 'T']
['A' 'T' 'G' 'G' 'A' 'T' 'C' 'T']
['A' 'A' 'G' 'C' 'A' 'A' 'C' 'C']
['T' 'T' 'G' 'G' 'A' 'A' 'C' 'T']
['A' 'T' 'G' 'C' 'C' 'A' 'T' 'T']
['A' 'T' 'G' 'G' 'C' 'A' 'C' 'T']]
</code></pre>
<p>My code afterwards:</p>
<pre><code>#convert string matrix into profile matrix
A = []
C = []
G = []
T = []
for i in range(len(matrix[0])):
A_count = 0
C_count = 0
G_count = 0
T_count = 0
for j in M[:,i]:
if j == "A":
A_count += 1
elif j == "C":
C_count += 1
elif j == "G":
G_count += 1
elif j == "T":
T_count += 1
A.append(A_count)
C.append(C_count)
G.append(G_count)
T.append(T_count)
profile_matrix = {"A": A, "C": C, "G": G, "T": T}
for k, v in profile_matrix.items():
print k + ":" + " ".join(str(x) for x in v)
#get consensus string
P = []
P.append(A)
P.append(C)
P.append(G)
P.append(T)
profile = np.array(P).reshape(4, len(A))
consensus = []
for i in range(len(A)):
if max(profile[:,i]) == profile[0,i]:
consensus.append("A")
elif max(profile[:,i]) == profile[1,i]:
consensus.append("C")
elif max(profile[:,i]) == profile[2,i]:
consensus.append("G")
elif max(profile[:,i]) == profile[3,i]:
consensus.append("T")
print "".join(consensus)
</code></pre>
<p>These codes give the correct sample output:</p>
<pre><code>A:5 1 0 0 5 5 0 0
C:0 0 1 4 2 0 6 1
T:1 5 0 0 0 1 1 6
G:1 1 6 3 0 1 0 0
ATGCAACT
</code></pre>
<p>But when I tried the larger dataset the website said my answer was wrong...Could someone point out where I'm wrong? (I'm a beginner, thank you for your patience!)</p>
| 1 | 2016-07-26T15:22:39Z | 38,624,216 | <p>Your algorithm is totally fine. As @C_Z_ pointed out "make sure your format matches the sample output exactly" which is unfortunately not the case.</p>
<pre><code>print k + ":" + " ".join(str(x) for x in v)
</code></pre>
<p>should be</p>
<pre><code>print k + ": " + " ".join(str(x) for x in v)
</code></pre>
<p>and come after, not before, the consensus sequence.
If you change the order and add the space, your answer will get accepted by Rosalind.</p>
<hr>
<p>Since that's a trivial answer to your question, here is an alternative solution for the same problem without using numpy:
Instead of using a variable for each nucleotide, use a dictionary. It's not fun to do the same thing with 23 amino acids, e.g.</p>
<pre><code>from collections import defaultdict

counter = []
consensus = ""
for i in range(len(strings[0])):
    counter.append(defaultdict(int))
    for seq in strings:
        counter[i][seq[i]] += 1
    consensus += max(counter[i], key=counter[i].get)
</code></pre>
<p><code>counter</code> stores a <code>dictionary</code> for each position with all the counts for all bases. The key for the dictionary is the current base.</p>
| 1 | 2016-07-27T22:28:09Z | [
"python",
"python-2.7",
"bioinformatics",
"rosalind"
] |
Warning messages when importing matplotlib.pyplot | 38,593,969 | <p>When importing matplotlib:</p>
<pre><code>from matplotlib import pyplot as plt
</code></pre>
<p>I get an User Warning:</p>
<pre><code>~/.virtualenvs/cv/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
</code></pre>
<p>And in addition I get about 50 lines of <code>"Notice Copyright [c] 1994 Luc[as] de Groot <luc@FontFabrik.com> Published by TEndFontMetrics..."</code></p>
<p>Importing Matplotlib alone worked as expected and no warnings were shown. I'm using Matplotlib version 1.5.1 and Python 2.7.12 on Mac OS X El Capitan (10.11.3)</p>
<p>So far everything seems to work fine, but it takes extra time to import and the messages flood the terminal. What is the reason for this and is it possible to disable this behaviour?</p>
<p>UPDATE: I am using a virtual environment</p>
| 1 | 2016-07-26T15:24:11Z | 38,633,199 | <p>I deleted fontList.cache and tex.cache as advised in the <a href="http://stackoverflow.com/questions/34771191/matplotlib-taking-time-when-being-imported">answer to the question</a> linked by whrrgarbl. In addition, I created a file <code>~/.matplotlib/matplotlibrc</code> and added the line <code>backend: TkAgg</code>, as suggested <a href="http://stackoverflow.com/questions/21784641/installation-issue-with-matplotlib-python">here</a>.</p>
<p>This solved the issue.</p>
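<p>As a sketch, the steps above amount to something like this in a terminal (paths assume a default Mac OS X setup, and cache file names can vary between matplotlib versions):</p>

```shell
# remove the stale matplotlib caches
rm -f ~/.matplotlib/fontList.cache ~/.matplotlib/tex.cache

# tell matplotlib to use the TkAgg backend
mkdir -p ~/.matplotlib
echo "backend: TkAgg" >> ~/.matplotlib/matplotlibrc
```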
| 1 | 2016-07-28T10:03:44Z | [
"python",
"matplotlib"
] |
Fail in test of function, it doesn't update database | 38,593,978 | <p>Here is my function:</p>
<p>views.py</p>
<pre><code>def save(request):
    data = {'mark': request.POST.get('mark'), 'task': request.POST.get('task')}
    Task.objects.filter(id=request.POST.get('s')).update(mark=data['mark'], task=data['task'])
    return redirect(list)
</code></pre>
<p>What is wrong in my test? It doesn't update the database. Please help!</p>
<p>tests.py</p>
<pre><code>from todo.models import Task

class TaskTest(TestCase):
    def test_ok_update_task(self):
        s = 1
        Task.objects.create(mark=True, task='task')
        data = {'mark': False, 'task': '1'}
        self.client.post('/save', data)
        task_1 = Task.objects.filter(id=s).get()
        self.assertNotEquals(task_1.mark, True)
        self.assertNotEquals(task_1.task, 'task')
        self.assertEquals(task_1.mark, data['mark'])
        self.assertEquals(task_1.task, data['task'])
</code></pre>
<p>models.py</p>
<pre><code>class Task(models.Model):
    mark = models.NullBooleanField()
    task = models.CharField(max_length=200, null=True)
    up_url = models.CharField(max_length=1000, null=True)
    down_url = models.CharField(max_length=1000, null=True)
    update_url = models.CharField(max_length=1000, null=True)
    delete_url = models.CharField(max_length=1000, null=True)
</code></pre>
| -1 | 2016-07-26T15:24:24Z | 38,595,460 | <p>Your view expects <code>request.POST['s']</code> to contain the id</p>
<pre><code>Task.objects.filter(id=request.POST.get('s'))
</code></pre>
<p>but you have forgotten to include it in your data.</p>
<pre><code>data = {'mark': False, 'task': '1'}
</code></pre>
<p>An easy way to debug problems like this is to add print statements to your view and tests. If you had added <code>print request.POST</code> and <code>print request.POST.get('s')</code> to your view, you'd probably have spotted the problem. </p>
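<p>As a tiny stand-in showing why the view found nothing to update (plain dicts, no Django needed):</p>

```python
# What the test posted vs. what the view looks up under the key 's'
post_data = {'mark': False, 'task': '1'}           # the test's data
fixed_data = {'s': 1, 'mark': False, 'task': '1'}  # data including the id

print(post_data.get('s'))   # None -> filter(id=None) matches no rows
print(fixed_data.get('s'))  # 1
```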
| 1 | 2016-07-26T16:39:04Z | [
"python",
"django",
"django-testing",
"django-tests"
] |
Getting the maximum mode per group using groupby | 38,594,027 | <p>I have generated a table that shows the mode values of my dataset.
The dataset was originally grouped by "date", "hour" and "room" in order to be able to get the mode value of foot traffic.
The groupby was performed as follows:</p>
<pre><code>dataframe = df.groupby([df['date'], df['hour'], df['room']])
</code></pre>
<p>Then I generated the mode value(s) for "traffic" of each groups the following way:</p>
<pre><code>dataframe = dataframe['traffic'].apply(lambda x: x.mode())
</code></pre>
<p>As a result I have my dataframe which displays the proper groups and shows the modal value per room, per hour and per day.
My issue is that in certain cases the number of modal values is more than one (as 2 or 3 values have had the same number of observations)</p>
<p>The current dataframe looks like this:</p>
<pre><code> mode
date hour room
6 12 room1 0 15
room2 0 23
1 26
room3 0 1
1 2
13 room2 0 9
1 11
room2 0 15
</code></pre>
<p>As you can see above, <strong>for room2 at 12:00 on January 6 there are 2 modal values (23 and 26).</strong><br>
My issue here is that ideally I would drop the lowest "mode" value(s) from each group where there are more than 1 observation.</p>
<p>I have looked at several approaches but cannot get this to function.
I was thinking that the following would work:</p>
<pre><code>dataframe.apply(lambda x: x[dataframe['mode'] == dataframe['mode'].max()])
</code></pre>
<p>I would then remove duplicates, but this does not affect the dataframe..</p>
<p>or</p>
<pre><code>dataframe.filter(lambda x : x[dataframe['mode'] == dataframe['mode'].max()], dataframe['mode'])
</code></pre>
<p>which gives me a "'function' object is not iterable" error<br>
or</p>
<pre><code>for elem in range(0, dataframe.size - 1):  #to iterate over the dataframe rows
    if elem != dataframe['mode'].max():  #to identify rows that aren't max mode value
        dataframe = dataframe.drop([elem])  #to drop these rows
</code></pre>
<p>To answer the request from Conner, please see below the original csv data (dataframe called "df"):</p>
<pre><code> room time capacity areaName hour date traffic
0 room1 Mon Nov 02 09:00:00 40 area01 9 2 14
1 room1 Mon Nov 02 09:05:00 40 area01 9 2 15
2 room1 Mon Nov 02 09:10:00 80 area01 9 2 23
3 room1 Mon Nov 02 09:15:00 80 area01 9 2 23
...
14 room2 Mon Nov 02 11:00:00 40 area03 11 2 67
15 room2 Mon Nov 02 11:50:00 80 area03 11 2 64
16 room2 Mon Nov 02 11:10:00 40 area03 11 2 72
</code></pre>
<p>If anyone knows a way to go through each group and keep only the max mode value when there are several, I would greatly appreciate it.</p>
<p>Thank you for your time!</p>
<p>-Romain</p>
| 0 | 2016-07-26T15:26:04Z | 38,594,308 | <p>I was looking for something like this. FYI you can get this with <code>df.head(n=10).to_csv(path, index=False)</code></p>
<pre><code>room,time,capacity,areaName,hour,date,traffic
room1,Mon Nov 02 09:00:00,40,area01,9,2,14
room1,Mon Nov 02 09:05:00,40,area01,9,2,15
room1,Mon Nov 02 09:10:00,80,area01,9,2,23
room1,Mon Nov 02 09:15:00,80,area01,9,2,23
room2,Mon Nov 02 11:00:00,40,area03,11,2,67
room2,Mon Nov 02 11:50:00,80,area03,11,2,64
room2,Mon Nov 02 11:10:00,40,area03,11,2,72
</code></pre>
<p>(Below I use equivalent code to be more concise)</p>
<p>This gives you a <code>groupby</code> object</p>
<pre><code>df = df.groupby(['date', 'hour', 'room'])
</code></pre>
<blockquote>
<p>It turns out, unlike <code>mean</code>, <code>max</code>, <code>median</code>, <code>min</code> and <code>mad</code>, there is no <code>mode</code> method for <code>GroupBy</code> objects!</p>
</blockquote>
<p>Once you've done this</p>
<pre><code>df = df['traffic'].apply(lambda x: x.mode())
</code></pre>
<p>You can reset the index and regroup to apply the <code>max</code> per group:</p>
<pre><code>df = df.reset_index()
df = df.groupby(['date', 'hour', 'room']).max()
</code></pre>
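<p>To make the reset-and-regroup step concrete, here is a self-contained sketch on a tiny made-up frame (the column names mirror the question's groups; the numbers are invented):</p>

```python
import pandas as pd

# Made-up data shaped like the per-group mode output: room2 has two modal
# values (23 and 26) and only the larger one should survive.
df = pd.DataFrame({
    'date': [6, 6, 6],
    'hour': [12, 12, 12],
    'room': ['room1', 'room2', 'room2'],
    'mode': [15, 23, 26],
})

# Regroup on the original keys and keep the max modal value per group.
result = df.groupby(['date', 'hour', 'room'])['mode'].max().reset_index()
print(result)  # room1 keeps 15, room2 keeps only 26
```

<p>Selecting the <code>'mode'</code> column before calling <code>max()</code> keeps the result to a single tidy column.</p>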
| 0 | 2016-07-26T15:38:44Z | [
"python",
"python-3.x",
"pandas",
"dataframe",
"lambda"
] |
Error running tests with Conda and Tox | 38,594,047 | <p>I am having trouble running tests with Tox while using virtual environments created with Conda. The steps to reproduce the error are below.</p>
<p>Download the repository (it is small) and <code>cd</code> to it:</p>
<pre><code>git clone https://github.com/opensistemas-hub/osbrain.git
cd osbrain
</code></pre>
<p>Create the virtual environment with Conda:</p>
<pre><code>conda create -n asdf python=3.5
source activate asdf
pip install tox
</code></pre>
<p>Try to run the tests (note that Python 3.5 is the only Python interpreter set in the <code>tox.ini</code> file):</p>
<pre><code>tox
</code></pre>
<p>I would expect Tox to be able to use the Python 3.5 interpreter available in the Conda virtual environment to run the tests. However, instead, I am getting an error:</p>
<pre><code>ERROR: The executable ~/osbrain/.tox/py35/bin/python3.5 is not
functioning
ERROR: It thinks sys.prefix is '/usr' (should be '~/osbrain/.tox/py35')
ERROR: virtualenv is not compatible with this system or executable
Running virtualenv with interpreter ~/.miniconda3/envs/asdf/bin/python3.5
</code></pre>
<p>My question is: why am I getting that error and how can I avoid this? (i.e.: how could I run the tests locally for this project and using Tox?)</p>
| 1 | 2016-07-26T15:27:09Z | 38,731,951 | <p>Virtualenv and conda/conda-env do not currently play nice together. See <a href="https://github.com/conda-forge/staged-recipes/issues/1139" rel="nofollow">https://github.com/conda-forge/staged-recipes/issues/1139</a> and <a href="https://groups.google.com/a/continuum.io/forum/#!topic/conda/63B0jnPR-V4" rel="nofollow">https://groups.google.com/a/continuum.io/forum/#!topic/conda/63B0jnPR-V4</a>.</p>
<p>UPDATE</p>
<p>Also related: <a href="https://bitbucket.org/hpk42/tox/issues/273/support-conda-envs-when-using-miniconda" rel="nofollow">https://bitbucket.org/hpk42/tox/issues/273/support-conda-envs-when-using-miniconda</a></p>
| 1 | 2016-08-02T23:20:28Z | [
"python",
"conda",
"tox"
] |
Run library module as a script by cron (python -m) | 38,594,059 | <p>I created a Python library. To avoid installing executable Python scripts, I made the library modules runnable as executables, so a module can be run as a script with the -m option (python -m).
Manually everything works:</p>
<pre><code>$ python -m Library.Core.Runner runFirst
</code></pre>
<p>But when I try to run this with cron it does not work:</p>
<pre><code>*/5 * * * * /usr/bin/python -m Library.Core.Runner runFirst >> /var/log/MyProject/runFirst.log 2>&1
</code></pre>
<p>Output from /var/log/cron:</p>
<pre><code>Jul 26 18:25:01 myhostname crond[23735]: (/usr/bin/python) ERROR (getpwnam() failed)
</code></pre>
<p>How can I fix it?</p>
<p>Environment: CentOS 7, Python 2.7.5</p>
| 0 | 2016-07-26T15:27:41Z | 38,654,211 | <p>The <code>getpwnam() failed</code> error means cron parsed <code>/usr/bin/python</code> as a user name: entries in <code>/etc/crontab</code> or <code>/etc/cron.d/*</code> require a user field between the schedule and the command. Add the user name so cron runs the task on that user's behalf:</p>
<pre><code>*/5 * * * * <username> /usr/bin/python -m Library.Core.Runner runFirst >> /var/log/MyProject/runFirst.log 2>&1
*/5 * * * * igor /usr/bin/python -m Library.Core.Runner runFirst >> /var/log/MyProject/runFirst.log 2>&1
</code></pre>
| 0 | 2016-07-29T08:36:16Z | [
"python",
"cron"
] |
subprocess.call() or subprocess.Popen for generating/using output files | 38,594,141 | <p>I am trying to write a Python code to interact with another software that uses command line prompts. Once the command line prompt is executed, multiple output files are generated in the directory and the software uses those output files to do some calculations. Plus, the rest of my program utilizes those output files. Currently, I run my python code, then manually enter in the command line prompt, then call the rest of my code and that works fine, however when I attempted to put: </p>
<pre><code>subprocess.call(['sfit4Layer0.py', '-bv5', '-fs'], shell=False)
</code></pre>
<p>into my file, it does not execute properly (the output files are not generated). </p>
<p>When I made the above code its own individual python script and called it immediately after the first part of my code - it also worked.</p>
<p>According to my output, I am convinced that the problem is this: the call generated multiple files and then uses those files to make calculations, however, it is not generating the proper files so I get errors in my output. So in a way it seems like it is getting ahead of itself: not waiting for the output files to be produced before making calculations, but again, it works when I run this command separately outside of the program. Any ideas on why this would happen?</p>
<p>What am I doing wrong? Do I need to specify the directory (could the output files be put somewhere else in my computer)? Do I need to use subprocess.Popen ? I have searched the internet but I am new(ish) to Python and thoroughly stumped.</p>
<p>Any suggestions welcome. Thanks!</p>
<p>EDIT: for those who asked, here is the sfit4Layer0.py code:</p>
<pre><code>#! /usr/bin/python
##! /usr/local/python-2.7/bin/python
##! /usr/bin/python
##! /usr/bin/python
# Change the above line to point to the location of your python executable
#----------------------------------------------------------------------------------------
# Name:
# sfit4Layer0.py
#
# Purpose:
# This file is the zeroth order running of sfit4. It accomplishes the following:
# 1) Calls pspec to convert binary spectra file (bnr) to ascii (t15asc)
# 2) Calls hbin to gather line parameters from hitran
# 3) Calls sfit4
# 4) Calls error analysis from Layer1mods.py
# 5) Clean outputs from sfit4 call
#
#
# External Subprocess Calls:
# 1) pspec executable file from pspec.f90
# 2) hbin executable file from hbin.f90
# 3) sfit4 executable file from sfit4.f90
# 4) errAnalysis from Layer1mods.py
#
#
#
# Notes:
# 1) Options include:
# -i <dir> : Optional. Data directory. Default is current working directory
# -b <dir/str> : Optional. Binary directory. Default is hard-code.
# -f <str> : Run flags, h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean
#
#
# Usage:
# ./sfit4Layer0.py [options]
#
#
# Examples:
# 1) This example runs hbin, pspec, sfit4, error analys, and cleans working directory prior to execution
# ./sfit4Layer0.py -f hpsec
#
# 2) This example just runs sfit4
# ./sfit4Layer0.py -f s
#
# 3) This example cleans the output file created by sfit4 in directory (/User/home/datafiles/) which is not the current directory
# ./sfit4Layer0.py -i /User/home/datafiles/ -f c
#
# Version History:
# Created, May, 2013 Eric Nussbaumer (ebaumer@ucar.edu)
#
#
# License:
# Copyright (c) 2013-2014 NDACC/IRWG
# This file is part of sfit4.
#
# sfit4 is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# sfit4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with sfit4. If not, see <http://www.gnu.org/licenses/>
#
#----------------------------------------------------------------------------------------
#---------------
# Import modules
#---------------
import sys
import os
import getopt
import sfitClasses as sc
from Layer1Mods import errAnalysis
from Tkinter import Tk
from tkFileDialog import askopenfilename
#------------------------
# Define helper functions
#------------------------
def usage(binDirVer):
print 'sfit4Layer0.py -f <str> [-i <dir> [-b <dir/str> ] \n'
print '-i <dir> Data directory. Optional: default is current working directory'
print '-f <str> Run Flags: Necessary: h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean'
print '-b <dir/str> Binary sfit directory. Optional: default is hard-coded in main(). Also accepts v1, v2, etc.'
for ver in binDirVer:
print ' {}: {}'.format(ver,binDirVer[ver])
sys.exit()
def main(argv):
#----------------
# Initializations
#----------------
#------------
# Directories
#------------
wrkDir = os.getcwd() # Set current directory as the data directory
binDir = '/data/bin' # Default binary directory. Used of nothing is specified on command line
binDirVer = {
'v1': '/data/ebaumer/Code/sfit-core-code/src/', # Version 1 for binary directory (Eric)
'v2': '/data/tools/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v3': '/Users/jamesw/FDP/sfit/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v4': '/home/ebaumer/Code/sfit4/src/',
'v5': '/Users/allisondavis/Documents/Summer2016/sfit4_0.9.4.3/src'
}
#----------
# Run flags
#----------
hbinFlg = False # Flag to run hbin
pspecFlg = False # Flag to run pspec
sfitFlg = False # Flag to run sfit4
errFlg = False # Flag to run error analysis
clnFlg = False # Flag to clean directory of output files listed in ctl file
#--------------------------------
# Retrieve command line arguments
#--------------------------------
try:
opts, args = getopt.getopt(sys.argv[1:], 'i:b:f:?')
except getopt.GetoptError as err:
print str(err)
usage(binDirVer)
sys.exit()
#-----------------------------
# Parse command line arguments
#-----------------------------
for opt, arg in opts:
# Data directory
if opt == '-i':
wrkDir = arg
sc.ckDir(wrkDir,exitFlg=True)
# Binary directory
elif opt == '-b':
if not sc.ckDir(arg,exitFlg=False,quietFlg=True):
try: binDir = binDirVer[arg.lower()]
except KeyError: print '{} not a recognized version for -b option'.format(arg); sys.exit()
else: binDir = arg
if not(binDir.endswith('/')): binDir = binDir + '/'
# Run flags
elif opt == '-f':
flgs = list(arg)
for f in flgs:
if f.lower() == 'h': hbinFlg = True
elif f.lower() == 'p': pspecFlg = True
elif f.lower() == 's': sfitFlg = True
elif f.lower() == 'e': errFlg = True
elif f.lower() == 'c': clnFlg = True
else: print '{} not an option for -f ... ignored'.format(f)
elif opt == '-?':
usage(binDirVer)
sys.exit()
else:
print 'Unhandled option: {}'.format(opt)
sys.exit()
#--------------------------------------
# If necessary change working directory
# to directory with input data.
#--------------------------------------
if os.path.abspath(wrkDir) != os.getcwd(): os.chdir(wrkDir)
if not(wrkDir.endswith('/')): wrkDir = wrkDir + '/'
#--------------------------
# Initialize sfit ctl class
#--------------------------
ctlFileName = wrkDir + 'sfit4.ctl'
if sc.ckFile(wrkDir+'sfit4.ctl'): ctlFileName = wrkDir + 'sfit4.ctl'
else:
Tk().withdraw()
ctlFileName = askopenfilename(initialdir=wrkDir,message='Please select sfit ctl file')
ctlFile = sc.CtlInputFile(ctlFileName)
ctlFile.getInputs()
#------------------------
# Initialize sb ctl class
#------------------------
if errFlg:
if sc.ckFile(wrkDir+'sb.ctl'): sbCtlFileName = wrkDir + 'sb.ctl'
else:
Tk().withdraw()
sbCtlFileName = askopenfilename(initialdir=wrkDir,message='Please select sb ctl file')
sbCtlFile = sc.CtlInputFile(sbCtlFileName)
sbCtlFile.getInputs()
#---------------------------
# Clean up output from sfit4
#---------------------------
if clnFlg:
for k in ctlFile.inputs['file.out']:
if 'file.out' in k:
try: os.remove(wrkDir + ctlFile.inputs[k])
except OSError: pass
#----------
# Run pspec
#----------
if pspecFlg:
print '*************'
print 'Running pspec'
print '*************'
rtn = sc.subProcRun( [binDir + 'pspec'] )
#----------
# Run hbin
#----------
if hbinFlg:
print '************'
print 'Running hbin'
print '************'
rtn = sc.subProcRun( [binDir + 'hbin'] )
#----------
# Run sfit4
#----------
if sfitFlg:
print '************'
print 'Running sfit'
print '************'
rtn = sc.subProcRun( [binDir + 'sfit4'] )
#-------------------
# Run error analysis
#-------------------
if errFlg:
print '**********************'
print 'Running error analysis'
print '**********************'
rtn = errAnalysis(ctlFile,sbCtlFile,wrkDir)
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
| 0 | 2016-07-26T15:32:03Z | 38,594,599 | <p>give this a try: <code>subprocess.call('sfit4Layer0.py -bv5 -fs', shell=True)</code></p>
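<p>For the "waiting" and "directory" concerns in the question, two properties are worth knowing: <code>subprocess.call</code> blocks until the child exits, and the <code>cwd</code> argument controls where the child's relative output paths land. A minimal sketch (using a stand-in child process, not the sfit4 script itself):</p>

```python
import os
import subprocess
import sys
import tempfile

# subprocess.call blocks until the child exits, so any output files it writes
# are complete before the next line of the parent runs; cwd controls where
# relative paths created by the child end up.
workdir = tempfile.mkdtemp()
ret = subprocess.call(
    [sys.executable, '-c', "open('out.txt', 'w').write('done')"],
    cwd=workdir,  # the child writes relative paths into this directory
)
outfile = os.path.join(workdir, 'out.txt')
print(ret, os.path.exists(outfile))
```

<p>Checking the return code (nonzero means the child failed) is an easy way to catch the case where the expected output files were never produced.</p>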
| 0 | 2016-07-26T15:53:34Z | [
"python",
"subprocess"
] |
subprocess.call() or subprocess.Popen for generating/using output files | 38,594,141 | <p>I am trying to write a Python code to interact with another software that uses command line prompts. Once the command line prompt is executed, multiple output files are generated in the directory and the software uses those output files to do some calculations. Plus, the rest of my program utilizes those output files. Currently, I run my python code, then manually enter in the command line prompt, then call the rest of my code and that works fine, however when I attempted to put: </p>
<pre><code>subprocess.call(['sfit4Layer0.py', '-bv5', '-fs'], shell=False)
</code></pre>
<p>into my file, it does not execute properly (the output files are not generated). </p>
<p>When I made the above code its own individual python script and called it immediately after the first part of my code - it also worked.</p>
<p>According to my output, I am convinced that the problem is this: the call generated multiple files and then uses those files to make calculations, however, it is not generating the proper files so I get errors in my output. So in a way it seems like it is getting ahead of itself: not waiting for the output files to be produced before making calculations, but again, it works when I run this command separately outside of the program. Any ideas on why this would happen?</p>
<p>What am I doing wrong? Do I need to specify the directory (could the output files be put somewhere else in my computer)? Do I need to use subprocess.Popen ? I have searched the internet but I am new(ish) to Python and thoroughly stumped.</p>
<p>Any suggestions welcome. Thanks!</p>
<p>EDIT: for those who asked, here is the sfit4Layer0.py code:</p>
<pre><code>#! /usr/bin/python
##! /usr/local/python-2.7/bin/python
##! /usr/bin/python
##! /usr/bin/python
# Change the above line to point to the location of your python executable
#----------------------------------------------------------------------------------------
# Name:
# sfit4Layer0.py
#
# Purpose:
# This file is the zeroth order running of sfit4. It accomplishes the following:
# 1) Calls pspec to convert binary spectra file (bnr) to ascii (t15asc)
# 2) Calls hbin to gather line parameters from hitran
# 3) Calls sfit4
# 4) Calls error analysis from Layer1mods.py
# 5) Clean outputs from sfit4 call
#
#
# External Subprocess Calls:
# 1) pspec executable file from pspec.f90
# 2) hbin executable file from hbin.f90
# 3) sfit4 executable file from sfit4.f90
# 4) errAnalysis from Layer1mods.py
#
#
#
# Notes:
# 1) Options include:
# -i <dir> : Optional. Data directory. Default is current working directory
# -b <dir/str> : Optional. Binary directory. Default is hard-code.
# -f <str> : Run flags, h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean
#
#
# Usage:
# ./sfit4Layer0.py [options]
#
#
# Examples:
# 1) This example runs hbin, pspec, sfit4, error analys, and cleans working directory prior to execution
# ./sfit4Layer0.py -f hpsec
#
# 2) This example just runs sfit4
# ./sfit4Layer0.py -f s
#
# 3) This example cleans the output file created by sfit4 in directory (/User/home/datafiles/) which is not the current directory
# ./sfit4Layer0.py -i /User/home/datafiles/ -f c
#
# Version History:
# Created, May, 2013 Eric Nussbaumer (ebaumer@ucar.edu)
#
#
# License:
# Copyright (c) 2013-2014 NDACC/IRWG
# This file is part of sfit4.
#
# sfit4 is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# sfit4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with sfit4. If not, see <http://www.gnu.org/licenses/>
#
#----------------------------------------------------------------------------------------
#---------------
# Import modules
#---------------
import sys
import os
import getopt
import sfitClasses as sc
from Layer1Mods import errAnalysis
from Tkinter import Tk
from tkFileDialog import askopenfilename
#------------------------
# Define helper functions
#------------------------
def usage(binDirVer):
print 'sfit4Layer0.py -f <str> [-i <dir> [-b <dir/str> ] \n'
print '-i <dir> Data directory. Optional: default is current working directory'
print '-f <str> Run Flags: Necessary: h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean'
print '-b <dir/str> Binary sfit directory. Optional: default is hard-coded in main(). Also accepts v1, v2, etc.'
for ver in binDirVer:
print ' {}: {}'.format(ver,binDirVer[ver])
sys.exit()
def main(argv):
#----------------
# Initializations
#----------------
#------------
# Directories
#------------
wrkDir = os.getcwd() # Set current directory as the data directory
binDir = '/data/bin' # Default binary directory. Used of nothing is specified on command line
binDirVer = {
'v1': '/data/ebaumer/Code/sfit-core-code/src/', # Version 1 for binary directory (Eric)
'v2': '/data/tools/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v3': '/Users/jamesw/FDP/sfit/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v4': '/home/ebaumer/Code/sfit4/src/',
'v5': '/Users/allisondavis/Documents/Summer2016/sfit4_0.9.4.3/src'
}
#----------
# Run flags
#----------
hbinFlg = False # Flag to run hbin
pspecFlg = False # Flag to run pspec
sfitFlg = False # Flag to run sfit4
errFlg = False # Flag to run error analysis
clnFlg = False # Flag to clean directory of output files listed in ctl file
#--------------------------------
# Retrieve command line arguments
#--------------------------------
try:
opts, args = getopt.getopt(sys.argv[1:], 'i:b:f:?')
except getopt.GetoptError as err:
print str(err)
usage(binDirVer)
sys.exit()
#-----------------------------
# Parse command line arguments
#-----------------------------
for opt, arg in opts:
# Data directory
if opt == '-i':
wrkDir = arg
sc.ckDir(wrkDir,exitFlg=True)
# Binary directory
elif opt == '-b':
if not sc.ckDir(arg,exitFlg=False,quietFlg=True):
try: binDir = binDirVer[arg.lower()]
except KeyError: print '{} not a recognized version for -b option'.format(arg); sys.exit()
else: binDir = arg
if not(binDir.endswith('/')): binDir = binDir + '/'
# Run flags
elif opt == '-f':
flgs = list(arg)
for f in flgs:
if f.lower() == 'h': hbinFlg = True
elif f.lower() == 'p': pspecFlg = True
elif f.lower() == 's': sfitFlg = True
elif f.lower() == 'e': errFlg = True
elif f.lower() == 'c': clnFlg = True
else: print '{} not an option for -f ... ignored'.format(f)
elif opt == '-?':
usage(binDirVer)
sys.exit()
else:
print 'Unhandled option: {}'.format(opt)
sys.exit()
#--------------------------------------
# If necessary change working directory
# to directory with input data.
#--------------------------------------
if os.path.abspath(wrkDir) != os.getcwd(): os.chdir(wrkDir)
if not(wrkDir.endswith('/')): wrkDir = wrkDir + '/'
#--------------------------
# Initialize sfit ctl class
#--------------------------
ctlFileName = wrkDir + 'sfit4.ctl'
if sc.ckFile(wrkDir+'sfit4.ctl'): ctlFileName = wrkDir + 'sfit4.ctl'
else:
Tk().withdraw()
ctlFileName = askopenfilename(initialdir=wrkDir,message='Please select sfit ctl file')
ctlFile = sc.CtlInputFile(ctlFileName)
ctlFile.getInputs()
#------------------------
# Initialize sb ctl class
#------------------------
if errFlg:
if sc.ckFile(wrkDir+'sb.ctl'): sbCtlFileName = wrkDir + 'sb.ctl'
else:
Tk().withdraw()
sbCtlFileName = askopenfilename(initialdir=wrkDir,message='Please select sb ctl file')
sbCtlFile = sc.CtlInputFile(sbCtlFileName)
sbCtlFile.getInputs()
#---------------------------
# Clean up output from sfit4
#---------------------------
if clnFlg:
for k in ctlFile.inputs['file.out']:
if 'file.out' in k:
try: os.remove(wrkDir + ctlFile.inputs[k])
except OSError: pass
#----------
# Run pspec
#----------
if pspecFlg:
print '*************'
print 'Running pspec'
print '*************'
rtn = sc.subProcRun( [binDir + 'pspec'] )
#----------
# Run hbin
#----------
if hbinFlg:
print '************'
print 'Running hbin'
print '************'
rtn = sc.subProcRun( [binDir + 'hbin'] )
#----------
# Run sfit4
#----------
if sfitFlg:
print '************'
print 'Running sfit'
print '************'
rtn = sc.subProcRun( [binDir + 'sfit4'] )
#-------------------
# Run error analysis
#-------------------
if errFlg:
print '**********************'
print 'Running error analysis'
print '**********************'
rtn = errAnalysis(ctlFile,sbCtlFile,wrkDir)
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
| 0 | 2016-07-26T15:32:03Z | 38,594,664 | <p>If you are trying to call another python script from your python script the subprocess.call() method is suited for your operation.</p>
<p>I would suggest you turn it into</p>
<pre><code>subprocess.call('sfit4Layer0.py -bv5 -fs', shell=True)
</code></pre>
<p>With <code>shell=True</code>, pass the command as a single string; if you pass a list, only the first element is handed to the shell as the command. Hope this solution works.</p>
<p>Popen is generally used when you want to pipe the STDIN/STDOUT or STDERR from one process to another. </p>
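<p>A minimal sketch of that piping case — <code>stdout=PIPE</code> lets the parent capture the child's output, and <code>communicate()</code> waits for the child to finish:</p>

```python
import subprocess
import sys

# Capture a child process's stdout in the parent via a pipe.
proc = subprocess.Popen(
    [sys.executable, '-c', "print('hello from child')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()  # waits for the child and reads all of stdout
print(out.decode().strip())
```
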
<p>Also, whenever you write files, build the file and directory paths as absolute paths; this way the files always end up in the desired location.</p>
<p>Happy Coding.</p>
| 0 | 2016-07-26T15:56:16Z | [
"python",
"subprocess"
] |
subprocess.call() or subprocess.Popen for generating/using output files | 38,594,141 | <p>I am trying to write a Python code to interact with another software that uses command line prompts. Once the command line prompt is executed, multiple output files are generated in the directory and the software uses those output files to do some calculations. Plus, the rest of my program utilizes those output files. Currently, I run my python code, then manually enter in the command line prompt, then call the rest of my code and that works fine, however when I attempted to put: </p>
<pre><code>subprocess.call(['sfit4Layer0.py', '-bv5', '-fs'], shell=False)
</code></pre>
<p>into my file, it does not execute properly (the output files are not generated). </p>
<p>When I made the above code its own individual python script and called it immediately after the first part of my code - it also worked.</p>
<p>According to my output, I am convinced that the problem is this: the call generated multiple files and then uses those files to make calculations, however, it is not generating the proper files so I get errors in my output. So in a way it seems like it is getting ahead of itself: not waiting for the output files to be produced before making calculations, but again, it works when I run this command separately outside of the program. Any ideas on why this would happen?</p>
<p>What am I doing wrong? Do I need to specify the directory (could the output files be put somewhere else in my computer)? Do I need to use subprocess.Popen ? I have searched the internet but I am new(ish) to Python and thoroughly stumped.</p>
<p>Any suggestions welcome. Thanks!</p>
<p>EDIT: for those who asked, here is the sfit4Layer0.py code:</p>
<pre><code>#! /usr/bin/python
##! /usr/local/python-2.7/bin/python
##! /usr/bin/python
##! /usr/bin/python
# Change the above line to point to the location of your python executable
#----------------------------------------------------------------------------------------
# Name:
# sfit4Layer0.py
#
# Purpose:
# This file is the zeroth order running of sfit4. It accomplishes the following:
# 1) Calls pspec to convert binary spectra file (bnr) to ascii (t15asc)
# 2) Calls hbin to gather line parameters from hitran
# 3) Calls sfit4
# 4) Calls error analysis from Layer1mods.py
# 5) Clean outputs from sfit4 call
#
#
# External Subprocess Calls:
# 1) pspec executable file from pspec.f90
# 2) hbin executable file from hbin.f90
# 3) sfit4 executable file from sfit4.f90
# 4) errAnalysis from Layer1mods.py
#
#
#
# Notes:
# 1) Options include:
# -i <dir> : Optional. Data directory. Default is current working directory
# -b <dir/str> : Optional. Binary directory. Default is hard-code.
# -f <str> : Run flags, h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean
#
#
# Usage:
# ./sfit4Layer0.py [options]
#
#
# Examples:
# 1) This example runs hbin, pspec, sfit4, error analys, and cleans working directory prior to execution
# ./sfit4Layer0.py -f hpsec
#
# 2) This example just runs sfit4
# ./sfit4Layer0.py -f s
#
# 3) This example cleans the output file created by sfit4 in directory (/User/home/datafiles/) which is not the current directory
# ./sfit4Layer0.py -i /User/home/datafiles/ -f c
#
# Version History:
# Created, May, 2013 Eric Nussbaumer (ebaumer@ucar.edu)
#
#
# License:
# Copyright (c) 2013-2014 NDACC/IRWG
# This file is part of sfit4.
#
# sfit4 is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# sfit4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with sfit4. If not, see <http://www.gnu.org/licenses/>
#
#----------------------------------------------------------------------------------------
#---------------
# Import modules
#---------------
import sys
import os
import getopt
import sfitClasses as sc
from Layer1Mods import errAnalysis
from Tkinter import Tk
from tkFileDialog import askopenfilename
#------------------------
# Define helper functions
#------------------------
def usage(binDirVer):
print 'sfit4Layer0.py -f <str> [-i <dir> [-b <dir/str> ] \n'
print '-i <dir> Data directory. Optional: default is current working directory'
print '-f <str> Run Flags: Necessary: h = hbin, p = pspec, s = sfit4, e = error analysis, c = clean'
print '-b <dir/str> Binary sfit directory. Optional: default is hard-coded in main(). Also accepts v1, v2, etc.'
for ver in binDirVer:
print ' {}: {}'.format(ver,binDirVer[ver])
sys.exit()
def main(argv):
#----------------
# Initializations
#----------------
#------------
# Directories
#------------
wrkDir = os.getcwd() # Set current directory as the data directory
binDir = '/data/bin' # Default binary directory. Used of nothing is specified on command line
binDirVer = {
'v1': '/data/ebaumer/Code/sfit-core-code/src/', # Version 1 for binary directory (Eric)
'v2': '/data/tools/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v3': '/Users/jamesw/FDP/sfit/400/sfit-core/src/', # Version 2 for binary directory (Jim)
'v4': '/home/ebaumer/Code/sfit4/src/',
'v5': '/Users/allisondavis/Documents/Summer2016/sfit4_0.9.4.3/src'
}
#----------
# Run flags
#----------
hbinFlg = False # Flag to run hbin
pspecFlg = False # Flag to run pspec
sfitFlg = False # Flag to run sfit4
errFlg = False # Flag to run error analysis
clnFlg = False # Flag to clean directory of output files listed in ctl file
#--------------------------------
# Retrieve command line arguments
#--------------------------------
try:
opts, args = getopt.getopt(sys.argv[1:], 'i:b:f:?')
except getopt.GetoptError as err:
print str(err)
usage(binDirVer)
sys.exit()
#-----------------------------
# Parse command line arguments
#-----------------------------
for opt, arg in opts:
# Data directory
if opt == '-i':
wrkDir = arg
sc.ckDir(wrkDir,exitFlg=True)
# Binary directory
elif opt == '-b':
if not sc.ckDir(arg,exitFlg=False,quietFlg=True):
try: binDir = binDirVer[arg.lower()]
except KeyError: print '{} not a recognized version for -b option'.format(arg); sys.exit()
else: binDir = arg
if not(binDir.endswith('/')): binDir = binDir + '/'
# Run flags
elif opt == '-f':
flgs = list(arg)
for f in flgs:
if f.lower() == 'h': hbinFlg = True
elif f.lower() == 'p': pspecFlg = True
elif f.lower() == 's': sfitFlg = True
elif f.lower() == 'e': errFlg = True
elif f.lower() == 'c': clnFlg = True
else: print '{} not an option for -f ... ignored'.format(f)
elif opt == '-?':
usage(binDirVer)
sys.exit()
else:
print 'Unhandled option: {}'.format(opt)
sys.exit()
#--------------------------------------
# If necessary change working directory
# to directory with input data.
#--------------------------------------
if os.path.abspath(wrkDir) != os.getcwd(): os.chdir(wrkDir)
if not(wrkDir.endswith('/')): wrkDir = wrkDir + '/'
#--------------------------
# Initialize sfit ctl class
#--------------------------
ctlFileName = wrkDir + 'sfit4.ctl'
if sc.ckFile(wrkDir+'sfit4.ctl'): ctlFileName = wrkDir + 'sfit4.ctl'
else:
Tk().withdraw()
ctlFileName = askopenfilename(initialdir=wrkDir,message='Please select sfit ctl file')
ctlFile = sc.CtlInputFile(ctlFileName)
ctlFile.getInputs()
#------------------------
# Initialize sb ctl class
#------------------------
if errFlg:
if sc.ckFile(wrkDir+'sb.ctl'): sbCtlFileName = wrkDir + 'sb.ctl'
else:
Tk().withdraw()
sbCtlFileName = askopenfilename(initialdir=wrkDir,message='Please select sb ctl file')
sbCtlFile = sc.CtlInputFile(sbCtlFileName)
sbCtlFile.getInputs()
#---------------------------
# Clean up output from sfit4
#---------------------------
if clnFlg:
for k in ctlFile.inputs['file.out']:
if 'file.out' in k:
try: os.remove(wrkDir + ctlFile.inputs[k])
except OSError: pass
#----------
# Run pspec
#----------
if pspecFlg:
print '*************'
print 'Running pspec'
print '*************'
rtn = sc.subProcRun( [binDir + 'pspec'] )
#----------
# Run hbin
#----------
if hbinFlg:
print '************'
print 'Running hbin'
print '************'
rtn = sc.subProcRun( [binDir + 'hbin'] )
#----------
# Run sfit4
#----------
if sfitFlg:
print '************'
print 'Running sfit'
print '************'
rtn = sc.subProcRun( [binDir + 'sfit4'] )
#-------------------
# Run error analysis
#-------------------
if errFlg:
print '**********************'
print 'Running error analysis'
print '**********************'
rtn = errAnalysis(ctlFile,sbCtlFile,wrkDir)
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
| 0 | 2016-07-26T15:32:03Z | 38,595,732 | <pre><code>subprocess.call(['sfit4Layer0.py', '-bv5', '-fs'], shell=False)
</code></pre>
<ol>
<li>assumes that sfit4Layer0.py is executable, which it may not be</li>
<li>assumes that sfit4Layer0.py includes the #!/usr/bin/python shebang, which it may not. </li>
</ol>
<p>if the script does contain a shebang and is not executable
Try:</p>
<pre><code>subprocess.call(['python','/path/to/script/sfit4Layer0.py','-bv5','-fs'], shell=False)
</code></pre>
<p>if the script is executable and contains the shebang
Try:</p>
<pre><code>subprocess.call(['/path/to/script/sfit4Layer0.py -bv5 -fs'], shell=True)
</code></pre>
| 0 | 2016-07-26T16:52:39Z | [
"python",
"subprocess"
] |
pandas convert string columns to datetime, allowing missing but not invalid | 38,594,246 | <p>I have a <code>pandas</code> data frame with multiple columns of strings representing dates, with empty strings representing missing dates. For example</p>
<pre><code>import numpy as np
import pandas as pd
# expected date format is 'm/%d/%Y'
custId = np.array(list(range(1,6)))
eventDate = np.array(["06/10/1992","08/24/2012","04/24/2015","","10/14/2009"])
registerDate = np.array(["06/08/2002","08/20/2012","04/20/2015","","10/10/2009"])
# both date columns of dfGood should convert to datetime without error
dfGood = pd.DataFrame({'custId':custId, 'eventDate':eventDate, 'registerDate':registerDate})
</code></pre>
<p>I am trying to:</p>
<ul>
<li>Efficiently convert columns where all strings are valid dates or empty into columns of type <code>datetime64</code> (with <code>NaT</code> for the empty)</li>
<li>Raise <code>ValueError</code> when any non-empty string does not conform to the expected format</li>
</ul>
<p>Example of where <code>ValueError</code> should be raised:</p>
<pre><code># 2nd string invalid
registerDate = np.array(["06/08/2002","20/08/2012","04/20/2015","","10/10/2009"])
# eventDate column should convert, registerDate column should raise ValueError
dfBad = pd.DataFrame({'custId':custId, 'eventDate':eventDate, 'registerDate':registerDate})
</code></pre>
<p>This function does what I want at the element level:</p>
<pre><code>from datetime import datetime
def parseStrToDt(s, format = '%m/%d/%Y'):
"""Parse a string to datetime with the supplied format."""
return pd.NaT if s=='' else datetime.strptime(s, format)
print(parseStrToDt("")) # correctly returns NaT
print(parseStrToDt("12/31/2011")) # correctly returns 2011-12-31 00:00:00
print(parseStrToDt("12/31/11")) # correctly raises ValueError
</code></pre>
<p>However, I have <a href="http://stackoverflow.com/questions/8089940/applying-string-operations-to-numpy-arrays">read</a> that string operations shouldn't be <code>np.vectorize</code>-d. I thought this could be done efficiently using <code>pandas.DataFrame.apply</code>, as in:</p>
<pre><code>dfGood[['eventDate','registerDate']].applymap(lambda s: parseStrToDt(s)) # raises TypeError
dfGood.loc[:,'eventDate'].apply(lambda s: parseStrToDt(s)) # raises same TypeError
</code></pre>
<p>I'm guessing that the <code>TypeError</code> has something to do with my function returning a different <code>dtype</code>, but I do want to take advantage of dynamic typing and replace the string with a datetime (unless ValueError is raise)... so how can I do this? </p>
| 1 | 2016-07-26T15:36:24Z | 38,595,584 | <p><code>pandas</code> doesn't have an option that exactly replicates what you want, here's one way to do it, which should be relatively efficient.</p>
<pre><code>In [4]: dfBad
Out[4]:
custId eventDate registerDate
0 1 06/10/1992 06/08/2002
1 2 08/24/2012 20/08/2012
2 3 04/24/2015 04/20/2015
3 4
4 5 10/14/2009 10/10/2009
In [7]: cols
Out[7]: ['eventDate', 'registerDate']
In [9]: dts = dfBad[cols].apply(lambda x: pd.to_datetime(x, errors='coerce', format='%m/%d/%Y'))
In [10]: dts
Out[10]:
eventDate registerDate
0 1992-06-10 2002-06-08
1 2012-08-24 NaT
2 2015-04-24 2015-04-20
3 NaT NaT
4 2009-10-14 2009-10-10
In [11]: mask = pd.isnull(dts) & (dfBad[cols] != '')
In [12]: mask
Out[12]:
eventDate registerDate
0 False False
1 False True
2 False False
3 False False
4 False False
In [13]: mask.any()
Out[13]:
eventDate False
registerDate True
dtype: bool
In [14]: is_bad = mask.any()
In [23]: if is_bad.any():
...: raise ValueError("bad dates in col(s) {0}".format(is_bad[is_bad].index.tolist()))
...: else:
...: df[cols] = dts
...:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-579c06ce3c77> in <module>()
1 if is_bad.any():
----> 2 raise ValueError("bad dates in col(s) {0}".format(is_bad[is_bad].index.tolist()))
3 else:
4 df[cols] = dts
5
ValueError: bad dates in col(s) ['registerDate']
</code></pre>
| 1 | 2016-07-26T16:44:53Z | [
"python",
"numpy",
"pandas",
"python-datetime"
] |
pandas convert string columns to datetime, allowing missing but not invalid | 38,594,246 | <p>I have a <code>pandas</code> data frame with multiple columns of strings representing dates, with empty strings representing missing dates. For example</p>
<pre><code>import numpy as np
import pandas as pd
# expected date format is 'm/%d/%Y'
custId = np.array(list(range(1,6)))
eventDate = np.array(["06/10/1992","08/24/2012","04/24/2015","","10/14/2009"])
registerDate = np.array(["06/08/2002","08/20/2012","04/20/2015","","10/10/2009"])
# both date columns of dfGood should convert to datetime without error
dfGood = pd.DataFrame({'custId':custId, 'eventDate':eventDate, 'registerDate':registerDate})
</code></pre>
<p>I am trying to:</p>
<ul>
<li>Efficiently convert columns where all strings are valid dates or empty into columns of type <code>datetime64</code> (with <code>NaT</code> for the empty)</li>
<li>Raise <code>ValueError</code> when any non-empty string does not conform to the expected format</li>
</ul>
<p>Example of where <code>ValueError</code> should be raised:</p>
<pre><code># 2nd string invalid
registerDate = np.array(["06/08/2002","20/08/2012","04/20/2015","","10/10/2009"])
# eventDate column should convert, registerDate column should raise ValueError
dfBad = pd.DataFrame({'custId':custId, 'eventDate':eventDate, 'registerDate':registerDate})
</code></pre>
<p>This function does what I want at the element level:</p>
<pre><code>from datetime import datetime
def parseStrToDt(s, format = '%m/%d/%Y'):
"""Parse a string to datetime with the supplied format."""
return pd.NaT if s=='' else datetime.strptime(s, format)
print(parseStrToDt("")) # correctly returns NaT
print(parseStrToDt("12/31/2011")) # correctly returns 2011-12-31 00:00:00
print(parseStrToDt("12/31/11")) # correctly raises ValueError
</code></pre>
<p>However, I have <a href="http://stackoverflow.com/questions/8089940/applying-string-operations-to-numpy-arrays">read</a> that string operations shouldn't be <code>np.vectorize</code>-d. I thought this could be done efficiently using <code>pandas.DataFrame.apply</code>, as in:</p>
<pre><code>dfGood[['eventDate','registerDate']].applymap(lambda s: parseStrToDt(s)) # raises TypeError
dfGood.loc[:,'eventDate'].apply(lambda s: parseStrToDt(s)) # raises same TypeError
</code></pre>
<p>I'm guessing that the <code>TypeError</code> has something to do with my function returning a different <code>dtype</code>, but I do want to take advantage of dynamic typing and replace the string with a datetime (unless ValueError is raise)... so how can I do this? </p>
| 1 | 2016-07-26T15:36:24Z | 38,596,497 | <p>Just to take the accepted answer a little further, I replaced the columns of all valid or missing strings with their parsed datetimes, and then raised an error for the remaining unparsed columns:</p>
<pre><code>dtCols = ['eventDate', 'registerDate']
dts = dfBad[dtCols].apply(lambda x: pd.to_datetime(x, errors='coerce', format='%m/%d/%Y'))
mask = pd.isnull(dts) & (dfBad[dtCols] != '')
colHasError = mask.any()
invalidCols = colHasError[colHasError].index.tolist()
validCols = list(set(dtCols) - set(invalidCols))
dfBad[validCols] = dts[validCols] # replace the completely valid/empty string cols with dates
if colHasError.any():
raise ValueError("bad dates in col(s) {0}".format(invalidCols))
# raises: ValueError: bad dates in col(s) ['registerDate']
print(dfBad) # eventDate got converted, registerDate didn't
</code></pre>
<p>The accepted answer contains the main insight, though, which is to go ahead and coerce errors to <code>NaT</code> and then distinguish the non-empty but invalid strings from the empty ones with the mask.</p>
| 1 | 2016-07-26T17:37:37Z | [
"python",
"numpy",
"pandas",
"python-datetime"
] |
how to compute quantile bins in Pandas when there are ties in the data? | 38,594,277 | <p>Consider the following simple example. I am interested in getting a categorical variable that contains categories corresponding to quantiles.</p>
<pre><code> df = pd.DataFrame({'A':'foo foo foo bar bar bar'.split(),
'B':[0, 0, 1]*2})
df
Out[67]:
A B
0 foo 0
1 foo 0
2 foo 1
3 bar 0
4 bar 0
5 bar 1
</code></pre>
<p>In Pandas, <code>qtile</code> does the job. Unfortunately, <code>qtile</code> will fail here because of the ties in the data. </p>
<pre><code>df['C'] = df.groupby(['A'])['B'].transform(
lambda x: pd.qcut(x, 3, labels=range(1,4)))
</code></pre>
<p>gives the classic <code>ValueError: Bin edges must be unique: array([ 0. , 0. , 0.33333333, 1. ])</code></p>
<p>Is there another robust solution (from any other python package) that does not require to reinvent the wheel? </p>
<p>It has to be. I dont want to code myself my own quantile bin function. Any decent stats package can handle ties when creating quantile bins (<code>SAS</code>, <code>Stata</code>, etc). </p>
<p>I want to have something that is based on sound methodological choices and robust. </p>
<p>For instance, look here for a solution in SAS <a href="https://support.sas.com/documentation/cdl/en/proc/61895/HTML/default/viewer.htm#a000146840.htm" rel="nofollow">https://support.sas.com/documentation/cdl/en/proc/61895/HTML/default/viewer.htm#a000146840.htm</a>.</p>
<p>Or here for the well known xtile in Stata (<a href="http://www.stata.com/manuals13/dpctile.pdf" rel="nofollow">http://www.stata.com/manuals13/dpctile.pdf</a>). Note this SO post <a href="http://stackoverflow.com/questions/11585564/definitive-way-to-match-stata-weighted-xtile-command-using-python">Definitive way to match Stata weighted xtile command using Python?</a></p>
<p>What am I missing? Maybe using <code>Scipy</code>? </p>
<p>Many thanks!</p>
| 0 | 2016-07-26T15:37:47Z | 38,596,578 | <p>IIUC, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.digitize.html" rel="nofollow"><code>numpy.digitize</code></a></p>
<pre><code>df['C'] = df.groupby(['A'])['B'].transform(lambda x: np.digitize(x,bins=np.array([0,1,2])))
A B C
0 foo 0 1
1 foo 0 1
2 foo 1 2
3 bar 0 1
4 bar 0 1
5 bar 1 2
</code></pre>
| 1 | 2016-07-26T17:42:06Z | [
"python",
"pandas",
"scipy",
"statistics",
"stata"
] |
How to use logging, pytest fixture and capsys? | 38,594,296 | <p>I am trying to unit-test some algorithm that uses logging library.</p>
<p>I have a fixture that creates a logger.</p>
<p>In my <strong>1st test case</strong>, I do not use this fixture and uses a print to log to stdout. This test case <strong>passes</strong>.</p>
<p>In my <strong>2nd test case</strong>, I use this fixture, but not as documented in pytest doc. I just call the associated function in my test to get the logger. Then I use the logger to log to stdout. This test case <strong>passes</strong>.</p>
<p>In my <strong>3rd test case</strong>, I use this fixture as documented in pytest doc. The fixture is passed as an argument to the test function. Then I use the logger to log to stdout. This test case <strong>fails</strong>! It does not find anything in stdout. But in the error message, it says that my log is in the captured stdout call.</p>
<p>What am I doing wrong?</p>
<pre><code>import pytest
import logging
import sys
@pytest.fixture()
def logger():
logger = logging.getLogger('Some.Logger')
logger.setLevel(logging.INFO)
stdout = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout)
return logger
def test_print(capsys):
print 'Bouyaka!'
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# passes
def test_logger_without_fixture(capsys):
logger().info('Bouyaka!')
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# passes
def test_logger_with_fixture(logger, capsys):
logger.info('Bouyaka!')
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# fails with this error:
# > assert 'Bouyaka!' in stdout
# E assert 'Bouyaka!' in ''
#
# tests/test_logging.py:21: AssertionError
# ---- Captured stdout call ----
# Bouyaka!
</code></pre>
<p>There is no change if I reorder the test cases by the way.</p>
| 1 | 2016-07-26T15:38:21Z | 38,609,291 | <p>I'm guessing the logger gets created (via the fixture) before the <code>capsys</code> fixture is set up.</p>
<p>Some ideas:</p>
<ul>
<li>Use the <a href="https://pypi.python.org/pypi/pytest-catchlog" rel="nofollow">pytest-catchlog</a> plugin</li>
<li>Maybe reverse <code>logger, capsys</code></li>
<li>Make <code>logger</code> request the <code>capsys</code> fixture</li>
<li>Use <code>capfd</code> which is more lowlevel capturing without altering <code>sys</code></li>
</ul>
| 1 | 2016-07-27T09:41:49Z | [
"python",
"unit-testing",
"logging",
"py.test"
] |
How to use logging, pytest fixture and capsys? | 38,594,296 | <p>I am trying to unit-test some algorithm that uses logging library.</p>
<p>I have a fixture that creates a logger.</p>
<p>In my <strong>1st test case</strong>, I do not use this fixture and uses a print to log to stdout. This test case <strong>passes</strong>.</p>
<p>In my <strong>2nd test case</strong>, I use this fixture, but not as documented in pytest doc. I just call the associated function in my test to get the logger. Then I use the logger to log to stdout. This test case <strong>passes</strong>.</p>
<p>In my <strong>3rd test case</strong>, I use this fixture as documented in pytest doc. The fixture is passed as an argument to the test function. Then I use the logger to log to stdout. This test case <strong>fails</strong>! It does not find anything in stdout. But in the error message, it says that my log is in the captured stdout call.</p>
<p>What am I doing wrong?</p>
<pre><code>import pytest
import logging
import sys
@pytest.fixture()
def logger():
logger = logging.getLogger('Some.Logger')
logger.setLevel(logging.INFO)
stdout = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout)
return logger
def test_print(capsys):
print 'Bouyaka!'
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# passes
def test_logger_without_fixture(capsys):
logger().info('Bouyaka!')
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# passes
def test_logger_with_fixture(logger, capsys):
logger.info('Bouyaka!')
stdout, stderr = capsys.readouterr()
assert 'Bouyaka!' in stdout
# fails with this error:
# > assert 'Bouyaka!' in stdout
# E assert 'Bouyaka!' in ''
#
# tests/test_logging.py:21: AssertionError
# ---- Captured stdout call ----
# Bouyaka!
</code></pre>
<p>There is no change if I reorder the test cases by the way.</p>
| 1 | 2016-07-26T15:38:21Z | 38,614,135 | <p>Thanks a lot for your ideas!</p>
<p>Reverse <code>logger, capsys</code>, make <code>logger</code> request the <code>capsys</code> fixture and use <code>capfd</code> do not change anything.</p>
<p>I tried <a href="https://pypi.python.org/pypi/pytest-catchlog" rel="nofollow">pytest-catchlog</a> plugin and it <strong>works fine</strong>!</p>
<pre><code>import pytest
import logging
@pytest.fixture()
def logger():
logger = logging.getLogger('Some.Logger')
logger.setLevel(logging.INFO)
return logger
def test_logger_with_fixture(logger, caplog):
logger.info('Bouyaka!')
assert 'Bouyaka!' in caplog.text
# passes!
</code></pre>
<p>In my original tests, I logged to stdout and stderr and captured them.
This is an even better solution, as I do not need this tweak to check that my logs work fine.</p>
<p>Well, now I just need to rework all my tests to use caplog, but this is my own business ;)</p>
<p>The only thing left, now that I have a better solution, is to understand what is wrong in my original test case <code>def test_logger_with_fixture(logger, capsys)</code>.</p>
| 1 | 2016-07-27T13:17:55Z | [
"python",
"unit-testing",
"logging",
"py.test"
] |
Converting Excel to CSV python | 38,594,352 | <p>I am using xlrd to convert my .xls Excel file to a CSVfile yet when I try to open the workbook my program crashes sending an error message</p>
<pre><code>bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/xlrd/book.py", line 1224, in bof_error
raise XLRDError('Unsupported format, or corrupt file: ' + msg)
xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found 'Chrom\tPo'
</code></pre>
<p>The <code>Chrom\tPo</code> is part of my header for the excel file yet I don't understand what the error is with the Excel file and how to change it. </p>
<p>The program crashes right when i try to open the excel file using xlrd.open_workbook('Excel File') </p>
| -1 | 2016-07-26T15:40:56Z | 38,595,058 | <p>I would use <code>[openpyxl]</code><a class='doc-link' href="https://stackoverflow.com/documentation/python/2986/python-and-excel#t=20160726160750581884">1</a> for this.</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook(file_name)
ws = wb.worksheets[page_number]
table = []
for row_num in range(ws.get_highest_row()):
temp_row = []
for col_num in range(ws.get_highest_column()):
temp_row.append(ws.cell(row=row_num, col=col_num).value)
table.append(temp_row[:])
</code></pre>
<p>This will give you the contents of the sheet as a 2-D list, which you can then write out to a csv or use as you wish.</p>
<p>If you're stuck with <code>xlrd</code> for whatever reason, You may just need to convert your file from <code>xls</code> to <code>xlsx</code></p>
| 0 | 2016-07-26T16:17:20Z | [
"python"
] |
Does django's URL Conf support mapping an argument to strings? | 38,594,396 | <h2>The Problem</h2>
<p>I'm unsure of the best way to phrase this, but here goes: (note some of this may not be syntactically/semantically correct, as it's not my actual code, but I needed it to help explain what I'm asking) </p>
<p>Say I have a model the model <code>Album</code>:</p>
<pre><code>Class Album(models.Model):
ALBUM_TYPE_SINGLE = 1
ALBUM_TYPE_DEMO = 2
ALBUM_TYPE_GREATEST_HITS = 3
ALBUM_CHOICES = (
(ALBUM_TYPE_SINGLE, 'Single Record'),
(ALBUM_TYPE_DEMO, 'Demo Album'),
(ALBUM_TYPE_GREATEST_HITS, 'Greatest Hits'),
)
album_type = models.IntegerField(choices=ALBUM_CHOICES)
</code></pre>
<p>And I want to have separate URLs for the various types of albums. Currently, the URL Conf is something like so:</p>
<pre><code>urlpatterns = [
url(r'^singles/(?P<pk>.+)/$', views.AlbumView, name="singles"),
url(r'^demos/(?P<pk>.+)/$', views.AlbumView, name="demos"),
url(r'^greatest-hits/(?P<pk>.+)/$', views.AlbumView, name="greatest_hits"),
]
</code></pre>
<p>And when I want to serve the appropriate URL, I need to check the <code>album_type</code> manually:</p>
<pre><code>if object.album_type == Album.ALBUM_TYPE_SINGLE:
return reverse('singles', object.id)
elif object.album_type == Album.ALBUM_TYPE_DEMO:
return reverse('demos', object.id)
elif object.album_type == Album.ALBUM_TYPE_GREATEST_HITS:
return reverse('greatest_hits', object.id)
</code></pre>
<p>However, this is cumbersome to do, and I'm wondering if there is a way to pass in the <code>album_type</code> field to the call to <code>reverse</code> and have it automatically get the URL based on that. i.e. something like this:</p>
<pre><code>urlpatterns = [
url(r'^(?P<type>[singles|demos|greatest-hits])/(?P<pk>.+)/$', views.AlbumView, name="albums"),
]
</code></pre>
<p>and called with</p>
<pre><code>reverse("albums", object.album_type, object.id)
</code></pre>
<hr>
<h2>Attempted solutions</h2>
<p>I considered setting the choice strings to be</p>
<pre><code>ALBUM_CHOICES = (
(ALBUM_TYPE_SINGLE, 'singles'),
(ALBUM_TYPE_DEMO, 'demos'),
(ALBUM_TYPE_GREATEST_HITS, 'greatest-hits'),
)
</code></pre>
<p>which would then allow me to send <code>object.get_album_type_display()</code> as a string variable for type, which works, however, I need to be able to use <code>reverse</code> to build the URL while only having access to the integer value of <code>album_type</code> and not the display value.</p>
<p>I know this is an oddly specific question for an oddly specific scenario, but if anyone has any kind of potential solutions, I'd be very grateful! Thank you in advance!</p>
| 1 | 2016-07-26T15:43:11Z | 38,594,770 | <p>I would change the field to a <code>CharField</code>, and use the URL slug as the actual value rather than the display value:</p>
<pre><code>Class Album(models.Model):
ALBUM_TYPE_SINGLE = 'singles'
ALBUM_TYPE_DEMO = 'demos'
ALBUM_TYPE_GREATEST_HITS = 'greatest-hits'
ALBUM_CHOICES = (
(ALBUM_TYPE_SINGLE, 'Single Record'),
(ALBUM_TYPE_DEMO, 'Demo Album'),
(ALBUM_TYPE_GREATEST_HITS, 'Greatest Hits'),
)
album_type = models.CharField(choices=ALBUM_CHOICES, max_length=50)
</code></pre>
<p>In your <code>urls.py</code>:</p>
<pre><code>urlpatterns = [
url(r'^(?P<type>singles|demos|greatest-hits)/(?P<pk>.+)/$', views.AlbumView, name="albums"),
]
</code></pre>
<p>Then you can reverse it by passing <code>album.album_type</code>:</p>
<pre><code>reverse('albums', args=(album.album_type, album.pk))
</code></pre>
| 2 | 2016-07-26T16:01:55Z | [
"python",
"django"
] |
Smartsheet CHECKBOX Cells Always Returned as Empty | 38,594,429 | <p>Whenever I retrieve a SmartSheet row and loop through the cells within it, all cells of type CHECKBOX always have a displayValue or value of Null, regardless of the status of the checkbox on the sheet. Has anyone else experienced this? (I am using their python sdk)</p>
| 1 | 2016-07-26T15:44:40Z | 38,619,885 | <p>It's possible for the value of any cell to be null if the cell has never had any value set (e.g. if you add a value to a cell in a new row, all the other cells will be null). I'm guessing that's what you're seeing.</p>
<p>If you check one of those checkbox cells, save, then uncheck it, you should see its value as <code>false</code> in the API.</p>
<p>For the purposes of your program logic, you can treat any null checkbox cells as unchecked.</p>
| 0 | 2016-07-27T17:45:46Z | [
"python",
"smartsheet-api"
] |
Smartsheet CHECKBOX Cells Always Returned as Empty | 38,594,429 | <p>Whenever I retrieve a SmartSheet row and loop through the cells within it, all cells of type CHECKBOX always have a displayValue or value of Null, regardless of the status of the checkbox on the sheet. Has anyone else experienced this? (I am using their python sdk)</p>
| 1 | 2016-07-26T15:44:40Z | 38,623,660 | <p>I believe you've found a bug with the Python SDK.</p>
<p>Kevin's answer correctly describes the behavior of the API itself, as I verified (using Postman), via the following request / response.</p>
<p><strong>Get Row - Request:</strong></p>
<pre><code>GET https://api.smartsheet.com/2.0/sheets/7359436428732292/rows/863888846677892
</code></pre>
<p><strong>Get Row - Response:</strong></p>
<pre><code>{
"id": 863888846677892,
"sheetId": 7359436428732292,
"rowNumber": 1,
"version": 88,
"expanded": true,
"accessLevel": "OWNER",
"createdAt": "2016-07-06T22:21:58Z",
"modifiedAt": "2016-07-27T01:50:46Z",
"cells": [
{
"columnId": 4509804229093252,
"value": true
},
...
]
}
</code></pre>
<p>In the example Response above, the cell contains a Checkbox that is selected (<strong>value</strong>=true). So the API itself is behaving properly.</p>
<p>However, if I use the <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk" rel="nofollow">Smartsheet Python SDK</a> to examine the exact same cell, the <strong>value</strong> attribute is being incorrectly set to <em>null</em>:</p>
<p><strong>Python code:</strong></p>
<pre><code>import os
import smartsheet
os.environ['SMARTSHEET_ACCESS_TOKEN'] = 'MY_TOKEN_VALUE'
smartsheet = smartsheet.Smartsheet()
# get sheet
sheet = smartsheet.Sheets.get_sheet(7359436428732292)
print('SheetId:\n' + str(sheet.id) + '\n')
print('RowId:\n' + str(sheet.rows[0].id) + '\n')
print('ColumnId (for the checkbox column):\n' + str(sheet.columns[0].id) + '\n')
print('Column object (for the checkbox column):\n' + str(sheet.columns[0]) + '\n')
print('Cell that contains a selected Checkbox:\n' + str(sheet.rows[0].cells[0]))
</code></pre>
<p><strong>This code generates the following output:</strong></p>
<p>SheetId:</p>
<pre><code>7359436428732292
</code></pre>
<p>RowId:</p>
<pre><code>863888846677892
</code></pre>
<p>ColumnId (for the checkbox column):</p>
<pre><code>4509804229093252
</code></pre>
<p>Column object (for the checkbox column):</p>
<pre><code>{"index": 0, "locked": null, "systemColumnType": null, "autoNumberFormat": null, "symbol": null, "format": null, "primary": null, "options": [], "filter": null, "width": 75, "lockedForUser": null, "title": "CBcol", "hidden": null, "type": "CHECKBOX", "id": 4509804229093252, "tags": []}
</code></pre>
<p>Cell that contains a selected Checkbox:</p>
<pre><code>{"format": null, "displayValue": null, "linksOutToCells": null, "columnType": null, "columnId": 4509804229093252, "hyperlink": null, "value": null, "conditionalFormat": null, "strict": true, "formula": null, "linkInFromCell": null}
</code></pre>
<p>So, unfortunately, it seems that the Python SDK isn't properly setting <strong>value</strong> for Checkbox columns (or for Symbol columns like "Star" that behave like checkbox columns). I'd suggest that you log this issue <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk/issues" rel="nofollow">here</a>, so that Smartsheet is aware and can address it. If you're able to fix the issue locally (i.e., by modifying your local copy of the <strong>smartsheet-python-sdk</strong> package), then you can also submit a <a href="https://help.github.com/articles/using-pull-requests" rel="nofollow">pull request</a>.</p>
| 2 | 2016-07-27T21:40:20Z | [
"python",
"smartsheet-api"
] |
Smartsheet CHECKBOX Cells Always Returned as Empty | 38,594,429 | <p>Whenever I retrieve a SmartSheet row and loop through the cells within it, all cells of type CHECKBOX always have a displayValue or value of Null, regardless of the status of the checkbox on the sheet. Has anyone else experienced this? (I am using their python sdk)</p>
| 1 | 2016-07-26T15:44:40Z | 39,716,751 | <p>in the cell.py file of the smartsheet library, replace the old:</p>
<pre><code>@value.setter
def value(self, value):
if isinstance(value, six.string_types):
self._value = value
</code></pre>
<p>with:</p>
<pre><code>@value.setter
def value(self, value):
if isinstance(value, six.string_types) or value is True:
self._value = value
</code></pre>
| 0 | 2016-09-27T05:38:49Z | [
"python",
"smartsheet-api"
] |
pyVmomi start service on virtual machine | 38,594,473 | <p>I am trying to start a service within the client virtual machine through pyVmomi. I couldn't find much in the official documentation on this, and I searched the net to no avail. I then modified the code I have used to successfully launch a silent MSI install, to simply run cmd.exe with the argument 'net start' plus the service name. That returns a valid process ID as if it has launched cmd, however the service doesn't start. I did wonder if it was permissions, however there is a specific error relating to permissions (or lack of) in the guest VM, and this isn't thrown. I don't get any errors at all. Any thoughts on how to start a service through pyVmomi?</p>
<pre><code>def startService(ServiceName):
"""
starts a specified windows service [serviceName]
"""
pm = esxiContent.guestOperationsManager.processManager
ps = vim.vm.guest.ProcessManager.ProgramSpec(
programPath='cmd.exe', arguments='net start ' + ServiceName,
)
pid = pm.StartProgramInGuest(vm, creds, ps)
print(pid)
</code></pre>
| 0 | 2016-07-26T15:47:19Z | 38,954,175 | <p>In the end I couldn't find an answer so instead I simply wrote a batch file with the net start command inside it, then executed this using the processManager above.</p>
| 0 | 2016-08-15T11:25:24Z | [
"python",
"pyvmomi"
] |
Trouble calling R function from Python with rpy2 | 38,594,515 | <p>I am trying to use rpy2 to call the R package MatchIt. I am having difficulty seeing the outcome of the matched pairs from the $match.matrix. Here is the R code I am trying to execute in python.</p>
<pre><code>matched <- cbind(lalonde[row.names(foo$match.matrix),"re78"],lalonde[foo$match.matrix,"re78"])
</code></pre>
<p>Here is my python code:</p>
<pre><code>import readline
import rpy2.robjects
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
from rpy2 import robjects as ro
import numpy as np
from scipy.stats import ttest_ind
import pandas as pd
from pandas import Series,DataFrame
pandas2ri.activate()
R = ro.r
MatchIt = importr('MatchIt')
base = importr('base')
df = R('lalonde')
lalonde = pandas2ri.py2ri(df)
formula = 'treat ~ age + educ + black + hispan + married + nodegree + re74 + re75'
foo = MatchIt.matchit(formula = R(formula),
data = lalonde,
method = R('"nearest"'),
ratio = 1)
matched = \
base.cbind(lalonde.rx[base.row_names(foo.rx2('match.matrix')),"re78"],
lalonde.rx[foo.rx2('match.matrix'),"re78"])
</code></pre>
<p>This chunk runs :</p>
<pre><code>lalonde.rx(base.row_names(foo.rx2('match.matrix')),
"re78")
</code></pre>
<p>but this chunk</p>
<pre><code>lalonde.rx[foo.rx2('match.matrix'),"re78"].
</code></pre>
<p>returns an error of:</p>
<pre><code>ValueError: The first parameter must be a tuple.
</code></pre>
<p>The output of </p>
<pre><code>cbind(lalonde[row.names(foo$match.matrix),"re78"], lalonde[foo$match.matrix,"re78"])
</code></pre>
<p>should be a dataframe which matches the row names and cell values of foo$match.matrix with the values of "re78" in the lalonde dataframe</p>
| 0 | 2016-07-26T15:50:16Z | 38,600,758 | <p>Here <code>lalonde</code> is defined elsewhere (but thanks to @Parfait's question we know that this is a data frame). Now you'll have to break down your one-liner triggering the error to pinpoint the exact place of trouble (and we can't do that for you - the thing about self-contained and reproducible examples is that they are helping us help you).</p>
<pre><code>matched = \
base.cbind(lalonde[base.row_names(foo.rx2('match.matrix')),"re78"],
lalonde[foo.rx2('match.matrix'),"re78"])
</code></pre>
<p>Is this breaking with the first subset of <code>lalonde</code> ?</p>
<pre><code>lalonde[base.row_names(foo.rx2('match.matrix')),"re78"]
</code></pre>
<p>Since <code>type(lalonde)</code> is <code>rpy2.robjects.vectors.DataFrame</code> this is an R/rpy2 data frame. Extracting a subset like one would do it in R can be achieved with <code>.rx</code> (as in <strong>r</strong>-style e<strong>x</strong>traction - see <a href="http://rpy2.readthedocs.io/en/version_2.8.x/vector.html#extracting-r-style" rel="nofollow">http://rpy2.readthedocs.io/en/version_2.8.x/vector.html#extracting-r-style</a>
).</p>
<pre><code>lalonde.rx(base.row_names(foo.rx2('match.matrix')),
"re78")
</code></pre>
<p>It is important to understand what is happening with this call. By default the elements to extract in each direction of the data structure (here rows and columns of the data frame respectively) must be R vectors (vector of names, or vector of one-offset index integers) or a Python data structure that the conversion mechanism can translate into an R vector (of names or integers). <code>base.row_names</code> will return the row names (and that's a vector of names) but <code>foo.rx2('match.matrix')</code> might be something else. </p>
<p>Here <code>type(foo.rx2('match.matrix'))</code> is indicating that this is a matrix. Using matrices can be used be used to cherry pick cells in an R array, but in that case there can only be one parameter for the extraction... and we presently have two (the second is <code>"re78"</code>).</p>
<p>Since the first column of that <code>match.matrix</code> contains the indices (row numbers) in <code>lalonde</code>, the following should be what you want:</p>
<pre><code>matched = \
base.cbind(lalonde.rx[base.row_names(foo.rx2('match.matrix')),"re78"],
lalonde.rx[foo.rx2('match.matrix').rx(True, 1),"re78"])
</code></pre>
| 2 | 2016-07-26T22:15:57Z | [
"python",
"rpy2",
"cbind"
] |
Use Scrapy to get information from a website that generates data using java script | 38,594,544 | <p>I am trying to crawl a website using Scrapy; however, the website URL doesn't change and the pages are loaded using JavaScript. </p>
<p>This is how the site and URL look before I perform a search:
<a href="http://i.stack.imgur.com/nBx7H.png" rel="nofollow"><img src="http://i.stack.imgur.com/nBx7H.png" alt="image 1"></a></p>
<p>This is how the site and URL look after I perform a search:
<a href="http://i.stack.imgur.com/kwxsI.png" rel="nofollow"><img src="http://i.stack.imgur.com/kwxsI.png" alt="image 2"></a></p>
<p>How can I get the data from the site using Scrapy given these conditions?</p>
| 1 | 2016-07-26T15:51:14Z | 38,602,655 | <ul>
<li>You need to analyse how the website loads its data: inspect the URLs and HTTP headers of the requests the page makes.</li>
<li>Use a tool (like Postman) to simulate the request that loads the data.</li>
<li>Use Scrapy to implement the same request in your spider.</li>
</ul>
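<p>The workflow above can be prototyped with nothing but the standard library before wiring it into a spider. The endpoint and payload below are made-up placeholders; the real ones come from inspecting the page's network traffic in your browser's developer tools:</p>

```python
import json
import urllib.request

# Hypothetical values: replace 'endpoint' and 'payload' with whatever the
# browser's network tab shows the search form actually sending.
endpoint = 'http://example.com/api/search'
payload = json.dumps({'query': 'foo', 'page': 1}).encode('utf-8')

req = urllib.request.Request(
    endpoint,
    data=payload,
    headers={'Content-Type': 'application/json',
             'X-Requested-With': 'XMLHttpRequest'},
)

# A Request constructed with a data body defaults to the POST method.
print(req.get_method())   # POST
print(req.full_url)       # http://example.com/api/search
```

<p>Once such a prototype returns the right JSON, the same URL, body, and headers go into a <code>scrapy.Request</code> (or <code>scrapy.FormRequest</code>) inside your spider.</p>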
| 0 | 2016-07-27T02:18:42Z | [
"python",
"web-scraping",
"scrapy",
"scrapy-spider"
] |
python query with mongo | 38,594,616 | <p>I'm beginner in python. However in my python code I'm querying mongodb.</p>
<pre><code>from pymongo import MongoClient
import sys
client = MongoClient()
db = client.customer-care
cursor = db.interactions.find()
for document in cursor:
sys.stdout=open("test.txt","w")
print(document)
sys.stdout.close()
</code></pre>
<p>However this code gives me following error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 6, in <module>
db = client.customer-care
NameError: name 'care' is not defined
</code></pre>
| -1 | 2016-07-26T15:54:08Z | 38,595,031 | <p>Python <a href="https://docs.python.org/3/reference/lexical_analysis.html#identifiers" rel="nofollow">identifiers</a> cannot include the dash character. </p>
<p>If you must use a database name with a dash in it (I would advise that you consider camel-case), use this syntax:</p>
<pre><code>db = client['customer-care']
</code></pre>
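<p>On Python 3, the underlying rule can be checked directly with <code>str.isidentifier</code>, which shows why attribute-style access fails for the dashed name:</p>

```python
# Attribute access (client.customer_care) only works for strings that are
# valid Python identifiers; a dash makes the name invalid.
print('customer_care'.isidentifier())   # True
print('customer-care'.isidentifier())   # False -> use client['customer-care']
```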
| 1 | 2016-07-26T16:16:13Z | [
"python",
"mongodb"
] |
How to integrate a SK-Learn Naive Bayes trained model in Python Flask prediction web app? | 38,594,624 | <p>I'm trying to build a prediction tool using SK-Learn's Naive Bayes Classifier and the Python Flask micro-framework. From what I have googled, I can pickle the model and then unpickle the model when I load the app on the browser, but how can I do that exactly?</p>
<p>My app should receive user input values, then pass these values to the model, and then display the predicted values back to the users (as d3 graphs, thus the need to convert the predicted values into JSON format). </p>
<p>This is what I've tried so far:</p>
<p><strong>Pickling the model</strong></p>
<pre><code>from sklearn.naive_bayes import GaussianNB
import numpy as np
import csv
def loadCsv(filename):
lines = csv.reader(open(filename,"rb"))
dataset = list(lines)
for i in range(len(dataset)):
dataset[i] = [float(x) for x in dataset[i]]
return dataset
datasetX = loadCsv("pollutants.csv")
datasetY = loadCsv("acute_bronchitis.csv")
X = np.array(datasetX)
Y = np.array(datasetY).ravel()
model = GaussianNB()
model.fit(X,Y)
#import pickle
from sklearn.externals import joblib
joblib.dump(model,'acute_bronchitis.pkl')
</code></pre>
<p><strong>The HTML form to collect user input:</strong></p>
<pre><code><form class = "prediction-options" method = "post" action = "/prediction/results">
<input type = "range" class = "prediction-option" name = "aqi" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">AQI</label>
<input type = "range" class = "prediction-option" name = "pm2_5" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">PM2.5</label>
<input type = "range" class = "prediction-option" name = "pm10" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">PM10</label>
<input type = "range" class = "prediction-option" name = "so2" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">SO2</label>
<input type = "range" class = "prediction-option" name = "no2" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">NO2</label>
<input type = "range" class = "prediction-option" name = "co" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">CO</label>
<input type = "range" class = "prediction-option" name = "o3" min = 0 max = 100 value = 0></input>
<label class = "prediction-option-label">O3</label>
<input type = "submit" class = "submit-prediction-options" value = "Get Patient Estimates" />
</form>
</code></pre>
<p><strong>The Python Flask <code>app.py</code>:</strong></p>
<pre><code>from flask import Flask, render_template, request
import json
from sklearn.naive_bayes import GaussianNB
import numpy as np
import pickle as pkl
from sklearn.externals import joblib
model_acute_bronchitis = pkl.load(open('data/acute_bronchitis.pkl'))
@app.route("/prediction/results", methods = ['POST'])
def predict():
input_aqi = request.form['aqi']
input_pm2_5 = request.form['pm2_5']
input_pm10 = request.form['pm10']
input_so2 = request.form['so2']
input_no2 = request.form['no2']
input_co = request.form['co']
input_o3 = request.form['o3']
input_list = [[input_aqi,input_pm2_5,input_pm10,input_so2,input_no2,input_co,input_o3]]
output_acute_bronchitis = model_acute_bronchitis.predict(input_list)
prediction = json.dumps(output_acute_bronchitis)
return prediction
</code></pre>
<p>However, I got the following error message:<code>TypeError: 'NDArrayWrapper' object does not support indexing</code> which I found might be caused by using sk-learn's joblib to pickle the model. </p>
<p>So, I tried to see if I could use joblib's load function to load the model in Flask instead, and I got this error message: </p>
<pre><code>/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
DeprecationWarning)
[2016-07-27 12:45:30,747] ERROR in app: Exception on /prediction/results [POST]
Traceback (most recent call last):
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "app.py", line 95, in predict
output_acute_bronchitis = model_acute_bronchitis.predict(input_list)
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/sklearn/naive_bayes.py", line 65, in predict
jll = self._joint_log_likelihood(X)
File "/Users/Vanessa/Desktop/User/lib/python2.7/site-packages/sklearn/naive_bayes.py", line 394, in _joint_log_likelihood
n_ij -= 0.5 * np.sum(((X - self.theta_[i, :]) ** 2) /
TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')
127.0.0.1 - - [27/Jul/2016 12:45:30] "POST /prediction/results HTTP/1.1" 500 -
</code></pre>
<p>What am I doing wrong? Are there simpler alternatives to do what I hope to achieve?</p>
| 0 | 2016-07-26T15:54:24Z | 38,620,244 | <p>I think the problem with your code is that the data from your form is being read as a string. For example, in <code>input_aqi = request.form['aqi']</code>, <code>input_aqi</code> has a string. Therefore, in <code>output_acute_bronchitis = model_acute_bronchitis.predict(input_list)</code>, you end up passing <code>predict</code> an array of strings due to which you see this error. You can fix this by simply converting all your form inputs to floats as follows:</p>
<pre><code>input_aqi = float(request.form['aqi'])
</code></pre>
<p>You will have to do this for all the form inputs that you are putting in <code>input_list</code>.</p>
<p>Hope that helps.</p>
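<p>Rather than converting the seven fields one by one, they can all be converted in a single comprehension. A minimal sketch (a plain dict stands in for <code>request.form</code>, whose values are always strings, so this runs outside Flask):</p>

```python
# request.form maps field names to *strings*; a plain dict stands in for it
# here so the sketch is runnable without Flask.
form = {'aqi': '42', 'pm2_5': '10.5', 'pm10': '20', 'so2': '3',
        'no2': '7', 'co': '0.4', 'o3': '12'}

field_names = ['aqi', 'pm2_5', 'pm10', 'so2', 'no2', 'co', 'o3']
input_list = [[float(form[name]) for name in field_names]]
print(input_list)   # [[42.0, 10.5, 20.0, 3.0, 7.0, 0.4, 12.0]]
```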
| 1 | 2016-07-27T18:08:06Z | [
"python",
"flask",
"scikit-learn",
"prediction",
"naivebayes"
] |
How to feature-ize timeseries data in Pandas? | 38,594,625 | <p>I have data that are structured as below:</p>
<pre><code>Group, ID, Time, Feat1, Feat2, Feat3
A, 1, 0, 1.52, 2.94, 3.1
A, 1, 2, 1.67, 2.99, 3.3
A, 1, 4, 1.9, 3.34, 5.6
</code></pre>
<p>In this data, there are individuals who have been measured repeatedly.</p>
<p>I'd like to restructure the data such that each feature-time combination is a unique column, as below:</p>
<pre><code>Group, ID, Feat1_Time0, Feat1_Time2, Feat1_Time4, Feat2_Time0, Feat2_Time2, Feat2_Time4, Feat3_Time0, Feat3_Time2, Feat3_Time4
A, 1, 1.52, 2.94, 3.1, 1.67, 2.99, 3.3, 1.9, 3.34, 5.6
</code></pre>
<p>Is there a simple way to handle this, without using a for-loop? I've tried accomplishing what I need with the for-loop method, but it is inelegant and clunky, and given real data of 10<sup>4</sup> columns, it would take a while as well.</p>
| 1 | 2016-07-26T15:54:25Z | 38,595,389 | <pre><code>df = pd.DataFrame({'Group': {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A'},
'Time': {0: 0, 1: 2, 2: 4, 3: 0, 4: 2, 5: 4},
'ID': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2},
'Feat1': {0: 1.52, 1: 1.6699999999999999, 2: 1.8999999999999999, 3: 1.52, 4: 1.6699999999999999, 5: 1.8999999999999999},
'Feat3': {0: 3.1000000000000001, 1: 3.2999999999999998, 2: 5.5999999999999996, 3: 3.1000000000000001, 4: 3.2999999999999998, 5: 5.5999999999999996},
'Feat2': {0: 2.9399999999999999, 1: 2.9900000000000002, 2: 3.3399999999999999, 3: 2.9399999999999999, 4: 2.9900000000000002, 5: 3.3399999999999999}})
df1 = df.set_index(['Group', 'ID', 'Time']).unstack()
df1
</code></pre>
<p><a href="http://i.stack.imgur.com/e8t8l.png" rel="nofollow"><img src="http://i.stack.imgur.com/e8t8l.png" alt="enter image description here"></a></p>
<pre><code>df1.columns = df1.columns.to_series().apply(pd.Series).astype(str).T.apply('_'.join)
df1.reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/d1T9i.png" rel="nofollow"><img src="http://i.stack.imgur.com/d1T9i.png" alt="enter image description here"></a></p>
| 1 | 2016-07-26T16:35:30Z | [
"python",
"pandas"
] |
Python Lambda function can't execute HTTPS cURL command | 38,594,627 | <p>I have a Lambda function which is defined by the following simple script which directs my EC2 instance to perform a backup:</p>
<pre><code>def lambda_handler(event, context):
import subprocess
result = subprocess.call("curl -k https://loadbalancer.stuff.domain.com/backup/", shell=True)
return result
</code></pre>
<p>I have run the shell command in the subprocess above from 2 separate servers and it works fine on those, however I get a return error code of 6 when done from Lambda which means: </p>
<blockquote>
<p>CURLE_COULDNT_RESOLVE_HOST (6):
Couldn't resolve host. The given remote host was not resolved.</p>
</blockquote>
<p>I have no idea how to proceed, and I've tried literally every single other way of performing this HTTPS request. What am I doing wrong here?</p>
| -1 | 2016-07-26T15:54:33Z | 38,600,602 | <p>Your Lambda function is executing outside of your VPC.</p>
<p>But your load balancer is inside your VPC, and not publicly accessible. So the Lambda function does not have a network path to the load balancer.</p>
<p>There are 2 possible resolutions:</p>
<ol>
<li>Put your Lambda function inside your VPC, or</li>
<li>Make your load balancer public.</li>
</ol>
| 1 | 2016-07-26T22:03:03Z | [
"python",
"amazon-web-services",
"curl",
"amazon-lambda"
] |
How do I run "windows features" through Python? | 38,594,634 | <p>I'm trying to run <code>tftp</code> from a python script. I enabled it as a Windows 10 feature (<a href="http://www.trishtech.com/2014/10/enable-tftp-telnet-in-windows-10/" rel="nofollow">http://www.trishtech.com/2014/10/enable-tftp-telnet-in-windows-10/</a>) so I am able to run it via the command line. But when I try to run it through python I get errors. I tried 3 methods of calling it: os.system, subprocess.call, subprocess.Popen, and as a 'control' used those subprocesses to call 'dir' successfully. I also tried calling tftp from the full path. Code and output pasted below. I added delays and some printing to help parsing.</p>
<p>My guess is that when using those methods, the additional "windows features" that I enabled are not part of the environment that Python sees. But I'm not sure how to specify the environment to include the feature!</p>
<pre><code>call("dir", shell = True)
print "*******************************************"
time.sleep(1)
call("tftp", shell = True)
print "*******************************************"
time.sleep(1)
call("C:\Windows\System32\\tftp.exe", shell = True)
print "*******************************************"
time.sleep(1)
Popen("dir", shell = True)
print "*******************************************"
time.sleep(1)
Popen("tftp", shell = True)
print "*******************************************"
time.sleep(1)
Popen("C:\Windows\System32\\tftp.exe", shell = True)
print "*******************************************"
time.sleep(1)
os.system("dir")
print "*******************************************"
time.sleep(1)
os.system("tftp")
print "*******************************************"
time.sleep(1)
os.system("C:\Windows\System32\\tftp.exe")
print "*******************************************"
time.sleep(1)
</code></pre>
<p>And the output I get is:</p>
<pre><code>Directory of C:\[REDACTED]
07/26/2016 11:05 AM <DIR> .
07/26/2016 11:05 AM <DIR> ..
07/26/2016 11:05 AM 1,724 [REDACTED].py
07/26/2016 11:05 AM 1,828 [REDACTED].pyc
07/23/2016 05:36 PM 1,933 prep_for_bootload.py
07/26/2016 07:52 AM 13 s.bat
07/26/2016 11:19 AM 1,449 scrap.py
07/26/2016 10:38 AM 672 test_script.py
07/26/2016 10:39 AM 90 upload_script.bat
7 File(s) 7,709 bytes
2 Dir(s) 222,286,884,864 bytes free
*******************************************
'tftp' is not recognized as an internal or external command,
operable program or batch file.
*******************************************
'C:\Windows\System32\tftp.exe' is not recognized as an internal or external command,
operable program or batch file.
*******************************************
*******************************************
Volume in drive C has no label.
Volume Serial Number is 2AC7-8D0E
Directory of C:\Users\[REDACTED]
07/26/2016 11:05 AM <DIR> .
07/26/2016 11:05 AM <DIR> ..
07/26/2016 11:05 AM 1,724 [REDACTED]
07/26/2016 11:05 AM 1,828 [REDACTED]
07/23/2016 05:36 PM 1,933 prep_for_bootload.py
07/26/2016 07:52 AM 13 s.bat
07/26/2016 11:19 AM 1,449 scrap.py
07/26/2016 10:38 AM 672 test_script.py
07/26/2016 10:39 AM 90 upload_script.bat
7 File(s) 7,709 bytes
2 Dir(s) 222,286,884,864 bytes free
*******************************************
'tftp' is not recognized as an internal or external command,
operable program or batch file.
*******************************************
'C:\Windows\System32\tftp.exe' is not recognized as an internal or external command,
operable program or batch file.
Volume in drive C has no label.
Volume Serial Number is 2AC7-8D0E
Directory of C:\Users\[REDACTED]
07/26/2016 11:05 AM <DIR> .
07/26/2016 11:05 AM <DIR> ..
07/26/2016 11:05 AM 1,724 [REDACTED]
07/26/2016 11:05 AM 1,828 [REDACTED]
07/23/2016 05:36 PM 1,933 prep_for_bootload.py
07/26/2016 07:52 AM 13 s.bat
07/26/2016 11:19 AM 1,449 scrap.py
07/26/2016 10:38 AM 672 test_script.py
07/26/2016 10:39 AM 90 upload_script.bat
7 File(s) 7,709 bytes
2 Dir(s) 222,286,884,864 bytes free
*******************************************
'tftp' is not recognized as an internal or external command,
operable program or batch file.
*******************************************
'C:\Windows\System32\tftp.exe' is not recognized as an internal or external command,
operable program or batch file.
*******************************************
</code></pre>
| -1 | 2016-07-26T15:54:46Z | 38,600,996 | <p>See response from: @eryksun for the correct answer. Quoted here:</p>
<p>"To work in either case, use system32 = 'SysNative' if platform.architecture()[0] == '32bit' else 'System32'. Then create the fully qualified path as tftp_path = os.path.join(os.environ['SystemRoot'], system32, 'TFTP.EXE')"</p>
| 0 | 2016-07-26T22:40:34Z | [
"python",
"windows",
"command-line"
] |
Pylint is reporting false-positive error | 38,594,658 | <pre><code>error zb1.buildup 1 0 Unable to import 'application'
</code></pre>
<p>Here is the screenshot of my structure. It's screaming about all my imports from my current project. Does it not add the project as a path?</p>
<p>I know pylint is a static code checker but this feels obviously wrong. Let me know if I made a mistake of on my part. Thank you! </p>
<p>P.S. Just in case here is the pylint command <code>pylint --output-format=html ../zb1 > pylint.html</code> . Also code works, just in case you are wondering.</p>
<p><strong>buildup.py</strong></p>
<pre><code>from application import app, db #import app
if __name__ == "__main__":
db.create_all()
</code></pre>
<p><a href="http://i.stack.imgur.com/tbgVB.png" rel="nofollow"><img src="http://i.stack.imgur.com/tbgVB.png" alt="Screenshot"></a></p>
<pre><code>$ pylint --version
No config file found, using default configuration
pylint 1.6.4,
astroid 1.4.7
Python 3.5.2 (default, Jun 29 2016, 13:43:58)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
</code></pre>
| 1 | 2016-07-26T15:55:53Z | 38,668,697 | <p>You're having an issue with the python search path. A relatively simple solution is to define the PYTHONPATH environment variable. Assuming that you're trying to invoke pylint from within zb1, the following should work:</p>
<pre><code>PYTHONPATH=`pwd` pylint --output-format=html ../zb1 > pylint.html
</code></pre>
<p>The addition at the beginning of the line defines the PYTHONPATH environment variable for that invocation of pylint.</p>
| 1 | 2016-07-29T23:10:16Z | [
"python",
"python-3.x",
"pylint"
] |
What kind of monitor has Python? | 38,594,673 | <p>What kind of monitors (concurrent programming) does Python have? Brinch Hansen, Hoare, or Lampson / Redell (like Java)?</p>
<ul>
<li>Brinch Hansen: the notifier thread exits the monitor, and the notified continues.</li>
<li>Hoare: the notifier thread enters in a special queue, and the notified continues.</li>
<li>Lampson / Redell: the notifier thread continues, and the notified enters in the entry queue.</li>
</ul>
| -1 | 2016-07-26T15:56:36Z | 38,594,836 | <p>This is answered in the official <a href="https://docs.python.org/2/library/threading.html#condition-objects" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>The wait() method releases the lock, and then blocks until it is
awakened by a notify() or notifyAll() call for the same condition
variable in another thread. Once awakened, it re-acquires the lock and
returns. It is also possible to specify a timeout.</p>
<p>The notify() method wakes up one of the threads waiting for the
condition variable, if any are waiting. The notifyAll() method wakes
up all threads waiting for the condition variable.</p>
<p><strong>Note: the notify() and notifyAll() methods don't release the lock</strong>;
this means that <strong>the thread or threads awakened will not return from
their wait() call immediately</strong>, but only when the thread that called
notify() or notifyAll() finally relinquishes ownership of the lock.</p>
</blockquote>
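<p>So Python's <code>threading.Condition</code> follows the quoted "signal and continue" (Lampson / Redell) semantics. A small sketch that makes the ordering observable (the <code>Event</code> is only there to guarantee the waiter is actually blocked in <code>wait()</code> before <code>notify()</code> is called):</p>

```python
import threading

cond = threading.Condition()
ready = threading.Event()
events = []

def waiter():
    with cond:
        ready.set()                      # lock is held here...
        cond.wait()                      # ...and released only inside wait()
        events.append('waiter resumed')  # runs after re-acquiring the lock

t = threading.Thread(target=waiter)
t.start()
ready.wait()
with cond:                               # acquirable only once wait() released the lock
    cond.notify()                        # wakes the waiter but keeps the lock
    events.append('notifier still held the lock')
t.join()
print(events)
# ['notifier still held the lock', 'waiter resumed']
```

<p>The notified thread cannot append its entry until the notifier's <code>with</code> block releases the lock, which is exactly the Lampson / Redell behaviour.</p>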
| 1 | 2016-07-26T16:05:46Z | [
"python",
"concurrency"
] |
Extract Json without property names in Python | 38,594,688 | <p>I am trying to extract location data from a json API service. This is how far i got:</p>
<pre><code>>>> import json
>>> import urllib
>>> from urllib import urlopen
>>> url = urlopen('THE API URL').read()
>>> print url
[["244630489","53.099040","6.040552","0","0","99","2016-07-26T15:28:59"]]
>>> result = json.loads(url)
>>> print result
[[u'244630489', u'53.099040', u'6.040552', u'0', u'0', u'99', u'2016-07-26T15:28:59']]
</code></pre>
<p>Now I would like to extract the second and third values. I can't figure out how to do it using json.loads, because there are no property names.</p>
<p>Can anyone help me out?</p>
| -1 | 2016-07-26T15:57:39Z | 38,594,850 | <p>You're getting an array represented as a list of lists. Note that slicing the raw response directly (e.g. <code>url[0][1:3]</code>) won't work, because at that point <code>url</code> is still a string; parse it with <code>json.loads</code> first, then slice the inner list:</p>
<pre><code>import json
from urllib import urlopen
url = urlopen('THE API URL').read()
print url
result = json.loads(url)
print result[0][1:3]
</code></pre>
<p>Which will print:</p>
<pre><code>[u'53.099040', u'6.040552']
</code></pre>
| 0 | 2016-07-26T16:06:37Z | [
"python",
"json"
] |
Extract Json without property names in Python | 38,594,688 | <p>I am trying to extract location data from a json API service. This is how far i got:</p>
<pre><code>>>> import json
>>> import urllib
>>> from urllib import urlopen
>>> url = urlopen('THE API URL').read()
>>> print url
[["244630489","53.099040","6.040552","0","0","99","2016-07-26T15:28:59"]]
>>> result = json.loads(url)
>>> print result
[[u'244630489', u'53.099040', u'6.040552', u'0', u'0', u'99', u'2016-07-26T15:28:59']]
</code></pre>
<p>Now I would like to extract the second and third values. I can't figure out how to do it using json.loads, because there are no property names.</p>
<p>Can anyone help me out?</p>
| -1 | 2016-07-26T15:57:39Z | 38,594,853 | <p>If you want to get a part of <code>result</code>, you would do</p>
<pre><code>part = result[0][1:3]
</code></pre>
<p>or</p>
<pre><code>a=result[0][1]
b=result[0][2]
</code></pre>
<p>as <code>result</code> is a nested list in your case.</p>
| 0 | 2016-07-26T16:06:58Z | [
"python",
"json"
] |
object not iterable - how do I fix this? | 38,594,695 | <p>Right, I have changed a whole load of things and hopefully corrected some errors. The error message is now: <code>>> for row in teacherbooklist: TypeError: 'int' object is not iterable</code></p>
<pre class="lang-py prettyprint-override"><code>username = input("Enter Username: ")
password = input("Enter Password: ")
with open('teacherbook_login.txt', 'w') as teacherbookfile:
teacherbookfileWriter=csv.writer(teacherbookfile)
teacherbooklist = teacherbookfileWriter.writerow([username,password])
for row in teacherbooklist:
if username == teacherbooklist[1] and password == teacherbooklist[2]:
print(row)
</code></pre>
| -5 | 2016-07-26T15:58:07Z | 38,595,254 | <p>The documentation does not specify any return value of either the <code>writerow</code> or the <code>writerows</code> method of <a href="https://docs.python.org/2.7/library/csv.html#writer-objects" rel="nofollow">Writer objects</a> so you should not rely on that.</p>
<p><code>teacherbooklist</code> is an integer here: <code>writerow</code> passes the formatted row to the underlying file's <code>write()</code> and returns its result, which for a text file is the number of characters written. Iterating over an integer raises the <code>TypeError</code> you see.</p>
<p>If you want to read from the file you would need to create a csv reader.</p>
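<p>A minimal Python 3 sketch of writing and then reading back with a <code>csv.reader</code> (<code>io.StringIO</code> stands in for the opened file):</p>

```python
import csv
import io

buf = io.StringIO()                      # stands in for the opened file
writer = csv.writer(buf)
ret = writer.writerow(['alice', 'secret'])
print(type(ret))                         # <class 'int'> -- not the row itself

buf.seek(0)
rows = list(csv.reader(buf))
print(rows)                              # [['alice', 'secret']]
```

<p>To check credentials you would then compare against <code>row[0]</code> and <code>row[1]</code> of each parsed row, not against the return value of <code>writerow</code>.</p>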
| 0 | 2016-07-26T16:27:55Z | [
"python",
"csv"
] |
Python 2 Notebook in JuliaBox and Plotting Graphs | 38,594,706 | <p>I am using a Python 2 notebook in JuliaBox. I am attempting to plot some data, but I keep getting the error:</p>
<pre><code>TclError: no display name and no $DISPLAY environment variable
</code></pre>
<p>Julia itself has the capabilities to plot via python in a Julia notebook. I've tested this myself. The PyPlot command accesses the matplotlib.pyplot, right?</p>
<pre><code>using PyPlot
plot([1,2,3,4])
</code></pre>
<p>However, the Python 2 notebook is causing me difficulty. Here's what I have:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.plot([1,2,3])
</code></pre>
<p>Perhaps the notebook doesn't have the capability to plot? Thanks!</p>
| 0 | 2016-07-26T15:58:38Z | 38,596,040 | <p>Julia doesn't ask you to do so explicitly, but when using matplotlib in python, you need to instruct it to <em>show</em> the resulting plot.</p>
<p>i.e. in python add the line:</p>
<pre><code>plt.show()
</code></pre>
<p>I will also point out that the arguments you passed are lists, not numpy arrays. Your example will still work, but presumably (given you imported numpy) you meant to be working with arrays. </p>
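<p>One caveat: the <code>TclError: no display name and no $DISPLAY</code> from the question means matplotlib is trying to open a GUI window on a machine with no display, so even <code>plt.show()</code> would fail there. On a headless server you can switch to a non-interactive backend and save the figure to a file instead (in a Jupyter notebook, <code>%matplotlib inline</code> is the usual alternative). A minimal sketch, assuming matplotlib is installed:</p>

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')            # file-based backend: needs no display
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
out_path = os.path.join(tempfile.gettempdir(), 'plot.png')
plt.savefig(out_path)            # writes the figure instead of opening a window
```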
| 1 | 2016-07-26T17:09:07Z | [
"python",
"matplotlib",
"julia-lang"
] |
Django nested admin: how to set foreign key to grandparents'id? | 38,594,707 | <p>I have a django project where I use <a href="https://github.com/theatlantic/django-nested-admin" rel="nofollow">nested-admin</a>, and I'm trying to figure out how to set the foreign key of an element to its grandparents' id.</p>
<p>I have the following <code>model.py</code>:</p>
<pre><code>class Person(models.Model):
name = models.CharField(max_length=80)
def __str__(self):
return self.name
class Job(models.Model):
location = models.CharField(max_length=20, null=True, blank=True)
person = models.ForeignKey(Person)
class Project(models.Model):
name = models.CharField(max_length=20, null=True, blank=True)
job = models.ForeignKey(Job, null=True)
person = models.ForeignKey(Person)
</code></pre>
<p><code>Project</code>'s FK to <code>Job</code> can be NULL for the simple reason that a project is not necessarily related to a Job (i.e. personal project).</p>
<p><code>admin.py</code>:</p>
<pre><code>class ProjectInline(NestedStackedInline):
model = Project
extra = 1
class JobInline(NestedStackedInline):
model = Job
inlines = [ProjectInline]
extra = 1
@admin.register(Person)
class PersonAdmin(NestedModelAdmin):
model = Person
inlines = [JobInline]
</code></pre>
<p>Currently, the <code>Person</code> form looks like this:</p>
<p><img src="http://i.stack.imgur.com/w9Fop.png" alt="Current look. See Person field[2]"></p>
<p>The fact that we can choose the <code>Person</code> in <code>Project</code> doesn't make any sense. I want to attach it directly to the <code>Job</code>'s person.</p>
<p>I believe I have to customize a form or a view, and looked into <code>UpdateView</code> (like <a href="http://stackoverflow.com/questions/24813496/django-automatically-set-foreign-key-in-form-with-class-based-views">this link</a>), but I haven't manage to do what I want.</p>
| 0 | 2016-07-26T15:58:40Z | 38,673,789 | <h3>Answer to the question asked</h3>
<p>I managed to get something working using the <a href="https://github.com/s-block/django-nested-inline/blob/master/nested_inline/admin.py#L45" rel="nofollow">nested_admin</a> source code.</p>
<p>First, exclude the <code>person</code> field from the <code>project</code> form:</p>
<pre><code>class ProjectInline(NestedStackedInline):
# [...]
exclude = ["person"]
</code></pre>
<p>In <code>PersonAdmin</code>, add the following code:</p>
<pre><code>def save_formset(self, request, form, formset, change):
for form in formset.forms:
for job_form in form.nested_formsets:
job_form.instance.person = form.instance.person
for project_form in job_form:
project_form.instance.person = form.instance.person
super(PersonAdmin, self).save_formset(request, form, formset, change)
</code></pre>
<p>This way, the <code>person</code> field won't be displayed, preventing the user to change it, and it will automatically be set to the same value as the <code>Job</code>'s <code>Person</code> you're modifying.</p>
<h3>New problem appears!</h3>
<p>However, another problem will appear if you try to add <code>ProjectInline</code> to <code>Person</code>:</p>
<p><a href="http://i.stack.imgur.com/eTshr.png" rel="nofollow"><img src="http://i.stack.imgur.com/eTshr.png" alt="The problem you'll encounter"></a></p>
<p>As you can see, the Project "website" is displayed both inside the Job inline and directly under the Person.</p>
<h3>Best solution</h3>
<p>After a lot of work, you will probably manage to do exactly what you want.</p>
<p>But in my opinion, the <em>easiest</em> solution would be to create <strong>two</strong> different <code>Project</code> models, both inheriting from the same base. You'd have a <code>PersonnalProject</code>, with a FK to <code>Person</code>, and a <code>JobProject</code>, with a FK to <code>Job</code>.</p>
<p>It will take more space in the database but is much easier to handle afterwards.</p>
| 0 | 2016-07-30T12:09:34Z | [
"python",
"python-3.x",
"django-models",
"django-forms"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> is not readable. I am unable to find the right api after searching for long time.</p>
| 9 | 2016-07-26T15:59:16Z | 38,595,049 | <pre><code>import subprocess
p = subprocess.Popen("git rev-parse HEAD".split(), stdout=subprocess.PIPE)
out, err = p.communicate()
sha = out.strip()
</code></pre>
<p>There's probably a way to do this with PyGithub, but this should work for a quick hack.</p>
| 0 | 2016-07-26T16:17:03Z | [
"python",
"git",
"github",
"pygithub"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, I don't find the <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> readable. I am unable to find the right API even after searching for a long time.</p>
| 9 | 2016-07-26T15:59:16Z | 38,595,057 | <p>If you do not need pygithub specifically, the dulwich git-library offers <a href="https://www.dulwich.io/docs/tutorial/porcelain.html" rel="nofollow">high level git commands</a>. For the commands have a look at <a href="https://www.dulwich.io/apidocs/dulwich.porcelain.html" rel="nofollow">https://www.dulwich.io/apidocs/dulwich.porcelain.html</a></p>
| 0 | 2016-07-26T16:17:20Z | [
"python",
"git",
"github",
"pygithub"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, I don't find the <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> readable. I am unable to find the right API even after searching for a long time.</p>
| 9 | 2016-07-26T15:59:16Z | 39,616,681 | <p>I can give you some background information, but also one concrete solution.</p>
<p><a href="https://github.com/gitpython-developers/GitPython/blob/db44286366a09f1f65986db2a1c8b470fb417068/git/test/test_index.py" rel="nofollow">Here</a> you can find examples of adding new files to your repository, and <a href="https://www.youtube.com/watch?v=AWsLtWuaz_o&feature=youtu.be" rel="nofollow">here</a> is a video tutorial for this.</p>
<p>Below you can see a list of Python packages that work with GitHub, found on the GitHub developer page:</p>
<ul>
<li><a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a></li>
<li><a href="https://github.com/copitux/python-github3" rel="nofollow">Pygithub3</a> </li>
<li><a href="https://github.com/ducksboard/libsaas" rel="nofollow">libsaas</a> </li>
<li><a href="https://github.com/sigmavirus24/github3.py" rel="nofollow">github3.py</a> </li>
<li><a href="https://github.com/demianbrecht/sanction" rel="nofollow">sanction</a> </li>
<li><a href="https://github.com/jpaugh/agithub" rel="nofollow">agithub</a> </li>
<li><a href="https://github.com/michaelliao/githubpy" rel="nofollow">githubpy</a></li>
<li><a href="https://github.com/turnkeylinux/octohub" rel="nofollow">octohub</a> </li>
<li><a href="http://github-flask.readthedocs.io/en/latest/" rel="nofollow">Github-Flask</a> </li>
<li><a href="https://github.com/jkeylu/torngithub" rel="nofollow">torngithub</a></li>
</ul>
<p>You can also push your files with shell commands from IPython if you need to:</p>
<pre><code>In [1]: import subprocess
In [2]: print subprocess.check_output('git init', shell=True)
Initialized empty Git repository in /home/code/.git/
In [3]: print subprocess.check_output('git add .', shell=True)
In [4]: print subprocess.check_output('git commit -m "a commit"', shell=True)
</code></pre>
| 1 | 2016-09-21T12:33:37Z | [
"python",
"git",
"github",
"pygithub"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, I don't find the <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> readable. I am unable to find the right API even after searching for a long time.</p>
| 9 | 2016-07-26T15:59:16Z | 39,620,098 | <p>I tried to use the <a href="https://developer.github.com/v3/" rel="nofollow">GitHub API</a> to commit multiple files. This page for the <a href="https://developer.github.com/v3/git/" rel="nofollow">Git Data API</a> says that it should be "pretty simple". For the results of that investigation, see <a href="http://stackoverflow.com/a/39627647/3657941">this answer</a>.</p>
<p>I recommend using something like <a href="http://gitpython.readthedocs.io/en/stable/tutorial.html" rel="nofollow">GitPython</a>:</p>
<pre><code>from git import Repo
repo_dir = 'mathematics'
repo = Repo(repo_dir)
file_list = [
'numerical_analysis/regression_analysis/simple_regression_analysis.py',
'numerical_analysis/regression_analysis/simple_regression_analysis.png'
]
commit_message = 'Add simple regression analysis'
repo.index.add(file_list)
repo.index.commit(commit_message)
origin = repo.remote('origin')
origin.push()
</code></pre>
<p><strong>Note:</strong> This version of the script was run in the parent directory of the repository.</p>
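<p>One extra note (my addition, not the original answerer's): the entries in <code>file_list</code> are relative to the repository root. If your script runs from somewhere else, you can derive them with the standard library; the directory names below mirror the question.</p>

```python
import os

repo_dir = "mathematics"
absolute = [
    os.path.join(repo_dir, "numerical_analysis",
                 "regression_analysis", "simple_regression_analysis.py"),
    os.path.join(repo_dir, "numerical_analysis",
                 "regression_analysis", "simple_regression_analysis.png"),
]
# Paths relative to the repository root, as the git index expects them
file_list = [os.path.relpath(p, repo_dir) for p in absolute]
print(file_list)
```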
| 2 | 2016-09-21T14:59:19Z | [
"python",
"git",
"github",
"pygithub"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, I don't find the <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> readable. I am unable to find the right API even after searching for a long time.</p>
| 9 | 2016-07-26T15:59:16Z | 39,623,010 | <p>If PyGithub's documentation is not usable (and it doesn't look so), and you just want to push a commit (not doing anything fancy with issues, repo configuration, etc.), you would probably be better off directly interfacing with git, either calling the <code>git</code> executable or using a wrapper library such as <a href="https://github.com/gitpython-developers/GitPython#gitpython" rel="nofollow">GitPython</a>. </p>
<p>Using <code>git</code> directly with something such as <code>subprocess.Popen</code> that you mentioned would probably be easier on the learning curve, but also more difficult in the long term for error handling, etc., since you don't really have nice abstractions to pass around, and would have to do the parsing yourself.</p>
<p>Getting rid of PyGithub also frees you from being tied to GitHub and its API, allowing you to push to any repo, even another folder on your computer.</p>
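<p>As a rough sketch of that error-handling burden (my addition; names and messages are illustrative): a direct <code>git</code> call has to wrap its own failure modes, such as a missing binary, a bad working directory, or a non-zero exit status.</p>

```python
import subprocess

def current_sha(repo_dir="."):
    """Return the HEAD sha of repo_dir, or raise RuntimeError."""
    try:
        out = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            cwd=repo_dir, stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError) as exc:
        # OSError: git missing / bad cwd; CalledProcessError: not a repo
        raise RuntimeError("could not read HEAD: %s" % exc)
    return out.strip().decode()
```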
| 1 | 2016-09-21T17:30:54Z | [
"python",
"git",
"github",
"pygithub"
] |
How do I push new files to GitHub? | 38,594,717 | <p>I created a new repository on github.com and then cloned it to my local machine with</p>
<pre><code>git clone https://github.com/usrname/mathematics.git
</code></pre>
<p>I added 3 new files under the folder <code>mathematics</code></p>
<pre><code>$ tree
.
├── LICENSE
└── numerical_analysis
    └── regression_analysis
        ├── simple_regression_analysis.md
        ├── simple_regression_analysis.png
        └── simple_regression_analysis.py
</code></pre>
<p>Now, I'd like to upload 3 new files to my GitHub using Python, more specifically, <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a>. Here is what I have tried:</p>
<pre><code>#!/usr/bin/env python
# *-* coding: utf-8 *-*
from github import Github
def main():
# Step 1: Create a Github instance:
g = Github("usrname", "passwd")
repo = g.get_user().get_repo('mathematics')
# Step 2: Prepare files to upload to GitHub
files = ['mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.py', 'mathematics/numerical_analysis/regression_analysis/simple_regression_analysis.png']
# Step 3: Make a commit and push
commit_message = 'Add simple regression analysis'
tree = repo.get_git_tree(sha)
repo.create_git_commit(commit_message, tree, [])
repo.push()
if __name__ == '__main__':
main()
</code></pre>
<p>I don't know </p>
<ul>
<li>how to get the string <code>sha</code> for <code>repo.get_git_tree</code></li>
<li>how do I make a connection between step 2 and 3, i.e. pushing specific files</li>
</ul>
<p>Personally, I don't find the <a href="http://pygithub.readthedocs.io/en/stable/introduction.html" rel="nofollow">PyGithub documentation</a> readable. I am unable to find the right API even after searching for a long time.</p>
| 9 | 2016-07-26T15:59:16Z | 39,627,647 | <p><strong>Note:</strong> This version of the script was called from inside the GIT repository because I removed the repository name from the file paths.</p>
<p>I finally figured out how to use <a href="https://github.com/PyGithub/PyGithub" rel="nofollow">PyGithub</a> to commit multiple files:</p>
<pre><code>import base64
from github import Github
from github import InputGitTreeElement
token = '5bf1fd927dfb8679496a2e6cf00cbe50c1c87145'
g = Github(token)
repo = g.get_user().get_repo('mathematics')
file_list = [
'numerical_analysis/regression_analysis/simple_regression_analysis.png',
'numerical_analysis/regression_analysis/simple_regression_analysis.py'
]
commit_message = 'Add simple regression analysis'
master_ref = repo.get_git_ref('heads/master')
master_sha = master_ref.object.sha
base_tree = repo.get_git_tree(master_sha)
element_list = list()
for entry in file_list:
with open(entry, 'rb') as input_file:
data = input_file.read()
if entry.endswith('.png'):
data = base64.b64encode(data)
element = InputGitTreeElement(entry, '100644', 'blob', data)
element_list.append(element)
tree = repo.create_git_tree(element_list, base_tree)
parent = repo.get_git_commit(master_sha)
commit = repo.create_git_commit(commit_message, tree, [parent])
master_ref.edit(commit.sha)
""" An egregious hack to change the PNG contents after the commit """
for entry in file_list:
with open(entry, 'rb') as input_file:
data = input_file.read()
if entry.endswith('.png'):
old_file = repo.get_contents(entry)
commit = repo.update_file('/' + entry, 'Update PNG content', data, old_file.sha)
</code></pre>
<p>If I try to add the raw data from a PNG file, the call to <code>create_git_tree</code> eventually calls <code>json.dumps</code> in <a href="https://github.com/PyGithub/PyGithub/blob/master/github/Requester.py#L211" rel="nofollow"><code>Requester.py</code></a>, which causes the following exception to be raised:</p>
<blockquote>
<p><code>UnicodeDecodeError: 'utf8' codec can't decode byte 0x89 in position 0: invalid start byte</code></p>
</blockquote>
<p>I work around this problem by <code>base64</code> encoding the PNG data and committing that. Later, I use the <code>update_file</code> method to change the PNG data. This results in two separate commits to the repository which is probably not what you want.</p>
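<p>For context on why the <code>base64</code> detour avoids the <code>UnicodeDecodeError</code> (my note, not part of the original answer): base64 maps arbitrary bytes to plain ASCII text, which survives the JSON request body and decodes losslessly. The bytes below are a fake PNG header, purely illustrative.</p>

```python
import base64

png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8   # not a real image
encoded = base64.b64encode(png_bytes)
text = encoded.decode("ascii")   # would raise if any byte were non-ASCII
assert base64.b64decode(encoded) == png_bytes    # lossless round trip
print(text)
```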
| 1 | 2016-09-21T22:45:25Z | [
"python",
"git",
"github",
"pygithub"
] |
Access actual Features after a Feature Selection Pipeline in SciKit-Learn | 38,594,734 | <p>I use a feature selection in combination with a pipeline in SciKit-Learn. As a feature selection strategy I use <code>SelectKBest</code>.</p>
<p>The pipeline is created and executed like this:</p>
<pre><code>select = SelectKBest(k=5)
clf = SVC(decision_function_shape='ovo')
parameters = dict(feature_selection__k=[1,2,3,4,5,6,7,8],
svc__C=[0.01, 0.1, 1],
svc__decision_function_shape=['ovo'])
steps = [('feature_selection', select),
('svc', clf)]
pipeline = sklearn.pipeline.Pipeline(steps)
cv = sklearn.grid_search.GridSearchCV(pipeline, param_grid=parameters)
cv.fit( features_training, labels_training )
</code></pre>
<p>I know that I can get the best-parameters afterwards via <code>cv.best_params_</code>. However, this only tells me that a <code>k=4</code> is optimal. But I would like to know which features are these? How can this be done?</p>
| 0 | 2016-07-26T15:59:51Z | 38,596,968 | <p>For your example, you can get the scores of all your features using <code>cv.best_estimator_.named_steps['feature_selection'].scores_</code>. This will give you the scores for all of your features and using them you should be able to see which were the chosen features. Similarly, you can also get the pvalues by <code>cv.best_estimator_.named_steps['feature_selection'].pvalues_</code>.</p>
<p><strong>EDIT</strong></p>
<p>A better way to get this would be to use the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html#sklearn.feature_selection.SelectKBest.get_support" rel="nofollow"><code>get_support</code></a> method of the <code>SelectKBest</code> class. This will give a boolean array of shape <code>[# input features]</code>, in which an element is <em>True</em> iff its corresponding feature is selected for retention. This will be as follows:</p>
<p><code>cv.best_estimator_.named_steps['feature_selection'].get_support()</code></p>
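<p>Once you have that boolean mask, mapping it back to column names is plain Python. The names and mask below are made up for illustration; in practice the mask would come from <code>get_support()</code>.</p>

```python
feature_names = ["age", "height", "weight", "income", "tenure"]
support = [True, False, True, False, True]   # e.g. from get_support()
selected = [name for name, keep in zip(feature_names, support) if keep]
print(selected)   # ['age', 'weight', 'tenure']
```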
| 1 | 2016-07-26T18:05:33Z | [
"python",
"machine-learning",
"scikit-learn",
"feature-detection"
] |
python/matplotlib boxplot on a x axis | 38,594,753 | <p>My data set is a list with 6 numbers: <code>[23948.30, 23946.20, 23961.20, 23971.70, 23956.30, 23987.30]</code></p>
<p>I want them to be in a horizontal box plot, line on the x axis, with <code>23855</code> and <code>24472</code> as the limit of the x axis, so the box plot will be in the middle of the line. If this can not be done, at least showing the x axis under it, and very close to the box plot.</p>
<p>I also want the box plot to show the mean number.</p>
<p>Now I can only get the horizontal box plot, and I also want the x-axis to show whole numbers instead of "xx+2.394e".</p>
<p>Here is my code now:</p>
<pre><code>def box_plot(circ_list, wear_limit):
print circ_list
print wear_limit
fig1 = plt.figure()
plt.boxplot(circ_list, 0, 'rs', 0)
plt.show()
</code></pre>
| 1 | 2016-07-26T16:01:02Z | 38,595,812 | <p>I'm not sure I understood everything in your post, but here are my corrections to your code:</p>
<pre><code>l = [23948.30, 23946.20, 23961.20, 23971.70, 23956.30, 23987.30]
def box_plot(circ_list):
fig, ax = plt.subplots()
plt.boxplot(circ_list, 0, 'rs', 0, showmeans=True)
plt.ylim((0.75, 1.25))
ax.set_yticks([])
labels = ["{}".format(int(i)) for i in ax.get_xticks()]
ax.set_xticklabels(labels)
plt.show()
box_plot(l)
</code></pre>
<p>The result:</p>
<p><a href="http://i.stack.imgur.com/EJQr1.png" rel="nofollow">Your box-plot</a></p>
<p>Now for the breakdown of your requests and how the code correspond to them:</p>
<ul>
<li>Showing the mean: this is a simple addition of the argument <code>showmeans=True</code> in the plt.boxplot function.</li>
<li>Drawing the horizontal box plot closer to the x-axis. By default, the boxplot is drawn at y=1, so I just rescaled the y-axis between 0.75 and 1.25 using <code>plt.ylim()</code>. You can tweak those numbers if you want to draw the boxplot closer to the x-axis (by changing 0.75 to 0.9 for instance), or draw the top of the plot further from the boxplot (by changing the 1.25 to a 1.5 for instance). I also eliminated the yticks to make the plot cleaner using <code>plt.set_ticks([])</code>.</li>
<li>Showing the x-ticks labels in integer form. I simply convert each label of the xticks to an integer, and apply it back using the <code>ax.set_xticklabels()</code> function.</li>
</ul>
<p>Do let me know if this corresponds to what you were looking for, and happy matplotlib to you ;).</p>
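<p>As a footnote (my addition): the integer tick labels in the third point are ordinary string formatting, independent of matplotlib, which is what removes the "2.394e" offset notation. Sample tick values are made up.</p>

```python
ticks = [23940.0, 23950.0, 23960.0, 23970.0]   # e.g. from ax.get_xticks()
labels = ["{}".format(int(t)) for t in ticks]
print(labels)   # ['23940', '23950', '23960', '23970']
```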
| 0 | 2016-07-26T16:56:23Z | [
"python",
"matplotlib",
"boxplot"
] |
Coercing to Unicode: need string or buffer, function found on Django form | 38,594,814 | <p>I'm new on Django and I'm making a formwhen I press submit I'm getting this error that I haven't seen before: <code>TypeError at /catalog/ coercing to Unicode: need string or buffer, function found</code></p>
<p>My <code>forms.py</code> looks like:</p>
<pre><code>class AppsForm(forms.Form):
def __init__(self, *args, **kwargs):
policiesList = kwargs.pop('policiesList', None)
applicationList = kwargs.pop('applicationList', None)
super(AppsForm, self).__init__(*args, **kwargs)
if policiesList and applicationList:
self.fields['appsPolicyId'] = forms.ChoiceField(label='Application Policy', choices=policiesList)
self.fields['appsId'] = forms.ChoiceField(label='Application', choices=applicationList)
else:
self.fields['appsPolicyId'] = forms.ChoiceField(label='Application Policy',
choices=('No application policies found',
'No application policies found'))
self.fields['appsId'] = forms.ChoiceField(label='Application', choices=('No applications found',
'No applications found'))
</code></pre>
<p>My <code>views.py</code> looks like:</p>
<pre><code>def main(request):
if validateToken(request):
appList = getDetailsApplications(request)
polList = getDetailsApplicationPolicies(request)
message = None
if request.method == 'POST' and 'deployButton' in request.POST:
form = AppsForm(request.POST, policiesList=polList, applicationList=appList)
if form.is_valid():
deploy(request, form)
else:
form = AppsForm(policiesList=polList, applicationList=appList)
message = 'Form not valid, please try again.'
elif request.method == 'POST' and 'undeployButton' in request.POST:
form = AppsForm(request.POST, policiesList=polList, applicationList=appList)
if form.is_valid():
undeploy(request, form)
else:
form = AppsForm(policiesList=polList, applicationList=appList)
message = 'Form not valid, please try again.'
else:
form = AppsForm(policiesList=polList, applicationList=appList)
return render_to_response('catalog/catalog.html', {'message': message, 'form': form},
context_instance=RequestContext(request))
else:
return render_to_response('menu/access_error.html')
</code></pre>
<p>The error happens on <code>deploy(request, form)</code> and <code>undeploy(request, form)</code>, they are on another app and I import both from the <code>app/views.py</code>.</p>
<p>Here I show one of them because I think the problem on both it's the same, but I'm not unable to fix it...</p>
<pre><code>def deploy(request, form):
if validateToken(request):
policy = form.cleaned_data['appsPolicyId']
applicationID = form.cleaned_data['appsId']
headers = {'Content-Type': 'application/json'}
deployApp = apacheStratosAPI + applications + '/' + applicationID + '/' + deploy + '/' + policy
req = requests.post(deployApp, headers=headers, auth=HTTPBasicAuth(request.session['stratosUser'],
request.session['stratosPass']),
verify=False)
if req.status_code == 202:
serverInfo = json.loads(req.content)
message = '(Code: ' + str(req.status_code) + ') ' + serverInfo['message'] + '.'
return render_to_response('catalog/catalog.html', {'message': message, 'form': form},
context_instance=RequestContext(request))
elif req.status_code == 400 or req.status_code == 409 or req.status_code == 500:
serverInfo = json.loads(req.content)
message = '(Error: ' + str(req.status_code) + ') ' + serverInfo['message'] + '.'
return render_to_response('catalog/catalog.html', {'message': message, 'form': form},
context_instance=RequestContext(request))
else:
return render_to_response('menu/access_error.html')
</code></pre>
<p>The error it's on the line:</p>
<pre><code>deployApp = apacheStratosAPI + applications + '/' + applicationID + '/' + deploy + '/' + policy
</code></pre>
<p>When I debug, I see that the variables are correct and on format <code>u'value'</code>. I don't know why now I'm getting the Unicode error because on other forms that I have the values are on this format and I don't get any error.</p>
<p>Why I get this error now? Thanks and regards.</p>
| 0 | 2016-07-26T16:04:32Z | 38,594,942 | <p>One of the things you're including in that string concatenation is <code>deploy</code>. Since that isn't defined within the current scope, Python will look for the nearest declaration of that name, which is the current function itself, hence the error.</p>
<p>I don't know where that name is actually supposed to be defined, but you should probably do it within your function; and, in any case, you will need to rename either the variable or the function.</p>
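<p>A stripped-down reproduction of this failure mode (the function name is invented for illustration; Python 3 reports "can only concatenate str", while Python 2 gives the "coercing to Unicode" wording from the question):</p>

```python
def deploy():
    # Inside the body, the bare name `deploy` resolves to the function
    # object itself, so concatenating it to a string raises TypeError.
    return "api/applications/" + deploy

try:
    deploy()
except TypeError as exc:
    print("TypeError:", exc)
```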
| 1 | 2016-07-26T16:11:17Z | [
"python",
"django",
"forms",
"unicode"
] |
Process stops immediately after start | 38,594,892 | <p>Hope you can help me. I'm coding on a Raspberry Pi with MonoDevelop.</p>
<p>I want to execute a python script with C# and read from it.</p>
<pre><code>class Program
{
public static void Main(string[] args)
{
Process p = new Process();
p.OutputDataReceived += new DataReceivedEventHandler(OutputHandler);
p.StartInfo.FileName = "sudo";
p.StartInfo.Arguments = "python gpio.py";
p.StartInfo.UseShellExecute = false;
p.StartInfo.CreateNoWindow = true;
p.StartInfo.RedirectStandardOutput = true;
p.Start();
p.BeginOutputReadLine();
p.WaitForExit();
}
private static void OutputHandler(Object sender, DataReceivedEventArgs args)
{
Console.WriteLine(args.Data);
}
}
</code></pre>
<p>While debugging I can see that the Process has exited
<a href="http://i.stack.imgur.com/vddMn.png" rel="nofollow">Click for image</a></p>
<p>But in the TaskManager i can see, that the process is still running.
Also the script controls the gpio pins. And the script controlls the pins (Led on/off), even if the "Process has exited" . But I dont get anything from redirectOutput.</p>
<p>Why does the Process quit immediately after starting (the script has a <code>while True</code> loop; it shouldn't stop)? Is this the right way to execute a script?<br>
If I execute the Python script from the terminal, it works fine, so it shouldn't be an error in the script.
If I start a process with e.g. FileName "libreoffice", it works too.</p>
<p>The script is located in the project folder under "/bin/Debug/".
Execute permissions are set for everyone.</p>
<p>Thanks,<br>
Greetings</p>
| 0 | 2016-07-26T16:08:50Z | 39,074,371 | <p>As @Gusman said, the problem was the sudo. And as recommended i am using now the <a href="https://github.com/cypherkey/RaspberryPi.Net" rel="nofollow">DLL</a> to access the GPIO Pins. Even if the Raspberry Pi is not fully supported.</p>
| 0 | 2016-08-22T08:14:07Z | [
"c#",
"python",
"linux",
"process",
"monodevelop"
] |
Clicking buttons and filling forms with Selenium and PhantomJS | 38,594,900 | <p>I have a simple task that I want to automate. I want to open a URL, click a button which takes me to the next page, fills in a search term, clicks the "search" button and prints out the url and source code of the results page. I have written the following.</p>
<pre><code>from selenium import webdriver
import time
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
#open URL
driver.get("https://www.searchiqs.com/nybro/")
time.sleep(5)
#click Log In as Guest button
driver.find_element_by_id('btnGuestLogin').click()
time.sleep(5)
#insert search term into Party 1 form field and then search
driver.find_element_by_id('ContentPlaceholder1_txtName').send_keys("Moses")
driver.find_element_by_id('ContentPlaceholder1_cmdSearch').click()
time.sleep(10)
#print and source code
print driver.current_url
print driver.page_source
driver.quit()
</code></pre>
<p>I am not sure where I am going wrong but I have followed a number of tutorials on how to click buttons and fill forms. I get this error instead.</p>
<pre><code>Traceback (most recent call last):
File "phant.py", line 12, in <module>
driver.find_element_by_id('btnGuestLogin').click()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 269, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 752, in find_element
'value': value})['value']
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with id 'btnGuestLogin'","request":{"headers":{"Accept":"application/json","
Accept-Encoding":"identity","Connection":"close","Content-Length":"94","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:35670","User-Agent":"Python-urllib/2.7"
},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"id\", \"sessionId\": \"d38e5fa0-5349-11e6-b0c2-758ad3d2c65e\", \"value\": \"btnGuestLogin\"}","url":"/element","urlP
arsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":""
,"protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/d38e5fa0-5349-11e6-b0c2-758ad3d2c65e/element"}}
Screenshot: available via screen
</code></pre>
<p>The error seems to suggest that the element with that id does not exist, yet it does.</p>
<p><strong>--- EDIT: Changed code to use WebDriverWait ---</strong></p>
<p>I have changed some things around to implement WebDriverWait</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
#open URL
driver.get("https://www.searchiqs.com/nybro/")
#click Log In as Guest button
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "btnGuestLogin"))
)
element.click()
#wait for new page to load, fill in form and hit search
element2 = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "ContentPlaceholder1_cmdSearch"))
)
#insert search term into Party 1 form field and then search
driver.find_element_by_id('ContentPlaceholder1_txtName').send_keys("Moses")
element2.click()
driver.implicitly_wait(10)
#print and source code
print driver.current_url
print driver.page_source
driver.quit()
</code></pre>
<p>It still raises this error</p>
<pre><code>Traceback (most recent call last):
File "phant.py", line 14, in <module>
EC.presence_of_element_located((By.ID, "btnGuestLogin"))
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Screenshot: available via screen
</code></pre>
| 1 | 2016-07-26T16:09:21Z | 38,595,944 | <p>The <code>WebDriverWait</code> approach actually works for me as is:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
driver.get("https://www.searchiqs.com/nybro/")
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "btnGuestLogin"))
)
element.click()
</code></pre>
<p>No errors. PhantomJS version 2.1.1, Selenium 2.53.6, Python 2.7.</p>
<hr>
<p>The issue might be related to SSL and <code>PhantomJS</code>, either work through <code>http</code>:</p>
<pre><code>driver.get("http://www.searchiqs.com/nybro/")
</code></pre>
<p>Or, try <a href="http://stackoverflow.com/questions/23581291/python-selenium-with-phantomjs-empty-page-source">ignoring SSL errors</a>:</p>
<pre><code>driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
</code></pre>
| 1 | 2016-07-26T17:03:50Z | [
"python",
"selenium",
"automation",
"phantomjs"
] |
Django REST Framework Foreign Key - NOT NULL constraint failed | 38,594,960 | <p>I have the following setup in Django REST Framework:</p>
<p><strong>models.py:</strong></p>
<pre><code>class Device(models.Model):
name = models.CharField()
latitude = models.FloatField()
longitude = models.FloatField()
altitude = models.FloatField()
class Event(models.Model):
id_device = models.ForeignKey(Device, related_name='events')
name = models.CharField()
date = models.CharField()
time = models.CharField()
class Values(models.Model):
id_device = models.ForeignKey(Device)
id_event = models.ForeignKey(Event, related_name='values')
x = models.FloatField()
y = models.FloatField()
z = models.FloatField()
time = models.IntegerField()
</code></pre>
<p><strong>serializers.py:</strong></p>
<pre><code>class DeviceSerializer(serializers.ModelSerializer):
events = EventSerializer(many=True)
class Meta:
model = Device
fields = ('url', 'id', 'name', 'latitude', 'longitude', 'altitude', 'events')
def create(self, validated_data):
events_data = validated_data.pop('events')
device = Device.objects.create(**validated_data)
for event in events_data:
Events.objects.create(device=device, **event)
return device
class EventSerializer(serializers.ModelSerializer):
values = ValuesSerializer(many=True)
class Meta:
model = Event
fields = ('url', 'id', 'name', 'date', 'time', 'values')
def create(self, validated_data):
values_data = validated_data.pop('values')
event = Event.objects.create(**validated_data)
for value in values_data:
Values.objects.create(event=event, **value)
return event
class ValuesSerializer(serializers.ModelSerializer):
class Meta:
model = Values
fields = ('x', 'y', 'z', 'time')
</code></pre>
<p>When I try to post an <strong>event</strong> with some <strong>values</strong> assigned using a JSON file like this one:</p>
<pre><code>{
"name": "name_example",
"date": "date_example",
"time": "time_example",
"values": [
{
"x": 1,
"y": 2,
"z": 3,
"time": 1
},
{
"x": 10,
"y": 20,
"z": 30,
"time": 2
},
{
"x": 100,
"y": 200,
"z": 300,
"time": 4
}
]
}
</code></pre>
<p>I get the error <code>IntegrityError: NOT NULL constraint failed: drf_event.id_device_id</code></p>
<p>I'm new to this framework, so what can I do in order to post new <strong>events</strong> with <strong>values</strong> assigned to an existing <strong>device</strong>?</p>
| 0 | 2016-07-26T16:12:06Z | 38,596,738 | <p>You haven't a key pointing to device in your EventSerializer. You miss the id_device.</p>
<pre><code>class EventSerializer(serializers.ModelSerializer):
values = ValuesSerializer(many=True)
class Meta:
model = Event
fields = ('id_device', 'url', 'id', 'name', 'date', 'time', 'values')
</code></pre>
<p>And you need to add the key in your json:</p>
<pre><code>{
"id_device": 1,
"name": "name_example",
"date": "date_example",
"time": "time_example",
"values": [
{
"x": 1,
"y": 2,
"z": 3,
"time": 1
},
{
"x": 10,
"y": 20,
"z": 30,
"time": 2
},
{
"x": 100,
"y": 200,
"z": 300,
"time": 4
}
]
}
</code></pre>
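<p>Framework aside, the nested <code>create()</code> overrides above all follow the same pattern: pop the child payloads, create the parent, then create each child with the parent's key attached. A plain-Python sketch of that pattern (all names here are illustrative, with dicts standing in for models):</p>

```python
# Plain-Python sketch of the nested-create pattern: strip the nested
# "values" out of the payload, create the parent Event, then create each
# child with the parent's key attached. All names are illustrative.
def create_event(payload, device_id, db):
    values = payload.pop("values", [])
    event = dict(payload, id_device=device_id, id=len(db["events"]) + 1)
    db["events"].append(event)
    for value in values:
        db["values"].append(dict(value, id_event=event["id"], id_device=device_id))
    return event

db = {"events": [], "values": []}
create_event({"name": "name_example", "values": [{"x": 1}, {"x": 10}]},
             device_id=1, db=db)
print(len(db["events"]), len(db["values"]))  # 1 2
```
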
| 0 | 2016-07-26T17:53:09Z | [
"python",
"django",
"django-models",
"django-rest-framework"
] |
Tkinter : Syntax highlighting for Text widget | 38,594,978 | <p>Can anyone explain how to add syntax highlighting to a Tkinter <code>Text</code> widget ?</p>
<p>Every time the program finds a matching word, it should color that word the way I want. For example: color the word <code>tkinter</code> in pink and <code>in</code> in blue. But when I type <code>Tkinter</code>, it colors <code>Tk--ter</code> in yellow and <code>in</code> in blue.</p>
<p>How can I fix this ? Thanks !</p>
| 0 | 2016-07-26T16:13:21Z | 38,595,955 | <p>You can use a <code>tag</code> to do this. You can configure the tag to have certain backgrounds, fonts, text sizes, colors etc. And then add these tags to the text you want to configure.</p>
<p>All of this is in the <a href="http://effbot.org/tkinterbook/text.htm" rel="nofollow">documentation</a>.</p>
| 0 | 2016-07-26T17:04:24Z | [
"python",
"text",
"syntax",
"tkinter"
] |
Tkinter : Syntax highlighting for Text widget | 38,594,978 | <p>Can anyone explain how to add syntax highlighting to a Tkinter <code>Text</code> widget ?</p>
<p>Every time the program finds a matching word, it should color that word the way I want. For example: color the word <code>tkinter</code> in pink and <code>in</code> in blue. But when I type <code>Tkinter</code>, it colors <code>Tk--ter</code> in yellow and <code>in</code> in blue.</p>
<p>How can I fix this ? Thanks !</p>
| 0 | 2016-07-26T16:13:21Z | 38,597,111 | <p>Use <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/text-tag.html" rel="nofollow">tags</a>. I am going to implement the notions given there.</p>
<p><strong>Example:</strong></p>
<pre><code>import tkinter as tk
root = tk.Tk()
root.title("Begueradj")
text = tk.Text(root)
# Insert some text
text.insert(tk.INSERT, "Security ")
text.insert(tk.END, " Pentesting ")
text.insert(tk.END, "Hacking ")
text.insert(tk.END, "Coding")
text.pack()
# Create some tags
text.tag_add("one", "1.0", "1.8")
text.tag_add("two", "1.10", "1.20")
text.tag_add("three", "1.21", "1.28")
text.tag_add("four", "1.29", "1.36")
#Configure the tags
text.tag_config("one", background="yellow", foreground="blue")
text.tag_config("two", background="black", foreground="green")
text.tag_config("three", background="blue", foreground="yellow")
text.tag_config("four", background="red", foreground="black")
#Start the program
root.mainloop()
</code></pre>
<p><strong>Demo:</strong></p>
<p><a href="http://i.stack.imgur.com/ldrWC.png" rel="nofollow"><img src="http://i.stack.imgur.com/ldrWC.png" alt="enter image description here"></a></p>
| -1 | 2016-07-26T18:14:12Z | [
"python",
"text",
"syntax",
"tkinter"
] |
Adding the previous n rows as columns to a numpy array | 38,595,035 | <p>I want to add the previous n rows as columns to a numpy array. Here is an example:</p>
<pre><code>For n=2 this:
[[ 1, 2]
[ 3, 4]
[ 5, 6]
[ 7, 8]
[ 9, 10]
[11, 12]]
Should be turned into this:
[[ 1, 2, 0, 0, 0, 0]
[ 3, 4, 1, 2, 0, 0]
[ 5, 6, 3, 4, 1, 2]
[ 7, 8, 5, 6, 3, 4]
[ 9, 10, 7, 8, 5, 6]
[11, 12, 9, 10, 7, 8]]
</code></pre>
<p>Any ideas how I could do that without going over the entire array in a loop.</p>
| 3 | 2016-07-26T16:16:28Z | 38,595,430 | <p>Here is a way to pad 0 in the beginning of the array and then column stack them:</p>
<pre><code>import numpy as np

arr = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]])  # the question's array
n = 2

def mypad(myArr, n):
    if n == 0:
        return myArr
    else:
        return np.pad(myArr, ((n, 0), (0, 0)), mode="constant")[:-n]

np.column_stack(mypad(arr, i) for i in range(n + 1))
# array([[ 1, 2, 0, 0, 0, 0],
# [ 3, 4, 1, 2, 0, 0],
# [ 5, 6, 3, 4, 1, 2],
# [ 7, 8, 5, 6, 3, 4],
# [ 9, 10, 7, 8, 5, 6],
# [11, 12, 9, 10, 7, 8]])
</code></pre>
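<p>For clarity, the pad-and-stack logic itself can also be written in plain Python (an illustrative, much slower equivalent with no NumPy involved):</p>

```python
# Pure-Python equivalent of the pad/stack idea above: each output row is
# the current row followed by the n previous rows, zero-padded at the top.
def prev_rows(rows, n):
    width = len(rows[0])
    zero = [0] * width
    out = []
    for i, row in enumerate(rows):
        stacked = list(row)
        for k in range(1, n + 1):
            stacked += rows[i - k] if i - k >= 0 else zero
        out.append(stacked)
    return out

print(prev_rows([[1, 2], [3, 4], [5, 6]], 2))
# [[1, 2, 0, 0, 0, 0], [3, 4, 1, 2, 0, 0], [5, 6, 3, 4, 1, 2]]
```
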
| 1 | 2016-07-26T16:37:20Z | [
"python",
"arrays",
"numpy"
] |
Adding the previous n rows as columns to a numpy array | 38,595,035 | <p>I want to add the previous n rows as columns to a numpy array. Here is an example:</p>
<pre><code>For n=2 this:
[[ 1, 2]
[ 3, 4]
[ 5, 6]
[ 7, 8]
[ 9, 10]
[11, 12]]
Should be turned into this:
[[ 1, 2, 0, 0, 0, 0]
[ 3, 4, 1, 2, 0, 0]
[ 5, 6, 3, 4, 1, 2]
[ 7, 8, 5, 6, 3, 4]
[ 9, 10, 7, 8, 5, 6]
[11, 12, 9, 10, 7, 8]]
</code></pre>
<p>Any ideas how I could do that without going over the entire array in a loop.</p>
| 3 | 2016-07-26T16:16:28Z | 38,596,465 | <p>Here's a vectorized approach -</p>
<pre><code>def vectorized_app(a,n):
M,N = a.shape
idx = np.arange(a.shape[0])[:,None] - np.arange(n+1)
out = a[idx.ravel(),:].reshape(-1,N*(n+1))
out[N*(np.arange(1,M+1))[:,None] <= np.arange(N*(n+1))] = 0
return out
</code></pre>
<p>Sample run -</p>
<pre><code>In [255]: a
Out[255]:
array([[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9],
[10, 11, 12],
[13, 14, 15],
[16, 17, 18]])
In [256]: vectorized_app(a,3)
Out[256]:
array([[ 1, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 4, 5, 6, 1, 2, 3, 0, 0, 0, 0, 0, 0],
[ 7, 8, 9, 4, 5, 6, 1, 2, 3, 0, 0, 0],
[10, 11, 12, 7, 8, 9, 4, 5, 6, 1, 2, 3],
[13, 14, 15, 10, 11, 12, 7, 8, 9, 4, 5, 6],
[16, 17, 18, 13, 14, 15, 10, 11, 12, 7, 8, 9]])
</code></pre>
<p>Runtime test -</p>
<p>I am timing <a href="http://stackoverflow.com/a/38595430/3293881"><code>@Psidom's loop-comprehension based method</code></a> and the vectorized method listed in this post on a <code>100x</code> scaled up version (in terms of size) of the sample posted in the question :</p>
<pre><code>In [246]: a = np.random.randint(0,9,(600,200))
In [247]: n = 200
In [248]: %timeit np.column_stack(mypad(a, i) for i in range(n + 1))
1 loops, best of 3: 748 ms per loop
In [249]: %timeit vectorized_app(a,n)
1 loops, best of 3: 224 ms per loop
</code></pre>
| 3 | 2016-07-26T17:35:54Z | [
"python",
"arrays",
"numpy"
] |
how to scrape website in which page url is not changed but the next button add data below the same url page | 38,595,084 | <p>I have a URL:</p>
<pre><code>http://www.goudengids.be/qn/business/advanced/where/Provincie%20Antwerpen/what/restaurant
</code></pre>
<p>On that page there is a "next results" button which loads another 20 data points while still showing the first dataset, without updating the URL. I wrote a script to scrape this page in Python, but it only scrapes the first 22 data points even though the "next results" button has been clicked and about 40 items are showing.</p>
<p>How can I scrape these types of websites that dynamically load content?</p>
<p>My script is </p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
url = "http://www.goudengids.be/qn/business/advanced/where/Provincie%20Antwerpen/what/restaurant/"
r = requests.get(url)
r.content
soup = BeautifulSoup(r.content)
print (soup.prettify())
g_data2 = soup.find_all("a", {"class": "heading"})
for item in g_data2:
try:
name = item.text
print name
except IndexError:
name = ''
print "No Name found!"
</code></pre>
| 2 | 2016-07-26T16:18:42Z | 38,595,715 | <p>Instead of focusing on scraping HTML I think you should look at the JSON that is retrieved via AJAX. I think the JSON is less likely to be changed in the future as opposed to the page's markup. And on top of that, it's way easier to traverse a JSON structure than it is to scrape a DOM.</p>
<p>For instance, when you load the page you provided it hits a url to get JSON at <a href="http://www.goudengids.be/q/ajax/business/results.json" rel="nofollow">http://www.goudengids.be/q/ajax/business/results.json</a>.</p>
<p>Then it provides some URL parameters to query the businesses. I think you should look into using this to get your data as opposed to scraping the page, simulating button clicks, and so on.</p>
<p><strong>Edit:</strong></p>
<p>And it looks like it's using the headers set from visiting the site initially to ensure that you have a valid session. So you may have to hit the site initially to get the cookie headers and set that for subsequent requests to get the JSON from the endpoint above. I still think this will be easier and more predictable than trying to scrape HTML.</p>
| 1 | 2016-07-26T16:51:33Z | [
"python",
"csv",
"beautifulsoup"
] |
how to scrape website in which page url is not changed but the next button add data below the same url page | 38,595,084 | <p>I have a URL:</p>
<pre><code>http://www.goudengids.be/qn/business/advanced/where/Provincie%20Antwerpen/what/restaurant
</code></pre>
<p>On that page there is a "next results" button which loads another 20 data points while still showing the first dataset, without updating the URL. I wrote a script to scrape this page in Python, but it only scrapes the first 22 data points even though the "next results" button has been clicked and about 40 items are showing.</p>
<p>How can I scrape these types of websites that dynamically load content?</p>
<p>My script is </p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
url = "http://www.goudengids.be/qn/business/advanced/where/Provincie%20Antwerpen/what/restaurant/"
r = requests.get(url)
r.content
soup = BeautifulSoup(r.content)
print (soup.prettify())
g_data2 = soup.find_all("a", {"class": "heading"})
for item in g_data2:
try:
name = item.text
print name
except IndexError:
name = ''
print "No Name found!"
</code></pre>
| 2 | 2016-07-26T16:18:42Z | 38,595,764 | <p>If you were to solve it with <code>requests</code>, you need to mimic what browser does when you click the "Load More" button - it sends an <em>XHR request</em> to the <code>http://www.goudengids.be/q/ajax/business/results.json</code> endpoint, simulate it in your code <a class='doc-link' href="http://stackoverflow.com/documentation/python/1792/web-scraping-with-python/8152/maintaining-web-scraping-session-with-requests#t=201607261653521447847">maintaining the web-scraping session</a>. The XHR responses are in JSON format - no need for <code>BeautifulSoup</code> in this case at all:</p>
<pre><code>import requests
main_url = "http://www.goudengids.be/qn/business/advanced/where/Provincie%20Antwerpen/what/restaurant/"
xhr_url = "http://www.goudengids.be/q/ajax/business/results.json"
with requests.Session() as session:
session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'}
# visit main URL
session.get(main_url)
# load more listings - follow the pagination
page = 1
listings = []
while True:
params = {
"input": "restaurant Provincie Antwerpen",
"what": "restaurant",
"where": "Provincie Antwerpen",
"type": "DOUBLE",
"resultlisttype": "A_AND_B",
"page": str(page),
"offset": "2",
"excludelistingids": "nl_BE_YP_FREE_11336647_0000_1746702_6165_20130000, nl_BE_YP_PAID_11336647_0000_1746702_7575_20139729427, nl_BE_YP_PAID_720348_0000_187688_7575_20139392980",
"context": "SRP * A_LIST"
}
        response = session.get(xhr_url, params=params, headers={
"X-Requested-With": "XMLHttpRequest",
"Referer": main_url
})
data = response.json()
# collect listing names in a list (for example purposes)
listings.extend([item["bn"] for item in data["overallResult"]["searchResults"]])
page += 1
# TODO: figure out exit condition for the while True loop
print(listings)
</code></pre>
<p>I've left an important TODO for you - figure out an exit condition - when to stop collecting listings.</p>
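<p>One common exit condition is simply "stop when a page comes back empty". Here is a framework-free sketch of that idea, where <code>fetch_page</code> stands in for the <code>session.get(...).json()</code> call (the real response keys may differ):</p>

```python
# Sketch of a pagination loop with an exit condition: keep requesting pages
# until one returns no results. fetch_page is a stand-in for the real
# XHR call; the key names in the actual JSON may differ.
def collect_listings(fetch_page):
    listings, page = [], 1
    while True:
        results = fetch_page(page)
        if not results:      # empty page: we paged past the last listing
            break
        listings.extend(results)
        page += 1
    return listings

pages = {1: ["Resto A", "Resto B"], 2: ["Resto C"]}
print(collect_listings(lambda p: pages.get(p, [])))
# ['Resto A', 'Resto B', 'Resto C']
```
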
| 2 | 2016-07-26T16:53:56Z | [
"python",
"csv",
"beautifulsoup"
] |
Find the name of an instance of a class from within the class | 38,595,149 | <p>If I had this Python script:</p>
<pre><code>class my_class(object):
def __init__(self):
pass
def get_name(self):
return [name of self]
this_is_the_name_im_looking_for = my_class()
print ('the name is: ' + this_is_the_name_im_looking_for.get_name())
</code></pre>
<p>What would I have to replace <em>[name of self]</em> with in order to have the function <em>get_name</em> to return 'this_is_the_name_im_looking_for'?</p>
<p>I need this because I'm creating a Python module and want to feed back which instance of this class they're using</p>
| 0 | 2016-07-26T16:21:54Z | 38,595,586 | <p>The built-in python method <code>id</code> <a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow">returns</a></p>
<blockquote>
<p>an integer which is guaranteed to be unique and constant for this object during its lifetime.</p>
</blockquote>
<p>This doesn't return the 'name' of the object per-se, but it will allow you to distinguish between different instances of the class. This also solves the problem of having multiple pointers to the same object since the id is associated with the underlying object (the address of the object in memory):</p>
<pre><code>val1 = 1
val2 = val1
id(val1) == id(val2)
# returns: True
id(val1) == id(1)
# returns: True
id(val1) == id(2)
# returns: False
</code></pre>
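<p>The distinction is clearer with mutable objects, since two equal lists are still two distinct objects:</p>

```python
# Two names bound to the same list share one identity; an equal but
# separately-created list is a different object with a different id.
a = [1, 2]
b = a          # same object
c = [1, 2]     # equal value, distinct object
print(id(a) == id(b), id(a) == id(c))  # True False
print(a == c)                          # True (equal, but not identical)
```
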
| 0 | 2016-07-26T16:45:03Z | [
"python",
"class",
"module"
] |
Tornado web sockets and long-running Celery tasks | 38,595,184 | <p>I have a Tornado server that's used to submit long-running (~minutes) calculations by submitting tasks to some Celery workers with a RabbitMQ back-end. The tasks that are submitted are yielded in a Tornado coroutine inside a <code>WebSocketHandler</code>:</p>
<pre><code>class MainWSHandler(WebSocketHandler):
def open(self):
logging.info("Connection opened.")
def on_close(self):
logging.info("Connection closed.")
def on_message(self, message):
result = self.submit_task(message)
self.write_message("Calculation has been submitted")
@gen.coroutine
def submit_task(self, params):
result = yield gen.Task(long_calculation.apply_async, args=[params])
self.write_message("Completed calculation")
return result
</code></pre>
<p>This works well if the user never leaves the page with the currently opened web socket. If they do, and the web socket closes, the returned message <code>self.write_message("Completed calculation")</code> will fail with a <code>WebSocketClosedError</code>. This is fine in cases where the user does not intend to come back to the page for awhile (that is, until after the calculation has been completed).</p>
<p>However, in the case where the user submits a calculation, leaves the page, and then returns before the calculation is finished, the same error is raised because the web socket has been closed and a new one opened. This prevents the calculation completion messages to propagate to the front-end.</p>
<p>My question is: is it possible to reconnect to the same web socket? Or, alternatively, how can I ensure that the message returned once the calculation has completed makes it to the user's current page?</p>
| 0 | 2016-07-26T16:23:29Z | 38,603,646 | <p>It seems like I may have jumped the gun assuming that there's a structural solution to this problem inherent in this long-running-task-with-a-web-socket system.</p>
<p>It's enough to store the currently opened web socket in a class attribute</p>
<pre><code>class MainWSHandler(WebSocketHandler):
    clients = {}  # class attribute, shared by all handler instances
def open(self):
logging.info("Connection opened.")
self.clients[self.current_user] = self
</code></pre>
<p>And deal with the response from the task by using the stored web socket</p>
<pre><code>@gen.coroutine
def submit_task(self, params):
result = yield gen.Task(long_calculation.apply_async, args=[params])
# Retrieve current web socket
ws = self.clients[self.current_user]
ws.write_message("Completed calculation")
return result
</code></pre>
<p>There may be more efficient ways (or some idiomatic method), but I am satisfied with this solution for now.</p>
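<p>The key point is that a reconnect simply overwrites the stale entry, so the completion message always goes to the user's latest connection. Stripped of Tornado, the registry pattern looks like this (plain lists stand in for web sockets):</p>

```python
# Minimal registry sketch: one shared mapping of user -> latest connection.
# Re-opening replaces the stale entry, so notify() always reaches the
# newest "socket" (plain lists stand in for web sockets here).
class Registry:
    clients = {}  # class attribute shared by all instances

    def open(self, user, conn):
        Registry.clients[user] = conn

    def notify(self, user, message):
        Registry.clients[user].append(message)

r = Registry()
old_socket, new_socket = [], []
r.open("alice", old_socket)
r.open("alice", new_socket)   # user came back: stale socket replaced
r.notify("alice", "Completed calculation")
print(old_socket, new_socket)  # [] ['Completed calculation']
```
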
| 0 | 2016-07-27T04:19:55Z | [
"python",
"sockets",
"websocket",
"celery",
"tornado"
] |
Join items of a list with '+' sign in a string | 38,595,362 | <p>I would like my output to be :</p>
<pre><code>Enter a number : n
List from zero to your number is : [0,1,2,3, ... , n]
0 + 1 + 2 + 3 + 4 + 5 ... + n = sum(list)
</code></pre>
<p>Yet my actual output is :</p>
<pre><code>Enter a number : 5
List from zero to your number is : [0, 1, 2, 3, 4, 5]
[+0+,+ +1+,+ +2+,+ +3+,+ +4+,+ +5+] = 15
</code></pre>
<p>I'm using <code>join</code> as it's the only type I know. </p>
<p>Why are the plus signs printed around the items and why are they surrounding blank spaces?</p>
<p>How should I print the <code>list</code>'s values into a string for the user to read ?</p>
<p>Thank you. Here's my code :</p>
<pre><code>##Begin n_nx1 application
n_put = int(input("Choose a number : "))
n_nx1lst = list()
def n_nx1fct():
for i in range(0,n_put+1):
n_nx1lst.append(i)
return n_nx1lst
print ("List is : ", n_nx1fct())
print ('+'.join(str(n_nx1lst)) + " = ", sum(n_nx1lst))
</code></pre>
| 4 | 2016-07-26T16:33:51Z | 38,595,413 | <p>You don't need to call <code>str</code> on your list. That returns the str representation of your list and the output of that is joined with <code>'+'</code>. </p>
<p>You can instead use <a href="https://docs.python.org/2/library/functions.html#map" rel="nofollow"><code>map</code></a> to convert each item in your list to <code>str</code>, then <code>join</code>:</p>
<pre><code>print('+'.join(map(str, n_nx1lst)) + " = ", sum(n_nx1lst))
</code></pre>
<p>You can also use the new style formatting to have a more readable output:</p>
<pre><code>result = '+'.join(map(str, n_nx1lst))
print("{} = {}".format(result, sum(n_nx1lst)))
</code></pre>
| 4 | 2016-07-26T16:36:28Z | [
"python",
"list",
"python-3.x"
] |
Join items of a list with '+' sign in a string | 38,595,362 | <p>I would like my output to be :</p>
<pre><code>Enter a number : n
List from zero to your number is : [0,1,2,3, ... , n]
0 + 1 + 2 + 3 + 4 + 5 ... + n = sum(list)
</code></pre>
<p>Yet my actual output is :</p>
<pre><code>Enter a number : 5
List from zero to your number is : [0, 1, 2, 3, 4, 5]
[+0+,+ +1+,+ +2+,+ +3+,+ +4+,+ +5+] = 15
</code></pre>
<p>I'm using <code>join</code> as it's the only type I know. </p>
<p>Why are the plus signs printed around the items and why are they surrounding blank spaces?</p>
<p>How should I print the <code>list</code>'s values into a string for the user to read ?</p>
<p>Thank you. Here's my code :</p>
<pre><code>##Begin n_nx1 application
n_put = int(input("Choose a number : "))
n_nx1lst = list()
def n_nx1fct():
for i in range(0,n_put+1):
n_nx1lst.append(i)
return n_nx1lst
print ("List is : ", n_nx1fct())
print ('+'.join(str(n_nx1lst)) + " = ", sum(n_nx1lst))
</code></pre>
| 4 | 2016-07-26T16:33:51Z | 38,595,425 | <p>Change each individual <code>int</code> element in the <code>list</code> to a <code>str</code> inside the <code>.join</code> call instead by using a <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/739/generator-expressions#t=201607261724021165396"><code>generator expression</code></a>:</p>
<pre><code>print("+".join(str(i) for i in n_nx1lst) + " = ", sum(n_nx1lst))
</code></pre>
<p>In the first case, you're calling <code>str</code> on the whole <code>list</code> and not on individual elements in that <code>list</code>. As a result, it <em>joins each character in the representation of the list</em>, which looks like this:</p>
<pre><code>'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]'
</code></pre>
<p>with the <code>+</code> sign yielding the result you're seeing.</p>
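<p>You can see both behaviours side by side:</p>

```python
# Joining the string form of the list inserts '+' between the *characters*
# of its repr; mapping str over the elements joins the *numbers*.
lst = [0, 1, 2]
print('+'.join(str(lst)))       # [+0+,+ +1+,+ +2+]
print('+'.join(map(str, lst)))  # 0+1+2
```
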
| 6 | 2016-07-26T16:37:05Z | [
"python",
"list",
"python-3.x"
] |
Join items of a list with '+' sign in a string | 38,595,362 | <p>I would like my output to be :</p>
<pre><code>Enter a number : n
List from zero to your number is : [0,1,2,3, ... , n]
0 + 1 + 2 + 3 + 4 + 5 ... + n = sum(list)
</code></pre>
<p>Yet my actual output is :</p>
<pre><code>Enter a number : 5
List from zero to your number is : [0, 1, 2, 3, 4, 5]
[+0+,+ +1+,+ +2+,+ +3+,+ +4+,+ +5+] = 15
</code></pre>
<p>I'm using <code>join</code> as it's the only type I know. </p>
<p>Why are the plus signs printed around the items and why are they surrounding blank spaces?</p>
<p>How should I print the <code>list</code>'s values into a string for the user to read ?</p>
<p>Thank you. Here's my code :</p>
<pre><code>##Begin n_nx1 application
n_put = int(input("Choose a number : "))
n_nx1lst = list()
def n_nx1fct():
for i in range(0,n_put+1):
n_nx1lst.append(i)
return n_nx1lst
print ("List is : ", n_nx1fct())
print ('+'.join(str(n_nx1lst)) + " = ", sum(n_nx1lst))
</code></pre>
| 4 | 2016-07-26T16:33:51Z | 38,595,483 | <p>What you need to do is concatenate a <strong>string</strong> element with ' + ' for <em>each</em> element in your list. All you need from there is to have some string formatting. </p>
<pre><code>def sum_of_input():
    n = int(input("Enter a number : "))  # Get the user's number as an int (Python 3)
    l = list(range(n + 1))  # Create our list of ints in [0, n]
print("List from zero to your number: {}".format(l))
print(' + '.join(str(i) for i in l) + ' = {}'.format(sum(l)))
</code></pre>
<p><strong>Sample output:</strong></p>
<pre><code>>>> sum_of_input()
Enter a number : 10
List from zero to your number: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55
</code></pre>
<p><strong>How does it work?</strong>
<br>We use what's called a <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow">list comprehension (5.1.3)</a> (<em>generator in this specific usage</em>) to iterate over our list of <code>int</code> elements creating a <code>list</code> of <code>string</code> elements. <em>Now</em> we can use the <code>string</code> method <code>join()</code> to create our desired format.</p>
<pre><code>>>> [str(i) for i in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]
['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
>>> ' + '.join(['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'])
'1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10'
</code></pre>
| 5 | 2016-07-26T16:40:04Z | [
"python",
"list",
"python-3.x"
] |
In place replacement of text in a file in Python | 38,595,400 | <p>I am using the following code to upload a file on server using FTP after editing it:</p>
<pre><code>import fileinput
file = open('example.php','rb+')
for line in fileinput.input('example.php'):
if 'Original' in line :
file.write( line.replace('Original', 'Replacement'))
file.close()
</code></pre>
<p>There is one thing, instead of replacing the text in its original place, the code adds the replaced text at the end and the text in original place is unchanged.</p>
<p>Also, instead of just the replaced text, it prints out the whole line. Could anyone please tell me how to resolve these two errors?</p>
| 0 | 2016-07-26T16:36:05Z | 38,595,650 | <p>Replacing stuff in a file <strong>only</strong> works well if original and replacement have the same size (in bytes) then you can do </p>
<pre><code>with open('example.php','rb+') as f:
pos=f.tell()
line=f.readline()
if b'Original' in line:
f.seek(pos)
f.write(line.replace(b'Original',b'Replacement'))
</code></pre>
<p>(In this case <code>b'Original'</code> and <code>b'Replacement'</code> do not have the same size so your file will look funny after this)</p>
<p>Edit:</p>
<p>If original and replacement are not the same size, there are different possibilities like adding bytes to fill the hole or moving everything after the line.</p>
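<p>For reference, here is a self-contained sketch of the same-size overwrite applied line by line (the replacement is deliberately the same length as the original, and the file is created in a temp directory so the example stands alone):</p>

```python
# Same-length in-place replacement with seek(), applied line by line.
# The replacement text must be exactly as long as the original
# (b"Original" and b"Modified" are both 8 bytes).
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "wb") as f:
    f.write(b"keep Original keep\nOriginal again\n")

with open(path, "rb+") as f:
    while True:
        pos = f.tell()
        line = f.readline()
        if not line:
            break
        if b"Original" in line:
            f.seek(pos)
            f.write(line.replace(b"Original", b"Modified"))  # same length!
            f.seek(pos + len(line))  # resume reading after this line

with open(path, "rb") as f:
    print(f.read())  # b'keep Modified keep\nModified again\n'
```
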
| -1 | 2016-07-26T16:48:07Z | [
"python",
"file",
"python-3.x",
"python-3.5"
] |
In place replacement of text in a file in Python | 38,595,400 | <p>I am using the following code to upload a file on server using FTP after editing it:</p>
<pre><code>import fileinput
file = open('example.php','rb+')
for line in fileinput.input('example.php'):
if 'Original' in line :
file.write( line.replace('Original', 'Replacement'))
file.close()
</code></pre>
<p>There is one thing, instead of replacing the text in its original place, the code adds the replaced text at the end and the text in original place is unchanged.</p>
<p>Also, instead of just the replaced text, it prints out the whole line. Could anyone please tell me how to resolve these two errors?</p>
| 0 | 2016-07-26T16:36:05Z | 38,595,733 | <p>1) <strong>The code adds the replaced text at the end and the text in original place is unchanged.</strong></p>
<p>You can't replace in the body of the file because you're opening it with the <code>+</code> signal. This way it'll append to the end of the file.</p>
<pre><code>file = open('example.php','rb+')
</code></pre>
<p>But this only works if you want to <strong>append</strong> to the end of the document.</p>
<p>To <strong>bypass</strong> this you may use <a href="http://stackoverflow.com/questions/11696472/seek-function"><code>seek()</code></a> to navigate to the specific line and replace it. Or create 2 files: an <code>input_file</code> and an <code>output_file</code>.</p>
<hr>
<p>2) <strong>Also, instead of just the replaced text, it prints out the whole line.</strong></p>
<p>It's because you're using:</p>
<pre><code>file.write( line.replace('Original', 'Replacement'))
</code></pre>
<hr>
<p><strong>Free Code:</strong></p>
<p>I've segregated into 2 files, an inputfile and an outputfile.</p>
<p>First it'll open the <code>ifile</code> and save all lines in a list called <code>lines</code>.</p>
<p>Second, it'll read all these lines, and if <code>'Original'</code> is present, it'll <code>replace</code> it.</p>
<p>After replacement, it'll save into <code>ofile</code>.</p>
<pre><code>ifile = 'example.php'
ofile = 'example_edited.php'
with open(ifile, 'r') as f:  # text mode, so 'Original' in line works on Python 3
    lines = f.readlines()
with open(ofile, 'w') as g:
    for line in lines:
        g.write(line.replace('Original', 'Replacement'))  # unchanged lines pass through as-is
</code></pre>
<p>Then, if you want to, you may use <a href="http://docs.python.org/library/os.html#os.remove" rel="nofollow"><code>os.remove()</code></a> to delete the non-edited file.</p>
<hr>
<p><strong>More Info:</strong> <a href="http://www.tutorialspoint.com/python/python_files_io.htm" rel="nofollow">Tutorials Point: Python Files I/O</a></p>
| 0 | 2016-07-26T16:52:39Z | [
"python",
"file",
"python-3.x",
"python-3.5"
] |
In place replacement of text in a file in Python | 38,595,400 | <p>I am using the following code to upload a file on server using FTP after editing it:</p>
<pre><code>import fileinput
file = open('example.php','rb+')
for line in fileinput.input('example.php'):
if 'Original' in line :
file.write( line.replace('Original', 'Replacement'))
file.close()
</code></pre>
<p>There is one thing, instead of replacing the text in its original place, the code adds the replaced text at the end and the text in original place is unchanged.</p>
<p>Also, instead of just the replaced text, it prints out the whole line. Could anyone please tell me how to resolve these two errors?</p>
| 0 | 2016-07-26T16:36:05Z | 38,595,858 | <p>The second error is how the <a href="https://docs.python.org/3/library/stdtypes.html#str.replace" rel="nofollow"><code>replace()</code></a> method works. </p>
<p>It returns the <em>entire</em> input string, with only the specified substring replaced. See example <a href="http://www.tutorialspoint.com/python/string_replace.htm" rel="nofollow">here</a>.</p>
<p>To write to a specific place in the file, you should <a href="http://stackoverflow.com/questions/11696472/seek-function"><code>seek()</code></a> to the right position first.</p>
<p>I think this issue has been asked before in several places, I would do a quick search of StackOverflow. </p>
<p>Maybe <a href="http://stackoverflow.com/questions/16556944/how-do-i-write-to-the-middle-of-a-text-file-while-reading-its-contents">this</a> would help?</p>
| 0 | 2016-07-26T16:59:25Z | [
"python",
"file",
"python-3.x",
"python-3.5"
] |
How to prevent printing predict_proba() output in Keras? | 38,595,547 | <p>I am using <code>predict_proba()</code> in Keras for thousand of times, and after each use it prints the following:</p>
<pre><code>1/1 [==============================] - 0s
</code></pre>
<p>I wonder how I can prevent it from printing this. Thank you :)</p>
| 0 | 2016-07-26T16:43:35Z | 38,595,692 | <p>Pass <code>verbose=0</code> to <code>predict_proba</code> to turn off verbose output.</p>
| 0 | 2016-07-26T16:50:29Z | [
"python",
"deep-learning",
"keras"
] |
Python pandas - construct multivariate pivot table to display count of NaNs and non-NaNs | 38,595,578 | <p>I have a dataset based on different weather stations for several variables (Temperature, Pressure, etc.),</p>
<pre><code>stationID | Time | Temperature | Pressure |...
----------+------+-------------+----------+
123 | 1 | 30 | 1010.5 |
123 | 2 | 31 | 1009.0 |
202 | 1 | 24 | NaN |
202 | 2 | 24.3 | NaN |
202 | 3 | NaN | 1000.3 |
...
</code></pre>
<p>and I would like to create a pivot table that would show the number of NaNs and non-NaNs per weather station, such that:</p>
<pre><code>stationID | nanStatus | Temperature | Pressure |...
----------+-----------+-------------+----------+
123 | NaN | 0 | 0 |
| nonNaN | 2 | 2 |
202 | NaN | 1 | 2 |
| nonNaN | 2 | 1 |
...
</code></pre>
<p>Below I show what I have done so far, which works (in a cumbersome way) for Temperature. But how can I get the same for both variables, as shown above?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'stationID':[123,123,202,202,202], 'Time':[1,2,1,2,3],'Temperature':[30,31,24,24.3,np.nan],'Pressure':[1010.5,1009.0,np.nan,np.nan,1000.3]})
dfnull = df.isnull()
dfnull['stationID'] = df['stationID']
dfnull['tempValue'] = df['Temperature']
dfnull.pivot_table(values=["tempValue"], index=["stationID","Temperature"], aggfunc=len,fill_value=0)
</code></pre>
<p>The output is:</p>
<pre><code>----------------------------------
tempValue
stationID | Temperature
123 | False 2
202 | False 2
| True 1
</code></pre>
| 2 | 2016-07-26T16:44:44Z | 38,596,301 | <p><strong>UPDATE:</strong> thanks to <a href="http://stackoverflow.com/questions/38595578/python-pandas-construct-multivariate-pivot-table-to-display-count-of-nans-and/38596301?noredirect=1#comment64580792_38596301">@root</a>:</p>
<pre><code>In [16]: df.groupby('stationID')[['Temperature','Pressure']].agg([nans, notnans]).astype(int).stack(level=1)
Out[16]:
Temperature Pressure
stationID
123 nans 0 0
notnans 2 2
202 nans 1 2
notnans 2 1
</code></pre>
<p><strong>Original answer:</strong></p>
<pre><code>In [12]: %paste
def nans(s):
return s.isnull().sum()
def notnans(s):
return s.notnull().sum()
## -- End pasted text --
In [37]: df.groupby('stationID')[['Temperature','Pressure']].agg([nans, notnans]).astype(np.int8)
Out[37]:
Temperature Pressure
nans notnans nans notnans
stationID
123 0 2 0 2
202 1 2 2 1
</code></pre>
| 3 | 2016-07-26T17:25:25Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
null
] |
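The per-station NaN/non-NaN tally from the answers above can also be sketched without pandas at all. This toy version uses made-up station/temperature pairs (not the question's full frame) and leans on the same fact one of the answers exploits: NaN is the only float that is not equal to itself.

```python
from collections import defaultdict

# Hypothetical sample mirroring the question's layout: (stationID, temperature) per row.
rows = [(123, 30.0), (123, 31.0), (202, 24.0), (202, 24.3), (202, float("nan"))]

counts = defaultdict(lambda: {"nans": 0, "notnans": 0})
for station, temp in rows:
    # NaN is the only float not equal to itself, so `temp != temp` detects it.
    key = "nans" if temp != temp else "notnans"
    counts[station][key] += 1

print(dict(counts))  # {123: {'nans': 0, 'notnans': 2}, 202: {'nans': 1, 'notnans': 2}}
```

The pandas `agg([nans, notnans])` answer above does exactly this tally, just vectorised per column.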
Python pandas - construct multivariate pivot table to display count of NaNs and non-NaNs | 38,595,578 | <p>I have a dataset based on different weather stations for several variables (Temperature, Pressure, etc.),</p>
<pre><code>stationID | Time | Temperature | Pressure |...
----------+------+-------------+----------+
123 | 1 | 30 | 1010.5 |
123 | 2 | 31 | 1009.0 |
202 | 1 | 24 | NaN |
202 | 2 | 24.3 | NaN |
202 | 3 | NaN | 1000.3 |
...
</code></pre>
<p>and I would like to create a pivot table that would show the number of NaNs and non-NaNs per weather station, such that:</p>
<pre><code>stationID | nanStatus | Temperature | Pressure |...
----------+-----------+-------------+----------+
123 | NaN | 0 | 0 |
| nonNaN | 2 | 2 |
202 | NaN | 1 | 2 |
| nonNaN | 2 | 1 |
...
</code></pre>
<p>Below I show what I have done so far, which works (in a cumbersome way) for Temperature. But how can I get the same for both variables, as shown above?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'stationID':[123,123,202,202,202], 'Time':[1,2,1,2,3],'Temperature':[30,31,24,24.3,np.nan],'Pressure':[1010.5,1009.0,np.nan,np.nan,1000.3]})
dfnull = df.isnull()
dfnull['stationID'] = df['stationID']
dfnull['tempValue'] = df['Temperature']
dfnull.pivot_table(values=["tempValue"], index=["stationID","Temperature"], aggfunc=len,fill_value=0)
</code></pre>
<p>The output is:</p>
<pre><code>----------------------------------
tempValue
stationID | Temperature
123 | False 2
202 | False 2
| True 1
</code></pre>
| 2 | 2016-07-26T16:44:44Z | 38,596,333 | <p>I'll admit this is not the prettiest solution, but it works. First define two temporary columns <code>TempNaN</code> and <code>PresNaN</code>:</p>
<pre><code>df['TempNaN'] = df['Temperature'].apply(lambda x: 'NaN' if x!=x else 'NonNaN')
df['PresNaN'] = df['Pressure'].apply(lambda x: 'NaN' if x!=x else 'NonNaN')
</code></pre>
<p>Then define your results DataFrame using a MultiIndex:</p>
<pre><code>Results = pd.DataFrame(index=pd.MultiIndex.from_tuples(list(zip(*[sorted(list(df['stationID'].unique())*2),['NaN','NonNaN']*df['stationID'].nunique()])),names=['stationID','NaNStatus']))
</code></pre>
<p>Store your computations in the results DataFrame:</p>
<pre><code>Results['Temperature'] = df.groupby(['stationID','TempNaN'])['Temperature'].apply(lambda x: x.shape[0])
Results['Pressure'] = df.groupby(['stationID','PresNaN'])['Pressure'].apply(lambda x: x.shape[0])
</code></pre>
<p>And fill the blank values with zero:</p>
<pre><code>Results.fillna(value=0,inplace=True)
</code></pre>
<p>You can loop over the columns if that is easier. For example:</p>
<pre><code>Results = pd.DataFrame(index=pd.MultiIndex.from_tuples(list(zip(*[sorted(list(df['stationID'].unique())*2),['NaN','NonNaN']*df['stationID'].nunique()])),names=['stationID','NaNStatus']))
for col in ['Temperature','Pressure']:
df[col + 'NaN'] = df[col].apply(lambda x: 'NaN' if x!=x else 'NonNaN')
Results[col] = df.groupby(['stationID',col + 'NaN'])[col].apply(lambda x: x.shape[0])
df.drop([col + 'NaN'],axis=1,inplace=True)
Results.fillna(value=0,inplace=True)
</code></pre>
| 0 | 2016-07-26T17:27:24Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
null
] |
Python pandas - construct multivariate pivot table to display count of NaNs and non-NaNs | 38,595,578 | <p>I have a dataset based on different weather stations for several variables (Temperature, Pressure, etc.),</p>
<pre><code>stationID | Time | Temperature | Pressure |...
----------+------+-------------+----------+
123 | 1 | 30 | 1010.5 |
123 | 2 | 31 | 1009.0 |
202 | 1 | 24 | NaN |
202 | 2 | 24.3 | NaN |
202 | 3 | NaN | 1000.3 |
...
</code></pre>
<p>and I would like to create a pivot table that would show the number of NaNs and non-NaNs per weather station, such that:</p>
<pre><code>stationID | nanStatus | Temperature | Pressure |...
----------+-----------+-------------+----------+
123 | NaN | 0 | 0 |
| nonNaN | 2 | 2 |
202 | NaN | 1 | 2 |
| nonNaN | 2 | 1 |
...
</code></pre>
<p>Below I show what I have done so far, which works (in a cumbersome way) for Temperature. But how can I get the same for both variables, as shown above?</p>
<pre><code>import pandas as pd
import bumpy as np
df = pd.DataFrame({'stationID':[123,123,202,202,202], 'Time':[1,2,1,2,3],'Temperature':[30,31,24,24.3,np.nan],'Pressure':[1010.5,1009.0,np.nan,np.nan,1000.3]})
dfnull = df.isnull()
dfnull['stationID'] = df['stationID']
dfnull['tempValue'] = df['Temperature']
dfnull.pivot_table(values=["tempValue"], index=["stationID","Temperature"], aggfunc=len,fill_value=0)
</code></pre>
<p>The output is:</p>
<pre><code>----------------------------------
tempValue
stationID | Temperature
123 | False 2
202 | False 2
| True 1
</code></pre>
| 2 | 2016-07-26T16:44:44Z | 38,596,508 | <pre><code>d = {'stationID':[], 'nanStatus':[], 'Temperature':[], 'Pressure':[]}
for station_id, data in df.groupby(['stationID']):
temp_nans = data.isnull().Temperature.mean()*data.isnull().Temperature.count()
pres_nans = data.isnull().Pressure.mean()*data.isnull().Pressure.count()
d['stationID'].append(station_id)
d['nanStatus'].append('NaN')
d['Temperature'].append(temp_nans)
d['Pressure'].append(pres_nans)
d['stationID'].append(station_id)
d['nanStatus'].append('nonNaN')
d['Temperature'].append(data.isnull().Temperature.count() - temp_nans)
d['Pressure'].append(data.isnull().Pressure.count() - pres_nans)
df2 = pd.DataFrame.from_dict(d)
print(df2)
</code></pre>
<p>The result is:</p>
<pre><code> Pressure Temperature nanStatus stationID
0 0.0 0.0 NaN 123
1 2.0 2.0 nonNaN 123
2 2.0 1.0 NaN 202
3 1.0 2.0 nonNaN 202
</code></pre>
| 0 | 2016-07-26T17:38:26Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
null
] |
python/matplotlib/seaborn- boxplot on an x axis with data points | 38,595,612 | <p>My data set is like this: a python list with 6 numbers [23948.30, 23946.20, 23961.20, 23971.70, 23956.30, 23987.30]</p>
<p>I want them to be be a horizontal box plot above an x axis with[23855 and 24472] as the limit of the x axis (with no y axis). </p>
<p>The x axis will also contain points in the data.</p>
<p>(so the box plot and x axis have the same scale)</p>
<p>I also want the box plot show the mean number in picture.</p>
<p>Now I can only get the horizontal box plot.
(And I also want the x-axis to show the whole number instead of xx+2.394e)</p>
<p>Here is my code now:</p>
<pre><code>def box_plot(circ_list, wear_limit):
print circ_list
print wear_limit
fig1 = plt.figure()
plt.boxplot(circ_list, 0, 'rs', 0)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/BK41Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/BK41Q.png" alt="enter image description here"></a></p>
<p>Seaborn code I am trying right now:</p>
<pre><code>def box_plot(circ_list, wear_limit):
print circ_list
print wear_limit
#fig1 = plt.figure()
#plt.boxplot(circ_list, 0, 'rs', 0)
#plt.show()
fig2 = plt.figure()
sns.set(style="ticks")
x = circ_list
y = []
for i in range(0, len(circ_list)):
y.append(0)
f, (ax_box, ax_line) = plt.subplots(2, sharex=True,
gridspec_kw={"height_ratios": (.15, .85)})
sns.boxplot(x, ax=ax_box)
sns.pointplot(x, ax=ax_line, ay=y)
ax_box.set(yticks=[])
ax_line.set(yticks=[])
sns.despine(ax=ax_line)
sns.despine(ax=ax_box, left=True)
cur_axes = plt.gca()
cur_axes.axes.get_yaxis().set_visible(False)
sns.plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/nF2Z8.png" rel="nofollow"><img src="http://i.stack.imgur.com/nF2Z8.png" alt="enter image description here"></a></p>
| 1 | 2016-07-26T16:46:20Z | 38,596,113 | <p>I answered this question in the other post as well, but I will paste it here just in case. I also added something that I feel might be closer to what you are looking to achieve.</p>
<pre><code>l = [23948.30, 23946.20, 23961.20, 23971.70, 23956.30, 23987.30]
def box_plot(circ_list):
fig, ax = plt.subplots()
plt.boxplot(circ_list, 0, 'rs', 0, showmeans=True)
plt.ylim((0.28, 1.5))
ax.set_yticks([])
labels = ["{}".format(int(i)) for i in ax.get_xticks()]
ax.set_xticklabels(labels)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['left'].set_color('none')
ax.spines['bottom'].set_position('center')
ax.spines['bottom'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
plt.show()
box_plot(l)
</code></pre>
<p>The result:</p>
<p><img src="http://i.stack.imgur.com/EFilf.png" alt="Your box-plot"></p>
<p>Do let me know if it correspond to what you were looking for.</p>
| 0 | 2016-07-26T17:14:22Z | [
"python",
"matplotlib",
"plot",
"boxplot",
"seaborn"
] |
Python division operator gives different results | 38,595,613 | <p>In Python I am trying to divide an integer by half and I came across two different results based on the sign of the number.</p>
<p>Example:</p>
<pre><code>5/2 gives 2
and
-5/2 gives -3
</code></pre>
<p>How to get -2 when I divide -5/2 ? </p>
| 1 | 2016-07-26T16:46:22Z | 38,595,701 | <pre><code>>>> import math
>>> math.ceil(float(-5)/2)
-2.0
</code></pre>
| 0 | 2016-07-26T16:50:57Z | [
"python",
"integer-division",
"floor",
"ceil"
] |
Python division operator gives different results | 38,595,613 | <p>In Python I am trying to divide an integer by half and I came across two different results based on the sign of the number.</p>
<p>Example:</p>
<pre><code>5/2 gives 2
and
-5/2 gives -3
</code></pre>
<p>How to get -2 when I divide -5/2 ? </p>
| 1 | 2016-07-26T16:46:22Z | 38,595,710 | <p>This happens due to python rounding <strong>integer division</strong>. Below are a few examples. In python, the <code>float</code> type is the <em>stronger</em> type and expressions involving <code>float</code> and <code>int</code> evaluate to <code>float</code>.</p>
<pre><code>>>> 5/2
2
>>> -5/2
-3
>>> -5.0/2
-2.5
>>> 5.0/2
2.5
>>> -5//2
-3
</code></pre>
<p>To circumvent the rounding, you could leverage this property; and instead perform a calculation with <code>float</code> as to not lose precision. Then use <a href="https://docs.python.org/2/library/math.html" rel="nofollow">math module</a> to return the ceiling of that number (<em>then convert to -> int again</em>):</p>
<pre><code>>>> import math
>>> int(math.ceil(-5/float(2)))
-2
</code></pre>
| 1 | 2016-07-26T16:51:21Z | [
"python",
"integer-division",
"floor",
"ceil"
] |
Python division operator gives different results | 38,595,613 | <p>In Python I am trying to divide an integer by half and I came across two different results based on the sign of the number.</p>
<p>Example:</p>
<pre><code>5/2 gives 2
and
-5/2 gives -3
</code></pre>
<p>How to get -2 when I divide -5/2 ? </p>
| 1 | 2016-07-26T16:46:22Z | 38,595,711 | <p>You should enclose division in expression like below</p>
<pre><code>print -(5/2)
</code></pre>
| 2 | 2016-07-26T16:51:26Z | [
"python",
"integer-division",
"floor",
"ceil"
] |
Python division operator gives different results | 38,595,613 | <p>In Python I am trying to divide an integer by half and I came across two different results based on the sign of the number.</p>
<p>Example:</p>
<pre><code>5/2 gives 2
and
-5/2 gives -3
</code></pre>
<p>How to get -2 when I divide -5/2 ? </p>
| 1 | 2016-07-26T16:46:22Z | 38,595,747 | <p>You need to use float division and then use <code>int</code> to truncate the decimal</p>
<pre><code>>>> from __future__ import division
>>> -5 / 2
-2.5
>>> int(-5 / 2)
-2
</code></pre>
<p>In Python 3, float division is the default, and you don't need to include the <code>from __future__ import division</code>. Alternatively, you could manually make one of the values a float to force float division</p>
<pre><code>>>> -5 / 2.0
-2.5
</code></pre>
| 1 | 2016-07-26T16:53:14Z | [
"python",
"integer-division",
"floor",
"ceil"
] |
Python division operator gives different results | 38,595,613 | <p>In Python I am trying to divide an integer by half and I came across two different results based on the sign of the number.</p>
<p>Example:</p>
<pre><code>5/2 gives 2
and
-5/2 gives -3
</code></pre>
<p>How to get -2 when I divide -5/2 ? </p>
| 1 | 2016-07-26T16:46:22Z | 38,595,815 | <p>As of <a href="http://stackoverflow.com/questions/19919387/in-python-what-is-a-good-way-to-round-towards-zero-in-integer-division">this accepted answer</a>:</p>
<pre><code>> int(float(-5)/2)
-2
> int(float(5)/2)
2
</code></pre>
| 1 | 2016-07-26T16:56:49Z | [
"python",
"integer-division",
"floor",
"ceil"
] |
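The split between the two behaviors comes down to flooring versus truncation. A small Python 3 sketch (where `/` is already true division, unlike the Python 2 sessions shown above):

```python
import math

# Python floors integer division toward negative infinity:
assert -5 // 2 == -3
assert 5 // 2 == 2

# To truncate toward zero instead, do true division and drop the fraction:
assert int(-5 / 2) == -2         # int() truncates toward zero
assert math.trunc(-5 / 2) == -2  # same thing, spelled explicitly
assert math.ceil(-5 / 2) == -2   # ceil also lands on -2 for negative quotients
```

In Python 2, `-5 / 2` on ints already floors to `-3`, which is why the answers above first cast one operand to `float` before taking `int` or `math.ceil`.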
How to remove line when variable is not defined in jinja2 template | 38,595,674 | <p>I have a simple jinja2 template:</p>
<pre><code>{% for test in tests %}
{{test.status}} {{test.description}}:
{{test.message}}
Details:
{% for detail in test.details %}
{{detail}}
{% endfor %}
{% endfor %}
</code></pre>
<p>Which work really good when all of variable of 'test' object are defined like here:</p>
<pre><code>from jinja2 import Environment, PackageLoader
env = Environment(loader=PackageLoader('my_package', 'templates'), trim_blocks=True, lstrip_blocks=True, keep_trailing_newline=True)
template = env.get_template('template.hbs')
test_results = {
'tests': [
{
'status': 'ERROR',
'description': 'Description of test',
'message': 'Some test message what went wrong and something',
'details': [
'First error',
'Second error'
]
}
]
}
output = template.render(title=test_results['title'], tests=test_results['tests'])
</code></pre>
<p>Then output looks like this:</p>
<pre><code>ERROR Description of test:
Some test message what went wrong and something
Details:
First error
Second error
</code></pre>
<p>But sometimes it is possible that 'test' object will not have 'message' property and in this case there is an empty line:</p>
<pre><code>ERROR Description of test:
Details:
First error
Second error
</code></pre>
<p>Is it possible to make this variable stick to whole line? to make it disappear when variable is undefined?</p>
| 2 | 2016-07-26T16:49:33Z | 39,208,689 | <p>You can put a if condition inside the for loop to avoid blank line if there is no message.</p>
<pre><code>{% for test in tests %}
{{test.status}} {{test.description}}:
{% if test.message %}
{{test.message}}
{% endif %}
Details:
{% for detail in test.details %}
{{detail}}
{% endfor %}
{% endfor %}
</code></pre>
| 1 | 2016-08-29T14:27:49Z | [
"python",
"templates",
"jinja2"
] |
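Why the `{% if %}` guard removes the blank line can be seen with a tiny stand-in renderer in plain Python — this is not jinja2, just a sketch of the same conditional with hypothetical test dicts:

```python
def render_test(test):
    """Mimic the template: emit the message line only when it is present."""
    lines = ["{status} {description}:".format(**test)]
    if test.get("message"):              # same role as {% if test.message %}
        lines.append("    " + test["message"])
    lines.append("    Details:")
    lines.extend("        " + d for d in test.get("details", []))
    return "\n".join(lines)

with_msg = {"status": "ERROR", "description": "Desc", "message": "Boom", "details": ["a"]}
without_msg = {"status": "ERROR", "description": "Desc", "details": ["a"]}

print(render_test(without_msg))  # no blank line where the message would be
```

Without the guard, the renderer would always append a line for the message, leaving an empty one when the key is missing — exactly the symptom in the question.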
How can we get lmfit parameters after fitting? | 38,595,676 | <p>I wrote a program to fit some Raman spectra peaks.
I need to return the fitted parameters (position, amplitude, HWHM).</p>
<p>I used the modul lmfit to create a lorentzian peak with constraints.</p>
<p>I have a good agreement between the fitted peak and the raw data according to my figure plot.
But when it comes to exracte the parameters <em>after</em> fitting, I have a problem, the programm only return the initial values. </p>
<p>I tied the 'report_fit module' and changing the inital parameter without sucess. The parameters values does not evolve.</p>
<p>What bothers me is that this program works on my colleague's PC but not on mine.
So the problem may comes from my python version.</p>
<p>I am using spyder 2.3.9, and python 3.4 installed with anaconda under windows 10.
The lmfit module 0.9.3, seems to work partially, since I can get a good fitting agreament (from the figure plt.plot).
But I cannot return the parameter values after fitting. </p>
<hr>
<p>Here is my code:</p>
<pre><code>#!/usr/bin/python3
# -*- coding:utf-8 -*-
import os
import numpy as np
import matplotlib.pyplot as plt
from math import factorial
from scipy.interpolate import interp1d
from lmfit import minimize, Parameters #,report_fit
##############################################################################
# Fit of Raman peaks
def fmin150(pars,x,y):
amp= pars['Amp_150'].value
cent=pars['Cent_150'].value
hwhm=pars['Wid_150'].value
a=pars['a_150'].value
b=pars['b_150'].value
peak = (amp*hwhm)/(((x-cent)**2)+(hwhm**2)) + ((a*x)+b)
return peak - y
def fmin220(pars,x,y):
amp= pars['Amp_220'].value
cent=pars['Cent_220'].value
hwhm=pars['Wid_220'].value
a=pars['a_220'].value
b=pars['b_220'].value
peak = (amp*hwhm)/(((x-cent)**2)+(hwhm**2)) + ((a*x)+b)
return peak - y
def fmin2d(pars,x,y):
amp= pars['Amp_2D'].value
cent=pars['Cent_2D'].value
hwhm=pars['Wid_2D'].value
a=pars['a_2D'].value
b=pars['b_2D'].value
peak = (amp*hwhm)/(((x-cent)**2)+(hwhm**2)) + ((a*x)+b)
return peak - y
##############################################################################
def fit(filename):
"""
Fit Raman spectrum file
Return filename, for each peak (*f*whm, position, height)
PL (Position, Intensity, Area)
"""
print ("----------------------------")
print("Treating file : ")
print(filename)
try:
data = np.loadtxt(filename)
except:
print("Unable to load file")
xx=data[:,0]
yy=data[:,1]
#### Define fitting interval #####
# Cu oxides (unités en cm-1)
xminim150 = 120
xmaxim150 = 170
xminim220 = 175
xmaxim220 = 275
xminim300 = 280
xmaxim300 = 340
xminim640 = 345
xmaxim640 = 800
# Graphene
xminimG = 1540
xmaximG = 1680
xminim2D = 2600
xmaxim2D = 2820
# Definition Baground = place without the fitted peaks
zone1 = (xx > min(xx)) & (xx < xminim150)
zone2 = (xx > xmaxim150) & (xx < xminim220)
zone3 = (xx > xmaxim220) & (xx < xminim300)
zone4 = (xx > xmaxim300) & (xx < xminim640)
zone5 = (xx > xmaxim640) & (xx < xminimG)
zone6 = (xx > xmaximG) & (xx < xminim2D)
zone7 = (xx > xmaxim2D) & (xx < max(xx))
x_BG = np.concatenate((xx[zone1],xx[zone2],xx[zone3],xx[zone4],xx[zone5],xx[zone6],xx[zone7]))
y_BG = np.concatenate((yy[zone1],yy[zone2],yy[zone3],yy[zone4],yy[zone5],yy[zone6],yy[zone7]))
#Creation de l'interpolation lineaire
f1 = interp1d(x_BG, y_BG, kind='linear')
xinterpmin= x_BG[0] # valeur de x_BG min
xinterpmax= x_BG[len(x_BG)-1] # valeur de x_BG max
nbxinterp = len(xx) * 4 #(nb de point en x)* 4 pour une interpolation correcte
xnew = np.linspace(xinterpmin, xinterpmax, num=nbxinterp, endpoint=True)
ynew= f1(xnew)
########################## Fit 2D peaks ###############################
# create a set of Parameters
pars150 = Parameters()
pars220 = Parameters()
pars300 = Parameters()
pars640 = Parameters()
parsg = Parameters()
pars2d = Parameters()
##### Cu2O pic 150 cm-1 #####
pars150.add('Amp_150', value=10, min=0.0001, max=100000) # Amplitude ~intensity
pars150.add('Cent_150', value=150, min=140, max=160) # Center ~position
pars150.add('Wid_150', value=5, min=4, max=15 ) # Width is the HWHM
pars150.add('a_150', value=1, min=-100000, max=100000)
pars150.add('b_150', value=10, min=-100000, max=100000)
##### Cu2O pic 220 cm-1 #####
pars220.add('Amp_220', value=10, min=0.0001, max=100000)
pars220.add('Cent_220', value=220, min=200, max=230)
pars220.add('Wid_220', value=5, min=4, max=15 )
pars220.add('a_220', value=1, min=-100000, max=100000)
pars220.add('b_220', value=10, min=-100000, max=100000)
##### Graphene 2D #####
pars2d.add('Amp_2D', value=15, min=0.0001, max=100000)
pars2d.add('Cent_2D', value=2711, min=2670, max=2730)
pars2d.add('Wid_2D', value=15, min=4, max=25 )
pars2d.add('a_2D', value=1, min=-100000, max=100000)
pars2d.add('b_2D', value=10, min=-100000, max=100000)
# define x for each peaks
interv_150 = (xx > xminim150) & (xx < xmaxim150)
x_150 = xx[interv_150]
y_150 = yy[interv_150]
interv_220 = (xx > xminim220) & (xx < xmaxim220)
x_220 = xx[interv_220]
y_220 = yy[interv_220]
interv_2D = (xx > xminim2D) & (xx < xmaxim2D)
x_2D = xx[interv_2D]
y_2D = yy[interv_2D]
###########################################################
# Performe FIT with leastsq model ###########
result_150 = minimize(fmin150, pars150, args=(x_150, y_150))
result_220 = minimize(fmin220, pars220, args=(x_220, y_220))
result_2D = minimize(fmin2d, pars2d, args=(x_2D, y_2D))
# calculate final result
final_150 = y_150 + result_150.residual
final_220 = y_220 + result_220.residual
final_2D = y_2D + result_2D.residual
###########################################################
# Parameter after fit #
amp_150= pars150['Amp_150'].value
cent_150=pars150['Cent_150'].value
fwhm_150=2*pars150['Wid_150'].value
amp_220= pars220['Amp_220'].value
cent_220=pars220['Cent_220'].value
fwhm_220=2*pars220['Wid_220'].value
amp_2D= pars2d['Amp_2D'].value
cent_2D=pars2d['Cent_2D'].value
fwhm_2D=2*pars2d['Wid_2D'].value
###########
#Plot data#
plt.plot(xx, yy, 'k+' ,x_150, final_150, 'r', x_220, final_220,'r', x_2D, final_2D,'b')
plt.xlabel(r'Raman shift (cm$^{-1}$)', fontsize=14)
plt.ylabel('Intensity (a.u.)', fontsize=14)
plt.xlim(0,3000)
plt.title(filename, fontsize=16)
savename=filename[:-4]+".png"
print(savename)
plt.savefig(savename)
plt.clf()
return filename, amp_150, cent_150, fwhm_150, amp_220, cent_220, fwhm_220, amp_2D, cent_2D, fwhm_2D
def main():
"""main program loop"""
print("----------------------------")
liste = []
for filename in os.listdir(os.getcwd()):
if filename.endswith(".txt"):
liste.append(filename)
f1 = open("TestFit_all.dat","w")
header = "Filename\tI_150\tCentre_150\tFWHM_150\tI_220\tCentre_220\tFWHM_220\tI_300\tCentre_300\tFWHM_300\tI_640\tCentre_640\tFWHM_640\tI_G\tCentre_G\tFWHM_G\tI_2D\tCentre_2D\tFWHM_2D\tI_PL\tI_PL_err\tCent_PL\tCent_PL_err\tArea1000_PL\n"
f1.write(header)
for i in liste:
txt=fit(i)
print(txt)
#text = str(txt[0])+"\t"+str(txt[1])+"\t"+str(txt[2])+"\t"+str(txt[3])+"\t"+str(txt[4])+"\t"+"\n"
text = str(txt[0])+"\t"+str(txt[1])+"\t"+str(txt[2])+"\t"+str(txt[3])+"\t"+str(txt[4])+"\t"+str(txt[5])+"\t"+str(txt[6])+"\t"+str(txt[7])+"\t"+str(txt[8])+"\t"+str(txt[9])+"\t"+"\t"+"\n"
f1.write(text)
f1.close()
print("----------------------------")
print("Done")
###################################################################
# start
if __name__ == "__main__":
main()
</code></pre>
<p>Thank you for you help :)</p>
<h2><a href="http://i.stack.imgur.com/V3EyE.png" rel="nofollow">A working fit exemple</a></h2>
<p>Please send your reply to deniz.cakir@etu.umontpellier.fr</p>
| 0 | 2016-07-26T16:49:40Z | 38,627,608 | <p>Your script could be a lot shorter to illustrate the question. But, the final Parameters are held in <code>result_150.params</code> and so forth. The parameters passed into <code>minimize()</code> are not altered in the fit. </p>
<p>Well, that's true for lmfit version 0.9.0 and later. Perhaps your colleague has an older version of lmfit.</p>
| 0 | 2016-07-28T05:17:20Z | [
"python",
"windows",
"parameters",
"lmfit"
] |
Why registerTempTable removes some rows from the data-frame? | 38,595,679 | <p>I try to create a pandas data-frame using a Spark data-frame on HDInsight in the following way:</p>
<pre><code>tmp = sqlContext.createDataFrame(sparkDf)
tmp.registerTempTable('temp')
</code></pre>
<p>It looks like the <code>registerTempTable</code> removes some rows from the data-frame. </p>
<p>The following command returns 11000</p>
<pre><code>sparkDf.count()
</code></pre>
<p>While <code>tmp</code> has only 2500 rows.</p>
<p>I am following steps described <a href="https://azure.microsoft.com/en-us/documentation/articles/hdinsight-apache-spark-custom-library-website-log-analysis/" rel="nofollow">here</a>.</p>
| 0 | 2016-07-26T16:49:43Z | 38,599,020 | <p>I am assuming you're using Jupyter notebooks, and that you're getting your data from a SQL query, i.e.</p>
<pre><code>%%sql -o tmp
SELECT * FROM temp
</code></pre>
<p>This is happening because the <code>%%sql</code> query transparently limits the size of the result dataframe <code>tmp</code> to 2500 rows. You can choose a new limit by using the <code>-n</code> option:</p>
<pre><code>%%sql -o tmp -n 11000
SELECT * FROM temp
</code></pre>
<p>You can also choose <code>-1</code> to say that you don't want to limit the size of the dataframe at all (be careful with this, because if the result set is big enough it can cause your driver to run out of memory or your browser to hang/crash when rendering charts):</p>
<pre><code>%%sql -o tmp -n -1
SELECT * FROM temp
</code></pre>
| 5 | 2016-07-26T20:09:38Z | [
"python",
"azure",
"apache-spark",
"pyspark",
"hdinsight"
] |
not valid JSON python-instagram | 38,595,698 | <p>I just can't start using instagram api.</p>
<p>The code from github <a href="https://github.com/facebookarchive/python-instagram" rel="nofollow">https://github.com/facebookarchive/python-instagram</a></p>
<pre><code>from instagram.client import InstagramAPI
access_token = "***"
client_secret = "***"
scope = 'public_content'
api = InstagramAPI(access_token=access_token, client_secret=client_secret)
recent_media, next_ = api.user_recent_media(user_id = "***", count=10)
for media in recent_media:
print("hey")
</code></pre>
<p>Errors:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File ...Python35-32\lib\site-packages\simplejson\decoder.py", line
400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end()) simplejson.scanner.JSONDecodeError: Expecting value: line 1 column 1
(char 0)</p>
</blockquote>
<p>During handling of the above exception, another exception occurred:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File
"C:\Users\PolyProgrammist\AppData\Local\Programs\Python\Python35-32\lib\site-packages\instagram\bind.py",
line 131, in _do_api_request
raise InstagramClientError('Unable to parse response, not valid JSON.', status_code=response['status'])
instagram.bind.InstagramClientError: (404) Unable to parse response,
not valid JSON.</p>
</blockquote>
| 0 | 2016-07-26T16:50:53Z | 38,596,332 | <p>I suspect this has something to do with recent enforcement of permission reviews (which I think changes authentication) that Instagram laid down on June 1st.
<a href="https://www.instagram.com/developer/changelog/" rel="nofollow">https://www.instagram.com/developer/changelog/</a></p>
<p>Given that the python-instagram library hasn't been updated since <a href="https://github.com/facebookarchive/python-instagram/commit/6522cffe19065d456ceb0d10b0cd1a13c5026380" rel="nofollow" title="March">march</a>, it is probably out of date and not handling authentication errors correctly.</p>
<p>My guess is that the instagram API is returning a JSON formatted errors, which is causing <code>simplejson</code> to throw that exception.</p>
| 0 | 2016-07-26T17:27:22Z | [
"python",
"json",
"instagram-api"
] |
Times tables in python | 38,595,929 | <p>I decided to write software to make my little brother do his times tables, so I wrote the following code: </p>
<pre><code>for i in range(13):
for j in range(13):
print(i, '*', j, '=')
A = input(" ")
while A != i*j:
print(i, '*', j, '=')
A = input(" ")
else:
print("Correct")
</code></pre>
<p>I have found that it keeps giving me <code>0 * 0 =</code> and does not continue on to the next question when I run this code. Please tell me what I am doing wrong.</p>
| -1 | 2016-07-26T17:03:09Z | 38,596,098 | <p>It is failing because of this line:</p>
<pre><code>while A != i*j:
</code></pre>
<p><code>A</code> is a string. <code>i*j</code> is not a string. This will cause the conditional to fail. It is doing this comparison:</p>
<pre><code>>>> 0 == "0"
False
</code></pre>
<p>To fix this, you can cast <code>A</code> as an <code>int</code> (since you are doing integer multiplication)</p>
<pre><code>while int(A) != i*j:
</code></pre>
| 1 | 2016-07-26T17:13:19Z | [
"python",
"loops",
"multiplication"
] |
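The type mismatch the accepted answer points out can be checked directly. The question's `print(...)` calls suggest Python 3, where `input()` always hands back a string:

```python
# Pretend the user typed "4" at the prompt: input() returns it as a string.
answer = "4"

assert answer != 2 * 2        # "4" is never equal to the int 4, so the loop repeats
assert int(answer) == 2 * 2   # casting to int first makes the comparison work
```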
Adding multiple facets to one variable | 38,595,961 | <p>I'm playing around with the following class to just practice my Python skills:</p>
<pre><code>class Scene(object):
def __init__(self, settings, actors):
self.settings = int(settings)
self.actors = actors
actors = []
new_setting = settings = settings + 1
class Actor(object):
def __init__(self, name, wage):
self.name = name
self.wage = wage
def addActor(self):
self.name = name
self.wage = wage
self.actors.append(name)
Scene1 = Scene(3, ["Tom Hanks", "Meg Ryan"])
Scene1.addActor(Actor("Paul Giamatti"))
</code></pre>
<p>I want to be able to add multiple features to the variable 'Actor'.
So for example I want to give Tom Hanks an age and a wage etc.</p>
<p>I tried to do this by defining several sub-variables in the function addActor but that gave me errors. Do I have to create a subclass called Actors and do it that way?</p>
<p>Thanks!</p>
| 0 | 2016-07-26T17:04:35Z | 38,596,272 | <p>The best approach is to define another class for <code>Actor</code>, see example below</p>
<pre><code>class Actor(object):
def __init__(self, name, age, wage):
self.name = name
self.age = age
self.wage = wage
class Scene(object):
def __init__(self, settings):
self.settings = settings
self.actors = []
def addActor(self, name, age, wage):
actor = Actor(name, age, wage)
self.actors.append(actor)
def listActor(self):
for actor in self.actors:
print "Actor name:", actor.name
print "Actor age:", actor.age
print "Actor wage:", actor.wage
# Usage
scene = Scene({"light":True, 'time':'day'}) # settings can be a dictionary depends on your use case
scene.listActor()
print scene.settings
</code></pre>
| 2 | 2016-07-26T17:23:57Z | [
"python"
] |
How to create a new list from other two lists, apply a function and append the output to each list? | 38,596,004 | <p>thank you in advance for the help.</p>
<p>I need to retrieve some data (clients and products) from a WebService. This code gets the data and transforms it into lists with dictionaries within them.</p>
<pre><code> consumidores = requests.get(url + 'all_consumers', headers=custom_header) # lista
con = consumidores.json()
productos = requests.get(url + 'all_products', headers=custom_header) # lista
prod = productos.json()
c = []
for key in con:
c = [key['genero'], key['complexion'], key['tallaCamisa'], key['tallaPantalon'], key['edad'], key['ubicacion'],
key['valorComercial'], key['valorCompra']]
p = []
for index in prod:
p = [index['genero'], index['precio']]
</code></pre>
<p>Whats I need to do is to create two lists, one for costumers and one for products. Choose some specific elements for each costumer and product and create a new list that have to look like this</p>
<pre><code>new_list = [[costumer_1, costumer_element1, costumer_element2 , ... , product_1, product_element1, product_element2, ...], [costumer_1, costumer_element1, ..., product_1, product_element1, ...], [costumer_2, costumer_elementn, ... product_1, product_element1 ,...] , ...]
</code></pre>
<p>Then apply a function that will relate costumers with products and append the result to the list that produced that output:</p>
<pre><code>results = [[costumer_1, costumer_element1, costumer_element2 , ... , product_1, product_element1, product_element2, RESULT], etc]
for key in con:
index=0
param_relcp = c[index][key['genero'],key['edad']]
index=index + 1
</code></pre>
<p>This returns an error (<code>IndexError: list index out of range</code>), and using this</p>
<pre><code>c = []
for key in con:
c = [key['genero'], key['complexion'], key['tallaCamisa'], key['tallaPantalon'], key['edad'], key['ubicacion'],
key['valorComercial'], key['valorCompra'], key['id']]
</code></pre>
<p>only takes the elements from the first item of the list of lists. Any help will be appreciated.</p>
| 0 | 2016-07-26T17:07:13Z | 38,596,544 | <p>Your loops all appear to do nothing, e.g.</p>
<pre><code>for key in con:
c = [key['genero'], key['complexion'], key['tallaCamisa'], key['tallaPantalon'], key['edad'], key['ubicacion'],
key['valorComercial'], key['valorCompra']]
</code></pre>
<p>replaces <code>c</code> every time, and you only get the last value from the last time through the loop. You do the same for <code>p</code> and the same for <code>param_relcp</code>.</p>
<p>Either:</p>
<pre><code>c = []
for key in con:
c.append([key['genero'], key['complexion'], key['tallaCamisa'], key['tallaPantalon'], key['edad'], key['ubicacion'],
key['valorComercial'], key['valorCompra']])
</code></pre>
<p>or</p>
<pre><code>c = [[key['genero'], key['complexion'], key['tallaCamisa'], key['tallaPantalon'], key['edad'], key['ubicacion'],
     key['valorComercial'], key['valorCompra']] for key in con]
</code></pre>
<p>and the same for the others.</p>
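Putting the corrected comprehensions together, here is a minimal self-contained sketch of building the combined customer/product rows the question describes. The sample dictionaries below are invented for illustration — substitute your real <code>con</code> and <code>index</code> data:

```python
from itertools import product

# hypothetical sample data standing in for the real `con` and `index` lists
con = [{'genero': 'M', 'edad': 30, 'id': 1},
       {'genero': 'F', 'edad': 25, 'id': 2}]
index = [{'genero': 'M', 'precio': 10},
         {'genero': 'F', 'precio': 20}]

c = [[k['genero'], k['edad'], k['id']] for k in con]  # one row per customer
p = [[k['genero'], k['precio']] for k in index]       # one row per product

# pair every customer with every product, flattened into a single row each
new_list = [cust + prod for cust, prod in product(c, p)]
print(new_list[0])  # ['M', 30, 1, 'M', 10]
```

Each row of <code>new_list</code> then has the customer fields followed by the product fields, so a scoring function can be applied per row and its result appended to that row.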
| 0 | 2016-07-26T17:40:10Z | [
"python"
] |
How to change the starting index of iterrows()? | 38,596,056 | <p>We can use the following to iterate rows of a data frame.</p>
<pre><code>for index, row in df.iterrows():
</code></pre>
<p>What if I want to begin from a different row index? (not from first row)?</p>
| 3 | 2016-07-26T17:10:27Z | 38,596,090 | <p>Sure:</p>
<pre><code>for i,(index,row) in enumerate(df.iterrows()):
if i == 0: continue # skip first row
</code></pre>
<p>Or something like:</p>
<pre><code>for i,(index,row) in enumerate(df.iterrows()):
if i < 5: continue # skip first 5 rows
</code></pre>
| 0 | 2016-07-26T17:12:44Z | [
"python",
"pandas",
"dataframe"
] |
How to change the starting index of iterrows()? | 38,596,056 | <p>We can use the following to iterate rows of a data frame.</p>
<pre><code>for index, row in df.iterrows():
</code></pre>
<p>What if I want to begin from a different row index? (not from first row)?</p>
| 3 | 2016-07-26T17:10:27Z | 38,596,091 | <p>Try using <code>itertools.islice</code></p>
<pre><code>from itertools import islice
for index, row in islice(df.iterrows(), 1, None):
</code></pre>
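The same trick works on any iterator, so it can be sanity-checked without a DataFrame — here a small generator stands in for <code>df.iterrows()</code>:

```python
from itertools import islice

rows = ((i, i * i) for i in range(5))  # stand-in for df.iterrows()
skipped = list(islice(rows, 2, None))  # drop the first two rows
print(skipped)  # [(2, 4), (3, 9), (4, 16)]
```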
| 3 | 2016-07-26T17:12:48Z | [
"python",
"pandas",
"dataframe"
] |
How to change the starting index of iterrows()? | 38,596,056 | <p>We can use the following to iterate rows of a data frame.</p>
<pre><code>for index, row in df.iterrows():
</code></pre>
<p>What if I want to begin from a different row index? (not from first row)?</p>
| 3 | 2016-07-26T17:56:31Z | 38,596,799 | <p>I know this has an answer, but why not just do:</p>
<pre><code>for i, r in df.iloc[1:].iterrows():
</code></pre>
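Worth noting: slicing with <code>iloc</code> keeps the original index labels, so the first loop variable is still the DataFrame's real label rather than a renumbered position. A small sketch, assuming pandas is installed (the <code>int()</code> call just normalizes the NumPy scalar for printing):

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 20, 30]}, index=['x', 'y', 'z'])
seen = [(idx, int(row['a'])) for idx, row in df.iloc[1:].iterrows()]
print(seen)  # [('y', 20), ('z', 30)]
```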
| 2 | 2016-07-26T17:56:31Z | [
"python",
"pandas",
"dataframe"
] |
Python: prevent signals to propagate to child threads | 38,596,069 | <pre><code>import threading
import time
def worker(i):
while True:
try:
print i
time.sleep(10)
break
except Exception, msg:
print msg
threads = []
for i in range(10):
t1 = threading.Thread(target=worker, args=(i,))
threads.append(t1)
for t in threads:
t.start()
print "started all threads... waiting to be finished"
for t in threads:
t.join()
</code></pre>
<p>If I press ^C while the threads are running, do the threads get the SIGINT?<br>
If so, what can I do from the caller thread to stop SIGINT from propagating to the running threads?</p>
<p>Would a signal handler in the caller thread prevent it,<br>
or do I need a signal handler for each thread?</p>
| 1 | 2016-07-26T17:11:27Z | 38,596,898 | <p>As referred in <a href="https://docs.python.org/2/library/threading.html" rel="nofollow">Python's docs</a>, you should use the attribute <strong>daemon</strong>:</p>
<blockquote>
<p>daemon: A boolean value indicating whether this thread is a daemon
thread (True) or not (False). This must be set before start() is
called, otherwise RuntimeError is raised. Its initial value is
inherited from the creating thread; the main thread is not a daemon
thread and therefore all threads created in the main thread default to
daemon = False.</p>
<p>The entire Python program exits when no alive non-daemon threads are
left.</p>
<p>New in version 2.6.</p>
</blockquote>
<p>To control the CTRL+C signal, you should capture it by changing the handler with the <strong>signal.signal(signal_number, handler)</strong> function. Note that in CPython, signal handlers are process-wide and always run in the main thread, so you do not need a separate handler per thread.</p>
<pre><code>import threading
import time
import signal
def worker(i):
while True:
try:
print(i)
time.sleep(10)
break
except Exception as msg:
print(msg)
def signal_handler(signal, frame):
print('You pressed Ctrl+C!')
print("I will wait for all threads... waiting to be finished")
for t in threads:
t.join()
signal.signal(signal.SIGINT, signal_handler)
threads = []
for i in range(10):
t1 = threading.Thread(target=worker, args=(i,))
threads.append(t1)
for t in threads:
t.start()
print("started all threads... waiting to be finished")
for t in threads:
t.join()
</code></pre>
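To see why a handler per thread is unnecessary: in CPython the handler is installed process-wide but is always executed in the main thread. A POSIX-only sketch (it uses <code>os.kill</code>, so behaviour on Windows differs) that records which thread the handler actually ran in:

```python
import os
import signal
import threading
import time

seen_in = []

def handler(signum, frame):
    # record the thread the handler actually runs in
    seen_in.append(threading.current_thread().name)

signal.signal(signal.SIGINT, handler)

worker = threading.Thread(target=time.sleep, args=(0.2,))
worker.start()

os.kill(os.getpid(), signal.SIGINT)  # deliver SIGINT to the whole process
time.sleep(0.05)                     # give the main thread a chance to run the handler
worker.join()

print(seen_in)  # ['MainThread'] -- never the worker thread
```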
| 0 | 2016-07-26T18:01:37Z | [
"python",
"multithreading",
"signals"
] |
Python listing last 10 modified files and reading each line of all 10 files | 38,596,080 | <p>I need some help listing files in a directory and reading through each file using Python. I know how to do this using shell commands but is there a Pythonic way to do it?</p>
<p>I would like to:</p>
<p>1.) List all files in a directory.</p>
<p>2.) Grab the last 10 modified/latest files (preferably using a wildcard)</p>
<p>3.) Read through each line of all 10 files</p>
<p>Using shell commands I can:</p>
<pre><code>Linux_System# ls -ltr | tail -n 10
-rw-rw-rw- 1 root root 999934 Jul 26 01:06 data_log.569
-rw-rw-rw- 1 root root 999960 Jul 26 02:05 data_log.570
-rw-rw-rw- 1 root root 999968 Jul 26 03:13 data_log.571
-rw-rw-rw- 1 root root 999741 Jul 26 04:20 data_log.572
-rw-rw-rw- 1 root root 999928 Jul 26 05:31 data_log.573
-rw-rw-rw- 1 root root 999942 Jul 26 06:45 data_log.574
-rw-rw-rw- 1 root root 999916 Jul 26 07:46 data_log.575
-rw-rw-rw- 1 root root 999862 Jul 26 08:59 data_log.576
-rw-rw-rw- 1 root root 999685 Jul 26 10:15 data_log.577
-rw-rw-rw- 1 root root 999633 Jul 26 11:26 data_log.578
Linux_System# cat data_log.{569..578}
</code></pre>
<p>Using glob I am able to list the files and open a specific file but not sure how I can list only the last 10 modified files and feed the wildcard file list to the open function.</p>
<pre><code>import os, fnmatch, glob
files = glob.glob("data_event_log.*")
files.sort(key=os.path.getmtime)
print("\n".join(files))
data_event_log.569
data_event_log.570
data_event_log.571
data_event_log.572
data_event_log.573
data_event_log.574
data_event_log.575
data_event_log.576
data_event_log.577
data_event_log.578
with open(data_event_log.560, 'r') as f:
output_list = []
for line in f.readlines():
if line.startswith('Time'):
lineRegex = re.compile(r'\d{4}-\d{2}-\d{2}')
a = (lineRegex.findall(line))
</code></pre>
| 2 | 2016-07-26T17:11:55Z | 38,596,492 | <p>It looks like you almost did everything already:</p>
<pre><code>import os.path, glob
files = glob.glob("data_event_log.*")
files.sort(key=os.path.getmtime)
latest=files[-10:] # last 10 entries
print("\n".join(latest))
lineRegex = re.compile(r'\d{4}-\d{2}-\d{2}')
for fn in latest:
with open(fn) as f:
for line in f:
if line.startswith('Time'):
a = lineRegex.findall(line)
</code></pre>
<p>Edit:</p>
<p>Especially if you have many files a better and simpler solution would be </p>
<pre><code>import os.path, glob, heapq
files = glob.iglob("data_event_log.*")
latest=heapq.nlargest(10, files, key=os.path.getmtime) # last 10 entries
print("\n".join(latest))
lineRegex = re.compile(r'\d{4}-\d{2}-\d{2}')
for fn in latest:
with open(fn) as f:
for line in f:
if line.startswith('Time'):
a = lineRegex.findall(line)
</code></pre>
| 3 | 2016-07-26T17:37:24Z | [
"python",
"text-parsing"
] |
Python listing last 10 modified files and reading each line of all 10 files | 38,596,080 | <p>I need some help listing files in a directory and reading through each file using Python. I know how to do this using shell commands but is there a Pythonic way to do it?</p>
<p>I would like to:</p>
<p>1.) List all files in a directory.</p>
<p>2.) Grab the last 10 modified/latest files (preferably using a wildcard)</p>
<p>3.) Read through each line of all 10 files</p>
<p>Using shell commands I can:</p>
<pre><code>Linux_System# ls -ltr | tail -n 10
-rw-rw-rw- 1 root root 999934 Jul 26 01:06 data_log.569
-rw-rw-rw- 1 root root 999960 Jul 26 02:05 data_log.570
-rw-rw-rw- 1 root root 999968 Jul 26 03:13 data_log.571
-rw-rw-rw- 1 root root 999741 Jul 26 04:20 data_log.572
-rw-rw-rw- 1 root root 999928 Jul 26 05:31 data_log.573
-rw-rw-rw- 1 root root 999942 Jul 26 06:45 data_log.574
-rw-rw-rw- 1 root root 999916 Jul 26 07:46 data_log.575
-rw-rw-rw- 1 root root 999862 Jul 26 08:59 data_log.576
-rw-rw-rw- 1 root root 999685 Jul 26 10:15 data_log.577
-rw-rw-rw- 1 root root 999633 Jul 26 11:26 data_log.578
Linux_System# cat data_log.{569..578}
</code></pre>
<p>Using glob I am able to list the files and open a specific file but not sure how I can list only the last 10 modified files and feed the wildcard file list to the open function.</p>
<pre><code>import os, fnmatch, glob
files = glob.glob("data_event_log.*")
files.sort(key=os.path.getmtime)
print("\n".join(files))
data_event_log.569
data_event_log.570
data_event_log.571
data_event_log.572
data_event_log.573
data_event_log.574
data_event_log.575
data_event_log.576
data_event_log.577
data_event_log.578
with open(data_event_log.560, 'r') as f:
output_list = []
for line in f.readlines():
if line.startswith('Time'):
lineRegex = re.compile(r'\d{4}-\d{2}-\d{2}')
a = (lineRegex.findall(line))
</code></pre>
| 2 | 2016-07-26T17:11:55Z | 38,596,547 | <p>What you are looking for is a fixed-size sorted buffer. <code>collections.deque</code> does this, albeit without the sorting. So, here's a buffer that'll do what you need, and <code>main</code> shows you how to use it</p>
<pre><code>import bisect
import glob
import operator
import os
class Buffer:
def __init__(self, maxlen, minmax=1, key=None):
if key is None: key = lambda x: x
self.key = key
self.maxlen = maxlen
self.buffer = []
self.keys = []
self.minmax = minmax # 1 to track max values, -1 to track min values
# iterator variables
self.curr = 0
def __iter__(self): return self
def __next__(self):
if self.curr >= len(self.buffer): raise StopIteration
self.curr += 1
return self.buffer[self.curr-1]
def insert(self, x):
key = self.key(x)
idx = bisect.bisect_left(self.keys, key)
self.keys.insert(idx, key)
self.buffer.insert(idx, x)
if len(self.buffer) > self.maxlen:
if self.minmax>0:
self.buffer = self.buffer[-1 * self.maxlen :]
self.keys = self.keys[-1 * self.maxlen :]
elif self.minmax<0:
self.buffer = self.buffer[: self.maxlen]
self.keys = self.keys[: self.maxlen]
def main():
dirpath = "/path/to/directory"
modtime = lambda fpath: os.stat(fpath).st_mtime
buffer = Buffer(10, 1, modtime)
for fpath in glob.glob(os.path.join(dirpath, "*data_event_log.*")):
buffer.insert(fpath)
for fpath in buffer:
        print(fpath)  # open the file path here and process each line
</code></pre>
| 0 | 2016-07-26T17:40:28Z | [
"python",
"text-parsing"
] |
Python listing last 10 modified files and reading each line of all 10 files | 38,596,080 | <p>I need some help listing files in a directory and reading through each file using Python. I know how to do this using shell commands but is there a Pythonic way to do it?</p>
<p>I would like to:</p>
<p>1.) List all files in a directory.</p>
<p>2.) Grab the last 10 modified/latest files (preferably using a wildcard)</p>
<p>3.) Read through each line of all 10 files</p>
<p>Using shell commands I can:</p>
<pre><code>Linux_System# ls -ltr | tail -n 10
-rw-rw-rw- 1 root root 999934 Jul 26 01:06 data_log.569
-rw-rw-rw- 1 root root 999960 Jul 26 02:05 data_log.570
-rw-rw-rw- 1 root root 999968 Jul 26 03:13 data_log.571
-rw-rw-rw- 1 root root 999741 Jul 26 04:20 data_log.572
-rw-rw-rw- 1 root root 999928 Jul 26 05:31 data_log.573
-rw-rw-rw- 1 root root 999942 Jul 26 06:45 data_log.574
-rw-rw-rw- 1 root root 999916 Jul 26 07:46 data_log.575
-rw-rw-rw- 1 root root 999862 Jul 26 08:59 data_log.576
-rw-rw-rw- 1 root root 999685 Jul 26 10:15 data_log.577
-rw-rw-rw- 1 root root 999633 Jul 26 11:26 data_log.578
Linux_System# cat data_log.{569..578}
</code></pre>
<p>Using glob I am able to list the files and open a specific file but not sure how I can list only the last 10 modified files and feed the wildcard file list to the open function.</p>
<pre><code>import os, fnmatch, glob
files = glob.glob("data_event_log.*")
files.sort(key=os.path.getmtime)
print("\n".join(files))
data_event_log.569
data_event_log.570
data_event_log.571
data_event_log.572
data_event_log.573
data_event_log.574
data_event_log.575
data_event_log.576
data_event_log.577
data_event_log.578
with open(data_event_log.560, 'r') as f:
output_list = []
for line in f.readlines():
if line.startswith('Time'):
lineRegex = re.compile(r'\d{4}-\d{2}-\d{2}')
a = (lineRegex.findall(line))
</code></pre>
| 2 | 2016-07-26T17:11:55Z | 38,596,864 | <p>pythonic answer: </p>
<p>use <code>sorted()</code> with a lambda function, then use list slicing to get the 10 earliest or 10 latest or what have you.</p>
<pre><code>from glob import glob
from os import stat
files = glob("*")
sorted_list = sorted(files, key=lambda x: stat(x).st_mtime)
truncated_list = sorted_list[-10:]
</code></pre>
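The same idea reads a little more cleanly with <code>pathlib</code> (Python 3.4+). The snippet below builds its own throwaway files so it is fully self-contained — in real use you would just glob your log directory:

```python
import os
import tempfile
import time
from pathlib import Path

# build a few sample log files with staggered modification times
tmp = Path(tempfile.mkdtemp())
now = time.time()
for i in range(5):
    f = tmp / "data_event_log.{}".format(i)
    f.write_text("Time 2016-07-26\n")
    os.utime(str(f), (now - (5 - i) * 60, now - (5 - i) * 60))  # older files first

files = sorted(tmp.glob("data_event_log.*"), key=lambda f: f.stat().st_mtime)
latest = files[-3:]  # the 3 most recently modified
print([f.name for f in latest])  # ['data_event_log.2', 'data_event_log.3', 'data_event_log.4']
```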
| 0 | 2016-07-26T17:59:32Z | [
"python",
"text-parsing"
] |
Python equivalent of Excel's PERCENTILE.EXC | 38,596,100 | <p>I am using Pandas to compute some financial risk analytics, including Value at Risk. In short, to compute Value at Risk (VaR), you take a time series of simulated portfolio changes in value, and then compute a specific tail percentile loss. For example, 95% VaR is the 5th percentile figure in that time series.</p>
<p>I have my time series in a Pandas dataframe, and am currently using the pd.quantile() function to compute the percentile. My question is, typical market convention for VaR is to use an exclusionary percentile (i.e.: 95% VaR is interpreted as: there is a 95% chance your portfolio will not lose MORE than the computed number) - akin to how MS Excel PERCENTILE.EXC() works. Pandas quantile() works akin to how Excel's PERCENTILE.INC() works - it includes the specified percentile. I have scoured several python math packages as well as this forum for a python solution that uses the same methodology as PERCENTILE.EXC() in Excel with no luck. I was hoping someone here might have a suggestion?</p>
<p>Here is sample code.</p>
<pre><code>import pandas as pd
import numpy as np
test_pd = pd.Series([15,14,18,-2,6,-78,31,21,98,-54,-2,-36,5,2,46,-72,3,-2,7,9,34])
test_np = np.array([15,14,18,-2,6,-78,31,21,98,-54,-2,-36,5,2,46,-72,3,-2,7,9,34])
print 'pandas: ' + str(test_pd.quantile(.05))
print 'numpy: '+ str(np.percentile(test_np,5))
</code></pre>
<p>The answer I am looking for is -77.4.</p>
<p>Thanks,</p>
<p>Ryan</p>
| 3 | 2016-07-26T17:13:27Z | 38,597,170 | <p>EDIT: I just saw your edit. I think you are making a mistake. The value -77.4 is actually the 0.5th percentile of your data. Try <code>test_pd.quantile(.005)</code>. I believe that you must have made a mistake in Excel when specifying your percentile.</p>
<p>EDIT 2: I just tested it myself in Excel. For the 50-th percentile, I am getting the correct value in both Excel and Numpy/Pandas. For the 5th percentile however, I am getting -72 in Pandas/Numpy, and -74.6 in Excel. But Excel is just wrong here: it is very obvious that -74.6 is the 0.5th percentile, not the 5th...</p>
<p>FINAL EDIT: After some testing, it seems like Excel is behaving erratically around very small values of k with the <code>PERCENTILE.EXC()</code> function. Indeed, using the function with any k < 0.05 returns an error, so 0.05 must be a threshold below which the function is not working properly. I do not know why Excel chooses to return the 0.5th percentile when asked to exclude the 5th percentile (the logical behavior would be to return the 4.9th percentile, or the 4.99th...). However, both Numpy, Pandas and Excel return the same values for other values of k. For instance, <code>PERCENTILE.EXC(0.5) = 6</code>, and <code>test_pd.quantile(0.5) = 6</code> as well. I guess the lesson is that we need to be wary of Excel's behavior ;).</p>
<p>The way I understand your problem is: you want to know the value that corresponds to the k-th percentile of your data, this k-th percentile excluded. However, <code>pd.quantile()</code> returns the value that corresponds to your k-th percentile, this k-th percentile included. </p>
<p>I do not think that pd.quantile() returning the k-th percentile included is an issue. Indeed, assuming you want all stocks having a Value at Risk strictly above the 5-th percentile, you would do:</p>
<pre><code>mask = data["VaR"] < data["VaR"].quantile(0.05)
data_filt = data[mask]
</code></pre>
<p>Because you used a "smaller than" ( < ) operator, the values which exactly correspond to your 5-th percentile will be excluded, similar to Excel's PERCENTILE.EXC() function.</p>
<p>Do tell me if this is what you were looking for.</p>
| 2 | 2016-07-26T18:17:05Z | [
"python",
"pandas"
] |
Python equivalent of Excel's PERCENTILE.EXC | 38,596,100 | <p>I am using Pandas to compute some financial risk analytics, including Value at Risk. In short, to compute Value at Risk (VaR), you take a time series of simulated portfolio changes in value, and then compute a specific tail percentile loss. For example, 95% VaR is the 5th percentile figure in that time series.</p>
<p>I have my time series in a Pandas dataframe, and am currently using the pd.quantile() function to compute the percentile. My question is, typical market convention for VaR is to use an exclusionary percentile (i.e.: 95% VaR is interpreted as: there is a 95% chance your portfolio will not lose MORE than the computed number) - akin to how MS Excel PERCENTILE.EXC() works. Pandas quantile() works akin to how Excel's PERCENTILE.INC() works - it includes the specified percentile. I have scoured several python math packages as well as this forum for a python solution that uses the same methodology as PERCENTILE.EXC() in Excel with no luck. I was hoping someone here might have a suggestion?</p>
<p>Here is sample code.</p>
<pre><code>import pandas as pd
import numpy as np
test_pd = pd.Series([15,14,18,-2,6,-78,31,21,98,-54,-2,-36,5,2,46,-72,3,-2,7,9,34])
test_np = np.array([15,14,18,-2,6,-78,31,21,98,-54,-2,-36,5,2,46,-72,3,-2,7,9,34])
print 'pandas: ' + str(test_pd.quantile(.05))
print 'numpy: '+ str(np.percentile(test_np,5))
</code></pre>
<p>The answer I am looking for is -77.4.</p>
<p>Thanks,</p>
<p>Ryan</p>
| 3 | 2016-07-26T17:13:27Z | 38,597,798 | <p>It won't be as efficient as Pandas' own percentile but it should work:</p>
<pre><code>def quantile_exc(ser, q):
ser_sorted = ser.sort_values()
rank = q * (len(ser) + 1) - 1
assert rank > 0, 'quantile is too small'
rank_l = int(rank)
return ser_sorted.iat[rank_l] + (ser_sorted.iat[rank_l + 1] -
ser_sorted.iat[rank_l]) * (rank - rank_l)
ser = pd.Series([15,14,18,-2,6,-78,31,21,98,-54,-2,-36,5,2,46,-72,3,-2,7,9,34])
quantile_exc(ser, 0.05)
Out: -77.400000000000006
quantile_exc(ser, 0.1)
Out: -68.399999999999991
quantile_exc(ser, 0.3)
Out: -2.0
</code></pre>
<p>Note that Excel fails for small percentiles; it is not a bug. It is because ranks that fall below the minimum value are not suitable for interpolation. So you might want to check that rank > 0 in the <code>quantile_exc</code> function (see the assertion part).</p>
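For what it's worth, NumPy 1.22+ exposes this exclusive definition directly: <code>method='weibull'</code> in <code>np.percentile</code> uses the same interpolation scheme as Excel's PERCENTILE.EXC (on older NumPy versions this keyword does not exist, so a manual function remains the fallback):

```python
import numpy as np

data = np.array([15, 14, 18, -2, 6, -78, 31, 21, 98, -54, -2,
                 -36, 5, 2, 46, -72, 3, -2, 7, 9, 34])
# exclusive (Weibull / R type 6) percentile, as in PERCENTILE.EXC
print(np.percentile(data, 5, method='weibull'))  # approx. -77.4
```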
| 1 | 2016-07-26T18:54:44Z | [
"python",
"pandas"
] |
How can a Facebook app read its own posts? | 38,596,125 | <p>I'm developing a Django application that can also post to Facebook. I have a problem reading the posts that the application itself created.</p>
<p>Given that the permissions requested using <code>django-allauth</code> includes <code>publish_actions</code>, check the example code:</p>
<pre><code>import open_facebook as ofb
import time
# the following app access token is retrieved by:
# curl -F type=client_cred -F client_id=<my_application_id> \
# -F client_secret=<my_application_secret> \
# https://graph.facebook.com/oauth/access_token
APP_ACCESS_TOKEN= "<my_application_id>|blahblah_blahblah"
MY_FACEBOOK_UID= "<my_facebook_user_id>"
fb= ofb.OpenFacebook(APP_ACCESS_TOKEN)
# this works fine! see output later
print("adding a post…")
data= fb.set("/%s/feed" % MY_FACEBOOK_UID,
message="a test post, will be deleted\n" + time.asctime(),
link="some_link_to_my_application",
application="<my_application_id>")
print("data returned:")
print(data)
fbpost_id= data['id']
# this works too! it comes up empty, but I don't mind
print("getting feed…")
data= fb.get("/%s/feed" % MY_FACEBOOK_UID)
print("result:")
print(data)
# and this does not work.
print("getting post…")
data= fb.get(fbpost_id) # neither this nor ("/%s" % fbpost_id) works
print("data returned:")
print(data)
</code></pre>
<p>All of the above produces the following output:</p>
<pre><code>adding a post…
data returned:
{'id': '<a_very_nice_id_thank_you_fb>'}
getting feed…
result:
{'data': []}
getting post…
FB request, error type <class 'urllib.error.HTTPError'>, code 400
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 346, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 394, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 445, in execute
output = self.handle(*args, **options)
File "/home/<blahblah>/management/commands/blahtest.py", line 41, in handle
data= fb.get(fbpost_id)
File "/usr/local/lib/python3.5/dist-packages/open_facebook/api.py", line 731, in get
response = self.request(path, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/open_facebook/api.py", line 937, in request
response = self._request(url, post_data)
File "/usr/local/lib/python3.5/dist-packages/open_facebook/api.py", line 245, in _request
parsed_response['error'].get('code'))
File "/usr/local/lib/python3.5/dist-packages/open_facebook/api.py", line 338, in raise_error
raise error_class(error_message)
open_facebook.exceptions.ParameterException: Unsupported get request. Object with ID '<a_very_nice_id_thank_you_fb>' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api (error code 100)
</code></pre>
<p>So, an application token currently CAN post in my timeline BUT it can't retrieve what it posted.</p>
<p>I don't mind if my feed shows empty when requested by the app token, I have a record of all post ids created by this app. I just want the application to access posts, their likes etc, and <em>only</em> for the posts the app created.</p>
<p>How can I do that? What am I missing?</p>
<h3>Update</h3>
<p>I tried requesting the <code>user_posts</code> permission from the user; this results in the <code>/<uid>/feed</code> coming non-empty, but the third step still fails (I can't get the newly-created post by its id), so my question still applies.</p>
| 0 | 2016-07-26T17:15:20Z | 39,397,613 | <p>I am able to retrieve single posts as follows. I have installed facebook-sdk==2.0.0 and I've authorized the user_posts permission for my application. The post ID is of the form usernum_postnum.</p>
<pre><code>import facebook
graph_api = facebook.GraphAPI(access_token=access_token, version='2.7')
graph_api.request('<facebook_post_id>')
</code></pre>
| 0 | 2016-09-08T18:12:23Z | [
"python",
"facebook-graph-api",
"django-facebook"
] |