title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Python - Split a row into columns - csv data | 38,855,648 | <p>I am trying to read data from a CSV file and split each row into its respective columns.</p>
<p>But my regex fails when a particular column has <strong>commas within it</strong>.</p>
<p>eg: a,b,c,"d,e, g,",f </p>
<p>I want result like: </p>
<pre><code>a b c "d,e, g," f
</code></pre>
<p>which is 5 columns.</p>
<p>Here is the regex I am using to split the string by comma:</p>
<blockquote>
<p>,(?=(?:"[^"]*?(?:[^"]*)*))|,(?=[^"]+(?:,)|,+|$)</p>
</blockquote>
<p>but it fails for a few strings while it works for others.</p>
<p>All I am looking for is this: when I read CSV data into a dataframe/RDD using pyspark, I want to load and preserve all the columns without any mistakes.</p>
<p>Thank You</p>
| 2 | 2016-08-09T16:06:55Z | 38,855,765 | <p>You can't easily parse CSV files with regex.</p>
<p>My go-to toolkit for handling CSV from the Unix command line is <code>csvkit</code>, which you can get from <a href="https://csvkit.readthedocs.io" rel="nofollow">https://csvkit.readthedocs.io</a> . It has a Python library as well.</p>
<p>The Python docs for the standard csv library are here: <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a></p>
<p>There is an extensive discussion of parsing CSV here:</p>
<p><a href="http://programmers.stackexchange.com/questions/166454/can-the-csv-format-be-defined-by-a-regex">http://programmers.stackexchange.com/questions/166454/can-the-csv-format-be-defined-by-a-regex</a></p>
<p>This is a well-trodden path, and the libraries are good enough that you shouldn't roll your own code.</p>
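<p>For illustration, the standard <code>csv</code> module already handles the quoted-comma row from the question with no regex at all — a minimal sketch, separate from any pyspark specifics:</p>

```python
import csv
import io

# The sample row from the question; the quoted field contains commas.
row = 'a,b,c,"d,e, g,",f'
reader = csv.reader(io.StringIO(row))
fields = next(reader)
print(fields)       # ['a', 'b', 'c', 'd,e, g,', 'f']
print(len(fields))  # 5
```

<p>(pyspark's own CSV reader likewise has quoting options, so the same cases can be handled there without hand-rolled regex.)</p>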
| 2 | 2016-08-09T16:13:31Z | [
"python",
"regex",
"csv",
"pyspark",
"rdd"
] |
Retrieve Default ACL in Python Using Posix 1e | 38,855,666 | <p>Using the <a href="http://pylibacl.k1024.org/module.html" rel="nofollow">posix 1e Python module</a> I am able to get/set the ACL for a file without having to spawn a subprocess and call <code>getfacl</code>/<code>setfacl</code>:</p>
<pre><code>>>> import posix1e
>>> acl1 = posix1e.ACL(file="file.txt")
>>> print acl1
user::rw-
group::rw-
other::r--
</code></pre>
<p>I can also <em>apply</em> a default ACL and <em>delete</em> it:</p>
<pre><code>path = '/some/other/path/'
acl1.applyto(path, posix1e.ACL_TYPE_DEFAULT)
posix1e.delete_default(path)
</code></pre>
<p>However, <strong>I can not seem to work out how to <em>retrieve</em> the default ACL</strong>! Does anyone know how this can be done using the posix 1e module?</p>
| 0 | 2016-08-09T16:08:27Z | 38,856,272 | <p>Turns out there is a way to do this:</p>
<pre><code>default_acl1 = posix1e.ACL(filedef="/some/other/path/")
</code></pre>
| 0 | 2016-08-09T16:42:08Z | [
"python",
"acl"
] |
assertRaises for method with optional parameters | 38,855,717 | <p>I'm using <code>assertRaises</code> for <a href="https://docs.djangoproject.com/en/dev/topics/testing/" rel="nofollow">unit test in Django</a>.</p>
<p>Example method I want to test:</p>
<pre><code>def example_method(var, optional_var=None):
    if optional_var is not None:
        raise ExampleException()
</code></pre>
<p>My test method:</p>
<pre><code>def test_method(self):
    self.assertRaises(ExampleException, example_method, ???)
</code></pre>
<p>How should I pass the arguments to raise the exception?</p>
| 0 | 2016-08-09T16:10:43Z | 38,855,718 | <p>Two ways to do it:</p>
<ol>
<li><p><strong>Just like in the question but putting the args</strong>:</p>
<pre><code>def test_method(self):
    self.assertRaises(ExampleException, example_method, "some_var",
                      optional_var="not_none")
</code></pre></li>
<li><p><strong>With <code>with</code></strong>:</p>
<p>Like explained in <a href="https://docs.python.org/3.4/library/unittest.html#unittest.TestCase.assertRaises" rel="nofollow">Python Docs</a>:</p>
<pre><code>def test_method(self):
    with self.assertRaises(ExampleException):
        example_method("some_var", "not_none")
</code></pre></li>
</ol>
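<p>For completeness, both styles combined into one runnable sketch (the exception and method are the stand-ins from the question):</p>

```python
import unittest

class ExampleException(Exception):
    pass

def example_method(var, optional_var=None):
    if optional_var is not None:
        raise ExampleException()

class TestExample(unittest.TestCase):
    def test_positional_and_keyword_args(self):
        self.assertRaises(ExampleException, example_method, "some_var",
                          optional_var="not_none")

    def test_context_manager(self):
        with self.assertRaises(ExampleException):
            example_method("some_var", "not_none")

# run the two tests programmatically so the outcome is inspectable
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```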
| 1 | 2016-08-09T16:10:43Z | [
"python",
"django",
"unit-testing"
] |
Append float to list with for loop in python | 38,855,729 | <p>I am trying to add a list of monthly temperatures into a big list that will contain 24 months of temperatures. The problem is that they are given in floats, but to append items, they must be integers. </p>
<pre><code>temperatures = []
np.array(temperatures, dtype = np.float32)
</code></pre>
<p>(after my first month, I append my values to the big list temperatures and empty TEMP1 for the next month)</p>
<pre><code>for item in TEMP1:
    np.insert(temperatures, TEMP1[item])
</code></pre>
<p>the message of error is : </p>
<pre><code>File "/home/piscopo/Bureau/EC/Alert_extraction.py", line 87, in <module>
np.insert(temperatures, TEMP1[item])
TypeError: list indices must be integers, not numpy.float32
</code></pre>
<p>Thank you</p>
| -3 | 2016-08-09T16:11:19Z | 38,855,789 | <p>You have to save your numpy array in a variable, and then you can add your TEMP1 monthly temperatures all at once with <code>np.append()</code>, like this:</p>
<pre><code>import numpy as np
TEMP1 = [22.4, 14.4, 12.3]
temperatures = []
floatTemperatures = np.array(temperatures, dtype = np.float32)
floatTemperatures = np.append(floatTemperatures, TEMP1)
</code></pre>
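<p>One detail worth spelling out: <code>np.append</code> returns a <em>new</em> array rather than modifying its argument, so the result must be reassigned each time — a quick check of the approach above:</p>

```python
import numpy as np

TEMP1 = [22.4, 14.4, 12.3]
float_temps = np.array([], dtype=np.float32)
# np.append returns a new array, so reassignment is required
float_temps = np.append(float_temps, TEMP1)
print(len(float_temps))  # 3
```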
| 0 | 2016-08-09T16:14:35Z | [
"python",
"list",
"append"
] |
matplotlib generating strange y-axis on certain data sets? | 38,855,748 | <p>I am writing a python 2.7 script that generates multiple matplotlib graphs in a loop.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
</code></pre>
<p>Here I scale the data so the first point is 100</p>
<pre><code>scalefac = 100/float(sdata[0])
for dat in range(len(sdata)):
    sdata[dat] = float(sdata[dat])*scalefac
</code></pre>
<p>Then plot it.</p>
<pre><code>y = sdata
x = np.arange(0, len(sdates))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
aaa = tuple(sdates)
ax.scatter(x,y)
ax.plot(x,y)
plt.xticks(x,aaa)
plt.xlabel('date of run (mm/dd/yy)')
plt.ylabel('power % of baseline')
plt.title('Power Estimation Over Time')
plt.grid(True)
plt.savefig("dump/graph.%s.png" % str(dirlist[d]))
plt.close(fig)
</code></pre>
<p>This seems to work correctly, but only when the y data is not too close together. For instance, when y is [100, 95] or [100, 100, 110] the y-axis has the right units and the points are in the right places. When y is [100, 100] or [100, 100.5] the y-axis is in units of .1 and the data is plotted at ~.2.</p>
<p>If it goes through the loop twice and one is [100, 95] and the other is [100, 100.5], only the [100, 95] will get plotted correctly.</p>
<h3>Bad graph:</h3>
<p><img src="http://i.stack.imgur.com/Z5mWZ.png" alt="bad graph"></p>
<h3>Good graph:</h3>
<p><img src="http://i.stack.imgur.com/qMxcD.png" alt="good graph"></p>
<p>What the heck?</p>
| 0 | 2016-08-09T16:12:32Z | 38,859,468 | <p>If I understand you correctly the problem is the offset, e.g. <code>0.0002 + 9.9998e1</code>, which you want to be plotted as <code>100</code>, right? If so <a href="https://stackoverflow.com/questions/14711655/how-to-prevent-numbers-being-changed-to-exponential-form-in-python-matplotlib-fi">this answer</a> might help you. </p>
<p>If you think it's too long to read, here is a quick code example. The key thing is <code>ax.get_yaxis().get_major_formatter().set_useOffset(False)</code>, which turns off the use of an offset in the y-axis tick labels.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = [0, 1, 2]
y = [100, 100, 100.1]
fig = plt.figure()
ax = fig.add_subplot(2, 1, 1)
ax2 = fig.add_subplot(2, 1, 2)
# The bad plot
ax.scatter(x,y)
ax.plot(x,y)
# The good plot
ax2.scatter(x,y)
ax2.plot(x,y)
ax2.get_yaxis().get_major_formatter().set_useOffset(False)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/PVaYU.png" rel="nofollow"><img src="http://i.stack.imgur.com/PVaYU.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T20:02:31Z | [
"python",
"python-2.7",
"numpy",
"matplotlib"
] |
Send data from Python to a node server with sockets (NodeJS, Socket.io) | 38,855,818 | <p>I am trying to send sensor data (in Python) from my Raspberry Pi 3 to my local node server.</p>
<p>I found a module for python called <a href="http://www.python-requests.org/en/latest/user/quickstart/#passing-parameters-in-urls" rel="nofollow">requests</a> to send data to a server.</p>
<p>Here I'm trying to send the value 22 (later there will be sensor data) from my Raspberry Pi 3 to my local node server with socket.io. The requests.get() works, but the put command doesn't send the data.</p>
<p>Can you tell me where the mistake is?</p>
<pre><code>#!/usr/bin/env python
#
import requests
r = requests.get('http://XXX.XXX.XXX.XXX:8080');
print(r)
r = requests.put('http://XXX.XXX.XXX.XXX:8080', data = {'rasp_param':'22'});
</code></pre>
<p>In my server.js I try to get the data, but somehow nothing is getting received.</p>
<p><strong>server.js</strong></p>
<pre><code>var express = require('express')
, app = express()
, server = require('http').createServer(app)
, io = require('socket.io').listen(server)
, conf = require('./config.json');
// Webserver
server.listen(conf.port);
app.configure(function(){
    app.use(express.static(__dirname + '/public'));
});
app.get('/', function (req, res) {
    res.sendfile(__dirname + '/public/index.html');
});
// Websocket
io.sockets.on('connection', function (socket) {
    // Here I want to get the data
    io.sockets.on('rasp_param', function (data){
        console.log(data);
    });
});
// Server Details
console.log('The server runs on http://127.0.0.1:' + conf.port + '/');
</code></pre>
| 2 | 2016-08-09T16:16:39Z | 38,856,592 | <p>You are using HTTP PUT from Python, but you are listening with a websocket server on the Node.js side.</p>
<p><strong>Either</strong> have node listening for HTTP POST (I'd use POST rather than PUT):</p>
<pre><code>app.post('/data', function (req, res) {
    //do stuff with the data here
});
</code></pre>
<p><strong>Or</strong> have a websocket client on Python's side (this has to run inside an asyncio coroutine):</p>
<pre><code># inside an asyncio coroutine, with the websockets package imported:
ws = yield from websockets.connect("ws://10.1.10.10")
yield from ws.send(json.dumps({'param': 'value'}))
</code></pre>
<p>A persistent websocket connection is probably the best choice.</p>
| 1 | 2016-08-09T17:00:45Z | [
"python",
"node.js",
"websocket",
"socket.io"
] |
Making functions to print first and last 10 elements of array | 38,855,944 | <p>I couldn't find anything related to printing the first 10 and last 10 elements of an array that's imported from a text file. Here's what I need to do:</p>
<ul>
<li>Add in a function will print the first ten elements of the array.</li>
<li>Add in a function that will print the last ten elements of the array.</li>
<li>Use the len() function to get the size of the array.</li>
<li>Use your functions to print the first ten elements of the array and then the last ten elements.</li>
<li>Then sort the Array from highest to lowest.</li>
<li>Use your functions to print the first ten elements of the array and then the last ten elements of the sorted array.</li>
</ul>
<p>Here's my code. Ignore the average and sum, because they are needed for another part of the program.</p>
<pre><code>def avgcalc(myList):
    intTotal = 0
    intCount = 0
    intLenMyList = len(myList)
    while(intCount < intLenMyList):
        intTotal += myList[intCount]
        intCount += 1
    return intTotal/intLenMyList
def sum1(myList):
    sum = 0
    for element in myList:
        sum += element
    print (sum)
def ten(myList):
    for item in myList[:10]:
        print(item)
arr_intValues = []
myFile = open("FinalData.Data", "r")
print("File read complete")
for myLine in myFile:
    arr_intValues.append(int(myLine))
print (avgcalc(arr_intValues))
print (sum1(arr_intValues))
ten(myList)
</code></pre>
| -2 | 2016-08-09T16:22:54Z | 38,856,058 | <p>You need to define <code>myList</code>, or simply pass <code>arr_intValues</code> into the function call for <code>ten</code>, i.e.</p>
<pre><code> ten(arr_intValues)
</code></pre>
<p>Print the first ten (as you do above)</p>
<pre><code>for item in myList[:10]:
    print (item)
</code></pre>
<p>Print the last ten</p>
<pre><code>for item in myList[-10:]:
    print (item)
</code></pre>
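<p>The remaining assignment bullets (using <code>len()</code> and sorting from highest to lowest) can be sketched end to end; the helper names and the sample data here are made up:</p>

```python
def print_first_ten(values):
    for item in values[:10]:
        print(item)

def print_last_ten(values):
    for item in values[-10:]:
        print(item)

arr_int_values = list(range(25))   # stand-in for the data read from FinalData.Data
size = len(arr_int_values)         # size of the array via len()
print_first_ten(arr_int_values)
print_last_ten(arr_int_values)
arr_int_values.sort(reverse=True)  # highest to lowest
print_first_ten(arr_int_values)
print_last_ten(arr_int_values)
```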
| 2 | 2016-08-09T16:29:12Z | [
"python",
"arrays",
"list",
"elements"
] |
Making functions to print first and last 10 elements of array | 38,855,944 | <p>I couldn't find anything related to printing the first 10 and last 10 elements of an array that's imported from a text file. Here's what I need to do:</p>
<ul>
<li>Add in a function will print the first ten elements of the array.</li>
<li>Add in a function that will print the last ten elements of the array.</li>
<li>Use the len() function to get the size of the array.</li>
<li>Use your functions to print the first ten elements of the array and then the last ten elements.</li>
<li>Then sort the Array from highest to lowest.</li>
<li>Use your functions to print the first ten elements of the array and then the last ten elements of the sorted array.</li>
</ul>
<p>Here's my code. Ignore the average and sum, because they are needed for another part of the program.</p>
<pre><code>def avgcalc(myList):
    intTotal = 0
    intCount = 0
    intLenMyList = len(myList)
    while(intCount < intLenMyList):
        intTotal += myList[intCount]
        intCount += 1
    return intTotal/intLenMyList
def sum1(myList):
    sum = 0
    for element in myList:
        sum += element
    print (sum)
def ten(myList):
    for item in myList[:10]:
        print(item)
arr_intValues = []
myFile = open("FinalData.Data", "r")
print("File read complete")
for myLine in myFile:
    arr_intValues.append(int(myLine))
print (avgcalc(arr_intValues))
print (sum1(arr_intValues))
ten(myList)
</code></pre>
| -2 | 2016-08-09T16:22:54Z | 38,856,232 | <p>Read a file into a list, one element per line:</p>
<pre><code>with open("filename.txt") as f:
    lines = f.read().splitlines()
</code></pre>
<p>Print the first 10 elements of the list:</p>
<pre><code>print("\n".join(lines[:10]))
</code></pre>
<p>Print the last 10 elements of the list:</p>
<pre><code>print("\n".join(lines[-10:]))
</code></pre>
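<p>A self-contained check of the same pattern, with <code>io.StringIO</code> standing in for the real file:</p>

```python
import io

# io.StringIO plays the role of open("filename.txt")
fake_file = io.StringIO("\n".join(str(n) for n in range(25)))
lines = fake_file.read().splitlines()
print("\n".join(lines[:10]))   # 0 .. 9
print("\n".join(lines[-10:]))  # 15 .. 24
```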
| 0 | 2016-08-09T16:39:47Z | [
"python",
"arrays",
"list",
"elements"
] |
Prevent Django from parsing incorrect TIMESTAMP field | 38,856,034 | <p>I have a sqlite database (from an iOS application) similar to:</p>
<pre><code>CREATE TABLE ZSPL ( Z_PK INTEGER PRIMARY KEY, ZWHEN TIMESTAMP, ZWHEN2 real)
1, 492445270.121238, 492445270.121238
2, 492445270.871551, 492445270.871551
</code></pre>
<p>I was hoping to build a Django (v1.10) model to work with it <a href="https://docs.djangoproject.com/en/1.10/howto/custom-model-fields/#converting-values-to-python-objects" rel="nofollow">following these docs</a>:</p>
<pre><code>APPLE_EPOCH = datetime(year=2001, month=1, day=1, hour=0, second=0)
def apple_time_to_datetime(apple_time):
    if isinstance(apple_time, datetime):
        return apple_time
    return APPLE_EPOCH + timedelta(seconds=float(apple_time))
class AppleDateTimeField(models.DateTimeField):
    def from_db_value(self, value, expression, connection, context):
        return apple_time_to_datetime(value)
    def to_python(self, value):
        return apple_time_to_datetime(value)
class Spl(models.Model):
    when = AppleDateTimeField(db_column='ZWHEN')
</code></pre>
<p>However this errors because <code>parse_datetime</code> registered by the converters <a href="https://github.com/django/django/blob/master/django/db/backends/sqlite3/base.py#" rel="nofollow">here</a> attempts to parse the numeric value in <code>ZWHEN</code>. If these converters are commented out then it errors on the dbapi2 <code>convert_timestamp</code> function <a href="https://hg.python.org/cpython/file/3.5/Lib/sqlite3/dbapi2.py#l66" rel="nofollow">here</a>.</p>
<p>Similarly, when the <code>db_column='ZWHEN'</code> is changed to <code>db_column='ZWHEN2'</code> it errors because a datetime converter is registered before the custom converters. You can see this in the value of <code>conv</code> on <a href="https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L785" rel="nofollow">this line</a>.</p>
<p>I was wondering if there was anyway to get around this problem and have the custom <code>from_db_value</code> converter called first?</p>
<p>Temporarily I've resorted to using the copied <code>ZWHEN2</code> instead of the <code>ZWHEN</code> field and editing the <a href="https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L778" rel="nofollow"><code>get_converters</code></a> function to read:</p>
<pre><code>converters[i] = (field_converters + backend_converters, expression)
</code></pre>
<p>instead of:</p>
<pre><code>converters[i] = (backend_converters + field_converters, expression)
</code></pre>
| 0 | 2016-08-09T16:27:49Z | 38,856,629 | <p>This isn't the answer I'm looking for as it involves changes to the Django library but at least it doesn't involve editing the source code itself.</p>
<p>In the same file as the model I've added the following lines:</p>
<pre><code>from django.db.backends.sqlite3.base import Database, decoder
Database.register_converter(str("timestamp"), decoder(apple_time_to_datetime))
Database.register_converter(str("TIMESTAMP"), decoder(apple_time_to_datetime))
</code></pre>
<p>This seems to have "solved" the problem.</p>
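<p>The underlying collision — and the override — can be demonstrated with the standard <code>sqlite3</code> module alone, no Django involved. Registering a converter for the declared <code>TIMESTAMP</code> type replaces the default ISO-format one; a sketch using the schema and value from the question:</p>

```python
import sqlite3
from datetime import datetime, timedelta

APPLE_EPOCH = datetime(2001, 1, 1)

def apple_time_to_datetime(raw):
    # sqlite3 hands converters the raw column value as bytes
    return APPLE_EPOCH + timedelta(seconds=float(raw.decode()))

# overrides the default "timestamp" converter, which would choke on a float
sqlite3.register_converter("TIMESTAMP", apple_time_to_datetime)
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE ZSPL (Z_PK INTEGER PRIMARY KEY, ZWHEN TIMESTAMP)")
con.execute("INSERT INTO ZSPL VALUES (1, 492445270.121238)")
when = con.execute("SELECT ZWHEN FROM ZSPL").fetchone()[0]
print(when)  # a datetime on 2016-08-09
```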
| 0 | 2016-08-09T17:03:18Z | [
"python",
"django",
"sqlite3"
] |
Matplotlib; adding circle to subplot - Issue/Confused | 38,856,061 | <p>Bit of an odd one, and I'm clearly missing something, but I'm getting some really weird behaviour, and I can't work out what I'm doing wrong.</p>
<p>I have a plot with subplots in a grid format (for the sake of this post, I'll say just a 2 by 2 grid). I want to plot some stuff on each and also add a circle. Should be easy, but it's not acting as I expect.</p>
<p>Example Code 1:</p>
<pre><code>import matplotlib.pyplot as plt
x = [ -1.0, -0.5, 0.0, 0.5, 1.0 ]
y = [ 0.7, 0.2, 1.0, 0.0, 0.0 ]
circle = plt.Circle( ( 0, 0 ), 1 )
fig, axes = plt.subplots( 2, 2 )
axes[ 0, 0 ].plot( x, y )
axes[ 1, 1 ].plot( x, y )
axes[ 0, 0 ].add_patch( circle )
axes[ 1, 1 ].add_patch( circle )
plt.show( )
</code></pre>
<p>Output 1:</p>
<p><a href="http://i.stack.imgur.com/5BUzq.png" rel="nofollow"><img src="http://i.stack.imgur.com/5BUzq.png" alt="Output 1"></a></p>
<p>Example Code 2:</p>
<pre><code>import matplotlib.pyplot as plt
x = [ -1.0, -0.5, 0.0, 0.5, 1.0 ]
y = [ 0.7, 0.2, 1.0, 0.0, 0.0 ]
circle = plt.Circle( ( 0, 0 ), 1 )
fig, axes = plt.subplots( 2, 2 )
axes[ 0, 0 ].plot( x, y )
axes[ 1, 1 ].plot( x, y )
axes[ 0, 0 ].add_patch( circle )
#axes[ 1, 1 ].add_patch( circle )
plt.show( )
</code></pre>
<p>Output 2:</p>
<p><a href="http://i.stack.imgur.com/6zacH.png" rel="nofollow"><img src="http://i.stack.imgur.com/6zacH.png" alt="Output 2"></a></p>
<p>Example Code 3:</p>
<pre><code>import matplotlib.pyplot as plt
x = [ -1.0, -0.5, 0.0, 0.5, 1.0 ]
y = [ 0.7, 0.2, 1.0, 0.0, 0.0 ]
circle = plt.Circle( ( 0, 0 ), 1 )
fig, axes = plt.subplots( 2, 2 )
axes[ 0, 0 ].plot( x, y )
axes[ 1, 1 ].plot( x, y )
#axes[ 0, 0 ].add_patch( circle )
axes[ 1, 1 ].add_patch( circle )
plt.show( )
</code></pre>
<p>Output 3:<br>
<a href="http://i.stack.imgur.com/TuYXe.png" rel="nofollow"><img src="http://i.stack.imgur.com/TuYXe.png" alt="Output 3"></a></p>
<p>I really don't understand this behaviour (why does example 2 work but not 1 or 3?), or what I'm doing to cause it. Can anyone shed some light? Thanks in advance.</p>
| 0 | 2016-08-09T16:29:19Z | 38,856,487 | <p>You are using the same <code>circle</code> patch for two different subplots; I think that is what creates the problem. It throws an error:</p>
<blockquote>
<p>Can not reset the axes. You are probably trying to re-use an artist in more than one Axes which is not supported</p>
</blockquote>
<p>You need to create a different circle for each of the subplots:</p>
<pre><code>import matplotlib.pyplot as plt
x = [ -1.0, -0.5, 0.0, 0.5, 1.0 ]
y = [ 0.7, 0.2, 1.0, 0.0, 0.0 ]
circle1 = plt.Circle( ( 0, 0 ), 1 )
circle2 = plt.Circle( ( 0, 0 ), 1 )
fig, axes = plt.subplots( 2, 2 )
axes[ 0, 0 ].plot( x, y )
axes[ 1, 1 ].plot( x, y )
axes[ 0, 0 ].add_patch( circle1 )
axes[ 1, 1 ].add_patch( circle2 )
plt.show( )
</code></pre>
| 1 | 2016-08-09T16:53:54Z | [
"python",
"matplotlib",
"geometry",
"subplot"
] |
Get the absolute value of a sum in an Excel sheet using openpyxl | 38,856,115 | <p>I am starting to use openpyxl and I want to copy the sum of a row.
In Excel the value is 150, but when I try to print it, the output I get is the formula, not the actual value: </p>
<pre><code>=SUM(B1:B19)
</code></pre>
<p>The script I use is: </p>
<pre><code>print(ws["B20"].value)
</code></pre>
<p>Using "data_only" didn't work.</p>
<pre><code>wb = ("First_File_b.xlsx" , data_only=True)
</code></pre>
<p>Any idea how I can solve to obtain the numerical value? Help would be greatly appreciated. </p>
| 0 | 2016-08-09T16:32:22Z | 38,858,349 | <p>Okay, here's a simple example</p>
<p>I have created a workbook whose first sheet, "Feuil1" (French locale), contains <code>A1,...,A7</code> as <code>1,2,3,4,5,6,7</code> and <code>A8=SUM(A1:A7)</code>.</p>
<p>Here's the code; it could perhaps be adapted to other operators, though maybe not so simply. It also supports ranges such as A1:B12 (untested), and there is no parsing support for columns like <code>AA</code>, although that could be added.</p>
<pre><code>import openpyxl,re
fre = re.compile(r"=(\w+)\((\w+):(\w+)\)$")
cre = re.compile(r"([A-Z]+)(\d+)")
def the_sum(a,b):
    return a+b
d=dict()
d["SUM"] = the_sum
def get_evaluated_value(w,sheet_name,cell_name):
    result = w[sheet_name][cell_name].value
    if isinstance(result,int) or isinstance(result,float):
        pass
    else:
        m = fre.match(result)
        if m:
            g = m.groups()
            operator=d[g[0]] # ATM only sum is supported
            # compute range
            mc1 = cre.match(g[1])
            mc2 = cre.match(g[2])
            start_col = ord(mc1.group(1))
            end_col = ord(mc2.group(1))
            start_row = int(mc1.group(2))
            end_row = int(mc2.group(2))
            result = 0
            for i in range(start_col,end_col+1):
                for j in range(start_row,end_row+1):
                    c = chr(i)+str(j)
                    result = operator(result,w[sheet_name][c].value)
    return result
w = openpyxl.load_workbook(r"C:\Users\dartypc\Desktop\test.xlsx")
print(get_evaluated_value(w,"Feuil1","A2"))
print(get_evaluated_value(w,"Feuil1","A8"))
</code></pre>
<p>output:</p>
<pre><code>2
28
</code></pre>
<p>yay!</p>
| 1 | 2016-08-09T18:52:14Z | [
"python",
"openpyxl"
] |
Get the absolute value of a sum in an Excel sheet using openpyxl | 38,856,115 | <p>I am starting to use openpyxl and I want to copy the sum of a row.
In Excel the value is 150, but when I try to print it, the output I get is the formula, not the actual value: </p>
<pre><code>=SUM(B1:B19)
</code></pre>
<p>The script I use is: </p>
<pre><code>print(ws["B20"].value)
</code></pre>
<p>Using "data_only" didn't work.</p>
<pre><code>wb = ("First_File_b.xlsx" , data_only=True)
</code></pre>
<p>Any idea how I can solve to obtain the numerical value? Help would be greatly appreciated. </p>
| 0 | 2016-08-09T16:32:22Z | 38,919,882 | <p>I have solved the matter using a combination of openpyxl and pandas:</p>
<pre><code>import pandas as pd
import openpyxl
from openpyxl import Workbook , load_workbook
source_file = "Test.xlsx"
# write to file
wb = load_workbook (source_file)
ws = wb.active
ws.title = "hello world"
ws.append ([10,10])
wb.save(source_file)
# read from file
df = pd.read_excel(source_file)
sum_jan = df ["Jan"].sum()
print (sum_jan)
</code></pre>
| 0 | 2016-08-12T13:59:48Z | [
"python",
"openpyxl"
] |
Unable to Load an Image from a URL in Tkinter | 38,856,128 | <p>My goal is to display a JPG image from a URL using tkinter in Python.</p>
<p>This is the <a href="http://stackoverflow.com/questions/6086262/python-3-how-to-retrieve-an-image-from-the-web-and-display-in-a-gui-using-tkint">stackoverflow link</a> that I used as a reference. But when I try to run the code, I receive a bunch of errors such as:</p>
<ul>
<li>KeyError: b'R0l.......</li>
<li>AttributeError: 'PhotoImage' object has no attribute '_PhotoImage__photo'</li>
</ul>
<p>Does anyone have the solution to this?</p>
<p>This is the code:</p>
<pre><code>import tkinter as tk
from PIL import Image, ImageTk
from urllib.request import urlopen
import base64
root = tk.Tk()
URL = "http://www.universeofsymbolism.com/images/ram-spirit-animal.jpg"
u = urlopen(URL)
raw_data = u.read()
u.close()
b64_data = base64.encodestring(raw_data)
photo = ImageTk.PhotoImage(b64_data)
label = tk.Label(image=photo)
label.image = photo
label.pack()
root.mainloop()
</code></pre>
| 0 | 2016-08-09T16:33:08Z | 38,857,099 | <p>The first error comes from not specifying the <code>data</code> parameter, i.e. <code>ImageTk.PhotoImage(data=b64_data)</code>. However, even with that fixed, I'm unsure why <code>PhotoImage</code> is unable to read the base64 data.</p>
<p>A workaround would be to use <code>BytesIO</code> from the <code>io</code> module. You can pass in the raw data you read from the image into a <code>BytesIO</code>, open it in <code>Image</code> and then pass that into <code>PhotoImage</code>.</p>
<p>I found the code for opening the image from <a href="http://effbot.org/imagingbook/image.htm#tag-Image.open" rel="nofollow">here</a>.</p>
<pre><code>import tkinter as tk
from PIL import Image, ImageTk
from urllib.request import urlopen
from io import BytesIO
root = tk.Tk()
URL = "http://www.universeofsymbolism.com/images/ram-spirit-animal.jpg"
u = urlopen(URL)
raw_data = u.read()
u.close()
im = Image.open(BytesIO(raw_data))
photo = ImageTk.PhotoImage(im)
label = tk.Label(image=photo)
label.image = photo
label.pack()
root.mainloop()
</code></pre>
<p>If anybody has a better answer as to why the encoding fails, it would be a more appropriate answer to this question.</p>
| 1 | 2016-08-09T17:31:30Z | [
"python",
"tkinter",
"base64",
"python-imaging-library",
"urllib"
] |
Simple multithread for loop in Python | 38,856,172 | <p>I searched everywhere and can't find a simple example of iterating over a loop with multithreading.</p>
<p>For example, how can I do to multithread this loop?</p>
<pre><code>for item in range(0, 1000):
    print item
</code></pre>
<p>Is there any way to cut it in like 4 threads, so one thread has 250 iterations?</p>
<p>Thank you!</p>
| 0 | 2016-08-09T16:35:51Z | 38,856,378 | <p>Easiest way is with <a href="https://docs.python.org/2.7/library/multiprocessing.html#module-multiprocessing.dummy" rel="nofollow">multiprocessing.dummy</a> (which uses threads instead of processes) and a <a href="https://docs.python.org/2.7/library/multiprocessing.html#module-multiprocessing.pool" rel="nofollow">Pool</a></p>
<pre><code>import multiprocessing.dummy as mp
def do_print(s):
    print s
if __name__=="__main__":
    p=mp.Pool(4)
    p.map(do_print,range(0,10)) # range(0,1000) if you want to replicate your example
    p.close()
    p.join()
</code></pre>
<p>Maybe you want to try real multiprocessing, too if you want to better utilize multiple CPUs but there are several caveats and <a href="https://docs.python.org/2.7/library/multiprocessing.html#programming-guidelines" rel="nofollow">guidelines</a> to follow then.</p>
<p>Possibly other methods of <code>Pool</code> would better suit your needs - depending on what you are actually trying to do.</p>
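<p>For example, <code>map</code> also collects the workers' return values, in input order, which is often more useful than printing from inside them (a small sketch):</p>

```python
import multiprocessing.dummy as mp

def square(n):
    return n * n

with mp.Pool(4) as p:  # the pool can be used as a context manager
    results = p.map(square, range(10))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```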
| 1 | 2016-08-09T16:47:27Z | [
"python",
"multithreading",
"iteration"
] |
Simple multithread for loop in Python | 38,856,172 | <p>I searched everywhere and can't find a simple example of iterating over a loop with multithreading.</p>
<p>For example, how can I do to multithread this loop?</p>
<pre><code>for item in range(0, 1000):
    print item
</code></pre>
<p>Is there any way to cut it in like 4 threads, so one thread has 250 iterations?</p>
<p>Thank you!</p>
| 0 | 2016-08-09T16:35:51Z | 38,856,443 | <p>You'll have to do the splitting manually:</p>
<pre><code>import threading
def ThFun(start, stop):
    for item in range(start, stop):
        print item
for n in range(0, 1000, 100):
    stop = n + 100 if n + 100 <= 1000 else 1000
    threading.Thread(target = ThFun, args = (n, stop)).start()
</code></pre>
<p>This code uses <em>multithreading</em>, which means that everything will be run within a single Python process (i.e. only one Python interpreter will be launched). </p>
<p><em>Multiprocessing</em>, discussed in the other answer, means <em>running some code in several Python interpreters</em> (in several <em>processes</em>, not <em>threads</em>). This may make use of all the CPU cores available, so this is useful when you're focusing on the speed of your code (<em>print a ton of numbers until the terminal hates you!</em>), not simply on parallel processing. <sup>1</sup></p>
<hr>
<p><sup>1. <code>multiprocessing.dummy</code> turns out to be <a href="http://stackoverflow.com/questions/26432411/multiprocessing-dummy-in-python">a wrapper around the <code>threading</code> module</a>. <code>multiprocessing</code> and <code>multiprocessing.dummy</code> have the same interface, but the first module does parallel processing using <em>processes</em>, while the latter - using <em>threads</em>. </sup></p>
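<p>If the exact four-way split from the question is wanted — plus a <code>join()</code> so the main thread waits for the workers — a variant sketch (collecting into per-thread lists instead of printing, so the result is checkable):</p>

```python
import threading

def collect(start, stop, out):
    # each thread fills its own list, so no locking is needed
    out.extend(range(start, stop))

threads, chunks = [], []
for n in range(0, 1000, 250):  # exactly 4 threads of 250 iterations each
    chunk = []
    chunks.append(chunk)
    t = threading.Thread(target=collect, args=(n, min(n + 250, 1000), chunk))
    threads.append(t)
    t.start()
for t in threads:
    t.join()  # wait for every thread to finish
print([len(c) for c in chunks])  # [250, 250, 250, 250]
```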
| 0 | 2016-08-09T16:51:42Z | [
"python",
"multithreading",
"iteration"
] |
Cannot Import easygui module | 38,856,178 | <p>This is my first question on Stack Overflow, so forgive me if I do something wrong.
I've been using Python for a few months. I'm trying to make a simple GUI. I came across EasyGUI. </p>
<p>When i try to import the module, i get an error:</p>
<pre><code> Traceback (most recent call last):
File "C:/Users/matthewr/PycharmProjects/testing start/Tsting.py", line 1, in <module>
import easygui
File "C:\Users\matthewr\AppData\Local\Programs\Python\Python35-32\lib\site-packages\easygui\__init__.py", line 50, in <module>
from .boxes.choice_box import choicebox
File "C:\Users\matthewr\AppData\Local\Programs\Python\Python35-32\lib\site-packages\easygui\boxes\choice_box.py", line 76
except Exception, e:
^
SyntaxError: invalid syntax
</code></pre>
<p>I erased everything in my code except <code>import easygui</code> but the error still comes up.</p>
<p>I uninstalled and reinstalled using pip, but no luck.</p>
<p>any help would be appreciated.</p>
| 0 | 2016-08-09T16:36:21Z | 38,858,770 | <p>Try easygui 0.96.0 </p>
<p>I've been using easygui for some time but I had exactly the same problem today on a new machine with a fresh install of 3.5.2 with easygui 0.98.0. However, easygui 0.96.0 works for me.</p>
<ol>
<li>reverted to Py 3.5.1, same problem. </li>
<li>easygui 0.97 same issue on both Py 3.5.1, and 3.5.2</li>
<li>Py 3.5.2 with easygui 0.96.0 - works fine!</li>
</ol>
<p>pip uninstall easygui</p>
<p>pip install easygui==0.96.0</p>
| 3 | 2016-08-09T19:18:23Z | [
"python",
"import",
"importerror",
"easygui"
] |
Python: String replace index | 38,856,180 | <p>I want to replace <code>str[9:11]</code> with another string.
If I do <code>str.replace(str[9:11], "###")</code> it doesn't work, because the sequence at [9:11] can occur more than once.
If str is <code>"cdabcjkewabcef"</code> I would get <code>"cd###jkew###ef"</code>, but I only want to replace the second occurrence.</p>
| 0 | 2016-08-09T16:36:22Z | 38,856,340 | <p>Given <code>txt</code> and <code>s</code> (the substring you want to replace):</p>
<pre><code>txt.replace(s, "***", 1).replace(s, "###").replace("***", s)
</code></pre>
<p>Another way:</p>
<pre><code>txt[::-1].replace(s[::-1], "###", 1)[::-1]
</code></pre>
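<p>Checking both one-liners against the question's string — note that with more than two occurrences they differ: the first keeps only the first occurrence, the second replaces only the last. The sentinel version also assumes <code>"***"</code> never occurs in the text:</p>

```python
txt = "cdabcjkewabcef"
s = "abc"

# sentinel approach: protect the first occurrence, replace the rest
out1 = txt.replace(s, "***", 1).replace(s, "###").replace("***", s)
# reversal approach: the last occurrence becomes the first, replace once
out2 = txt[::-1].replace(s[::-1], "###", 1)[::-1]

print(out1)  # cdabcjkew###ef
print(out2)  # cdabcjkew###ef
```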
| 0 | 2016-08-09T16:45:58Z | [
"python",
"string",
"python-3.x"
] |
Python: String replace index | 38,856,180 | <p>I want to replace <code>str[9:11]</code> with another string.
If I do <code>str.replace(str[9:11], "###")</code> it doesn't work, because the sequence at [9:11] can occur more than once.
If str is <code>"cdabcjkewabcef"</code> I would get <code>"cd###jkew###ef"</code>, but I only want to replace the second occurrence.</p>
| 0 | 2016-08-09T16:36:22Z | 38,856,436 | <p>Here is sample code:</p>
<pre><code>word = "astalavista"
index = 0
newword = ""
addon = "xyz"
while index < 8:
    newword = newword + word[index]
    index += 1
ind = index
i = 0
while i < len(addon):
    newword = newword + addon[i]
    i += 1
while ind < len(word):
    newword = newword + word[ind]
    ind += 1
print(newword)
</code></pre>
| 0 | 2016-08-09T16:51:22Z | [
"python",
"string",
"python-3.x"
] |
Python: String replace index | 38,856,180 | <p>I mean, I want to replace <code>str[9:11]</code> with another string.
If I do <code>str.replace(str[9:11], "###")</code> it doesn't work, because the sequence [9:11] can occur more than one time.
If str is <code>"cdabcjkewabcef"</code> I would get <code>"cd###jkew###ef"</code>, but I only want to replace the second occurrence. </p>
| 0 | 2016-08-09T16:36:22Z | 38,856,460 | <p>You can use <code>join()</code> with sub-strings.</p>
<pre><code>s = 'cdabcjkewabcef'
sequence = '###'
indicies = (9,11)
print(sequence.join([s[:indicies[0]-1], s[indicies[1]:]]))
>>> 'cdabcjke###cef'
</code></pre>
| 1 | 2016-08-09T16:52:34Z | [
"python",
"string",
"python-3.x"
] |
Python: String replace index | 38,856,180 | <p>I mean, I want to replace <code>str[9:11]</code> with another string.
If I do <code>str.replace(str[9:11], "###")</code> it doesn't work, because the sequence [9:11] can occur more than one time.
If str is <code>"cdabcjkewabcef"</code> I would get <code>"cd###jkew###ef"</code>, but I only want to replace the second occurrence. </p>
| 0 | 2016-08-09T16:36:22Z | 38,856,535 | <p>you can do</p>
<pre><code>s="cdabcjkewabcef"
snew="".join((s[:9],"###",s[12:]))
</code></pre>
<p>which should be faster than joining like <code>snew=s[:9]+"###"+s[12:]</code> on large strings</p>
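The same idea generalizes to a tiny helper; the function name here is my own, not from the answer:

```python
def replace_slice(s, start, stop, repl):
    """Replace s[start:stop] with repl, leaving any other occurrences alone."""
    return "".join((s[:start], repl, s[stop:]))

print(replace_slice("cdabcjkewabcef", 9, 12, "###"))  # cdabcjkew###ef
```

Because it works on positions rather than on the substring's value, it can never accidentally touch an earlier occurrence.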
| 1 | 2016-08-09T16:57:12Z | [
"python",
"string",
"python-3.x"
] |
Running an R script from command line (to execute from python) | 38,856,271 | <p>I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.<br>
How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.<br>
R: version 3.3 python: version 3.x os: windows </p>
| 2 | 2016-08-09T16:42:05Z | 38,856,331 | <p>You probably already have R, since you can already run your script.</p>
<p>All you have to do is find its binaries (the Rscript.exe file).</p>
<p>Then open the Windows command line ([Win] + [R] > type in: "cmd" > [enter]).</p>
<p>Enter the full path to Rscript.exe, followed by the full path to your script.</p>
| 1 | 2016-08-09T16:45:39Z | [
"python",
"shell",
"command-line"
] |
Running an R script from command line (to execute from python) | 38,856,271 | <p>I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.<br>
How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.<br>
R: version 3.3 python: version 3.x os: windows </p>
| 2 | 2016-08-09T16:42:05Z | 38,856,393 | <p>You already have <code>Rscript</code>, it came with your version of R. If <code>R.exe</code>, <code>Rgui.exe</code>, ... are in your path, then so is <code>Rscript.exe</code>.</p>
<p>Your call from Python could just be <code>Rscript myFile.R</code>. Rscript is much better than <code>R BATCH CMD ...</code> and other <em>very old and outdated</em> usage patterns.</p>
| 1 | 2016-08-09T16:48:22Z | [
"python",
"shell",
"command-line"
] |
Running an R script from command line (to execute from python) | 38,856,271 | <p>I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.<br>
How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.<br>
R: version 3.3 python: version 3.x os: windows </p>
| 2 | 2016-08-09T16:42:05Z | 38,856,920 | <p>As mentioned, Rscript.exe, the automated executable to run R scripts, ships with any R installation (usually located in the bin folder) and, as <a href="http://stackoverflow.com/questions/3412911/r-exe-rcmd-exe-rscript-exe-and-rterm-exe-whats-the-difference">@Dirk Eddelbuettel mentions</a>, is the recommended automated version. And in Python you can run <strong>any</strong> external program as a <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess</a> with various call types, including <em>call</em>, <em>check_output</em>, <em>check_call</em>, or <em>Popen</em>, the latter of which provides more facility such as capturing errors in the child process. </p>
<p>If the R directory is in your <a href="http://superuser.com/questions/284342/what-are-path-and-other-environment-variables-and-how-can-i-set-or-use-them">PATH environment variable</a>, you do not need to include the full path to Rscript.exe, just the name of the program, <em>Rscript</em>. And do note this is much the same process on Linux or Mac operating systems.</p>
<pre><code>import subprocess

command = 'C:/R-3.3/bin/Rscript.exe' # OR command = 'Rscript'
path2script = 'C:/Path/To/R/Script.R'
arg = '--vanilla'
# CHECK_CALL VERSION
retval = subprocess.check_call([command, arg, path2script], shell=True)
# CALL VERSION
retval = subprocess.call(["'Rscript' 'C:/Path/To/R/Script.R'"])
# POPEN VERSION (W/ CWD AND OUTPUT/ERROR CAPTURE)
curdir = 'C:/Path/To/R/Script'
p = subprocess.Popen(['Rscript', 'Script.R'], cwd=curdir,
stdin = subprocess.PIPE, stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
output, error = p.communicate()
if p.returncode == 0:
print('R OUTPUT:\n {0}'.format(output.decode("utf-8")))
else:
print('R ERROR:\n {0}'.format(error.decode("utf-8")))
</code></pre>
| 1 | 2016-08-09T17:20:56Z | [
"python",
"shell",
"command-line"
] |
python: read data from stdin and raw_input | 38,856,286 | <p>I want to pass some data to a python script using echo and after that prompt the user to input options. I am running into an <code>EOFError</code> which I think is happening since I read all data in <code>sys.stdin</code>. How do I fix this issue? Thanks!</p>
<p>code.py:</p>
<pre><code>import sys

x = ''
for line in sys.stdin:
x += line
y = raw_input()
</code></pre>
<p>usage:</p>
<pre><code>echo -e -n '1324' | ./code.py
</code></pre>
<p>error at <code>raw_input()</code>:</p>
<pre><code>EOFError: EOF when reading a line
</code></pre>
| 0 | 2016-08-09T16:42:43Z | 38,856,351 | <p>You just cannot send data through stdin (that's redirecting) and then get back the interactive mode.</p>
<p>When you perform <code>a | b</code>, <code>b</code>'s standard input is the pipe, not the terminal. Once <code>a</code> finishes, the pipe is closed and any further read from standard input hits end-of-file. </p>
<p>So when <code>a</code> finishes, it does not mean that you get hold of the terminal's <code>stdin</code> again.</p>
<p>Maybe you could change the way you want to do things, example:</p>
<pre><code>echo -n -e '1324' | ./code.py
</code></pre>
<p>becomes</p>
<pre><code>./code.py '1234' '5678'
</code></pre>
<p>and use <code>sys.argv[]</code> to get the value of <code>1234</code>, <code>5678</code>...</p>
<pre><code>import sys
x = ''
for line in sys.argv[1:]:
x += line+"\n"
y = raw_input()
</code></pre>
<p>if you have a lot of lines to pass, give an argument which is a file, and read from that:</p>
<pre><code>import sys
x = ''
for line in open(sys.argv[1],"r"):
x += line
y = raw_input()
</code></pre>
| 1 | 2016-08-09T16:46:17Z | [
"python",
"stdin",
"eoferror"
] |
python: read data from stdin and raw_input | 38,856,286 | <p>I want to pass some data to a python script using echo and after that prompt the user to input options. I am running into an <code>EOFError</code> which I think is happening since I read all data in <code>sys.stdin</code>. How do I fix this issue? Thanks!</p>
<p>code.py:</p>
<pre><code>import sys

x = ''
for line in sys.stdin:
x += line
y = raw_input()
</code></pre>
<p>usage:</p>
<pre><code>echo -e -n '1324' | ./code.py
</code></pre>
<p>error at <code>raw_input()</code>:</p>
<pre><code>EOFError: EOF when reading a line
</code></pre>
| 0 | 2016-08-09T16:42:43Z | 38,856,370 | <p>Use:</p>
<pre><code>{ echo -e -n '1324'; cat; } | ./code.py
</code></pre>
<p>First <code>echo</code> will write the literal string to the pipe, then <code>cat</code> will read from standard input and copy that to the pipe. The python script will see this all as its standard input.</p>
| 2 | 2016-08-09T16:46:51Z | [
"python",
"stdin",
"eoferror"
] |
TensorFlow queue feed order | 38,856,292 | <p>I'm wondering in what order TensorFlow queues feed data (specifically when you have a list of tensors that they're feeding from).</p>
<p>For example, in a queue like this:</p>
<pre><code>fifo_q = tf.FIFOQueue(
capacity=10,
    dtypes=[tf.string, tf.string],
shapes=[[], []])
</code></pre>
<p>If I enqueue these lists:</p>
<pre><code>sess = tf.Session()
l = [str(i+1) for i in range(10)]
x = tf.constant(l)
y = tf.constant(l)
eq = fifo_q.enqueue_many([x, y])
dq1, dq2 = fifo_q.dequeue()
sess.run(eq)
</code></pre>
<p>I would expect <code>dq1</code>, <code>dq2</code> to be '1', '1' on the first run, then '2', '2' and so on. But this isn't what's happening. Instead, when I run the following code, I get '1', '2' and then '3', '4', and so on until <code>dq2</code> reaches 10, and then the queue locks up.</p>
<pre><code>for x in range(6):
print('dq1:', sess.run(dq1))
print('dq2:', sess.run(dq2))
</code></pre>
<p>Why does this happen instead of what I expect? I'm using this to match up training examples with labels, but some training examples and labels are skipped/off-set. Is a better solution just to interleave the file names in a single queue? Either way, I would like to understand this behavior.</p>
<p>Any help is appreciated.</p>
| 2 | 2016-08-09T16:43:17Z | 38,856,513 | <p>TensorFlow queues allow you to enqueue and dequeue lists (more precisely fixed-length tuples) of tensors atomically, in a single operation. The tensors <code>dq1</code> and <code>dq2</code> are the outputs of the same dequeue operation, which in this case will remove a tuple of two tensors from the queue. Each invocation of <code>sess.run(dq1)</code> or <code>sess.run(dq2)</code> corresponds to a separate invocation of the dequeue operation, <em>but</em> when you invoke <code>sess.run(dq1)</code> TensorFlow discards the other tuple element, because you didn't explicitly request it in the call to <code>sess.run()</code>.</p>
<p>The solution is to ensure that <strong>both</strong> outputs of the <code>dequeue()</code> operation are consumed in the <strong>same</strong> call to <code>sess.run()</code>. For example, the following change to your program should produce the result that you originally expected:</p>
<pre><code>for x in range(6):
dq1_val, dq2_val = sess.run([dq1, dq2])
print('dq1:', dq1_val)
print('dq2:', dq2_val)
</code></pre>
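If it helps, here is a rough plain-Python analogue of what the separate <code>sess.run()</code> calls are doing (no TensorFlow involved; a <code>deque</code> of pairs stands in for the queue, so this is an illustration rather than TF code):

```python
from collections import deque

# Each queue element is a (dq1, dq2) pair, as enqueued by enqueue_many above.
q = deque((str(i), str(i)) for i in range(1, 11))

# Calling sess.run(dq1) and sess.run(dq2) separately is like popping a fresh
# pair for each call and throwing away the half you did not ask for.
seen = []
for _ in range(2):
    a, _discarded = q.popleft()   # like sess.run(dq1): second value dropped
    _discarded, b = q.popleft()   # like sess.run(dq2): first value dropped
    seen.append((a, b))

print(seen)  # [('1', '2'), ('3', '4')] -- the offset observed in the question
```

Fetching both outputs in one <code>sess.run([dq1, dq2])</code> corresponds to popping a single pair and keeping both halves, which is why it fixes the offset.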
| 1 | 2016-08-09T16:55:59Z | [
"python",
"tensorflow"
] |
Selecting multiple partitioned regions in ABAQUS with findAt for setting mesh controls | 38,856,474 | <p>With reference to my previous question, </p>
<p><a href="http://stackoverflow.com/questions/38625874/faster-way-to-partition-a-face-with-sketch-in-abaqus-with-scripting">Faster way to partition a Face with Sketch in ABAQUS with scripting</a>,</p>
<p>I have to select the multiple regions created by the partitioning method to assign the mesh controls and seed the edges and finally, mesh the regions respectively.</p>
<p>Problem is, since the partitioned regions are parametrised and of such a greater number, defining a function for the purpose and running it in a loop was the only way that seemed fit to me. Hence, I tried to define a function in two different ways like so:</p>
<ol>
<li><p>A function is defined to select the regions and run in a loop throughout the length of the body. Here, each small region is picked once and the same mesh controls are applied repeatedly leading to a long time in generating the mesh.</p>
<pre><code>def set_mesh_control_structured(x_left, x_right, y_top, y_bottom,
element_type, mesh_technique, minimize_transition):
p = mdb.models['Model-1'].parts['Part']
f = p.faces
    pickedRegions = f.findAt(((x_left + (x_right - x_left)/2, y_bottom +
    (y_top - y_bottom)/2, 0.0), ))
return p.setMeshControls(regions=pickedRegions,
elemShape=element_type, technique=mesh_technique,
minTransition=minimize_transition)
# Executed within a 'for' loop like e.g.:
for i in range((8 * total_blocks) + 6):
set_mesh_control_structured(x_left, x_right + (i *
block_length), y_coord[0], 0.0, QUAD, STRUCTURED, OFF)
</code></pre></li>
<li><p>The second function tries to select all the regions one by one and then apply the mesh controls at the end only once. This is where the problem creeps up. One assumes that the argument for findAt() is a tuple of tuples but it doesn't work and ABAQUS gives an error warning saying that "<em>...in set_mesh_control_structured; pickedRegions = f.findAt(regions_tuple); TypeError: arg1(coordinates)[0][0];found tuple expecting float</em>".</p>
<pre><code>def set_mesh_control_structured(range_arg, x_left, x_right, y_top,
y_bottom, element_type, mesh_technique, minimize_transition):
p = mdb.models['TDCB'].parts['Part_TDCB']
f = p.faces
regions_tuple = ()
for i in range(range_arg):
# Put x,y,z coords in one value
incremental_picked_regions = (x_left + (i * (x_right -
x_left)/2), y_bottom + (i * (y_top - y_bottom)/2), 0.0)
# Abaqus wants each repeating unit as ((x,y,z),)
incremental_picked_regions = ((incremental_picked_regions),)
# Adding all the coordinates into 1 tuple
regions_tuple += (incremental_picked_regions,)
pickedRegions = f.findAt(regions_tuple)
return p.setMeshControls(regions=pickedRegions,
elemShape=element_type, technique=mesh_technique,
minTransition=minimize_transition)
</code></pre></li>
</ol>
<p>Can anyone please tell me what I'm doing wrong in the second function definition or is there a better way to select multiple regions for the purpose of setting mesh controls and seeding apart from findAt()? I am aware of getBoundingBox and faces.index[#] etc. but I have no clue on how to use them. So, a MWE will also be highly appreciated. </p>
<p>Thanks a lot in advance.</p>
| 0 | 2016-08-09T16:53:14Z | 38,859,808 | <p>try this, use <code>findAt</code> on each individual point and add the results:</p>
<pre><code>for i in range(range_arg):
    # Put x,y,z coords in one value
    incremental_picked_regions = (x_left + (i * (x_right -
        x_left)/2), y_bottom + (i * (y_top - y_bottom)/2), 0.0)
    if i == 0:
        pickedRegions = f.findAt((incremental_picked_regions,),)
    else:
        pickedRegions += f.findAt((incremental_picked_regions,),)
</code></pre>
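The initialize-then-accumulate pattern above is independent of Abaqus; here is the same shape in plain Python, with <code>f.findAt</code> replaced by a stand-in that returns a list (all names here are mine, for illustration only):

```python
def find_at(point):
    """Stand-in for f.findAt(...): returns a sequence for one point."""
    return [point]

picked = None
for p in [(0.5, 0.5, 0.0), (1.5, 0.5, 0.0), (2.5, 0.5, 0.0)]:
    if picked is None:        # first iteration: start the collection
        picked = find_at(p)
    else:                     # later iterations: extend it with +=
        picked += find_at(p)

print(len(picked))  # 3
```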
| 0 | 2016-08-09T20:26:03Z | [
"python",
"scripting",
"abaqus"
] |
Selecting multiple partitioned regions in ABAQUS with findAt for setting mesh controls | 38,856,474 | <p>With reference to my previous question, </p>
<p><a href="http://stackoverflow.com/questions/38625874/faster-way-to-partition-a-face-with-sketch-in-abaqus-with-scripting">Faster way to partition a Face with Sketch in ABAQUS with scripting</a>,</p>
<p>I have to select the multiple regions created by the partitioning method to assign the mesh controls and seed the edges and finally, mesh the regions respectively.</p>
<p>Problem is, since the partitioned regions are parametrised and of such a greater number, defining a function for the purpose and running it in a loop was the only way that seemed fit to me. Hence, I tried to define a function in two different ways like so:</p>
<ol>
<li><p>A function is defined to select the regions and run in a loop throughout the length of the body. Here, each small region is picked once and the same mesh controls are applied repeatedly leading to a long time in generating the mesh.</p>
<pre><code>def set_mesh_control_structured(x_left, x_right, y_top, y_bottom,
element_type, mesh_technique, minimize_transition):
p = mdb.models['Model-1'].parts['Part']
f = p.faces
    pickedRegions = f.findAt(((x_left + (x_right - x_left)/2, y_bottom +
    (y_top - y_bottom)/2, 0.0), ))
return p.setMeshControls(regions=pickedRegions,
elemShape=element_type, technique=mesh_technique,
minTransition=minimize_transition)
# Executed within a 'for' loop like e.g.:
for i in range((8 * total_blocks) + 6):
set_mesh_control_structured(x_left, x_right + (i *
block_length), y_coord[0], 0.0, QUAD, STRUCTURED, OFF)
</code></pre></li>
<li><p>The second function tries to select all the regions one by one and then apply the mesh controls at the end only once. This is where the problem creeps up. One assumes that the argument for findAt() is a tuple of tuples but it doesn't work and ABAQUS gives an error warning saying that "<em>...in set_mesh_control_structured; pickedRegions = f.findAt(regions_tuple); TypeError: arg1(coordinates)[0][0];found tuple expecting float</em>".</p>
<pre><code>def set_mesh_control_structured(range_arg, x_left, x_right, y_top,
y_bottom, element_type, mesh_technique, minimize_transition):
p = mdb.models['TDCB'].parts['Part_TDCB']
f = p.faces
regions_tuple = ()
for i in range(range_arg):
# Put x,y,z coords in one value
incremental_picked_regions = (x_left + (i * (x_right -
x_left)/2), y_bottom + (i * (y_top - y_bottom)/2), 0.0)
# Abaqus wants each repeating unit as ((x,y,z),)
incremental_picked_regions = ((incremental_picked_regions),)
# Adding all the coordinates into 1 tuple
regions_tuple += (incremental_picked_regions,)
pickedRegions = f.findAt(regions_tuple)
return p.setMeshControls(regions=pickedRegions,
elemShape=element_type, technique=mesh_technique,
minTransition=minimize_transition)
</code></pre></li>
</ol>
<p>Can anyone please tell me what I'm doing wrong in the second function definition or is there a better way to select multiple regions for the purpose of setting mesh controls and seeding apart from findAt()? I am aware of getBoundingBox and faces.index[#] etc. but I have no clue on how to use them. So, a MWE will also be highly appreciated. </p>
<p>Thanks a lot in advance.</p>
| 0 | 2016-08-09T16:53:14Z | 38,883,235 | <p>For anyone looking for a better understanding of this question, I'd first of all advise reading the other question linked above.</p>
<p>I solved this problem of mine by using <code>getByBoundingBox</code>, which has the following syntax:
<code>getByBoundingBox(xmin, ymin, zmin, xmax, ymax, zmax)</code></p>
<p>So, this can be conveniently used instead of findAt() to select a large number of partitioned faces or edges. </p>
<p>Taking a planar rectangle as an example, with the four corners at (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 2.0, 0.0) and (0.0, 2.0, 0.0) respectively, let's assume there are multiple partitioned faces inside this rectangle which all need to be selected at once, as one does in the GUI. First, the six arguments of <code>getByBoundingBox</code> will be:</p>
<pre><code>xmin = 0.0,
ymin = 0.0,
zmin = 0.0,
xmax = 2.0,
ymax = 2.0,
zmax = 0.0
</code></pre>
<p>Then, it's just a matter of picking the desired region as follows:</p>
<pre><code>pickedRegions = f.getByBoundingBox(xmin, ymin, zmin, xmax, ymax, zmax)
</code></pre>
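Conceptually, <code>getByBoundingBox</code> just keeps the entities whose coordinates fall inside the box. A rough plain-Python sketch of that test on bare coordinate tuples (this is an illustration, not Abaqus API code):

```python
def get_by_bounding_box(points, xmin, ymin, zmin, xmax, ymax, zmax):
    """Keep only the points inside the axis-aligned box (inclusive bounds)."""
    return [(x, y, z) for (x, y, z) in points
            if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax]

pts = [(0.5, 0.5, 0.0), (2.5, 0.5, 0.0), (1.0, 1.9, 0.0)]
inside = get_by_bounding_box(pts, 0.0, 0.0, 0.0, 2.0, 2.0, 0.0)
print(inside)  # [(0.5, 0.5, 0.0), (1.0, 1.9, 0.0)]
```

The real method works on geometric entities rather than points, but the same box test is why one call can select many partitioned faces at once.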
| 0 | 2016-08-10T20:46:57Z | [
"python",
"scripting",
"abaqus"
] |
optimising way of exporting a dataframe in pandas | 38,856,482 | <p>I want to export dataframes efficiently by avoiding the use of loops. My format is specific, so I cannot use any prebuilt function like pandas.to_csv...
This is what I have at the moment</p>
<pre><code>def export(datafr,fout):
fo=open(fout,'w')
fo.write('#channel d '+'\" "date" - date\n')
fo.write('#channel t '+'\" "time" - time\n')
fo.write("#begindata\n")
for date in datafr.df.index:#2011-11-02 00:00:00
record=datafr.df.ix[date]#row
fo.write(str(date)+" ")
for i in record:
fo.write("%3.3f" % (i)+" ")
fo.write("\n")
fo.close()
</code></pre>
<p>It works, but I have to use loops, which is not efficient at all with long time series. I thought about using map() or pandas.apply() but I have not gotten anything working so far. One try:</p>
<pre><code>import numpy as np
from pandas import DataFrame

site = DataFrame(np.random.randn(4, 3), columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon'])
fout='c://Site.dat'
fo=open(fout,'w')
def writed(f,i,data):
f.write(str(i)+" ")
f.write("%3.3f" % (data)+" ")
map(writed,fo,site.index,site.values)
</code></pre>
<p>but I get this error</p>
<pre><code>IOError: File not open for reading
</code></pre>
| 0 | 2016-08-09T16:53:33Z | 38,922,724 | <p>IIUC you want to write some header to an output file first, and afterwards, the data frame with formatted floats.</p>
<p>I believe you can still do it with pandas <code>to_csv</code> using the arguments: <code>mode</code> (set to "append" instead of default "write") and <code>float_format</code>.</p>
<p>So first goes the header, for example:</p>
<pre><code>with open('filename.dat', 'w') as fo:
fo.write('#channel d '+'\" "date" - date\n')
# etc. etc.
</code></pre>
<p>and then just append the data frame:</p>
<pre><code>df.to_csv('filename.dat', mode='a', float_format='%3.3f', sep=' ')
</code></pre>
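For a quick check, the same two steps can be run against an in-memory buffer instead of a file (this demo, and its column names, are my own, not from the answer):

```python
import io
import pandas as pd

df = pd.DataFrame({'a': [1.23456, 2.0]}, index=pd.Index(['x', 'y'], name='idx'))

buf = io.StringIO()
buf.write('#begindata\n')                        # custom header line first
df.to_csv(buf, float_format='%3.3f', sep=' ')    # then the formatted data

print(buf.getvalue())
```

The buffer should start with the hand-written header, followed by the space-separated rows with three-decimal floats, which is exactly the append behaviour you get with <code>mode='a'</code> on a real file.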
| 1 | 2016-08-12T16:34:45Z | [
"python",
"pandas",
"file-io"
] |
Execute SQL file with multiple statements separated by ";" using pyodbc | 38,856,534 | <p>I am currently writing a script to run multiple SQL files using Python. A little background before you mention alternative methods: this is to automate the scripts, and Python is the only tool I have on our Windows 2008 server. I have a script that works for one set, but the issue is when the other set has two statements instead of one, separated by a ';'. Here is my code:</p>
<pre><code>import os
import pyodbc
print ("Connecting via ODBC")
conn = pyodbc.connect('DSN=dsn', autocommit=True)
print ("Connected!\n")
inputdir = 'C:\\path'
cursor = conn.cursor()
for script in os.listdir(inputdir):
with open(inputdir+'\\' + script,'r') as inserts:
sqlScript = inserts.readlines()
sql = (" ".join(sqlScript))
cursor.execute(sql)
print (script)
conn.close()
print ('Run Complete!')
</code></pre>
<p>So this code works to show the entire file but it only executes one statement before ";".</p>
<p>Any help would be great!</p>
<p>Thanks.</p>
| 0 | 2016-08-09T16:57:06Z | 38,857,840 | <p>The API in the pyodbc connector (or pymysql) doesn't allow multiple statements in a SQL call. This is an issue of engine parsing; an API would need to completely understand the SQL that it's passing in order for multiple statements to be passed, and then multiple results handled upon return.</p>
<p>A slight modification to your script like the one below should allow you to send each of your statements individually with separate connectors:</p>
<pre><code>import os
import pyodbc
print ("Connecting via ODBC")
conn = pyodbc.connect('DSN=dsn', autocommit=True)
print ("Connected!\n")
inputdir = 'C:\\path'
for script in os.listdir(inputdir):
with open(inputdir+'\\' + script,'r') as inserts:
        sqlScript = inserts.read()
for statement in sqlScript.split(';'):
with conn.cursor() as cur:
cur.execute(statement)
print(script)
conn.close()
</code></pre>
<p>The <code>with conn.cursor() as cur:</code> opens and closes a cursor for each statement, exiting appropriately after each call is completed.</p>
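One detail worth noting: <code>split(';')</code> produces an empty trailing piece when the script ends with a semicolon, so it can be worth filtering out blank statements before executing them. A pure-Python sketch of just the splitting step (no database required):

```python
script = "CREATE TABLE t (x INT);\nINSERT INTO t VALUES (1);\n"

# Split on ';' and drop whitespace-only pieces such as the trailing one.
statements = [s.strip() for s in script.split(';') if s.strip()]

print(statements)  # ['CREATE TABLE t (x INT)', 'INSERT INTO t VALUES (1)']
```

This naive split will also break on semicolons inside string literals, so for scripts containing quoted <code>;</code> characters a real SQL parser would be needed.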
| 3 | 2016-08-09T18:19:46Z | [
"python",
"pyodbc",
"netezza"
] |
NameError: name not defined | 38,856,538 | <p>I'm trying to analyze some data, my code is as follows:</p>
<pre><code>for line in h:
if line_cnt in start_x:
recording_scores = True
temp_i = start_x.index(line_cnt)
score_acc = [0, 0, 0]
codon_id = remainder_x[temp_i]
temp_z = line.split()
temp_score = float(temp_z[1])
score_acc[codon_id] += temp_score
codon_id = (codon_id + 1) % 3
if temp_i>0 and line_cnt == end_x[temp_i]:
score_x0[temp_i] = score_acc[0] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x1[temp_i] = score_acc[1] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x2[temp_i] = score_acc[2] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
temp_i = -1
recording_scores = False
</code></pre>
<p>I keep getting an error message saying that:</p>
<pre><code>Traceback (most recent call last):
File "CRECDR_analysis.py", line 79, in <module>
if temp_i>0 and line_cnt == end_x[temp_i]:
NameError: name 'temp_i' is not defined
CRE_CDR.pbs.e4524341 (END)
</code></pre>
<p>I thought I defined temp_i in the first if statement, but does the definition not carry over to the second if statement? Could someone clear this up for me?</p>
| -3 | 2016-08-09T16:57:21Z | 38,856,854 | <p>As Morgan pointed out, <code>temp_i</code> and <code>line_cnt</code> are never defined if the first <code>if</code> branch is never entered. I don't know what you want to do exactly, but you probably meant to do this if you only wanted to test the second condition when the first was true.</p>
<pre><code>for line in h:
if line_cnt in start_x:
recording_scores = True
temp_i = start_x.index(line_cnt)
score_acc = [0, 0, 0]
codon_id = remainder_x[temp_i]
temp_z = line.split()
temp_score = float(temp_z[1])
score_acc[codon_id] += temp_score
codon_id = (codon_id + 1) % 3
if temp_i>0 and line_cnt == end_x[temp_i]:
score_x0[temp_i] = score_acc[0] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x1[temp_i] = score_acc[1] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x2[temp_i] = score_acc[2] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
temp_i = -1
recording_scores = False
</code></pre>
<p>Otherwise, you would want to define <code>temp_i</code> and <code>line_cnt</code> in an else statement as such:</p>
<pre><code>for line in h:
    if line_cnt in start_x:
recording_scores = True
temp_i = start_x.index(line_cnt)
score_acc = [0, 0, 0]
codon_id = remainder_x[temp_i]
temp_z = line.split()
temp_score = float(temp_z[1])
score_acc[codon_id] += temp_score
codon_id = (codon_id + 1) % 3
else:
temp_i = 0
line_cnt = foo
if temp_i>0 and line_cnt == end_x[temp_i]:
score_x0[temp_i] = score_acc[0] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x1[temp_i] = score_acc[1] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
score_x2[temp_i] = score_acc[2] / ((end_x[temp_i] - start_x[temp_i] + 1) / 3)
temp_i = -1
recording_scores = False
</code></pre>
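Here is a minimal toy reproduction of the error pattern, unrelated to the actual data in the question (names and values are my own):

```python
values = [5]
start_x = [1, 2, 3]
msg = None

try:
    for v in values:
        if v in start_x:
            temp_i = start_x.index(v)   # only bound when this branch runs
        if temp_i > 0:                  # fails if the branch above never ran
            pass
except NameError as e:                  # at module level: "name 'temp_i' is not defined"
    msg = str(e)

print(msg)
```

Since 5 is not in <code>start_x</code>, the assignment is skipped and the second <code>if</code> reads a name that was never bound, which is exactly what happens in the original script.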
| 0 | 2016-08-09T17:16:25Z | [
"python"
] |
Panda rolling window percentile rank | 38,856,551 | <p>I am trying to calculate the percentile rank of data by column within a rolling window.</p>
<pre><code>test=pd.DataFrame(np.random.randn(20,3),pd.date_range('1/1/2000',periods=20),['A','B','C'])
test
Out[111]:
A B C
2000-01-01 -0.566992 -1.494799 0.462330
2000-01-02 -0.550769 -0.699104 0.767778
2000-01-03 -0.270597 0.060836 0.057195
2000-01-04 -0.583784 -0.546418 -0.557850
2000-01-05 0.294073 -2.326211 0.262098
2000-01-06 -1.122543 -0.116279 -0.003088
2000-01-07 0.121387 0.763100 3.503757
2000-01-08 0.335564 0.076304 2.021757
2000-01-09 0.403170 0.108256 0.680739
2000-01-10 -0.254558 -0.497909 -0.454181
2000-01-11 0.167347 0.459264 -1.247459
2000-01-12 -1.243778 0.858444 0.338056
2000-01-13 -1.070655 0.924808 0.080867
2000-01-14 -1.175651 -0.559712 -0.372584
2000-01-15 -0.216708 -0.116188 0.511223
2000-01-16 0.597171 0.205529 -0.728783
2000-01-17 -0.624469 0.592436 0.832100
2000-01-18 0.259269 0.665585 0.126534
2000-01-19 1.150804 0.575759 -1.335835
2000-01-20 -0.909525 0.500366 2.120933
</code></pre>
<p>I tried to use .rolling with .apply but I am missing something.</p>
<pre><code>pctrank = lambda x: x.rank(pct=True)
rollingrank=test.rolling(window=10,center=False).apply(pctrank)
</code></pre>
<p>For column A the final value would be the percentile rank of -0.909525 within the length=10 window from 2000-01-11 to 2000-01-20. Any ideas?</p>
| 2 | 2016-08-09T16:57:56Z | 38,856,907 | <p>Your lambda receives a numpy array, which does not have a <code>.rank</code> method — it is pandas's <code>Series</code> and <code>DataFrame</code> that have it. You can thus change it to</p>
<pre><code>pctrank = lambda x: pd.Series(x).rank(pct=True).iloc[-1]
</code></pre>
<p>Or you could use pure numpy along the lines of <a href="http://stackoverflow.com/a/5284703/509824">this SO answer</a>:</p>
<pre><code>def pctrank(x):
n = len(x)
temp = x.argsort()
ranks = np.empty(n)
ranks[temp] = (np.arange(n) + 1) / n
return ranks[-1]
</code></pre>
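As a quick sanity check of the numpy version, here is a hand-computed window (the example values are mine):

```python
import numpy as np

def pctrank(x):
    n = len(x)
    temp = x.argsort()
    ranks = np.empty(n)
    ranks[temp] = (np.arange(n) + 1) / n
    return ranks[-1]

window = np.array([3.0, 1.0, 4.0, 1.5, 2.0])
# The last value, 2.0, is the 3rd smallest of 5, so its rank is 3/5 = 0.6
print(pctrank(window))  # 0.6
```

The same function plugs straight into <code>rolling(...).apply(pctrank)</code>, since rolling apply hands each window to the function as an array.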
| 1 | 2016-08-09T17:20:20Z | [
"python",
"pandas",
"apply",
"rank",
"percentile"
] |
**line 110, in <module> TypeError: unsupported operand type(s) for +: 'int' and 'str' | 38,856,565 | <p>So, someone at the company left and I was handed this script (which we really need to work). I've never programmed a day in my life.</p>
<p>After some googling and YouTube, I understand the issue to be that somewhere in the code it is trying to string together using + an integer and a string (and this is not allowed). I cannot figure out where this error is occurring, however. I know it says line 110. Below is what is at lines 108-125. Is the issue with the <code>(account + ' --- ' + ' ELP CAC Config Selected')</code>???</p>
<pre><code> if stage == 'CAC':
if fund == 'ELP':
logNotes(account + ' --- ' + ' ELP CAC Config Selected')
fileTypes = ['ConsumerAgreement', 'HCOCAC', 'HCODAC']
elif fund == 'KW':
logNotes(account + ' --- ' + ' KW CAC Config Selected')
fileTypes = ['ConsumerAgreement', 'CAC', 'HCOCAC', 'HCODAC']
elif stage == 'IC':
logNotes(account + ' --- ' + ' General IC Config Selected')
fileTypes = ['BuildingPlans', 'SystemPhotos', 'InstallationCompletionCertificate', 'HCOIC']
elif stage == 'FA':
if fund == 'Investec':
logNotes(account + ' --- ' + ' Investec Config Selected')
fileTypes = ['BOS', 'ConsumerAgreement', 'ConditionalWaiverIC','Conditional WaiverFA']
fund = 'KW'
else:
logNotes(account + ' --- ' + ' General FA Config Selected')
fileTypes = ['FinalAcceptanceCertificate', 'PTO']
</code></pre>
| 1 | 2016-08-09T16:58:53Z | 38,856,672 | <p>In Python, you cannot "add" a number and a string of characters. Hence, you must first convert the number to its character representation.
Try replacing the code with this:</p>
<pre><code> account_str = str(account) # Save the converted account string
if stage == 'CAC':
if fund == 'ELP':
logNotes(account_str + ' --- ' + ' ELP CAC Config Selected')
fileTypes = ['ConsumerAgreement', 'HCOCAC', 'HCODAC']
elif fund == 'KW':
logNotes(account_str + ' --- ' + ' KW CAC Config Selected')
fileTypes = ['ConsumerAgreement', 'CAC', 'HCOCAC', 'HCODAC']
elif stage == 'IC':
logNotes(account_str + ' --- ' + ' General IC Config Selected')
fileTypes = ['BuildingPlans', 'SystemPhotos', 'InstallationCompletionCertificate', 'HCOIC']
elif stage == 'FA':
if fund == 'Investec':
logNotes(account_str + ' --- ' + ' Investec Config Selected')
fileTypes = ['BOS', 'ConsumerAgreement', 'ConditionalWaiverIC','Conditional WaiverFA']
fund = 'KW'
else:
logNotes(account_str + ' --- ' + ' General FA Config Selected')
fileTypes = ['FinalAcceptanceCertificate', 'PTO']
</code></pre>
| 0 | 2016-08-09T17:06:29Z | [
"python",
"python-3.x"
] |
What does the group_keys argument to pandas.groupby actually do? | 38,856,583 | <p>In <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>pandas.DataFrame.groupby</code></a>, there is an argument <code>group_keys</code>, which I gather is supposed to do something relating to how group keys are included in the dataframe subsets. According to the documentation:</p>
<blockquote>
<p><strong>group_keys</strong> : <em>boolean, default True</em></p>
<blockquote>
<p>When calling apply, add group keys to index to identify pieces</p>
</blockquote>
</blockquote>
<p>However, I can't really find any examples where <code>group_keys</code> makes an actual difference:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[0, 1, 3],
[3, 1, 1],
[3, 0, 0],
[2, 3, 3],
[2, 1, 0]], columns=list('xyz'))
gby = df.groupby('x')
gby_k = df.groupby('x', group_keys=False)
</code></pre>
<p>It doesn't make a difference in the output of <code>apply</code>:</p>
<pre><code>ap = gby.apply(pd.DataFrame.sum)
# x y z
# x
# 0 0 1 3
# 2 4 4 3
# 3 6 1 1
ap_k = gby_k.apply(pd.DataFrame.sum)
# x y z
# x
# 0 0 1 3
# 2 4 4 3
# 3 6 1 1
</code></pre>
<p>And even if you print out the grouped subsets as you go, the results are still identical:</p>
<pre><code>def printer_func(x):
print(x)
return x
print('gby')
print('--------------')
gby.apply(printer_func)
print('--------------')
print('gby_k')
print('--------------')
gby_k.apply(printer_func)
print('--------------')
# gby
# --------------
# x y z
# 0 0 1 3
# x y z
# 0 0 1 3
# x y z
# 3 2 3 3
# 4 2 1 0
# x y z
# 1 3 1 1
# 2 3 0 0
# --------------
# gby_k
# --------------
# x y z
# 0 0 1 3
# x y z
# 0 0 1 3
# x y z
# 3 2 3 3
# 4 2 1 0
# x y z
# 1 3 1 1
# 2 3 0 0
# --------------
</code></pre>
<p>I considered the possibility that the default argument is actually <code>True</code>, but switching <code>group_keys</code> to explicitly <code>False</code> doesn't make a difference either. What exactly is this argument for?</p>
<p>(Run on <code>pandas</code> version <code>0.18.1</code>)</p>
<p><strong>Edit:</strong>
I did find a way where <code>group_keys</code> changes behavior, based on <a href="http://stackoverflow.com/a/34282449/467366">this answer</a>:</p>
<pre><code>import pandas as pd
import numpy as np
row_idx = pd.MultiIndex.from_product(((0, 1), (2, 3, 4)))
d = pd.DataFrame([[4, 3], [1, 3], [1, 1], [2, 4], [0, 1], [4, 2]], index=row_idx)
df_n = d.groupby(level=0).apply(lambda x: x.nlargest(2, [0]))
# 0 1
# 0 0 2 4 3
# 3 1 3
# 1 1 4 4 2
# 2 2 4
df_k = d.groupby(level=0, group_keys=False).apply(lambda x: x.nlargest(2, [0]))
# 0 1
# 0 2 4 3
# 3 1 3
# 1 4 4 2
# 2 2 4
</code></pre>
<p>However, I'm still not clear on the intelligible principle behind what <code>group_keys</code> is <em>supposed to do</em>. This behavior does not seem intuitive based on <strong>@piRSquared</strong>'s answer.</p>
| 5 | 2016-08-09T17:00:09Z | 38,857,246 | <p>If you are passing a function that preserves an index, pandas tries to keep that information. But if you pass a function that removes all semblance of index information, <code>group_keys=True</code> allows you to keep that information.</p>
<p>Use this instead</p>
<pre><code>f = lambda df: df.reset_index(drop=True)
</code></pre>
<p>Then compare the two <code>groupby</code> objects:</p>
<pre><code>gby.apply(lambda df: df.reset_index(drop=True))
</code></pre>
<p><a href="http://i.stack.imgur.com/1Q7YD.png" rel="nofollow"><img src="http://i.stack.imgur.com/1Q7YD.png" alt="enter image description here"></a></p>
<pre><code>gby_k.apply(lambda df: df.reset_index(drop=True))
</code></pre>
<p><a href="http://i.stack.imgur.com/uJ74P.png" rel="nofollow"><img src="http://i.stack.imgur.com/uJ74P.png" alt="enter image description here"></a></p>
| 2 | 2016-08-09T17:41:51Z | [
"python",
"pandas"
] |
What does the group_keys argument to pandas.groupby actually do? | 38,856,583 | <p>In <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>pandas.DataFrame.groupby</code></a>, there is an argument <code>group_keys</code>, which I gather is supposed to do something relating to how group keys are included in the dataframe subsets. According to the documentation:</p>
<blockquote>
<p><strong>group_keys</strong> : <em>boolean, default True</em></p>
<blockquote>
<p>When calling apply, add group keys to index to identify pieces</p>
</blockquote>
</blockquote>
<p>However, I can't really find any examples where <code>group_keys</code> makes an actual difference:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[0, 1, 3],
[3, 1, 1],
[3, 0, 0],
[2, 3, 3],
[2, 1, 0]], columns=list('xyz'))
gby = df.groupby('x')
gby_k = df.groupby('x', group_keys=False)
</code></pre>
<p>It doesn't make a difference in the output of <code>apply</code>:</p>
<pre><code>ap = gby.apply(pd.DataFrame.sum)
# x y z
# x
# 0 0 1 3
# 2 4 4 3
# 3 6 1 1
ap_k = gby_k.apply(pd.DataFrame.sum)
# x y z
# x
# 0 0 1 3
# 2 4 4 3
# 3 6 1 1
</code></pre>
<p>And even if you print out the grouped subsets as you go, the results are still identical:</p>
<pre><code>def printer_func(x):
print(x)
return x
print('gby')
print('--------------')
gby.apply(printer_func)
print('--------------')
print('gby_k')
print('--------------')
gby_k.apply(printer_func)
print('--------------')
# gby
# --------------
# x y z
# 0 0 1 3
# x y z
# 0 0 1 3
# x y z
# 3 2 3 3
# 4 2 1 0
# x y z
# 1 3 1 1
# 2 3 0 0
# --------------
# gby_k
# --------------
# x y z
# 0 0 1 3
# x y z
# 0 0 1 3
# x y z
# 3 2 3 3
# 4 2 1 0
# x y z
# 1 3 1 1
# 2 3 0 0
# --------------
</code></pre>
<p>I considered the possibility that the default argument is actually <code>True</code>, but switching <code>group_keys</code> to explicitly <code>False</code> doesn't make a difference either. What exactly is this argument for?</p>
<p>(Run on <code>pandas</code> version <code>0.18.1</code>)</p>
<p><strong>Edit:</strong>
I did find a way where <code>group_keys</code> changes behavior, based on <a href="http://stackoverflow.com/a/34282449/467366">this answer</a>:</p>
<pre><code>import pandas as pd
import numpy as np
row_idx = pd.MultiIndex.from_product(((0, 1), (2, 3, 4)))
d = pd.DataFrame([[4, 3], [1, 3], [1, 1], [2, 4], [0, 1], [4, 2]], index=row_idx)
df_n = d.groupby(level=0).apply(lambda x: x.nlargest(2, [0]))
# 0 1
# 0 0 2 4 3
# 3 1 3
# 1 1 4 4 2
# 2 2 4
df_k = d.groupby(level=0, group_keys=False).apply(lambda x: x.nlargest(2, [0]))
# 0 1
# 0 2 4 3
# 3 1 3
# 1 4 4 2
# 2 2 4
</code></pre>
<p>However, I'm still not clear on the intelligible principle behind what <code>group_keys</code> is <em>supposed to do</em>. This behavior does not seem intuitive based on <strong>@piRSquared</strong>'s answer.</p>
| 5 | 2016-08-09T17:00:09Z | 38,857,733 | <p>The <code>group_keys</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> comes in handy during <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>apply</code></a> operations: with <code>group_keys=True</code> it adds an extra index level corresponding to the grouped columns, while with <code>group_keys=False</code> it leaves that level out; this matters especially when performing operations on individual columns.</p>
<p>One such instance:</p>
<pre><code>In [21]: gby = df.groupby('x',group_keys=True).apply(lambda row: row['x'])
In [22]: gby
Out[22]:
x
0 0 0
2 3 2
4 2
3 1 3
2 3
Name: x, dtype: int64
In [23]: gby_k = df.groupby('x', group_keys=False).apply(lambda row: row['x'])
In [24]: gby_k
Out[24]:
0 0
3 2
4 2
1 3
2 3
Name: x, dtype: int64
</code></pre>
<p>One of its intended applications is grouping by one of the levels of the hierarchy after converting the result to a <code>MultiIndex</code> DataFrame object.</p>
<pre><code>In [27]: gby.groupby(level='x').sum()
Out[27]:
x
0 0
2 4
3 6
Name: x, dtype: int64
</code></pre>
| 1 | 2016-08-09T18:12:23Z | [
"python",
"pandas"
] |
Given three coordinate points, how do you detect when the angle between them crosses 180 degrees? | 38,856,588 | <p>This seems like it should be really simple but I'm having trouble with it. Basically, I have three points that keep changing (let's call them p1, p2, and p3). Also, let's define p2 as the vertex point.</p>
<p>Essentially, what I need to do is calculate the angle between the three points. A good example would be if the three points form a 179 degree angle, then the points change to form a 181 degree angle. So what I really need is a good method for determining if an angle is greater than 180 degrees. I tried using the law of cosines, but it did not give me a good answer because when the points form a 181 degree angle, it simply interprets it as a 179 degree angle in a different direction. Also, I am doing this in Python, if that helps. Thanks!</p>
| 2 | 2016-08-09T17:00:25Z | 38,856,685 | <p>What you are trying to decide is whether (p3-p2) is a left or right turn comparing to (p2-p1). This is actually a core part of Graham Scan which is used for computing convex hulls (<a href="https://en.wikipedia.org/wiki/Graham_scan" rel="nofollow">https://en.wikipedia.org/wiki/Graham_scan</a>). Quoting Wikipedia with slight edits:</p>
<blockquote>
<p>...determining whether three points constitute a "left turn" or a
"right turn" does not require computing the actual angle between the
two line segments, and can actually be achieved with simple arithmetic
only. For three points P1=(x1, y1), P2=(x2, y2), and P3=(x3, y3),
simply compute the z-coordinate of the cross product of the two
vectors (p2-p1) and (p3-p1), which is given by the expression
<code>(x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)</code>. If the result is 0, the
points are collinear; if it is positive, the three points constitute a
"left turn" or counter-clockwise orientation, otherwise a "right turn"
or clockwise orientation (for counter-clockwise numbered points).</p>
</blockquote>
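<p>As a minimal sketch of the cross-product test quoted above (the function name and sample points are my own):</p>

```python
def turn_direction(p1, p2, p3):
    """z-coordinate of the cross product of (p2 - p1) and (p3 - p1).

    Positive: left turn (counter-clockwise); negative: right turn
    (clockwise); zero: the three points are collinear.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

print(turn_direction((0, 0), (1, 0), (2, 1)))   # positive: left turn
print(turn_direction((0, 0), (1, 0), (2, -1)))  # negative: right turn
print(turn_direction((0, 0), (1, 0), (2, 0)))   # 0: collinear
```

<p>The sign of this value is exactly what distinguishes a 179 degree angle from a 181 degree one.</p>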
| 5 | 2016-08-09T17:07:05Z | [
"python",
"math",
"geometry",
"trigonometry",
"angle"
] |
Given three coordinate points, how do you detect when the angle between them crosses 180 degrees? | 38,856,588 | <p>This seems like it should be really simple but I'm having trouble with it. Basically, I have three points that keep changing (let's call them p1, p2, and p3). Also, let's define p2 as the vertex point.</p>
<p>Essentially, what I need to do is calculate the angle between the three points. A good example would be if the three points form a 179 degree angle, then the points change to form a 181 degree angle. So what I really need is a good method for determining if an angle is greater than 180 degrees. I tried using the law of cosines, but it did not give me a good answer because when the points form a 181 degree angle, it simply interprets it as a 179 degree angle in a different direction. Also, I am doing this in Python, if that helps. Thanks!</p>
| 2 | 2016-08-09T17:00:25Z | 38,856,694 | <p>To get signed angle in the full range, use atan2 function with dot and cross product of vectors <code>p2p1</code> and <code>p2p3</code></p>
<pre><code>Angle(in radians) = atan2(cross(p2p1,p2p3), dot(p2p1,p2p3))
</code></pre>
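<p>For example, a sketch of this formula for 2-D points given as (x, y) tuples (the helper name is my own):</p>

```python
import math

def signed_angle(p1, p2, p3):
    """Signed angle at the vertex p2, in degrees, in the range (-180, 180]."""
    v1 = (p1[0] - p2[0], p1[1] - p2[1])  # vector p2 -> p1
    v2 = (p3[0] - p2[0], p3[1] - p2[1])  # vector p2 -> p3
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.atan2(cross, dot))

print(signed_angle((0, 0), (1, 0), (2, 1)))   # negative: p3 on one side
print(signed_angle((0, 0), (1, 0), (2, -1)))  # positive: p3 on the other side
```

<p>The sign tells you which side of "straight" the angle fell on, which is what the law of cosines loses.</p>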
| 0 | 2016-08-09T17:07:43Z | [
"python",
"math",
"geometry",
"trigonometry",
"angle"
] |
Tensorflow: No module named contrib.learn.python.learn.datasets.mnist | 38,856,636 | <pre><code>import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
</code></pre>
<p>I tried executing the above and I am getting the below error:</p>
<pre><code>ImportError: No module named contrib.learn.python.learn.datasets.mnist
</code></pre>
<p>I did <code>sudo pip show tensorflow</code>.
The location showed <code>/usr/local/lib/python2.7/dist-packages</code></p>
<p>So, I appended /usr/local/lib/python2.7/dist-packages to sys.path. But still getting the same error.</p>
<p>I'm not able to use anything from contrib.</p>
<p><code>training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING, target_dtype=np.int)</code></p>
<p><code>AttributeError: 'module' object has no attribute 'contrib'</code></p>
<p>Could somebody please help me? Thanks in advance.</p>
| 0 | 2016-08-09T17:03:57Z | 38,858,627 | <p>The <a href="https://www.tensorflow.org/versions/r0.7/tutorials/mnist/beginners/index.html" rel="nofollow">TF 0.7 MNIST tutorial</a> suggests this snippet for loading data</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
</code></pre>
| 0 | 2016-08-09T19:08:22Z | [
"python",
"module",
"tensorflow"
] |
median of multiple files in pandas | 38,856,754 | <p>I have several csv files that I'm loading into pandas. They contain all the same columns, and <strong>almost</strong> but not exactly the same indexes. The rows are indexed by a pair (segVar, val). </p>
<p>What I want is a new DataFrame in with the same columns, and the union of the indexes, and each row is the median of the appropriate rows from the other files.</p>
<p>I also need to keep the order of the rows the same. (The orders between the files will be consistent)</p>
<p>This is probably 2 questions: how best to get the union of the indexes, and how to get the medians. But if it can be done in one answer, that's great.</p>
| -1 | 2016-08-09T17:10:44Z | 38,857,211 | <p>The answer, as Ayhan says, is <code>concat</code> and <code>groupby</code>. I'll ask how to sort the rows as a separate question, because it is easier to phrase.</p>
| 0 | 2016-08-09T17:39:07Z | [
"python",
"pandas",
"dataframe"
] |
median of multiple files in pandas | 38,856,754 | <p>I have several csv files that I'm loading into pandas. They contain all the same columns, and <strong>almost</strong> but not exactly the same indexes. The rows are indexed by a pair (segVar, val). </p>
<p>What I want is a new DataFrame in with the same columns, and the union of the indexes, and each row is the median of the appropriate rows from the other files.</p>
<p>I also need to keep the order of the rows the same. (The orders between the files will be consistent)</p>
<p>This is probably 2 questions: how best to get the union of the indexes, and how to get the medians. But if it can be done in one answer, that's great.</p>
| -1 | 2016-08-09T17:10:44Z | 38,956,742 | <p>You can use pd.concat to combine the DataFrames and use groupby on the index:</p>
<pre><code>df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [2, 3, 5]}, index = [1, 2, 3])
df1
Out:
A B
1 1 2
2 2 3
3 3 5
df2 = pd.DataFrame({'A': [4, 5, 2], 'B': [1, 6, 3]}, index = [2, 3, 5])
df2
Out:
A B
2 4 1
3 5 6
5 2 3
df3 = pd.DataFrame({'A': [4, 3, 1], 'B': [3, 2, 5]}, index = [3, 4, 5])
df3
Out:
A B
3 4 3
4 3 2
5 1 5
</code></pre>
<hr>
<pre><code>pd.concat([df1, df2, df3]).groupby(level=0).median()
Out:
A B
1 1.0 2.0
2 3.0 2.0
3 4.0 5.0
4 3.0 2.0
5 1.5 4.0
</code></pre>
| 1 | 2016-08-15T14:11:15Z | [
"python",
"pandas",
"dataframe"
] |
What is the point of calling super in custom error classes in python? | 38,856,819 | <p>So I have a simple custom error class in Python that I created based on the Python 2.7 documentation:</p>
<pre><code>class InvalidTeamError(Exception):
def __init__(self, message='This user belongs to a different team'):
self.message = message
</code></pre>
<p>This gives me warning <code>W0231: __init__ method from base class %r is not called</code> in PyLint so I go look it up and am given the very helpful description of "explanation needed." I'd normally just ignore this error but I have noticed that a ton of code online includes a call to super in the beginning of the <code>__init__</code> method of custom error classes so my question is: Does doing this actually serve a purpose or is it just people trying to appease a bogus pylint warning?</p>
| 7 | 2016-08-09T17:14:36Z | 38,857,670 | <p>Looking at the CPython 2.7 source code, there should be no problem avoiding that call to the base class <code>__init__</code>; and yes, it's done just because it's generally good practice to call the base class <code>__init__</code> in your own <code>__init__</code>.</p>
<p><a href="https://github.com/python/cpython/blob/master/Objects/exceptions.c" rel="nofollow">https://github.com/python/cpython/blob/master/Objects/exceptions.c</a> see line 60 for BaseException init and line 456 how Exception derives from BaseException.</p>
| -1 | 2016-08-09T18:08:58Z | [
"python",
"python-2.7",
"exception",
"pylint"
] |
What is the point of calling super in custom error classes in python? | 38,856,819 | <p>So I have a simple custom error class in Python that I created based on the Python 2.7 documentation:</p>
<pre><code>class InvalidTeamError(Exception):
def __init__(self, message='This user belongs to a different team'):
self.message = message
</code></pre>
<p>This gives me warning <code>W0231: __init__ method from base class %r is not called</code> in PyLint so I go look it up and am given the very helpful description of "explanation needed." I'd normally just ignore this error but I have noticed that a ton of code online includes a call to super in the beginning of the <code>__init__</code> method of custom error classes so my question is: Does doing this actually serve a purpose or is it just people trying to appease a bogus pylint warning?</p>
| 7 | 2016-08-09T17:14:36Z | 38,857,736 | <p>This was a valid pylint warning: by not using the superclass <code>__init__</code> you can miss out on implementation changes in the parent class. And, indeed, you have - because <code>BaseException.message</code> has been deprecated as of Python 2.6.</p>
<p>Here would be an implementation which will avoid your warning W0231 and will also avoid python's deprecation warning about the <code>message</code> attribute. </p>
<pre><code>class InvalidTeamError(Exception):
def __init__(self, message='This user belongs to a different team'):
super(InvalidTeamError, self).__init__(message)
</code></pre>
<p>This is a better way to do it, because the <a href="https://hg.python.org/cpython/file/6f6e56bb10aa/Objects/exceptions.c#l100">implementation for <code>BaseException.__str__</code></a> only considers the 'args' tuple, it doesn't look at message at all. With your old implementation, <code>print InvalidTeamError()</code> would have only printed an empty string, which is probably not what you wanted!</p>
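<p>For instance, a quick demonstration (my own) that the message now survives <code>str()</code>:</p>

```python
class InvalidTeamError(Exception):
    def __init__(self, message='This user belongs to a different team'):
        super(InvalidTeamError, self).__init__(message)

try:
    raise InvalidTeamError()
except InvalidTeamError as e:
    print(str(e))   # This user belongs to a different team
    print(e.args)   # ('This user belongs to a different team',)
```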
| 8 | 2016-08-09T18:12:37Z | [
"python",
"python-2.7",
"exception",
"pylint"
] |
Iterating through two pandas dataframes and appending data from one dataframe to the other | 38,856,904 | <p>I have two pandas data-frames that look like this: </p>
<p>data_frame_1:</p>
<pre><code>index un_id city
1 abc new york
2 def atlanta
3 gei toronto
4 lmn tampa
</code></pre>
<p>data_frame_2:</p>
<pre><code>index name un_id
1 frank gei
2 john lmn
3 lisa abc
4 jessica def
</code></pre>
<p>I need to match names to cities via the un_id column either in a new data-frame or an existing data-frame. I am having trouble figuring out how to iterate through one column, grab the un_id, iterate through the other un_id column in the other data-frame with that un_id, and then append the information needed back to the original data-frame. </p>
| 1 | 2016-08-09T17:20:04Z | 38,857,189 | <p>Use pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">merge</a>:</p>
<pre><code>In[14]:df2.merge(df1,on='un_id')
Out[14]:
name un_id city
0 frank gei toronto
1 john lmn tampa
2 lisa abc new york
3 jessica def atlanta
</code></pre>
| 2 | 2016-08-09T17:37:21Z | [
"python",
"pandas",
"dataframe",
"iteration"
] |
neural net sigmoid function "takes exactly 1 argument (2 given)" | 38,856,948 | <p>trying to forward propagate some data through a neural net</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
#Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forward(self, X):
#Propagate inputs though network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(z):
# apply sigmoid activation function
return 1/(1+np.exp(-z))
</code></pre>
<p>When I run:<br>
<code>NN = Neural_Network()
yHat = NN.forward(X)</code></p>
<p>Why do I get the error:<code>TypeError: sigmoid() takes exactly 1 argument (2 given)</code></p>
<p>When I run:
<code>print NN.W1</code></p>
<p>I get: <code>[[ 1.034435 -0.19260378 -2.73767483]
[-0.66502157 0.86653985 -1.22692781]]</code></p>
<p>(perhaps this is a problem with the numpy dot function returning too many dimensions?)</p>
<p>*note: i am running in jupyter notebook and <code>%pylab inline</code></p>
| 0 | 2016-08-09T17:22:16Z | 38,856,964 | <p>You are missing a <code>self</code> argument for the <code>sigmoid</code> function. <code>def sigmoid(z):</code> -> <code>def sigmoid(self, z):</code>. This <code>self.sigmoid(self.z3)</code> is effectively calling <code>sigmoid</code> with <code>self</code> as the first parameter and <code>self.z3</code> as the second.</p>
<p>(That or your code indentation is off which doesn't look likely since the code runs)</p>
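<p>A stripped-down sketch of the fix, using scalar <code>math.exp</code> in place of <code>np.exp</code> so the example stands alone:</p>

```python
import math

class Network(object):
    def sigmoid(self, z):  # note the added `self` parameter
        return 1.0 / (1.0 + math.exp(-z))

net = Network()
print(net.sigmoid(0))  # 0.5
```

<p>Alternatively, keep the one-argument signature and decorate the method with <code>@staticmethod</code>.</p>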
| 1 | 2016-08-09T17:23:07Z | [
"python",
"neural-network",
"matrix-multiplication",
"sigmoid"
] |
Make a list of every column in a file in Python | 38,856,990 | <p>I would like to create a list for every column in a txt file.
The file looks like this:</p>
<p><code>NAME S1 S2 S3 S4
A 1 4 3 1
B 2 1 2 6
C 2 1 3 5</code></p>
<p>PROBLEM 1. How do I dynamically make the number of lists that fit the number of columns, such that I can fill them? In some files I will have 4 columns, others I will have 6 or 8...</p>
<p>PROBLEM 2. What is a pythonic way to iterate through each column and make a list of the values like this:</p>
<pre><code>list_s1 = [1,2,2]
list_s2 = [4,1,1]
</code></pre>
<p>etc.</p>
<p>Right now I have read in the txt file and I have each individual line. As input I give the number of NAMES in a file (here HOW_MANY_SAMPLES = 4)</p>
<pre><code>def parse_textFile(file):
list_names = []
with open(file) as f:
header = f.next()
head_list = header.rstrip("\r\n").split("\t")
for i in f:
e = i.rstrip("\r\n").split("\t")
list_names.append(e)
for i in range(1, HOW_MANY_SAMPLES):
l+i = []
l+i.append([a[i] for a in list_names])
</code></pre>
<p>I need a dynamic way of creating and filling the number of lists that correspond to the amount of columns in my table.</p>
| 0 | 2016-08-09T17:24:40Z | 38,857,072 | <h2>Problem 1:</h2>
<p>You can use <code>len(head_list)</code> instead of having to specify <code>HOW_MANY_SAMPLES</code>.</p>
<p>You can also try using <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">Python's CSV module</a> and setting the delimiter to a space or a tab instead of a comma.</p>
<p>See <a href="http://stackoverflow.com/a/8859304/3199915">this answer to a similar StackOverflow question</a>.</p>
<h2>Problem 2:</h2>
<p>Once you have a list representing each row, you can use <code>zip</code> to create lists representing each column:
See <a href="http://stackoverflow.com/a/20279160/3199915">this answer</a>.</p>
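<p>For instance, the <code>zip</code> transposition boils down to this (toy data of my own):</p>

```python
rows = [['A', 1, 4], ['B', 2, 1], ['C', 2, 1]]
columns = list(zip(*rows))  # transpose: one tuple per column
print(columns)  # [('A', 'B', 'C'), (1, 2, 2), (4, 1, 1)]
```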
<p>With the CSV module, you can <a href="http://stackoverflow.com/a/29082892/3199915">follow this suggestion</a>, which is another way to invert the data from row-based lists to column-based lists.</p>
<h2>Sample:</h2>
<pre><code>import csv
# open the file in universal line ending mode
with open('data.txt', 'rU') as infile:
# register a dialect that skips extra whitespace
csv.register_dialect('ignorespaces', delimiter=' ', skipinitialspace=True)
# read the file as a dictionary for each row ({header : value})
reader = csv.DictReader(infile, dialect='ignorespaces')
data = {}
for row in reader:
for header, value in row.items():
try:
if (header):
data[header].append(value)
except KeyError:
data[header] = [value]
for column in data.keys():
print (column + ": " + str(data[column]))
</code></pre>
<p>this yields:</p>
<pre><code>S2: ['4', '1', '1']
S1: ['1', '2', '2']
S3: ['3', '2', '3']
S4: ['1', '6', '5']
NAME: ['A', 'B', 'C']
</code></pre>
| 2 | 2016-08-09T17:29:38Z | [
"python",
"list",
"multiple-columns"
] |
Make a list of every column in a file in Python | 38,856,990 | <p>I would like to create a list for every column in a txt file.
The file looks like this:</p>
<p><code>NAME S1 S2 S3 S4
A 1 4 3 1
B 2 1 2 6
C 2 1 3 5</code></p>
<p>PROBLEM 1. How do I dynamically make the number of lists that fit the number of columns, such that I can fill them? In some files I will have 4 columns, others I will have 6 or 8...</p>
<p>PROBLEM 2. What is a pythonic way to iterate through each column and make a list of the values like this:</p>
<pre><code>list_s1 = [1,2,2]
list_s2 = [4,1,1]
</code></pre>
<p>etc.</p>
<p>Right now I have read in the txt file and I have each individual line. As input I give the number of NAMES in a file (here HOW_MANY_SAMPLES = 4)</p>
<pre><code>def parse_textFile(file):
list_names = []
with open(file) as f:
header = f.next()
head_list = header.rstrip("\r\n").split("\t")
for i in f:
e = i.rstrip("\r\n").split("\t")
list_names.append(e)
for i in range(1, HOW_MANY_SAMPLES):
l+i = []
l+i.append([a[i] for a in list_names])
</code></pre>
<p>I need a dynamic way of creating and filling the number of lists that correspond to the amount of columns in my table.</p>
| 0 | 2016-08-09T17:24:40Z | 38,857,834 | <p>By using <code>pandas</code> you can create a list of lists or a dict to get what you are looking for.</p>
<p>Create a <code>dataframe</code> from your file, then iterate through each column and add it to a list or dict.</p>
<pre><code>from StringIO import StringIO
import pandas as pd
TESTDATA = StringIO("""NAME S1 S2 S3 S4
A 1 4 3 1
B 2 1 2 6
C 2 1 3 5""")
columns = []
c_dic = {}
df = pd.read_csv(TESTDATA, sep=" ", engine='python')
for column in df:
columns.append(df[column].tolist())
c_dic[column] = df[column].tolist()
</code></pre>
<p>Then you will have a list of list for all the columns</p>
<pre><code>for x in columns:
print x
</code></pre>
<p>Returns </p>
<pre><code>['A', 'B', 'C']
[1, 2, 2]
[4, 1, 1]
[3, 2, 3]
[1, 6, 5]
</code></pre>
<p>and </p>
<pre><code>for k,v in c_dic.iteritems():
print k,v
</code></pre>
<p>returns </p>
<pre><code>S3 [3, 2, 3]
S2 [4, 1, 1]
NAME ['A', 'B', 'C']
S1 [1, 2, 2]
S4 [1, 6, 5]
</code></pre>
<p>if you need to keep track of column names and data</p>
| 2 | 2016-08-09T18:19:15Z | [
"python",
"list",
"multiple-columns"
] |
Pandas: df.mul vs df.rmul | 38,857,004 | <p>Can anybody help me understand the difference (if any) between the two methods: <code>df.mul</code> and <code>df.rmul</code>? The documentation looks identical:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html#pandas.DataFrame.mul">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html#pandas.DataFrame.mul</a></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rmul.html">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rmul.html</a></p>
| 5 | 2016-08-09T17:25:43Z | 38,857,643 | <p>From the code:</p>
<pre><code># not entirely sure why this is necessary, but previously was included
# so it's here to maintain compatibility
rmul=arith_method(operator.mul, names('rmul'), op('*'),
default_axis=default_axis, reversed=True),
</code></pre>
<p>Analogous lines for <code>mul</code></p>
<pre><code>mul=arith_method(operator.mul, names('mul'), op('*'),
default_axis=default_axis),
</code></pre>
<p><code>rmul</code> has a flag <code>reversed=True</code></p>
<p>My assumption is that the <code>reversed</code> flag is important for non commutative operations like subtraction and division. It isn't necessary for multiplication, hence the comment.</p>
<p>For all practical purposes, it looks the same.</p>
| 2 | 2016-08-09T18:06:59Z | [
"python",
"pandas"
] |
Pandas: df.mul vs df.rmul | 38,857,004 | <p>Can anybody help me understand the difference (if any) between the two methods: <code>df.mul</code> and <code>df.rmul</code>? The documentation looks identical:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html#pandas.DataFrame.mul">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html#pandas.DataFrame.mul</a></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rmul.html">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rmul.html</a></p>
| 5 | 2016-08-09T17:25:43Z | 38,857,747 | <p>The documentation is not identical. As stated in the documentation, <code>df.mul(other)</code> is equivalent to <code>df * other</code>, while <code>df.rmul(other)</code> is equivalent to <code>other * df</code>.</p>
<p>This probably doesn't matter for most cases, but it will matter if, e.g., you have a dataframe of object dtype whose elements have noncommutative multiplication. Maybe you wrote a quaternion class and filled a dataframe with quaternions. Someone with more Pandas experience might be able to come up with more practical cases where it matters.</p>
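<p>A toy class of my own (not pandas internals) showing when Python dispatches to <code>__rmul__</code> rather than <code>__mul__</code>, which is the same left/right distinction <code>df.mul</code> and <code>df.rmul</code> expose:</p>

```python
class Tagged(object):
    """Toy operand whose multiplication records operand order."""
    def __init__(self, tag):
        self.tag = tag
    def __mul__(self, other):   # handles: self * other
        return '%s*%s' % (self.tag, other)
    def __rmul__(self, other):  # handles: other * self
        return '%s*%s' % (other, self.tag)

a = Tagged('a')
print(a * 2)  # 'a*2' via __mul__
print(2 * a)  # '2*a' via __rmul__, after int's __mul__ returns NotImplemented
```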
| 3 | 2016-08-09T18:13:35Z | [
"python",
"pandas"
] |
Pythonic way to replace every second comma of string with space | 38,857,048 | <p>I have a string which looks like this:</p>
<pre><code>coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
</code></pre>
<p>What I want is to bring it to this format:</p>
<pre><code>coords = "86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857"
</code></pre>
<p>So for every second number, I want to replace the comma with a space. Is there a simple, pythonic way to do this?</p>
<p>Right now I am trying to do it by using the split function to create a list and then looping through the list, but it seems not very straightforward.</p>
| 3 | 2016-08-09T17:28:08Z | 38,857,104 | <p>This sort of works:</p>
<pre><code>>>> s = coords.split(',')
>>> s
['86.2646484375', '23.039297747769726', '87.34130859375', '22.59372606392931', '88.13232421875', '24.066528197726857']
>>> [','.join(i) for i in zip(s[::2], s[1::2])]
['86.2646484375,23.039297747769726', '87.34130859375,22.59372606392931', '88.13232421875,24.066528197726857']
>>> ' '.join(','.join(i) for i in zip(s[::2], s[1::2]))
'86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857'
</code></pre>
| 2 | 2016-08-09T17:31:56Z | [
"python"
] |
Pythonic way to replace every second comma of string with space | 38,857,048 | <p>I have a string which looks like this:</p>
<pre><code>coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
</code></pre>
<p>What I want is to bring it to this format:</p>
<pre><code>coords = "86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857"
</code></pre>
<p>So for every second number, I want to replace the comma with a space. Is there a simple, pythonic way to do this?</p>
<p>Right now I am trying to do it by using the split function to create a list and then looping through the list, but it seems not very straightforward.</p>
| 3 | 2016-08-09T17:28:08Z | 38,857,120 | <p>First let's import the regular expression module and define your <code>coords</code> variable:</p>
<pre><code>>>> import re
>>> coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
</code></pre>
<p>Now, let's replace every second comma with a space:</p>
<pre><code>>>> re.sub('(,[^,]*),', r'\1 ', coords)
'86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857'
</code></pre>
<p>The regular expression <code>(,[^,]*),</code> looks for pairs of commas. The replacement text, <code>r'\1 '</code> keeps the first comma but replaces the second with a space.</p>
| 12 | 2016-08-09T17:32:37Z | [
"python"
] |
Pythonic way to replace every second comma of string with space | 38,857,048 | <p>I have a string which looks like this:</p>
<pre><code>coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
</code></pre>
<p>What I want is to bring it to this format:</p>
<pre><code>coords = "86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857"
</code></pre>
<p>So for every second number, I want to replace the comma with a space. Is there a simple, pythonic way to do this?</p>
<p>Right now I am trying to do it by using the split function to create a list and then looping through the list, but it seems not very straightforward.</p>
| 3 | 2016-08-09T17:28:08Z | 38,857,321 | <p>The pythonic way is to split the string and join it again, with the alternating delimiters:</p>
<pre><code>from itertools import chain, cycle, izip
# izip pairs each number with an alternating delimiter; rstrip drops the trailing one
coords = ''.join(chain.from_iterable(izip(coords.split(','), cycle(', ')))).rstrip(', ')
</code></pre>
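For reference, a Python 3 sketch of the same approach (my adaptation, not part of the original answer): `izip` became the built-in `zip`, and the cycled `', '` string supplies the alternating delimiters. Note it pairs one extra delimiter with the last field, stripped here:

```python
from itertools import chain, cycle

coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
# zip pairs each field with ',' or ' ' in turn; join flattens the pairs.
out = ''.join(chain.from_iterable(zip(coords.split(','), cycle(', '))))
out = out.rstrip(', ')  # drop the delimiter paired with the last field
print(out)
```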
| 1 | 2016-08-09T17:47:23Z | [
"python"
] |
Identify Django User table based on content_type and field name links | 38,857,096 | <p>I've been writing an app for managing subscriptions, for example annual club memberships. There are different categories of membership, and these categories can have admin-specified criteria which could relate to any table in the database. The only requirement is that there must be a link back to the Django User table.</p>
<p>I've therefore got this model (defined below) where:</p>
<ul>
<li><strong>name</strong> is the Category name for user convenience </li>
<li><strong>content_type</strong> is a link to the <code>django_content_type</code> table to identify
the table for which the criteria are being set in this category</li>
<li><strong>filter_condition</strong> is the condition to be used to select relevant users
(for example, if the <code>content_type</code> table were User, then this could be
as simple as <code>{"is_active":"true"}</code>)</li>
<li><strong>user_link</strong> is the field within the table identified by <code>content_type</code>
which is a foreign key for the User table</li>
</ul>
<p>I want to check that the <code>user_link</code> does link to the User table when the admin saves, and I'm writing <code>def clean(self)</code> for that purpose.</p>
<p><strong>I cannot work out how to convert my <code>content_type</code> and <code>user_link</code> fields into an object I can use to check it is the User table.</strong> Help very welcome!</p>
<p>Here's models.py</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
from django.conf import settings
import datetime
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _
from django.contrib.contenttypes.models import ContentType
from subscriptions.fields import JSONField
class Category(models.Model):
    name = models.CharField('Category', max_length=30)
    content_type = models.ForeignKey(ContentType)
    filter_condition = JSONField(default="{}", help_text=_(u"Django ORM compatible lookup kwargs which are used to get the list of objects."))
    user_link = models.CharField(_(u"Link to User table"), max_length=64, help_text=_(u"Name of the model field which links to the User table. 'No-link' means this is the User table."), default="No-link")

    def clean(self):
        if self.user_link == "No-link":
            if self.content_type.app_label == "auth" and self.content_type.model == "user":
                pass
            else:
                raise ValidationError(
                    _("Must specify the field that links to the user table.")
                )
        else:
            t = getattr(self.content_type, self.user_link)
            ct = ContentType.objects.get_for_model(t)
            if not ct.model == "user":
                raise ValidationError(
                    _("Must specify the field that links to the user table.")
                )

    def __unicode__(self):
        return self.name

    def _get_filter(self):
        # simplejson likes to put unicode objects as dictionary keys
        # but keyword arguments must be str type
        fc = {}
        for k, v in self.filter_condition.iteritems():
            fc.update({str(k): v})
        return fc

    def object_list(self):
        return self.content_type.model_class()._default_manager.filter(**self._get_filter())

    def object_count(self):
        return self.object_list().count()

    class Meta:
        verbose_name = _("Category")
        verbose_name_plural = _("Categories")
        ordering = ('name',)
</code></pre>
<p>Different descriptions can have different criteria.</p>
| 1 | 2016-08-09T17:30:57Z | 38,866,989 | <p>For validation purposes I've got the solution below, which replaces the main <code>else</code> clause in <code>def clean</code>.</p>
<pre><code> else:
    s = apps.get_model(self.content_type.app_label, self.content_type.model)  # requires: from django.apps import apps
    if not hasattr(s, self.user_link):
        raise ValidationError(
            _("Must specify the field that links to the user table.")
        )
</code></pre>
<p>I now need to work out how to actually use the information and connect the two into a field so I can link through to the <code>User</code> table</p>
| 1 | 2016-08-10T07:35:18Z | [
"python",
"django",
"django-contenttypes"
] |
How can I evaluate a value in a Python generator at the time I create the generator, not at the time I iterate it? | 38,857,108 | <p>The following is my code:</p>
<pre><code>import itertools
i = itertools.chain()
for a in [1, 2, 3]:
    i = itertools.chain(i, (a for _ in range(2)))
print(list(i))
[3, 3, 3, 3, 3, 3]
</code></pre>
<p>Is there a way I can access the value of <code>a</code> when creating the generator, rather than when I iterate it in the <code>print</code> statement?</p>
<p>I'd like the output to be <code>[1,1,2,2,3,3]</code>, ie, the value of <code>a</code> when the generator was created.</p>
<p>This is a trivial problem, but in my case I am iterating 1,000,000 rows in the outer loop, then in the inner loop generating 8 rows for each of those million, so I'm keen to keep it a generator.</p>
<p>Nb. The use case is I'm iterating a table in the outer loop, creating sub-objects for each row, passing the primary key to the sub-objects. The numbers are pretty large, so I want to build up the generator, then bulk insert after the loop (using Django's <code>Model.objects.bulk_create(generator)</code>). But by the time I call <code>bulk_create</code> the primary key is always set to the last row in the outer loop.</p>
<pre><code>gen = itertools.chain()
for id in ParentModel.objects.all().values_list('id', flat=True):
    gen = itertools.chain(gen, (InnerModel(fk=id) for i in range(10000)))
InnerModel.objects.bulk_create(gen)
</code></pre>
<p>All the generated InnerModels point to the last ParentModel in the list.</p>
| 1 | 2016-08-09T17:32:15Z | 38,857,573 | <p>One way would be to restructure your code to use a two-<code>for</code> genexp, so that <code>id</code> has the right value when it's needed:</p>
<pre><code>InnerModel.objects.bulk_create(
    InnerModel(fk=id) for id in ParentModel.objects.all().values_list('id', flat=True)
    for i in range(10000))
</code></pre>
<p>As another benefit, you won't get the nasty stack overflow you're building up to with those nested <code>chain</code>s.</p>
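Applied to the toy example from the question, the same restructuring looks like this (my illustration): the two-`for` genexp stays lazy but evaluates `a` in the right order, so nothing is bound late.

```python
# One generator expression with two for-clauses replaces the loop of chain()s.
i = (a for a in [1, 2, 3] for _ in range(2))
print(list(i))  # [1, 1, 2, 2, 3, 3]
```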
| 2 | 2016-08-09T18:03:23Z | [
"python",
"django",
"generator"
] |
How can I evaluate a value in a Python generator at the time I create the generator, not at the time I iterate it? | 38,857,108 | <p>The following is my code:</p>
<pre><code>import itertools
i = itertools.chain()
for a in [1, 2, 3]:
    i = itertools.chain(i, (a for _ in range(2)))
print(list(i))
[3, 3, 3, 3, 3, 3]
</code></pre>
<p>Is there a way I can access the value of <code>a</code> when creating the generator, rather than when I iterate it in the <code>print</code> statement?</p>
<p>I'd like the output to be <code>[1,1,2,2,3,3]</code>, ie, the value of <code>a</code> when the generator was created.</p>
<p>This is a trivial problem, but in my case I am iterating 1,000,000 rows in the outer loop, then in the inner loop generating 8 rows for each of those million, so I'm keen to keep it a generator.</p>
<p>Nb. The use case is I'm iterating a table in the outer loop, creating sub-objects for each row, passing the primary key to the sub-objects. The numbers are pretty large, so I want to build up the generator, then bulk insert after the loop (using Django's <code>Model.objects.bulk_create(generator)</code>). But by the time I call <code>bulk_create</code> the primary key is always set to the last row in the outer loop.</p>
<pre><code>gen = itertools.chain()
for id in ParentModel.objects.all().values_list('id', flat=True):
    gen = itertools.chain(gen, (InnerModel(fk=id) for i in range(10000)))
InnerModel.objects.bulk_create(gen)
</code></pre>
<p>All the generated InnerModels point to the last ParentModel in the list.</p>
| 1 | 2016-08-09T17:32:15Z | 38,857,662 | <p>If you don't mind wrapping the tuple into a lambda:</p>
<pre><code>>>> import itertools
>>> i = itertools.chain()
>>> for a in [1, 2, 3]:
...     i = itertools.chain(i, (lambda x: (x for _ in range(2)))(a))
...
>>> print(list(i))
[1, 1, 2, 2, 3, 3]
</code></pre>
<p>The idea is to copy the value of <code>a</code> of each iteration. <code>lambda</code>'s argument can do that. In each iteration, a local variable <code>x</code> is created and assigned with <code>a</code>.</p>
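An equivalent way to copy the value per iteration (my suggestion, not part of the answer) is `itertools.repeat`, which stores the value at the moment it is constructed:

```python
import itertools

i = itertools.chain.from_iterable(
    # repeat(a, 2) captures the current value of a when it is created,
    # while from_iterable keeps the overall iteration lazy.
    itertools.repeat(a, 2) for a in [1, 2, 3]
)
print(list(i))  # [1, 1, 2, 2, 3, 3]
```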
| 1 | 2016-08-09T18:08:00Z | [
"python",
"django",
"generator"
] |
Capture the numbers in this regex | 38,857,270 | <p>I have a string like this:</p>
<pre><code>{{1090872, "A"}, {4281, "AA"}, {1332552, "AAACU"}, {1287145, "AABB"}}
</code></pre>
<p>How can I write a regex to capture the numbers? I know that I can capture the letters with: "(.*?)"</p>
| 3 | 2016-08-09T17:43:32Z | 38,857,346 | <p>If you don't have numbers in quotes, then the answer is</p>
<pre class="lang-py prettyprint-override"><code>import re
s = '{{1090872, "A"}, {4281, "AA"}, {1332552, "AAACU"}, {1287145, "AABB"}}'  # avoid shadowing the built-in str
re.findall(r'\d+', s)
['1090872', '4281', '1332552', '1287145']
</code></pre>
<p>otherwise you can try</p>
<pre class="lang-py prettyprint-override"><code>re.findall(r'[{},](\d+)[{},]', s)
</code></pre>
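A quick check (my example data, not from the question) of why the second pattern can matter: a plain `\d+` also captures digits inside quotes, while anchoring on the surrounding braces/commas skips them.

```python
import re

s = '{{1090872, "12"}, {4281, "AA"}}'  # note the quoted "12"
print(re.findall(r'\d+', s))               # picks up the quoted 12 too
print(re.findall(r'[{},](\d+)[{},]', s))   # only the unquoted numbers
```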
| 2 | 2016-08-09T17:49:10Z | [
"python",
"regex"
] |
How to sort rows in pandas with a non-standard order | 38,857,311 | <p>I have a pandas dataframe, say:</p>
<pre><code>df = pd.DataFrame ([['a', 3, 3], ['b', 2, 5], ['c', 4, 9], ['d', 1, 43]], columns = ['col 1' , 'col2', 'col 3'])
</code></pre>
<p>or:</p>
<pre><code> col 1 col2 col 3
0 a 3 3
1 b 2 5
2 c 4 9
3 d 1 43
</code></pre>
<p>If I want to sort by col2, I can use df.sort, and that will sort ascending and descending.</p>
<p>However, if I want to sort the rows so that col2 is: [4, 2, 1, 3], how would I do that?</p>
| 3 | 2016-08-09T17:46:50Z | 38,857,529 | <p>One way is to convert that column to a <code>Categorical</code> type, which can have an arbitrary ordering.</p>
<pre><code>In [51]: df['col2'] = df['col2'].astype('category', categories=[4, 1, 2, 3], ordered=True)
In [52]: df.sort_values('col2')
Out[52]:
col 1 col2 col 3
2 c 4 9
3 d 1 43
1 b 2 5
0 a 3 3
</code></pre>
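For newer pandas versions (this note is mine; the `categories`/`ordered` arguments to `astype` were later removed), the same idea is expressed with an explicit `CategoricalDtype`:

```python
import pandas as pd

df = pd.DataFrame([['a', 3, 3], ['b', 2, 5], ['c', 4, 9], ['d', 1, 43]],
                  columns=['col 1', 'col2', 'col 3'])

# An ordered categorical dtype encodes the custom sort order directly.
order = pd.CategoricalDtype(categories=[4, 1, 2, 3], ordered=True)
df['col2'] = df['col2'].astype(order)
print(df.sort_values('col2'))
```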
| 3 | 2016-08-09T18:00:23Z | [
"python",
"sorting",
"pandas"
] |
How to sort rows in pandas with a non-standard order | 38,857,311 | <p>I have a pandas dataframe, say:</p>
<pre><code>df = pd.DataFrame ([['a', 3, 3], ['b', 2, 5], ['c', 4, 9], ['d', 1, 43]], columns = ['col 1' , 'col2', 'col 3'])
</code></pre>
<p>or:</p>
<pre><code> col 1 col2 col 3
0 a 3 3
1 b 2 5
2 c 4 9
3 d 1 43
</code></pre>
<p>If I want to sort by col2, I can use df.sort, and that will sort ascending and descending.</p>
<p>However, if I want to sort the rows so that col2 is: [4, 2, 1, 3], how would I do that?</p>
| 3 | 2016-08-09T17:46:50Z | 38,857,613 | <p>Try this:</p>
<pre><code>sortMap = {4: 1, 2: 2, 1: 3, 3: 4}
df["new"] = df['col2'].map(sortMap)
df.sort_values('new', inplace=True)
df
col1 col2 col3 new
2 c 4 9 1
1 b 2 5 2
3 d 1 43 3
0 a 3 3 4
</code></pre>
<p>alt method to create dict: </p>
<pre><code>ll = [4, 2, 1, 3]
sortMap = dict(zip(ll,range(len(ll))))
</code></pre>
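On pandas 1.1 or newer (my note, not part of the answer), `sort_values` accepts a `key` callable, so the helper column can be skipped entirely:

```python
import pandas as pd

df = pd.DataFrame([['a', 3, 3], ['b', 2, 5], ['c', 4, 9], ['d', 1, 43]],
                  columns=['col 1', 'col2', 'col 3'])
order = {4: 0, 2: 1, 1: 2, 3: 3}
# The key receives the whole column as a Series; map it to sortable ranks.
result = df.sort_values('col2', key=lambda s: s.map(order))
print(result)
```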
| 3 | 2016-08-09T18:05:21Z | [
"python",
"sorting",
"pandas"
] |
How to sort rows in pandas with a non-standard order | 38,857,311 | <p>I have a pandas dataframe, say:</p>
<pre><code>df = pd.DataFrame ([['a', 3, 3], ['b', 2, 5], ['c', 4, 9], ['d', 1, 43]], columns = ['col 1' , 'col2', 'col 3'])
</code></pre>
<p>or:</p>
<pre><code> col 1 col2 col 3
0 a 3 3
1 b 2 5
2 c 4 9
3 d 1 43
</code></pre>
<p>If I want to sort by col2, I can use df.sort, and that will sort ascending and descending.</p>
<p>However, if I want to sort the rows so that col2 is: [4, 2, 1, 3], how would I do that?</p>
| 3 | 2016-08-09T17:46:50Z | 38,857,621 | <p>alternative solution:</p>
<pre><code>In [409]: lst = [4, 2, 1, 3]
In [410]: srt = pd.Series(np.arange(len(lst)), index=lst)
In [411]: srt
Out[411]:
4 0
2 1
1 2
3 3
dtype: int32
In [412]: df.assign(x=df.col2.map(srt))
Out[412]:
col 1 col2 col 3 x
0 a 3 3 3
1 b 2 5 1
2 c 4 9 0
3 d 1 43 2
In [413]: df.assign(x=df.col2.map(srt)).sort_values('x')
Out[413]:
col 1 col2 col 3 x
2 c 4 9 0
1 b 2 5 1
3 d 1 43 2
0 a 3 3 3
In [414]: df.assign(x=df.col2.map(srt)).sort_values('x').drop('x',1)
Out[414]:
col 1 col2 col 3
2 c 4 9
1 b 2 5
3 d 1 43
0 a 3 3
</code></pre>
<p>NOTE: i do like <a href="http://stackoverflow.com/a/38857529/5741205">@chrisb's solution</a> more - it's much more elegant and probably will work faster</p>
| 1 | 2016-08-09T18:05:43Z | [
"python",
"sorting",
"pandas"
] |
Stopping processes in ThreadPool in Python | 38,857,379 | <p>I've been trying to write an interactive wrapper (for use in ipython) for a library that controls some hardware. Some calls are heavy on the IO so it makes sense to carry out the tasks in parallel. Using a ThreadPool (almost) works nicely:</p>
<pre><code>from multiprocessing.pool import ThreadPool
class hardware():
    def __init__(self, IPaddress):
        connect_to_hardware(IPaddress)

    def some_long_task_to_hardware(self, wtime):
        wait(wtime)
        result = 'blah'
        return result

pool = ThreadPool(processes=4)
threads = []
h = [hardware(IP1), hardware(IP2), hardware(IP3), hardware(IP4)]
for tt in range(4):
    task = pool.apply_async(h[tt].some_long_task_to_hardware, (1000,))
    threads.append(task)

alive = [True]*4
try:
    while any(alive):
        for tt in range(4): alive[tt] = not threads[tt].ready()
        do_other_stuff_for_a_bit()
except:
    # some command I cannot find that will stop the threads...
    raise

for tt in range(4): print(threads[tt].get())
</code></pre>
<p>The problem comes if the user wants to stop the process or there is an IO error in <code>do_other_stuff_for_a_bit()</code>. Pressing <kbd>Ctrl</kbd>+<kbd>C</kbd> stops the main process but the worker threads carry on running until their current task is complete.<br>
Is there some way to stop these threads without having to rewrite the library or have the user exit python? <code>pool.terminate()</code> and <code>pool.join()</code> that I have seen used in other examples do not seem to do the job.</p>
<p>The actual routine (instead of the simplified version above) uses logging and although all the worker threads are shut down at some point, I can see the processes that they started running carry on until complete (and being hardware I can see their effect by looking across the room).</p>
<p>This is in python 2.7. </p>
<p><strong>UPDATE:</strong></p>
<p>The solution seems to be to switch to using multiprocessing.Process instead of a thread pool. The test code I tried is to run foo_pulse:</p>
<pre><code>class foo(object):
    def foo_pulse(self, nPulse, name):  # just one method of *many*
        print('starting pulse for ' + name)
        result = []
        for ii in range(nPulse):
            print('on for ' + name)
            time.sleep(2)
            print('off for ' + name)
            time.sleep(2)
            result.append(ii)
        return result, name
</code></pre>
<p>If you try running this using ThreadPool then ctrl-C does not stop foo_pulse from running (even though it does kill the threads right away, the print statements keep on coming):</p>
<pre><code>from multiprocessing.pool import ThreadPool
import time
def test(nPulse):
    a = foo()
    pool = ThreadPool(processes=4)
    threads = []
    for rn in range(4):
        r = pool.apply_async(a.foo_pulse, (nPulse, 'loop ' + str(rn)))
        threads.append(r)
    alive = [True]*4
    try:
        while any(alive):  # wait until all threads complete
            for rn in range(4):
                alive[rn] = not threads[rn].ready()
            time.sleep(1)
    except:  # stop threads if user presses ctrl-c
        print('trying to stop threads')
        pool.terminate()
        print('stopped threads')  # this line prints but output from foo_pulse carried on.
        raise
    else:
        for t in threads: print(t.get())
</code></pre>
<p>However a version using multiprocessing.Process works as expected:</p>
<pre><code>import multiprocessing as mp
import time
def test_pro(nPulse):
    pros = []
    ans = []
    a = foo()
    for rn in range(4):
        q = mp.Queue()
        ans.append(q)
        r = mp.Process(target=wrapper, args=(a, "foo_pulse", q),
                       kwargs={'args': (nPulse, 'loop ' + str(rn))})
        r.start()
        pros.append(r)
    try:
        for p in pros: p.join()
        print('all done')
    except:  # stop threads if user stops findRes
        print('trying to stop threads')
        for p in pros: p.terminate()
        print('stopped threads')
    else:
        print('output here')
        for q in ans:
            print(q.get())
        print('exit time')
</code></pre>
<p>Where I have defined a wrapper for the library foo (so that it did not need to be re-written). If the return value is not needed then neither is this wrapper:</p>
<pre><code>def wrapper(a,target,q,args=(),kwargs={}):
    '''Used when return value is wanted'''
    q.put(getattr(a, target)(*args, **kwargs))
</code></pre>
<p>From the documentation I see no reason why a pool would not work (other than a bug).</p>
| 5 | 2016-08-09T17:51:03Z | 38,861,184 | <p>This is a very interesting use of parallelism. </p>
<p>However, if you are using <code>multiprocessing</code>, the goal is to have many processes running in parallel, as opposed to one process running many threads. </p>
<p>Consider these few changes to implement it using <code>multiprocessing</code>:</p>
<p>You have these functions that will run in parallel:</p>
<pre><code>import time
import multiprocessing as mp
def some_long_task_from_library(wtime):
    time.sleep(wtime)

class MyException(Exception): pass

def do_other_stuff_for_a_bit():
    time.sleep(5)
    raise MyException("Something Happened...")
</code></pre>
<p>Let's create and start the processes, say 4:</p>
<pre><code>procs = [] # this is not a Pool, it is just a way to handle the
# processes instead of calling them p1, p2, p3, p4...
for _ in range(4):
    p = mp.Process(target=some_long_task_from_library, args=(1000,))
    p.start()
    procs.append(p)

mp.active_children()  # returns the children still alive (reaping any finished ones); the processes are already running after start()
</code></pre>
<p>The processes are running in parallel, presumably in a separate cpu core, but that is for the OS to decide. You can check in your system monitor.</p>
<p>In the meantime you run a process that will break, and you want to stop the running processes, not leaving them orphaned:</p>
<pre><code>try:
    do_other_stuff_for_a_bit()
except MyException as exc:
    print(exc)
    print("Now stopping all processes...")
    for p in procs:
        p.terminate()
print("The rest of the process will continue")
</code></pre>
<p>If it doesn't make sense to continue with the main process when one or all of the subprocesses have terminated, you should handle the exit of the main program.</p>
<p>Hope it helps, and you can adapt bits of this for your library.</p>
| 1 | 2016-08-09T22:06:14Z | [
"python",
"threadpool"
] |
Stopping processes in ThreadPool in Python | 38,857,379 | <p>I've been trying to write an interactive wrapper (for use in ipython) for a library that controls some hardware. Some calls are heavy on the IO so it makes sense to carry out the tasks in parallel. Using a ThreadPool (almost) works nicely:</p>
<pre><code>from multiprocessing.pool import ThreadPool
class hardware():
    def __init__(self, IPaddress):
        connect_to_hardware(IPaddress)

    def some_long_task_to_hardware(self, wtime):
        wait(wtime)
        result = 'blah'
        return result

pool = ThreadPool(processes=4)
threads = []
h = [hardware(IP1), hardware(IP2), hardware(IP3), hardware(IP4)]
for tt in range(4):
    task = pool.apply_async(h[tt].some_long_task_to_hardware, (1000,))
    threads.append(task)

alive = [True]*4
try:
    while any(alive):
        for tt in range(4): alive[tt] = not threads[tt].ready()
        do_other_stuff_for_a_bit()
except:
    # some command I cannot find that will stop the threads...
    raise

for tt in range(4): print(threads[tt].get())
</code></pre>
<p>The problem comes if the user wants to stop the process or there is an IO error in <code>do_other_stuff_for_a_bit()</code>. Pressing <kbd>Ctrl</kbd>+<kbd>C</kbd> stops the main process but the worker threads carry on running until their current task is complete.<br>
Is there some way to stop these threads without having to rewrite the library or have the user exit python? <code>pool.terminate()</code> and <code>pool.join()</code> that I have seen used in other examples do not seem to do the job.</p>
<p>The actual routine (instead of the simplified version above) uses logging and although all the worker threads are shut down at some point, I can see the processes that they started running carry on until complete (and being hardware I can see their effect by looking across the room).</p>
<p>This is in python 2.7. </p>
<p><strong>UPDATE:</strong></p>
<p>The solution seems to be to switch to using multiprocessing.Process instead of a thread pool. The test code I tried is to run foo_pulse:</p>
<pre><code>class foo(object):
    def foo_pulse(self, nPulse, name):  # just one method of *many*
        print('starting pulse for ' + name)
        result = []
        for ii in range(nPulse):
            print('on for ' + name)
            time.sleep(2)
            print('off for ' + name)
            time.sleep(2)
            result.append(ii)
        return result, name
</code></pre>
<p>If you try running this using ThreadPool then ctrl-C does not stop foo_pulse from running (even though it does kill the threads right away, the print statements keep on coming):</p>
<pre><code>from multiprocessing.pool import ThreadPool
import time
def test(nPulse):
    a = foo()
    pool = ThreadPool(processes=4)
    threads = []
    for rn in range(4):
        r = pool.apply_async(a.foo_pulse, (nPulse, 'loop ' + str(rn)))
        threads.append(r)
    alive = [True]*4
    try:
        while any(alive):  # wait until all threads complete
            for rn in range(4):
                alive[rn] = not threads[rn].ready()
            time.sleep(1)
    except:  # stop threads if user presses ctrl-c
        print('trying to stop threads')
        pool.terminate()
        print('stopped threads')  # this line prints but output from foo_pulse carried on.
        raise
    else:
        for t in threads: print(t.get())
</code></pre>
<p>However a version using multiprocessing.Process works as expected:</p>
<pre><code>import multiprocessing as mp
import time
def test_pro(nPulse):
    pros = []
    ans = []
    a = foo()
    for rn in range(4):
        q = mp.Queue()
        ans.append(q)
        r = mp.Process(target=wrapper, args=(a, "foo_pulse", q),
                       kwargs={'args': (nPulse, 'loop ' + str(rn))})
        r.start()
        pros.append(r)
    try:
        for p in pros: p.join()
        print('all done')
    except:  # stop threads if user stops findRes
        print('trying to stop threads')
        for p in pros: p.terminate()
        print('stopped threads')
    else:
        print('output here')
        for q in ans:
            print(q.get())
        print('exit time')
</code></pre>
<p>Where I have defined a wrapper for the library foo (so that it did not need to be re-written). If the return value is not needed then neither is this wrapper:</p>
<pre><code>def wrapper(a,target,q,args=(),kwargs={}):
    '''Used when return value is wanted'''
    q.put(getattr(a, target)(*args, **kwargs))
</code></pre>
<p>From the documentation I see no reason why a pool would not work (other than a bug).</p>
| 5 | 2016-08-09T17:51:03Z | 38,921,883 | <p>In answer to the question of why pool did not work then this is due to (as quoted in the <a href="https://docs.python.org/3.1/library/multiprocessing.html" rel="nofollow">Documentation</a>) then <strong>main</strong> needs to be importable by the child processes and due to the nature of this project interactive python is being used. </p>
<p>At the same time it was not clear why ThreadPool would work - although the clue is right there in the name. ThreadPool creates its pool of worker threads using multiprocessing.dummy, which as noted <a href="http://stackoverflow.com/questions/26432411/multiprocessing-dummy-in-python">here</a> is just a wrapper around the Threading module. Pool uses multiprocessing.Process. This can be seen with this test:</p>
<pre><code>p=ThreadPool(processes=3)
p._pool[0]
<DummyProcess(Thread23, started daemon 12345)> #no terminate() method
p=Pool(processes=3)
p._pool[0]
<Process(PoolWorker-1, started daemon)> #has handy terminate() method if needed
</code></pre>
<p>As threads do not have a terminate method the worker threads carry on running until they have completed their current task. Killing threads is messy (which is why I tried to use the multiprocessing module) but solutions are <a href="http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python">here</a>.</p>
<p>The one warning about the solution using the above:</p>
<pre><code>def wrapper(a,target,q,args=(),kwargs={}):
    '''Used when return value is wanted'''
    q.put(getattr(a, target)(*args, **kwargs))
</code></pre>
<p>is that changes to attributes inside the instance of the object are not passed back up to the main program. As an example, the class foo above could also have a method such as:</p>
<pre><code>def addIP(self, newIP):
    self.hardwareIP = newIP
</code></pre>
<p>A call to <code>r=mp.Process(target=a.addIP, args=('127.0.0.1',))</code> does not update <code>a</code>.</p>
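A minimal demonstration of that warning (my sketch; it pins the fork start method, so POSIX only) — the child process mutates its own copy while the parent's object is untouched:

```python
import multiprocessing as mp

class foo(object):
    def __init__(self):
        self.hardwareIP = None
    def addIP(self, newIP):
        self.hardwareIP = newIP

a = foo()
# fork context: the child gets a copy-on-write copy of `a`.
p = mp.get_context('fork').Process(target=a.addIP, args=('127.0.0.1',))
p.start()
p.join()
print(a.hardwareIP)  # still None in the parent
```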
<p>The only way round this for a complex object seems to be shared memory using a custom <code>manager</code> which can give access to both the methods and attributes of object <code>a</code>. For a very large complex object based on a library this may be best done using <code>dir(foo)</code> to populate the manager. If I can figure out how I'll update this answer with an example (for my future self as much as others).</p>
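A sketch of that manager approach (my illustration, not the answer author's; fork context, POSIX only). The registered class lives in the manager's server process and the proxy forwards method calls to it, so state changes stick:

```python
import multiprocessing as mp
from multiprocessing.managers import BaseManager

class foo(object):
    def __init__(self):
        self.hardwareIP = None
    def addIP(self, newIP):
        self.hardwareIP = newIP
    def getIP(self):
        return self.hardwareIP

class FooManager(BaseManager):
    pass

FooManager.register('foo', foo)

manager = FooManager(ctx=mp.get_context('fork'))
manager.start()
a = manager.foo()   # a proxy; the real foo lives in the server process
a.addIP('127.0.0.1')
ip = a.getIP()      # reads back the state set through the proxy
manager.shutdown()
print(ip)
```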
| 0 | 2016-08-12T15:46:35Z | [
"python",
"threadpool"
] |
How to get only one list in python? | 38,857,433 | <p>I need a little help with some lists. I have this part of code:</p>
<pre class="lang-py prettyprint-override"><code>get_attributeName = \
soup.find(True, {"class": ["product-attributes", "product-attribute-value"]}).find_all('li')
allDataList = []
for attData, attValues in get_attributeName:
    data = [attData, attValues.text]
    allDataList.append(data)
    print(allDataList)
</code></pre>
<p>And the result after <code>print(allDataList)</code> is:</p>
<pre class="lang-py prettyprint-override"><code>[['Year: ', '2013']]
[['Year: ', '2013'], ['Color: ', 'Yellow']]
[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg']]
[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg'], ['Comment: ', 'Null']]
[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg'], ['Comment: ', 'Null'], ['Product: ', 'mushrooms']]
[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg'], ['Comment: ', 'Null'], ['Product: ', 'mushrooms'], ['Forest: ', 'NULL']]
[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg'], ['Comment: ', 'Null'], ['Product: ', 'mushrooms'], ['Forest: ', 'NULL'], ['Country: ', 'France']]
</code></pre>
<p>For the result I need only the last row, with all the lists inside one list, like this:</p>
<pre class="lang-py prettyprint-override"><code>[['Year: ', '2013'], ['Color: ', 'Yellow'], ['Package: ', '5kg'], ['Comment: ', 'Null'], ['Product: ', 'mushrooms'], ['Forest: ', 'NULL'], ['Country: ', 'France']]
</code></pre>
| 0 | 2016-08-09T17:55:09Z | 38,857,474 | <p>Your code is <em>printing</em> the list each iteration, as you build it. Simply don't print each iteration; print only <em>after</em> the <code>for</code> loop has completed (if at all).</p>
<pre><code>get_attributeName = soup.find(True,{"class": ["product-attributes", "product-attribute-value"]}).find_all('li')
allDataList = []
for attData, attValues in get_attributeName:
    data = [attData, attValues.text]
    allDataList.append(data)

print(allDataList)
</code></pre>
| 2 | 2016-08-09T17:56:59Z | [
"python",
"list"
] |
AttributeError: 'module' object has no attribute 'Screen' and indentation error | 38,857,435 | <p>I'm learning the Turtle class for Python. While running the <code>shp.py</code> file in the terminal I got multiple errors:</p>
<p><a href="http://i.stack.imgur.com/GuYq8.png" rel="nofollow"><img src="http://i.stack.imgur.com/GuYq8.png" alt="enter image description here"></a></p>
<p>What is going wrong?</p>
<pre><code>import turtle as myTurtle
def draw_shape():
    window = myTurtle.Screen()
    window.bgcolor("yellow")

    brack = myTurtle.Turtle()
    brack.shape("turtle")
    brack.speed(2)
    c = 1
    while c < 5:
        brack.forward(100)
        brack.right(90)
        c = c + 1

    rosy = myTurtle.Turtle()
    rosy.shape("arrow")
    rosy.color("blue")
    rosy.circle(100)

    matt = myTurtle.Turtle()
    matt.shape("circle")
    i = 1
    while i < 4:
        matt.forward(320)
        matt.left(120)
        i = i + 1

    window.exitonclick()

draw_shape()
</code></pre>
| -1 | 2016-08-09T17:55:14Z | 38,857,575 | <p>Currently, your script is importing itself (your file shadows the <code>turtle</code> module it tries to import). A simple change to the file name can fix this:</p>
<pre><code>Go to My Documents -> Locate your file -> Right click on it -> Click Rename -> Enter a new name like myTurtle.py
</code></pre>
| 0 | 2016-08-09T18:03:25Z | [
"python",
"terminal",
"turtle-graphics"
] |
NameError: name '[string]' is not defined | 38,857,541 | <p>When testing my program, I keep getting this error:</p>
<pre><code>1. Encrypt a file
2. Decrypt a file
----> 1
Enter the filename you'd like to encrypt: test
Traceback (most recent call last):
File "./encrypt.py", line 71, in <module>
Main()
File "./encrypt.py", line 58, in Main
filename = input("Enter the filename you'd like to encrypt: ")
File "<string>", line 1, in <module>
NameError: name 'test' is not defined
</code></pre>
<p>And here is my code for the Main() function:</p>
<pre><code>def Main():
    print("1. Encrypt a file")
    print("2. Decrypt a file")
    choice = str(input("----> "))

    if choice == '1':
        filename = input("Enter the filename you'd like to encrypt: ")
        password = input("Enter a password used for the encryption tool: ")
        encrypt(getKey(password), filename)
        print("File has been encrypted.")
    elif choice == '2':
        filename = input("Enter the filename you'd like to decrypt: ")
        password = input("Enter the password used for the encryption of this file: ")
        decrypt(getKey(password), filename)
        print("File has been decrypted. Note that if the password used in the encryption does "
              + "not match the password you entered in, the file will remain encrypted.")
    else:
        print("Invalid option. Closing the program...")
</code></pre>
<p>I am using the simple input() method to get my data ('test', for example), and it keeps telling me that the name of whatever I enter at runtime is not defined. I don't see any formatting errors, syntax errors, etc.</p>
| 0 | 2016-08-09T18:01:03Z | 38,857,602 | <blockquote>
<p>it keeps telling me whatever information I enter in at runtime, the name of what I just entered is not defined. I don't see any formatting errors, syntax errors, etc.</p>
</blockquote>
<p>You are probably using Python 2.x. You should use: <code>raw_input(..)</code> instead of <code>input(..)</code></p>
| 0 | 2016-08-09T18:05:03Z | [
"python"
] |
NameError: name '[string]' is not defined | 38,857,541 | <p>When testing my program, I keep getting this error:</p>
<pre><code>1. Encrypt a file
2. Decrypt a file
----> 1
Enter the filename you'd like to encrypt: test
Traceback (most recent call last):
File "./encrypt.py", line 71, in <module>
Main()
File "./encrypt.py", line 58, in Main
filename = input("Enter the filename you'd like to encrypt: ")
File "<string>", line 1, in <module>
NameError: name 'test' is not defined
</code></pre>
<p>And here is my code for the Main() function:</p>
<pre><code>def Main():
    print("1. Encrypt a file")
    print("2. Decrypt a file")
    choice = str(input("----> "))

    if choice == '1':
        filename = input("Enter the filename you'd like to encrypt: ")
        password = input("Enter a password used for the encryption tool: ")
        encrypt(getKey(password), filename)
        print("File has been encrypted.")
    elif choice == '2':
        filename = input("Enter the filename you'd like to decrypt: ")
        password = input("Enter the password used for the encryption of this file: ")
        decrypt(getKey(password), filename)
        print("File has been decrypted. Note that if the password used in the encryption does "
              + "not match the password you entered in, the file will remain encrypted.")
    else:
        print("Invalid option. Closing the program...")
</code></pre>
<p>I am using the simple input() method to get my data ('test', for example), and it keeps telling me that the name of whatever I enter at runtime is not defined. I don't see any formatting errors, syntax errors, etc.</p>
| 0 | 2016-08-09T18:01:03Z | 38,857,609 | <p>You're going to want to use <code>raw_input()</code> instead of <code>input()</code>. <code>input</code> tries to run the expression it gets as a Python expression, whereas <code>raw_input</code> returns a string. This is in Python 2.x; in 3.x <code>raw_input</code> doesn't exist.</p>
<p>When you get the <code>NameError</code>, it's trying to run your input as an expression, but <code>test</code> doesn't exist. </p>
| 1 | 2016-08-09T18:05:15Z | [
"python"
] |
NameError: name '[string]' is not defined | 38,857,541 | <p>When testing my program, I keep getting this error:</p>
<pre><code>1. Encrypt a file
2. Decrypt a file
----> 1
Enter the filename you'd like to encrypt: test
Traceback (most recent call last):
File "./encrypt.py", line 71, in <module>
Main()
File "./encrypt.py", line 58, in Main
filename = input("Enter the filename you'd like to encrypt: ")
File "<string>", line 1, in <module>
NameError: name 'test' is not defined
</code></pre>
<p>And here is my code for the Main() function:</p>
<pre><code>def Main():
print("1. Encrypt a file")
print("2. Decrypt a file")
choice = str(input("----> "))
if choice == '1':
filename = input("Enter the filename you'd like to encrypt: ")
password = input("Enter a password used for the encryption tool: ")
encrypt(getKey(password), filename)
print("File has been encrypted.")
elif choice == '2':
filename = input("Enter the filename you'd like to decrypt: ")
password = input("Enter the password used for the encryption of this file: ")
decrypt(getKey(password), filename)
print("File has been decrypted. Note that if the password used in the encryption does " \
+ "not match the password you entered in, the file will remain encrypted.")
else:
print("Invalid option. Closing the program...")
</code></pre>
<p>I am using the simple input() method to get my data ('test', for example), and it keeps telling me whatever information I enter in at runtime, the name of what I just entered is not defined. I don't see any formatting errors, syntax errors, etc.</p>
| 0 | 2016-08-09T18:01:03Z | 38,857,632 | <p>The <code>input</code> function treats whatever you input as a Python expression. What you're looking for is the <code>raw_input</code> function, which treats your input as a string.
Switch all your <code>input</code>s for <code>raw_input</code>s and you should be fine.</p>
| 0 | 2016-08-09T18:06:24Z | [
"python"
] |
NameError: name '[string]' is not defined | 38,857,541 | <p>When testing my program, I keep getting this error:</p>
<pre><code>1. Encrypt a file
2. Decrypt a file
----> 1
Enter the filename you'd like to encrypt: test
Traceback (most recent call last):
File "./encrypt.py", line 71, in <module>
Main()
File "./encrypt.py", line 58, in Main
filename = input("Enter the filename you'd like to encrypt: ")
File "<string>", line 1, in <module>
NameError: name 'test' is not defined
</code></pre>
<p>And here is my code for the Main() function:</p>
<pre><code>def Main():
print("1. Encrypt a file")
print("2. Decrypt a file")
choice = str(input("----> "))
if choice == '1':
filename = input("Enter the filename you'd like to encrypt: ")
password = input("Enter a password used for the encryption tool: ")
encrypt(getKey(password), filename)
print("File has been encrypted.")
elif choice == '2':
filename = input("Enter the filename you'd like to decrypt: ")
password = input("Enter the password used for the encryption of this file: ")
decrypt(getKey(password), filename)
print("File has been decrypted. Note that if the password used in the encryption does " \
+ "not match the password you entered in, the file will remain encrypted.")
else:
print("Invalid option. Closing the program...")
</code></pre>
<p>I am using the simple input() method to get my data ('test', for example), and it keeps telling me whatever information I enter in at runtime, the name of what I just entered is not defined. I don't see any formatting errors, syntax errors, etc.</p>
| 0 | 2016-08-09T18:01:03Z | 38,857,906 | <p>It turns out my Linux distro has both Python 2.7.12 and Python 3.5.2. Apparently the system defaults to Python 2.7.12 instead of the newer version, so I fixed it by changing:</p>
<pre><code>#!/usr/bin/python
</code></pre>
<p>to:</p>
<pre><code>#!/usr/bin/python3
</code></pre>
| 0 | 2016-08-09T18:25:03Z | [
"python"
] |
assigning hashtags to django models | 38,857,778 | <p>I have defined Job model in my django website as shown here:</p>
<pre><code>class Job(models.Model):
title = models.CharField(max_length=100)
description = models.TextField()
def __str__(self):
return self.title
</code></pre>
<p>Skills required by a job:</p>
<pre><code>class SkillsGroup(models.Model):
group_name = models.CharField(max_length=150)
def __str__(self):
return self.group_name
class Skill(models.Model):
skill_group = models.ForeignKey(SkillsGroup)
name = models.CharField(max_length=200)
def __str__(self):
return self.skill_group.group_name + ' - ' + self.name
</code></pre>
<p>Now my problem is how I could assign each Job a list of skills (like hashtags or ...) so that each user can find jobs according to a specific skill,
something like the way tags are assigned to each question on Stack Overflow.</p>
<p>Is there any way I could implement this feature without using external apps/libraries?</p>
| 0 | 2016-08-09T18:15:34Z | 38,857,860 | <p>It sounds like you want <code>Skill</code> to be a <code>ForeignKey</code> to <code>Job</code>.</p>
| -1 | 2016-08-09T18:21:23Z | [
"python",
"django"
] |
assigning hashtags to django models | 38,857,778 | <p>I have defined Job model in my django website as shown here:</p>
<pre><code>class Job(models.Model):
title = models.CharField(max_length=100)
description = models.TextField()
def __str__(self):
return self.title
</code></pre>
<p>Skills required by a job:</p>
<pre><code>class SkillsGroup(models.Model):
group_name = models.CharField(max_length=150)
def __str__(self):
return self.group_name
class Skill(models.Model):
skill_group = models.ForeignKey(SkillsGroup)
name = models.CharField(max_length=200)
def __str__(self):
return self.skill_group.group_name + ' - ' + self.name
</code></pre>
<p>Now my problem is how I could assign each Job a list of skills (like hashtags or ...) so that each user can find jobs according to a specific skill,
something like the way tags are assigned to each question on Stack Overflow.</p>
<p>Is there any way I could implement this feature without using external apps/libraries?</p>
| 0 | 2016-08-09T18:15:34Z | 38,858,079 | <p>Perhaps you will want to take a look at this Django app:
<a href="http://django-tagging.readthedocs.io/" rel="nofollow">django-tagging</a></p>
| -1 | 2016-08-09T18:36:00Z | [
"python",
"django"
] |
assigning hashtags to django models | 38,857,778 | <p>I have defined Job model in my django website as shown here:</p>
<pre><code>class Job(models.Model):
title = models.CharField(max_length=100)
description = models.TextField()
def __str__(self):
return self.title
</code></pre>
<p>Skills required by a job:</p>
<pre><code>class SkillsGroup(models.Model):
group_name = models.CharField(max_length=150)
def __str__(self):
return self.group_name
class Skill(models.Model):
skill_group = models.ForeignKey(SkillsGroup)
name = models.CharField(max_length=200)
def __str__(self):
return self.skill_group.group_name + ' - ' + self.name
</code></pre>
<p>Now my problem is how I could assign each Job a list of skills (like hashtags or ...) so that each user can find jobs according to a specific skill,
something like the way tags are assigned to each question on Stack Overflow.</p>
<p>Is there any way I could implement this feature without using external apps/libraries?</p>
| 0 | 2016-08-09T18:15:34Z | 38,863,648 | <p>You need ManyToManyField. I will simplify your example a bit.</p>
<pre><code>class Job(models.Model):
title = models.CharField(max_length=100)
description = models.TextField()
skills = models.ManyToManyField('Skill')  # string reference, since Skill is defined below
def __str__(self):
return self.title
class Skill(models.Model):
name = models.CharField(max_length=100)
</code></pre>
<p>You can now add or remove required skills for your job like this; just be sure that the Skill instance is already saved in the database, or create it directly through the "skills" manager of the Job instance:</p>
<pre><code>job = Job.objects.get(title="My Vacancy")
# Create a new skill for my job offer
job.skills.create(name="Special Skill")
# Add an existing skill to my job offer
skill = Skill.objects.get(name="Another Special Skill")
job.skills.add(skill)
# I've changed my mind, I don't need the last skill to my vacancy
job.skills.remove(skill)
</code></pre>
<p>This field can also act like a QuerySet:</p>
<pre><code># Check required skills for a job
skills = job.skills.all()
# Lets find some job with special skills
jobs = Job.objects.filter(skills__name__icontains="Special Skill")
</code></pre>
<p><a href="https://docs.djangoproject.com/es/1.9/topics/db/examples/many_to_many/" rel="nofollow">https://docs.djangoproject.com/es/1.9/topics/db/examples/many_to_many/</a></p>
| 0 | 2016-08-10T03:12:52Z | [
"python",
"django"
] |
Getting a Tic Tac Toe move from a button click (in Python with Tkinter) | 38,857,811 | <p>I'm trying to write a Tic Tac Toe program with a simple GUI using Tkinter. Here is the result so far:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import Tkinter as tk
class Game:
def __init__(self, player1, player2, with_GUI=False):
self.player1 = player1
self.player2 = player2
self.current_player = player1
self.board = Board()
self.with_GUI = with_GUI
if self.with_GUI:
master = tk.Tk()
self.GUI = GUI(master)
self.board.GUI = self.GUI
self.player1.GUI = self.GUI
self.player2.GUI = self.GUI
def play(self):
self.board.render()
while not self.board.over():
self.play_turn()
self.declare_outcome()
def play_turn(self):
move = self.current_player.get_move()
mark = self.current_player.mark
self.board.place_mark(move, mark)
self.switch_players()
self.board.render()
def switch_players(self):
if self.current_player == self.player1:
self.current_player = self.player2
else:
self.current_player = self.player1
def declare_outcome(self):
if self.board.winner() == 1:
print "Player 1 wins!"
elif self.board.winner() == 0:
print "Player 2 wins!"
else:
print "Cat's game."
class Board:
def __init__(self, grid=np.ones((3,3))*np.nan, GUI=None):
self.grid = grid
self.GUI = GUI
def winner(self):
rows = [self.grid[i,:] for i in range(3)]
cols = [self.grid[:,j] for j in range(3)]
diag = [np.array([self.grid[i,i] for i in range(3)])]
cross_diag = [np.array([self.grid[2-i,i] for i in range(3)])]
lanes = np.concatenate((rows, cols, diag, cross_diag)) # A "lane" is defined as a row, column, diagonal, or cross-diagonal
any_lane = lambda x: any([np.array_equal(lane, x) for lane in lanes]) # Returns true if any lane is equal to the input argument "x"
if any_lane(np.ones(3)):
return 1
elif any_lane(np.zeros(3)):
return 0
def over(self):
return (not np.any(np.isnan(self.grid))) or (self.winner() is not None)
def place_mark(self, pos, mark):
num = self.mark2num(mark)
self.grid[tuple(pos)] = num
def mark2num(self, mark):
if mark == "X":
return 1
elif mark == "O":
return 0
else:
print "The player's mark must be either 'X' or 'O'."
def render(self):
if self.GUI is None:
print self.grid
else:
pass
class HumanPlayer:
def __init__(self, mark, GUI=None):
self.mark = mark
self.GUI = GUI
def get_move(self, board=Board()):
if self.GUI is None:
move_string = input("Where would you like to move? (row, column) ")
move = tuple(move_string)
if not self.empty(move, board):
print "That square is already occupied.\n"
return self.get_move(board)
else:
return tuple(move_string)
else:
# return GUI.make_move()
pass
def empty(self, move, board):
return np.isnan(board.grid[move])
class GUI:
def __init__(self, master):
frame = tk.Frame(master)
frame.pack()
self.buttons = [[None for _ in range(3)] for _ in range(3)]
for i in range(3):
for j in range(3):
self.buttons[i][j] = tk.Button(frame, height=3, width=3, text="", command=lambda i=i, j=j: self.make_move(self.buttons[i][j]))
self.buttons[i][j].grid(row=i, column=j)
def make_move(self, button):
if button["text"] == "":
button.configure(text="X")
info = button.grid_info()
move = (info["row"], info["column"])
print move
return move
player1 = HumanPlayer(mark="X")
player2 = HumanPlayer(mark="O")
game = Game(player1, player2, with_GUI=True)
game.play()
</code></pre>
<p>The program works without a GUI: if I set <code>with_GUI</code> to <code>False</code> in the penultimate line, it will use a very simple command line interface in which the board is represented by a 3x3 Numpy array where the "X" mark is represented by <code>1</code>, the "O" by <code>0</code>, and an empty square by <code>NaN</code>.</p>
<p>Using <code>with_GUI=True</code>, I get an array of unlabeled buttons which will acquire the label "X" when clicked and print out the coordinates to the command line (see below).</p>
<p><a href="http://i.stack.imgur.com/2srQm.png" rel="nofollow"><img src="http://i.stack.imgur.com/2srQm.png" alt="enter image description here"></a>
<a href="http://i.stack.imgur.com/C3Agq.png" rel="nofollow"><img src="http://i.stack.imgur.com/C3Agq.png" alt="enter image description here"></a></p>
<p>I am struggling, however, to see how I could get the return from the <code>make_move</code> function in the <code>GUI</code> class into the <code>get_move</code> function in the <code>HumanPlayer</code> class. <code>make_move</code> requires as input an instance of <code>tk.Button</code>, how can I make the most recently clicked <code>Button</code> available to this class?</p>
| 0 | 2016-08-09T18:17:59Z | 38,868,972 | <p>To get the Button itself, do not use a lambda binding.</p>
<p>Use the default bindings and use the event object.</p>
<p>Using this approach you do not need to pass row, col at creation time.</p>
<pre><code>import Tkinter as tk
class someclass(tk.Frame):
def __init__(self, *args, **kwargs):
# the init, etc...
btn = tk.Button(self, text='bla')
btn.configure(command=lambda button=btn: self.callback(button))
btn.grid()
def callback(self, button):
btn = button
# do stuff with the button
</code></pre>
<p>One Remark left:</p>
<p><b>Please do not import stuff you do not use</b> (<code>matplotlib.pyplot</code> in your case)</p>
<p><b> Edit 1 </b> Edited Code due to comment from TigerhawkT3</p>
| -1 | 2016-08-10T09:12:35Z | [
"python",
"tkinter"
] |
Remove the last empty line in CSV file | 38,857,870 | <pre><code>nf=open(Output_File,'w+')
with open(Input_File,'read') as f:
for row in f:
Current_line = str(row)
Reformated_line=str(','.join(Current_line.split('|')[1:-1]))
nf.write(Reformated_line+ "\n")
</code></pre>
<p>I'm trying to read <code>Input file</code> which is in Table Format and write it in a CSV file, but my Output contains one last empty line also. How can I remove the last empty line in CSV?</p>
| 1 | 2016-08-09T18:22:20Z | 38,857,965 | <p>Just a question of reordering things a little:</p>
<pre><code>first = True
with open(Input_File, 'r') as f, open(Output_File, 'w+') as nf:
for row in f:
Current_line = str(row)
Reformated_line=str(','.join(Current_line.split('|')[1:-1]))
if not first:
nf.write('\n')
else:
first = False
nf.write(Reformated_line)
</code></pre>
| 0 | 2016-08-09T18:29:04Z | [
"python",
"csv"
] |
Remove the last empty line in CSV file | 38,857,870 | <pre><code>nf=open(Output_File,'w+')
with open(Input_File,'read') as f:
for row in f:
Current_line = str(row)
Reformated_line=str(','.join(Current_line.split('|')[1:-1]))
nf.write(Reformated_line+ "\n")
</code></pre>
<p>I'm trying to read <code>Input file</code> which is in Table Format and write it in a CSV file, but my Output contains one last empty line also. How can I remove the last empty line in CSV?</p>
| 1 | 2016-08-09T18:22:20Z | 38,858,214 | <p>It sounds like you have an empty line in your input file. From your comments, you actually have a non-empty line that has no <code>|</code> characters in it. In either case, it is easy enough to check for an empty result line.</p>
<p>Try this:</p>
<pre><code>#UNTESTED
nf=open(Output_File,'w+')
with open(Input_File,'read') as f:
for row in f:
Current_line = str(row)
Reformated_line=str(','.join(Current_line.split('|')[1:-1]))
if Reformated_line:
nf.write(Reformated_line+ "\n")
</code></pre>
<p>Other notes:</p>
<ul>
<li>You should use <code>with</code> consistently. Open both files the same way.</li>
<li><code>str(row)</code> is a no-op. <code>row</code> is already a str.</li>
<li><code>str(','.join(...))</code> is similarly redundant.</li>
<li><code>open(..., 'read')</code> is not a valid use of the mode parameter to <code>open()</code>. You should use <code>r</code> or even omit the parameter altogether.</li>
<li>I prefer not to introduce new names when changing the format of existing data. That is, I prefer <code>row = row.split()</code> over <code>Reformatted_line = row.split()</code>.</li>
</ul>
<p>Here is a version that incorporates these and other suggestions:</p>
<pre><code>with open(Input_File) as inf, open(Output_File, 'w+') as outf:
for row in inf:
row = ','.join(row.split('|')[1:-1])
if row:
outf.write(row + "\n")
</code></pre>
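<p>An alternative that sidesteps the trailing-newline question entirely is to collect the reformatted rows first and join them with newlines; the sample rows below are hypothetical stand-ins for the table-format input:</p>

```python
rows = ["|a|b|c|", "|d|e|f|", "no pipes here"]  # hypothetical input lines

# Same reformatting rule as above: keep the fields between the outer pipes.
reformatted = [",".join(r.split("|")[1:-1]) for r in rows]
reformatted = [r for r in reformatted if r]  # drop lines that produced nothing

text = "\n".join(reformatted)  # joining never adds a trailing newline
print(text)
```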
| 1 | 2016-08-09T18:44:23Z | [
"python",
"csv"
] |
How to catch requests.get() exceptions | 38,857,883 | <p>I'm working on a web scraper for yellowpages.com, which seems to be working well overall. However, while iterating through the pagination of a long query, requests.get(url) will randomly return <code><Response [503]></code> or <code><Response [404]></code>. Occasionally, I will receive worse exceptions, such as:</p>
<blockquote>
<p>requests.exceptions.ConnectionError:
HTTPConnectionPool(host='www.yellowpages.com', port=80): Max retries
exceeded with url:
/search?search_terms=florists&geo_location_terms=FL&page=22 (Caused by
NewConnectionError(': Failed to establish a new connection:
[WinError 10053] An established connection was aborted by the software
in your host machine',))</p>
</blockquote>
<p>Using time.sleep() seems to eliminate the 503 errors, but 404s and exceptions remain issues. </p>
<p>I'm trying to figure out how to "catch" the various responses, so I can make changes (wait, change proxy, change user-agent) and try again and/or move on. Pseudocode something like this:</p>
<pre><code>If error/exception with request.get:
wait and/or change proxy and user agent
retry request.get
else:
pass
</code></pre>
<p>At this point, I can't even seem to capture an issue using:</p>
<pre><code>try:
r = requests.get(url)
except requests.exceptions.RequestException as e:
print (e)
import sys #only added here, because it's not part of my stable code below
sys.exit()
</code></pre>
<p>Full code for where I'm starting from on <a href="https://github.com/ZaxR/YP_scraper/blob/master/YP_scrape.py" rel="nofollow">github</a> and below:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import itertools
import csv
# Search criteria
search_terms = ["florists", "pharmacies"]
search_locations = ['CA', 'FL']
# Structure for Data
answer_list = []
csv_columns = ['Name', 'Phone Number', 'Street Address', 'City', 'State', 'Zip Code']
# Turns list of lists into csv file
def write_to_csv(csv_file, csv_columns, answer_list):
with open(csv_file, 'w') as csvfile:
writer = csv.writer(csvfile, lineterminator='\n')
writer.writerow(csv_columns)
writer.writerows(answer_list)
# Creates url from search criteria and current page
def url(search_term, location, page_number):
template = 'http://www.yellowpages.com/search?search_terms={search_term}&geo_location_terms={location}&page={page_number}'
return template.format(search_term=search_term, location=location, page_number=page_number)
# Finds all the contact information for a record
def find_contact_info(record):
holder_list = []
name = record.find(attrs={'class': 'business-name'})
holder_list.append(name.text if name is not None else "")
phone_number = record.find(attrs={'class': 'phones phone primary'})
holder_list.append(phone_number.text if phone_number is not None else "")
street_address = record.find(attrs={'class': 'street-address'})
holder_list.append(street_address.text if street_address is not None else "")
city = record.find(attrs={'class': 'locality'})
holder_list.append(city.text if city is not None else "")
state = record.find(attrs={'itemprop': 'addressRegion'})
holder_list.append(state.text if state is not None else "")
zip_code = record.find(attrs={'itemprop': 'postalCode'})
holder_list.append(zip_code.text if zip_code is not None else "")
return holder_list
# Main program
def main():
for search_term, search_location in itertools.product(search_terms, search_locations):
i = 0
while True:
i += 1
current_url = url(search_term, search_location, i)  # renamed so the url() helper is not shadowed
r = requests.get(current_url)
soup = BeautifulSoup(r.text, "html.parser")
main = soup.find(attrs={'class': 'search-results organic'})
page_nav = soup.find(attrs={'class': 'pagination'})
records = main.find_all(attrs={'class': 'info'})
for record in records:
answer_list.append(find_contact_info(record))
if not page_nav.find(attrs={'class': 'next ajax-page'}):
csv_file = "YP_" + search_term + "_" + search_location + ".csv"
write_to_csv(csv_file, csv_columns, answer_list) # output data to csv file
break
if __name__ == '__main__':
main()
</code></pre>
<p>Thank you in advance for taking the time to read this long post/reply :)</p>
| 0 | 2016-08-09T18:22:54Z | 38,857,939 | <p>What about something like this</p>
<pre><code>try:
req = ..
if req.status_code == 503:
pass
elif ..:
pass
else:
do something when request succeeds
except requests.exceptions.ConnectionError:
pass
</code></pre>
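<p>The wait-and-retry pseudocode in the question can be fleshed out generically. In this sketch the <code>get</code> callable is a stand-in for <code>requests.get</code> (so the control flow can be shown without network access), and the names and parameters are illustrative; in real code you would pass <code>requests.get</code> and catch <code>requests.exceptions.RequestException</code>:</p>

```python
import time

def fetch_with_retry(get, url, retries=3, backoff=1.0,
                     retry_statuses=(404, 503)):
    """Call get(url) up to `retries` times, sleeping between attempts.

    `get` is any callable returning an object with a .status_code
    attribute; swap in requests.get for real use.
    """
    last = None
    for attempt in range(retries):
        try:
            resp = get(url)
        except Exception as exc:      # stand-in for RequestException
            last = exc
        else:
            if resp.status_code not in retry_statuses:
                return resp           # success: hand the response back
            last = resp               # 404/503: remember it and retry
        time.sleep(backoff * (attempt + 1))  # simple linear backoff
    return last                        # caller inspects the final failure
```

Before sleeping you could also rotate proxies or the User-Agent header, as described above.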
| 0 | 2016-08-09T18:27:19Z | [
"python",
"exception-handling",
"web-scraping",
"python-requests",
"yellow-pages"
] |
Python - parsing configuration file - flow control | 38,857,898 | <p>I am trying to write a python 2.7 script to parse through a configuration file. The configuration file has standard settings but not all settings are populated. I've been able to extract the values for a single section, but when I added additional entries my script logic fails. I am thinking I could use the attributes (RuleName, Next) in the configuration file to break thinks up but I can't think of how to accomplish this. Below is a sample of what the configurations look like.</p>
<p><strong>Configuration File</strong></p>
<pre><code> RuleName "Rule 1"
value1 = "some value"
value2 = "some value"
value3 = "some value"
value4 = "some value"
Next
RuleName = "Rule 2"
value1 = "some value"
value2 = "some value"
value3 = "some value"
Next
RuleName = "Rule 3"
value1 = "some value"
value2 = "some value"
value3 = "some value"
value4 = "some value"
value5 = "some value"
Next
</code></pre>
<p>Here is the logic of my script. Any suggestions would be helpful. This is my first attempt to write a more complex script with Python. I am sure there are more sophisticated ways to do this, but I'd like to keep it relatively basic as I learn Python.</p>
<p>Thank You!</p>
<pre><code>for line in lines:
n = line.lstrip()
if n.find(rulesetting1) != -1:
pos = len(rulesetting1)
rulevalue1 = n[pos:]
elif n.find(rulesetting2) != -1:
pos = len(rulesetting2)
rulevalue2 = n[pos:]
elif n.find(rulesetting3) != -1:
pos = len(rulesetting3)
rulevalue3 = n[pos:]
elif n.find(rulesetting4) != -1:
pos = len(rulesetting4)
rulevalue4 = n[pos:]
elif n.find(rulesetting5) != -1:
pos = len(rulesetting5)
rulevalue5 = n[pos:]
elif n.find("Next") != -1:
Start cycle over?
</code></pre>
<p>What about searching for "Next" and then starting the cycle over. Eventually I want to write this to a CSV file, but I need to get this flow down first.</p>
| 0 | 2016-08-09T18:24:46Z | 38,858,480 | <p>This looks like a dictionary to me.</p>
<pre><code>optionsdict = dict()
curkey = None
for line in optionsfile:
if line.strip().startswith("RuleName"):
curkey = line.split("=")[1].strip() # whatever's after the =
elif line.strip() == "Next":
pass # not sure what you're doing with the Next lines...
else:
option, value = map(str.strip, line.split("="))
optionsdict.setdefault(curkey, {})[option] = value
</code></pre>
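<p>Running a variant of this against an in-memory sample shows the resulting shape. Note that the rule names and values in the file are quoted, so this sketch also strips the quotes — an assumption about the format:</p>

```python
sample = '''RuleName = "Rule 1"
value1 = "some value"
Next
RuleName = "Rule 2"
value2 = "other value"
Next'''

optionsdict = {}
curkey = None
for line in sample.splitlines():
    line = line.strip()
    if not line:
        continue
    if line.startswith("RuleName"):
        # Whatever follows the '=' becomes the current section key.
        curkey = line.split("=", 1)[1].strip().strip('"')
    elif line == "Next":
        curkey = None  # current rule block is finished
    else:
        option, value = (part.strip() for part in line.split("=", 1))
        optionsdict.setdefault(curkey, {})[option] = value.strip('"')

print(optionsdict)
```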
<p>That said, it seems like it'd be easier to use a more standard format that comes batteries included, if you're making this all up anyway.</p>
<pre class="lang-none prettyprint-override"><code># configfile.ini
[Rule 1]
value1 = "some value"
value2 = "some value"
value3 = "some value"
value4 = "some value"
[Rule 2]
value1 = "some value"
value2 = "some value"
value3 = "some value"
value4 = "some value"
[Rule 3]
value1 = "some value"
value2 = "some value"
value3 = "some value"
value4 = "some value"
value5 = "some value"
</code></pre>
<p> </p>
<pre><code># Python script
from configparser import ConfigParser
config = ConfigParser()
config.read_file("path/to/configfile.ini")
config["Rule 1"]["value1"] # "some value"
</code></pre>
<p>This uses the <a href="https://docs.python.org/2/library/configparser.html" rel="nofollow"><code>configparser</code></a> package to parse your options.</p>
| 0 | 2016-08-09T18:59:52Z | [
"python",
"python-2.7"
] |
pandas to_json - return timestamp with units of days rather than seconds | 38,857,941 | <p>I have a pandas series containing datetime objects which have been created from day-month-year strings</p>
<pre><code>series = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'])
series = pd.to_datetime(series)
print series
0 2000-03-11
1 2000-03-12
2 2000-03-13
dtype: datetime64[ns]
</code></pre>
<p>Later, after using these datetime objects, I want to convert this series into json in a format day-month-year. However, to_json returns the datetime with HH:MM:SS etc</p>
<pre><code>json = series.to_json(orient='index', date_format='iso', date_unit = 's')
print json
{"0":"2000-03-11T00:00:00Z","1":"2000-03-12T00:00:00Z","2":"2000-03-13T00:00:00Z"}
</code></pre>
<p>Is there any inbuilt and elegant way to just return the dates as so</p>
<pre><code>{"0":"2000-03-11","1":"2000-03-12","2":"2000-03-13"}
</code></pre>
<p>without the HH:MM:SS etc. The closest I have got (without converting to strings and writing a function to parse) is the date_unit argument of to_json although the largest time unit is seemingly seconds. </p>
<p>Can anyone help? Thanks</p>
| 4 | 2016-08-09T18:27:41Z | 38,858,104 | <p>This is not elegant, but you can try converting to strings before calling <code>to_json</code>:</p>
<pre><code>>>> series.apply(lambda x : x.strftime('%Y-%m-%d')).to_json()
'{"0":"2000-03-11","1":"2000-03-12","2":"2000-03-13"}'
</code></pre>
| 3 | 2016-08-09T18:37:30Z | [
"python",
"date",
"datetime",
"pandas"
] |
pandas to_json - return timestamp with units of days rather than seconds | 38,857,941 | <p>I have a pandas series containing datetime objects which have been created from day-month-year strings</p>
<pre><code>series = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'])
series = pd.to_datetime(series)
print series
0 2000-03-11
1 2000-03-12
2 2000-03-13
dtype: datetime64[ns]
</code></pre>
<p>Later, after using these datetime objects, I want to convert this series into json in a format day-month-year. However, to_json returns the datetime with HH:MM:SS etc</p>
<pre><code>json = series.to_json(orient='index', date_format='iso', date_unit = 's')
print json
{"0":"2000-03-11T00:00:00Z","1":"2000-03-12T00:00:00Z","2":"2000-03-13T00:00:00Z"}
</code></pre>
<p>Is there any inbuilt and elegant way to just return the dates as so</p>
<pre><code>{"0":"2000-03-11","1":"2000-03-12","2":"2000-03-13"}
</code></pre>
<p>without the HH:MM:SS etc. The closest I have got (without converting to strings and writing a function to parse) is the date_unit argument of to_json although the largest time unit is seemingly seconds. </p>
<p>Can anyone help? Thanks</p>
| 4 | 2016-08-09T18:27:41Z | 38,858,132 | <p>Something like this?</p>
<pre><code>In [64]: series.dt.date.astype(str).to_json()
Out[64]: '{"0":"2000-03-11","1":"2000-03-12","2":"2000-03-13"}'
</code></pre>
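<p>A related one-liner (assuming a reasonably recent pandas) formats the timestamps directly with <code>dt.strftime</code>, skipping the intermediate <code>date</code> objects:</p>

```python
import pandas as pd

# Same series as in the question: day/month strings parsed to datetimes.
series = pd.to_datetime(pd.Series(['3/11/2000', '3/12/2000', '3/13/2000']))
json_out = series.dt.strftime('%Y-%m-%d').to_json()
print(json_out)
```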
| 5 | 2016-08-09T18:38:55Z | [
"python",
"date",
"datetime",
"pandas"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,857,997 | <p>List comprehensions are awesome:</p>
<pre><code>aa = [s for s in complete]
</code></pre>
| 1 | 2016-08-09T18:30:55Z | [
"python"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,858,003 | <p>Like this (be careful with strings):</p>
<pre><code>aa.extend(complete)
</code></pre>
<p>or with list comprehension:</p>
<pre><code>aa = list(s for s in complete)
</code></pre>
<p>or if you want to copy the list you can do the following:</p>
<pre><code>aa = complete[:]
aa = complete.copy() # same, but Python 3.3+ only
aa = list(complete) # same
</code></pre>
<p>or just use '+':</p>
<pre><code>aa += complete
</code></pre>
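<p>One caveat when choosing among these: plain assignment (<code>aa = complete</code>) makes <code>aa</code> another name for the same list, while the slicing and <code>list()</code> forms make independent copies:</p>

```python
complete = [1, 2, 3]

alias = complete          # same list object, just a second name
copy_slice = complete[:]  # independent shallow copies
copy_ctor = list(complete)

complete.append(4)

print(alias)       # the alias sees the change
print(copy_slice)  # the copies do not
print(copy_ctor)
```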
| 2 | 2016-08-09T18:31:18Z | [
"python"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,858,043 | <p>As long as you just need to set <code>aa</code> equal to <code>complete</code>, just use</p>
<p><code>aa = complete</code></p>
| 2 | 2016-08-09T18:33:27Z | [
"python"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,858,050 | <p>If you want to add values to an array in one line, it depends on how the values are given. If you have another <code>list</code>, you can also use extend:</p>
<pre><code>my_list = []
my_list.extend([1,2,3,4])
</code></pre>
| 0 | 2016-08-09T18:33:39Z | [
"python"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,858,080 | <p>To extend <code>aa</code>, use the <code>extend()</code> function:</p>
<pre><code>aa.extend(s for s in complete)
</code></pre>
<p>or </p>
<pre><code>aa.extend(complete)
</code></pre>
<p>If you simply wanted to equate the two, a simple <code>=</code> is fine:</p>
<pre><code>aa = complete
</code></pre>
| 0 | 2016-08-09T18:36:00Z | [
"python"
] |
Add values to array in one line | 38,857,966 | <p>How can I write this code in one-line?</p>
<pre><code>aa = []
for s in complete:
aa.append(s)
</code></pre>
<p>I know there are several solutions. I would really appreciate if you could write them down.
Thanks! </p>
| 2 | 2016-08-09T18:29:05Z | 38,858,099 | <p>I like to do such things with a list comprehension:</p>
<pre><code>aa = [s for s in complete]
</code></pre>
<p>Though, depending on the type of <code>complete</code>, and whether or not you want to use a package like numpy, there may be a faster way, such as</p>
<pre><code>import numpy as np
aa = np.array(complete)
</code></pre>
<p>I'm sure there are many other ways as well :)</p>
| 0 | 2016-08-09T18:36:58Z | [
"python"
] |
How to Bind and Send from Google Cloud Forwarding Rule IP Address? | 38,858,086 | <p>I've followed the instructions for <a href="https://cloud.google.com/compute/docs/protocol-forwarding/">Using Protocol Forwarding</a> on the Google Cloud Platform. So I now have something like this:</p>
<pre><code>$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
x-fr-1 us-west1 104.198.?.?? TCP us-west1-a/targetInstances/x-target-instance
x-fr-2 us-west1 104.198.?.?? TCP us-west1-a/targetInstances/x-target-instance
x-fr-3 us-west1 104.198.??.??? TCP us-west1-a/targetInstances/x-target-instance
x-fr-4 us-west1 104.198.??.??? TCP us-west1-a/targetInstances/x-target-instance
x-fr-5 us-west1 104.198.?.??? TCP us-west1-a/targetInstances/x-target-instance
</code></pre>
<p>(Note: Names have been changed and question-marks have been substituted. I'm not sure it matters to keep these private but better safe than sorry.)</p>
<p>My instance "x" is in the "x-target-instance" and has five forwarding rules "x-fr-1" through "x-fr-5". I'm running nginx on "x" and I can access it from any of its 6 external IP addresses (1 for the instance + 5 forwarding rules). So far, so good.</p>
<p>I am interested now in binding a server to these external IP addresses. To explore, I tried using Python:</p>
<pre><code>import socket
import time
def serve(ip_address, port=80):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((ip_address, port))
try:
sock.listen(5)
while True:
con, _ = sock.accept()
print con.getpeername(), con.getsockname()
con.send(time.ctime())
con.close()
finally:
sock.close()
</code></pre>
<p>Now I can bind "0.0.0.0" and I get some interesting results:</p>
<pre><code>>>> serve("0.0.0.0")
('173.228.???.??', 57288) ('10.240.?.?', 80)
('173.228.???.??', 57286) ('104.198.?.??', 80)
</code></pre>
<p>When I communicate with the server on its external IP address, the "getsockname" method returns the instance's internal IP address. But when I communicate with the server on an external IP address as used by a forwarding rule, then the "getsockname" methods returns the external IP address.</p>
<p>Ok, now I bind the instance's internal IP address:</p>
<pre><code>>>> serve("10.240.?.?")
('173.228.???.??', 57295) ('10.240.?.?', 80)
</code></pre>
<p>Again I can communicate with the server on its external IP address, and the "getsockname" method returns the instance's internal IP address. That seems a bit odd.</p>
<p>Also, if I try to bind the instance's external IP address:</p>
<pre><code>>>> serve("104.198.?.??")
error: [Errno 99] Cannot assign requested address
</code></pre>
<p>Then I get an error.</p>
<p>But, if I try to bind the external IP addresses used by the forwarding rules and then make a request:</p>
<pre><code>>>> serve("104.198.??.???")
('173.228.???.??', 57313) ('104.198.??.???', 80)
</code></pre>
<p>It works.</p>
<p>Finally I look at "ifconfig":</p>
<pre><code>ens4 Link encap:Ethernet HWaddr 42:01:0a:??:??:??
inet addr:10.240.?.? Bcast:10.240.?.? Mask:255.255.255.255
inet6 addr: fe80::4001:???:????:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:37554 errors:0 dropped:0 overruns:0 frame:0
TX packets:32286 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:41201244 (41.2 MB) TX bytes:3339072 (3.3 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:9403 errors:0 dropped:0 overruns:0 frame:0
TX packets:9403 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:3155046 (3.1 MB) TX bytes:3155046 (3.1 MB)
</code></pre>
<p>And I see only two interfaces. Clearly, the abilities of Google Cloud Platform Networking have exceeded what I can remember from my Computer Networking class in college. To summarize my observations:</p>
<ol>
<li>If I want to bind on the instance's external IP address, then I bind its internal IP address.</li>
<li>A process bound to the instance's internal IP address can not differentiate the destination IP between the instance's internal or external IP addresses.</li>
<li>The single networking adapter, "ens4", is receiving packets bound for any of the instance's 6 external IP address.</li>
</ol>
<p>And here's my questions:</p>
<ol>
<li>Why can I not bind the instance's external IP address?</li>
<li>How is it that I can bind the external IP addresses used by forwarding rules when I have no associated network adapters?</li>
<li>If I want to restrict SSH access to the instance's external IP address, should I configure SSH to bind the internal IP address?</li>
<li>If I setup an HTTP proxy on one of the external IP addresses used by a forwarding rule, what will be the source IP of the proxied request?</li>
<li>Lastly, and this may be a bug, why is the forwarding rules list empty in the web interface at <a href="https://console.cloud.google.com/networking/loadbalancing/advanced/forwardingRules/list?project=xxx">https://console.cloud.google.com/networking/loadbalancing/advanced/forwardingRules/list?project=xxx</a> when I can see them with "gcloud compute forwarding-rules list"?</li>
</ol>
| 11 | 2016-08-09T18:36:20Z | 38,973,628 | <ol>
<li>it's not in the local routing table ('ip route show table local')
[ you could of course add it (e.g. 'ip address add x.x.x.x/32 dev ens4'),
but doing so wouldn't do you much good, since no packets will be
delivered to your VM using that as the destination address - see
below... ]</li>
<li>because the forwarded addresses have been added to your local routing table ('ip route show table local')</li>
<li>you could [ though note that this would restrict ssh access to either external clients targeting the external IP address, or to clients within your virtual network targeting either the external or internal IP address ]. However, as already noted - it might be more important to restrict the allowed <em>client</em> addresses (not the server address), and for that the firewall would be more effective.</li>
<li>it depends on where the destination of the proxied request goes. If it's internal to your virtual network, then it will be the VM's internal IP address, otherwise it's NAT-ed (outside of your VM) to be the VM's external IP address.</li>
<li>There are multiple tabs on that page - two of which list different classes of forwarding rule ("global forwarding rules" vs "forwarding rules"). Admittedly somewhat confusing :P</li>
</ol>
<p>One other thing that's slightly confusing - when sending packets to your VM using its external IP as the destination address, an entity outside the VM (think of it as a virtual switch / router / NAT device) automatically modifies the destination to be the internal IP before the packet arrives at the virtio driver for the virtual NIC - so there's nothing you can do to modify that behavior. Packets addressed to the IP of a forwarding rule, however, are not NAT-ed (as you've seen).</p>
<p>Hope that helps! </p>
| 4 | 2016-08-16T11:23:43Z | [
"python",
"networking",
"google-compute-engine",
"google-cloud-platform"
] |
Python asyncio performance on OS X vs Ubuntu | 38,858,144 | <p>I have some issues with Python asyncio performance on OS X. I have Macbook pro 2015 with 16gb RAM. But can't get the same performance on OS X (El Capitan) as on Ubuntu. Even given the fact that I am running Ubuntu inside VM (vagrant, 4gb RAM) with OS X host.</p>
<p>OS X benchmark with wrk:
<code>
wrk -t8 -d 10s -c 300 http://127.0.0.1:9090
Running 10s test @ http://127.0.0.1:9090
8 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 628.63us 1.45ms 16.32ms 89.46%
Req/Sec 696.05 834.65 5.51k 88.89%
19498 requests in 10.08s, 361.78KB read
Socket errors: connect 0, read 20482, write 0, timeout 0
Requests/sec: 1934.40
Transfer/sec: 35.89KB</code></p>
<p>Ubuntu:</p>
<p><code>wrk -t8 -d 10s -c 300 http://127.0.0.1:9090
Running 10s test @ http://127.0.0.1:9090
8 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 5.49ms 14.33ms 408.97ms 99.22%
Req/Sec 3.58k 1.41k 8.42k 70.91%
204333 requests in 10.06s, 3.70MB read
Socket errors: connect 0, read 3, write 977, timeout 0
Requests/sec: 20311.64
Transfer/sec: 376.88KB
</code></p>
<p>Server code: <a href="https://gist.github.com/ssbb/5f6c2c043880e0e917c3254d06c52a7e" rel="nofollow">https://gist.github.com/ssbb/5f6c2c043880e0e917c3254d06c52a7e</a></p>
<p><code>ulimit -a</code> on Ubuntu: <a href="https://gist.github.com/ssbb/e468b3ede5470da25699e4da4506b77c" rel="nofollow">https://gist.github.com/ssbb/e468b3ede5470da25699e4da4506b77c</a></p>
<p><code>ulimit -a</code> on OS X: <a href="https://gist.github.com/ssbb/f2a846975069a1d62a313790ad8d26ce" rel="nofollow">https://gist.github.com/ssbb/f2a846975069a1d62a313790ad8d26ce</a></p>
<p><code>sysctl -a</code> on OS X: <a href="https://gist.github.com/ssbb/c78d5da7ae9e3670175f643309cf9f6b" rel="nofollow">https://gist.github.com/ssbb/c78d5da7ae9e3670175f643309cf9f6b</a></p>
<p><code>sysctl -a</code> on Ubuntu: <a href="https://gist.github.com/ssbb/9a00cc3856135369b16ddc0083d2bc88" rel="nofollow">https://gist.github.com/ssbb/9a00cc3856135369b16ddc0083d2bc88</a></p>
<p>Why I have so much difference between Ubuntu/OS X. Also I tried to run this server on Arch Linux (not VM, just second OS) and have the same results as OS X.</p>
<p>Do Ubuntu have some "hacks" for TCP stack?</p>
| 1 | 2016-08-09T18:39:49Z | 38,866,429 | <p>Mac OS X has a slower network stack implementation than Linux; it's a well-known fact.</p>
<p>I don't know why Arch Linux is slower than Ubuntu on your machine. The network stack is implemented by the Linux kernel itself; distros with the same kernel version should display almost the same performance. </p>
| 0 | 2016-08-10T07:06:36Z | [
"python",
"osx",
"ubuntu",
"tcp",
"python-asyncio"
] |
Pandas - Sort by group membership numbers | 38,858,177 | <p>When faced with large numbers of groups, any graph you might make is apt to be useless due to having too many lines and an unreadable legend. In these cases, being able to find the groups that have the most and least information in them is very useful. However, while <code>x.size()</code> tells you the group sizes (after having used <code>groupby</code>), there is no way I can find to re-sort the dataframe using this information, so that you can then use a limited loop to graph only the first x groups.</p>
| 3 | 2016-08-09T18:42:13Z | 38,858,324 | <p>You can use <code>transform</code> to get the counts and sort on that column:</p>
<pre><code>df = pd.DataFrame({'A': list('aabababc'), 'B': np.arange(8)})
df
Out:
A B
0 a 0
1 a 1
2 b 2
3 a 3
4 b 4
5 a 5
6 b 6
7 c 7
</code></pre>
<hr>
<pre><code>df['counts'] = df.groupby('A').transform('count')
df
Out:
A B counts
0 a 0 4
1 a 1 4
2 b 2 3
3 a 3 4
4 b 4 3
5 a 5 4
6 b 6 3
7 c 7 1
</code></pre>
<p>Now you can sort by <code>counts</code>:</p>
<pre><code>df.sort_values('counts')
Out:
A B counts
7 c 7 1
2 b 2 3
4 b 4 3
6 b 6 3
0 a 0 4
1 a 1 4
3 a 3 4
5 a 5 4
</code></pre>
<p>In one line:</p>
<pre><code>df.assign(counts = df.groupby('A').transform('count')).sort_values('counts')
</code></pre>
| 3 | 2016-08-09T18:51:16Z | [
"python",
"pandas"
] |
Is there a way to use python 2.x and 3.x in same virtualenv? | 38,858,195 | <p>I want to use jupyter with both the versioned kernel of python in a virtualenv. How can I do that?</p>
| 0 | 2016-08-09T18:43:01Z | 38,859,614 | <p>you can use <code>tox</code> to setup multiple virtualenvs instead ... that would generally be the standard solution</p>
| -2 | 2016-08-09T20:12:48Z | [
"python",
"virtualenv",
"jupyter"
] |
Regular expression (Python) - match MAC address | 38,858,244 | <p>I have this text (from ipconfig /all)</p>
<pre><code>Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Atheros AR8151 PCI-E Gigabit Ethernet Controller (NDIS 6.20)
Physical Address. . . . . . . . . : 50-E5-49-CE-FC-EF
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::5cba:e9f2:a99f:4499%11(Preferred)
IPv4 Address. . . . . . . . . . . : 10.0.0.1(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : Saturday 06 August 2016 20:35:49
   Lease Expires . . . . . . . . . . : Tuesday 09 August 2016 21:05:49
Default Gateway . . . . . . . . . : 10.0.0.138
DHCP Server . . . . . . . . . . . : 10.0.0.138
DHCPv6 IAID . . . . . . . . . . . : 240182601
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-7A-1F-FC-50-E5-49-CE-FC-EF
DNS Servers . . . . . . . . . . . : 10.0.0.138
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter Local Area Connection* 11:
Description . . . . . . . . . . . : Juniper Networks Virtual Adapter
Physical Address. . . . . . . . . : 02-05-85-7F-EB-80
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::8dfb:6d42:97e1:2dc7%19(Preferred)
IPv4 Address. . . . . . . . . . . : 172.16.2.7(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.255
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 436340101
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-7A-1F-FC-50-E5-49-CE-FC-EF
DNS Servers . . . . . . . . . . . : 172.16.0.6
172.16.0.91
NetBIOS over Tcpip. . . . . . . . : Enabled
</code></pre>
<p>I want to match only the Physical Address (MAC)</p>
<p>From here it's will be 2:<br>
<strong>50-E5-49-CE-FC-EF</strong> and <strong>02-05-85-7F-EB-80</strong></p>
<p>This (<a href="http://stackoverflow.com/questions/4260467/what-is-a-regular-expression-for-a-mac-address">link</a>) <code>([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})</code> will not work because<br>
it will also match <strong>00-01-00-01-19-7A-1F-FC-50-E5-49-CE-FC-EF</strong><br>
So I've fixed it to this <code>([^-])([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})([^-])</code>
(forcing no <code>-</code> at the start and at the end)<br>
But now the match includes the whitespace at the beginning, and at the end I get <code>\n</code>.<br>
Here is the link: <a href="https://regex101.com/r/kJ7mC0/1" rel="nofollow">https://regex101.com/r/kJ7mC0/1</a> </p>
<p>I need an <em>elegant</em> <strong>regex</strong> to solve it. </p>
| 1 | 2016-08-09T18:45:55Z | 38,858,490 | <p>You get the non-<code>-</code> symbols on both sides because the negated character class <code>[^-]</code> matches and consumes the characters. To avoid getting these characters in the match value, you can use lookarounds:</p>
<pre><code>p = re.compile(r'(?<!-)(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}(?!-)', re.IGNORECASE)
^^^^^^ ^^^^
</code></pre>
<p>The <code>(?<!-)</code> negative lookbehind makes sure there is no <code>-</code> before the value you need to extract, and <code>(?!-)</code> is a negative lookahead that fails the match if there is a <code>-</code> after this value.</p>
<p>If the <code>:</code> delimiter is not expected, replace <code>[:-]</code> with <code>-</code>. Use it with <code>re.findall</code>.</p>
<p>Also, all the <code>(...)</code> should be turned to <code>(?:...)</code> non-capturing groups so as not to "spoil" the match results.</p>
<p>See the <a href="http://ideone.com/Z1RCV5" rel="nofollow">Python demo</a></p>
<p>An alternative can be a regex with 1 capturing group capturing what we need, and matching what we do not need:</p>
<pre><code>p = re.compile(r'(?:^|[^-])((?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2})(?:$|[^-])', re.IGNORECASE)
</code></pre>
<p>It does not look that elegant, but will work the same way as <code>(?:^|[^-])</code> non-capturing group is not included in the <code>re.findall</code> results, and matches either the start of string or a symbol other than <code>-</code>. <code>(?:$|[^-])</code> matches the end of string or a symbol other than <code>-</code>. </p>
<p>Also, to shorten the pattern, you may replace <code>a-fA-F</code> with just <code>a-f</code> in the character classes and use the <code>re.IGNORECASE</code> or <code>re.I</code> flag.</p>
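For a quick sanity check, here is the lookaround version run against a hypothetical two-line sample; only the standalone MAC survives, while the long DUID is skipped:

```python
import re

# two-line sample modelled on the ipconfig output in the question
text = ("Physical Address. . . : 50-E5-49-CE-FC-EF\n"
        "DHCPv6 Client DUID. . : 00-01-00-01-19-7A-1F-FC-50-E5-49-CE-FC-EF")
p = re.compile(r'(?<!-)(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}(?!-)', re.IGNORECASE)
print(p.findall(text))  # ['50-E5-49-CE-FC-EF']
```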
| 1 | 2016-08-09T19:00:25Z | [
"python",
"regex",
"mac-address",
"regular-language"
] |
Remote shell using python | 38,858,259 | <p>I want to do a simple remote shell. I have used sockets to communicate and it works on the LAN. Now my friend is trying to connect to my server script from his client, but he cannot do it.
I have opened port 4333 publicly and mapped it to private port 10001 in my router.</p>
<p>By the way I've replaced the IP with XX.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
# Client's script
import socket
import sys
import os
# I create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('XX.XXX.XX.XX', 4333)
sock.connect(server_address)
datos = sock.recv(30)
comando = str(datos)
os.system(comando)
</code></pre>
<p>=========</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
# Server's script
import socket
import sys
# I create the TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
direc = socket.gethostbyname(socket.gethostname())
# Socket link
server_address = (direc, 10001)
connection = sock.bind(server_address)
sock.listen(1)
coman = ''
while coman != 'salir':
# Awaiting conecction
print "Esperando conexion..."
connection, client_address = sock.accept()
coman = raw_input("Introduce comando: ")
connection.sendall(coman)
connection.close()
</code></pre>
| 0 | 2016-08-09T18:47:21Z | 38,858,563 | <p>What is the value of <code>direc</code>? It should be a connectable IP address for your server to work, i.e. <code>127.0.0.1</code> doesn't work for your friends, even they are under same LAN with your machine.</p>
<p><code>sock.bind()</code> means that the socket will only listen connections to <code>direc</code>:<code>10001</code>. You can use <code>0.0.0.0</code> to represent "the whole internet".</p>
| 0 | 2016-08-09T19:05:10Z | [
"python",
"linux",
"shell",
"sockets",
"remote-access"
] |
JSON to DataFrames in Python3.X | 38,858,261 | <p>I have a JSON data as below in list. Each line is a independent dictionary</p>
<pre class="lang-py prettyprint-override"><code>["{'asin': '0001048791', 'salesRank': {'Books': 6334800}, 'imUrl': 'http://ecx.images-amazon.com/images/I/51MKP0T4DBL.jpg', 'categories': [['Books']], 'title': 'The Crucible: Performed by Stuart Pankin, Jerome Dempsey &amp; Cast'}\n",
"{'asin': '0000143561', 'categories': [['Movies & TV', 'Movies']], 'description': '3Pack DVD set - Italian Classics, Parties and Holidays.', 'title': 'Everyday Italian (with Giada de Laurentiis), Volume 1 (3 Pack): Italian Classics, Parties, Holidays', 'price': 12.99, 'salesRank': {'Movies & TV': 376041}, 'imUrl': 'http://g-ecx.images-amazon.com/images/G/01/x-site/icons/no-img-sm._CB192198896_.gif', *'related'*: {'also_viewed': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC', 'B002I5GNVU', 'B000RBU4BM'], 'buy_after_viewing': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC']}}\n"]
</code></pre>
<p>If you look at the data carefully, you can observe the following:</p>
<ol>
<li>The key <code>related</code> / <code>also_bought</code> / <code>also_viewed</code> is intermittently available</li>
<li>it has <code>\n</code> after each pair of <code>{}</code>.</li>
</ol>
<p>Below are the maximum columns that this data can contain.
<a href="http://i.stack.imgur.com/HF5Ip.png" rel="nofollow">Max columns that a single dict in each line of the file can contain</a> </p>
<p>My ultimate goal is to move the columns listed above into a data frame, and I am not sure whether I can do it or not.</p>
<p>Kindly help!</p>
| 0 | 2016-08-09T18:47:33Z | 38,868,751 | <h2>Your Data is not JSON </h2>
<p>OK - There are several issues with what you are attempting to do. First, just copying and pasting your list into an interactive interpreter session:</p>
<pre><code>>>> data = ["{'asin': '0001048791', 'salesRank': {'Books': 6334800}, 'imUrl': 'http://ecx.images-amazon.com/images/I/51MKP0T4DBL.jpg', 'categories': [['Books']], 'title': 'The Crucible: Performed by Stuart Pankin, Jerome Dempsey &amp; Cast'}\n",
... "{'asin': '0000143561', 'categories': [['Movies & TV', 'Movies']], 'description': '3Pack DVD set - Italian Classics, Parties and Holidays.', 'title': 'Everyday Italian (with Giada de Laurentiis), Volume 1 (3 Pack): Italian Classics, Parties, Holidays', 'price': 12.99, 'salesRank': {'Movies & TV': 376041}, 'imUrl': 'http://g-ecx.images-amazon.com/images/G/01/x-site/icons/no-img-sm._CB192198896_.gif', *'related'*: {'also_viewed': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC', 'B002I5GNVU', 'B000RBU4BM'], 'buy_after_viewing': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC']}}\n"]
>>> [type(x) for x in data]
[<class 'str'>, <class 'str'>]
</code></pre>
<p>You simply have a list with two strings. These are not dictionaries, and more importantly, they are not valid JSON. Notice that the strings have single-quotes and that there are keys like this:</p>
<pre><code>*'related'*: {'also_viewed': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC', 'B002I5GNVU', 'B000RBU4BM'], 'buy_after_viewing': ['B0036FO6SI', 'B000KL8ODE', '000014357X', 'B0037718RC']}
</code></pre>
<p>With two asterisks (*) surrounding it <em>outside</em> the single-quotes. This is certainly not valid JSON. At first, my suspicion was that you read the data in using code like the following:</p>
<pre><code>with open('data.json','r') as f:
data = f.readlines()
</code></pre>
<p>That would imply that your .json file was invalid to begin with. However, the screenshot you posted shows valid json code, so whatever processing you did turned a valid json file into strings which are invalid as JSON. For the example you gave, we can do a quick hack to turn these into valid Python objects by using the <code>json</code> module function <code>loads</code> which deserializes a valid JSON string into Python objects:</p>
<pre><code>>>> import json
>>> valid_data = [json.loads(s.replace("'",'"').replace('*','')) for s in data]
</code></pre>
<p>Notice that for each string in my original list, I replace single-quotes (<code>'</code>) with double-quotes (<code>"</code>) and remove asterisks (<code>*</code>). Now, this is what you want:</p>
<pre><code>>>> from pprint import pprint
>>> pprint(valid_data)
[{'asin': '0001048791',
'categories': [['Books']],
'imUrl': 'http://ecx.images-amazon.com/images/I/51MKP0T4DBL.jpg',
'salesRank': {'Books': 6334800},
'title': 'The Crucible: Performed by Stuart Pankin, Jerome Dempsey &amp; '
'Cast'},
{'asin': '0000143561',
'categories': [['Movies & TV', 'Movies']],
'description': '3Pack DVD set - Italian Classics, Parties and Holidays.',
'imUrl': 'http://g-ecx.images-amazon.com/images/G/01/x-site/icons/no-img-sm._CB192198896_.gif',
'price': 12.99,
'related': {'also_viewed': ['B0036FO6SI',
'B000KL8ODE',
'000014357X',
'B0037718RC',
'B002I5GNVU',
'B000RBU4BM'],
'buy_after_viewing': ['B0036FO6SI',
'B000KL8ODE',
'000014357X',
'B0037718RC']},
'salesRank': {'Movies & TV': 376041},
'title': 'Everyday Italian (with Giada de Laurentiis), Volume 1 (3 Pack): '
'Italian Classics, Parties, Holidays'}]
</code></pre>
<p>Now, you can simply use the <code>DataFrame</code> constructor from <code>pandas</code>:</p>
<pre><code>>>> df = pd.DataFrame(valid_data)
>>> df
asin categories \
0 0001048791 [[Books]]
1 0000143561 [[Movies & TV, Movies]]
description \
0 NaN
1 3Pack DVD set - Italian Classics, Parties and ...
imUrl price \
0 http://ecx.images-amazon.com/images/I/51MKP0T4... NaN
1 http://g-ecx.images-amazon.com/images/G/01/x-s... 12.99
related salesRank \
0 NaN {'Books': 6334800}
1 {'also_viewed': ['B0036FO6SI', 'B000KL8ODE', '... {'Movies & TV': 376041}
title
0 The Crucible: Performed by Stuart Pankin, Jero...
1 Everyday Italian (with Giada de Laurentiis), V...
>>> data[0]
"{'asin': '0001048791', 'salesRank': {'Books': 6334800}, 'imUrl': 'http://ecx.images-amazon.com/images/I/51MKP0T4DBL.jpg', 'categories': [['Books']], 'title': 'The Crucible: Performed by Stuart Pankin, Jerome Dempsey &amp; Cast'}\n"
>>> df['asin']
0 0001048791
1 0000143561
Name: asin, dtype: object
>>> df.columns
Index(['asin', 'categories', 'description', 'imUrl', 'price', 'related',
'salesRank', 'title'],
dtype='object')
</code></pre>
<p>That worked, and as you can see, it dealt with the intermittent values for certain columns by filling in with <code>NaN</code>. Some of these columns, like <code>'related'</code> simply store dictionaries. This might not be what you want, really, but from this point you must decide how exactly you want to clean up or reshape this data. If you run into issues there, it might be worth starting another question.</p>
<h2>Finally</h2>
<p>Instead of hacking away at the data after it has been turned into these strings, you might just want to do the following (let's say the valid file you are working with is called 'valid_json.json'):</p>
<pre><code>>>> with open('valid_json.json') as f:
... data = pd.read_json(f)
...
>>> data
asin categories \
0 1048791 [[Books]]
1 143561 [[Movies & TV, Movies]]
description \
0 NaN
1 3Pack DVD set - Italian Classics, Parties and ...
imUrl price \
0 http://ecx.images-amazon.com/images/I/51MKP0T4... NaN
1 http://g-ecx.images-amazon.com/images/G/01/x-s... 12.99
related salesRank \
0 NaN {'Books': 6334800}
1 {'also_viewed': ['B0036FO6SI', 'B000KL8ODE', '... {'Movies & TV': 376041}
title
0 The Crucible: Performed by Stuart Pankin, Jero...
1 Everyday Italian (with Giada de Laurentiis), V...
>>> type(data)
<class 'pandas.core.frame.DataFrame'>
</code></pre>
| 0 | 2016-08-10T09:03:19Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
How can I search for a specific string that varies by a single character in beautiful soup python? | 38,858,276 | <p>I want to search for all variations of a string that is completely the same other than a single character. Here is an example of the string:</p>
<p>AccordionPanelTab cond1_left
AccordionPanelTab cond2_left
AccordionPanelTab cond3_left</p>
<p>I would normally just iterate through a for loop however this string is the class I'm trying to use in beautiful soup. My code looks like this</p>
<pre><code>sessionSwell = dryscrape.Session()
sessionSwell.visit(swellURL)
responseSwell = sessionSwell.body()
soupSwell = BeautifulSoup(responseSwell, "lxml")
swellDayData = soupSwell.findAll("div", {"class": "AccordionPanelTab cond1_left"})
</code></pre>
<p>I was thinking there is a command that I could put in the string in the place of the 1 that would tell the computer I do not care what is in this place. I'm sure this is a simple fix but I researched far and wide and couldn't figure it out. </p>
<p>Thanks</p>
| 1 | 2016-08-09T18:49:06Z | 38,859,179 | <p>As per my understanding, you needed a list of all the strings that needs to match your conditions.You can use regular expression for that. </p>
<p><a href="https://docs.python.org/3/library/re.html#re.findall" rel="nofollow">re.findall(pattern, string)</a> returns a list of matching strings.</p>
<p>You can do that by </p>
| -1 | 2016-08-09T19:43:09Z | [
"python",
"beautifulsoup"
] |
Reference instance method outside class definition | 38,858,279 | <p>I'm trying to pass a method as an argument outside of the definition of a class. However, since it's not defined in that scope, it doesn't work. Here's what I'd like to do:</p>
<pre><code>def applyMethod(obj, method):
obj.method()
class MyClass():
def myMethod(self):
print 1
a = MyClass()
#works as expected
a.myMethod()
#"NameError: name 'myMethod' is not defined"
applyMethod(a, myMethod)
</code></pre>
| 2 | 2016-08-09T18:49:13Z | 38,858,367 | <p>myMethod is only defined in the namespace of MyClass. Your code could look like this:</p>
<pre><code>def applyMethod(obj, method):
method(obj)
class MyClass():
def myMethod(self):
print 1
a = MyClass()
a.myMethod()
applyMethod(a, MyClass.myMethod)
</code></pre>
<p>Now you're referencing <code>myMethod</code> from the namespace it exists in, and calling the equivalent of <code>obj.myMethod()</code> from the <code>applyMethod</code> function.</p>
<p>That's all you need - the <code>instance.method()</code> is just syntactic sugar for <code>ClassName.method(instance)</code>, so I just rewrote the <code>applyMethod</code> function to work without the syntactic sugar, i.e. be passed in as the raw <code>MyClass.myMethod</code>, then gave it an instance of <code>MyClass</code> as its first argument. This is what the <code>a.myMethod()</code> syntax is doing in the backend.</p>
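Another option (a common alternative, not required by the question) is to pass the method <em>name</em> and look it up on the instance with <code>getattr</code>:

```python
def apply_method(obj, method_name):
    # look the bound method up by name on the instance, then call it
    return getattr(obj, method_name)()

class MyClass(object):
    def myMethod(self):
        return 1

print(apply_method(MyClass(), 'myMethod'))
```

This avoids naming the class at the call site, which matters if <code>obj</code> might be an instance of a subclass that overrides the method.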
| 2 | 2016-08-09T18:53:27Z | [
"python",
"function",
"methods",
"parameter-passing"
] |
Reading the files from a log file list | 38,858,317 | <p>I have the following log file called <code>log.txt</code>, with all the file names to be considered from a folder:</p>
<pre><code>log.txt
C:\data\01.log
C:\data\02.log
C:\data\03.log
C:\data\04.log
</code></pre>
<p>My task is to read these files one after another from log.txt using a for loop.</p>
<pre><code>with open("C:\data\log.txt",'r') as f:
logs=f.read()
print logs
for line in logs:
line = myfile.readline().replace('\n', '')
with open(line, 'r') as myfile:
lines = [line.rstrip('\n') for line in myfile.readlines()]
</code></pre>
<p>I am getting this error:</p>
<blockquote>
<p>IOError: [Errno 2] No such file or directory:</p>
</blockquote>
| -1 | 2016-08-09T18:50:57Z | 38,860,543 | <p>What is the error you are getting? </p>
<p>Is it "IOError: [Errno 2] No such file or directory:"? </p>
<p>This error means that the directory C:\data\ does not exist. Are you sure this folder exists? Also if it does exist, is the logs.txt file in that directory? </p>
<p>I personally do not have a C:\data directory, so unless you created it, you have the address of the wrong directory. </p>
| 0 | 2016-08-09T21:14:42Z | [
"python",
"file-io"
] |
Python Client to nodeJS Server with Socket.IO | 38,858,352 | <p>I'm trying to send values from my raspberry pi (in python 2.7.9) to my nodeJS server with socket.io.</p>
<p>My goal is to send many values continuously from my Pi over a websocket connection to my node server (local), which should take the values and show them on the index.html (for other clients, like a web chat where only the Raspberry sends values).</p>
<p>I tried everything, but I can't complete a handshake and send data. When I open "<a href="http://IP_ADDRESS:8080">http://IP_ADDRESS:8080</a>" in my browser I see a connection, but not with my Python code.</p> <p>Please, I need some help...</p>
<p><strong>server.js</strong></p>
<pre><code>var express = require('express')
, app = express()
, server = require('http').createServer(app)
, io = require('socket.io').listen(server)
, conf = require('./config.json');
// Webserver
server.listen(conf.port);
app.configure(function(){
app.use(express.static(__dirname + '/public'));
});
app.get('/', function (req, res) {
res.sendfile(__dirname + '/public/index.html');
});
// Websocket
io.sockets.on('connection', function (socket) {
//Here I want get the data
io.sockets.on('rasp_param', function (data){
console.log(data);
});
});
// Server Details
console.log('Ther server runs on http://127.0.0.1:' + conf.port + '/');
</code></pre>
<p>my python <strong>websocket</strong>-code in which I just want send values</p>
<pre><code>#!/usr/bin/env python
#
from websocket import create_connection
ws = create_connection("ws://IP_ADDRESS:8080/")
ws.send("Some value")
ws.close();
</code></pre>
| 0 | 2016-08-09T18:52:24Z | 38,858,405 | <p>Socket.io communications aren't plain websockets. You probably need an implementation of the socket.io client on python, to make sure that the messages you are sending are compatible with the socket.io protocol. Something like <a href="https://pypi.python.org/pypi/socketIO-client" rel="nofollow">socketIO-client</a>, maybe.</p>
| 0 | 2016-08-09T18:55:35Z | [
"python",
"node.js",
"websocket",
"socket.io"
] |
How to count values in a certain range in a Numpy array? (Does not work) | 38,858,363 | <p>I have already checked this thread <a href="http://stackoverflow.com/questions/9560207/how-to-count-values-in-a-certain-range-in-a-numpy-array">How to count values in a certain range in a Numpy array?</a> but their answer does not seem to work.</p>
<p>I have a numpy array of 2000 floats called data:</p>
<pre><code>print(type(data)) --> <type 'list'>
print(type(data[0])) --> <type 'numpy.float64'>
</code></pre>
<p>And I have 2 variables to form a range, minV and maxV:</p>
<pre><code>print(type(minV)) --> <type 'float'>
print(type(maxV)) --> <type 'float'>
</code></pre>
<p>If I try the solution given in the link mentioned above, I receive this exception:</p>
<pre><code>((minV < data) & (data < maxV)).sum()
AttributeError: 'bool' object has no attribute 'sum'
</code></pre>
<p>And indeed, that expression is a boolean:</p>
<pre><code>print(type( (minV < data) & (data < minV) ) ) --> <type 'bool'>
print( ( (minV < data) & (data < minV) ) ) --> True
</code></pre>
<p>The python version I am using is Python 2.7.3 -- EPD 7.3-2 (64-bit)
Numpy version is 1.6.1</p>
<p>System is Linux (Although I ignore if that is important).</p>
<p>Thanks.</p>
| -1 | 2016-08-09T18:53:10Z | 38,858,460 | <p>It seems that <code>data</code> is not an array. Please check it. The suggested solution does work with arrays. Perhaps at some point <code>data</code> became a number or something.</p>
| 0 | 2016-08-09T18:59:09Z | [
"python",
"arrays",
"numpy"
] |
How to count values in a certain range in a Numpy array? (Does not work) | 38,858,363 | <p>I have already checked this thread <a href="http://stackoverflow.com/questions/9560207/how-to-count-values-in-a-certain-range-in-a-numpy-array">How to count values in a certain range in a Numpy array?</a> but their answer does not seem to work.</p>
<p>I have a numpy array of 2000 floats called data:</p>
<pre><code>print(type(data)) --> <type 'list'>
print(type(data[0])) --> <type 'numpy.float64'>
</code></pre>
<p>And I have 2 variables to form a range, minV and maxV:</p>
<pre><code>print(type(minV)) --> <type 'float'>
print(type(maxV)) --> <type 'float'>
</code></pre>
<p>If I try the solution given in the link mentioned above, I receive this exception:</p>
<pre><code>((minV < data) & (data < maxV)).sum()
AttributeError: 'bool' object has no attribute 'sum'
</code></pre>
<p>And indeed, that expression is a boolean:</p>
<pre><code>print(type( (minV < data) & (data < minV) ) ) --> <type 'bool'>
print( ( (minV < data) & (data < minV) ) ) --> True
</code></pre>
<p>The python version I am using is Python 2.7.3 -- EPD 7.3-2 (64-bit)
Numpy version is 1.6.1</p>
<p>System is Linux (although I don't know if that matters).</p>
<p>Thanks.</p>
| -1 | 2016-08-09T18:53:10Z | 38,858,610 | <p>I suspect you are using python 2 because comparing a list and a number doesn't raise a <code>TypeError</code> in your case. </p>
<p>But in order to use element-wise comparison (<code><</code>, <code>></code>, <code>&</code>) you need to convert your list to a numpy array:</p>
<pre><code>import numpy as np
data = np.array(data)
((minV < data) & (data < maxV)).sum()
</code></pre>
<p>should work. For example:</p>
<pre><code>data = list(range(1000))
minV = 100
maxV = 500
data = np.array(data)
((minV < data) & (data < maxV)).sum() # returns 399
</code></pre>
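<p>The same fix can be condensed into a self-contained check (Python 3 sketch; <code>np.count_nonzero</code> counts the True entries of the mask, equivalently to <code>.sum()</code>):</p>

```python
import numpy as np

# A plain Python list of floats, as in the question, converted to an ndarray
data = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
minV, maxV = 1.0, 4.0

# Element-wise comparisons on an ndarray yield a boolean array, not one bool
mask = (minV < data) & (data < maxV)
print(np.count_nonzero(mask))  # 3  (1.5, 2.5, 3.5 fall in the open interval)
```

<p>If the comparison still returns a single <code>bool</code>, the operand was never converted — which is exactly the symptom in the question.</p>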
| 1 | 2016-08-09T19:07:49Z | [
"python",
"arrays",
"numpy"
] |
AttributeError: 'Cycler' object has no attribute 'change_key' | 38,858,407 | <p>I'm trying to <code>import matplotlib</code> on Ubuntu. I reinstalled matplotlib from source because I couldn't use the <code>TkAgg</code> backend. Now I'm facing a new problem which I cannot solve and for which I can't find a solution anywhere. I'm using Python 3.5.
I get this error when I run a simple import:</p>
<pre><code>Traceback (most recent call last):
File "plot_test.py", line 17, in <module>
import matplotlib
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/__init__.py", line 1174, in <module>
rcParams = rc_params()
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/__init__.py", line 1017, in rc_params
return rc_params_from_file(fname, fail_on_error)
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/__init__.py", line 1149, in rc_params_from_file
config = RcParams([(key, default) for key, (default, _) in iter_params
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/__init__.py", line 901, in __init__
self[k] = v
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/__init__.py", line 918, in __setitem__
cval = self.validate[key](val)
File "/usr/local/lib/python3.5/site-packages/matplotlib-2.0.0b3+1955.g888bf17-py3.5-linux-x86_64.egg/matplotlib/rcsetup.py", line 844, in validate_cycler
cycler_inst.change_key(prop, norm_prop)
AttributeError: 'Cycler' object has no attribute 'change_key'
</code></pre>
<p>I think it has maybe something to do with cycler import from rcsetup.py because there is a comment which says that:</p>
<pre><code># Don't let the original cycler collide with our validating cycler
</code></pre>
<p>So the original whatever this is, is colliding with their cycler?</p>
<p>How can I fix this? Any suggestions? Thanks!</p>
| -1 | 2016-08-09T18:55:37Z | 38,858,585 | <p>Just checked the version of cycler and it was outdated. Just update cycler with</p>
<p><code>sudo python3 -m pip install --upgrade cycler</code>.</p>
| 0 | 2016-08-09T19:06:28Z | [
"python",
"python-3.x",
"matplotlib"
] |
Scraping a website with BeautifulSoup | 38,858,466 | <p>I'm trying to scrape a website with BeautifulSoup. More specifically I'm trying to get the string from a following tag:</p>
<pre><code><td class="Fz(s) Fw(500) Ta(end)" data-reactid=".17c0h26fqwq.1.$0.0.0.3.1.$main-0-Quote-Proxy.$main 0-Quote.2.0.0.0.1.0.0:$VALUATION_MEASURES.0.1.0.$MARKET_CAP_INTRADAY.1">4.39B</td>
</code></pre>
<p>However, when I try to look for the attrs of all td tags, BeautifulSoup can't find the one I want. This is the code:</p>
<pre><code>from urllib.request import urlopen
source_code = urlopen('http://finance.yahoo.com/quote/IONS/key-statistics?p=IONS').read()
from bs4 import BeautifulSoup
yahoo_finance = BeautifulSoup(source_code, 'html.parser')
tds = yahoo_finance.find_all('td')
for td in tds:
print(td.attrs)
</code></pre>
<p>This is the output:</p>
<pre><code>{'class': ['W(100%)', 'Va(t)', 'Px(0)'], 'data-reactid': '.odbtogw33w.0.0.$uh.2.0.1.0.1.0.0.0'}
{'class': ['Va(t)', 'Tren(os)', 'W(10%)', 'Whs(nw)', 'Px(0)', 'Bdcl(s)'], 'data-reactid': '.odbtogw33w.0.0.$uh.2.0.1.0.1.0.0.1'}
</code></pre>
<p>So, it doesn't find 'class':['Fz(s)', 'Fw(500)', 'Ta(end)'] </p>
<p>Does anyone have an idea why?</p>
<p>Goran</p>
| 1 | 2016-08-09T18:59:33Z | 38,859,825 | <p>So this is additional code that I wrote, now I can nicely save dynamically generated content and get the tag that I want with BeautifulSoup:</p>
<pre><code>from contextlib import closing
from bs4 import BeautifulSoup
from selenium.webdriver import Firefox
from selenium.webdriver.support.ui import WebDriverWait
with closing(Firefox()) as browser:
browser.get('https://finance.yahoo.com/quote/IONS?p=IONS')
button = browser.find_element_by_link_text('Statistics')
button.click()
#WebDriverWait(browser, timeout=10).until(
#lambda x: x.find_element_by_class_name('Fz(s) Fw(500) Ta(end)'))
page_source = browser.page_source
print(page_source)
yahoo_finance = BeautifulSoup(page_source, 'html.parser')
</code></pre>
<p>@nephtes @Padraic Cunningham thanks for hints.</p>
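<p>For completeness, once <code>page_source</code> holds the rendered DOM, the React-generated cell from the question can be matched by any one of its classes. A sketch against a static snippet (the class names are copied from the question's HTML):</p>

```python
from bs4 import BeautifulSoup

# Static stand-in for the Selenium-rendered page source
html = '<td class="Fz(s) Fw(500) Ta(end)" data-reactid="x">4.39B</td>'
soup = BeautifulSoup(html, "html.parser")

# class_ matches when any class in the attribute's class list equals the value
cell = soup.find("td", class_="Ta(end)")
print(cell.get_text())  # 4.39B
```

<p>This is why scraping the static HTML failed: the cell simply isn't in the server response — only in the JavaScript-rendered DOM.</p>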
| 0 | 2016-08-09T20:27:05Z | [
"python",
"beautifulsoup",
"screen-scraping"
] |
Scraping a website with BeautifulSoup | 38,858,466 | <p>I'm trying to scrape a website with BeautifulSoup. More specifically I'm trying to get the string from a following tag:</p>
<pre><code><td class="Fz(s) Fw(500) Ta(end)" data-reactid=".17c0h26fqwq.1.$0.0.0.3.1.$main-0-Quote-Proxy.$main 0-Quote.2.0.0.0.1.0.0:$VALUATION_MEASURES.0.1.0.$MARKET_CAP_INTRADAY.1">4.39B</td>
</code></pre>
<p>However, when I try to look for the attrs of all td tags, BeautifulSoup can't find the one I want. This is the code:</p>
<pre><code>from urllib.request import urlopen
source_code = urlopen('http://finance.yahoo.com/quote/IONS/key-statistics?p=IONS').read()
from bs4 import BeautifulSoup
yahoo_finance = BeautifulSoup(source_code, 'html.parser')
tds = yahoo_finance.find_all('td')
for td in tds:
print(td.attrs)
</code></pre>
<p>This is the output:</p>
<pre><code>{'class': ['W(100%)', 'Va(t)', 'Px(0)'], 'data-reactid': '.odbtogw33w.0.0.$uh.2.0.1.0.1.0.0.0'}
{'class': ['Va(t)', 'Tren(os)', 'W(10%)', 'Whs(nw)', 'Px(0)', 'Bdcl(s)'], 'data-reactid': '.odbtogw33w.0.0.$uh.2.0.1.0.1.0.0.1'}
</code></pre>
<p>So, it doesn't find 'class':['Fz(s)', 'Fw(500)', 'Ta(end)'] </p>
<p>Does anyone have an idea why?</p>
<p>Goran</p>
| 1 | 2016-08-09T18:59:33Z | 38,859,891 | <p>You can get the data just using requests, the content is generated from an ajax get to <em><a href="https://query1.finance.yahoo.com/v10/finance/quoteSummary/IONS" rel="nofollow">https://query1.finance.yahoo.com/v10/finance/quoteSummary/IONS</a></em>:</p>
<pre><code>from pprint import pprint as pp
import requests
params = {"formatted": "true", "lang": "en-US", "region": "US",
"modules": "defaultKeyStatistics,financialData,calendarEvents", "corsDomain": "finance.yahoo.com"}
url = "http://finance.yahoo.com/quote/IONS/key-statistics?p=IONS"
ajax = "https://query1.finance.yahoo.com/v10/finance/quoteSummary/IONS"
with requests.Session() as s:
cont = s.get(url).content  # prime the session (cookies) before the ajax call
data = s.get(ajax, params=params).json()
pp(data[u'quoteSummary']["result"])
</code></pre>
<p>That gives you:</p>
<pre><code>[{u'calendarEvents': {u'dividendDate': {},
u'earnings': {u'earningsAverage': {u'fmt': u'-0.53',
u'raw': -0.53},
u'earningsDate': [{u'fmt': u'2016-08-09',
u'raw': 1470700800}],
u'earningsHigh': {u'fmt': u'-0.39',
u'raw': -0.39},
u'earningsLow': {u'fmt': u'-0.75',
u'raw': -0.75},
u'revenueAverage': {u'fmt': u'37.69M',
u'longFmt': u'37,690,000',
u'raw': 37690000},
u'revenueHigh': {u'fmt': u'56M',
u'longFmt': u'56,000,000',
u'raw': 56000000},
u'revenueLow': {u'fmt': u'25.2M',
u'longFmt': u'25,200,000',
u'raw': 25200000}},
u'exDividendDate': {},
u'maxAge': 1},
u'defaultKeyStatistics': {u'52WeekChange': {u'fmt': u'\u221e%',
u'raw': u'Infinity'},
u'SandP52WeekChange': {u'fmt': u'3.65%',
u'raw': 0.03645599},
u'annualHoldingsTurnover': {},
u'annualReportExpenseRatio': {},
u'beta': {u'fmt': u'2.35', u'raw': 2.35046},
u'beta3Year': {},
u'bookValue': {u'fmt': u'1.31', u'raw': 1.31},
u'category': None,
u'earningsQuarterlyGrowth': {},
u'enterpriseToEbitda': {u'fmt': u'-37.62',
u'raw': -37.618},
u'enterpriseToRevenue': {u'fmt': u'15.86',
u'raw': 15.864},
u'enterpriseValue': {u'fmt': u'4.09B',
u'longFmt': u'4,092,714,240',
u'raw': 4092714240},
u'fiveYearAverageReturn': {},
u'floatShares': {u'fmt': u'119.83M',
u'longFmt': u'119,833,635',
u'raw': 119833635},
u'forwardEps': {u'fmt': u'-1.14', u'raw': -1.14},
u'forwardPE': {u'fmt': u'-31.87',
u'raw': -31.868423},
u'fundFamily': None,
u'fundInceptionDate': {},
u'heldPercentInsiders': {},
u'heldPercentInstitutions': {},
u'lastCapGain': {},
u'lastDividendValue': {},
u'lastFiscalYearEnd': {u'fmt': u'2015-12-31',
u'raw': 1451520000},
u'lastSplitDate': {},
u'lastSplitFactor': None,
u'legalType': None,
u'maxAge': 1,
u'morningStarOverallRating': {},
u'morningStarRiskRating': {},
u'mostRecentQuarter': {u'fmt': u'2016-03-31',
u'raw': 1459382400},
u'netIncomeToCommon': {u'fmt': u'-134.48M',
u'longFmt': u'-134,478,000',
u'raw': -134478000},
u'nextFiscalYearEnd': {u'fmt': u'2017-12-31',
u'raw': 1514678400},
u'pegRatio': {u'fmt': u'-0.76', u'raw': -0.76},
u'priceToBook': {u'fmt': u'27.73',
u'raw': 27.732826},
u'priceToSalesTrailing12Months': {},
u'profitMargins': {u'fmt': u'-52.12%',
u'raw': -0.52124},
u'revenueQuarterlyGrowth': {},
u'sharesOutstanding': {u'fmt': u'120.78M',
u'longFmt': u'120,783,000',
u'raw': 120783000},
u'sharesShort': {u'fmt': u'13.89M',
u'longFmt': u'13,890,400',
u'raw': 13890400},
u'sharesShortPriorMonth': {u'fmt': u'13.03M',
u'longFmt': u'13,032,400',
u'raw': 13032400},
u'shortPercentOfFloat': {u'fmt': u'13.66%',
u'raw': 0.13664},
u'shortRatio': {u'fmt': u'6.66', u'raw': 6.66},
u'threeYearAverageReturn': {},
u'totalAssets': {},
u'trailingEps': {u'fmt': u'-1.12',
u'raw': -1.119},
u'yield': {},
u'ytdReturn': {}},
u'financialData': {u'currentPrice': {u'fmt': u'36.33', u'raw': 36.33},
u'currentRatio': {u'fmt': u'6.14', u'raw': 6.136},
u'debtToEquity': {u'fmt': u'302.79', u'raw': 302.793},
u'earningsGrowth': {},
u'ebitda': {u'fmt': u'-108.8M',
u'longFmt': u'-108,796,000',
u'raw': -108796000},
u'ebitdaMargins': {u'fmt': u'-42.17%',
u'raw': -0.42169997},
u'freeCashflow': {u'fmt': u'15.13M',
u'longFmt': u'15,127,875',
u'raw': 15127875},
u'grossMargins': {u'fmt': u'-30.48%', u'raw': -0.30478},
u'grossProfits': {u'fmt': u'283.7M',
u'longFmt': u'283,703,000',
u'raw': 283703000},
u'maxAge': 86400,
u'numberOfAnalystOpinions': {u'fmt': u'8',
u'longFmt': u'8',
u'raw': 8},
u'operatingCashflow': {u'fmt': u'-11.82M',
u'longFmt': u'-11,817,000',
u'raw': -11817000},
u'operatingMargins': {u'fmt': u'-46.09%',
u'raw': -0.46085998},
u'profitMargins': {u'fmt': u'-52.12%',
u'raw': -0.52124},
u'quickRatio': {u'fmt': u'5.94', u'raw': 5.944},
u'recommendationKey': u'hold',
u'recommendationMean': {u'fmt': u'2.80', u'raw': 2.8},
u'returnOnAssets': {u'fmt': u'-8.12%',
u'raw': -0.08116},
u'returnOnEquity': {u'fmt': u'-61.97%',
u'raw': -0.6197},
u'revenueGrowth': {u'fmt': u'-41.10%', u'raw': -0.411},
u'revenuePerShare': {u'fmt': u'2.15', u'raw': 2.148},
u'targetHighPrice': {u'fmt': u'64.00', u'raw': 64.0},
u'targetLowPrice': {u'fmt': u'17.00', u'raw': 17.0},
u'targetMeanPrice': {u'fmt': u'39.13', u'raw': 39.13},
u'targetMedianPrice': {u'fmt': u'38.00', u'raw': 38.0},
u'totalCash': {u'fmt': u'723.51M',
u'longFmt': u'723,507,008',
u'raw': 723507008},
u'totalCashPerShare': {u'fmt': u'5.99', u'raw': 5.99},
u'totalDebt': {u'fmt': u'478.9M',
u'longFmt': u'478,904,000',
u'raw': 478904000},
u'totalRevenue': {u'fmt': u'257.99M',
u'longFmt': u'257,993,984',
u'raw': 257993984}}}]
</code></pre>
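<p>Once the JSON is in hand, the nested <code>{'fmt': ..., 'raw': ...}</code> cells (some of which are empty dicts) are easier to consume with a small helper. A sketch — the field names follow the response shown above, and <code>raw_value</code> is a hypothetical convenience, not part of any library:</p>

```python
def raw_value(node, *keys, default=None):
    """Walk nested dicts by key and return the numeric 'raw' field, or default."""
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    if isinstance(node, dict):
        # Empty cells like u'earningsGrowth': {} fall through to the default
        return node.get("raw", default)
    return node

# Trimmed-down stand-in for data[u'quoteSummary']["result"][0]
summary = {"financialData": {"currentPrice": {"fmt": "36.33", "raw": 36.33},
                             "earningsGrowth": {}}}
print(raw_value(summary, "financialData", "currentPrice"))   # 36.33
print(raw_value(summary, "financialData", "earningsGrowth")) # None
```

<p>Guarding every lookup this way avoids a <code>KeyError</code> when Yahoo omits a statistic for a given ticker.</p>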
| 2 | 2016-08-09T20:31:47Z | [
"python",
"beautifulsoup",
"screen-scraping"
] |
Parsing keyword next to special character (pyparsing) | 38,858,479 | <p>Using pyparsing, how can I match a keyword immediately before or after a special character (like "{" or "}")? The code below shows that my keyword "msg" is not matched unless it is preceded by whitespace (or at start):</p>
<pre><code>import pyparsing as pp
openBrace = pp.Suppress(pp.Keyword("{"))
closeBrace = pp.Suppress(pp.Keyword("}"))
messageKw = pp.Keyword("msg")
messageExpr = pp.Forward()
messageExpr << messageKw + openBrace +\
pp.ZeroOrMore(messageExpr) + closeBrace
try:
result = messageExpr.parseString("msg { msg { } }")
print result.dump(), "\n"
result = messageExpr.parseString("msg {msg { } }")
print result.dump()
except pp.ParseException as pe:
print pe, "\n", "Text: ", pe.line
</code></pre>
<p>I'm sure there's a way to do this, but I have been unable to find it.</p>
<p>Thanks in advance</p>
| 1 | 2016-08-09T18:59:52Z | 38,858,834 | <pre><code>openBrace = pp.Suppress(pp.Keyword("{"))
closeBrace = pp.Suppress(pp.Keyword("}"))
</code></pre>
<p>should be:</p>
<pre><code>openBrace = pp.Suppress(pp.Literal("{"))
closeBrace = pp.Suppress(pp.Literal("}"))
</code></pre>
<p>or even just:</p>
<pre><code>openBrace = pp.Suppress("{")
closeBrace = pp.Suppress("}")
</code></pre>
<p>(Most pyparsing classes will auto-promote a string argument <code>"arg"</code> to <code>Literal("arg")</code>.)</p>
<p>When I have parsers with many punctuation marks, rather than have a big ugly chunk of statements like this, I'll collapse them down to something like:</p>
<pre><code>OBRACE, CBRACE, OPAR, CPAR, SEMI, COMMA = map(pp.Suppress, "{}();,")
</code></pre>
<p>The problem you are seeing is that <code>Keyword</code> looks at the immediately-surrounding characters, to make sure that the current string is not being accidentally matched when it is really embedded in a larger identifier-like string. In <code>Keyword('{')</code>, this will only work if there is no adjoining character that could be confused as part of a larger word. Since '{' itself is not really a typical keyword character, using <code>Keyword('{')</code> is not a good use of that class. </p>
<p>Only use <code>Keyword</code> for strings that could be misinterpreted as identifiers. For matching characters that are not in the set of typical keyword characters (by "keyword characters" I mean alphanumerics + '_'), use <code>Literal</code>.</p>
| 1 | 2016-08-09T19:22:08Z | [
"python",
"python-2.7",
"parsing",
"text-parsing",
"pyparsing"
] |
Python version of h=area() in Matlab | 38,858,506 | <p>I asked a related question yesterday and fortunately got my answer from jlarsch quickly. But now I am stuck with the next part, which starts with the <code>h=area()</code> line. I'd like to know the python version of the <code>area()</code> function, via which I will be able to set the colors. Could someone shed me some light again? Thanks much in advance.</p>
<pre><code>...
subplot(2,1,1);
H = plot(rand(100,5));
C = get(H, 'Color');   % cell array of RGB triplets, one per line
H = area(myX, myY);
H(1).FaceColor = C{1};
H(2).FaceColor = C{2};
grid on;
...
</code></pre>
| -1 | 2016-08-09T19:01:18Z | 38,858,631 | <p>You might be looking for <a href="http://www.pygame.org/docs/ref/draw.html" rel="nofollow">pygame.draw.polygon()</a>, which can fill a polygon defined by an arbitrary array of points.</p>
| 0 | 2016-08-09T19:08:34Z | [
"python"
] |