title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Error Executing Stored Procedure that References Linked Server with PyMSSQL | 38,837,327 | <p>I'm trying to execute a stored procedure from a python script, via pymssql, that communicates with a linked server. The SP works when run manually, but when run from the python script errors out with:</p>
<pre><code>(7391, 'The operation could not be performed because OLE DB
provider"SQLNCLI11" for linked server "DBLOG" was unable to begin a
distributed transaction.DB-Lib error message 20018, severity 16:
\nGeneral SQL Server error: Check messages from the SQL Server\n')
</code></pre>
<p>I haven't been able to find anything that references this as a limitation within pymssql itself. I'm not quite sure where to start. I do quite a bit of work with pymssql and have never had any connection issues, and I have verified that the login I am using has sufficient permissions (I even tried using SA).</p>
<p>Any ideas?</p>
<p>Thank you!</p>
| 1 | 2016-08-08T19:45:13Z | 38,839,207 | <p>I was able to recreate the issue with an SP that tried to do an UPDATE on the linked server, e.g., </p>
<pre class="lang-sql prettyprint-override"><code>UPDATE LINKEDSERVERNAME...TableName SET ...
</code></pre>
<p>although my error message was slightly different</p>
<blockquote>
<p>(8501, "MSDTC on server 'PANORAMA\SQLEXPRESS' is unavailable.DB-Lib error message 20018, ...</p>
</blockquote>
<p>I was able to avoid the issue by adding an <code>autocommit=True</code> argument to the end of my <code>pymssql.connect</code> call. </p>
<p>If for some reason using <code>autocommit=True</code> is no good for you then have a look at</p>
<p><a href="http://stackoverflow.com/q/29414250/2144390">MSDTC on server 'server is unavailable</a></p>
<p>for information on configuring MSDTC.</p>
| 2 | 2016-08-08T22:04:18Z | [
"python",
"sql-server",
"pymssql"
] |
Simple way to plot time series with real dates using pandas | 38,837,421 | <p>Starting from the following CSV data, loaded into a pandas data frame...</p>
<pre><code>Buchung;Betrag;Saldo
27.06.2016;-1.000,00;42.374,95
02.06.2016;500,00;43.374,95
01.06.2016;-1.000,00;42.874,95
13.05.2016;-500,00;43.874,95
02.05.2016;500,00;44.374,95
04.04.2016;500,00;43.874,95
02.03.2016;500,00;43.374,95
10.02.2016;1.000,00;42.874,95
02.02.2016;500,00;41.874,95
01.02.2016;1.000,00;41.374,95
04.01.2016;300,00;40.374,95
30.12.2015;234,54;40.074,95
02.12.2015;300,00;39.840,41
02.11.2015;300,00;39.540,41
08.10.2015;1.000,00;39.240,41
02.10.2015;300,00;38.240,41
02.09.2015;300,00;37.940,41
31.08.2015;2.000,00;37.640,41
</code></pre>
<p>... I would like an intuitive way to plot the time series given by the dates in column "Buchung" and the monetary values in column "Saldo". </p>
<p>I tried</p>
<pre><code>seaborn.tsplot(data=data, time="Buchung", value="Saldo")
</code></pre>
<p>which yields</p>
<pre><code>ValueError: could not convert string to float: '31.08.2015'
</code></pre>
<p>What is an easy way to read the dates and values and plot the time series? I assume this is such a common problem that there must be a three line solution.</p>
| 0 | 2016-08-08T19:51:09Z | 38,837,506 | <p>You need to convert your date column into the correct format:</p>
<pre><code>data['Buchung'] = pd.to_datetime(data['Buchung'], format='%d.%m.%Y')
</code></pre>
<p>Now your plot will work.</p>
<hr>
<p>Though you didn't ask, I think you will also run into a similar problem because your numbers (in <code>'Betrag'</code> and <code>'Saldo'</code>) seem to be strings as well. So I recommend you convert them to numeric before plotting. Here is how you can do that with simple string manipulation followed by a cast (note <code>regex=False</code>: without it, <code>'.'</code> is treated as a regular expression and matches every character):</p>
<pre><code>data["Saldo"] = data["Saldo"].str.replace('.', '', regex=False).str.replace(',', '.', regex=False).astype(float)
data["Betrag"] = data["Betrag"].str.replace('.', '', regex=False).str.replace(',', '.', regex=False).astype(float)
</code></pre>
<p>Or set the <a href="https://docs.python.org/3.5/library/locale.html" rel="nofollow">locale</a>:</p>
<pre><code>import locale
# The data appears to be in a European format, German locale might
# fit. Try this on Windows machine:
locale.setlocale(locale.LC_ALL, 'de')
data['Betrag'] = data['Betrag'].apply(locale.atof)
data['Saldo'] = data['Saldo'].apply(locale.atof)
# This will reset the locale to system default
locale.setlocale(locale.LC_ALL, '')
</code></pre>
<p>On an Ubuntu machine, follow <a href="http://stackoverflow.com/a/14548156/3765319">this answer</a>. If the above code does not work on a Windows machine, try <code>locale.locale_alias</code> to list all available locales and pick the name from that.</p>
<hr>
<h1>Output</h1>
<p>Using <code>matplotlib</code> since I cannot install Seaborn on the machine I am working from.</p>
<pre><code>from matplotlib import pyplot as plt
plt.plot(data['Buchung'], data['Saldo'], '-')
_ = plt.xticks(rotation=45)
</code></pre>
<p><a href="http://i.stack.imgur.com/HCTnu.png" rel="nofollow"><img src="http://i.stack.imgur.com/HCTnu.png" alt="The Plot"></a></p>
<p>Note: this has been produced using the <code>locale</code> method. Hence the month names are in German.</p>
| 2 | 2016-08-08T19:56:26Z | [
"python",
"pandas",
"plot",
"time-series",
"seaborn"
] |
Python threads repeating values | 38,837,428 | <p>I am trying to write a python SHA512 brute forcer.</p>
<p>I use a Queue to store the values in the wordlist and then compare them against the encrypted hash.</p>
<p>The problem is that, instead of the values being popped out of the Queue, they are reused by other threads. So basically, instead of having the whole work split between threads to make things faster, I got several threads doing the exact same thing. How can I fix this?</p>
<p>I want something like this: <a href="https://github.com/WillPennell/Python/blob/master/Black-Hat-Python/BHP-Code/Chapter5/content_bruter.py#L20" rel="nofollow">https://github.com/WillPennell/Python/blob/master/Black-Hat-Python/BHP-Code/Chapter5/content_bruter.py#L20</a></p>
<pre><code>import threading
import thread
import Queue
import os,sys
import crypt
import codecs
from datetime import datetime,timedelta
import argparse

today = datetime.today()
resume = None
threads = 5

def build_wordlist(wordlist_file):
    fd = open(wordlist_file,"rb")
    raw_words = fd.readlines()
    fd.close()
    found_resume = False
    words = Queue.Queue()
    for word in raw_words:
        word = word.rstrip()
        if resume is not None:
            if found_resume:
                words.put(word)
            else:
                if word == resume:
                    found_resume = True
                    print "Resuming wordlist from: %s" % resume
        else:
            words.put(word)
    return words

def testPass(cryptPass,user):
    word_queue = build_wordlist('test.txt')
    while not word_queue.empty():
        attempt = word_queue.get()
        ctype = cryptPass.split("$")[1]
        if ctype == '6':
            print "[+] Hash type SHA-512 detected ..."
            salt = cryptPass.split("$")[2]
            insalt = "$" + ctype + "$" + salt + "$"
            word = attempt
            cryptWord = crypt.crypt(word,insalt)
            if (cryptWord == cryptPass):
                time = str(datetime.today() - today)
                print "[+] Found password for the user: " + user + " ====> " + word + " Time: "+time+"\n"
                return
    print "Password not found for the user: " + user
    print "Moving on to next user..."
    exit

def main():
    parse = argparse.ArgumentParser(description='A simple brute force /etc/shadow .')
    parse.add_argument('-f', action='store', dest='path', help='Path to shadow file, example: \'/etc/shadow\'')
    argus = parse.parse_args()
    if argus.path == None:
        parse.print_help()
        exit
    else:
        build_wordlist('test.txt')
        passFile = open(argus.path,'r')
        for line in passFile.readlines():
            line = line.replace("\n","").split(":")
            if not line[1] in [ 'x' , '*' , '!' ]:
                user = line[0]
                cryptPass = line[1]
                for i in range(threads):
                    t = threading.Thread(target=testPass,args=(cryptPass,user))
                    t.daemon = True
                    t.start()

if __name__=="__main__":
    main()
</code></pre>
<p>EDIT: I realized there are 2 ways I can do this:
first, I can create a thread for each user, which is not what I want.
Second, I can split the work of each user through several threads, which is what I want.</p>
| 0 | 2016-08-08T19:51:49Z | 38,837,659 | <p>This can be solved using the classic producer and consumer problem. You may find <a href="http://www.napuzba.com/story/producer-consumer-python/" rel="nofollow">Solution to producer and consumer problem in python</a> useful. </p>
| 0 | 2016-08-08T20:06:48Z | [
"python",
"multithreading",
"queue"
] |
Python threads repeating values | 38,837,428 | <p>I am trying to write a python SHA512 brute forcer.</p>
<p>I use a Queue to store the values in the wordlist and then compare them against the encrypted hash.</p>
<p>The problem is that, instead of the values being popped out of the Queue, they are reused by other threads. So basically, instead of having the whole work split between threads to make things faster, I got several threads doing the exact same thing. How can I fix this?</p>
<p>I want something like this: <a href="https://github.com/WillPennell/Python/blob/master/Black-Hat-Python/BHP-Code/Chapter5/content_bruter.py#L20" rel="nofollow">https://github.com/WillPennell/Python/blob/master/Black-Hat-Python/BHP-Code/Chapter5/content_bruter.py#L20</a></p>
<pre><code>import threading
import thread
import Queue
import os,sys
import crypt
import codecs
from datetime import datetime,timedelta
import argparse

today = datetime.today()
resume = None
threads = 5

def build_wordlist(wordlist_file):
    fd = open(wordlist_file,"rb")
    raw_words = fd.readlines()
    fd.close()
    found_resume = False
    words = Queue.Queue()
    for word in raw_words:
        word = word.rstrip()
        if resume is not None:
            if found_resume:
                words.put(word)
            else:
                if word == resume:
                    found_resume = True
                    print "Resuming wordlist from: %s" % resume
        else:
            words.put(word)
    return words

def testPass(cryptPass,user):
    word_queue = build_wordlist('test.txt')
    while not word_queue.empty():
        attempt = word_queue.get()
        ctype = cryptPass.split("$")[1]
        if ctype == '6':
            print "[+] Hash type SHA-512 detected ..."
            salt = cryptPass.split("$")[2]
            insalt = "$" + ctype + "$" + salt + "$"
            word = attempt
            cryptWord = crypt.crypt(word,insalt)
            if (cryptWord == cryptPass):
                time = str(datetime.today() - today)
                print "[+] Found password for the user: " + user + " ====> " + word + " Time: "+time+"\n"
                return
    print "Password not found for the user: " + user
    print "Moving on to next user..."
    exit

def main():
    parse = argparse.ArgumentParser(description='A simple brute force /etc/shadow .')
    parse.add_argument('-f', action='store', dest='path', help='Path to shadow file, example: \'/etc/shadow\'')
    argus = parse.parse_args()
    if argus.path == None:
        parse.print_help()
        exit
    else:
        build_wordlist('test.txt')
        passFile = open(argus.path,'r')
        for line in passFile.readlines():
            line = line.replace("\n","").split(":")
            if not line[1] in [ 'x' , '*' , '!' ]:
                user = line[0]
                cryptPass = line[1]
                for i in range(threads):
                    t = threading.Thread(target=testPass,args=(cryptPass,user))
                    t.daemon = True
                    t.start()

if __name__=="__main__":
    main()
</code></pre>
<p>EDIT: I realized there are 2 ways I can do this:
first, I can create a thread for each user, which is not what I want.
Second, I can split the work of each user through several threads, which is what I want.</p>
| 0 | 2016-08-08T19:51:49Z | 38,837,762 | <p>Let's look at this block of code:</p>
<pre><code>for i in range(threads):
    t = threading.Thread(target=testPass,args=(cryptPass,user))
    t.daemon = True
    t.start()
</code></pre>
<p>And let's describe what this is doing for each thread you start:</p>
<ol>
<li><strong>create a new <code>Queue</code> object from <code>test.txt</code> as defined by <code>build_wordlist</code></strong> </li>
<li>Process the queue from step 1</li>
</ol>
<p>It sounds like your desired behavior is to multithread some processing step on a single queue rather than create duplicates of the same queue. So this means your "testPass" method should probably take in a Queue object. i.e.</p>
<pre><code>q = build_wordlist('test.txt')
for i in range(threads):
    t = threading.Thread(target=testPass,args=(q, cryptPass,user))
    t.daemon = True
    t.start()
</code></pre>
<p>and <code>testPass</code> should look like:</p>
<pre><code>def testPass(queue, cryptPass, user):
    word_queue = queue
    ... stuff ...
</code></pre>
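<p>A minimal, runnable sketch of that shared-queue pattern (Python 3 here, where the module is <code>queue</code> rather than <code>Queue</code>; the worker and word list are made up for illustration):</p>

```python
import threading
import queue  # on Python 2 this module is called Queue

# Every worker drains the SAME queue, so each word is tried exactly once
# across all threads instead of once per thread.
work = queue.Queue()
for word in ["alpha", "beta", "gamma", "delta", "epsilon"]:
    work.put(word)

processed = []
lock = threading.Lock()

def worker(q):
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return  # queue drained, this worker is done
        with lock:
            processed.append(item)

workers = [threading.Thread(target=worker, args=(work,)) for _ in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(sorted(processed))  # ['alpha', 'beta', 'delta', 'epsilon', 'gamma']
```

<p>Each item comes out of the queue exactly once, no matter how many workers share it.</p>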
| 0 | 2016-08-08T20:15:04Z | [
"python",
"multithreading",
"queue"
] |
Update a dictionary value using python | 38,837,507 | <p>I have a json file, which when loaded in python using <code>json.loads()</code> becomes a <code>dictionary</code>. The json data is a <code>nested dictionary</code> which can contain a <code>'groups'</code> key inside another <code>'groups'</code> key. The values inside the <code>'groups'</code> key are a <code>'name'</code> key and a <code>'properties'</code> key.</p>
<p>Each <code>'properties'</code> key has a unique <code>'name'</code> and a <code>'value'</code> key.</p>
<p>My objective is to search for a <code>'groups'</code> key having its <code>'name'</code> key value as <code>"SportCar"</code>, which has a <code>properties</code> key having a <code>name</code> key value as <code>"BMW"</code>, and only when these conditions are satisfied, update the <code>'data'</code> key from <code>'data':value1</code> to <code>'data':value2</code>.</p>
<p>An example of the json is as follows</p>
<pre><code>{
  "groups": [
    {
      "name": "SportCar",
      "properties": [
        {
          "name": "BMW",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value1"
          }
        },
        {
          "name": "Audi",
          "value": {
            "type": "Boolean",
            "data": true
          }
        }
      ],
      "groups": [
        {
          "name": "Trucks",
          "properties": [
            {
              "name": "Volvo",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "value1"
              }
            }
          ]
        }
      ]
    },
    {
      "name": "MotorCycle",
      "properties": [
        {
          "name": "Yamaha",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value1"
          }
        }
      ],
      "groups": [
        {
          "name": "Speeders",
          "properties": [
            {
              "name": "prop2",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "value1"
              }
            }
          ]
        }
      ]
    }
  ]
}
</code></pre>
<p>The above json is contained in myjson22.json. Here is what I have tried so far:</p>
<pre><code>import json
from pprint import pprint

json_data = open('myjson22.json', 'r')
data = json.load(json_data)
#print(data)

def get_recursively(search_dict, field):
    """
    To read the json data as type dict and search all 'groups' keys
    for the 'name' key value provided.
    """
    fields_found = []
    for key, value in search_dict.items():
        if key == field:
            fields_found.append(value)
        elif isinstance(value, dict):
            results = get_recursively(value, field)
            for result in results:
                fields_found.append(result)
        elif isinstance(value, list):
            for item in value:
                if isinstance(item, dict):
                    more_results = get_recursively(item, field)
                    for another_result in more_results:
                        fields_found.append(another_result)
    return fields_found

get_recursively(data, ["properties"][0])
</code></pre>
<p>and the output was:</p>
<pre><code> [[{'name': 'BMW',
'value': {'data': 'value1', 'encoding': 'utf-8', 'type': 'String'}},
{'name': 'Audi', 'value': {'data': True, 'type': 'Boolean'}}],
[{'name': 'Volvo',
'value': {'data': 'value1', 'encoding': 'utf-8', 'type': 'String'}}],
[{'name': 'Yamaha',
'value': {'data': 'value1', 'encoding': 'utf-8', 'type': 'String'}}],
[{'name': 'prop2',
'value': {'data': 'value1', 'encoding': 'utf-8', 'type': 'String'}}]]
</code></pre>
| -3 | 2016-08-08T19:56:32Z | 38,839,283 | <p>A way to implement this recursive solution is with backtracking. When no more <code>'groups'</code> keys are found nested inside the root key, the <code>'name'</code> key value is matched with the <code>groups_name</code> parameter, which is 'SportCar' in our case. If this condition is satisfied, check the values inside the same <code>'groups'</code> key's (i.e. 'SportCar' key's) <code>'properties'</code> key and match its <code>'name'</code> key value with the <code>properties_name</code> parameter (which is 'BMW' in our case). If this second condition is also true, then update the <code>'data'</code> key value inside the same <code>'properties'</code> key as required; otherwise return (for backtracking).</p>
<pre><code>import json

json_data = open('myjson22.json', 'r')
data = json.load(json_data)

def get_recursively(myJson, groups_name, properties_name, value2):
    if 'groups' in myJson.keys():
        # As there are multiple values inside 'groups' key
        for jsonInsideGroupsKey in myJson['groups']:
            get_recursively(jsonInsideGroupsKey, groups_name, properties_name, value2)
    if 'name' in myJson.keys():
        # check for groups name
        if myJson['name'] == groups_name:
            # check for properties name
            if myJson['properties'][0]['name'] == properties_name:
                # Update value. The changes will persist as we backtrack because
                # we are making the update at the original memory location
                # and not on a copy. For more info see deep and shallow copy.
                myJson['properties'][0]['value']['data'] = value2
                return

get_recursively(data,'SportCar','BMW','changedValue1')
get_recursively(data,'Speeders','prop2','changedValue2')
print data
</code></pre>
<p>my output:</p>
<p>{u'groups': [{u'name': u'SportCar', u'groups': [{u'name': u'Trucks', u'properties': [{u'name': u'Volvo', u'value': {u'data': u'value1', u'type': u'String', u'encoding': u'utf-8'}}]}], u'properties': [{u'name': u'BMW', u'value': {u'data': <code>'changedValue1'</code>, u'type': u'String', u'encoding': u'utf-8'}}, {u'name': u'Audi', u'value': {u'data': True, u'type': u'Boolean'}}]}, {u'name': u'MotorCycle', u'groups': [{u'name': u'Speeders', u'properties': [{u'name': u'prop2', u'value': {u'data': <code>'changedValue2'</code>, u'type': u'String', u'encoding': u'utf-8'}}]}], u'properties': [{u'name': u'Yamaha', u'value': {u'data': u'value1', u'type': u'String', u'encoding': u'utf-8'}}]}]}</p>
<p>Prettified, it looks like this:</p>
<pre><code>{
  "groups": [
    {
      "name": "SportCar",
      "properties": [
        {
          "name": "BMW",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "changedValue1"
          }
        },
        {
          "name": "Audi",
          "value": {
            "type": "Boolean",
            "data": true
          }
        }
      ],
      "groups": [
        {
          "name": "Trucks",
          "properties": [
            {
              "name": "Volvo",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "value1"
              }
            }
          ]
        }
      ]
    },
    {
      "name": "MotorCycle",
      "properties": [
        {
          "name": "Yamaha",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value1"
          }
        }
      ],
      "groups": [
        {
          "name": "Speeders",
          "properties": [
            {
              "name": "prop2",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "changedValue2"
              }
            }
          ]
        }
      ]
    }
  ]
}
</code></pre>
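<p>The same recursion can be checked on a small inline structure. This is a sketch using a simplified variant of the function above (hypothetical <code>update_data</code> name, trimmed input), not the exact code from the answer:</p>

```python
# A trimmed, hypothetical input in the same shape as the question's JSON.
doc = {
    "groups": [
        {
            "name": "SportCar",
            "properties": [{"name": "BMW", "value": {"data": "value1"}}],
        }
    ]
}

def update_data(node, groups_name, properties_name, new_value):
    # Recurse into any nested 'groups' first, then test this node.
    for group in node.get('groups', []):
        update_data(group, groups_name, properties_name, new_value)
    if node.get('name') == groups_name:
        props = node.get('properties', [])
        if props and props[0]['name'] == properties_name:
            props[0]['value']['data'] = new_value

update_data(doc, 'SportCar', 'BMW', 'changedValue1')
print(doc['groups'][0]['properties'][0]['value']['data'])  # changedValue1
```

<p>Because the update mutates the dict in place, the change is visible from the root after the recursion unwinds.</p>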
| 2 | 2016-08-08T22:11:54Z | [
"python",
"json"
] |
Django HTTP_HOST errors on AWS EC2 behind Load Balancer | 38,837,512 | <p>I have a Django app, using Apache and mod_wsgi running on an EC2 instance behind an AWS ELB balancer. The balancer maps SSL traffic (port 443) to port 8080 on the EC2 instance. Apache has a VirtualHost configured on port 8080 to serve the Django app, with ServerName set to the domain name for the website. Django runs in production mode (DEBUG=False) and exposes, among other things, a healtcheck endpoint (at /healtcheck). The ALLOWED_HOSTS setting is set to the domain name for the website, plus the private IP address of the EC2 instance, in order to allow the Load Balancer to hit the healthcheck endpoint. </p>
<p>Everything works fine with this set-up. The problem is that I keep receiving occasional bursts of e-mails from Django with error messages similar to this: <code>ERROR (EXTERNAL IP): Invalid HTTP_HOST header: '52.51.147.134'. You may need to add u'52.51.147.134' to ALLOWED_HOSTS.</code> The headers also contain <code>HTTP_X_FORWARDED_FOR = '139.162.13.205'</code></p>
<p>I get various IP addresses (and sometimes hostnames), belonging to script kiddies, I presume. </p>
<p>How can I block this traffic from ever reaching the Django app, while still allowing valid traffic (where HTTP_HOST is my domain name) and the ELB healthcheck traffic (where HTTP_HOST is my EC2 private IP address)? </p>
| 1 | 2016-08-08T19:57:01Z | 38,837,931 | <p>I would suggest you only allow traffic to your EC2 instance from the load balancer using a security group, plus the IP address of your office/home if you're SSH'ing into the EC2 instance.</p>
<p><a href="http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-security-groups.html#elb-vpc-instance-security-groups" rel="nofollow">http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-security-groups.html#elb-vpc-instance-security-groups</a></p>
<p>This will stop the script kiddies from hitting the EC2 instance directly which appears is what is happening here.</p>
| 1 | 2016-08-08T20:26:00Z | [
"python",
"django",
"apache",
"amazon-web-services",
"amazon-ec2"
] |
Getting the resource schedule from Office 365? | 38,837,540 | <p>I had a Python script which could fetch the schedule for a resource (room) from the company's Office 365 calendar by calling
<code>https://outlook.office365.com/api/v1.0/users/<roomName@companyName.com>/calendarview?startDateTime=2016-08-07 22:00:00&endDateTime=2016-08-08 22:00:00</code></p>
<p>This doesn't seem to work anymore.
As far as I can tell, the API has changed to restrict permissions on resource calendars.
Is that a correct assumption or am I doing something wrong?</p>
<p>Is there a way to actually get the schedule for a resource?</p>
<p>I would preferably want to do this in Python or C#</p>
| 0 | 2016-08-08T19:59:25Z | 38,844,108 | <p>What's the error message you get? If you are getting an error like <code>The access token is acquired using an authentication method that is too weak to allow access for this application</code>, you need to use a certificate to request the token instead of the client id and secret.</p>
<p>Here is a code sample that uses a certificate to request the token, for your reference:</p>
<pre><code>public static async Task<string> GetTokenByCert(string clientId, string tenant, string certThumbprint, string resource)
{
    string authority = $"https://login.windows.net/{tenant}";
    X509Certificate2 cert = CertHelper.FindCert(certThumbprint);
    var certCred = new ClientAssertionCertificate(clientId, cert);
    var authContext = new Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext(authority);
    AuthenticationResult result = null;
    try
    {
        result = await authContext.AcquireTokenAsync(resource, certCred);
    }
    catch (Exception ex)
    {
    }
    return result.AccessToken;
}
</code></pre>
<p>More detail about config/use the certificate to request the token, please refer <a href="https://msdn.microsoft.com/en-us/library/office/dn707383.aspx" rel="nofollow">here</a>.</p>
| 0 | 2016-08-09T06:58:32Z | [
"c#",
"python",
"office365",
"office365api"
] |
How to properly read strings from SQL Database with NumPy | 38,837,569 | <p>I have a dataset which I am querying with SQL. My query returns a long string which simply contains the column names and then the data, with rows separated by newline characters. I then use <code>numpy.genfromtxt</code> to turn this long string into a numpy array.</p>
<p>However, there are a few columns that should be read as strings. So, I am explicitly passing a <code>dtype</code> array to <code>genfromtxt</code> so that it saves the column values correctly. However, when I inspect the output, all column entries that should be a string simply appear as <code>''</code>, an empty string. </p>
<p>I am declaring the data type of these columns as <code>str</code>. As an example, one such entry that is turning into an empty string is, in the original dataset, the word <code>GALAXY</code>. However, on the official docs for the dataset, it is listed that the data type of this column is <code>varchar</code>. I assumed <code>str</code> would be the correct type for this, but I guess not. </p>
<hr>
<p><strong>Edit:</strong> Ignore that this has anything to with SQL. Basically, I have a string that is the result of a query, and I need to pack it into a numpy array using <code>np.genfromtxt</code>. I avoided posting the explicit strings because they are brutal to look at, but here is one:</p>
<p><code>b'bestObjID,ra,dec,z,zErr,zWarning,class,subClass,rChi2,DOF,rChi2Diff,z_noqso,zErr_noqso,zWarning_noqso,class_noqso,subClass_noqso,rChi2Diff_noqso,velDisp,velDispErr,velDispZ,velDispZErr,velDispChi2\n1237662340012638224,239.58334,27.233419,0.09080672,2.924875E-05,0,GALAXY,,1.104714,3735,1.411605,0,0,0,,,0,272.6187,13.61222,0,0,1815.653\n'</code></p>
<p>As you can see, it is a <code>bytes</code> object with rows separated by <code>\n</code> and the first row being the column labels. </p>
<p>The result of passing this to <code>np.genfromtxt</code> is </p>
<p><code>array((1237662340012638224, 239.58334, 27.233419, 0.09080672264099121, 2.9248749342514202e-05, 0, '', '', 1.104714035987854, 3735.0, 1.4116050004959106, 0.0, 0.0, 0, '', '', 0.0, 272.61871337890625, 13.61221981048584, 0.0, 0.0, 1815.6529541015625),
dtype=[('bestObjID', '<i8'), ('ra', '<f8'), ('dec', '<f8'), ('z', '<f4'), ('zErr', '<f4'), ('zWarning', '<i8'), ('class', '<c16'), ('subClass', '<c16'), ('rChi2', '<f4'), ('DOF', '<f4'), ('rChi2Diff', '<f4'), ('z_noqso', '<f4'), ('zErr_noqso', '<f4'), ('zWarning_noqso', '<i8'), ('class_noqso', '<c16'), ('subClass_noqso', '<c16'), ('rChi2Diff_noqso', '<f4'), ('velDisp', '<f4'), ('velDispErr', '<f4'), ('velDispZ', '<f4'), ('velDispZErr', '<f4'), ('velDispChi2', '<f4')])
</code></p>
<p>You can see how what should say <code>'GALAXY'</code> turns into <code>''</code> when I specify that the data type of this entry is <code>str</code>. If I instead use the <code>c</code> dataype, I can recover the <code>G</code> of <code>GALAXY</code>, but nothing more. If I try to use <code>c8</code> or <code>c16</code>, I get <code>(nan+0j)</code></p>
| 0 | 2016-08-08T20:01:08Z | 38,838,417 | <p>I'm guessing at how you're using genfromtxt, but this seems to work?</p>
<pre><code>import numpy as np
from StringIO import StringIO
s = b'bestObjID,ra,dec,z,zErr,zWarning,class,subClass,rChi2,DOF,rChi2Diff,z_noqso,zErr_noqso,zWarning_noqso,class_noqso,subClass_noqso,rChi2Diff_noqso,velDisp,velDispErr,velDispZ,velDispZErr,velDispChi2\n1237662340012638224,239.58334,27.233419,0.09080672,2.924875E-05,0,GALAXY,,1.104714,3735,1.411605,0,0,0,,,0,272.6187,13.61222,0,0,1815.653\n'
S = lambda : StringIO(s)
np.genfromtxt(S(), dtype = None, names=True, delimiter=',')
</code></pre>
<p>outputs</p>
<pre><code>array((1237662340012638224, 239.58334, 27.233419, 0.09080672, 2.924875e-05, 0, 'GALAXY', False, 1.104714, 3735, 1.411605, 0, 0, 0, False, False, 0, 272.6187, 13.61222, 0, 0, 1815.653),
dtype=[('bestObjID', '<i8'), ('ra', '<f8'), ('dec', '<f8'), ('z', '<f8'), ('zErr', '<f8'), ('zWarning', '<i8'), ('class', 'S6'), ('subClass', '?'), ('rChi2', '<f8'), ('DOF', '<i8'), ('rChi2Diff', '<f8'), ('z_noqso', '<i8'), ('zErr_noqso', '<i8'), ('zWarning_noqso', '<i8'), ('class_noqso', '?'), ('subClass_noqso', '?'), ('rChi2Diff_noqso', '<i8'), ('velDisp', '<f8'), ('velDispErr', '<f8'), ('velDispZ', '<i8'), ('velDispZErr', '<i8'), ('velDispChi2', '<f8')])
</code></pre>
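<p>On Python 3 the same approach works by decoding the bytes into an <code>io.StringIO</code>, and passing <code>encoding='utf-8'</code> makes string columns come back as <code>str</code> rather than bytes. A sketch with made-up two-row data:</p>

```python
import io
import numpy as np

# Made-up two-row sample with the same shape of problem: numeric columns
# next to a string column.
s = b'a,b,class\n1,2.5,GALAXY\n2,3.5,QSO\n'

# dtype=None infers per-column types; encoding='utf-8' makes the 'class'
# column a unicode string dtype ('<U...') instead of empty/complex values.
arr = np.genfromtxt(io.StringIO(s.decode('utf-8')), dtype=None,
                    names=True, delimiter=',', encoding='utf-8')
print(arr['class'][0])  # GALAXY
```

<p>The key point is letting <code>dtype=None</code> infer the types rather than forcing <code>str</code> or a complex dtype on the string columns.</p>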
| 1 | 2016-08-08T20:58:08Z | [
"python",
"mysql",
"string",
"numpy"
] |
Changing time format in template is raising exception | 38,837,638 | <p>I can't figure out why <code>Django</code> returns <code>Exception</code> when I try to set time format in template. </p>
<p>This is a column in <code>Django-tables2</code> table.</p>
<pre><code>time_arrival = tables.TemplateColumn('{{record.time_arrival|time: "H:i"}}',verbose_name=u'Äas prÃchodu')
</code></pre>
<p>The <code>time_arrival</code> is an attribute of <code>Reservation</code> model which is a <code>record</code> in this table. When there is just <code>{{ record.time_arrival }}</code> it shows time in this format: <code>1 p.m.</code> but I want to show <code>13:00</code> for example so I have to change the format.</p>
<p>This exception is being raised: </p>
<pre><code>Exception Value: Could not parse the remainder: ': "H:i"' from 'record.time_arrival|time: "H:i"'
</code></pre>
<p>This is a <code>time_arrival</code> attribute in <code>Reservation</code> model:</p>
<pre><code>time_arrival = models.TimeField(null=True, blank=True, verbose_name=u'Äas prÃletu')
</code></pre>
<p>Do you know where is the problem?</p>
| 0 | 2016-08-08T20:05:55Z | 38,837,695 | <p>Remove the space between <code>time:</code> and <code>"H:i"</code></p>
<pre><code>'{{record.time_arrival|time:"H:i"}}'
</code></pre>
| 2 | 2016-08-08T20:09:56Z | [
"python",
"django",
"django-templates"
] |
QT QFileSystemWatcher | 38,837,668 | <p>It seems simple but it doesn't work. I have usbautomount installed.</p>
<pre><code>#Watch the media directory and connect to enable save csv pb
self.usb_watcher = QFileSystemWatcher()
self.usb_watcher.addPaths(["/media/usb0"])
self.usb_watcher.directoryChanged.connect(self.enable_save_csv_pb)
self.usb_watcher.fileChanged.connect(self.enable_save_csv_pb)
</code></pre>
<p>I think it has to do with addpath. If I don't put in the square brackets I get this error message:</p>
<pre><code>QFileSystemWatcher: failed to add paths: m, e, d, i, a, /, u, s, b, 0
</code></pre>
<p>But I've seen examples without the square brackets.</p>
| -1 | 2016-08-08T20:07:14Z | 38,837,976 | <p>I think you should use <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qfilesystemwatcher.html#addPath" rel="nofollow">addPath</a> instead of <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qfilesystemwatcher.html#addPaths" rel="nofollow">addPaths</a>. <code>addPaths</code> expects a list of paths; if you hand it a bare string, the string is iterated character by character, which is exactly what your error message shows.</p>
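<p>To see why the bare string fails, note that a Python string is itself a sequence, so an API expecting a list of paths ends up iterating it character by character. A tiny illustration (plain Python, no PyQt needed):</p>

```python
# A Python string is itself a sequence, so an API that expects a list of
# paths and receives a bare string iterates it character by character --
# which is why the error lists the path one letter at a time.
path = "/media/usb0"

as_chars = list(path)       # roughly what addPaths() sees for a bare string
as_list = ["/media/usb0"]   # what it should receive

print(as_chars[:3])  # ['/', 'm', 'e']
print(as_list)
```

<p>So either wrap the single path in a list for <code>addPaths</code>, or call <code>addPath</code> with the string directly.</p>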
| 0 | 2016-08-08T20:29:09Z | [
"python",
"linux",
"pyqt",
"qt4"
] |
How to determine the functions available in DLL for use in Python | 38,837,849 | <p>A <a href="http://www.brooksinstrument.com/products/accessories-software/product-software/brooks-labview-dll" rel="nofollow">LabView dll file</a> is to be used from a Python 2.7 script. What can we do to figure out which are the methods in the DLL file that we can call from Python, without having to load the dll in LabView?</p>
<p>Also, do I need to have LabView running on the system which runs the Python code that calls the LabView dll?</p>
| 2 | 2016-08-08T20:20:41Z | 39,337,307 | <p>What I understand is that you are using a third-party DLL created with LabVIEW. In that case the LabVIEW Run-Time Engine has to be installed to use it, and it is available for free: <a href="http://digital.ni.com/public.nsf/allkb/B727BE0152F606D486256A22004DE82D" rel="nofollow">http://digital.ni.com/public.nsf/allkb/B727BE0152F606D486256A22004DE82D</a></p>
<p>To find the methods provided with the DLL I would</p>
<ul>
<li>Search for a documentation by the manufacturer</li>
<li>Look for a header file provided with the DLL</li>
<li>Use a Tool like DLL Export Viewer: <a href="http://www.nirsoft.net/utils/dll_export_viewer.html" rel="nofollow">http://www.nirsoft.net/utils/dll_export_viewer.html</a></li>
</ul>
| 0 | 2016-09-05T20:29:48Z | [
"python",
"python-2.7",
"dll",
"labview"
] |
Restart script on error | 38,837,851 | <p>so I have this script in selenium, that it sometimes crashes for various reasons. Sometimes it fails to click a button, gets confused, gets messed up and displays an error.</p>
<p>How can I command the script, so whenever it crashes, to re-run the script from the beginning again? I've heard about <code>try</code> and <code>except</code> functions but I'm not sure how to use them.</p>
<p>Any help is appreciated! :)</p>
<p>[Using Python 2.7 with Selenium Webdriver]</p>
| 0 | 2016-08-08T20:20:44Z | 38,838,161 | <p>generic answer to retry on any exception:</p>
<pre><code>while True:
    try:
        # run your selenium code which sometimes breaks
        pass
    except Exception as e:
        print("something went wrong: " + repr(e))
    else:
        break  # made it through without an exception, stop retrying
</code></pre>
<p>You may want to refine the exception type so that you don't retry on, say, an ordinary Python error like <code>ValueError</code> or <code>IOError</code>: check the exception type and replace <code>Exception</code> with the more specific class (for Selenium failures, <code>selenium.common.exceptions.WebDriverException</code> is the usual base class).</p>
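<p>If an endless loop is too blunt, a bounded variant gives up after a fixed number of attempts and re-raises the last error. This is a generic sketch (the <code>flaky</code> function just simulates a Selenium step that fails twice):</p>

```python
# Generic bounded-retry sketch; 'flaky' merely simulates a Selenium step
# that fails twice before succeeding.
def run_with_retries(step, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as e:
            print("attempt %d failed: %r" % (attempt, e))
            if attempt == max_attempts:
                raise  # out of attempts, surface the last error

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("button not found")
    return "done"

print(run_with_retries(flaky))  # prints two failure lines, then: done
```

<p>This keeps a genuinely broken script from spinning forever while still absorbing transient failures.</p>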
| 0 | 2016-08-08T20:42:03Z | [
"python",
"selenium"
] |
Numpy Uniform Distribution With Decay | 38,837,860 | <p>I'm trying to construct a matrix of uniform distributions decaying to 0 at the same rate in each row. The distributions should be between -1 and 1. What I'm looking at is to construct something that resembles:</p>
<pre><code>[[0.454/exp(0) -0.032/exp(1) 0.641/exp(2)...]
[-0.234/exp(0) 0.921/exp(1) 0.049/exp(2)...]
...
[0.910/exp(0) 0.003/exp(1) -0.908/exp(2)...]]
</code></pre>
<p>I can build a matrix of uniform distributions using:</p>
<pre><code>w = np.array([np.random.uniform(-1, 1, 10) for i in range(10)])
</code></pre>
<p>and can achieve the desired result using a <code>for</code> loop with:</p>
<pre><code>for k in range(len(w)):
    for l in range(len(w[0])):
        w[k][l] = w[k][l]/np.exp(l)
</code></pre>
<p>but wanted to know if there was a better way of accomplishing this. </p>
| 2 | 2016-08-08T20:21:33Z | 38,837,956 | <p>Alok Singhal's answer is best, but as another (perhaps more explicit) way to do this, you can duplicate the vector <code>[exp(0), ..., exp(9)]</code> and stack the copies into a matrix by taking an outer product with a vector of ones, then divide the <code>w</code> matrix by the new <code>decay</code> matrix.</p>
<pre><code>n=10
w = np.array([np.random.uniform(-1, 1, n) for i in range(n)])
decay = np.outer(np.ones((n, 1)), np.exp(np.arange(n)))
result = w/decay
</code></pre>
<p>You could also use <code>np.tile</code> for creating a matrix out of several copies of a vector. It accomplishes the same thing as the outer product trick. </p>
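<p>For illustration (a quick sketch of mine, not from the original answer), here is the <code>np.tile</code> variant; it builds the same decay matrix as the outer product:</p>

```python
import numpy as np

n = 10
w = np.random.uniform(-1, 1, size=(n, n))

# Repeat the row [exp(0), ..., exp(n-1)] n times, one copy per row of w
decay = np.tile(np.exp(np.arange(n)), (n, 1))
result = w / decay

# Same matrix as the outer-product construction
assert np.array_equal(decay, np.outer(np.ones((n, 1)), np.exp(np.arange(n))))
```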
| 1 | 2016-08-08T20:28:02Z | [
"python",
"numpy"
] |
Numpy Uniform Distribution With Decay | 38,837,860 | <p>I'm trying to construct a matrix of uniform distributions decaying to 0 at the same rate in each row. The distributions should be between -1 and 1. What I'm looking at is to construct something that resembles:</p>
<pre><code>[[0.454/exp(0) -0.032/exp(1) 0.641/exp(2)...]
[-0.234/exp(0) 0.921/exp(1) 0.049/exp(2)...]
...
[0.910/exp(0) 0.003/exp(1) -0.908/exp(2)...]]
</code></pre>
<p>I can build a matrix of uniform distributions using:</p>
<pre><code>w = np.array([np.random.uniform(-1, 1, 10) for i in range(10)])
</code></pre>
<p>and can achieve the desired result using a <code>for</code> loop with:</p>
<pre><code>for k in range(len(w)):
    for l in range(len(w[0])):
        w[k][l] = w[k][l]/np.exp(l)
</code></pre>
<p>but wanted to know if there was a better way of accomplishing this. </p>
| 2 | 2016-08-08T20:21:33Z | 38,838,074 | <p>You can use numpy's broadcasting feature to do this:</p>
<pre><code>w = np.random.uniform(-1, 1, size=(10, 10))
weights = np.exp(np.arange(10))
w /= weights
</code></pre>
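<p>A quick check (my addition) of why this works: the 1-D <code>weights</code> lines up with the trailing axis of <code>w</code>, so each <em>column</em> <code>j</code> gets divided by <code>exp(j)</code>:</p>

```python
import numpy as np

w = np.random.uniform(-1, 1, size=(10, 10))
orig = w.copy()
weights = np.exp(np.arange(10))
w /= weights  # (10, 10) / (10,) broadcasts along the last axis

# Column j of the result is the original column j divided by exp(j)
assert np.allclose(w[:, 3], orig[:, 3] / np.exp(3))
```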
| 6 | 2016-08-08T20:35:10Z | [
"python",
"numpy"
] |
truncated incorrect value | 38,837,910 | <p>I have a Python script that displays a range of dates. In the code below I have passed the dates to the select operation by casting them and using the STR_TO_DATE function. I want to know how a range of values with a start and end date can be passed in the query below. What I want to achieve is to give a range of dates and have the script find those dates in the MySQL table and display them. The date column in MySQL is of varchar type, so I need to convert varchar to date and then use the BETWEEN operator to get the range of dates. Following is the code snippet:</p>
<pre><code>import MySQLdb
import os,sys
import datetime
path="C:/Python27/"
conn = MySQLdb.connect (host = "localhost",user = "root", passwd = "CIMIS",db = "cimis")
c = conn.cursor()
message = """select stationId,Date,hour,airTemperature from cimishourly where Date between CAST((SELECT STR_TO_DATE('5/16/2011 ', '%c/%e/%Y')) AS DATE) and CAST((SELECT STR_TO_DATE('5/18/2011 ', '%c/%e/%Y')) AS DATE)"""
c.execute(message,)
result=c.fetchall()
for row in result:
    print(row)
conn.commit()
c.close()
</code></pre>
<p>The error message is: <code>truncated incorrect date value '6/8/1982'</code></p>
| -2 | 2016-08-08T20:24:23Z | 38,838,424 | <p>First off, <strong>Please include your error in your question</strong>. This makes it easier for people to help you. </p>
<p>I'm guessing from the title that you're trying to do something like: </p>
<pre><code>s = "string"
l = ["list", "list2"]
print(l + s)
</code></pre>
<p>That is incorrect. You can't add a variable of type string and a variable of type list together.</p>
<p>If you're trying to add your strings and list all into one string, use the built-in Python function <code>str()</code> to convert a list to a string, and then use <code>.join</code> to join the strings.</p>
<pre><code>s = "string"
l = ["list", "list2"]
print(str(''.join(l)) + s)
#output: listlist2string
</code></pre>
<p>If you're trying to convert your strings into a list, use the built-in Python function <code>list()</code>, which converts the string to a list:</p>
<pre><code>s = "string"
l = ["list", "list2"]
print(l + list(s))
#output: ['list', 'list2', 's', 't', 'r', 'i', 'n', 'g']
</code></pre>
<p>Or, if you're trying to append a string to every string in your list, use a list comprehension:</p>
<pre><code>s = "string"
l = ["list", "list2"]
ls = [str(''.join(i)) + s for i in l]
print(ls)
#output :['liststring', 'list2string']
</code></pre>
<p>For more information on converting types in Python, I recommend reading: <a href="http://www.pitt.edu/~naraehan/python2/data_types_conversion.html" rel="nofollow">http://www.pitt.edu/~naraehan/python2/data_types_conversion.html</a>.</p>
<p><strong>EDIT</strong>:
After updating your title and posting the error message, I believe that your problem stems from these lines: <code>(SELECT STR_TO_DATE('5/18/2011 ', '%c/%e/%Y')</code>, which should be: <code>STR_TO_DATE('2011-5-18', '%Y-%m-%d')</code>.</p>
| 0 | 2016-08-08T20:58:41Z | [
"python",
"mysql",
"python-2.7",
"mysql-python"
] |
truncated incorrect value | 38,837,910 | <p>I have a Python script that displays a range of dates. In the code below I have passed the dates to the select operation by casting them and using the STR_TO_DATE function. I want to know how a range of values with a start and end date can be passed in the query below. What I want to achieve is to give a range of dates and have the script find those dates in the MySQL table and display them. The date column in MySQL is of varchar type, so I need to convert varchar to date and then use the BETWEEN operator to get the range of dates. Following is the code snippet:</p>
<pre><code>import MySQLdb
import os,sys
import datetime
path="C:/Python27/"
conn = MySQLdb.connect (host = "localhost",user = "root", passwd = "CIMIS",db = "cimis")
c = conn.cursor()
message = """select stationId,Date,hour,airTemperature from cimishourly where Date between CAST((SELECT STR_TO_DATE('5/16/2011 ', '%c/%e/%Y')) AS DATE) and CAST((SELECT STR_TO_DATE('5/18/2011 ', '%c/%e/%Y')) AS DATE)"""
c.execute(message,)
result=c.fetchall()
for row in result:
    print(row)
conn.commit()
c.close()
</code></pre>
<p>The error message is: <code>truncated incorrect date value '6/8/1982'</code></p>
 | -2 | 2016-08-08T20:24:23Z | 38,859,741 | <p>The correct way to convert the date from varchar to the date datatype and then perform the select operation is as shown:</p>
<pre><code>message = """SELECT stationId, datecol, airTemperature FROM cimishourly
             WHERE STR_TO_DATE(datecol, '%m/%d/%Y')
             BETWEEN STR_TO_DATE('8/10/2015', '%m/%d/%Y')
             AND STR_TO_DATE('8/12/2015', '%m/%d/%Y') AND stationId in (2)"""
</code></pre>
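<p>As a follow-up sketch (my addition, not part of the answer): with MySQLdb it is safer to pass the two dates as query parameters instead of pasting them into the SQL. Because the driver uses <code>%s</code> placeholders, every literal <code>%</code> in the STR_TO_DATE format strings then has to be doubled. The table and column names are the ones from the question:</p>

```python
# Hypothetical use with the question's cursor `c`:
#   c.execute(query, ('8/10/2015', '8/12/2015'))
query = """SELECT stationId, datecol, airTemperature
           FROM cimishourly
           WHERE STR_TO_DATE(datecol, '%%m/%%d/%%Y')
                 BETWEEN STR_TO_DATE(%s, '%%m/%%d/%%Y')
                     AND STR_TO_DATE(%s, '%%m/%%d/%%Y')
             AND stationId IN (2)"""

# The driver fills the two %s slots; '%%' reaches MySQL as a literal '%'
print(query.count("%s"))  # -> 2
```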
| 0 | 2016-08-09T20:21:19Z | [
"python",
"mysql",
"python-2.7",
"mysql-python"
] |
psycopg for python 3? Unable to find vcvarsall.bat error | 38,837,920 | <p>I'm trying to install psycopg on my computer; I have Python3.5 and PostgreSQL9.5.3 installed. I get the error <code>Unable to find vcvarsall.bat error</code> when typing <code>python setup.py build</code> on cmd. Having read some answers, it appeared to me, that the 3rd python's version is what might have caused the problem. What way should I approach it?</p>
 | 0 | 2016-08-08T20:24:55Z | 38,862,648 | <p><a href="http://www.stickpeople.com/projects/python/win-psycopg/" rel="nofollow">This</a> helped. Downloaded the build for the Python 3.5 release.</p>
| 0 | 2016-08-10T00:58:36Z | [
"python",
"postgresql",
"psycopg2",
"psycopg"
] |
Scrapy shell returning blank array with steam website? | 38,837,921 | <p>I've used scrapy before with some success on craigslist, but now that I'm trying to scrape steam for user names arbitrarily, I keep getting a blank array in the scrapy shell.</p>
<p>The user name element (which is xempy for example) is contained in:</p>
<pre><code><a class="searchPersonaName" href="https://steamcommunity.com/id/zxZEmpy">xempy</a>
</code></pre>
<p>The command I'm using to scrape the actual user names from the URL above is: </p>
<pre><code>response.select('//*[@id="search_results"]/div[3]/div[3]/a/text()').extract()
</code></pre>
<p>The URL I'm attempting to scrape is: </p>
<pre><code>https://steamcommunity.com/search/users/#filter=users&text=xempy
</code></pre>
<p>I used Chrome to copy the xpath of the element I'm interested in instead of typing it by hand to make sure it was free of typos, but even typing it all out by hand, with the absolute paths, I still get a blank array when I'm attempting to get a simple string with the user name "xempy".</p>
<p>What am I doing wrong? I've used the same process to successfully scrape craigslist, but on steam's website it doesn't seem to be working and I can't find any actual examples of steam scrapy scripts.</p>
 | 1 | 2016-08-08T20:24:59Z | 38,838,395 | <p>If you look at the actual source in your browser (right click and choose "view source"), you will see no sign of the results; the data is dynamically added through an ajax request to <em><a href="https://steamcommunity.com/search/SearchCommunityAjax" rel="nofollow">https://steamcommunity.com/search/SearchCommunityAjax</a></em>.</p>
<p>You will have to mimic the ajax request, I have used requests but the steps will be the same for scrapy:</p>
<pre><code>import requests

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36",
    "X-Requested-With": "XMLHttpRequest"}
params = {"text": "xempy", "filter": "users", "sessionid": "", "steamid_user": "false", "page": "1"}
ajax_url = "https://steamcommunity.com/search/SearchCommunityAjax"

with requests.Session() as s:
    s.headers.update(headers)
    r = s.get("https://steamcommunity.com/search/users/#filter=users&text=xempy")
    # need to update the session id which we get from the previous get's headers
    params["sessionid"] = next(
        c.split("=", 1)[1] for c in r.headers["set-cookie"].split(";") if c.startswith("sessionid"))
    # need to update the session headers
    s.headers.update(r.headers)
    # and also the cookies from the previous request
    s.cookies.update(r.cookies)
    result = s.get(ajax_url, params=params).json()
</code></pre>
<p>If we run the code you can see we get some json returned:</p>
<pre><code>In [5]: with requests.Session() as s:
   ...:     s.headers.update(headers)
   ...:     r = s.get("https://steamcommunity.com/search/users/#filter=users&text=xempy")
   ...:     params["sessionid"] = next(
   ...:         c.split("=", 1)[1] for c in r.headers["set-cookie"].split(";") if c.startswith("sessionid"))
   ...:     s.headers.update(r.headers)
   ...:     s.cookies.update(r.cookies)
   ...:     result = s.get(ajax_url, params=params).json()
   ...:     print(result)
   ...:
{u'html': u'\t\t<div style="float: right; padding-bottom: 2px">\r\n\t\t\t\t\t\tShowing 1 - 11 of 11\t\t\t</div>\r\n\t<div style="clear: both"></div>\r\n\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="16183171" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/zxZEmpy"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/b9/b9c886a08cf17c4f1f31ea19148d8b3bbd748762_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/zxZEmpy">xempy</a><br />\r\n\t\t\t\t\t\t\t\t&nbsp;\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">zxZEmpy</span></div>\r\n\t\t\t\t\t\t\t\t\t\t<div>\r\n\t\t\t\t\tAlso known as: <span style="color: whitesmoke">trill</span>, <span style="color: whitesmoke">[TGIF] Mario Batali</span>, <span style="color: whitesmoke">[TGIF] Mario \xdfatali</span>, <span style="color: whitesmoke">Mario \xdfatali</span>, <span style="color: whitesmoke">[TGIF\'</span>, <span style="color: whitesmoke">[TGIF] Mario \u03b2atali</span>\t\t\t\t</div>\r\n\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="280326130" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/xempyjecar"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/89/8928b324ba9c12859283e8be3f11f19d9232033c_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/xempyjecar">Xempy -A-</a><br />\r\n\t\t\t\t\tIgor<br />\t\t\tSerbia&nbsp;<img style="margin-bottom:-2px" 
src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/rs.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">xempyjecar</span></div>\r\n\t\t\t\t\t\t\t\t\t\t<div>\r\n\t\t\t\t\tAlso known as: <span style="color: whitesmoke">Xempy -A- NEW SEASON HYPEE</span>, <span style="color: whitesmoke">Brekija</span>, <span style="color: whitesmoke">FAIRPLAY ORGANISATION</span>, <span style="color: whitesmoke">Xempy | csgoshit.com</span>, <span style="color: whitesmoke">Xempy | csgorage.com</span>, <span style="color: whitesmoke">\u2500\u2500\u2500\u2554\u2550\u2550\u2550\u2557</span>, <span style="color: whitesmoke">XempyTheCupcake</span>\t\t\t\t</div>\r\n\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="315139919" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/filipppp"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/ca/caa5747851b5255a2d76699d855bf20e709af3d1_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/filipppp">Xempy -A-</a><br />\r\n\t\t\t\t\tIgor<br />\t\t\tSerbia&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/rs.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">filipppp</span></div>\r\n\t\t\t\t\t\t\t\t\t\t<div>\r\n\t\t\t\t\tAlso known as: <span style="color: whitesmoke">Extreeemeeee</span>, <span style="color: 
whitesmoke">Ratatatatatata</span>\t\t\t\t</div>\r\n\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="258386073" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/lenyagoglov"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/71/71ee8d0519c74cea0352836b188c747b36224f8f_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/lenyagoglov">Xempys</a><br />\r\n\t\t\t\t\tTed<br />\t\t\tLuxembourg&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/lu.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">lenyagoglov</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="257927191" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/rostislavtseychuk85"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/86/8641de85a283f0d23d1cbeb35ee0c0d5ca87a83b_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/rostislavtseychuk85">Xempys</a><br />\r\n\t\t\t\t\tGabriel<br />\t\t\tLebanon&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/lb.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: 
whitesmoke">rostislavtseychuk85</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="252811169" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/mochulskayaa"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/76/76c10b0744403468aaf8090f56ca8ddd61338925_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/mochulskayaa">Xempys</a><br />\r\n\t\t\t\t\tRichard<br />\t\t\tGuatemala&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/gt.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">mochulskayaa</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="260028611" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/katerukhina"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/24/24241e97a6caf3bd932a01ea22afc6b3d758f1a1_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/katerukhina">Xempys</a><br />\r\n\t\t\t\t\tChristian<br />\t\t\tFiji&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/fj.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: 
whitesmoke">katerukhina</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="292454844" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/purdenkos"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/5c/5c7f9d1b71a68ab8599ae0fe8f2c4e0445348eaa_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/purdenkos">Xempys</a><br />\r\n\t\t\t\t\tPatrik<br />\t\t\tCote D\'ivoire (Ivory Coast)&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/ci.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">purdenkos</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="56000172" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/v2incent"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/ac/ac45a256e0a14712efff255db0105fedd80a4f0e_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/v2incent">Ext4ze ` ^0| \'Xempy^0\'</a><br />\r\n\t\t\t\t\tv2incent<br />\t\t\t&nbsp;\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">v2incent</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div 
class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="297670812" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/xempy"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/62/62ea583f7f838562c73cb70e3993e27acd583aef_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/xempy">xempsanity `\xb4</a><br />\r\n\t\t\t\t\tIgor<br />\t\t\tSerbia&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/rs.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom URL: steamcommunity.com/id/<span style="color: whitesmoke">xempy</span></div>\r\n\t\t\t\t\t\t\t\t\t\t<div>\r\n\t\t\t\t\tAlso known as: <span style="color: whitesmoke">XEMPYKiNGOFNOTHiNG</span>, <span style="color: whitesmoke">X3MPY</span>, <span style="color: whitesmoke">X3MPY * brother\'s on acc</span>\t\t\t\t</div>\r\n\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t\t\t\t\t<div class="search_row">\r\n\t<div class="search_result_friend">\r\n\t\t\t</div>\r\n\t<div class="mediumHolder_default" data-miniprofile="121633219" style="float:left;"><div class="avatarMedium"><a href="https://steamcommunity.com/id/Empyrk"><img src="https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/6b/6b87d7a04bf211a2665b828436ad34e549f2b193_medium.jpg"></a></div></div>\r\n\t<div class="searchPersonaInfo">\r\n\t\t<a class="searchPersonaName" href="https://steamcommunity.com/id/Empyrk">Empyrk</a><br />\r\n\t\t\t\t\tMatteo<br />\t\t\tToscana, Italy&nbsp;<img style="margin-bottom:-2px" src="https://steamcommunity-a.akamaihd.net/public/images/countryflags/it.gif" border="0" />\t\t\t</div>\r\n\t<div style="clear:left"></div>\r\n\r\n\t\t\t<div class="search_match_info">\r\n\t\t\t\t\t\t\t\t\t\t<div>Custom 
URL: steamcommunity.com/id/<span style="color: whitesmoke">Empyrk</span></div>\r\n\t\t\t\t\t\t\t\t</div>\r\n\t\t</div>\r\n\t\t\t\t<div style="clear: both"></div>\r\n\t\t<div style="float: right; padding-bottom: 2px">\r\n\t\t\t\t\t\tShowing 1 - 11 of 11\t\t\t</div>\r\n\t<div style="clear: both"></div>\r\n\r\n\r\n', u'search_filter': u'users', u'search_text': u'xempy', u'success': 1, u'search_page': 1}
</code></pre>
<p>You just need to access <code>results["html"]</code> to get the source.</p>
| 0 | 2016-08-08T20:56:51Z | [
"python",
"scrapy"
] |
Sympy's latex printer, folding short fractions | 38,837,942 | <p>I'm feeding <code>sympy.latex</code> a string involving fractions and I'd like to get its latex representation with fractions "folded", i.e., typeset as 3/2 rather than as \frac{3}{2}. I'm setting the <code>fold_short_fractions</code> keyword argument to <code>True</code>, but the results I'm getting are inconsistent:</p>
<pre><code>>>> from sympy.parsing.sympy_parser import parse_expr
>>> from sympy import latex
>>>
>>> print(latex(parse_expr("3*x**2/y"))) # OK
\frac{3 x^{2}}{y}
>>> print(latex(parse_expr("3*x**2/y"), fold_short_frac=True)) # OK
3 x^{2} / y
>>>
>>> print(latex(parse_expr("3/2"))) # OK
\frac{3}{2}
>>> print(latex(parse_expr("3/2"), fold_short_frac=True)) # No!!
\frac{3}{2}
</code></pre>
<p>As you can see, it refuses to fold the numerical fraction 3/2, although it seems to be okay with symbolic expressions. Does anyone have an explanation/fix/workaround for this? Thanks!</p>
| 2 | 2016-08-08T20:26:45Z | 38,858,641 | <p>This is a bug in the SymPy latex printer. It isn't looking at the <code>fold_short_frac</code> option for rational numbers. I've got a fix <a href="https://github.com/sympy/sympy/pull/11497" rel="nofollow">here</a>. </p>
<p>In the meantime, you can use this as a workaround:</p>
<pre><code>from sympy.printing.latex import LatexPrinter

class MyLatexPrinter(LatexPrinter):
    def _print_Rational(self, expr):
        if expr.q != 1:
            sign = ""
            p = expr.p
            if expr.p < 0:
                sign = "- "
                p = -p
            if self._settings['fold_short_frac']:
                return r"%s%d / %d" % (sign, p, expr.q)
            return r"%s\frac{%d}{%d}" % (sign, p, expr.q)
        else:
            return self._print(expr.p)

def latex(expr, **settings):
    return MyLatexPrinter(settings).doprint(expr)
</code></pre>
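<p>The branch the override adds is plain string formatting, so it can be sanity-checked in isolation; here is a standalone sketch of the same logic for a rational <code>p/q</code> (my addition, no sympy needed):</p>

```python
def fold_rational(p, q, fold_short_frac=True):
    """Mirrors the _print_Rational override above for a rational p/q."""
    if q == 1:
        return str(p)
    sign = ""
    if p < 0:
        sign = "- "
        p = -p
    if fold_short_frac:
        return r"%s%d / %d" % (sign, p, q)
    return r"%s\frac{%d}{%d}" % (sign, p, q)

print(fold_rational(3, 2))          # -> 3 / 2
print(fold_rational(3, 2, False))   # -> \frac{3}{2}
print(fold_rational(-3, 2, False))  # -> - \frac{3}{2}
```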
| 0 | 2016-08-09T19:08:51Z | [
"python",
"latex",
"sympy"
] |
Python multiprocessing numpy.linalg.pinv cause segfault | 38,837,948 | <p>I wrote a function using the multiprocessing package from Python and tried to boost the speed of my code. </p>
<pre><code>from arch.univariate import ARX, GARCH
from multiprocessing import Process
import multiprocessing
import time

def batch_learning(X, lag_array=None):
    """
    X is a time series array
    lag_array contains all possible lag numbers
    """
    # init a queue used for triggering different processes
    queue = multiprocessing.JoinableQueue()
    data = multiprocessing.Queue()

    # a worker called ARX_fit triggered by queue.get()
    def ARX_fit(queue):
        while True:
            q = queue.get()
            q.volatility = GARCH()
            print "Starting to fit lags %s" % str(q.lags.size/2)
            try:
                q_res = q.fit(update_freq=500)
            except:
                print "Error:...."
            print "finished lags %s" % str(q.lags.size/2)
            queue.task_done()

    # init four processes
    for i in range(4):
        process_i = Process(target=ARX_fit, name="Process_%s" % str(i), args=(queue,))
        process_i.start()

    # put ARX model objects into queue continuously
    for num in lag_array:
        queue.put(ARX(X, lags=num))

    # sync processes here
    queue.join()
    return
</code></pre>
<p>After calling the function:</p>
<pre><code>batch_learning(a, lag_array=range(1,10))
</code></pre>
<p>However, it got stuck in the middle and I got the printout messages below:</p>
<pre><code>Starting to fit lags 1
Starting to fit lags 3
Starting to fit lags 2
Starting to fit lags 4
finished lags 1
finished lags 2
Starting to fit lags 5
finished lags 3
Starting to fit lags 6
Starting to fit lags 7
finished lags 4
Starting to fit lags 8
finished lags 6
finished lags 5
Starting to fit lags 9
</code></pre>
<p>It runs forever without any further printouts on my Mac OS El Capitan. Then, using PyCharm's debug mode and thanks to Tim Peters' suggestions, I found out that the processes actually quit unexpectedly. Under debug mode, I can pinpoint that it is actually the <code>svd</code> function inside <code>numpy.linalg.pinv()</code>, used by the arch library, that causes this problem. Then my question is: why? It works with a single-process for loop but it cannot work with 2 processes or more. I don't know how to fix this problem. Is it a numpy bug? Can anyone help me a bit here?</p>
| 1 | 2016-08-08T20:27:38Z | 38,840,759 | <p>There's not much to go on here, and the code indentation is wrong so it's hard to guess what you're really doing. To the extent I <em>can</em> guess, what you're seeing could happen if the OS killed a process in a way that didn't raise a Python exception.</p>
<p>One thing to try: first make a list, <code>ps</code>, of your four <code>process_i</code> objects. Then before <code>queue.join()</code> add:</p>
<pre><code>while ps:
    new_ps = []
    for p in ps:
        if p.is_alive():
            new_ps.append(p)
        else:
            print("*********", p.name, "exited with", p.exitcode)
    ps = new_ps
    time.sleep(1)
</code></pre>
<p>So about once per second, this just runs through the list of worker processes to see whether any have (unexpectedly!) died. If one (or more) has, it displays the process name (which you supplied already) and the process exit code (as given by your OS). If that triggers, it would be a big clue.</p>
<p>If none die, then we have to wonder whether</p>
<pre><code>q_res=q.fit(update_freq=500)
</code></pre>
<p>"simply" takes a very long time for some <code>q</code> states.</p>
| 0 | 2016-08-09T01:39:13Z | [
"python",
"numpy",
"python-multiprocessing",
"statsmodels"
] |
Python multiprocessing numpy.linalg.pinv cause segfault | 38,837,948 | <p>I wrote a function using the multiprocessing package from Python and tried to boost the speed of my code. </p>
<pre><code>from arch.univariate import ARX, GARCH
from multiprocessing import Process
import multiprocessing
import time

def batch_learning(X, lag_array=None):
    """
    X is a time series array
    lag_array contains all possible lag numbers
    """
    # init a queue used for triggering different processes
    queue = multiprocessing.JoinableQueue()
    data = multiprocessing.Queue()

    # a worker called ARX_fit triggered by queue.get()
    def ARX_fit(queue):
        while True:
            q = queue.get()
            q.volatility = GARCH()
            print "Starting to fit lags %s" % str(q.lags.size/2)
            try:
                q_res = q.fit(update_freq=500)
            except:
                print "Error:...."
            print "finished lags %s" % str(q.lags.size/2)
            queue.task_done()

    # init four processes
    for i in range(4):
        process_i = Process(target=ARX_fit, name="Process_%s" % str(i), args=(queue,))
        process_i.start()

    # put ARX model objects into queue continuously
    for num in lag_array:
        queue.put(ARX(X, lags=num))

    # sync processes here
    queue.join()
    return
</code></pre>
<p>After calling the function:</p>
<pre><code>batch_learning(a, lag_array=range(1,10))
</code></pre>
<p>However, it got stuck in the middle and I got the printout messages below:</p>
<pre><code>Starting to fit lags 1
Starting to fit lags 3
Starting to fit lags 2
Starting to fit lags 4
finished lags 1
finished lags 2
Starting to fit lags 5
finished lags 3
Starting to fit lags 6
Starting to fit lags 7
finished lags 4
Starting to fit lags 8
finished lags 6
finished lags 5
Starting to fit lags 9
</code></pre>
<p>It runs forever without any further printouts on my Mac OS El Capitan. Then, using PyCharm's debug mode and thanks to Tim Peters' suggestions, I found out that the processes actually quit unexpectedly. Under debug mode, I can pinpoint that it is actually the <code>svd</code> function inside <code>numpy.linalg.pinv()</code>, used by the arch library, that causes this problem. Then my question is: why? It works with a single-process for loop but it cannot work with 2 processes or more. I don't know how to fix this problem. Is it a numpy bug? Can anyone help me a bit here?</p>
 | 1 | 2016-08-08T20:27:38Z | 38,950,680 | <p>I have to answer this question myself and provide my solution. I have already solved this issue, thanks to the help from @Tim Peters and @aganders. </p>
<p>Multiprocessing usually hangs when you use the numpy/scipy libraries on Mac OS because of the Accelerate framework used in Apple's OS, which takes the place of the OpenBLAS that numpy is otherwise built on. Simply put, in order to solve this kind of problem, you have to do as follows:</p>
<ol>
<li>uninstall numpy and scipy (scipy needs to be matched with proper version of numpy)</li>
<li><p>follow the procedure in this <a href="http://stackoverflow.com/questions/11443302/compiling-numpy-with-openblas-integration">link</a> to rebuild numpy with OpenBLAS.</p></li>
<li><p>reinstall scipy and test your code to see if it works.</p></li>
</ol>
<p>A heads-up for testing your multiprocessing code on Mac OS: when you run your code, it is better to set an environment variable:</p>
<pre><code>OPENBLAS_NUM_THREADS=1 python import_test.py
</code></pre>
<p>The reason for doing this is that OpenBLAS by default creates 2 threads per core, in which case there are 8 threads running (2 for each core) even though you only set up 4 processes. This creates a bit of overhead from thread switching. I tested the OPENBLAS_NUM_THREADS=1 config to limit each process to 1 thread per core, and it is indeed faster than the default settings.</p>
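<p>A related sketch (my addition, assuming an OpenBLAS-backed numpy build): the variable can also be set from inside the script, as long as it happens before numpy is imported anywhere in the process:</p>

```python
import os

# OpenBLAS reads this variable at import time, so it must be set
# before numpy is imported anywhere in the process.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np

# numpy calls (and multiprocessing workers forked after this point)
# now run single-threaded BLAS kernels.
a = np.random.rand(50, 50)
u, s, vt = np.linalg.svd(a)
print(s.shape)  # (50,)
```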
| 1 | 2016-08-15T07:07:42Z | [
"python",
"numpy",
"python-multiprocessing",
"statsmodels"
] |
How can I print two boxplots on the same axis in python? | 38,837,958 | <p>I'm using matplotlib to graph two boxplots. I am able to get them printed as subplots on the same figure, but I am having trouble getting them side by side on the same set of axes. </p>
| -2 | 2016-08-08T20:28:22Z | 38,838,131 | <p>Here's the code which finally worked: </p>
<pre><code>fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].boxplot(mads_dp52)
axes[1].boxplot(mads_dp53)
plt.show()
</code></pre>
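<p>If the goal is literally one set of axes rather than two subplots, a sketch (with made-up stand-ins for <code>mads_dp52</code>/<code>mads_dp53</code>) passes both datasets to a single <code>boxplot</code> call, which draws them side by side:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

data_a = [1, 2, 3, 4, 5]    # stand-in for mads_dp52
data_b = [2, 4, 6, 8, 10]   # stand-in for mads_dp53

fig, ax = plt.subplots()
# One call with a list of datasets -> side-by-side boxes on the same axes
result = ax.boxplot([data_a, data_b])
fig.savefig("boxplots.png")
print(len(result["boxes"]))  # 2
```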
| 0 | 2016-08-08T20:39:33Z | [
"python",
"matplotlib",
"boxplot",
"axes"
] |
Slicing strings in a column in pandas | 38,838,015 | <p>I have a CSV that has a column of URLs and I'm trying to slice out some unnecessary characters leading and trailing characters. I'm using the following syntax:</p>
<pre><code>df.['column_name'].str[3:10]
</code></pre>
<p>Unfortunately I get <code>TypeError: 'method' object is not subscriptable</code>.</p>
| 3 | 2016-08-08T20:31:21Z | 38,838,066 | <p>try this</p>
<pre><code>df['new_column'] = df['text_column'].apply(lambda x: x[3:10])
</code></pre>
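<p>As a side note (my addition): the vectorized <code>.str</code> accessor from the question should also work once the stray dot before the bracket is removed — a sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({"url": ["xx-example.com/a", "xx-example.org/b"]})

# Note: df['url'].str[3:10], not df.['url'].str[3:10] -- the extra dot
# in the question is a syntax error. .str slices element-wise.
sliced = df["url"].str[3:10]
print(sliced.tolist())  # ['example', 'example']
```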
| 0 | 2016-08-08T20:34:45Z | [
"python",
"csv",
"pandas"
] |
Pandas equivalent rbind operation | 38,838,059 | <p>Basically, I am looping through a bunch of CSV files and in the end would like to <code>append</code> each dataframe into one. Actually, all I need is an <code>rbind</code> type function. So, I did some search and followed the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">guide</a>. However, I still could not get the ideal solution.</p>
<p>Sample code is attached below. For instance, the shape of data1 is always 47 by 42, but the shape of <code>data_out_final</code> becomes (47, 42), (47, 84), and (47, 126) after the first three files. Ideally, it should be (141, 42). In addition, I checked the index of <code>data1</code>, which is <code>RangeIndex(start=0, stop=47, step=1)</code>. Appreciate any suggestions!</p>
<p>My <code>pandas</code> version is <code>0.18.1</code></p>
<h2>code</h2>
<pre><code>appended_data = []
for csv_each in csv_pool:
data1 = pd.read_csv(csv_each, header=0)
# do something here
appended_data.append(data2)
data_out_final = pd.concat(appended_data, axis=1)
</code></pre>
<p>If using <code>data_out_final = pd.concat(appended_data, axis=1)</code>, shape of data_out_final becomes (141, 94)</p>
<h2>PS</h2>
<p>I kind of figured it out. Actually, you have to standardize the column names before <code>pd.concat</code>.</p>
| 2 | 2016-08-08T20:34:31Z | 38,838,160 | <p>Try: <a href="http://pandas.pydata.org/pandas-docs/stable/10min.html?highlight=concat#concat" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/10min.html?highlight=concat#concat</a></p>
<p>"pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations."</p>
| 0 | 2016-08-08T20:42:00Z | [
"python",
"pandas"
] |
Pandas equivalent rbind operation | 38,838,059 | <p>Basically, I am looping through a bunch of CSV files and in the end would like to <code>append</code> each dataframe into one. Actually, all I need is an <code>rbind</code> type function. So, I did some search and followed the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">guide</a>. However, I still could not get the ideal solution.</p>
<p>Sample code is attached below. For instance, the shape of data1 is always 47 by 42, but the shape of <code>data_out_final</code> becomes (47, 42), (47, 84), and (47, 126) after the first three files. Ideally, it should be (141, 42). In addition, I checked the index of <code>data1</code>, which is <code>RangeIndex(start=0, stop=47, step=1)</code>. Appreciate any suggestions!</p>
<p>My <code>pandas</code> version is <code>0.18.1</code></p>
<h2>code</h2>
<pre><code>appended_data = []
for csv_each in csv_pool:
data1 = pd.read_csv(csv_each, header=0)
# do something here
appended_data.append(data2)
data_out_final = pd.concat(appended_data, axis=1)
</code></pre>
<p>If using <code>data_out_final = pd.concat(appended_data, axis=1)</code>, shape of data_out_final becomes (141, 94)</p>
<h2>PS</h2>
<p>I kind of figured it out. Actually, you have to standardize the column names before <code>pd.concat</code>.</p>
| 2 | 2016-08-08T20:34:31Z | 38,838,233 | <pre><code>>>> df1
a b
0 -1.417866 -0.828749
1 0.212349 0.791048
2 -0.451170 0.628584
3 0.612671 -0.995330
4 0.078460 -0.322976
5 1.244803 1.576373
6 1.169629 -1.135926
7 -0.652443 0.506388
8 0.549604 -0.691054
9 -0.512829 -0.959398
>>> df2
a b
0 -0.652161 0.940932
1 2.495067 0.004833
2 -2.187792 1.692402
3 1.900738 0.372425
4 0.245976 1.894527
5 0.627297 0.029331
6 -0.828628 -1.600014
7 -0.991835 -0.061202
8 0.543389 0.703457
9 -0.755059 1.239968
>>> pd.concat([df1, df2])
a b
0 -1.417866 -0.828749
1 0.212349 0.791048
2 -0.451170 0.628584
3 0.612671 -0.995330
4 0.078460 -0.322976
5 1.244803 1.576373
6 1.169629 -1.135926
7 -0.652443 0.506388
8 0.549604 -0.691054
9 -0.512829 -0.959398
0 -0.652161 0.940932
1 2.495067 0.004833
2 -2.187792 1.692402
3 1.900738 0.372425
4 0.245976 1.894527
5 0.627297 0.029331
6 -0.828628 -1.600014
7 -0.991835 -0.061202
8 0.543389 0.703457
9 -0.755059 1.239968
</code></pre>
<p>Unless I'm misinterpreting the question, this is what you need.</p>
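<p>If the duplicated 0–9 index in the output above is unwanted, <code>ignore_index=True</code> renumbers the rows — a small sketch:</p>

```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame(np.random.randn(10, 2), columns=["a", "b"])
df2 = pd.DataFrame(np.random.randn(10, 2), columns=["a", "b"])

# Without ignore_index the result keeps 0..9 twice, as shown above;
# with it, the index is renumbered 0..19.
out = pd.concat([df1, df2], ignore_index=True)
print(list(out.index) == list(range(20)))  # True
```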
| 4 | 2016-08-08T20:46:52Z | [
"python",
"pandas"
] |
Consecutive numbers list where each number repeats | 38,838,147 | <p>How can I create a list of consecutive numbers where each number repeats N times, for example:</p>
<pre><code>list = [0,0,0,1,1,1,2,2,2,3,3,3,4,4,4,5,5,5]
</code></pre>
| 2 | 2016-08-08T20:40:54Z | 38,838,231 | <p>My first instinct is to get some functional help from the funcy package. If <code>N</code> is the number of times to repeat each value, and <code>M</code> is the maximum value to repeat, then you can do</p>
<pre><code>import funcy as fp
fp.flatten(fp.repeat(i, N) for i in range(M + 1))
</code></pre>
<p>This will return a generator, so to get the array you can just call <code>list()</code> around it</p>
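<p>For reference, a stdlib-only equivalent of the same idea (assuming <code>N</code> repeats and maximum value <code>M</code>, as above) uses <code>itertools</code>:</p>

```python
from itertools import chain, repeat

N, M = 3, 5
# chain.from_iterable flattens the per-value repeat iterators
result = list(chain.from_iterable(repeat(i, N) for i in range(M + 1)))
print(result)  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]
```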
| 1 | 2016-08-08T20:46:41Z | [
"python",
"list",
"python-2.7"
] |
Consecutive numbers list where each number repeats | 38,838,147 | <p>How can I create a list of consecutive numbers where each number repeats N times, for example:</p>
<pre><code>list = [0,0,0,1,1,1,2,2,2,3,3,3,4,4,4,5,5,5]
</code></pre>
| 2 | 2016-08-08T20:40:54Z | 38,838,234 | <p><code>sum([[i]*n for i in range(0,x)], [])</code></p>
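<p>A possible caveat (my addition, not the answerer's): <code>sum(..., [])</code> re-copies the accumulated list at every step, so it is quadratic in the output size; a nested comprehension gives the same result in linear time:</p>

```python
x, n = 6, 3  # values 0..x-1, each repeated n times

via_sum = sum([[i] * n for i in range(0, x)], [])
via_comp = [i for i in range(x) for _ in range(n)]  # linear-time equivalent

print(via_sum == via_comp)  # True
print(via_comp[:6])  # [0, 0, 0, 1, 1, 1]
```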
| 0 | 2016-08-08T20:47:02Z | [
"python",
"list",
"python-2.7"
] |
Consecutive numbers list where each number repeats | 38,838,147 | <p>How can I create a list of consecutive numbers where each number repeats N times, for example:</p>
<pre><code>list = [0,0,0,1,1,1,2,2,2,3,3,3,4,4,4,5,5,5]
</code></pre>
| 2 | 2016-08-08T20:40:54Z | 38,838,292 | <p>Another idea, without any need for other packages or sums:</p>
<pre><code>[x//N for x in range((M+1)*N)]
</code></pre>
<p>Where <code>N</code> is your number of repeats and <code>M</code> is the maximum value to repeat. E.g.</p>
<pre><code>N = 3
M = 5
[x//N for x in range((M+1)*N)]
</code></pre>
<p>yields</p>
<pre><code>[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]
</code></pre>
| 4 | 2016-08-08T20:51:20Z | [
"python",
"list",
"python-2.7"
] |
Consecutive numbers list where each number repeats | 38,838,147 | <p>How can I create a list of consecutive numbers where each number repeats N times, for example:</p>
<pre><code>list = [0,0,0,1,1,1,2,2,2,3,3,3,4,4,4,5,5,5]
</code></pre>
| 2 | 2016-08-08T20:40:54Z | 38,840,594 | <p>The following piece of code is the simplest version I can think of.
It's a bit dirty and long, but it gets the job done.</p>
<p>In my opinion, it's easier to comprehend.</p>
<pre><code>def mklist(s, n):
l = [] # An empty list that will contain the list of elements
# and their duplicates.
for i in range(s): # We iterate from 0 to s
for j in range(n): # and appending each element (i) to l n times.
l.append(i)
return l # Finally we return the list.
</code></pre>
<p>If you run the code…:</p>
<pre><code>print mklist(10, 2)
</code></pre>
<p>[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]</p>
<pre><code>print mklist(5, 3)
</code></pre>
<p>[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]</p>
<p>Another, slightly neater version with a list comprehension.
But uhmm… We have to sort it, though.</p>
<pre><code>def mklist2(s, n):
return sorted([l for l in range(s) * n])
</code></pre>
<p>Running that version will give the following results.</p>
<pre><code>print mklist2(5, 3)
</code></pre>
<p>Raw : [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4]</p>
<p>Sorted: [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]</p>
| 0 | 2016-08-09T01:14:17Z | [
"python",
"list",
"python-2.7"
] |
Understanding NumPy's interpretation of string data types | 38,838,218 | <p>Lets say I have a bytes object that represents some data, and I want to convert it to a <code>numpy</code> array via <code>np.genfromtxt</code>. I am having trouble understanding how I should handle strings in this case. Let's start with the following:</p>
<pre><code>from io import BytesIO
import numpy as np
text = b'test, 5, 1.2'
types = ['str', 'i4', 'f4']
np.genfromtxt(BytesIO(text), delimiter = ',', dtype = types)
</code></pre>
<p>This does not work. It raises </p>
<p><code>TypeError: data type not understood</code></p>
<p>If I change <code>types</code> so that <code>types = ['c', 'i4', 'f4']</code></p>
<p>Then the <code>numpy</code> call returns </p>
<pre><code>array((b't', 5, 1.2000000476837158),
dtype=[('f0', 'S1'), ('f1', '<i4'), ('f2', '<f4')])
</code></pre>
<p>So it works, but I am only getting the first letter of the string, obviously. </p>
<p>If I use <code>c8</code> or <code>c16</code> for the dtype of <code>test</code>, then I get </p>
<pre><code>array(((nan+0j), 5, 1.2000000476837158),
dtype=[('f0', '<c8'), ('f1', '<i4'), ('f2', '<f4')])
</code></pre>
<p>which is garbage. I've also tried using <code>a</code>, and <code>U</code>, no success. How in the world do I get <code>genfromtxt</code> to recognize and save elements as a string?</p>
<hr>
<p><strong>Edit:</strong> I assume part of the issue is that this is a <code>bytes</code> object. However, if I instead use a normal string as <code>text</code>, and use <code>StringIO</code> rather than <code>BytesIO</code>, then <code>genfromtxt</code> raises an error:</p>
<p><code>TypeError: Can't convert 'bytes' object to str implicitly</code></p>
| 0 | 2016-08-08T20:45:55Z | 38,839,263 | <p>In my Python3 session:</p>
<pre><code>In [568]: text = b'test, 5, 1.2'
# I don't need BytesIO since genfromtxt works with a list of
# byte strings, as from text.splitlines()
In [570]: np.genfromtxt([text], delimiter=',', dtype=None)
Out[570]:
array((b'test', 5, 1.2),
dtype=[('f0', 'S4'), ('f1', '<i4'), ('f2', '<f8')])
</code></pre>
<p>If left to its own devices <code>genfromtxt</code> deduces that the 1st field should be <code>S4</code> - 4 bytestring characters.</p>
<p>I could also be explicit with the types:</p>
<pre><code>In [571]: types=['S4', 'i4', 'f4']
In [572]: np.genfromtxt([text],delimiter=',',dtype=types)
Out[572]:
array((b'test', 5, 1.2000000476837158),
dtype=[('f0', 'S4'), ('f1', '<i4'), ('f2', '<f4')])
In [573]: types=['S10', 'i', 'f']
In [574]: np.genfromtxt([text],delimiter=',',dtype=types)
Out[574]:
array((b'test', 5, 1.2000000476837158),
dtype=[('f0', 'S10'), ('f1', '<i4'), ('f2', '<f4')])
In [575]: types=['U10', 'int', 'float']
In [576]: np.genfromtxt([text],delimiter=',',dtype=types)
Out[576]:
array(('test', 5, 1.2),
dtype=[('f0', '<U10'), ('f1', '<i4'), ('f2', '<f8')])
</code></pre>
<p>I can specify either <code>S</code> or <code>U</code> (unicode), but I also have to specify the length. I don't think there's a way with <code>genfromtxt</code> to let it deduce the length - except for the <code>None</code> type. I'd have to dig into the code to see how it deduces the string length.</p>
<p>I could also create this array with <code>np.array</code> (by making it a tuple of substrings, and giving a correct dtype:</p>
<pre><code>In [599]: np.array(tuple(text.split(b',')), dtype=[('f0', 'S4'), ('f1', '<i4'), ('f2', '<f8')])
Out[599]:
array((b'test', 5, 1.2),
dtype=[('f0', 'S4'), ('f1', '<i4'), ('f2', '<f8')])
</code></pre>
| 0 | 2016-08-08T22:09:37Z | [
"python",
"arrays",
"string",
"numpy"
] |
UnicodeEncodeError: 'ascii' codec can't encode character u'\u0446' in position 32: ordinal not in range(128) | 38,838,274 | <p>I'm trying to debug some code a previous intern wrote and I'm having some difficulties resolving this issue with answers from other unicode error posts. </p>
<p>The error is found in the last line of this function:</p>
<pre><code> def dumpTextPacket(self, header, bugLog, offset, outfile):
bugLog.seek(offset)
data = bugLog.read( header[1] ) # header[1] = size of the packet
outString = data.decode("utf-8","ignore")
if(header[3] == 8): # Removing ugly characters from packet that has bTag = 8.
outString = outString[1:]
outString = outString.strip('\0') # Remove all 'null' characters from text
outString = "{:.3f}".format(header[5]) + ' ms: ' + outString # Append the timestamp to the beginning of the line
outfile.write(outString)
</code></pre>
<p>I don't have much experience with unicode,so I would really appreciate any pointers with this issue! </p>
<hr>
<p>edit: Using Python 2.7, and below is the entire file. Another thing I should mention is that the code does work when parsing some files, but I think it errors on other files when the timestamp gets too big?</p>
<p>In the main.py file, we call the method LogInterpreter.execute(), and the traceback gives the error shown in the title on the line "outfile.write(outString)", the last line in the dumpTextPacket method which is called in the execute method: </p>
<pre><code>import sys
import os
from struct import unpack
class LogInterpreter:
def __init__( self ):
self.RTCUpdated = False
self.RTCOffset = 0.0
self.LastTimeStamp = 0.0
self.TimerRolloverCount = 0
self.ThisTimeStamp = 0.0
self.m_RTCSeconds = 0.0
self.m_StartTimeInSec = 0.0
def GetRTCOffset( self ):
return self.m_RTCSeconds - self.m_StartTimeInSec
def convertTimeStamp(self,uTime,LogRev):
TicsPerSecond = 24000000.0
self.ThisTimeStamp = uTime
self.RTCOffset = self.GetRTCOffset()
if int( LogRev ) == 2:
if self.RTCUpdated:
self.LastTimeStamp = 0.0
if self.LastTimeStamp > self.ThisTimeStamp:
self.TimerRolloverCount += 1
self.LastTimeStamp = self.ThisTimeStamp
ULnumber = (-1 & 0xffffffff)
return ((ULnumber/TicsPerSecond)*self.TimerRolloverCount + (uTime/TicsPerSecond) + self.RTCOffset) * 1000.0
##########################################################################
# Information about the header for the current packet we are looking at. #
##########################################################################
def grabHeader(self, bugLog, offset):
'''
s_PktHdrRev1
/*0*/ u16 StartOfPacketMarker; # uShort 2
/*2*/ u16 SizeOfPacket; # uShort 2
/*4*/ u08 LogRev; # uChar 1
/*5*/ u08 bTag; # uChar 1
/*6*/ u16 iSeq; # uShort 2
/*8*/ u32 uTime; # uLong 4
'''
headerSize = 12 # Header size in bytes
bType = 'HHBBHL' # codes for our byte type
bugLog.seek(offset)
data = bugLog.read(headerSize)
if len(data) < headerSize:
print('Error in the format of BBLog file')
sys.exit()
headerArray = unpack(bType, data)
convertedTime = self.convertTimeStamp(headerArray[5],headerArray[2])
headerArray = headerArray[:5] + (convertedTime,)
return headerArray
################################################################
# bTag = 8 or bTag = 16 --> just write the data to LogMsgs.txt #
################################################################
def dumpTextPacket(self, header, bugLog, offset, outfile):
bugLog.seek(offset)
data = bugLog.read( header[1] ) # header[1] = size of the packet
outString = data.decode("utf-8","ignore")
if(header[3] == 8): # Removing ugly characters from packet that has bTag = 8.
outString = outString[1:]
outString = outString.strip('\0') # Remove all 'null' characters from text
outString = "{:.3f}".format(header[5]) + ' ms: ' + outString # Append the timestamp to the beginning of the line
outfile.write(outString)
def execute(self):
path = './Logs/'
for fn in os.listdir(path):
fileName = fn
print fn
if (fileName.endswith(".bin")):
# if(fileName.split('.')[1] == "bin"):
print("Parsing "+fileName)
outfile = open(path+fileName.split('.')[0]+".txt", "w") # Open a file for output
fileSize = os.path.getsize(path+fileName)
packetOffset = 0
with open(path+fileName, 'rb') as bugLog:
while(packetOffset < fileSize):
currHeader = self.grabHeader(bugLog, packetOffset) # Grab the header for the current packet
packetOffset = packetOffset + 12 # Increment the pointer by 12 bytes (size of a header packet)
if currHeader[3]==8 or currHeader[3]==16: # Look at the bTag and see if it is a text packet
self.dumpTextPacket(currHeader, bugLog, packetOffset, outfile)
packetOffset = packetOffset + currHeader[1] # Move on to the next packet by incrementing the pointer by the size of the current packet
outfile.close()
print(fileName+" completed.")
</code></pre>
| -1 | 2016-08-08T20:49:59Z | 38,858,501 | <p>When you add together two strings with one of them being Unicode, Python 2 will coerce the result to Unicode too.</p>
<pre><code>>>> 'a' + u'b'
u'ab'
</code></pre>
<p>Since you used <code>data.decode</code>, <code>outString</code> will be Unicode.</p>
<p>When you write to a binary file, you must have a byte string. Python 2 will attempt to convert your Unicode string to a byte string, but it uses the most generic codec it has: <code>'ascii'</code>. This codec fails on many Unicode characters, specifically those with a codepoint above <code>'\u007f'</code>. You can encode it yourself with a more capable codec to get around this problem:</p>
<pre><code>outfile.write(outString.encode('utf-8'))
</code></pre>
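<p>An alternative sketch (my addition, not part of the original answer): open the output file with an explicit encoding via the <code>io</code> module, so every <code>write</code> of a unicode string is encoded for you:</p>

```python
# -*- coding: utf-8 -*-
import io

out_string = u"12.345 ms: \u0446 some text"  # contains the offending character

# io.open exists in both Python 2 and 3; with an encoding given,
# unicode strings are encoded transparently on write.
with io.open("LogMsgs.txt", "w", encoding="utf-8") as outfile:
    outfile.write(out_string)

with io.open("LogMsgs.txt", "r", encoding="utf-8") as f:
    print(f.read() == out_string)  # True
```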
<p>Everything changes in Python 3, which won't let you mix byte strings and Unicode strings nor attempt any automatic conversions.</p>
| 0 | 2016-08-09T19:00:57Z | [
"python",
"unicode"
] |
Python methods and "switches" | 38,838,298 | <p>I'm trying to use a dictionary as a switch statement as in </p>
<pre><code>def add(first, second):
return first + second
def sub():
...
return something
operations = {
"Add": add,
"Sub": sub
}
ret_val = operations[operation]
</code></pre>
<p>Now how can I pass the arguments to add and sub and get their response? Currently, I don't pass anything to the methods, and I'm testing the ret_val. What I see is the operation getting called, but the return doesn't come back; what I get is the pointer to the operation method.</p>
<p>Thanks!</p>
| 0 | 2016-08-08T20:51:38Z | 38,838,347 | <p>To call a function, put the arguments in parentheses after it, just like when you call a function directly by its name.</p>
<pre><code>ret_val = operations[operation](1, 2)
</code></pre>
<p>Note that for this to work properly, all the functions in <code>operations</code> need to take the same number of arguments. So it won't work if <code>add()</code> takes two arguments but <code>sub()</code> takes none, as you've shown.</p>
<p>If the functions can take different numbers of arguments, you could put the arguments in a list and use the unpacking operator.</p>
<pre><code>args = (1, 2)
ret_val = operations[operation](*args)
</code></pre>
<p>Then you just have to ensure that <code>args</code> contains the appropriate number of arguments for the particular operation.</p>
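<p>Putting the pieces together, a minimal runnable sketch of the whole dispatch (using the question's <code>add</code>/<code>sub</code> names, with a two-argument <code>sub</code> assumed):</p>

```python
def add(first, second):
    return first + second

def sub(first, second):
    return first - second

# Dict-based dispatch: values are the function objects themselves
operations = {"Add": add, "Sub": sub}

operation = "Add"
args = (1, 2)
ret_val = operations[operation](*args)  # look up, then call with parentheses
print(ret_val)  # 3
print(operations["Sub"](*args))  # -1
```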
| 1 | 2016-08-08T20:54:02Z | [
"python"
] |
Python methods and "switches" | 38,838,298 | <p>I'm trying to use a dictionary as a switch statement as in </p>
<pre><code>def add(first, second):
return first + second
def sub():
...
return something
operations = {
"Add": add,
"Sub": sub
}
ret_val = operations[operation]
</code></pre>
<p>Now how can I pass the arguments to add and sub and get their response? Currently, I don't pass anything to the methods, and I'm testing the ret_val. What I see is the operation getting called, but the return doesn't come back; what I get is the pointer to the operation method.</p>
<p>Thanks!</p>
| 0 | 2016-08-08T20:51:38Z | 38,838,370 | <p>The dictionary contains callable functions. To call them, just add the arguments in parentheses.</p>
<pre><code>operations[operation](arg1, ...)
</code></pre>
| 0 | 2016-08-08T20:55:30Z | [
"python"
] |
Python methods and "switches" | 38,838,298 | <p>I'm trying to use a dictionary as a switch statement as in </p>
<pre><code>def add(first, second):
return first + second
def sub():
...
return something
operations = {
"Add": add,
"Sub": sub
}
ret_val = operations[operation]
</code></pre>
<p>Now how can I pass the arguments to add and sub and get their response? Currently, I don't pass anything to the methods, and I'm testing the ret_val. What I see is the operation getting called, but the return doesn't come back; what I get is the pointer to the operation method.</p>
<p>Thanks!</p>
| 0 | 2016-08-08T20:51:38Z | 38,838,658 | <p>So, the main thing you're missing is executing the function call. The code as provided grabs the function reference properly, but you need parens to execute it.</p>
<p>Once you execute it, you need some way to pass arguments. Because the number of args varies by function, the best way is to pass both a variable number of args list (<code>*args</code>) and a dictionary of keyword arguments (<code>**kwargs</code>).</p>
<p>I've filled in your pseudocode slightly so these run:</p>
<pre><code>def add(first, second):
return first + second
def sub(first, second):
return first - second
operations = {
"Add": add,
"Sub": sub,
}
</code></pre>
<p>Call add with args:</p>
<pre><code>op = 'Add'
op_args = [1, 2]
op_kwargs = {}
ret_val = operations[op](*op_args, **op_kwargs)
print(ret_val)
</code></pre>
<blockquote>
<p>3</p>
</blockquote>
<p>Call add with kwargs:</p>
<pre><code>op = 'Add'
op_args = []
op_kwargs = {'first': 3, 'second': 4}
ret_val = operations[op](*op_args, **op_kwargs)
print(ret_val)
</code></pre>
<blockquote>
<p>7</p>
</blockquote>
<p>If you try to pass both args and kwargs in a conflicting way, it will fail:</p>
<pre><code># WON'T WORK
op = 'Add'
op_args = [1, 2]
op_kwargs = {'first': 3, 'second': 4}
ret_val = operations[op](*op_args, **op_kwargs)
print(ret_val)
</code></pre>
<blockquote>
<p>TypeError: add() got multiple values for argument 'first'</p>
</blockquote>
<p>But you can use both in a complementary way:</p>
<pre><code>op = 'Add'
op_args = [1]
op_kwargs = {'second': 4}
ret_val = operations[op](*op_args, **op_kwargs)
print(ret_val)
</code></pre>
<blockquote>
<p>5</p>
</blockquote>
<p>One technical note is that the naming <code>args</code> and <code>kwargs</code> is purely convention in Python. You could call them whatever you want. An answer that discusses the two more is available here: <a href="http://stackoverflow.com/a/36908/149428">http://stackoverflow.com/a/36908/149428</a>.</p>
<p>Note that I did not do any input validation, etc for the purpose of a simple, focused answer. If you're getting input from a user, that's an important step to remember.</p>
| 0 | 2016-08-08T21:16:04Z | [
"python"
] |
Get DOT graphviz of nested list elements which can contain duplicated nodes | 38,838,323 | <p>I have a set of nested lists which can be seperated in three groups:</p>
<ul>
<li><p>A (subelements are disjunction, line colors GREEN), e.g.</p>
<pre><code>listA = {
'a1': ['b1', 'a2'],
'a2': ['c1', 'c2']
}
</code></pre></li>
<li><p>B (subelements are <strong>ordered</strong> conjunction, line colors ORANGE), e.g.</p>
<pre><code>listB = {
'b1': ['c4', 'c5', 'c7'],
'b2': ['c3', 'b1']
}
</code></pre></li>
<li>C (final elements - leaf nodes)</li>
</ul>
<p>The function <code>combinations</code> iterates through the nested lists and returns all possible combinations (which at the end contain only elements of type C, i.e. leaf nodes). The function <code>write_nodes</code> helps to write the nodes with colored lines. The call <code>write_nodes('task', inputlist)</code> creates the init node:</p>
<pre><code>def write_nodes(node, subnotes):
for k in subnotes:
if node in type_a:
text_file.write("{} -> {} [color=\"green\"]\n".format(node, k))
elif (node in type_b) or (node is 'task'):
text_file.write("{} -> {} [color=\"orange\"]\n".format(node, k))
write_nodes('task', inputlist)
def combinations(actions):
if len(actions)==1:
action= actions[0]
if action not in type_c:
root = action
try:
actions= type_a[action]
write_nodes(root, actions)
except KeyError:
try:
actions= type_b[action]
write_nodes(root, actions)
except KeyError:
#action is of type C, the only possible combination is itself
yield actions
else:
#action is of type B (conjunction), combine all the actions
for combination in combinations(actions):
yield combination
else:
#action is of type A (disjunction), generate combinations for each action
for action in actions:
for combination in combinations([action]):
yield combination
else:
#generate combinations for the first action in the list
#and combine them with the combinations for the rest of the list
action= actions[0]
for combination in combinations(actions[1:]):
for combo in combinations([action]):
yield combo + combination
</code></pre>
<p><strong>Example input (ordered conjunction):</strong></p>
<p><code>['a1', 'b2', 'c6']</code></p>
<p><strong>Example result:</strong> </p>
<pre><code>['c4', 'c5', 'c7', 'c3', 'c4', 'c5', 'c7', 'c6']
['c1', 'c3', 'c4', 'c5', 'c7', 'c6']
['c2', 'c3', 'c4', 'c5', 'c7', 'c6']
</code></pre>
<p><strong>The result I got from my code:</strong>
<a href="http://i.stack.imgur.com/uZXLO.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/uZXLO.jpg" alt="enter image description here"></a>
corresponding dot file: </p>
<pre><code>task -> a1 [color="orange"]
task -> b2 [color="orange"]
task -> c6 [color="orange"]
b2 -> c3 [color="orange"]
b2 -> b1 [color="orange"]
b1 -> c4 [color="orange"]
b1 -> c5 [color="orange"]
b1 -> c7 [color="orange"]
a1 -> b1 [color="green"]
a1 -> a2 [color="green"]
b1 -> c4 [color="orange"]
b1 -> c5 [color="orange"]
b1 -> c7 [color="orange"]
a2 -> c1 [color="green"]
a2 -> c2 [color="green"]
</code></pre>
<p><strong>The result I want (colors are not the priority):</strong>
<a href="http://i.stack.imgur.com/5HOyj.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/5HOyj.jpg" alt="enter image description here"></a>
<br></p>
<p><strong>Questions:</strong> </p>
<p>How can I handle the fact that there are some duplicated nodes, so as to get a result like the one mentioned?</p>
<p>Thanks for any help.</p>
| 0 | 2016-08-08T20:53:03Z | 38,854,894 | <h1>Duplicate nodes problem</h1>
<p>To avoid the duplicate nodes problem you should name each node with a unique name and use a <code>label</code> for the displayed name. For example change:</p>
<pre><code>b2 -> b1 [color="orange"]
b1 -> c4 [color="orange"]
b1 -> c5 [color="orange"]
b1 -> c7 [color="orange"]
a1 -> b1 [color="green"]
b1 -> c4 [color="orange"]
b1 -> c5 [color="orange"]
b1 -> c7 [color="orange"]
</code></pre>
<p>to:</p>
<pre><code>b21 [label="b2"]
b11 [label="b1"]
b21 -> b11 [color="orange"]
c41 [label="c4"]
b11 -> c41 [color="orange"]
c51 [label="c5"]
b11 -> c51 [color="orange"]
c71 [label="c7"]
b11 -> c71 [color="orange"]
a11 [label="a2"]
b12 [label="b1"]
a11 -> b12 [color="green"]
c42 [label="c4"]
b12 -> c42 [color="orange"]
c52 [label="c5"]
b12 -> c52 [color="orange"]
c72 [label="c7"]
b12 -> c72 [color="orange"]
</code></pre>
<p>Which produces:
<a href="http://i.stack.imgur.com/O57R2.png" rel="nofollow"><img src="http://i.stack.imgur.com/O57R2.png" alt="res"></a></p>
<h1>Avoid <code>try/except</code> flow</h1>
<p>It's better to use <code>if/else</code> and not <code>try/except</code> for normal program flow. For example instead of:</p>
<pre><code> try:
actions= type_a[action]
write_nodes(root, actions)
except KeyError:
#do whatever you do
</code></pre>
<p>use:</p>
<pre><code> actions = type_a.get(action, None) # if the key doesn't exist, assign None
if actions:
write_nodes(root, actions)
else:
#do whatever you do
</code></pre>
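<p>One caveat worth noting (my addition): <code>if actions:</code> treats an empty list the same as a missing key. If an empty child list is legal data, compare against <code>None</code> explicitly:</p>

```python
type_a = {"a1": ["b1", "a2"], "a3": []}  # a3 legitimately has no children

action = "a3"
actions = type_a.get(action)  # None only when the key is truly absent

if actions is not None:
    print("key exists, children:", actions)  # this branch is taken for a3
else:
    print("key missing")

# A plain truthiness test would wrongly treat a3 as missing:
print(bool(type_a.get("a3")))  # False, even though the key exists
```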
<h1>Graphviz python package</h1>
<p>You can use the <a href="http://matthiaseisen.com/articles/graphviz/" rel="nofollow"><code>graphviz</code> python package</a> instead of writing the dot file yourself. </p>
| 1 | 2016-08-09T15:31:41Z | [
"python",
"graphviz",
"decision-tree",
"dot",
"pygraphviz"
] |
How to override event (dropEvent) of a widget in dynamic UI of PyQt? | 38,838,354 | <p><strong>What am I doing/ What I have so far :</strong></p>
<p>I am using Qt-designer to create PyQt .ui file which I am loading in my python script using QUiLoader which gives me access to the widgets/components as :</p>
<pre><code>self.ui.tree_widget_of_items ( which is a QTreeWidget created in Qt-designer)
</code></pre>
<p>I am able to read and write the values of widgets and I am able to use the signal on TreeWidget like this :</p>
<pre><code>self.ui.tree_widget_of_items.itemSelectionChanged.connect(self.myFunction)
</code></pre>
<p><strong>What I am trying to do ?</strong> </p>
<ul>
<li>I want to override dropEvent of treeWidget in my python script</li>
</ul>
<p><strong>What I have tried but didn't work :</strong></p>
<pre><code>self.ui.tree_widget_of_items.dropEvent = self.drop_action
def drop_action(self,e):
print "drop action"
</code></pre>
<p>I have tried assigning my own function to the dropEvent of the TreeWidget, but it doesn't get triggered when I drop an item on the TreeWidget.</p>
<p>I have also tried :</p>
<pre><code>self.ui.tree_widget_of_items.dragEnterEvent = self.drop_action
</code></pre>
<p>I made sure that Drag and Drop is enabled on TreeWidget.</p>
| 0 | 2016-08-08T20:54:38Z | 38,843,487 | <p>You have to install an event filter on the tree and implement the <code>QObject.eventFilter</code> method in your class.</p>
<p>Example (install event filter):</p>
<pre><code>self.ui.tree_widget_of_items.installEventFilter(self)
</code></pre>
<p>And implement eventFilter:</p>
<pre><code>def eventFilter(self, o, e):
if e.type() == QEvent.DragEnter: #remember to accept the enter event
e.acceptProposedAction()
return True
if e.type() == QEvent.Drop:
# handle the event
# ...
return True
return False #remember to return false for other event types
</code></pre>
<p>See <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qobject.html#installEventFilter" rel="nofollow">QObject.installEventFilter</a> and <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qobject.html#eventFilter" rel="nofollow">QObject.eventFilter</a></p>
| 1 | 2016-08-09T06:22:51Z | [
"python",
"qt",
"pyqt",
"pyqt4"
] |
How to override event (dropEvent) of a widget in dynamic UI of PyQt? | 38,838,354 | <p><strong>What am I doing/ What I have so far :</strong></p>
<p>I am using Qt-designer to create PyQt .ui file which I am loading in my python script using QUiLoader which gives me access to the widgets/components as :</p>
<pre><code>self.ui.tree_widget_of_items ( which is a QTreeWidget created in Qt-designer)
</code></pre>
<p>I am able to read and write the values of widgets and I am able to use the signal on TreeWidget like this :</p>
<pre><code>self.ui.tree_widget_of_items.itemSelectionChanged.connect(self.myFunction)
</code></pre>
<p><strong>What I am trying to do ?</strong> </p>
<ul>
<li>I want to override dropEvent of treeWidget in my python script</li>
</ul>
<p><strong>What I have tried but didn't work :</strong></p>
<pre><code>self.ui.tree_widget_of_items.dropEvent = self.drop_action
def drop_action(self,e):
print "drop action"
</code></pre>
<p>I have tried assigning my own function to the dropEvent of the TreeWidget, but it doesn't get triggered when I drop an item on the TreeWidget.</p>
<p>I have also tried :</p>
<pre><code>self.ui.tree_widget_of_items.dragEnterEvent = self.drop_action
</code></pre>
<p>I made sure that Drag and Drop is enabled on TreeWidget.</p>
| 0 | 2016-08-08T20:54:38Z | 38,860,594 | <p>To implement events in a dynamic UI (meaning a UI created in Qt Designer and loaded from a .ui file in a Python tool), you need the following things:</p>
<ol>
<li><p>MainWindow class of your tool should inherit QtGui.QMainWindow</p>
<pre><code>class main_window(QtGui.QMainWindow):
</code></pre></li>
<li><p>It should call <code>super().__init__()</code></p>
<pre><code>class main_window(QtGui.QMainWindow):
def __init__(self, parent=None):
super(main_window, self).__init__(parent)
loader = QUiLoader()
file = QtCore.QFile(os.path.join(SCRIPT_DIRECTORY, 'mainwindow.ui'))
file.open(QtCore.QFile.ReadOnly)
if parent:
self.ui = loader.load(file, parentWidget=parent)
else:
self.ui = loader.load(file)
file.close()
</code></pre></li>
<li><p>Install EventFilter on component :</p>
<pre><code>self.ui.tree_widget_of_items.installEventFilter(self)
# not --> self.ui.tree_widget_of_items.installEventFilter(self.ui)
</code></pre></li>
<li><p>Define eventFilter() :</p>
<pre><code>def eventFilter(self, o, e):
if (o.objectName() == "tree_widget_of_items"):
if e.type() == QtCore.QEvent.Type.Enter:
self.drop_action(e)
</code></pre></li>
</ol>
| 0 | 2016-08-09T21:17:38Z | [
"python",
"qt",
"pyqt",
"pyqt4"
] |
How to print a dictionary separated by commas | 38,838,407 | <p>Let's say we have a dictionary</p>
<pre><code>dict = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
</code></pre>
<p>How would I go about printing the code so it looks like:</p>
<blockquote>
<p>Dollar 12, Half-Coin 4, Quarter 3, Dime 7</p>
</blockquote>
| 0 | 2016-08-08T20:57:44Z | 38,838,516 | <pre><code>dict = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
out=""
for i in dict:
out += i+" "+str(dict[i])+", "
print out[:-2]
</code></pre>
<p>result:</p>
<pre><code>Half-Coin 4, Quarter 3, Dollar 12, Dime 7
</code></pre>
| -1 | 2016-08-08T21:05:34Z | [
"python",
"dictionary",
"formatting"
] |
How to print a dictionary separated by commas | 38,838,407 | <p>Let's say we have a dictionary</p>
<pre><code>dict = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
</code></pre>
<p>How would I go about printing the code so it looks like:</p>
<blockquote>
<p>Dollar 12, Half-Coin 4, Quarter 3, Dime 7</p>
</blockquote>
| 0 | 2016-08-08T20:57:44Z | 38,838,532 | <p>Use <code>','.join()</code>, passing in a generator of strings.</p>
<pre><code>d = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
print ', '.join('{} {}'.format(k,v) for k,v in d.items())
</code></pre>
<p>Result:</p>
<pre><code>Half-Coin 4, Quarter 3, Dollar 12, Dime 7
</code></pre>
<p>If you want the results to be in a predictable order, you'll need to sort the items.</p>
<pre><code>order=('Dollar', 'Half-Coin', 'Quarter', 'Dime')
d = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
print ', '.join('{} {}'.format(k,d[k]) for k in sorted(d, key=order.index))
</code></pre>
<p>Result:</p>
<pre><code>Dollar 12, Half-Coin 4, Quarter 3, Dime 7
</code></pre>
<p>Ps. Don't name your variables with names of builtin types. Your name eclipses the builtin name, so subsequent code won't be able to call <code>dict()</code>, for example.</p>
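<p>To see the shadowing in action, here is a small standalone sketch (safe to run as a script; the names are just for the demo):</p>

```python
dict = {'Dollar': 12, 'Half-Coin': 4}   # shadows the builtin dict

try:
    dict(Quarter=3)                     # tries to *call* our dictionary
except TypeError as err:
    print(err)                          # 'dict' object is not callable

del dict                                # drop the shadowing name...
counts = dict(Quarter=3)                # ...and the builtin is visible again
print(counts)                           # {'Quarter': 3}
```

<p>Renaming the variable (e.g. to <code>d</code>) avoids the problem entirely.</p>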
| 1 | 2016-08-08T21:06:51Z | [
"python",
"dictionary",
"formatting"
] |
How to print a dictionary separated by commas | 38,838,407 | <p>Let's say we have a dictionary</p>
<pre><code>dict = { 'Dollar': 12, 'Half-Coin': 4, 'Quarter': 3, 'Dime': 7 }
</code></pre>
<p>How would I go about printing the code so it looks like:</p>
<blockquote>
<p>Dollar 12, Half-Coin 4, Quarter 3, Dime 7</p>
</blockquote>
| 0 | 2016-08-08T20:57:44Z | 38,838,586 | <pre><code>", ".join([x +" "+str(dict[x]) for x in dict.keys()])
</code></pre>
| -1 | 2016-08-08T21:11:04Z | [
"python",
"dictionary",
"formatting"
] |
BeautifulSoup difference between findAll and findChildren | 38,838,460 | <p>What is the difference? Don't they do the same thing - find the inside tags with given properties?</p>
| 2 | 2016-08-08T21:01:29Z | 38,839,074 | <p><em>findChildren</em> returns a <em>resultSet</em> just as <em>find_all</em> does, there is no difference in using either method as <a href="https://github.com/JinnLynn/beautifulsoup/blob/master/bs4/element.py#L1178" rel="nofollow"><em>findChildren</em></a> is actually <em>find_all</em>, if you look at the link to the source you can see:</p>
<pre><code> findChildren = find_all # BS2
</code></pre>
<p>It's there for backwards compatibility as is <code>findAll = find_all # BS3</code></p>
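<p>The aliasing can be illustrated without bs4 at all; the <code>Tag</code> class below is a stand-in for the real one, showing only how such legacy names are wired up:</p>

```python
class Tag(object):
    """Stand-in for bs4's Tag, illustrating backwards-compatibility aliases."""
    def find_all(self, name=None, **attrs):
        return ['<{}>'.format(name)]

    findAll = find_all       # BS3 spelling, same function object
    findChildren = find_all  # BS2 spelling, same function object

t = Tag()
print(t.find_all('a') == t.findAll('a') == t.findChildren('a'))  # True
print(Tag.findChildren is Tag.find_all)                          # True
```

<p>All three names resolve to the very same function object, so calling any of them does identical work.</p>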
| 1 | 2016-08-08T21:52:29Z | [
"python",
"beautifulsoup"
] |
PySpark distributing module imports | 38,838,465 | <p>Over the past few days I've been working on trying to understand how Spark executors know how to use a module by a given name upon import. I am working on AWS EMR. Situation:
I initialize pyspark on EMR by typing</p>
<p>pyspark --master yarn</p>
<p>Then, in pyspark,</p>
<pre><code>import numpy as np ## notice the naming
def myfun(x):
n = np.random.rand(1)
return x*n
rdd = sc.parallelize([1,2,3,4], 2)
rdd.map(lambda x: myfun(x)).collect() ## works!
</code></pre>
<p>My understanding is that when I import <code>numpy as np</code>, the master node is the only node importing and identifying <code>numpy</code> through <code>np</code>. However, with an EMR cluster (2 worker nodes), if I run the map function on the rdd, the driver program sends the function to the worker nodes to execute the function for each item in the list (for each partition), and a successful result is returned.</p>
<p>My question is this: How do the workers know that numpy should be imported as np? Each worker has numpy already installed, but I've not explicitly defined a way for each node to import the module <code>as np</code>.</p>
<p>Please refer to the following post by Cloudera for further details on dependencies:
<a href="http://blog.cloudera.com/blog/2015/09/how-to-prepare-your-apache-hadoop-cluster-for-pyspark-jobs/" rel="nofollow">http://blog.cloudera.com/blog/2015/09/how-to-prepare-your-apache-hadoop-cluster-for-pyspark-jobs/</a></p>
<p>Under <strong>Complex Dependency</strong> they have an example (code) where the pandas module is explicitly imported on each node.</p>
<p>One theory that I've heard being thrown around is that the driver program distributes all code passed in the pyspark interactive shell. I am skeptical of this. The example I bring up to counter this idea is, if on the master node I type:</p>
<pre><code>print "hello"
</code></pre>
<p>is every worker node also printing "hello"? I don't think so. But maybe I am wrong on this.</p>
| 4 | 2016-08-08T21:01:57Z | 38,839,181 | <p>When function is serialized there is a <a href="https://github.com/apache/spark/blob/d48935400ca47275f677b527c636976af09332c8/python/pyspark/cloudpickle.py#L222" rel="nofollow">number of objects is being saved</a>:</p>
<ul>
<li><strong>code</strong></li>
<li><strong>globals</strong></li>
<li>defaults</li>
<li><a href="http://stackoverflow.com/q/14413946/1560062">closure</a></li>
<li>dict</li>
</ul>
<p>which can later be used to restore the complete environment required for a given function.</p>
<p>Since <code>np</code> is referenced by the function it can be extracted from its code:</p>
<pre><code>from pyspark.cloudpickle import CloudPickler
CloudPickler.extract_code_globals(myfun.__code__)
## {'np'}
</code></pre>
<p>and binding can be extracted from its <code>globals</code>:</p>
<pre><code>myfun.__globals__['np']
## <module 'numpy' from ...
</code></pre>
<p>So the serialized closure (in a broad sense) captures all the information required to restore the environment. Of course, all modules accessed in the closure have to be importable on every worker machine.</p>
<p>Everything else is just reading and writing machinery.</p>
<p>On a side note, the master node shouldn't execute any Python code. It is responsible for resource allocation, not for running application code.</p>
| 3 | 2016-08-08T22:01:24Z | [
"python",
"apache-spark",
"pyspark"
] |
Python How to extract specified string within [ ] brackets in pandas dataframe and create a new column with boolean values | 38,838,519 | <p>I'm new to programming and would appreciate any of your insights!</p>
<p>I have a data frame like this. </p>
<p>df;</p>
<pre><code> info Price
0 [100:Sailing] $100
1 [150:Boating, 100:Sailing] $200
2 [200:Surfing] $300
</code></pre>
<p>I would like to create new columns with activity names based on information in info column and add 1 in the new column if there is a corresponding name in info column. It is going to look like dataframe below.</p>
<pre><code> Price Sailing Boating Surfing
0 $100 1 0 0
1 $200 1 1 0
2 $300 0 0 1
</code></pre>
<p>I tried a code blow but did not work..(eventhough this approach works in other columns)</p>
<pre><code>df1 = df.info.str.extract(r'(Boating|Sailing|Surfing)',expand=False)
df2 = pd.concat([df,pd.get_dummies(df1).astype(int)],axis=1)
</code></pre>
<p>I have over 10 thousand rows of data like this, so ideally I would like to write code which automatically extracts a specified string (like Surfing) from the info column, creates a new column with the activity name, and returns 1 or 0 as shown above. I thought that maybe the brackets in the data or the data type in the dataframe were causing the problem, but I am not sure how to tackle this...</p>
| 2 | 2016-08-08T21:05:42Z | 38,838,796 | <p>I assumed the format of the values in the info column is like a Python list.</p>
<pre><code>df1 = df['info'].str[1:-1].str.replace(' ', '').str.get_dummies(',')
df1.rename(columns=lambda x: x.rsplit(':')[-1], inplace=True)
df2 = pd.concat([df, df1.astype(int)], axis=1)
df2
Out:
info Price Sailing Boating Surfing
0 [100:Sailing] $100 1 0 0
1 [150:Boating, 100:Sailing] $200 1 1 0
2 [200:Surfing] $300 0 0 1
</code></pre>
| 5 | 2016-08-08T21:28:44Z | [
"python",
"string",
"python-2.7",
"pandas",
"extraction"
] |
MongoEngine returns empty list | 38,838,676 | <p>I have a database named suvaider. It contains two collections, Relation and Reviews. I have filled these two by importing from JSON files. I have created models for these two collections. But while trying to use these models to get data with mongoengine, it returns an empty list. I'm a beginner, and I'm using MongoDB for the first time with Flask. Thanks in advance!!!</p>
<pre><code># This is models.py
from flask import url_for
from suvaiderBackend import db
class Hotels(db.EmbeddedDocument):
property_id = db.StringField(max_length=255,required=True)
name = db.StringField(max_length=255,required=True)
class Relation(db.Document):
parent = db.EmbeddedDocumentField('Hotels')
units = db.ListField(db.EmbeddedDocumentField('Hotels'))
class Reviews(db.Document):
property_id = db.StringField(max_length=255,required=True)
rating = db.IntField(default=0)
review = db.StringField()
sentiment = db.StringField(max_length=255)
review_link = db.StringField()
#This is __init__.py
from flask import Flask
from flask.ext.mongoengine import MongoEngine
app = Flask(__name__)
app.config["MONGODB_SETTINGS"] = {'DB': "suvaider"}
app.config["SECRET_KEY"] = "Keep3H9Secret"
db = MongoEngine(app)
if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
| 0 | 2016-08-08T21:17:37Z | 39,005,201 | <p>According to the <a href="http://docs.mongoengine.org/guide/defining-documents.html#document-collections" rel="nofollow">documentation</a>, MongoEngine by default converts your Document class name to lowercase and uses it as the name of the collection. So in your example it looks for collections named relation and reviews. Since you have an existing database with different collection names (notice the spelling: first letter uppercase), you should set a custom collection name by adding</p>
<pre><code>meta = {'collection': 'collectionName'}
</code></pre>
<p>to your documents.</p>
<pre><code>class Relation(db.Document):
parent = db.EmbeddedDocumentField('Hotels')
units = db.ListField(db.EmbeddedDocumentField('Hotels'))
meta = {'collection': 'Relation'}
class Reviews(db.Document):
property_id = db.StringField(max_length=255,required=True)
rating = db.IntField(default=0)
review = db.StringField()
sentiment = db.StringField(max_length=255)
review_link = db.StringField()
meta = {'collection': 'Reviews'}
</code></pre>
| 0 | 2016-08-17T19:54:39Z | [
"python",
"mongodb",
"mongoengine",
"flask-mongoengine"
] |
Why is my string recognition algorithm not detecting the correct string? | 38,838,714 | <p>Apparently, my string recognition algorithm is not working properly. It is returning the wrong responses based on the user's input. I know it's probably something simple, but I just can't see what it is right now. I've done the research and I haven't found another Python issue here where, when the user input is entered, the wrong response is returned. Are my if statements properly formed? Is there an issue with the string search?</p>
<pre><code>import random
def po_response():
response = {1 : 'This is random response A from po_response()',
2 : 'This is random response B from po_response()',
3 : 'This is random response C from po_response()'}
answer = random.sample(response.items(),1)
print(answer)
main()
def greetings_response():
response = {1 : 'This is response A from greetings_response()',
2 : 'This is response B from greetings_response()',
3 : 'This is response C from greetings_response()'}
answer = random.sample(response.items(),1)
print(answer)
return response
main()
def main():
userRequest = input('Welcome, Please enter your request. \n')
userReq = userRequest.lower()
if 'how are you' or 'how\'s it going' in userReq:
print('first if')
print('The value of the input is ')
print(userReq)
greetings_response()
elif 'ship' + 'po' or 'purchase order' in userReq:
print('elif')
print('The value of the input is ')
print(userReq)
po_response()
else:
print('Please re-enter your request')
main()
</code></pre>
<p>Here is the response I get when I enter 'ship po'</p>
<pre><code> Welcome, Please enter your request.
>>>ship po
first if
The value of the input is
ship po
[(2, 'This is response B from greetings_response()')]
</code></pre>
<p>It should not go to the greetings_response() function, it should go to the po_response() function. Not sure why it's acting this way. </p>
| 0 | 2016-08-08T21:21:42Z | 38,838,902 | <p>Test using <code>or</code> seems wrong. Maybe you meant <code>if ('how are you' in userReq) or ('how\'s it goind' in userReq)</code> ? Have a look to Python operators precedence.</p>
| 0 | 2016-08-08T21:36:58Z | [
"python",
"string",
"algorithm"
] |
Best practice to deploy python app to use a specific version of Python with venv | 38,838,715 | <p>I am done with a project and it has been pushed to git, but the client wants venv. I already got <code>venv</code> to work and created a <code>requirements.txt</code> file.</p>
<p>My question is what is the best practice for a deployment workflow. So far this is what I created as a deploy workflow:</p>
<pre><code>git clone ssh://myawesomerepo
cd myawesomerepo
pip install virtualenv
venv -python=python3.5 env
source env/bin/activate
pip install -r requirements.txt
python run.py
</code></pre>
<p>Is this the correct workflow?</p>
<p><strong>Assume that we don't know what version of Python the client has. My project is written for Python 3.5; if the client has 2.7, will this work?</strong></p>
| 2 | 2016-08-08T21:22:03Z | 38,840,083 | <p>I tend to use the Anaconda package manager rather than venv; one of the nice features is that if you run</p>
<pre><code>conda create -n myenv python=3.5
</code></pre>
<p>it will download and install Python 3.5 even if it's not installed on the system already.</p>
| 1 | 2016-08-08T23:50:11Z | [
"python",
"git",
"python-venv"
] |
Best practice to deploy python app to use a specific version of Python with venv | 38,838,715 | <p>I am done with a project and it has been pushed to git, but the client wants venv. I already got <code>venv</code> to work and created a <code>requirements.txt</code> file.</p>
<p>My question is what is the best practice for a deployment workflow. So far this is what I created as a deploy workflow:</p>
<pre><code>git clone ssh://myawesomerepo
cd myawesomerepo
pip install virtualenv
venv -python=python3.5 env
source env/bin/activate
pip install -r requirements.txt
python run.py
</code></pre>
<p>Is this the correct workflow?</p>
<p><strong>Assume that we don't know what version of Python the client has. My project is written for Python 3.5; if the client has 2.7, will this work?</strong></p>
| 2 | 2016-08-08T21:22:03Z | 39,522,736 | <p>I use the following bash script to install my Python applications:
I check for whichever version of pip (2.x or 3.x) is in PATH:</p>
<pre><code>#!/bin/bash
if command -v pip3 >/dev/null 2>&1;
then
echo "installing virtual env with pip3"
pip3 install virtualenv
else
echo "installing virtual env with pip"
pip install virtualenv
fi
echo "Installing python 3.5 bin"
virtualenv --python=python3.5 env
echo "Activate virtual env"
source env/bin/activate
echo "Installing requirements"
pip install -r requirements.txt
</code></pre>
| 1 | 2016-09-16T01:53:24Z | [
"python",
"git",
"python-venv"
] |
Merging crosstabs in Python | 38,838,764 | <p>I am trying to merge multiple crosstabs into a single one. Note that the data provided is obviously only for test purposes. The actual data is much larger so efficiency is quite important for me.</p>
<p>The crosstabs are generated, listed, and then merged with a lambda function on the <code>word</code> column. However, the result of this merging is not what I expect it to be. I think the problem is that the columns with only NA values of the crosstabs are being dropped even when using <code>dropna = False</code>, which would then result in the <code>merge</code> function failing. I'll first show the code and after that present the intermediate data and errors.</p>
<pre><code>import pandas as pd
import numpy as np
import functools as ft
def main():
# Create dataframe
df = pd.DataFrame(data=np.zeros((0, 3)), columns=['word','det','source'])
df["word"] = ('banana', 'banana', 'elephant', 'mouse', 'mouse', 'elephant', 'banana', 'mouse', 'mouse', 'elephant', 'ostrich', 'ostrich')
df["det"] = ('a', 'the', 'the', 'a', 'the', 'the', 'a', 'the', 'a', 'a', 'a', 'the')
df["source"] = ('BE', 'BE', 'BE', 'NL', 'NL', 'NL', 'FR', 'FR', 'FR', 'FR', 'FR', 'FR')
create_frequency_list(df)
def create_frequency_list(df):
# Create a crosstab of ALL values
# NOTE that dropna = False does not seem to work as expected
total = pd.crosstab(df.word, df.det, dropna = False)
total.fillna(0)
total.reset_index(inplace=True)
total.columns = ['word', 'a', 'the']
crosstabs = [total]
# For the column headers, multi-level
first_index = [('total','total')]
second_index = [('a','the')]
# Create crosstabs per source (one for BE, one for NL, one for FR)
# NOTE that dropna = False does not seem to work as expected
for source, tempDf in df.groupby('source'):
crosstab = pd.crosstab(tempDf.word, tempDf.det, dropna = False)
crosstab.fillna(0)
crosstab.reset_index(inplace=True)
crosstab.columns = ['word', 'a', 'the']
crosstabs.append(crosstab)
first_index.extend((source,source))
second_index.extend(('a','the'))
# Just for debugging: result as expected
for tab in crosstabs:
print(tab)
merged = ft.reduce(lambda left,right: pd.merge(left,right, on='word'), crosstabs).set_index('word')
# UNEXPECTED RESULT
print(merged)
arrays = [first_index, second_index]
# Throws error: NotImplementedError: > 1 ndim Categorical are not supported at this time
columns = pd.MultiIndex.from_arrays(arrays)
df_freq = pd.DataFrame(data=merged.as_matrix(),
columns=columns,
index = crosstabs[0]['word'])
print(df_freq)
main()
</code></pre>
<p><strong>Individual crosstabs</strong>: not as expected. The NA columns are dropped</p>
<pre><code> word a the
0 banana 2 1
1 elephant 1 2
2 mouse 2 2
3 ostrich 1 1
word a the
0 banana 1 1
1 elephant 0 1
word a the
0 banana 1 0
1 elephant 1 0
2 mouse 1 1
3 ostrich 1 1
word a the
0 elephant 0 1
1 mouse 1 1
</code></pre>
<p>That means that the dataframes do not share all values among each other which in turn will probably mess up the merging.</p>
<p><strong>Merge</strong>: not as expected, obviously</p>
<pre><code> a_x the_x a_y the_y a_x the_x a_y the_y
word
elephant 1 2 0 1 1 0 0 1
</code></pre>
<p>However, the error only gets thrown at the columns assignment:</p>
<pre><code># NotImplementedError: > 1 ndim Categorical are not supported at this time
columns = pd.MultiIndex.from_arrays(arrays)
</code></pre>
<p>So as far as I can tell, the problem starts early with the NAs and makes the whole thing fail. However, as I am not experienced enough in Python, I cannot know for sure.</p>
<p>What I expected was a multi-index output:</p>
<pre><code> source total BE FR NL
det a the a the a the a the
word
0 banana 2 1 1 1 1 0 0 0
1 elephant 1 2 0 1 1 0 0 1
2 mouse 2 2 0 0 1 1 1 1
3 ostrich 1 1 0 0 1 1 0 0
</code></pre>
| 4 | 2016-08-08T21:26:19Z | 38,839,103 | <p>I just decided to give you a better way to get what you want:</p>
<p>As a general rule, I use <code>df.groupby([col1, col2]).size().unstack()</code> as a proxy for <code>pd.crosstab</code>. You were trying to do a crosstab for every group of <code>source</code>. I can fit that in nicely with my existing groupby via <code>df.groupby([col1, col2, col3]).size().unstack([2, 1])</code></p>
<p>The <code>sort_index(1).fillna(0).astype(int)</code> is just to pretty things up.</p>
<p>If you want to understand even better. Try the following things and look what you get:</p>
<ul>
<li><code>df.groupby(['word', 'gender']).size()</code></li>
<li><code>df.groupby(['word', 'gender', 'source']).size()</code></li>
</ul>
<p><code>unstack</code> and <code>stack</code> are convenient ways to get things that were in the index into the columns instead and vice versa. <code>unstack([2, 1])</code> is specifying the order in which index levels get unstacked.</p>
<p>Finally, I take my <code>xtabs</code>, <code>stack</code> again, sum across the rows, and <code>unstack</code> to prep for <code>pd.concat</code>. Voilà!</p>
<pre><code>xtabs = df.groupby(df.columns.tolist()).size() \
.unstack([2, 1]).sort_index(1).fillna(0).astype(int)
pd.concat([xtabs.stack().sum(1).rename('total').to_frame().unstack(), xtabs], axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/zKlWx.png" rel="nofollow"><img src="http://i.stack.imgur.com/zKlWx.png" alt="enter image description here"></a></p>
<p><strong><em>Your Code</em></strong> should now look like this:</p>
<pre><code>import pandas as pd
import numpy as np
import functools as ft
def main():
# Create dataframe
df = pd.DataFrame(data=np.zeros((0, 3)), columns=['word','gender','source'])
df["word"] = ('banana', 'banana', 'elephant', 'mouse', 'mouse', 'elephant', 'banana', 'mouse', 'mouse', 'elephant', 'ostrich', 'ostrich')
df["gender"] = ('a', 'the', 'the', 'a', 'the', 'the', 'a', 'the', 'a', 'a', 'a', 'the')
df["source"] = ('BE', 'BE', 'BE', 'NL', 'NL', 'NL', 'FR', 'FR', 'FR', 'FR', 'FR', 'FR')
return create_frequency_list(df)
def create_frequency_list(df):
xtabs = df.groupby(df.columns.tolist()).size() \
.unstack([2, 1]).sort_index(1).fillna(0).astype(int)
total = xtabs.stack().sum(1)
total.name = 'total'
total = total.to_frame().unstack()
return pd.concat([total, xtabs], axis=1)
main()
</code></pre>
| 2 | 2016-08-08T21:54:52Z | [
"python",
"pandas",
"merge",
"multi-index"
] |
Encoding error using csv.reader on io file object with non-ascii encoding | 38,838,877 | <p>I am trying to read a csv file with cp1252 encoding like this:</p>
<pre><code>import io
import csv
csvr = csv.reader(io.open('data.csv', encoding='cp1252'))
for row in csvr:
print row
</code></pre>
<p>The relevant content of 'data.csv' is</p>
<pre><code>Curva IV
Fecha: 27-Jul-2016 16:22:40
Muestra: 1
Tensión Corriente Ig
0.000000e+000 1.154330e-004 -2.984730e-004
...
</code></pre>
<p>and I get the following output</p>
<pre><code>['Curva IV']
['Fecha: 27-Jul-2016 16:22:40']
['Muestra: 1']
Traceback (most recent call last):
File "D:/sandbox/bla.py", line 347, in <module>
mist()
File "D:/sandbox/bla.py", line 343, in mist
for row in csvr:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf3' in position 5: ordinal not in range(128)
</code></pre>
<p>which I do not understand at all. Obviously the critical line is that with the accent on the 'o'. It seems like the iterator of the object returned by csv.reader is attempting to do a conversion. The exception is raised before the print statement, so it is not a problem with my terminal encoding. Any ideas what is going on here?</p>
| 0 | 2016-08-08T21:35:16Z | 38,839,135 | <p>From the docs:</p>
<blockquote>
<p>Note</p>
<p>This version of the csv module doesn't support Unicode input. Also,
there are currently some issues regarding ASCII NUL characters.
Accordingly, all input should be UTF-8 or printable ASCII to be safe;
see the examples in section Examples.</p>
</blockquote>
<p>The input has to be converted to UTF-8 before passing it to csv.reader.</p>
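<p>As a sketch, one way to do the re-encoding (the sample bytes just mimic a cp1252 file; the version check keeps it runnable on both Python 2 and 3):</p>

```python
import csv

# simulate the raw bytes of a cp1252-encoded file
raw = u'Tensi\xf3n,Corriente\n0.000000e+000,1.154330e-004\n'.encode('cp1252')

def read_cp1252_csv(raw_bytes):
    text = raw_bytes.decode('cp1252')        # cp1252 -> unicode
    if str is bytes:                         # Python 2: csv wants UTF-8 bytes
        lines = text.encode('utf-8').splitlines()
    else:                                    # Python 3: csv wants str lines
        lines = text.splitlines()
    return list(csv.reader(lines))

for row in read_cp1252_csv(raw):
    print(row)
```

<p>On Python 2, remember that each resulting cell is a UTF-8 byte string and must be decoded before further text processing.</p>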
| 0 | 2016-08-08T21:57:24Z | [
"python",
"python-2.7",
"csv",
"encoding"
] |
How to deal with an 'endless' webpage when scraping | 38,838,904 | <p>I'm making a scraper to grab a list of my friends from Facebook, then scrape a list of mutual friends from them, with the goal of constructing a web from the data. I looked at the official Facebook API, and it doesn't seem possible, so I decided to simply scrape the webpages.</p>
<p>After using mechanize to login, I scraped the page and discovered that Facebook only loads 20 friends at a time, loading more as you scroll. I looked through the mechanize docs, but I couldn't find a solution. I tried sleeping for a few seconds before souping the page, and that didn't work either.</p>
<p>Not sure where to go from here. Is there any way to emulate scrolling in mechanize?</p>
| -1 | 2016-08-08T21:37:07Z | 38,838,968 | <p>Unless you use <a href="http://selenium-python.readthedocs.io/" rel="nofollow">Selenium</a> to simulate the actual webpage, you won't be able to simulate "scrolling" (how do you scroll when there is no window, therefore no window height?)</p>
<p>You state that there's nothing in the API which allows you to fetch friends of friends, but there seems to be an <a href="https://developers.facebook.com/docs/graph-api/reference/friend-list/" rel="nofollow">API function</a> that allows fetching the friend list of a user.</p>
<p>If that doesn't work either, your only choice would be to track down the ajax that FB uses to fetch the next list of friends, and use that to fetch more information.</p>
| 0 | 2016-08-08T21:42:46Z | [
"python",
"facebook",
"mechanize",
"mechanize-python"
] |
Failure to import matplotlib.pyplot in jupyter (but not ipython) | 38,838,914 | <p>Update: <code>ipykernel 4.4.1</code> patched this issue the morning of Aug 9.</p>
<p>I have a fresh install and I have been trying to get my python dependencies up and running, namely jupyter notebook and matplotlib. I've pip installed everything, and "import matplotlib" works. If I am in a jupyter notebook, and I try "import matplotlib.pyplot" or "from matplotlib import pyplot as plt", I get: </p>
<pre><code>ImportError Traceback (most recent call last)
...
/usr/local/lib/python2.7/dist-packages/IPython/core/pylabtools.pyc in configure_inline_support(shell, backend)
359 except ImportError:
360 return
--> 361 from matplotlib import pyplot
362
363 cfg = InlineBackend.instance(parent=shell)
ImportError: cannot import name pyplot
</code></pre>
<p><a href="http://pastebin.com/txPi9gUe" rel="nofollow">Full traceback</a></p>
<p>However, if I am in ipython (command line), this works fine. Also, running plots from a module from the command line, fine. I have tried a variety of techniques:</p>
<ul>
<li>Pip install / uninstall matplotlib, ipython, and jupyter in various order</li>
<li>Using pip with --no-cache-dir and/or --ignore-installed</li>
<li>Deleting ~/.cache, ~/.ipython and ~/.jupyter</li>
<li>Making sure no packages are installed with apt-get, only installed with pip</li>
<li>Using apt-get to install python-matplotlib, ipython, and python-jupyter</li>
</ul>
<p>It feels like I have mangled some sort of path information, but I cannot locate what or where would cause this, especially after multiple pip uninstall/reinstall and cache clearing. I've read every SO question relating to importing matplotlib, none have been helpful. </p>
<p>I rolled back to matplotlib 1.4.3, and that worked, but it lacks a couple of features I need. I realize this is probably a tricky one, so if you have any insight, even if incomplete, that would be greatly appreciated. Also, if this is something worthy of a bug report (never done one, not sure if this is a matplotlib problem, or just locally goofed up), comment as such and I'll submit one. Thanks!</p>
<p>System info:</p>
<pre><code>Linux Mint 18 "Sarah"
Python==2.7.12
ipykernel==4.4.0
ipython==5.0.0
ipython-genutils==0.1.0
ipywidgets==5.2.2
jupyter==1.0.0
jupyter-client==4.3.0
jupyter-console==5.0.0
jupyter-core==4.1.0
notebook==4.2.2
numpy==1.11.1
pip 8.1.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
</code></pre>
<p>Output of sys.path in ipython and jupyter (same for both):</p>
<pre><code>['',
'/usr/local/bin',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
'/usr/local/lib/python2.7/dist-packages/IPython/extensions',
'/home/mm/.ipython']
</code></pre>
| 5 | 2016-08-08T21:37:56Z | 38,845,174 | <p>I have the same problem, and it seems to be caused by ipykernel.
After I rolled back ipykernel to version 4.3.1, the problem was solved.</p>
<p>Just like @Igor Raush said, it looks like a circular import of matplotlib.pyplot.</p>
| 5 | 2016-08-09T08:00:23Z | [
"python",
"matplotlib",
"jupyter-notebook"
] |
Failure to import matplotlib.pyplot in jupyter (but not ipython) | 38,838,914 | <p>Update: <code>ipykernel 4.4.1</code> patched this issue the morning of Aug 9.</p>
<p>I have a fresh install and I have been trying to get my python dependencies up and running, namely jupyter notebook and matplotlib. I've pip installed everything, and "import matplotlib" works. If I am in a jupyter notebook, and I try "import matplotlib.pyplot" or "from matplotlib import pyplot as plt", I get: </p>
<pre><code>ImportError Traceback (most recent call last)
...
/usr/local/lib/python2.7/dist-packages/IPython/core/pylabtools.pyc in configure_inline_support(shell, backend)
359 except ImportError:
360 return
--> 361 from matplotlib import pyplot
362
363 cfg = InlineBackend.instance(parent=shell)
ImportError: cannot import name pyplot
</code></pre>
<p><a href="http://pastebin.com/txPi9gUe" rel="nofollow">Full traceback</a></p>
<p>However, if I am in ipython (command line), this works fine. Also, running plots from a module from the command line, fine. I have tried a variety of techniques:</p>
<ul>
<li>Pip install / uninstall matplotlib, ipython, and jupyter in various order</li>
<li>Using pip with --no-cache-dir and/or --ignore-installed</li>
<li>Deleting ~/.cache, ~/.ipython and ~/.jupyter</li>
<li>Making sure no packages are installed with apt-get, only installed with pip</li>
<li>Using apt-get to install python-matplotlib, ipython, and python-jupyter</li>
</ul>
<p>It feels like I have mangled some sort of path information, but I cannot locate what or where would cause this, especially after multiple pip uninstall/reinstall and cache clearing. I've read every SO question relating to importing matplotlib, none have been helpful. </p>
<p>I rolled back to matplotlib 1.4.3, and that worked, but it lacks a couple of features I need. I realize this is probably a tricky one, so if you have any insight, even if incomplete, that would be greatly appreciated. Also, if this is something worthy of a bug report (never done one, not sure if this is a matplotlib problem, or just locally goofed up), comment as such and I'll submit one. Thanks!</p>
<p>System info:</p>
<pre><code>Linux Mint 18 "Sarah"
Python==2.7.12
ipykernel==4.4.0
ipython==5.0.0
ipython-genutils==0.1.0
ipywidgets==5.2.2
jupyter==1.0.0
jupyter-client==4.3.0
jupyter-console==5.0.0
jupyter-core==4.1.0
notebook==4.2.2
numpy==1.11.1
pip 8.1.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
</code></pre>
<p>Output of sys.path in ipython and jupyter (same for both):</p>
<pre><code>['',
'/usr/local/bin',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
'/usr/local/lib/python2.7/dist-packages/IPython/extensions',
'/home/mm/.ipython']
</code></pre>
| 5 | 2016-08-08T21:37:56Z | 38,854,391 | <p><a href="http://stackoverflow.com/a/38851858/3988037">As mentioned here</a>, using the magic line <code>%matplotlib</code> allows me to use the plot-in-new-window backend (Qt4Agg in my case). I did not know you could use <code>%matplotlib</code> by itself, without an argument. Even though an update to <code>ipykernel 4.4.1</code> fixes this issue, I thought the magic line trick was pretty clever, and may clear up other import weirdness/bugs in the future. </p>
| 0 | 2016-08-09T15:07:00Z | [
"python",
"matplotlib",
"jupyter-notebook"
] |
How to decode a pdf encrypted file from python | 38,838,930 | <p>I have got a PDF file and its associated password.</p>
<p>I would like to convert an encrypted file to a clear version using python only.</p>
<p>I found <a href="http://stackoverflow.com/questions/6413441/python-pdf-library">here</a> some python modules (pyPDF2, PDFMiner)
to handle PDF files, but none of them works with encryption.</p>
<p>Has someone already done this?</p>
| 0 | 2016-08-08T21:39:38Z | 38,857,175 | <p>You'd also need to know the encryption algorithm and key length to be able to advise which tool might work... and depending on the answers, a python library may not be available.</p>
| 0 | 2016-08-09T17:36:38Z | [
"python",
"pdf",
"encryption"
] |
Can I do a dynamic comparison on SQLAlchemy? | 38,838,939 | <p>I have this model:</p>
<pre><code>class PUC(Base):
pucId = Column(Integer, primary_key=True)
asset = Column(TINYINT)
article = Column(TINYINT)
more values ...
more values ...
</code></pre>
<p>And I need to do a query dynamically (This way I tried):</p>
<pre><code>pucs = session.query(PUC).filter(PUC[unique_by_param] == 1).all()
</code></pre>
<p>The value of <code>unique_by_param</code> comes from the Frontend.
An example of <code>unique_by_param</code> is: <code>{str}'asset'</code>, <code>{str}'article'</code>, <code>{str}'another_model_value'</code></p>
<p>What I really need is a way to run <code>session.query(PUC).filter(PUC.asset == 1)</code> or <code>session.query(PUC).filter(PUC.article == 1)</code> dynamically, like the first way I tried.</p>
<p>The result using (<code>PUC[unique_by_param]</code>) is <code>TypeError: 'DeclarativeMeta' object is not subscriptable</code></p>
<p>There is a way I have used before, but it isn't a pretty way to do it:</p>
<pre><code># this is an accounting table, so it has around 250 columns
# and these special columns would be around 70 variables...
# So this isn't an option to do this.
if unique_by_param == 'asset':
q = (PUC.asset == 1)
elif unique_by_param == 'article':
        q = (PUC.article == 1)
elif ...more values:
pucs = session.query(PUC).filter(or_(*q))
</code></pre>
| 1 | 2016-08-08T21:40:21Z | 38,839,386 | <p>Here's an approach that uses <code>filter_by</code> and <a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">keyword argument unpacking</a>:</p>
<pre><code>keyword = {unique_by_param : 1}
session.query(PUC).filter_by(**keyword).all()
</code></pre>
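<p>An alternative sketch uses <code>getattr</code> to look up the mapped column dynamically, which sidesteps the subscripting <code>TypeError</code> from the question. Shown here against a hypothetical in-memory SQLite model (the table contents are made up for illustration):</p>

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class PUC(Base):
    __tablename__ = 'puc'
    pucId = Column(Integer, primary_key=True)
    asset = Column(Integer)
    article = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([PUC(pucId=1, asset=1, article=0),
                 PUC(pucId=2, asset=0, article=1)])
session.commit()

unique_by_param = 'asset'  # would come from the frontend
# getattr fetches the mapped column, so this builds PUC.asset == 1 dynamically
pucs = session.query(PUC).filter(getattr(PUC, unique_by_param) == 1).all()
print([p.pucId for p in pucs])  # [1]
```

<p>Either approach avoids the long if/elif chain from the question.</p>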
| 1 | 2016-08-08T22:22:49Z | [
"python",
"sqlalchemy"
] |
Extend Python list by assigning beyond end (Matlab-style) | 38,838,942 | <p>I want to use Python to create a file that looks like</p>
<pre><code># empty in the first line
this is the second line
this is the third line
</code></pre>
<p>I tried to write this script</p>
<pre><code>myParagraph = []
myParagraph[0] = ''
myParagraph[1] = 'this is the second line'
myParagraph[2] = 'this is the third line'
</code></pre>
<p>An error is thrown: <code>IndexError: list index out of range</code>. There are many answers on similar questions that recommend using <code>myParagraph.append('something')</code>, which I know works. But I want to better understand the initialization of Python lists. How do I manipulate a specific element in a list that's not populated yet?</p>
| 0 | 2016-08-08T21:40:55Z | 38,838,998 | <pre><code>myParagraph = []
myParagraph.append('')
myParagraph.append('this is the second line')
myParagraph.append('this is the third line')
for i,item in enumerate(myParagraph):
print "i:"+str(i)+": item:"+item
</code></pre>
<p>result:</p>
<pre><code>i:0: item:
i:1: item:this is the second line
i:2: item:this is the third line
</code></pre>
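<p>To actually produce the file from the question, the list can then be joined and written out — a small sketch in Python 3 syntax (the filename is a throwaway example):</p>

```python
myParagraph = ['', 'this is the second line', 'this is the third line']
with open('myfile.txt', 'w') as fh:
    fh.write('\n'.join(myParagraph) + '\n')

# the file now starts with an empty first line
with open('myfile.txt') as fh:
    print(fh.readlines())  # ['\n', 'this is the second line\n', 'this is the third line\n']
```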
| 0 | 2016-08-08T21:46:01Z | [
"python",
"list"
] |
Extend Python list by assigning beyond end (Matlab-style) | 38,838,942 | <p>I want to use Python to create a file that looks like</p>
<pre><code># empty in the first line
this is the second line
this is the third line
</code></pre>
<p>I tried to write this script</p>
<pre><code>myParagraph = []
myParagraph[0] = ''
myParagraph[1] = 'this is the second line'
myParagraph[2] = 'this is the third line'
</code></pre>
<p>An error is thrown: <code>IndexError: list index out of range</code>. There are many answers on similar questions that recommend using <code>myParagraph.append('something')</code>, which I know works. But I want to better understand the initialization of Python lists. How do I manipulate a specific element in a list that's not populated yet?</p>
| 0 | 2016-08-08T21:40:55Z | 38,839,018 | <p>A list doesn't have an unknown size - len(myParagraph) will give you its length</p>
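<p>A short sketch of that point — indexing only works for positions that already exist, and <code>append</code> is what grows the list (Python 3 syntax):</p>

```python
myParagraph = []
print(len(myParagraph))  # 0

try:
    myParagraph[0] = ''   # no element 0 exists yet
except IndexError as e:
    print('IndexError:', e)

myParagraph.append('')    # now index 0 exists
myParagraph[0] = ''       # and assignment to it is fine
print(len(myParagraph))  # 1
```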
| 1 | 2016-08-08T21:48:03Z | [
"python",
"list"
] |
Extend Python list by assigning beyond end (Matlab-style) | 38,838,942 | <p>I want to use Python to create a file that looks like</p>
<pre><code># empty in the first line
this is the second line
this is the third line
</code></pre>
<p>I tried to write this script</p>
<pre><code>myParagraph = []
myParagraph[0] = ''
myParagraph[1] = 'this is the second line'
myParagraph[2] = 'this is the third line'
</code></pre>
<p>An error is thrown: <code>IndexError: list index out of range</code>. There are many answers on similar questions that recommend using <code>myParagraph.append('something')</code>, which I know works. But I want to better understand the initialization of Python lists. How do I manipulate a specific element in a list that's not populated yet?</p>
| 0 | 2016-08-08T21:40:55Z | 38,839,046 | <p><code>append</code> is the easiest way to get around this, but if it makes you more comfortable having those indices then you should consider using <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow"><code>insert</code></a>:</p>
<blockquote>
<p>Insert an item at a given position. The first argument is the index of
the element before which to insert, so <code>a.insert(0, x)</code> inserts at the
front of the list, and <code>a.insert(len(a), x)</code> is equivalent to
<code>a.append(x)</code>.</p>
</blockquote>
<pre><code>myParagraph = []
myParagraph.insert(0, '\n')
myParagraph.insert(1, 'this is the second line\n')
myParagraph.insert(2, 'this is the third line\n')
</code></pre>
<p>And don't forget the new line character <code>'\n'</code> when writing to a file.</p>
| 0 | 2016-08-08T21:50:16Z | [
"python",
"list"
] |
Extend Python list by assigning beyond end (Matlab-style) | 38,838,942 | <p>I want to use Python to create a file that looks like</p>
<pre><code># empty in the first line
this is the second line
this is the third line
</code></pre>
<p>I tried to write this script</p>
<pre><code>myParagraph = []
myParagraph[0] = ''
myParagraph[1] = 'this is the second line'
myParagraph[2] = 'this is the third line'
</code></pre>
<p>An error is thrown: <code>IndexError: list index out of range</code>. There are many answers on similar questions that recommend using <code>myParagraph.append('something')</code>, which I know works. But I want to better understand the initialization of Python lists. How do I manipulate a specific element in a list that's not populated yet?</p>
| 0 | 2016-08-08T21:40:55Z | 38,839,298 | <p>Since you want to associate an index (whether it exists or not) with an element of data, just use a <code>dict</code> with integer indexes:</p>
<pre><code>>>> myParagraph={}
>>> myParagraph[0] = ''
>>> myParagraph[1] = 'this is the second line'
>>> myParagraph[2] = 'this is the third line'
>>> myParagraph[99] = 'this is the 100th line'
>>> myParagraph
{0: '', 1: 'this is the second line', 2: 'this is the third line', 99: 'this is the 100th line'}
</code></pre>
<p>Just know that you will need to sort the dict to reassemble in integer order.</p>
<p>You can reassemble into a string (and skip missing lines) like so:</p>
<pre><code>>>> '\n'.join(myParagraph.get(i, '') for i in range(max(myParagraph)+1))
</code></pre>
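<p>A runnable sketch of the reassembly, including a skipped index that becomes an empty line:</p>

```python
myParagraph = {0: '', 1: 'this is the second line', 3: 'this is the fourth line'}
# missing index 2 falls back to '' via dict.get
text = '\n'.join(myParagraph.get(i, '') for i in range(max(myParagraph) + 1))
print(repr(text))  # '\nthis is the second line\n\nthis is the fourth line'
```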
| 1 | 2016-08-08T22:13:46Z | [
"python",
"list"
] |
Extend Python list by assigning beyond end (Matlab-style) | 38,838,942 | <p>I want to use Python to create a file that looks like</p>
<pre><code># empty in the first line
this is the second line
this is the third line
</code></pre>
<p>I tried to write this script</p>
<pre><code>myParagraph = []
myParagraph[0] = ''
myParagraph[1] = 'this is the second line'
myParagraph[2] = 'this is the third line'
</code></pre>
<p>An error is thrown: <code>IndexError: list index out of range</code>. There are many answers on similar questions that recommend using <code>myParagraph.append('something')</code>, which I know works. But I want to better understand the initialization of Python lists. How do I manipulate a specific element in a list that's not populated yet?</p>
| 0 | 2016-08-08T21:40:55Z | 38,839,545 | <p>You can do a limited form of this by assigning to a range of indexes starting at the end of the list, instead of a single index beyond the end of the list:</p>
<pre><code>myParagraph = []
myParagraph[0:] = ['']
myParagraph[1:] = ['this is the second line']
myParagraph[2:] = ['this is the third line']
</code></pre>
<p>Note: In Matlab, you can assign to arbitrary positions beyond the end of the array, and Matlab will fill in values up to that point. In Python, any assignment beyond the end of the array (using this syntax or <code>list.insert()</code>) will just append the value(s) into the first position beyond the end of the array, which may not be the same as the index you assigned.</p>
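<p>If Matlab-style gap-filling assignment is really wanted, a tiny helper can emulate it (the name <code>extend_assign</code> and the fill value are made up for this sketch):</p>

```python
def extend_assign(lst, index, value, fill=''):
    # grow lst with fill values until lst[index] is a valid target
    if index >= len(lst):
        lst.extend([fill] * (index + 1 - len(lst)))
    lst[index] = value

myParagraph = []
extend_assign(myParagraph, 0, '')
extend_assign(myParagraph, 2, 'this is the third line')
print(myParagraph)  # ['', '', 'this is the third line']
```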
| 1 | 2016-08-08T22:40:01Z | [
"python",
"list"
] |
Convert string of list of dictionary to Python DataFrame | 38,839,100 | <p>I have a .JSON file which is around 3GB. I would like to read this JSON data and load it to pandas data frames. Below is what I did so far.</p>
<p>Step 1: Read JSON file</p>
<pre><code>import pandas as pd
with open('MyFile.json', 'r') as f:
data = f.readlines()
</code></pre>
<p>Step 2: just take one component, since the data is huge and I want to see how it looks</p>
<pre><code>cp = data[0:1]
print(cp)
['{"reviewerID": "AO94DHGC771SJ", "asin": "0528881469", "reviewerName": "amazdnu", "helpful": [0, 0], "reviewText": "some review text...", "overall": 5.0, "summary": "Gotta have GPS!", "unixReviewTime": 1370131200, "reviewTime": "06 2, 2013"}\n']
</code></pre>
<p>Step 3: remove the newline ('\n') character</p>
<pre><code>ix=0
while ix<len(t):
t[ix]=t[ix].rstrip("\n")
ix+=1
</code></pre>
<p>Questions:</p>
<ol>
<li>Why is this JSON data in a string? Am I making any mistakes?</li>
<li>How do I convert it into dictionary?</li>
</ol>
<p>What I tried?</p>
<ol>
<li><p>I tried <code>b=dict(zip(t[0::2],t[1::2]))</code>,
but get - 'dict' object not callable</p></li>
<li><p>Tried joining, but did not work though</p></li>
</ol>
<p>Can anyone please help me? Thanks!</p>
| 0 | 2016-08-08T21:54:43Z | 38,839,723 | <p>Why haven't you tried <code>pandas.read_json</code>?</p>
<pre><code>import pandas as pd
df = pd.read_json('MyFile.json')
</code></pre>
<p>Works for the example you posted!</p>
<pre><code>In[82]: i = '{"reviewerID": "AO94DHGC771SJ", "asin": "0528881469", "reviewerName": "amazdnu", "helpful": [0, 0], "reviewText": "some review text...", "overall": 5.0, "summary": "Gotta have GPS!", "unixReviewTime": 1370131200, "reviewTime": "06 2, 2013"}'
In[83]: pd.read_json(i)
Out[83]:
asin helpful overall reviewText reviewTime reviewerID reviewerName summary unixReviewTime
0 528881469 0 5 some review text... 06 2, 2013 AO94DHGC771SJ amazdnu Gotta have GPS! 1370131200
</code></pre>
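<p>Since the 3GB file in the question holds one JSON object per line (newline-delimited JSON), later pandas versions can also parse it directly with <code>lines=True</code> — a hedged sketch using an in-memory buffer as a stand-in for the real file (the record values are made up):</p>

```python
import io
import pandas as pd

# stand-in for MyFile.json: one JSON document per line
raw = io.StringIO(
    '{"reviewerID": "AO94DHGC771SJ", "overall": 5.0}\n'
    '{"reviewerID": "A0000000EXAMPLE", "overall": 4.0}\n'
)
df = pd.read_json(raw, lines=True)
print(df.shape)  # (2, 2)
```

<p>This also answers the "why is it a string" question: <code>f.readlines()</code> returns raw text lines, so each line must still be parsed (by pandas here, or by <code>json.loads</code> per line).</p>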
| 0 | 2016-08-08T22:59:05Z | [
"python",
"python-3.x"
] |
Python itertools with multiprocessing - huge list vs inefficient CPUs usage with iterator | 38,839,170 | <p>I work on variations with repetition of n elements (named "pairs" below), used as my function's argument. Obviously everything works fine as long as the "r" list is not big enough to consume all the memory. The issue is I eventually have to make more than 16 repetitions for 6 elements. I use a 40-core system in the cloud for this.</p>
<p>The code looks like the following:</p>
<pre><code>if __name__ == '__main__':
pool = Pool(39)
r = itertools.product(pairs,repeat=16)
pool.map(f, r)
</code></pre>
<p>I believe I should use an iterator instead of creating the huge list upfront, and here the problem starts..</p>
<p>I tried to solve the issue with the following code:</p>
<pre><code>if __name__ == '__main__':
pool = Pool(39)
for r in itertools.product(pairs,repeat=14):
pool.map(f, r)
</code></pre>
<p>The memory problem goes away but the CPU usage is like 5% per core. Now the single core version of the code is faster than this.</p>
<p>I'd really appreciate if you could guide me a bit..</p>
<p>Thanks.</p>
| 3 | 2016-08-08T22:00:29Z | 38,839,831 | <p>The second code example is slower because you're submitting a single pair to a Pool of 39 workers. Only one worker will be processing your request and the other 38 will do nothing! It will also be slower because you'll have overhead in piping data from the main process to the worker processes.</p>
<p>You can "buffer" some pairs, then execute the set of pairs to balance out memory usage but still get the advantage of the multiprocess environment.</p>
<pre><code>import itertools
from multiprocessing import Pool
def foo(x):
    return sum(x)

cpus = 3
pool = Pool(cpus)
# 10 is the buffer size multiplier - the number of pairs buffered per pool.map call
buff_size = 10*cpus
buff = []
for r in itertools.product(range(20), range(10)):
    buff.append(r)
    if len(buff) == buff_size:  # flush a full buffer to the pool
        print pool.map(foo, buff)
        buff = []
if len(buff) > 0:  # flush the remainder
    print pool.map(foo, buff)
</code></pre>
<p>The output of the above will look like this</p>
<pre><code>[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
[6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
[9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
[15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
[18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]
</code></pre>
<p>Play with the <code>buff_size</code> multiplier to get the right balance for your system!</p>
| 0 | 2016-08-08T23:16:20Z | [
"python",
"multithreading",
"itertools"
] |
Python itertools with multiprocessing - huge list vs inefficient CPUs usage with iterator | 38,839,170 | <p>I work on variations with repetition of n elements (named "pairs" below), used as my function's argument. Obviously everything works fine as long as the "r" list is not big enough to consume all the memory. The issue is I eventually have to make more than 16 repetitions for 6 elements. I use a 40-core system in the cloud for this.</p>
<p>The code looks like the following:</p>
<pre><code>if __name__ == '__main__':
pool = Pool(39)
r = itertools.product(pairs,repeat=16)
pool.map(f, r)
</code></pre>
<p>I believe I should use an iterator instead of creating the huge list upfront, and here the problem starts..</p>
<p>I tried to solve the issue with the following code:</p>
<pre><code>if __name__ == '__main__':
pool = Pool(39)
for r in itertools.product(pairs,repeat=14):
pool.map(f, r)
</code></pre>
<p>The memory problem goes away but the CPU usage is like 5% per core. Now the single core version of the code is faster than this.</p>
<p>I'd really appreciate if you could guide me a bit..</p>
<p>Thanks.</p>
| 3 | 2016-08-08T22:00:29Z | 38,839,904 | <p>Your original code isn't creating a <code>list</code> upfront in your own code (<code>itertools.product</code> returns a generator), but <code>pool.map</code> is realizing the whole generator (because it assumes if you can store all outputs, you can store all inputs too).</p>
<p>Don't use <code>pool.map</code> here. If you need ordered results, using <code>pool.imap</code>, or if result order is unimportant, use <code>pool.imap_unordered</code>. Iterate the result of either call (don't wrap in <code>list</code>), and process the results as they come, and memory should not be an issue:</p>
<pre><code>if __name__ == '__main__':
pool = Pool(39)
for result in pool.imap(f, itertools.product(pairs, repeat=16)):
print(result)
</code></pre>
<p>If you're using <code>pool.map</code> for side-effects, so you just need to run it to completion but the results and ordering don't matter, you could dramatically improve performance by using <code>imap_unordered</code> and using <code>collections.deque</code> to efficiently drain the "results" without actually storing anything (a <code>deque</code> with <code>maxlen</code> of <code>0</code> is the fastest, lowest memory way to force an iterator to run to completion without storing the results):</p>
<pre><code>from collections import deque
if __name__ == '__main__':
pool = Pool(39)
deque(pool.imap_unordered(f, itertools.product(pairs, repeat=16)), 0)
</code></pre>
<p>Lastly, I'm a little suspicious of specifying 39 <code>Pool</code> workers; <code>multiprocessing</code> is largely beneficial for CPU bound tasks; if you're using more workers than you have CPU cores and gaining a benefit, it's possible <code>multiprocessing</code> is costing you more in IPC than it gains, and using more workers is just masking the problem by buffering more data.</p>
<p>If your work is largely I/O bound, you might try using a thread based pool, which will avoid the overhead of pickling and unpickling, as well as the cost of IPC between parent and child processes. Unlike process based pools, Python threading is subject to <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">GIL</a> issues, so your CPU bound work in Python (excluding GIL releasing calls for I/O, <code>ctypes</code> calls into .dll/.so files, and certain third party extensions like <code>numpy</code> that release the GIL for heavy CPU work) is limited to a single core (and in Python 2.x for CPU bound work you often waste a decent amount of that resolving GIL contention and performing context switches; Python 3 removes most of the waste). But if your work is largely I/O bound, blocking on I/O releases the GIL to allow other threads to run, so you can have many threads as long as most of them delay on I/O. It's easy to switch too (as long as you haven't designed your program to rely on separate address spaces for each worker by assuming you can write to "shared" state and not affect other workers or the parent process), just change:</p>
<pre><code>from multiprocessing import Pool
</code></pre>
<p>to:</p>
<pre><code>from multiprocessing.dummy import Pool
</code></pre>
<p>and you get the <a href="https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.dummy" rel="nofollow"><code>multiprocessing.dummy</code></a> version of the pool, based on threads instead of processes.</p>
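<p>A tiny runnable sketch of that swap, draining a thread-based pool with <code>deque</code> (Python 3 syntax; the side-effect list is a made-up stand-in for real work):</p>

```python
import itertools
from collections import deque
from multiprocessing.dummy import Pool  # thread-based, same API as Pool

results = []

def f(pair):
    results.append(sum(pair))  # side effect standing in for real work

pool = Pool(4)
# deque with maxlen 0 runs the iterator to completion, storing nothing
deque(pool.imap_unordered(f, itertools.product(range(3), repeat=2)), 0)
pool.close()
pool.join()
print(sorted(results))  # [0, 1, 1, 2, 2, 2, 3, 3, 4]
```

<p>Because it uses threads, this variant avoids pickling entirely, at the cost of the GIL limits described above.</p>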
| 3 | 2016-08-08T23:25:54Z | [
"python",
"multithreading",
"itertools"
] |
How can I make np.save work for an ndarray subclass? | 38,839,174 | <p>I want to be able to save my array subclass to a npy file, and recover the result later.</p>
<p>Something like:</p>
<pre><code>>>> class MyArray(np.ndarray): pass
>>> data = MyArray(np.arange(10))
>>> np.save('fname', data)
>>> data2 = np.load('fname')
>>> assert isinstance(data2, MyArray) # raises AssertionError
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/neps/npy-format.html#requirements" rel="nofollow">the docs</a> say (emphasis mine):</p>
<blockquote>
<p>The format explicitly does not need to:</p>
<ul>
<li>[...]</li>
<li>Fully handle arbitrary subclasses of numpy.ndarray. Subclasses will be
accepted for writing, but only the array data will be written out. A
regular numpy.ndarray object will be created upon reading the file.
<strong>The API can be used to build a format for a particular subclass</strong>, but
that is out of scope for the general NPY format.</li>
</ul>
</blockquote>
<p>So is it possible to make the above code not raise an AssertionError?</p>
| 5 | 2016-08-08T22:00:57Z | 38,839,604 | <p>I don't see evidence that <code>np.save</code> handles array subclasses.</p>
<p>I tried to save a <code>np.matrix</code> with it, and got back a <code>ndarray</code>.</p>
<p>I tried to save a <code>np.ma</code> array, and got an error</p>
<pre><code>NotImplementedError: MaskedArray.tofile() not implemented yet.
</code></pre>
<p>Saving is done by <code>np.lib.npyio.format.write_array</code>, which does</p>
<pre><code>_write_array_header() # save dtype, shape etc
</code></pre>
<p>if <code>dtype</code> is object it uses <code>pickle.dump(array, fp ...)</code></p>
<p>otherwise it does <code>array.tofile(fp)</code>. <code>tofile</code> handles writing the data buffer.</p>
<p>I think <code>pickle.dump</code> of an array ends up using <code>np.save</code>, but I don't recall how that's triggered.</p>
<p>I can for example <code>pickle</code> an array, and load it:</p>
<pre><code>In [657]: f=open('test','wb')
In [658]: pickle.Pickler(f).dump(x)
In [659]: f.close()
In [660]: np.load('test')
In [664]: f=open('test','rb')
In [665]: pickle.load(f)
</code></pre>
<p>This <code>pickle</code> dump/load sequence works for test <code>np.ma</code>, <code>np.matrix</code> and <code>sparse.coo_matrix</code> cases. So that's probably the direction to explore for your own subclass.</p>
<p>Searching on <code>numpy</code> and <code>pickle</code> I found <a href="http://stackoverflow.com/q/26598109/901925">Preserve custom attributes when pickling subclass of numpy array</a>. The answer involves a custom <code>.__reduce__</code> and <code>.__setstate__</code>.</p>
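<p>For reference, a sketch of that <code>__reduce__</code>/<code>__setstate__</code> pattern from the linked question (the extra <code>info</code> attribute is just an illustrative example of subclass state to preserve):</p>

```python
import pickle
import numpy as np

class MyArray(np.ndarray):
    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

    def __reduce__(self):
        # append our extra attribute to ndarray's pickle state tuple
        reconstruct, args, state = super(MyArray, self).__reduce__()
        return (reconstruct, args, state + (self.info,))

    def __setstate__(self, state):
        self.info = state[-1]
        super(MyArray, self).__setstate__(state[:-1])

a = MyArray(np.arange(10), info='metadata')
b = pickle.loads(pickle.dumps(a))
print(type(b).__name__, b.info)  # MyArray metadata
```

<p>So the pickle round trip preserves both the subclass and its attributes, even though a plain <code>np.save</code>/<code>np.load</code> of the same array would not.</p>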
| 2 | 2016-08-08T22:46:33Z | [
"python",
"numpy"
] |
Using label encoder on a dictionary | 38,839,211 | <p>I am using the sklearn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow">LabelEncoder</a>. I know how to use it for a 1D array, but my use case is as such:</p>
<p>I have multiple arrays of dicts like this (which is effectively the cost of me assigning each text label <code>u'a'</code>,<code>u'b'</code> etc in a classifier), all within a dict:</p>
<pre><code>{'open_model':
[
{u'a': 47502.125, u'c': 45.3, u'd': 2.3, u'e': 0.45},
{u'b': 121, u'a': 1580.5625, u'c': 12, u'e': 62,u'd':0.343},
{u'e': 12321, u'b': 4, u'a': 0.1112}
],
'closed_model':
[
{u'a': 1231.22, u'c': 43.1},
{u'b': 342.2, u'a': 121.1, u'c': 343},
{u'b': 14.2, u'a': 53.2}
]
}
</code></pre>
<p>I need to be able to encode this into numerical labels and then decode all of them back, so for example:</p>
<pre><code>[
{1: 47502.125, 3: 45.3, 4: 2.3, 5: 0.45},
{2: 121, 1: 1580.5625, 3: 12, 5: 62, 4: 0.343},
{5: 12321, 2: 4, 1: 0.1112}
]
</code></pre>
<p>Which I use effectively to generate predictions of the best label for each row, so:</p>
<pre><code>[5, 4, 1] perhaps in this case.
</code></pre>
<p>What I need to do is to be able to decode this back into:</p>
<pre><code>[u'e',u'd', u'a'] perhaps in this case.
</code></pre>
<p>How can I get the same <code>LabelEncoder</code> functionality but to <code>fit_transform</code> on an array of dicts where the dict keys are my labels?</p>
<p>Note, each dict within the array of dicts has a different length, but I do have a list of all the potential labels, i.e. for the open_model labels, <code>set([u'a',u'b',u'c',u'd',u'e'])</code> and for the closed_model labels: <code>set([u'a',u'b',u'c'])</code>.</p>
| 0 | 2016-08-08T22:04:38Z | 38,839,549 | <p>It seems that you always have 'a', 'b', 'c', 'd', 'e'. If this is the case why don't you use pandas data frame and forget about the encoder? You kinda need to rewrite the keys of the dictionaries you use, so it's going to be messy anyway!</p>
<pre><code>import pandas as pd
i = [
{u'a': 47502.125, u'b': 1580.5625, u'c': 45.3, u'd': 2.3, u'e': 0.45},
{u'b': 121, u'a': 1580.5625, u'c': 12, u'e': 62, u'd': 0.343},
{u'e': 12321, u'b': 4, u'd': 5434, u'c': 2.3, u'a': 0.1112}
]
# transform to data frame
df = pd.DataFrame(i)
print df
a b c d e
0 47502.1250 1580.5625 45.3 2.300 0.45
1 1580.5625 121.0000 12.0 0.343 62.00
2 0.1112 4.0000 2.3 5434.000 12321.00
# create a mapping between columns and encoders
mapping = dict((k, v) for k, v in enumerate(df.columns))
# rename columns
df.columns = range(len(df.columns))
# print your new input data
print df.to_dict(orient='records')
[{0: 47502.125, 1: 1580.5625, 2: 45.3, 3: 2.3, 4: 0.45},
{0: 1580.5625, 1: 121.0, 2: 12.0, 3: 0.343, 4: 62.0},
{0: 0.1112, 1: 4.0, 2: 2.3, 3: 5434.0, 4: 12321.0}]
# translate prediction
prediction = [3, 4, 1]
print [mapping[k] for k in prediction]
[u'd', u'e', u'b']
</code></pre>
<p>It's not straightforward, but I guess it will take less time than using the encoder :)</p>
| 1 | 2016-08-08T22:40:40Z | [
"python",
"dictionary",
"scikit-learn",
"text-classification",
"multilabel-classification"
] |
Using label encoder on a dictionary | 38,839,211 | <p>I am using the sklearn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow">LabelEncoder</a>. I know how to use it for a 1D array, but my use case is as such:</p>
<p>I have multiple arrays of dicts like this (which is effectively the cost of me assigning each text label <code>u'a'</code>,<code>u'b'</code> etc in a classifier), all within a dict:</p>
<pre><code>{'open_model':
[
{u'a': 47502.125, u'c': 45.3, u'd': 2.3, u'e': 0.45},
{u'b': 121, u'a': 1580.5625, u'c': 12, u'e': 62,u'd':0.343},
{u'e': 12321, u'b': 4, u'a': 0.1112}
],
'closed_model':
[
{u'a': 1231.22, u'c': 43.1},
{u'b': 342.2, u'a': 121.1, u'c': 343},
{u'b': 14.2, u'a': 53.2}
]
}
</code></pre>
<p>I need to be able to encode this into numerical labels and then decode all of them back, so for example:</p>
<pre><code>[
{1: 47502.125, 3: 45.3, 4: 2.3, 5: 0.45},
{2: 121, 1: 1580.5625, 3: 12, 5: 62, 4: 0.343},
{5: 12321, 2: 4, 1: 0.1112}
]
</code></pre>
<p>Which I use effectively to generate predictions of the best label for each row, so:</p>
<pre><code>[5, 4, 1] perhaps in this case.
</code></pre>
<p>What I need to do is to be able to decode this back into:</p>
<pre><code>[u'e',u'd', u'a'] perhaps in this case.
</code></pre>
<p>How can I get the same <code>LabelEncoder</code> functionality but to <code>fit_transform</code> on an array of dicts where the dict keys are my labels?</p>
<p>Note, each dict within the array of dicts has a different length, but I do have a list of all the potential labels, i.e. for the open_model labels, <code>set([u'a',u'b',u'c',u'd',u'e'])</code> and for the closed_model labels: <code>set([u'a',u'b',u'c'])</code>.</p>
| 0 | 2016-08-08T22:04:38Z | 38,843,254 | <p>Although it is a good practice to use already implemented functionality, you could easily achieve this with a couple of lines of code. Given your list input:</p>
<pre><code>dico = [
{u'a': 47502.125, u'b': 1580.5625, u'c': 45.3, u'd': 2.3, u'e': 0.45},
{u'b': 121, u'a': 1580.5625, u'c': 12, u'e': 62, u'd': 0.343},
{u'e': 12321, u'b': 4, u'd': 5434, u'c': 2.3, u'a': 0.1112}
]
</code></pre>
<p>you can get the set of labels by simply:</p>
<pre><code>keyset = set(dico[0].keys()) #Get the set of keys assuming they all appear in each list item.
mapping = { val:key+1 for key,val in enumerate(list(keyset))} # Create a mapping from str -> int
inv_mapping = { key+1:val for key,val in enumerate(list(keyset))} # Create a mapping from int -> str
</code></pre>
<p>Having the <code>mapping</code> and <code>inv_mapping</code> you can change the representation of your data by:</p>
<pre><code>for inner_dict in dico:
for key in inner_dict.keys():
inner_dict[mapping[key]] = inner_dict.pop(key)
print dico
</code></pre>
<p>which will give you <code>[{1: 47502.125, ...}]</code> and then if needed:</p>
<pre><code>for inner_dict in dico:
for key in inner_dict.keys():
inner_dict[inv_mapping[key]] = inner_dict.pop(key)
print dico
</code></pre>
<p>to get the initial version.</p>
<p>Also, and maybe more closely related to your issue, having your output <code>[5, 4, 1]</code> you can easily transform it by:</p>
<pre><code>output = [5, 4, 1]
print [inv_mapping[i] for i in output]
</code></pre>
| 1 | 2016-08-09T06:09:31Z | [
"python",
"dictionary",
"scikit-learn",
"text-classification",
"multilabel-classification"
] |
What is the advantage of running python script using command line? | 38,839,215 | <p>I am a beginner to python and programming in general. As I am learning python, I am trying to develop a good habit or follow a good practice. So let me first explain what I am currently doing.</p>
<p>I use Emacs (prelude) to execute python scripts. The keybinding <code>C-c</code> <code>C-c</code> evaluates the buffer which contains the python script. Then I get a new buffer with a python interpreter with >>> prompt. In this environment all the variables used in the scripts are accessible. For example, if <code>x</code> and <code>y</code> were defined in the script, I can do <code>>>> x + y</code> to evaluate it. </p>
<p>I see many people (if not most) around me using the command line to execute the python script (i.e., <code>$ python scriptname.py</code>). If I do this, then I return to the shell prompt, and I am not able to access the variables <code>x</code> and <code>y</code> to perform <code>x + y</code>. So I wasn't sure what the advantage of running python scripts from the command line is.</p>
<p>Should I just use Emacs as an editor and use Terminal (I am using Mac) to execute the script? What is a better practice?</p>
<p>Thank you!</p>
| 0 | 2016-08-08T22:04:56Z | 38,909,330 | <p>People use different tools for different purposes. </p>
<p>An important question about the interface into any program is who is the user? You, as a programmer, will use the interpreter to test a program and check for errors. Often times, the user doesn't really need to access the variables inside because they are not interacting with the application/script with an interpreter. For example, with Python web applications, there is usually a main.py script to redirect client HTTP requests to appropriate handlers. These handlers execute a python script automatically when a client requests it. That output is then displayed to the user. In Python web applications, unless you are the developer trying to eliminate a bug in the program, you usually don't care about accessing variables within a file like main.py (in fact, giving the client access to those variables would pose a security issue in some cases). Since you only need the output of a script, you'd execute that script function in command line and display the result to the client.</p>
<p>About best practices: again, depends on what you are doing.</p>
<p>Using the python interpreter for computation is okay for smaller testing of isolated functions but it doesn't work for larger projects where there are more moving parts in a python script. If you have a python script reaching a few hundred lines, you won't really remember or need to remember variable names. In that case, it's better to execute the script in command-line, since you don't need access to the internal components. </p>
<p>You want to create a new script file if you are fashioning that script for a single set of tasks. For example with the handlers example above, the functions in main.py are all geared towards handling HTTP requests. For something like defining x, defining y, and then adding it, you don't really need your own file since you aren't creating a function that you might need in the future and adding two numbers is a built-in method. However, say you have a bunch of functions you've created that aren't available in a built-in method (complicated example: softmax function to reduce K dimension vector to another K dimension vector where every element is a value between 0 and 1 and all the elements sum to 1), you want to capture in a script file and cite that script's procedure later. In that case, you'd create your own script file and cite it in a different python script to execute. </p>
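<p>As a side note, the two workflows can be combined: CPython's <code>-i</code> flag runs a script and then drops you into the interactive interpreter with the script's variables still in scope, so <code>$ python -i scriptname.py</code> lets you evaluate <code>x + y</code> afterwards. A small sketch simulating such a session (the variable names are just illustrative):</p>

```python
# A sketch showing CPython's -i flag: `python -i scriptname.py` runs the
# script, then drops into the >>> prompt with the script's variables still
# defined. Here we simulate that session non-interactively.
import subprocess
import sys

script = "x = 1\ny = 2\n"                      # stand-in for scriptname.py
proc = subprocess.run(
    [sys.executable, "-i", "-c", script],      # -i: stay interactive afterwards
    input="print(x + y)\n",                    # what you would type at >>>
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # 3
```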
| 0 | 2016-08-12T03:18:10Z | [
"python",
"command-line",
"emacs"
] |
json error in loading from file (python) | 38,839,246 | <p>I'm learning how to use json in python and have encountered this problem:
The next two code blocks are run separately from the same directory: </p>
<pre><code>x=[1,-1,[1]]
import json
f=open('states','w')
f.close()
f=open('states','r+')
json.dump(x,f)
json.dump(x,f)
f.close()
f=open('states','r+')
y=json.load(f)
f.close()
print y
</code></pre>
<p>The first part seems to run fine however when i run the second part this error occurs:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-41-e06f9ba74fae> in <module>()
1 f=open('states','r+')
----> 2 y=json.load(f)
3 f.close()
4 print y
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\__init__.pyc in load(fp, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
288 parse_float=parse_float, parse_int=parse_int,
289 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook,
--> 290 **kw)
291
292
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
336 parse_int is None and parse_float is None and
337 parse_constant is None and object_pairs_hook is None and not kw):
--> 338 return _default_decoder.decode(s)
339 if cls is None:
340 cls = JSONDecoder
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\decoder.pyc in decode(self, s, _w)
367 end = _w(s, end).end()
368 if end != len(s):
--> 369 raise ValueError(errmsg("Extra data", s, end, len(s)))
370 return obj
371
ValueError: Extra data: line 1 column 13 - line 1 column 25 (char 12 - 24)
</code></pre>
<p>Why is this happening? I tried changing x to an int and a float and the same error occurs. Thank you for any help ^^.</p>
| 0 | 2016-08-08T22:08:14Z | 38,839,494 | <p>The error is that you dump the JSON twice. So when you want to load it again, it is not well formed. Try to dump only once and retry. Or verify that your JSON is correct in the file you saved.</p>
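<p>For instance, a minimal sketch — two consecutive dumps leave <code>[1, -1, [1]][1, -1, [1]]</code> in the file, which is not a single valid JSON document; wrapping both objects in one enclosing list keeps the file well formed:</p>

```python
import json

x = [1, -1, [1]]

# A single dump of a single top-level object instead of two dumps in a row.
with open('states', 'w') as f:
    json.dump([x, x], f)

with open('states') as f:
    y = json.load(f)

print(y)  # [[1, -1, [1]], [1, -1, [1]]]
```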
| 2 | 2016-08-08T22:34:41Z | [
"python",
"json"
] |
json error in loading from file (python) | 38,839,246 | <p>I'm learning how to use json in python and have encountered this problem:
The next two code blocks are run separately from the same directory: </p>
<pre><code>x=[1,-1,[1]]
import json
f=open('states','w')
f.close()
f=open('states','r+')
json.dump(x,f)
json.dump(x,f)
f.close()
f=open('states','r+')
y=json.load(f)
f.close()
print y
</code></pre>
<p>The first part seems to run fine however when i run the second part this error occurs:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-41-e06f9ba74fae> in <module>()
1 f=open('states','r+')
----> 2 y=json.load(f)
3 f.close()
4 print y
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\__init__.pyc in load(fp, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
288 parse_float=parse_float, parse_int=parse_int,
289 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook,
--> 290 **kw)
291
292
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
336 parse_int is None and parse_float is None and
337 parse_constant is None and object_pairs_hook is None and not kw):
--> 338 return _default_decoder.decode(s)
339 if cls is None:
340 cls = JSONDecoder
C:\Users\Yael\Downloads\WinPython-64bit-2.7.10.2\python-2.7.10.amd64\lib\json\decoder.pyc in decode(self, s, _w)
367 end = _w(s, end).end()
368 if end != len(s):
--> 369 raise ValueError(errmsg("Extra data", s, end, len(s)))
370 return obj
371
ValueError: Extra data: line 1 column 13 - line 1 column 25 (char 12 - 24)
</code></pre>
<p>Why is this happening? I tried changing x to an int and a float and the same error occurs. Thank you for any help ^^.</p>
| 0 | 2016-08-08T22:08:14Z | 38,839,662 | <blockquote>
<p>I'm learning how to use json in python</p>
</blockquote>
<p>Alright, here's some examples. </p>
<p><strong>Write to a file</strong></p>
<pre><code>import json
x=[1,-1,[1]]
with open('states.txt', 'wb') as f:
json.dump(x, f)
</code></pre>
<p><strong>Read from a file</strong></p>
<pre><code>import json
with open('states.txt') as f:
y = json.load(f)
print(y) # [1, -1, [1]]
</code></pre>
| 0 | 2016-08-08T22:51:32Z | [
"python",
"json"
] |
What is the best way to send Python generated data to PHP? | 38,839,269 | <p>I have a Raspberry PI collecting data from a break beam sensor I wish to use as part of an already developed Laravel application. I was just wondering what would the best way to transfer the data be.</p>
<p>I was thinking of creating an JSON file uploading it to a directory then running a cron job hourly to pick up on new files before running them through the Laravel controller to update the database and send emails.</p>
<p>I would like to pass the data through the Laravel application rather than sending from Python for management purposes. Could anyone see any issues with my way/ know a better way?</p>
| 1 | 2016-08-08T22:10:07Z | 38,839,396 | <p>Use Python to extract the data from the Raspberry Pi's serial ports, JSON-encode it, and store it in the web directory of your Laravel project files. Later, JSON-decode it and present the data on the web end via Laravel/PHP. That approach works fine. That being said, another way is to take the data from Python and make a curl POST request to your PHP project to deliver it.</p>
| 1 | 2016-08-08T22:24:07Z | [
"php",
"python",
"json",
"laravel"
] |
What is the best way to send Python generated data to PHP? | 38,839,269 | <p>I have a Raspberry PI collecting data from a break beam sensor I wish to use as part of an already developed Laravel application. I was just wondering what would the best way to transfer the data be.</p>
<p>I was thinking of creating an JSON file uploading it to a directory then running a cron job hourly to pick up on new files before running them through the Laravel controller to update the database and send emails.</p>
<p>I would like to pass the data through the Laravel application rather than sending from Python for management purposes. Could anyone see any issues with my way/ know a better way?</p>
| 1 | 2016-08-08T22:10:07Z | 38,839,403 | <p>Your approach sounds fine - the only caveat would be that you will not have "real time" data. You rely on the schedule of your cron jobs to sync the data around - of course you could do this every minute if you wanted to, which would minimize most of the effect of that delay.</p>
<p>The other option is to expose an API in your Laravel application which can accept the JSON payload from your python script and process it immediately. This approach offers the benefits of real-time processing and less processing overall because it's <em>on demand</em>, but also requires you to properly secure your API endpoint which you wouldn't need to do with a cron based approach.</p>
<p>For the record, I highly recommend using JSON as the data transfer format. Unless you need to implement schema validation (in which case possibly look as XML), using JSON is easy on both PHP and python's side.</p>
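<p>A minimal sketch of the on-demand push from the Python side — the route URL is a placeholder for whatever API endpoint you expose in Laravel, and in practice you would add authentication before sending:</p>

```python
import json
import urllib.request

payload = {"sensor": "break_beam", "count": 42}        # illustrative reading
req = urllib.request.Request(
    "http://example.com/api/sensor-readings",          # hypothetical Laravel route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment on the Pi once the endpoint exists
print(req.get_method())  # POST
```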
| 2 | 2016-08-08T22:24:57Z | [
"php",
"python",
"json",
"laravel"
] |
Reducing an array of dictionaries based on condition and price | 38,839,334 | <p>I would like to reduce the following list maybe using a lambda function. I know that I could first iterate over the entire list and split it into sublists based on the condition value. Then I would iterate over the sublists to get the minimum price.</p>
<pre><code>price_list = [{'price':10.8,'condition':'new'},{'price':6.9,'condition':'new'},{'price':3.8,'condition':'used'},{'price':1.8,'condition':'used'}]
</code></pre>
<p>The final list should only contain one item per condition with the minimum price.</p>
<pre><code>final_list = [{'price':6.9,'condition':'new'},{'price':1.8,'condition':'used'}]
</code></pre>
| 0 | 2016-08-08T22:17:48Z | 38,839,499 | <p>You can do:</p>
<pre><code>li=[]
for c in {e['condition'] for e in price_list}:
di={}
di['price']=min(e['price'] for e in price_list if e['condition']==c)
di['condition']=c
li.append(di)
>>> li
[{'price': 6.9, 'condition': 'new'}, {'price': 1.8, 'condition': 'used'}]
</code></pre>
<hr>
<p>As ShadowRanger points out, you do this in one iteration like so:</p>
<pre><code>dd=defaultdict(lambda: float('inf'))
for itemdict in price_list:
cond = itemdict['condition']
dd[cond] = min(dd[cond], itemdict['price'])
li=[]
for k, v in dd.items():
li.append({'price':v, 'condition':k})
</code></pre>
| 0 | 2016-08-08T22:34:58Z | [
"python",
"list",
"dictionary",
"lambda",
"python-3.4"
] |
Reducing an array of dictionaries based on condition and price | 38,839,334 | <p>I would like to reduce the following list maybe using a lambda function. I know that I could first iterate over the entire list and split it into sublists based on the condition value. Then I would iterate over the sublists to get the minimum price.</p>
<pre><code>price_list = [{'price':10.8,'condition':'new'},{'price':6.9,'condition':'new'},{'price':3.8,'condition':'used'},{'price':1.8,'condition':'used'}]
</code></pre>
<p>The final list should only contain one item per condition with the minimum price.</p>
<pre><code>final_list = [{'price':6.9,'condition':'new'},{'price':1.8,'condition':'used'}]
</code></pre>
| 0 | 2016-08-08T22:17:48Z | 38,839,660 | <p>I will do it like this:</p>
<pre><code>from collections import defaultdict
import sys
d = defaultdict(lambda: float('inf'))
for x in price_list:
d[x['condition']]=min(d[x['condition']],x['price'])
[{ 'price':v, 'condition':k } for k,v in d.items() ]
#output:
[{'price': 1.8, 'condition': 'used'}, {'price': 6.9, 'condition': 'new'}]
</code></pre>
| 1 | 2016-08-08T22:51:22Z | [
"python",
"list",
"dictionary",
"lambda",
"python-3.4"
] |
matplotlib.pyplot.errorbar is throwing an error it shouldn't? | 38,839,354 | <p>I'm trying to make an errorbar plot with my data. X is a 9 element ndarray. Y and Yerr are 9x5 ndarrays. When I call:</p>
<pre><code>matplotlib.pyplot.errorbar(X, Y, Yerr)
</code></pre>
<p>I get a ValueError: "yerr must be a scalar, the same dimensions as y, or 2xN."</p>
<p>But <code>Y.shape == Yerr.shape</code> is True.</p>
<p>I'm running on 64 bit Windows 7 with Spyder 2.3.8 and Python 3.5.1. Matplotlib is up to date. I've installed Visual C++ Redistributable for Visual Studio 2015.</p>
<p>Any ideas?</p>
<p>Edit: Some data.</p>
<pre><code>X=numpy.array([1,2,3])
Y=numpy.array([[1,5,2],[3,6,4],[9,3,7]])
Yerr=numpy.ones_like(Y)
</code></pre>
| 1 | 2016-08-08T22:19:46Z | 38,860,377 | <p><strong>Hmmm....</strong></p>
<p>By studying lines 2962-2965 of the module that raises the error we find</p>
<pre><code>if len(yerr) > 1 and not ((len(yerr) == len(y) and not (iterable(yerr[0]) and len(yerr[0]) > 1)))
</code></pre>
<p>From the data</p>
<pre><code>1 T len(yerr) > 1
2 T len(yerr) == len(y)
3 T iterable(yerr[0])
4 T len(yerr[0]) > 1
5 T 1 and not (2 and not (3 and 4))
</code></pre>
<p>However, this will not be triggered if the following test is not passed:</p>
<pre><code>if (iterable(yerr) and len(yerr) == 2 and
iterable(yerr[0]) and iterable(yerr[1])):
....
</code></pre>
<p>And it is not triggered, because len(yerr) = 3</p>
<p>Everything seems to check out, except for the dimensionality. This works:</p>
<pre><code>X = numpy.tile([1,2,3],3)
Y = numpy.array([1,5,2,3,6,4,9,3,7])
Yerr = numpy.ones_like(Y)
</code></pre>
<p>I am not sure what causes the error. The "l0, = " assignment also seems a little quirky.</p>
| 0 | 2016-08-09T21:03:01Z | [
"python",
"python-3.x",
"matplotlib",
"errorbar"
] |
matplotlib.pyplot.errorbar is throwing an error it shouldn't? | 38,839,354 | <p>I'm trying to make an errorbar plot with my data. X is a 9 element ndarray. Y and Yerr are 9x5 ndarrays. When I call:</p>
<pre><code>matplotlib.pyplot.errorbar(X, Y, Yerr)
</code></pre>
<p>I get a ValueError: "yerr must be a scalar, the same dimensions as y, or 2xN."</p>
<p>But <code>Y.shape == Yerr.shape</code> is True.</p>
<p>I'm running on 64 bit Windows 7 with Spyder 2.3.8 and Python 3.5.1. Matplotlib is up to date. I've installed Visual C++ Redistributable for Visual Studio 2015.</p>
<p>Any ideas?</p>
<p>Edit: Some data.</p>
<pre><code>X=numpy.array([1,2,3])
Y=numpy.array([[1,5,2],[3,6,4],[9,3,7]])
Yerr=numpy.ones_like(Y)
</code></pre>
| 1 | 2016-08-08T22:19:46Z | 38,860,687 | <p>Maybe by "dimension of y" the docs actually meant 1xN...</p>
<p>Anyway, this could work:</p>
<pre><code>for y, yerr in zip(Y.T, Yerr.T):  # iterate columns so each series has len(X) points
    matplotlib.pyplot.errorbar(X, y, yerr)
</code></pre>
| 0 | 2016-08-09T21:24:26Z | [
"python",
"python-3.x",
"matplotlib",
"errorbar"
] |
Relationship between two tables, SQLAlchemy | 38,839,365 | <p>I want to make a relationship between AuthorComments and Reply to his comments.</p>
<p>Here is my <strong>models.py</strong>:</p>
<pre><code>class AuthorComments(Base):
id = db.Column(db.Integer, primary_key=True)
author_id = db.Column(db.Integer, db.ForeignKey('author.id'))
name = db.Column(db.String(50))
email = db.Column(db.String(50), unique=True)
comment = db.Column(db.Text)
live = db.Column(db.Boolean)
comments = db.relationship('Reply', backref='reply', lazy='joined')
def __init__(self,author, name, email, comment, live=True):
self.author_id = author.id
self.name = name
self.email = email
self.comment = comment
self.live = live
class Reply(Base):
id = db.Column(db.Integer, primary_key=True)
reply_id = db.Column(db.Integer, db.ForeignKey('author.id'))
name = db.Column(db.String(50))
email = db.Column(db.String(50), unique=True)
comment = db.Column(db.Text)
live = db.Column(db.Boolean)
def __init__(self,author, name, email, comment, live=True):
self.reply_id = author.id
self.name = name
self.email = email
self.comment = comment
self.live = live
</code></pre>
<p>Why am I getting this error:
<strong>sqlalchemy.exc.InvalidRequestError</strong></p>
<p><code>InvalidRequestError: One or more mappers failed to initialize - can't proceed with initialization of other mappers. Original exception was: Could not determine join condition between parent/child tables on relationship AuthorComments.comments - there are no foreign keys linking these tables. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.</code></p>
| 0 | 2016-08-08T22:20:24Z | 38,840,512 | <p>Your trouble is that SQLAlchemy doesn't know, for a given row of the child table (<code>Reply</code>), which row of the parent table (<code>AuthorComments</code>) to select! You need to define a <em>foreign-key</em> column in <code>Reply</code> that references a column of its parent <code>AuthorComments</code>.</p>
<p><a href="http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#one-to-many" rel="nofollow">Here</a> is the documentation on defining one-to-many relationships in SQLAlchemy.</p>
<p>Something like this:</p>
<pre><code>class AuthorComments(Base):
__tablename__ = 'author_comment'
...
class Reply(Base):
...
author_comment_id = db.Column(db.Integer, db.ForeignKey('author_comment.id'))
...
author_comment = db.relationship(
'AuthorComments',
backref='replies',
lazy='joined'
)
</code></pre>
<p>will result in each <code>reply</code> acquiring a relationship to an <code>author_comment</code> such that <code>some_reply.author_comment_id == some_author_comment.id</code>, or <code>None</code> if no such equality exists.</p>
<p>The <code>backref</code> allows each <code>author_comment</code> to, reciprocally, have a relationship to a collection of replies called <code>replies</code>, satisfying the above condition.</p>
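<p>A plain-SQLAlchemy sketch of the same one-to-many pattern (outside Flask, so the <code>db.</code> prefixes are gone; the table and column names here are illustrative):</p>

```python
# Minimal one-to-many demo with an in-memory SQLite database.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class AuthorComments(Base):
    __tablename__ = 'author_comment'
    id = Column(Integer, primary_key=True)
    comment = Column(String)

class Reply(Base):
    __tablename__ = 'reply'
    id = Column(Integer, primary_key=True)
    comment = Column(String)
    # The foreign key tells SQLAlchemy how to join the two tables.
    author_comment_id = Column(Integer, ForeignKey('author_comment.id'))
    author_comment = relationship('AuthorComments', backref='replies')

engine = create_engine('sqlite://')            # in-memory database
Base.metadata.create_all(engine)

with Session(engine) as session:
    parent = AuthorComments(comment='original comment')
    session.add(Reply(comment='a reply', author_comment=parent))
    session.commit()
    print(parent.replies[0].comment)  # a reply
```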
| 2 | 2016-08-09T01:01:39Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy",
"rdbms"
] |
Pygame Button getRect Collidepoint not working? | 38,839,401 | <p>I have finished the main code for my game and I have started on making a menu screen. I can display the buttons on the screen just fine but when I click somewhere I get this <a href="http://i.stack.imgur.com/pOjmi.png" rel="nofollow">Error</a>:</p>
<p>How can I go about fixing this? If I didn't make anything clear in this question please tell me so I can clarify. Thanks!</p>
<p>Here is my code for the menuscreen:</p>
<pre><code>import pygame
import random
import time
pygame.init()
#colours
white = (255,255,255)
black = (0,0,0)
red = (255,0,0)
green = (0,155,0)
blue = (50,50,155)
display_width = 800
display_height = 600
gameDisplay = pygame.display.set_mode((display_width,display_height))
pygame.display.set_caption('Numeracy Ninjas')
clock = pygame.time.Clock()
#Fonts
smallfont = pygame.font.SysFont("comicsansms", 25)
medfont = pygame.font.SysFont("comicsansms", 50)
largefont = pygame.font.SysFont("comicsansms", 75)
#Sprites
img_button_start = pygame.image.load('Sprites/Buttons/button_start.png')
img_button_options = pygame.image.load('Sprites/Buttons/button_options.png')
gameDisplay.fill(white)
pygame.display.update()
class Button(pygame.sprite.Sprite):
def __init__(self, image, buttonX, buttonY):
super().__init__()
gameDisplay.blit(image, (buttonX, buttonY))
pygame.display.update()
selfrect = image.get_rect()
def wasClicked(event):
if selfrect.collidepoint(event.pos):
return True
def gameIntro():
buttons = pygame.sprite.Group()
button_start = Button(img_button_start, 27, 0)
button_options = Button(img_button_options, 27, 500)
buttons.add(button_start)
buttons.add(button_options)
print(buttons)
#main game loop
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONDOWN:
print(event.pos)
#check for every button whether it was clicked
for btn in buttons:
print('forbtninbuttons')
if btn.wasClicked():
print('clicked!')
if event.type == pygame.QUIT:
pygame.quit()
</code></pre>
| 2 | 2016-08-08T22:24:30Z | 38,839,463 | <p>You haven't declared any attributes for you class, just local variables. Try doing <code>self.selfrect = image.get_rect()</code> in your initializer and in your <code>wasClicked(event)</code> method do:</p>
<pre><code>def wasClicked(self, event):
if self.selfrect.collidepoint(event.pos):
return True
</code></pre>
<p>It's usually convention to name your rect variable just <em>rect</em> though.</p>
<pre><code>class Button(pygame.sprite.Sprite):
def __init__(self, image, buttonX, buttonY):
super().__init__()
# This code doesn't make sense here. It should be inside your game loop.
# gameDisplay.blit(image, (buttonX, buttonY))
# pygame.display.update()
self.image = image # It's usually good to have a reference to your image.
self.rect = image.get_rect()
def wasClicked(self, event):
if self.rect.collidepoint(event.pos):
return True
else:
return False
</code></pre>
| 2 | 2016-08-08T22:30:37Z | [
"python",
"button",
"menu",
"pygame"
] |
how to use assert_frame_equal in unittest | 38,839,402 | <p>New to unittest package.
I'm trying to verify the DataFrame returned by a function through the following code. Even though I hardcoded the inputs of <code>assert_frame_equal</code> to be equal (<code>pd.DataFrame([0,0,0,0])</code>), the unittest still fails. Anyone would like to explain why it happens?</p>
<pre><code>import unittest
from pandas.util.testing import assert_frame_equal
class TestSplitWeight(unittest.TestCase):
def test_allZero(self):
#splitWeight(pd.DataFrame([0,0,0,0]),10)
self.assert_frame_equal(pd.DataFrame([0,0,0,0]),pd.DataFrame([0,0,0,0]))
suite = unittest.TestLoader().loadTestsFromTestCase(TestSplitWeight)
unittest.TextTestRunner(verbosity=2).run(suite)
</code></pre>
<pre>Error: AttributeError: 'TestSplitWeight' object has no attribute 'assert_frame_equal'</pre>
| 1 | 2016-08-08T22:24:54Z | 38,839,418 | <p><code>assert_frame_equal()</code> is coming from the <code>pandas.util.testing</code> package, not from the <code>unittest.TestCase</code> class. Replace:</p>
<pre><code>self.assert_frame_equal(pd.DataFrame([0,0,0,0]),pd.DataFrame([0,0,0,0]))
</code></pre>
<p>with:</p>
<pre><code>assert_frame_equal(pd.DataFrame([0,0,0,0]), pd.DataFrame([0,0,0,0]))
</code></pre>
<hr>
<p>When you had <code>self.assert_frame_equal</code>, it tried to find <code>assert_frame_equal</code> attribute on the <code>unittest.TestCase</code> instance, and, since there is not <code>assert_frame_equal</code> attribute or method exposed on an <code>unittest.TestCase</code> class, it raised an <code>AttributeError</code>.</p>
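<p>Putting it together, a sketch of the corrected test case — note that in recent pandas versions the helper lives in <code>pandas.testing</code> rather than <code>pandas.util.testing</code>:</p>

```python
import unittest

import pandas as pd
from pandas.testing import assert_frame_equal  # pandas.util.testing in older releases

class TestSplitWeight(unittest.TestCase):
    def test_allZero(self):
        # Module-level function, not a method on TestCase.
        assert_frame_equal(pd.DataFrame([0, 0, 0, 0]), pd.DataFrame([0, 0, 0, 0]))

suite = unittest.TestLoader().loadTestsFromTestCase(TestSplitWeight)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```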
| 4 | 2016-08-08T22:26:15Z | [
"python",
"unit-testing",
"pandas"
] |
Restrict python script access to os functions | 38,839,407 | <p>I need a fresh idea from you, any help is appreciated.</p>
<p>I am implementing a system where user would be able to upload his own Python scripts and execute them within one of servers.</p>
<p>I do beware about the security issues. I would like to restrict any access to the operating system from this script.</p>
<p>First of all the script get verified with the <code>ast</code> parser to disallow access to many most obvious keywords like <code>exec</code>, <code>import</code>, <code>open</code>, etc.</p>
<p>User can declare usage of some libraries though. One of important ones is <code>pandas</code> library (also I have to provide <code>matplotlib</code>, <code>numpy</code> and others). I have implemented 'proxy' objects, imitating modules, but providing access to ony limited set of attributes. For example I can provide a proxy object <code>json</code>, but access to functions <code>loads</code> or <code>dumps</code> is not allowed.</p>
<p>Most obvious attempts to get an attribute from any object by names <code>os</code>, <code>sys</code>, etc are not allowed too. This way I am trying to close a hole when a user try to access <code>os</code> module with <code>json.os</code> or like this.</p>
<p>This can work, but it is a simple shield. I can review all modules and disallow access to most of the dangerous functions, but even one missed might lead to a potential damage. Also, some modules may be accessed with a tricky way like <code>pandas.tools.util.pd</code> will refer to the original <code>pandas</code> module. I'll spend a year to close everything..</p>
<p>I thought about restricting access on the file system level, but the script runs with <code>eval</code> function within the main process (<code>celery</code>-based) and has the same permissions (and same user) as main process. Theoretically it can read all the sources and pass them to the user.</p>
<p>One of my ideas is to run the script in a separate process with minimal set of sources and permissions, and pass the data to/from it with pipes. But this will require to refactor a lot of code and the stability is not guaranteed - I still need a lot of code around to make it working.</p>
| 3 | 2016-08-08T22:25:31Z | 38,839,433 | <p><a href="https://pypi.python.org/pypi/RestrictedPython/" rel="nofollow">RestrictedPython</a> is what you need.</p>
| 2 | 2016-08-08T22:27:49Z | [
"python",
"security",
"permissions",
"exec",
"eval"
] |
Markov Switching Model in Python Statsmodels | 38,839,465 | <p>I would like to estimate a Markov Switching Model as done in the following:
<a href="http://www.chadfulton.com/posts/mar_hamilton.html" rel="nofollow">http://www.chadfulton.com/posts/mar_hamilton.html</a></p>
<p>However, when I try to import the function to fit the model, i.e. </p>
<pre><code>from statsmodels.tsa.mar_model import MAR
</code></pre>
<p>I get the following error message:</p>
<pre><code>ImportError: No module named 'statsmodels.tsa.mar_model'
</code></pre>
<p>What can I do to solve this error?</p>
| 0 | 2016-08-08T22:31:02Z | 38,843,922 | <p>A new version of Statsmodels including the Markov switching code has not yet (at least as of 8/8/16) been released. If you are using an older version of Statsmodels (e.g. 0.6.1) then the code will not be available for you.</p>
<p>A release candidate (0.8.0rc1) is available on PyPi, or you can download and install the cutting edge development version from Github (<a href="https://github.com/statsmodels/statsmodels/" rel="nofollow">https://github.com/statsmodels/statsmodels/</a>).</p>
<p>It is possible that a final release of v0.8 will happen this month, but nothing is certain yet.</p>
| 0 | 2016-08-09T06:48:27Z | [
"python",
"python-import",
"statsmodels",
"hidden-markov-models",
"markov-models"
] |
ctypes to int conversion with variable assigned by reference | 38,839,525 | <pre><code>def t(n):
print(n)
t(byref(c_int(5)))
</code></pre>
<p>prints <code><cparam 'P' (0000019607137590)></code></p>
<p>How can I convert the value printed above to a Python <code>int</code>, so that it would simply print <code>5</code>?</p>
<p>I tried with <code>c_int(n).value</code>, but that didn't cut it.</p>
| 0 | 2016-08-08T22:37:42Z | 38,839,569 | <p><code>byref</code> is for (fast) passing to C level functions; if you need to use the pointer at the Python layer, use <code>pointer</code> (slow, but accessible in Python); for pointer types, you dereference with indexing (since this is a single value, it's <code>[0]</code>):</p>
<pre><code>def t(n):
print(n[0])
# Or to get back the c_int without converting to Python int:
# print(n.contents)
t(pointer(c_int(5)))
</code></pre>
<p>If you just want to pass the <code>c_int</code> within Python though, all <code>c_int</code>s are references anyway, so you could just do:</p>
<pre><code>def t(n):
print(n.value)
t(c_int(5))
</code></pre>
<p>And assigning to or reading from <code>n.value</code> will change the caller's <code>c_int</code>.</p>
| 1 | 2016-08-08T22:43:21Z | [
"python"
] |
list of dictionaries from kwargs of key:set pair | 38,839,541 | <pre><code>def funct ( **kwargs ):
#code goes here
</code></pre>
<p>I have variable number of arguments being passed to this function. the arguments are of the form key1 : {set1}, key2 : {set2}, and so on</p>
<p>Inside the funct I want to have the following data structure some how "synthesized" from the given kwargs</p>
<p>if the function is called like
funct ( key1 = {1,2,3}, key2 = {4,5} )
then I want the following</p>
<pre><code>[
{ key1 : 1, key2 : 4 },
{ key1 : 2, key2 : 4},
{ key1 : 3, key2 : 4},
{ key1 : 1, key2 : 5 },
{ key1 : 2, key2 : 5},
{ key1 : 3, key2 : 5}
]
</code></pre>
<p>It should work the same way if funct was passed an arbitrary number of key : set pairs.</p>
<p>How can I accomplish this? The simpler the solution, the better.</p>
<p>using python 3.5</p>
<p>Thanks.</p>
| -1 | 2016-08-08T22:39:39Z | 38,839,742 | <p><a href="https://docs.python.org/dev/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a> is a convenient way to go about this.</p>
<pre><code>import itertools
def dict_product(**kwargs):
"""
Cartesian product of kwargs as dicts with the same keys.
>>> p = list(dict_product(key1=[1, 2, 3], key2=[4, 5]))
>>> p.sort(key=lambda d: d['key1'])
>>> p == [
... {'key1': 1, 'key2': 4},
... {'key1': 1, 'key2': 5},
... {'key1': 2, 'key2': 4},
... {'key1': 2, 'key2': 5},
... {'key1': 3, 'key2': 4},
... {'key1': 3, 'key2': 5},
... ]
True
"""
items = kwargs.items()
keys = [key for key, value in items]
sets = [value for key, value in items]
for values in itertools.product(*sets):
yield dict(zip(keys, values))
</code></pre>
| 1 | 2016-08-08T23:01:36Z | [
"python"
] |
<function at > Python Curses | 38,839,590 | <p>I have a program. </p>
<p>I have made this</p>
<pre><code>scoreboard = '\n'.join([
    '┌────────────┐',
    '│   Player   │',
    '│            │',
    '│     4      │',
    '│            │',
    '│            │',
    '│  Computer  │',
    '│            │',
    '│     5      │',
    '│            │',
    '└────────────┘'])
</code></pre>
<p>Then I have done</p>
<pre><code>score_board = scoreboard
</code></pre>
<p>Then I have defined the function scoreboard.</p>
<pre><code>def scoreboard():
for i, line in enumerate(score_board.splitlines()):
mvaddstr(12 + i, 1, line)
endwin()
</code></pre>
<p>Then I run</p>
<pre><code>scoreboard()
</code></pre>
<p>On my program what it prints is nothing. Then if I press any button, this pops up</p>
<pre><code><function scoreboard at 0x03ACF6A8>
</code></pre>
<p>My full program is here but it is not done. <a href="http://pastebin.com/L1nDNpx2" rel="nofollow">http://pastebin.com/L1nDNpx2</a></p>
<p>How can I make it print the scoreboard?</p>
<p>Thanks!</p>
| -1 | 2016-08-08T22:45:25Z | 38,845,025 | <p>That's because you define the name <code>scoreboard</code> twice: first as a string, then as a function, so the name ends up pointing at the function object.
Try changing the name of the function so the two no longer shadow each other.</p>
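<p>A hedged sketch of the fix — keep the string and the function under different names (<code>mvaddstr</code> is commented out here so the snippet runs without curses, and the scoreboard text is simplified):</p>

```python
score_board = '\n'.join([
    'Player',
    '4',
    'Computer',
    '5',
])  # stand-in for the box-drawn scoreboard string

def draw_scoreboard():  # renamed: no longer clashes with score_board
    for i, line in enumerate(score_board.splitlines()):
        # mvaddstr(12 + i, 1, line)  # the real curses call
        print(line)

draw_scoreboard()
```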
| 0 | 2016-08-09T07:50:39Z | [
"python",
"windows",
"function",
"python-3.x",
"curses"
] |
How do I set a name_scope in a graph to be removed from main graph in python | 38,839,675 | <p>I'm trying to organize my TensorBoard graph such that a certain component is automatically placed on the side when I first initiate TensorBoard.</p>
<p><a href="http://i.stack.imgur.com/xoA39.png" rel="nofollow"><img src="http://i.stack.imgur.com/xoA39.png" alt="enter image description here"></a></p>
<p>I want the <code>save</code> node to be on the right, like this:</p>
<p><a href="http://i.stack.imgur.com/PTft6.png" rel="nofollow"><img src="http://i.stack.imgur.com/PTft6.png" alt="enter image description here"></a></p>
<p>I can right click on <code>save</code> in the graph itself, But I'd rather have it waiting for me this way when I get there.</p>
| 2 | 2016-08-08T22:53:03Z | 38,857,814 | <p>This isn't currently possible, although we are planning to do this as part of a broader push to improve the graph visualizer in the next few months.
If you want, you can file a GitHub issue for tracking (but we will do this either way). </p>
| 1 | 2016-08-09T18:18:01Z | [
"python",
"tensorflow",
"tensorboard"
] |
Remove duplicate items by value from a dictionary | 38,839,765 | <p>I've got the following dictionary:</p>
<pre><code>potential_duplicates = {
432L: (u'one two three', u'one two three'),
433L: (u'one two three', u'one two three'),
434L: (u'whole foods', u'whole foods'),
435L: (u'whole foods', u'whole foods'),
437L: (u'this is a dupe', u'this is a dupe'),
438L: (u'this is a dupe', u'this is a dupe'),
439L: (u'this is a dupe', u'this is a dupe')
}
</code></pre>
<p>Basically I'm removing duplicate entries of items in my database, so essentially I want to keep at least one of these in here, and throw the other in a list of duplicates that need to be removed.</p>
<p>Can I do it with this structure or should I be using lists instead? </p>
| 0 | 2016-08-08T23:04:28Z | 38,839,846 | <p>You can do this with two nested dictionary comprehensions. The inner one consolidates the duplicates by reversing the key and value, and the outer one rebuilds it in the original form.</p>
<pre><code>>>> {k:v for v,k in {v:k for k,v in potential_duplicates.items()}.items()}
{433L: (u'one two three', u'one two three'), 435L: (u'whole foods', u'whole foods'), 439L: (u'this is a dupe', u'this is a dupe')}
</code></pre>
<p>To get a list of the keys that were removed, use a list comprehension to compare the two dicts:</p>
<pre><code>>>> kept = {k:v for v,k in {v:k for k,v in potential_duplicates.items()}.items()}
>>> removed = [k for k in potential_duplicates.keys() if k not in kept]
>>> removed
[432L, 434L, 437L, 438L]
</code></pre>
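<p>An alternative one-pass sketch (not part of the answer above, and with the Python 2 <code>432L</code> long literals dropped so it runs on Python 3): track which values have been seen with a set, keeping the <em>first</em> key for each value — note the nested-comprehension trick above keeps the <em>last</em> key instead — and collecting the later keys as duplicates:</p>

```python
potential_duplicates = {
    432: ('one two three', 'one two three'),
    433: ('one two three', 'one two three'),
    434: ('whole foods', 'whole foods'),
    435: ('whole foods', 'whole foods'),
    437: ('this is a dupe', 'this is a dupe'),
}

seen = set()
kept, removed = {}, []
for key in sorted(potential_duplicates):  # sort so "first" key is deterministic
    value = potential_duplicates[key]
    if value in seen:
        removed.append(key)   # later duplicate of an already-kept value
    else:
        seen.add(value)
        kept[key] = value     # first occurrence wins

print(kept)
print(removed)  # [433, 435]
```

<p>This avoids building two intermediate dictionaries and makes the keep-first vs. keep-last choice explicit.</p>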
| 0 | 2016-08-08T23:18:34Z | [
"python",
"dictionary",
"list-comprehension"
] |
Exporting data to a CSV file, error can't encode character '\xe8' | 38,839,816 | <p>I have a large set of data and I am going to export them into a CSV file using the following code. </p>
<pre><code> with open ('/Users/mz/Dropbox/dis/Programming/zoloft.csv',
'w', newline = '') as zolo:
zolo = csv.writer(zolo, delimiter =',', quotechar='|')
rows = zip(all_rating, all_disorders, all_side_effects,
all_comments, all_gender, all_age, all_dosage_duration, all_date)
for row in rows:
zolo.writerow(row)
</code></pre>
<p>But there is the following error:</p>
<pre><code> zolo.writerow(row)
UnicodeEncodeError: 'ascii' codec can't encode character '\xe8' in position 179: ordinal not in range(128)
</code></pre>
<p>Is there any way to handle this error inside the code I wrote? Thanks !</p>
| 0 | 2016-08-08T23:14:25Z | 38,839,857 | <p>The <a href="https://docs.python.org/2/library/csv.html" rel="nofollow"><code>csv</code> module in Python 2 doesn't have Unicode support</a>, but your dataset clearly contains Unicode characters. So what you can do is something similar to the answer to <a href="http://stackoverflow.com/questions/816285/where-is-pythons-best-ascii-for-this-unicode-database">this question</a>, and translate the Unicode characters to their nearest ASCII equivalents (so your text isn't illegible later).</p>
<p>I'd go with something like:</p>
<pre><code>from unidecode import unidecode
with open('file', 'w', newline = '') as zolo:
zolo = csv.writer(zolo, delimiter =',', quotechar='|')
rows = zip(all_rating, all_disorders, all_side_effects,
all_comments, all_gender, all_age,
all_dosage_duration, all_date)
for row in rows:
zolo.writerow(map(unidecode, row))
</code></pre>
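<p>Note that <code>unidecode</code> is a third-party package (<code>pip install unidecode</code>). If installing it is not an option, the standard library's <code>unicodedata</code> module can do a rougher ASCII fold — a sketch, not a full replacement: it decomposes accented characters and simply drops anything it cannot map, e.g. <code>'\xe8'</code> becomes <code>'e'</code>:</p>

```python
import unicodedata

def ascii_fold(text):
    # Decompose accented characters (NFKD), then drop non-ASCII leftovers
    normalized = unicodedata.normalize('NFKD', text)
    return normalized.encode('ascii', 'ignore').decode('ascii')

print(ascii_fold(u'caf\xe8'))  # cafe
```

<p>You could then apply <code>ascii_fold</code> to each row the same way <code>unidecode</code> is mapped over the row above.</p>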
| 2 | 2016-08-08T23:20:29Z | [
"python",
"csv"
] |
Why is my logging filter being applied to the wrong handler? | 38,839,817 | <p>Here's a simple Python filter that does nothing but put "TEST - " in front of a log message. (The real filter will do more helpful processing later):</p>
<pre><code>class TimeStamp_Filter(logging.Filter):
def filter(self, record):
record.msg = "TEST - " + str(record.msg)
return True
</code></pre>
<p>And here's the config being pulled in from a JSON file and parsed with <code>dictConfig()</code>:</p>
<pre><code>{
"version": 1,
"disable_existing_loggers": false,
"filters": {
"timestamp_filter": {
"()": "TimeStamp_Filter"
}
},
"handlers": {
"file_handler": {
"class": "logging.FileHandler",
"level": "INFO",
"filename": "../log/default.log",
"mode": "a"
},
"console": {
"class": "logging.StreamHandler",
"level": "DEBUG",
"filters": ["timestamp_filter"],
"stream": "ext://sys.stdout"
}
},
"root": {
"level": "DEBUG",
"handlers": ["console", "file_handler"]
}
}
</code></pre>
<p>The filter itself seems to work - if I create a logger and run <code>logger.info("Hello, world!")</code>, I get the output <code>TEST - Hello, world!</code> on screen. </p>
<p>However I <strong>also</strong> get that output (including the "TEST") in my <code>default.log</code> file. I had thought that by attaching the <code>timestamp_filter</code> only to the <code>console</code> handler, I would get that TEST output only on screen. </p>
<p>Why is it also being sent to the <code>file_handler</code> handler and ending up in my log file?</p>
| 0 | 2016-08-08T23:14:26Z | 38,839,889 | <p>You are changing the message of the log record from a filter. That is causing the issue. </p>
<p>Python will apply that filter to your console output alright but when it does, it changes the original log message. So when the log message is passed to the file handler, the message has changed already and contains that extra input. </p>
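<p>A minimal, self-contained demonstration of that leak (hypothetical logger and handler names; both handlers end up emitting the mutated message even though the filter is attached to only one of them):</p>

```python
import io
import logging

class PrefixFilter(logging.Filter):
    def filter(self, record):
        record.msg = "TEST - " + str(record.msg)  # mutates the shared record
        return True

logger = logging.getLogger("filter-demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False

buf_console, buf_file = io.StringIO(), io.StringIO()
filtered_handler = logging.StreamHandler(buf_console)
filtered_handler.addFilter(PrefixFilter())       # filter on this handler only
plain_handler = logging.StreamHandler(buf_file)  # no filter attached

logger.addHandler(filtered_handler)
logger.addHandler(plain_handler)
logger.info("Hello, world!")

print(buf_console.getvalue().strip())  # TEST - Hello, world!
print(buf_file.getvalue().strip())     # TEST - Hello, world!  (leaked)
```

<p>Both buffers contain the prefixed message, because one <code>LogRecord</code> object is shared by every handler.</p>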
<p>If you want to change the log format for specific handlers, you should consider using formatters instead. Filtering is for selecting which message gets logged and which one should not. </p>
<p><strong>Update:</strong> As per the comments, here's a sample code explaining how we can use custom formatter and handle business logic inside it. </p>
<pre><code>import logging
import sys
class CustomFormatter(logging.Formatter):
def format(self, record):
mycondition = True # Here goes your business logic
formatted_message = super().format(record=record)
if mycondition:
formatted_message += "TEST"
return formatted_message
logger = logging.getLogger("test")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(stream=sys.stdout)
handler.setFormatter(CustomFormatter())
logger.addHandler(handler)
logger.info("Hello!")
</code></pre>
| 0 | 2016-08-08T23:23:20Z | [
"python",
"logging",
"filter"
] |
SWIG: How to return a struct using %apply? 'No typemaps defined' warning | 38,839,828 | <p>I currently have a function that uses a struct as a buffer to return some information, like so:</p>
<pre><code>int example_reader(int code, void* return_struct);
</code></pre>
<p>My goal is to make it so that when I wrap this function using SWIG so that it can be used in Python, I will return the struct along with the function's regular return value. Thus far, I have been doing so using the %apply command like so:</p>
<pre><code>%apply struct ret_struct *OUTPUT {void* return_struct};
</code></pre>
<p>However, when I add the above line to my .i file and try to run SWIG, I get the following warning:</p>
<p>"warning 453: Can't apply (struct ret_struct *OUTPUT. No typemaps are defined"</p>
<p>I believe I'm including the .h file that defines the struct I'm trying to return, so I've had trouble pinpointing the issue. Please correct me if the issue seems to involve the improper inclusion of the struct. I've tried reading through the SWIG documentation as well as other Stack Overflow posts to get some inkling of what the problem might be, but I haven't been able to figure it out thus far. The problem is made slightly trickier because I am trying to return a void pointer to a struct, and the code I'm trying to wrap could have multiple kinds of structs for me to return. What would be a wise way of handling the return of this struct? Thank you!</p>
| 1 | 2016-08-08T23:16:07Z | 38,910,594 | <p>I have given here a full C example where an opaque interface is used to return a struct to the target language together with a return value. This way you can expose a proper interface with no implementation details — not even a destructor — in the header. If you don't want to use an interface like this, you can instead let SWIG and Python know exactly how the data are represented.</p>
<p>Interface header: foo.h</p>
<pre><code>typedef struct _Foo Foo;
int foo_new(Foo **obj);
int foo_free(Foo *obj);
int foo_get_value_a(Foo *obj, int *result);
int foo_set_value_a(Foo *obj, int value);
int foo_get_value_b(Foo *obj, char **result);
int foo_set_value_b(Foo *obj, char *value);
</code></pre>
<p>SWIG interface: foo.i</p>
<pre><code>%module foo
%{
#include "foo.h"
%}
%include "typemaps.i"
%typemap(in, numinputs=0) Foo ** (Foo *temp) {
$1 = &temp;
}
%typemap(argout) Foo ** {
PyObject* temp = NULL;
if (!PyList_Check($result)) {
temp = $result;
$result = PyList_New(1);
PyList_SetItem($result, 0, temp);
}
temp = SWIG_NewPointerObj(*$1, SWIGTYPE_p__Foo, SWIG_POINTER_NEW);
PyList_Append($result, temp);
Py_DECREF(temp);
}
%delobject foo_free; // Protect for double deletion
struct _Foo {};
%extend _Foo {
~_Foo() {
foo_free($self);
}
};
%ignore _Foo;
</code></pre>
<p>Some implementation of the interface: foo.c</p>
<pre><code>#include "foo.h"
#include "stdlib.h"
#include "string.h"
struct FooImpl {
char* c;
int i;
};
int foo_new(Foo **obj)
{
struct FooImpl* f = (struct FooImpl*) malloc(sizeof(struct FooImpl));
f->c = NULL;
*obj = (Foo*) f;
return 0;
}
int foo_free(Foo *obj)
{
struct FooImpl* impl = (struct FooImpl*) obj;
if (impl) {
if (impl->c) {
free(impl->c);
impl->c = NULL;
}
}
return 0;
}
int foo_get_value_a(Foo *obj, int *result)
{
struct FooImpl* impl = (struct FooImpl*) obj;
*result = impl->i;
return 0;
}
int foo_set_value_a(Foo *obj, int value)
{
struct FooImpl* impl = (struct FooImpl*) obj;
impl->i = value;
return 0;
}
int foo_get_value_b(Foo *obj, char **result)
{
struct FooImpl* impl = (struct FooImpl*) obj;
*result = impl->c;
return 0;
}
int foo_set_value_b(Foo *obj, char *value)
{
struct FooImpl* impl = (struct FooImpl*) obj;
int len = strlen(value);
if (impl->c) {
free(impl->c);
}
impl->c = (char*)malloc(len+1);
strcpy(impl->c,value);
return 0;
}
</code></pre>
<p>Script for building</p>
<pre><code>#!/usr/bin/env python
from distutils.core import setup, Extension
import os
os.environ['CC'] = 'gcc';
setup(name='foo',
version='1.0',
ext_modules =[Extension('_foo',
['foo.i','foo.c'])])
</code></pre>
<p>Usage:</p>
<pre><code>import foo
OK, f = foo.foo_new()
OK = foo.foo_set_value_b(f, 'Hello world!')
OK = foo.foo_free(f)
OK, f = foo.foo_new()
# Test safe to double delete
del f
</code></pre>
| 1 | 2016-08-12T05:30:22Z | [
"python",
"c++",
"c",
"swig"
] |
Python sorting str price with two decimal points | 38,839,892 | <p>My goal: Sort a <code>list</code> of Products (<code>dict</code>) first by Price, then by Name.
My problem: <code>Str</code> values with numbers in them aren't sorted properly (AKA "Human sorting" or "Natural Sorting").</p>
<p>I found this function from a similar question:
<a href="http://stackoverflow.com/questions/1143671/python-sorting-list-of-dictionaries-by-multiple-keys">Python sorting list of dictionaries by multiple keys</a></p>
<pre><code>def multikeysort(items, columns):
from operator import itemgetter
comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else
(itemgetter(col.strip()), 1)) for col in columns]
def comparer(left, right):
for fn, mult in comparers:
result = cmp(fn(left), fn(right))
if result:
return mult * result
else:
return 0
return sorted(items, cmp=comparer)
</code></pre>
<p>The problem is that my Prices are <code>str</code> type, like this:</p>
<pre><code>products = [
{'name': 'Product 200', 'price': '3000.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 2', 'price': '4.99'},
]
</code></pre>
<p>So they're getting sorted alphabetically, like this:</p>
<pre><code>'100.10'
'20.00'
'3000.00'
'4.99'
'5.05'
</code></pre>
<p>Similarly, when I sort by name, I get this:</p>
<pre><code>'Product 1'
'Product 15'
'Product 2'
'Product 200'
'Product 4'
</code></pre>
<p>The names should be listed in "human" order (1,2,15 instead of 1,15,2). Is it possible to fix this? I'm pretty new to python, so maybe I'm missing something vital. Thanks.</p>
<p><strong>EDIT</strong></p>
<p>More Info: I'm sending the list of products to a Django template, which requires the numbers to be properly formatted. If I float the prices and then un-float them, I have to iterate through the list of products twice, which seems like overkill.</p>
| 0 | 2016-08-08T23:23:49Z | 38,839,941 | <p>I think your best bet is to parse the prices as floats (so you can sort them):</p>
<pre><code>float("1.00")
# output: 1.0
</code></pre>
<p>Then output them with two decimal places:</p>
<pre><code>"{:.2f}".format(1.0)
# output: "1.00"
</code></pre>
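<p>Putting the two steps together for the Django-template case from the question (a sketch with sample data: prices are parsed once for sorting, then re-formatted once for display):</p>

```python
products = [
    {'name': 'Product 200', 'price': '3000.00'},
    {'name': 'Product 4', 'price': '100.10'},
    {'name': 'Product 2', 'price': '4.99'},
]

# Sort on the numeric value, then the name
products.sort(key=lambda p: (float(p['price']), p['name']))

# Re-format for display with two decimal places
for p in products:
    p['price'] = "{:.2f}".format(float(p['price']))

print([p['price'] for p in products])  # ['4.99', '100.10', '3000.00']
```

<p>This is still only two passes over the list, and the second pass can be folded into a template filter if you'd rather not touch the data.</p>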
| 2 | 2016-08-08T23:31:08Z | [
"python",
"sorted"
] |
Python sorting str price with two decimal points | 38,839,892 | <p>My goal: Sort a <code>list</code> of Products (<code>dict</code>) first by Price, then by Name.
My problem: <code>Str</code> values with numbers in them aren't sorted properly (AKA "Human sorting" or "Natural Sorting").</p>
<p>I found this function from a similar question:
<a href="http://stackoverflow.com/questions/1143671/python-sorting-list-of-dictionaries-by-multiple-keys">Python sorting list of dictionaries by multiple keys</a></p>
<pre><code>def multikeysort(items, columns):
from operator import itemgetter
comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else
(itemgetter(col.strip()), 1)) for col in columns]
def comparer(left, right):
for fn, mult in comparers:
result = cmp(fn(left), fn(right))
if result:
return mult * result
else:
return 0
return sorted(items, cmp=comparer)
</code></pre>
<p>The problem is that my Prices are <code>str</code> type, like this:</p>
<pre><code>products = [
{'name': 'Product 200', 'price': '3000.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 2', 'price': '4.99'},
]
</code></pre>
<p>So they're getting sorted alphabetically, like this:</p>
<pre><code>'100.10'
'20.00'
'3000.00'
'4.99'
'5.05'
</code></pre>
<p>Similarly, when I sort by name, I get this:</p>
<pre><code>'Product 1'
'Product 15'
'Product 2'
'Product 200'
'Product 4'
</code></pre>
<p>The names should be listed in "human" order (1,2,15 instead of 1,15,2). Is it possible to fix this? I'm pretty new to python, so maybe I'm missing something vital. Thanks.</p>
<p><strong>EDIT</strong></p>
<p>More Info: I'm sending the list of products to a Django template, which requires the numbers to be properly formatted. If I float the prices and then un-float them, I have to iterate through the list of products twice, which seems like overkill.</p>
| 0 | 2016-08-08T23:23:49Z | 38,839,947 | <p>Try typecasting them to floats, and when you need to print to 2 decimal places you can easily format the output like so:</p>
<pre><code>float_num = float("110.10")
print "{0:.2f}".format(float_num) # prints 110.10
</code></pre>
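<p>For money values specifically, the standard library's <code>decimal.Decimal</code> is a safer parse target than <code>float</code> — a sketch, not part of the answer above: <code>Decimal</code> compares numerically while the list keeps its original string formatting, so no re-formatting step is needed afterwards:</p>

```python
from decimal import Decimal

prices = ['3000.00', '100.10', '20.00', '5.05', '4.99']
prices.sort(key=Decimal)  # each string is parsed exactly, with no float rounding

print(prices)  # ['4.99', '5.05', '20.00', '100.10', '3000.00']
```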
| 1 | 2016-08-08T23:31:27Z | [
"python",
"sorted"
] |
Python sorting str price with two decimal points | 38,839,892 | <p>My goal: Sort a <code>list</code> of Products (<code>dict</code>) first by Price, then by Name.
My problem: <code>Str</code> values with numbers in them aren't sorted properly (AKA "Human sorting" or "Natural Sorting").</p>
<p>I found this function from a similar question:
<a href="http://stackoverflow.com/questions/1143671/python-sorting-list-of-dictionaries-by-multiple-keys">Python sorting list of dictionaries by multiple keys</a></p>
<pre><code>def multikeysort(items, columns):
from operator import itemgetter
comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else
(itemgetter(col.strip()), 1)) for col in columns]
def comparer(left, right):
for fn, mult in comparers:
result = cmp(fn(left), fn(right))
if result:
return mult * result
else:
return 0
return sorted(items, cmp=comparer)
</code></pre>
<p>The problem is that my Prices are <code>str</code> type, like this:</p>
<pre><code>products = [
{'name': 'Product 200', 'price': '3000.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 2', 'price': '4.99'},
]
</code></pre>
<p>So they're getting sorted alphabetically, like this:</p>
<pre><code>'100.10'
'20.00'
'3000.00'
'4.99'
'5.05'
</code></pre>
<p>Similarly, when I sort by name, I get this:</p>
<pre><code>'Product 1'
'Product 15'
'Product 2'
'Product 200'
'Product 4'
</code></pre>
<p>The names should be listed in "human" order (1,2,15 instead of 1,15,2). Is it possible to fix this? I'm pretty new to python, so maybe I'm missing something vital. Thanks.</p>
<p><strong>EDIT</strong></p>
<p>More Info: I'm sending the list of products to a Django template, which requires the numbers to be properly formatted. If I float the prices and then un-float them, I have to iterate through the list of products twice, which seems like overkill.</p>
| 0 | 2016-08-08T23:23:49Z | 38,839,958 | <p>Your sort function is overkill. Try this simple approach:</p>
<pre><code>from pprint import pprint
products = [
{'name': 'Product 200', 'price': '3000.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 2', 'price': '4.99'},
]
sorted_products = sorted(products, key=lambda x: (float(x['price']), x['name']))
pprint(sorted_products)
</code></pre>
<p>Result:</p>
<pre><code>[{'name': 'Product 2', 'price': '4.99'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 200', 'price': '3000.00'}]
</code></pre>
<p>The essence of my solution is to have the <code>key</code> function return a tuple of the sort conditions. Tuples always compare lexicographically, so the first item is the primary sort, the second is the secondary sort, and so on.</p>
| 3 | 2016-08-08T23:32:29Z | [
"python",
"sorted"
] |
Python sorting str price with two decimal points | 38,839,892 | <p>My goal: Sort a <code>list</code> of Products (<code>dict</code>) first by Price, then by Name.
My problem: <code>Str</code> values with numbers in them aren't sorted properly (AKA "Human sorting" or "Natural Sorting").</p>
<p>I found this function from a similar question:
<a href="http://stackoverflow.com/questions/1143671/python-sorting-list-of-dictionaries-by-multiple-keys">Python sorting list of dictionaries by multiple keys</a></p>
<pre><code>def multikeysort(items, columns):
from operator import itemgetter
comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else
(itemgetter(col.strip()), 1)) for col in columns]
def comparer(left, right):
for fn, mult in comparers:
result = cmp(fn(left), fn(right))
if result:
return mult * result
else:
return 0
return sorted(items, cmp=comparer)
</code></pre>
<p>The problem is that my Prices are <code>str</code> type, like this:</p>
<pre><code>products = [
{'name': 'Product 200', 'price': '3000.00'},
{'name': 'Product 4', 'price': '100.10'},
{'name': 'Product 15', 'price': '20.00'},
{'name': 'Product 1', 'price': '5.05'},
{'name': 'Product 2', 'price': '4.99'},
]
</code></pre>
<p>So they're getting sorted alphabetically, like this:</p>
<pre><code>'100.10'
'20.00'
'3000.00'
'4.99'
'5.05'
</code></pre>
<p>Similarly, when I sort by name, I get this:</p>
<pre><code>'Product 1'
'Product 15'
'Product 2'
'Product 200'
'Product 4'
</code></pre>
<p>The names should be listed in "human" order (1,2,15 instead of 1,15,2). Is it possible to fix this? I'm pretty new to python, so maybe I'm missing something vital. Thanks.</p>
<p><strong>EDIT</strong></p>
<p>More Info: I'm sending the list of products to a Django template, which requires the numbers to be properly formatted. If I float the prices and then un-float them, I have to iterate through the list of products twice, which seems like overkill.</p>
| 0 | 2016-08-08T23:23:49Z | 38,840,275 | <p>To break ties, should there be any, by sorting on the trailing integer in the product name rather than on the full string, you can return a tuple:</p>
<pre><code>products = [
{'name': 'Product 200', 'price': '2.99'},
{'name': 'Product 4', 'price': '4.99'},
{'name': 'Product 15', 'price': '4.99'},
{'name': 'Product 1', 'price': '9.99'},
{'name': 'Product 2', 'price': '4.99'},
]
def key(x):
p, i = x["name"].rsplit(None, 1)
return float(x["price"]), p, int(i)
sorted_products = sorted(products, key=key)
</code></pre>
<p>Which would give you:</p>
<pre><code>[{'name': 'Product 200', 'price': '2.99'},
{'name': 'Product 2', 'price': '4.99'},
{'name': 'Product 4', 'price': '4.99'},
{'name': 'Product 15', 'price': '4.99'},
{'name': 'Product 1', 'price': '9.99'}]
</code></pre>
<p>As opposed to:</p>
<pre><code>[{'name': 'Product 200', 'price': '2.99'},
{'name': 'Product 15', 'price': '4.99'},
{'name': 'Product 2', 'price': '4.99'},
{'name': 'Product 4', 'price': '4.99'},
{'name': 'Product 1', 'price': '9.99'}]
</code></pre>
<p>using just <code>float(x['price']), x['name']</code></p>
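<p>A more general natural-sort ("human order") key — a sketch, not from the answer above — splits every digit run out of the string so that numbers anywhere in the name compare numerically, e.g. <code>Product 15</code> lands between <code>Product 4</code> and <code>Product 200</code>:</p>

```python
import re

def natural_key(s):
    # Split into alternating text/number chunks; digit chunks compare as ints
    return [int(chunk) if chunk.isdigit() else chunk
            for chunk in re.split(r'(\d+)', s)]

names = ['Product 1', 'Product 15', 'Product 2', 'Product 200', 'Product 4']
print(sorted(names, key=natural_key))
# ['Product 1', 'Product 2', 'Product 4', 'Product 15', 'Product 200']
```

<p>Unlike the <code>rsplit</code> approach, this does not assume the number is the last token of the name.</p>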
| 0 | 2016-08-09T00:17:57Z | [
"python",
"sorted"
] |
elegant way to check file directory path valid in Python 2.7 | 38,839,902 | <p>I am trying to read content of all files under a specific directory. I find it is a bit tricky if path name is not ends with <code>/</code>, then my code below will not work (will have I/O exception since <code>pathName+f</code> is not valid -- missing <code>/</code> in the middle). Here is a code example to show when it works and when it not works,</p>
<p>I can actually check if pathName ends with <code>/</code> by using endsWith, just wondering if more elegant solutions when concatenate path and file name for a full name?</p>
<p>My requirement is, I want to give input pathName more flexible to ends with both <code>\</code> and not ends with <code>\</code>.</p>
<p>Using Python 2.7.</p>
<pre><code>from os import listdir
from os.path import isfile, join
#pathName = '/Users/foo/Downloads/test/' # working
pathName = '/Users/foo/Downloads/test' # not working, since not ends with/
onlyfiles = [f for f in listdir(pathName) if isfile(join(pathName, f))]
for f in onlyfiles:
with open(pathName+f, 'r') as content_file:
content = content_file.read()
print content
</code></pre>
| 2 | 2016-08-08T23:25:19Z | 38,839,996 | <p>You would just use join again:</p>
<pre><code>pathName = '/Users/foo/Downloads/test' # not working, since not ends with/
onlyfiles = [f for f in listdir(pathName) if isfile(join(pathName, f))]
for f in onlyfiles:
with open(join(pathName, f), 'r') as content_file:
content = content_file.read()
print content
</code></pre>
<p>Or you could use <em>glob</em> and forget join:</p>
<pre><code>from glob import glob
pathName = '/Users/foo/Downloads/test' # not working, since not ends with/
onlyfiles = (f for f in glob(join(pathName,"*")) if isfile(f))
for f in onlyfiles:
with open(f, 'r') as content_file:
</code></pre>
<p>or combine it with filter for a more succinct solution:</p>
<pre><code>onlyfiles = filter(isfile, glob(join(pathName,"*")))
</code></pre>
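<p>The reason <code>join</code> works for both inputs — a small demonstration, using <code>posixpath</code> explicitly so the behaviour is the same on any OS: the separator is inserted only when the first part does not already end with one, so no trailing-slash check is needed:</p>

```python
import posixpath

# Without a trailing slash: a separator is inserted
print(posixpath.join('/Users/foo/Downloads/test', 'a.txt'))
# /Users/foo/Downloads/test/a.txt

# With a trailing slash: no extra separator is added
print(posixpath.join('/Users/foo/Downloads/test/', 'a.txt'))
# /Users/foo/Downloads/test/a.txt
```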
| 3 | 2016-08-08T23:36:54Z | [
"python",
"python-2.7"
] |
Generate Integer Random Numbers in Python Array | 38,839,912 | <p>I'm trying to make an array with 30 integer elements between 0 and 2 randomly chosen. When some number is chosen 10 times, i can't append it anymore. In the end, I need an array with 30 elements with 10 numbers 0, 10 numbers 1 and 10 numbers 2. Here's what i'm trying:</p>
<pre><code>import random
array_size = 30
number = 3
counter = [0, 0, 0]
solution = []
for i in range(array_size):
number = random.randrange(number) #generates numbers between 0 and 2
while counter[number] > 10:
number = random.randrange(number)
counter[number] += 1
solution.append(number)
</code></pre>
<p>As result, i have more than 10 elements of the same number. I believe the problem is in the random number that i put in the while is not changed even if i change it inside the loop. Someone know how to do this?</p>
<p>Thanks</p>
| 0 | 2016-08-08T23:27:08Z | 38,839,970 | <pre><code>import math
import random
number = 3
size = 30
steps = math.ceil(size / number)
solution = []
for x in range(steps):
for n in range(number):
solution.append(n)
random.shuffle(solution)
print(solution)
</code></pre>
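<p>An equivalent, arguably simpler construction (a sketch, assuming the count divides evenly as it does here): list repetition builds exactly ten of each value, and <code>collections.Counter</code> verifies the result:</p>

```python
import random
from collections import Counter

solution = [0, 1, 2] * 10   # exactly ten of each value, 30 elements total
random.shuffle(solution)    # randomise the order in place

counts = Counter(solution)
print(len(solution), counts[0], counts[1], counts[2])  # 30 10 10 10
```

<p>This sidesteps the rejection loop in the question entirely: the counts are correct by construction, and only the order is random.</p>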
| 1 | 2016-08-08T23:33:35Z | [
"python",
"arrays",
"random"
] |
Generate Integer Random Numbers in Python Array | 38,839,912 | <p>I'm trying to make an array with 30 integer elements between 0 and 2 randomly chosen. When some number is chosen 10 times, i can't append it anymore. In the end, I need an array with 30 elements with 10 numbers 0, 10 numbers 1 and 10 numbers 2. Here's what i'm trying:</p>
<pre><code>import random
array_size = 30
number = 3
counter = [0, 0, 0]
solution = []
for i in range(array_size):
number = random.randrange(number) #generates numbers between 0 and 2
while counter[number] > 10:
number = random.randrange(number)
counter[number] += 1
solution.append(number)
</code></pre>
<p>As result, i have more than 10 elements of the same number. I believe the problem is in the random number that i put in the while is not changed even if i change it inside the loop. Someone know how to do this?</p>
<p>Thanks</p>
| 0 | 2016-08-08T23:27:08Z | 38,840,029 | <p>Just change </p>
<pre><code>while counter[number] > 10:
</code></pre>
<p>to</p>
<pre><code>while counter[number] >= 10:
</code></pre>
<p>Originally your code would only stop appending a certain number once there were already more than 10 instances of it in your array. By changing it to a >=, the program stops appending the number the moment it has been added for the tenth time.</p>
| 0 | 2016-08-08T23:40:58Z | [
"python",
"arrays",
"random"
] |
How to combine n-grams into one vocabulary in Spark? | 38,839,924 | <p>Wondering if there is a built-in Spark feature to combine 1-, 2-, n-gram features into a single vocabulary. Setting <code>n=2</code> in <code>NGram</code> followed by invocation of <code>CountVectorizer</code> results in a dictionary containing only 2-grams. What I really want is to combine all frequent 1-grams, 2-grams, etc into one dictionary for my corpus.</p>
| 3 | 2016-08-08T23:29:18Z | 39,801,829 | <p>You can train separate <code>NGram</code> and <code>CountVectorizer</code> models and merge using <code>VectorAssembler</code>. </p>
<pre><code>from pyspark.ml.feature import NGram, CountVectorizer, VectorAssembler
from pyspark.ml import Pipeline
def build_ngrams(inputCol="tokens", n=3):
ngrams = [
        NGram(n=i, inputCol=inputCol, outputCol="{0}_grams".format(i))
for i in range(1, n + 1)
]
vectorizers = [
CountVectorizer(inputCol="{0}_grams".format(i),
outputCol="{0}_counts".format(i))
for i in range(1, n + 1)
]
assembler = [VectorAssembler(
inputCols=["{0}_counts".format(i) for i in range(1, n + 1)],
outputCol="features"
)]
return Pipeline(stages=ngrams + vectorizers + assembler)
</code></pre>
<p>Example usage:</p>
<pre><code>df = spark.createDataFrame([
(1, ["a", "b", "c", "d"]),
(2, ["d", "e", "d"])
], ("id", "tokens"))
build_ngrams().fit(df).transform(df)
</code></pre>
| 2 | 2016-10-01T00:32:07Z | [
"python",
"apache-spark",
"nlp",
"pyspark",
"apache-spark-ml"
] |
Comparing Excel dates to current date in Python | 38,840,012 | <p>Python newbie here! :)</p>
<p>Basically, I am trying to scan an excel file's column A (which contains all dates) and if the date in the cell is 7 days in the future...do something. Since I am learning, I am just looking at one cell before I progress and start looping through the data.</p>
<p>Here is my current code which isn't working.</p>
<pre><code>import openpyxl, smtplib, datetime, xlrd
from openpyxl import load_workbook
from datetime import datetime
wb = load_workbook(filename = 'FRANKLIN.xlsx')
sheet = wb.get_sheet_by_name('Master')
msg = 'Subject: %s\n%s' % ("Shift Reminder", "Dear a rem ")
cell = sheet['j7'].value
if xlrd.xldate_as_tuple(cell.datemode) == datetime.today.date() + 7:
print('ok!')
</code></pre>
<p>Here is the error code I am getting: 'datetime.datetime' object has no attribute 'datemode'</p>
<p>I've tried searching high and low, but can't quite find the solution.</p>
| 0 | 2016-08-08T23:38:46Z | 38,840,045 | <p>Your <code>cell</code> variable seems to be <code>datetime.datetime</code> object. So you can compare it like this: </p>
<pre><code>from datetime import timedelta
if cell.date() == (datetime.now().date() + timedelta(days=7)):
print("ok")
</code></pre>
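<p>A quick check of that comparison with fixed, hypothetical dates (so it is reproducible regardless of when it runs — in real code the second value would come from <code>datetime.now()</code>):</p>

```python
from datetime import datetime, timedelta

cell = datetime(2016, 8, 15, 9, 30)   # value as read from the sheet
today = datetime(2016, 8, 8)          # stand-in for datetime.now()

if cell.date() == today.date() + timedelta(days=7):
    print("ok")  # fires: 2016-08-15 is exactly 7 days after 2016-08-08
```

<p>Note that <code>.date()</code> strips the time-of-day component, so a cell dated seven days ahead matches no matter what time it carries.</p>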
| 1 | 2016-08-08T23:44:00Z | [
"python",
"excel",
"date",
"xlrd"
] |
Given an odd length list of values in Python, how can I swap all values other than the final value in the list? | 38,840,017 | <p>In regards to Python 2.7.12 (<em>disclaimer: I understand Python2 is being phased out to Python3, but the course I'm taking started us here, perhaps to understand older code bases</em>):</p>
<p>I have a list of integers whom I'd like to swap each with their neighboring value. So far, this works great for lists that are even in the number of integers they contain, however when the list length is odd, it's not so easy to simply swap each value, as the number of integers is uneven.</p>
<p>Giving the following code example, how can I swap all values other than the <em>final</em> value in the list? </p>
<pre><code>arr = [1, 2, 3, 4, 5]
def swapListPairs(arr):
for idx, val in enumerate(arr):
if len(arr) % 2 == 0:
arr[idx], arr[val] = arr[val], arr[idx] # traditional swap using evaluation order
else:
arr[0], arr[1] = arr[1], arr[0] # this line is not the solution but where I know I need some conditions to swap all list values other than len(arr)-1, but am not sure how to do this?
return arr
print swapListPairs(arr)
</code></pre>
<p><strong>Bonus Points to the ultimate Pythonic Master</strong>: How can this code be modified to also swap strings? Right now, I can only use this function using integers and am very curious how I can make this work for both <code>int</code> and <code>str</code> objects?</p>
<p>Thank you so greatly for any insight or suggestions to point me in the right direction! Everyone's help at times here has been invaluable and I thank you for reading and for your help!</p>
| 3 | 2016-08-08T23:39:20Z | 38,840,040 | <p>This is easier to do <em>without</em> <code>enumerate</code>. Note that it never, ever makes decisions based on the <em>contents</em> of <code>arr</code>; that is what makes it work on anything, not just a pre-sorted list of integers starting from 1.</p>
<pre><code>for i in range(len(arr)//2):
a = 2*i
b = a+1
if b < len(arr):
arr[a], arr[b] = arr[b], arr[a]
</code></pre>
<p>Exercise for you: is the <code>if</code> actually necessary? Why or why not?</p>
| 4 | 2016-08-08T23:42:45Z | [
"python",
"list"
] |
Given an odd length list of values in Python, how can I swap all values other than the final value in the list? | 38,840,017 | <p>In regards to Python 2.7.12 (<em>disclaimer: I understand Python2 is being phased out to Python3, but the course I'm taking started us here, perhaps to understand older code bases</em>):</p>
<p>I have a list of integers whom I'd like to swap each with their neighboring value. So far, this works great for lists that are even in the number of integers they contain, however when the list length is odd, it's not so easy to simply swap each value, as the number of integers is uneven.</p>
<p>Giving the following code example, how can I swap all values other than the <em>final</em> value in the list? </p>
<pre><code>arr = [1, 2, 3, 4, 5]
def swapListPairs(arr):
for idx, val in enumerate(arr):
if len(arr) % 2 == 0:
arr[idx], arr[val] = arr[val], arr[idx] # traditional swap using evaluation order
else:
arr[0], arr[1] = arr[1], arr[0] # this line is not the solution but where I know I need some conditions to swap all list values other than len(arr)-1, but am not sure how to do this?
return arr
print swapListPairs(arr)
</code></pre>
<p><strong>Bonus Points to the ultimate Pythonic Master</strong>: How can this code be modified to also swap strings? Right now, I can only use this function using integers and am very curious how I can make this work for both <code>int</code> and <code>str</code> objects?</p>
<p>Thank you so greatly for any insight or suggestions to point me in the right direction! Everyone's help at times here has been invaluable and I thank you for reading and for your help!</p>
| 3 | 2016-08-08T23:39:20Z | 38,840,090 | <p>Here's a shorter, probably faster way based on slice assignment:</p>
<pre><code>def swap_adjacent_elements(l):
end = len(l) - len(l) % 2
l[:end:2], l[1:end:2] = l[1:end:2], l[:end:2]
</code></pre>
<p>The slice assignment selects the elements of <code>l</code> at all even indices (<code>l[:end:2]</code>) or all odd indices (<code>l[1:end:2]</code>) up to and excluding index <code>end</code>, then uses the same kind of swapping technique you're already using to swap the slices.</p>
<p><code>end = len(l) - len(l) % 2</code> selects the index at which to stop. We set <code>end</code> to the closest even number less than or equal to <code>len(l)</code> by subtracting <code>len(l) % 2</code>, the remainder when <code>len(l)</code> is divided by 2.</p>
<p>Alternatively, we could have done <code>end = len(l) & ~1</code>, using bitwise operations. That would construct an integer to use as a mask (<code>~1</code>), with a 0 in the 1 bit and 1s everywhere else, then apply the mask (with <code>&</code>) to set the 1 bit of <code>len(l)</code> to 0 to produce <code>end</code>.</p>
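For example, here is a quick sanity check of the slice-assignment version (using Python 3 print syntax; the function mutates the list in place and returns <code>None</code>). Because slice assignment never inspects element types, it also handles the string case from the bonus question:

```python
def swap_adjacent_elements(l):
    # Stop at the closest even index <= len(l), leaving a trailing
    # odd element untouched.
    end = len(l) - len(l) % 2
    l[:end:2], l[1:end:2] = l[1:end:2], l[:end:2]

nums = [1, 2, 3, 4, 5]
swap_adjacent_elements(nums)
print(nums)   # [2, 1, 4, 3, 5]

words = ['a', 'b', 'c']
swap_adjacent_elements(words)
print(words)  # ['b', 'a', 'c']
```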
| 6 | 2016-08-08T23:51:24Z | [
"python",
"list"
] |
Given an odd length list of values in Python, how can I swap all values other than the final value in the list? | 38,840,017 | <p>In regards to Python 2.7.12 (<em>disclaimer: I understand Python2 is being phased out to Python3, but the course I'm taking started us here, perhaps to understand older code bases</em>):</p>
<p>I have a list of integers that I'd like to swap, each with its neighboring value. So far this works great for lists with an even number of integers; however, when the list length is odd, it's not so easy to simply swap each value, since the number of integers is uneven.</p>
<p>Given the following code example, how can I swap all values other than the <em>final</em> value in the list?</p>
<pre><code>arr = [1, 2, 3, 4, 5]
def swapListPairs(arr):
for idx, val in enumerate(arr):
if len(arr) % 2 == 0:
arr[idx], arr[val] = arr[val], arr[idx] # traditional swap using evaluation order
else:
arr[0], arr[1] = arr[1], arr[0] # this line is not the solution but where I know I need some conditions to swap all list values other than len(arr)-1, but am not sure how to do this?
return arr
print swapListPairs(arr)
</code></pre>
<p><strong>Bonus Points to the ultimate Pythonic Master</strong>: How can this code be modified to also swap strings? Right now I can only use this function with integers, and I'm very curious how I can make it work for both <code>int</code> and <code>str</code> objects?</p>
<p>Thank you so greatly for any insight or suggestions to point me in the right direction! Everyone's help at times here has been invaluable and I thank you for reading and for your help!</p>
| 3 | 2016-08-08T23:39:20Z | 38,840,129 | <p>You could iterate through the length of the list with a step of two and try to swap values, catching any <code>IndexError</code>.</p>
<pre><code>def swap_list_pairs(arr):
for index in range(0, len(arr), 2):
try:
arr[index], arr[index+1] = arr[index+1], arr[index]
except IndexError:
pass
return arr
</code></pre>
<p>This will work for all data types.</p>
<p>As <a href="http://stackoverflow.com/users/5644961/copperfield">Copperfield</a> suggested, you could get rid of the try/except clause:</p>
<pre><code>def swap_list_pairs(arr):
for index in range(1, len(arr), 2):
arr[index-1], arr[index] = arr[index], arr[index-1]
return arr
</code></pre>
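A quick illustrative check of the simplified version (Python 3 print syntax assumed): because the swap only moves references around by index, it works identically for integers and strings, which also answers the bonus question:

```python
def swap_list_pairs(arr):
    # Walk the odd indices and swap each element with its left neighbor;
    # an odd-length list leaves the final element untouched.
    for index in range(1, len(arr), 2):
        arr[index - 1], arr[index] = arr[index], arr[index - 1]
    return arr

print(swap_list_pairs([1, 2, 3, 4, 5]))       # [2, 1, 4, 3, 5]
print(swap_list_pairs(['a', 'b', 'c', 'd']))  # ['b', 'a', 'd', 'c']
```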
| 4 | 2016-08-08T23:57:18Z | [
"python",
"list"
] |
Given an odd length list of values in Python, how can I swap all values other than the final value in the list? | 38,840,017 | <p>In regards to Python 2.7.12 (<em>disclaimer: I understand Python2 is being phased out to Python3, but the course I'm taking started us here, perhaps to understand older code bases</em>):</p>
<p>I have a list of integers that I'd like to swap, each with its neighboring value. So far this works great for lists with an even number of integers; however, when the list length is odd, it's not so easy to simply swap each value, since the number of integers is uneven.</p>
<p>Given the following code example, how can I swap all values other than the <em>final</em> value in the list?</p>
<pre><code>arr = [1, 2, 3, 4, 5]
def swapListPairs(arr):
for idx, val in enumerate(arr):
if len(arr) % 2 == 0:
arr[idx], arr[val] = arr[val], arr[idx] # traditional swap using evaluation order
else:
arr[0], arr[1] = arr[1], arr[0] # this line is not the solution but where I know I need some conditions to swap all list values other than len(arr)-1, but am not sure how to do this?
return arr
print swapListPairs(arr)
</code></pre>
<p><strong>Bonus Points to the ultimate Pythonic Master</strong>: How can this code be modified to also swap strings? Right now, I can only use this function using integers and am very curious how I can make this work for both <code>int</code> and <code>str</code> objects?</p>
<p>Thank you so greatly for any insight or suggestions to point me in the right direction! Everyone's help at times here has been invaluable and I thank you for reading and for your help!</p>
| 3 | 2016-08-08T23:39:20Z | 38,846,870 | <p>Similar to @user2357112 but I prefer it this way:</p>
<pre><code>arr[1::2], arr[:-1:2] = arr[:-1:2], arr[1::2]
</code></pre>
<p>Demo:</p>
<pre><code>>>> arr = [1, 2, 3, 4, 5]
>>> arr[1::2], arr[:-1:2] = arr[:-1:2], arr[1::2]
>>> arr
[2, 1, 4, 3, 5]
>>> arr = [1, 2, 3, 4, 5, 6]
>>> arr[1::2], arr[:-1:2] = arr[:-1:2], arr[1::2]
>>> arr
[2, 1, 4, 3, 6, 5]
</code></pre>
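And since the slices carry elements of any type, the same one-liner covers the string case from the bonus question unchanged; for example (a quick illustrative check):

```python
arr = ['a', 'b', 'c', 'd', 'e']
arr[1::2], arr[:-1:2] = arr[:-1:2], arr[1::2]
print(arr)  # ['b', 'a', 'd', 'c', 'e']
```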
| 1 | 2016-08-09T09:26:03Z | [
"python",
"list"
] |