| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to call a certain function inside an if statement in python? | 38,576,369 | <p>I am relatively new to Python; in my foundation year I learned BBC BASIC, which is pretty basic, and I acquired many bad habits there.
I learned Python with the aid of Codecademy. However, how can I call a function inside an if statement? In my first if statement I called the function <code>mainMenu(menu)</code>, but it is not displaying the function contents. Why? </p>
<p>(By the way, I am just trying to build an ATM machine to practice and consolidate some of the things I learned.) </p>
<pre><code>print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press any key to procede"
print
raw_input()
passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print ""
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu(menu)
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()

def mainMenu(menu):
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"
</code></pre>
| -3 | 2016-07-25T19:57:59Z | 38,576,416 | <p>In Python, a function must be defined before the point at which it is actually called at run time. Thus, your code should read:</p>
<pre><code>def mainMenu():
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"

print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press ENTER to proceed"
print
raw_input()
passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu() #removed menu as you have not defined it above
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()
</code></pre>
<p>There are other places where you can put the function, such as right above the while loop, but make sure the function is defined above the area where it is called.</p>
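A minimal, standalone sketch of this rule (not part of the original answer): a call fails only if it executes before the <code>def</code> statement has run.

```python
# A name lookup happens at call time, so calling before the def has run
# raises NameError, while calling after it works fine.
try:
    greet()  # 'greet' is not bound yet
except NameError as err:
    print("too early:", err)

def greet():
    return "hello"

print("after def:", greet())
```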
| 1 | 2016-07-25T20:01:34Z | [
"python",
"function"
] |
How to call a certain function inside an if statement in python? | 38,576,369 | <p>I am relatively new to Python; in my foundation year I learned BBC BASIC, which is pretty basic, and I acquired many bad habits there.
I learned Python with the aid of Codecademy. However, how can I call a function inside an if statement? In my first if statement I called the function <code>mainMenu(menu)</code>, but it is not displaying the function contents. Why? </p>
<p>(By the way, I am just trying to build an ATM machine to practice and consolidate some of the things I learned.) </p>
<pre><code>print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press any key to procede"
print
raw_input()
passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print ""
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu(menu)
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()

def mainMenu(menu):
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"
</code></pre>
| -3 | 2016-07-25T19:57:59Z | 38,576,441 | <pre><code>print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press any key to procede"
print
raw_input()

def mainMenu():
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"

passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print ""
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu()
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()
</code></pre>
<p>Try the above. <code>mainMenu</code> has been moved to the top, and it doesn't need any parameters.</p>
| 4 | 2016-07-25T20:02:36Z | [
"python",
"function"
] |
Selectively convert list items in python | 38,576,377 | <p>So I have a list of lists in Python, where all the items are supposed to be numbers except when the list has three items. In that case, the last item is supposed to remain a string rather than being converted. The code is as follows:</p>
<pre><code>special_ops = [6]
program = [ line for line in temp.split(";") ]
for i in range(len(program)):
    line = [ int(p) for p in program[i].split(",")[:2] ]
    if ( line[0] in special_ops ):
        line.append( program[i].split(",")[2] )
    program[i] = line
</code></pre>
<p>The structure of the pre-parsed string looks like this:</p>
<pre><code>0,1;2,1;2,0;3,2;6,1,a string
</code></pre>
<p>This doesn't seem very Pythonic to me, so I was hoping for a more concise version of this code. Any help would be appreciated.</p>
| -2 | 2016-07-25T19:58:33Z | 38,576,584 | <pre><code>>>> a = [x.split(',') for x in temp.split(';')]
>>> [[int(x) for x in lst] if len(lst)==2 else [int(lst[0]), int(lst[1]), lst[2]] for lst in a]
[[0, 1], [2, 1], [2, 0], [3, 2], [6, 1, 'a string']]
</code></pre>
<h3>How it works</h3>
<p>The first statement just splits up temp in a convenient list of lists of strings:</p>
<pre><code>>>> a = [x.split(',') for x in temp.split(';')]
>>> a
[['0', '1'], ['2', '1'], ['2', '0'], ['3', '2'], ['6', '1', 'a string']]
</code></pre>
<p>The second command uses a ternary statement to treat the length-2 lists differently from the length-3 lists. Just looking at the ternary part:</p>
<pre><code>>>> lst = ['0', '1']
>>> [int(x) for x in lst] if len(lst)==2 else [int(lst[0]), int(lst[1]), lst[2]]
[0, 1]
>>> lst = [6, 1, 'a string']
>>> [int(x) for x in lst] if len(lst)==2 else [int(lst[0]), int(lst[1]), lst[2]]
[6, 1, 'a string']
</code></pre>
<h3>All as one command</h3>
<p>If one wants to avoid the intermediate variable, then the two commands above can be merged into one:</p>
<pre><code>>>> [[int(x) for x in lst] if len(lst)==2 else [int(lst[0]), int(lst[1]), lst[2]] for lst in [x.split(',') for x in temp.split(';')]]
[[0, 1], [2, 1], [2, 0], [3, 2], [6, 1, 'a string']]
</code></pre>
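An alternative sketch (my own suggestion, not from the answer above): try <code>int()</code> on every token and fall back to the raw string. Note this assumes a trailing text field is never itself numeric, in which case it behaves differently from the length-based version.

```python
def parse(temp):
    """Split 'a,b;c,d,text' into lists, converting numeric tokens to int."""
    def conv(tok):
        try:
            return int(tok)
        except ValueError:
            return tok  # keep non-numeric tokens as strings
    return [[conv(tok) for tok in part.split(",")] for part in temp.split(";")]

print(parse("0,1;2,1;2,0;3,2;6,1,a string"))
# -> [[0, 1], [2, 1], [2, 0], [3, 2], [6, 1, 'a string']]
```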
| 1 | 2016-07-25T20:10:58Z | [
"python"
] |
Pyspark does not use python3 in yarn cluster mode, even with PYSPARK_PYTHON=python3 | 38,576,397 | <p>I have set PYSPARK_PYTHON=python3 in spark-env.sh using ambari, and when I try 'pyspark' in commandline, it runs with python 3.4.3. However, when I submit a job using yarn cluster mode, it runs using python 2.7.9. How do I make it use python3?</p>
| 0 | 2016-07-25T19:59:56Z | 38,576,724 | <p>You need to give full path of python3 like:</p>
<pre><code>subprocess.call(['export PYSPARK_PYTHON=/usr/local/bin/python2.7'],shell=True)
</code></pre>
| 0 | 2016-07-25T20:19:41Z | [
"python",
"apache-spark",
"pyspark",
"ambari"
] |
Trying to exec Python script using PHP | 38,576,448 | <p>I'm trying to exec a Python script using PHP, but Python does not seem to work when executed by PHP.</p>
<p>I tried this code to test:</p>
<pre><code>$cmdResult = shell_exec("ls & /usr/local/bin/python2.7 --version & echo done");
</code></pre>
<p>Returned:</p>
<pre><code>done
LICENSE
example.py
</code></pre>
<p>When I exec it on console (shell):</p>
<pre><code>[root@local folder]# /usr/local/bin/python2.7 --version
Python 2.7.6
</code></pre>
<p>Does anyone have any idea what the problem is?</p>
<p>Additional info:</p>
<pre><code>[root@local folder]# ls -all /usr/local/bin/py*
-rwxr-xr-x 1 root apache 84 Jul 21 21:53 /usr/local/bin/pydoc
lrwxrwxrwx 1 root root 24 Jul 21 21:43 /usr/local/bin/python -> /usr/local/bin/python2.7
-rwxrwxrwx 1 root apache 4669791 Jul 21 21:53 /usr/local/bin/python2.7
-rwxr-xr-x 1 root apache 1674 Jul 21 21:53 /usr/local/bin/python2.7-config
</code></pre>
| 1 | 2016-07-25T20:03:09Z | 38,594,890 | <p>In your shell command, use <code>&&</code> (which runs the next command only if the previous one succeeded) instead of <code>&</code> (which puts each command in the background), like so:</p>
<pre><code>ls && /usr/local/bin/python2.7 --version && echo done
</code></pre>
<p>so your code would read</p>
<pre><code>$cmdResult = shell_exec("ls && /usr/local/bin/python2.7 --version && echo done");
</code></pre>
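To see the difference from Python itself (a side sketch, not part of the original answer): <code>&&</code> chains commands on success, which is likely why the backgrounded <code>&</code> version never showed the Python version output.

```python
import subprocess

# Chained with '&&': 'echo second' runs only because 'echo first' exited 0.
out = subprocess.check_output("echo first && echo second", shell=True)
print(out.decode())
```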
| 1 | 2016-07-26T16:08:49Z | [
"php",
"python",
"bash"
] |
Unable to work with Django (python 2.7, OS X 10.11.1) | 38,576,461 | <p>I'm about to pull my hair out in frustration. I'm trying to start a Django project on my new iMac with OS X El Capitan. Python 2.7 came installed on the computer, and it seems Django did too. However, I can't run django-admin. I've got Django installed on my laptop and I didn't have this much trouble. </p>
<p>Per the official documentation, I try </p>
<pre><code>pip install --upgrade Django==1.9.8
</code></pre>
<p>and the terminal returns </p>
<p><code>Requirement already up-to-date: Django==1.9.8 in /Library/Python/2.7/site-packages/Django-1.9.8-py2.7.egg</code></p>
<p>Then I try in python:</p>
<pre><code>>>> import django
>>> print(django.get_version())
1.9
</code></pre>
<p>Great! Next I try:</p>
<pre><code>Django-admin.py --version
</code></pre>
<p>and it returns</p>
<pre><code>-bash: django-admin.py: command not found
</code></pre>
<p>After googling it, seems that the path may be the issue? I try:</p>
<pre><code>echo $PATH
</code></pre>
<p>and I get</p>
<pre><code>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
</code></pre>
<p>On the docs for troubleshooting, "django-admin should be on your system path if you installed Django via python setup.py" so I try that.</p>
<pre><code>sudo python setup.py install
</code></pre>
<p>A bunch of stuff happens, and then </p>
<p><code>Extracting Django-1.9.8-py2.7.egg to /Library/Python/2.7/site-packages
Django 1.9.8 is already the active version in easy-install.pth
Installing django-admin.py script to /usr/local/bin
error: [Errno 2] No such file or directory: '/usr/local/bin/django-admin.py'</code></p>
<p>So in summary, it seems that Django is installed and I can import and find its version through python. However, I cannot run django-admin.py or django-admin. </p>
<p>Please help!! </p>
| 0 | 2016-07-25T20:04:01Z | 38,577,686 | <p>Try installing it in a virtualenv.</p>
<pre><code>virtualenv -p /usr/bin/python2.7 venv # create virtual environment
source venv/bin/activate # activate venv
pip install --upgrade Django==1.9.8 # install django in your venv
django-admin # should run django-admin from your venv
</code></pre>
<p>Running in a venv is always cleaner IMO, it allows you to have different projects using their own venv without conflicting and you can get rid of your venv when you are done with it by simply deleting the folder.</p>
| 1 | 2016-07-25T21:24:34Z | [
"python",
"django",
"osx",
"python-2.7"
] |
Python- np.mean() giving wrong means? | 38,576,480 | <p><strong>The issue</strong></p>
<p>So I have 50 netCDF4 data files that contain decades of monthly temperature predictions on a global grid. I'm using np.mean() to make an ensemble average of all 50 data files together while preserving time length & spatial scale, but np.mean() gives me two different answers. The first time I run its block of code, it gives me a number that, when averaged over latitude & longitude & plotted against the individual runs, is slightly lower than what the ensemble mean should be. If I re-run the block, it gives me a different mean which looks correct.</p>
<p><strong>The code</strong></p>
<p>I can't copy every line here since it's long, but here's what I do for each run.</p>
<pre><code>#Historical (1950-2020) data
ncin_1 = Dataset("/project/wca/AR5/CanESM2/monthly/histr1/tas_Amon_CanESM2_historical-r1_r1i1p1_195001-202012.nc") #Import data file
tash1 = ncin_1.variables['tas'][:] #extract tas (temperature) variable
ncin_1.close() #close to save memory
#Repeat for future (2021-2100) data
ncin_1 = Dataset("/project/wca/AR5/CanESM2/monthly/histr1/tas_Amon_CanESM2_historical-r1_r1i1p1_202101-210012.nc")
tasr1 = ncin_1.variables['tas'][:]
ncin_1.close()
#Concatenate historical & future files together to make one time series array
tas11 = np.concatenate((tash1,tasr1),axis=0)
#Subtract the 1950-1979 mean to obtain anomalies
tas11 = tas11 - np.mean(tas11[0:359],axis=0,dtype=np.float64)
</code></pre>
<p>And I repeat that 49 times more for other datasets. Each tas11, tas12, etc file has the shape (1812, 64, 128) corresponding to time length in months, latitude, and longitude.</p>
<p>To get the ensemble mean, I do the following.</p>
<pre><code>#Move all tas data to one array
alltas = np.zeros((1812,64,128,51)) #years, lat, lon, members (no ensemble mean value yet)
alltas[:,:,:,0] = tas11
(...)
alltas[:,:,:,49] = tas50
#Calculate ensemble mean & fill into 51st slot in axis 3
alltas[:,:,:,50] = np.mean(alltas,axis=3,dtype=np.float64)
</code></pre>
<p>When I check a coordinate & month, the ensemble mean is off from what it should be. Here's what a plot of globally averaged temperatures from 1950-2100 looks like with the first mean (with monthly values averaged into annual values). The black line is the ensemble mean & the colored lines are individual runs.</p>
<p><a href="http://i.stack.imgur.com/uoSK4.png" rel="nofollow"><img src="http://i.stack.imgur.com/uoSK4.png" alt="enter image description here"></a></p>
<p>Obviously that deviated below the real ensemble mean. Here's what the plot looks like when I run alltas[:,:,:,50]=np.mean(alltas,axis=3,dtype=np.float64) a second time & keep everything else the same.</p>
<p><a href="http://i.stack.imgur.com/vtSQ5.png" rel="nofollow"><img src="http://i.stack.imgur.com/vtSQ5.png" alt="enter image description here"></a></p>
<p>Much better.</p>
<p><strong>The question</strong></p>
<p>Why does np.mean() calculate the wrong value the first time? I tried specifying the data type as a float when using np.mean() like in this question- <a href="http://stackoverflow.com/questions/17463128/wrong-numpy-mean-value">Wrong numpy mean value?</a>
But it didn't work. Any way I can fix it so it works correctly the first time? I don't want this problem to occur on a calculation where it's not so easy to notice a math error.</p>
| 7 | 2016-07-25T20:04:34Z | 38,576,857 | <p>In the line</p>
<pre><code>alltas[:,:,:,50] = np.mean(alltas,axis=3,dtype=np.float64)
</code></pre>
<p>the argument to <code>mean</code> should be <code>alltas[:,:,:,:50]</code>:</p>
<pre><code>alltas[:,:,:,50] = np.mean(alltas[:,:,:,:50], axis=3, dtype=np.float64)
</code></pre>
<p>Otherwise you are including those final zeros in the calculation of the ensemble means.</p>
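A toy illustration of the bug (numbers invented for the example): averaging across a slot that still holds its zero initialization drags the mean down, exactly as in the first plot.

```python
# Three ensemble members at one grid point, plus a 4th slot pre-filled
# with 0.0 where the ensemble mean will later be stored.
members = [10.0, 12.0, 14.0]
slots = members + [0.0]

wrong = sum(slots) / len(slots)          # zero placeholder included -> 9.0
right = sum(slots[:3]) / len(slots[:3])  # placeholder excluded -> 12.0
print(wrong, right)
```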
| 8 | 2016-07-25T20:27:08Z | [
"python",
"numpy",
"mean"
] |
Return an object using its name in python | 38,576,495 | <p>So, I've created global objects that can be accessed by any method. Now I have to return the objects from a method like this:</p>
<pre><code>def get_my_obj (obj_name):
    if obj_name == "abc": return abc
    elif obj_name == "xyz": return xyz
    elif obj_name == "def": return def
</code></pre>
<p>Is there a simpler way to achieve this? I have hundreds of these objects.</p>
| 0 | 2016-07-25T20:05:51Z | 38,576,655 | <p>Through the use of <code>globals()</code> in the appropriate file, it is entirely possible to achieve such an evil act.</p>
<pre><code># file1.py
abc = 'foo'
xyz = 'bar'
dfe = 'baz'
def get_my_obj(obj_name):
    return globals()[obj_name]

# file2.py
from file1 import get_my_obj
print(get_my_obj('abc'))  # 'foo'
</code></pre>
<p><a href="https://docs.python.org/2/library/functions.html#globals" rel="nofollow"><code>globals()</code></a> retrieves variables within the current module. In this case, it may be the file where it is executed.</p>
<p>Can you do it? Yes. Would I do it? No. Is it my business to decide if you should do it anyway? No.</p>
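If the hundreds of objects are under your control, a dictionary registry avoids <code>globals()</code> entirely. A sketch with made-up names and values:

```python
# Explicit registry: names map to objects, no module-global lookups needed.
registry = {
    "abc": "object abc",
    "xyz": "object xyz",
}

def get_my_obj(obj_name):
    return registry[obj_name]  # raises KeyError for unknown names

print(get_my_obj("abc"))
```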
| 3 | 2016-07-25T20:15:21Z | [
"python",
"object"
] |
Develop GUI window | 38,576,506 | <p>It needs to display my picture on the left and my first and last name along with my birthdate on the right side. This is what I have so far, but it keeps giving me a TCL error saying "No such file or directory", even though I have the file saved on my computer. </p>
<pre><code>from tkinter import Tk,Label,PhotoImage,LEFT,RIGHT
root=Tk()
text=Label(root,
text="First Name: Justin\n"
"Last Name: Joseph\n"
"Date of Birth:02/17/1995")
text.pack(side=RIGHT)
Justin=PhotoImage(file="Justin.gif")
JustinLabel=Label(root,
image=Justin)
JustinLabel.pack(side=LEFT)
</code></pre>
| -1 | 2016-07-25T20:06:29Z | 38,576,634 | <p>The problem has to do with the line: </p>
<pre><code>Justin=PhotoImage(file="Justin.gif")
</code></pre>
<p>Make sure that Justin.gif is located where Tk will look for it (a relative path is resolved against the current working directory the script is run from) and that it is named 'Justin.gif' exactly, since capitalization matters on case-sensitive filesystems.</p>
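One way to make the path failure explicit before handing it to Tk (a sketch; the helper name is mine):

```python
import os

def resolve_asset(filename, base_dir):
    """Join a file name onto a directory and report whether it exists."""
    path = os.path.join(base_dir, filename)
    return path, os.path.exists(path)

# Check against the current working directory, which is where a relative
# file="Justin.gif" is actually resolved.
path, found = resolve_asset("Justin.gif", os.getcwd())
print(path, found)
```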
| 1 | 2016-07-25T20:13:45Z | [
"python",
"user-interface",
"tkinter"
] |
Checking if a user has been assigned a token in Django Restframework | 38,576,589 | <p>I am setting up token authentication for a site using Django Restframework and need to be able to have a user download their token; however, the catch is that they must only be able to download their token once (similar to the Amazon AWS model). </p>
<p>In other words; is there a native way to check if a user has been assigned a token in restframework? </p>
| 0 | 2016-07-25T20:11:26Z | 38,576,881 | <p>you can do this:</p>
<pre><code>from rest_framework.authtoken.models import Token

token = Token.objects.create(user=user)  # user is a User model instance
</code></pre>
<p>now you can just check if your given user has a token:</p>
<pre><code>user_with_token = Token.objects.get(user=user)
</code></pre>
<p>if you just want to see whether the user has a token:</p>
<pre><code>is_tokened = Token.objects.filter(user=user).exists() # Returns a boolean
</code></pre>
<p>if the entry exists it means the user has a token assigned to it.
Reference: <a href="http://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication" rel="nofollow">HERE</a></p>
<p>Follow the documentation there to make sure your database is migrated.</p>
| 0 | 2016-07-25T20:28:54Z | [
"python",
"django",
"django-rest-framework",
"http-token-authentication"
] |
Checking if a user has been assigned a token in Django Restframework | 38,576,589 | <p>I am setting up token authentication for a site using Django Restframework and need to be able to have a user download their token; however, the catch is that they must only be able to download their token once (similar to the Amazon AWS model). </p>
<p>In other words; is there a native way to check if a user has been assigned a token in restframework? </p>
| 0 | 2016-07-25T20:11:26Z | 38,590,027 | <p>Try using a Django signal to create each user's token automatically; something like this in your models file:</p>
<pre><code>from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver
from rest_framework.authtoken.models import Token

@receiver(post_save, sender=User)
def create_auth_token(sender, instance=None, created=False, **kwargs):
    if created:
        Token.objects.create(user=instance)
</code></pre>
| 0 | 2016-07-26T12:32:57Z | [
"python",
"django",
"django-rest-framework",
"http-token-authentication"
] |
Neo4j Bolt StatementResult to Pandas DataFrame | 38,576,674 | <p>Based on example from <a href="https://neo4j.com/developer/python/" rel="nofollow">Neo4j</a> </p>
<pre><code>from neo4j.v1 import GraphDatabase, basic_auth
driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo4j", "neo4j"))
session = driver.session()
session.run("CREATE (a:Person {name:'Arthur', title:'King'})")
result = session.run("MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title")
for record in result:
    print("%s %s" % (record["title"], record["name"]))
session.close()
</code></pre>
<p>Here <code>result</code> is of datatype <code>neo4j.v1.session.StatementResult</code>. How to access this data in pandas dataframe <strong>without explicitly iterating</strong>?</p>
<p><code>pd.DataFrame.from_records(result)</code> doesn't seem to help.</p>
<p>This is what I have using list comprehension</p>
<pre><code>resultlist = [[record['title'], record['name']] for record in result]
pd.DataFrame.from_records(resultlist, columns=['title', 'name'])
</code></pre>
| 0 | 2016-07-25T20:16:48Z | 38,577,709 | <p>The best I can come up with is a list comprehension similar to yours, but less verbose:</p>
<pre><code>df = pd.DataFrame([r.values() for r in result], columns=result.keys())
</code></pre>
<p>The <a href="http://py2neo.org/v3/" rel="nofollow"><code>py2neo</code></a> package seems to be more suitable for DataFrames, as it's fairly straightforward to return a list of dictionaries. Here's the equivalent code using <code>py2neo</code>:</p>
<pre><code>import py2neo
# Some of these keyword arguments are unnecessary, as they are the default values.
graph = py2neo.Graph(bolt=True, host='localhost', user='neo4j', password='neo4j')
graph.run("CREATE (a:Person {name:'Arthur', title:'King'})")
query = "MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title"
df = pd.DataFrame(graph.data(query))
</code></pre>
| 1 | 2016-07-25T21:26:38Z | [
"python",
"pandas",
"neo4j"
] |
Boxplot placed on time axis | 38,576,692 | <p>I want to place a series of (matplotlib) boxplots on a time axis. They are series of measurements taken on different days throughout a year. The dates are not evenly distributed, and I am interested in the variation over time.</p>
<hr>
<h2>Easy version</h2>
<p>I have a pandas DataFrame with indexes and series of numbers, more or less like this: (notice the indexes):</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
ii = np.array([ 3, 5, 8 ] )
df = pd.DataFrame( data=data, index=ii )
</code></pre>
<p>For each index, I need to make a boxplot, which is no problem:</p>
<pre><code>plt.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=ii )
</code></pre>
<p><a href="http://i.stack.imgur.com/8e6Px.png" rel="nofollow"><img src="http://i.stack.imgur.com/8e6Px.png" alt="enter image description here"></a></p>
<h2>Time version</h2>
<p>The problem is, I need to place the boxes in a time axis, i.e. place the boxes on a concrete date</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
dates = pd.to_datetime( [ '2015-06-01', '2015-06-15', '2015-08-30' ] )
df = pd.DataFrame( data=data, index=dates )
plt.boxplot( [ df.loc[i] for i in df.index ], vert=True )
</code></pre>
<p><a href="http://i.stack.imgur.com/XSZBR.png" rel="nofollow"><img src="http://i.stack.imgur.com/XSZBR.png" alt="enter image description here"></a></p>
<p>However, if I incorporate the positions: </p>
<p><code>ax.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=dates )</code></p>
<p>I get an error:</p>
<blockquote>
<p>TypeError: Cannot compare type 'Timedelta' with type 'float'</p>
</blockquote>
<p>A look up on the docs shows:</p>
<p><code>plt.boxplot?</code></p>
<blockquote>
<p>positions : array-like, default = [1, 2, ..., n]</p>
<p>Sets the positions of the boxes. The ticks and limits are automatically set to match the positions.</p>
</blockquote>
<hr>
<h2>Desired time version</h2>
<p>This code is intended to clarify and narrow down the problem. The boxes should appear where the blue points are placed in the next figure.</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
dates = pd.to_datetime( [ '2015-06-01', '2015-06-15', '2015-08-30' ] )
df = pd.DataFrame( data=data, index=dates )
fig, ax = plt.subplots( figsize=(10,5) )
x1 = pd.to_datetime( '2015-05-01' )
x2 = pd.to_datetime( '2015-09-30' )
ax.set_xlim( [ x1, x2 ] )
# ax.boxplot( [ df.loc[i] for i in df.index ], vert=True ) # Does not throw error, but plots nothing (out of range)
# ax.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=dates ) # This is what I'd like (throws TypeError)
ax.plot( dates, [ df.loc[i].mean() for i in df.index ], 'o' ) # Added to clarify the positions I aim for
</code></pre>
<p><a href="http://i.stack.imgur.com/NdpYs.png" rel="nofollow"><img src="http://i.stack.imgur.com/NdpYs.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Is there a method to place boxplots in a time axis?</strong></p>
<hr>
<p>I am using:</p>
<p>python: 3.4.3 + numpy: 1.11.0 + pandas: 0.18.0 + matplotlib: 1.5.1</p>
| 0 | 2016-07-25T20:17:38Z | 38,624,642 | <p>The desired output can be generated in two ways. But it is safe to keep in mind that <code>boxplots</code> plot ranges of a given field/column on the <code>y-axis</code> while keeping the name of the field/column on the <code>x-axis</code>. You could plot them horizontally. But the idea remains the same.</p>
<p>At any rate, you can create the dataframe with pandas <code>timestamp</code> objects as the column names. That way when you call the boxplot function on your dataframe, the output will show the column names on the <code>x-axis</code>:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
data = np.array([np.random.normal(i, 1, 50) for i in range(12)])
##Create an array that will be the names of your columns
ii = pd.date_range(pd.Timestamp('2015-06-01'),periods=data.shape[1], freq='MS')
##Create the DataFrame
df = pd.DataFrame(data=data, columns=ii)
##I am going to reduce the number of columns so that the plot can show
checker = ii[:3]
df[checker].boxplot()
#Show the boxplots. This is just for 3 columns out of 50
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/mYIrU.png" rel="nofollow"><img src="http://i.stack.imgur.com/mYIrU.png" alt="enter image description here"></a></p>
<p>You can also go with what you had by transposing the dataframe, so that the indices will become the column names.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
data = np.array([np.random.normal(i, 1, 50) for i in range(12)])
##Create an array that will be the indices of your dataframe
ii = pd.date_range(pd.Timestamp('2015-06-01'),periods=data.shape[0], freq='MS')
##Create the DataFrame
df = pd.DataFrame(data=data, index=ii)
##I am going to reduce the number of columns so that the plot can show
checker = ii[:3]
df.T[checker].boxplot()
#Show the boxplots. This is just for 3 columns out of 50
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/GjoVE.png" rel="nofollow"><img src="http://i.stack.imgur.com/GjoVE.png" alt="enter image description here"></a></p>
<p>I hope this helps.</p>
| 2 | 2016-07-27T23:11:23Z | [
"python",
"datetime",
"pandas",
"matplotlib",
"boxplot"
] |
Boxplot placed on time axis | 38,576,692 | <p>I want to place a series of (matplotlib) boxplots on a time axis. They are series of measurements taken on different days throughout a year. The dates are not evenly distributed, and I am interested in the variation over time.</p>
<hr>
<h2>Easy version</h2>
<p>I have a pandas DataFrame with indexes and series of numbers, more or less like this: (notice the indexes):</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
ii = np.array([ 3, 5, 8 ] )
df = pd.DataFrame( data=data, index=ii )
</code></pre>
<p>For each index, I need to make a boxplot, which is no problem:</p>
<pre><code>plt.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=ii )
</code></pre>
<p><a href="http://i.stack.imgur.com/8e6Px.png" rel="nofollow"><img src="http://i.stack.imgur.com/8e6Px.png" alt="enter image description here"></a></p>
<h2>Time version</h2>
<p>The problem is, I need to place the boxes in a time axis, i.e. place the boxes on a concrete date</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
dates = pd.to_datetime( [ '2015-06-01', '2015-06-15', '2015-08-30' ] )
df = pd.DataFrame( data=data, index=dates )
plt.boxplot( [ df.loc[i] for i in df.index ], vert=True )
</code></pre>
<p><a href="http://i.stack.imgur.com/XSZBR.png" rel="nofollow"><img src="http://i.stack.imgur.com/XSZBR.png" alt="enter image description here"></a></p>
<p>However, if I incorporate the positions: </p>
<p><code>ax.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=dates )</code></p>
<p>I get an error:</p>
<blockquote>
<p>TypeError: Cannot compare type 'Timedelta' with type 'float'</p>
</blockquote>
<p>A look up on the docs shows:</p>
<p><code>plt.boxplot?</code></p>
<blockquote>
<p>positions : array-like, default = [1, 2, ..., n]</p>
<p>Sets the positions of the boxes. The ticks and limits are automatically set to match the positions.</p>
</blockquote>
<hr>
<h2>Desired time version</h2>
<p>This code is intended to clarify and narrow down the problem. The boxes should appear where the blue points are placed in the next figure.</p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
dates = pd.to_datetime( [ '2015-06-01', '2015-06-15', '2015-08-30' ] )
df = pd.DataFrame( data=data, index=dates )
fig, ax = plt.subplots( figsize=(10,5) )
x1 = pd.to_datetime( '2015-05-01' )
x2 = pd.to_datetime( '2015-09-30' )
ax.set_xlim( [ x1, x2 ] )
# ax.boxplot( [ df.loc[i] for i in df.index ], vert=True ) # Does not throw error, but plots nothing (out of range)
# ax.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=dates ) # This is what I'd like (throws TypeError)
ax.plot( dates, [ df.loc[i].mean() for i in df.index ], 'o' ) # Added to clarify the positions I aim for
</code></pre>
<p><a href="http://i.stack.imgur.com/NdpYs.png" rel="nofollow"><img src="http://i.stack.imgur.com/NdpYs.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Is there a method to place boxplots in a time axis?</strong></p>
<hr>
<p>I am using:</p>
<p>python: 3.4.3 + numpy: 1.11.0 + pandas: 0.18.0 + matplotlib: 1.5.1</p>
| 0 | 2016-07-25T20:17:38Z | 38,630,536 | <p>So far, my best solution is to convert the units of the axis into a suitable <code>int</code> unit and plot everything accordingly. In my case, those are days. </p>
<pre><code>np.random.seed(12345)
data = np.array( [ np.random.normal( i, 1, 10 ) for i in range(3) ] )
dates = pd.to_datetime( [ '2015-06-01', '2015-06-15', '2015-08-30' ] )
df = pd.DataFrame( data=data, index=dates )
fig, ax = plt.subplots( figsize=(10,5) )
x1 = pd.to_datetime( '2015-05-01' )
x2 = pd.to_datetime( '2015-09-30' )
pos = ( dates - x1 ).days
ax.boxplot( [ df.loc[i] for i in df.index ], vert=True, positions=pos )
ax.plot( pos, [ df.loc[i].mean() for i in df.index ], 'o' )
ax.set_xlim( [ 0, (x2-x1).days ] )
ax.set_xticklabels( dates.date, rotation=45 )
</code></pre>
<p><a href="http://i.stack.imgur.com/ccx2s.png" rel="nofollow"><img src="http://i.stack.imgur.com/ccx2s.png" alt="enter image description here"></a></p>
<p>The boxplots are placed on their correct position, but the code seems a bit cumbersome to me. </p>
<p>More importantly: The units of the x-axis are not "time" anymore.</p>
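<p>A variant that keeps real date units on the axis might look like the sketch below. It leans on <code>matplotlib.dates.date2num</code>, which turns each date into the float matplotlib uses internally for dates, so the floats can double as <code>positions</code>. The <code>widths=5</code> value is just a guess to keep the boxes visible on a day-scaled axis, and I have not checked this against every matplotlib version:</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend, just for the example
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

np.random.seed(12345)
data = np.array([np.random.normal(i, 1, 10) for i in range(3)])
dates = pd.to_datetime(['2015-06-01', '2015-06-15', '2015-08-30'])
df = pd.DataFrame(data=data, index=dates)

fig, ax = plt.subplots(figsize=(10, 5))
# date2num gives one float per date; those floats can serve as box positions
pos = mdates.date2num(dates.to_pydatetime())
ax.boxplot([df.loc[i] for i in df.index], vert=True, positions=pos, widths=5)
ax.xaxis_date()        # render the tick positions as dates again
fig.autofmt_xdate()
```

<p>After <code>ax.xaxis_date()</code> the tick labels are formatted as dates, so the axis stays a time axis.</p>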
| 0 | 2016-07-28T08:07:19Z | [
"python",
"datetime",
"pandas",
"matplotlib",
"boxplot"
] |
Memory error when reading nested textfiles into spark from S3 | 38,576,813 | <p>I'm trying to read around one million zipped text files into <code>spark</code> from <code>S3</code>. The zipped size of each file is between 50 MB and 80 MB. In all it's about 6.5 terabytes of data.</p>
<p>Unfortunately I'm running into an out of memory exception that I don't know how to resolve. Something as simple as:</p>
<pre><code>raw_file_list = subprocess.Popen("aws s3 ls --recursive s3://my-bucket/export/", shell=True, stdout=subprocess.PIPE).stdout.read().strip().split('\n')
cleaned_names = ["s3://my-bucket/" + f.split()[3] for f in raw_file_list if not f.endswith('_SUCCESS')]
dat = sc.textFile(','.join(cleaned_names))
dat.count()
</code></pre>
<p>Yields:</p>
<pre><code> ---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-22-8ce3c7d1073e> in <module>() ----> 1 dat.count()
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in count(self)
1002 3
1003 """
-> 1004 return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
1005
1006 def stats(self):
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in sum(self)
993 6.0
994 """
--> 995 return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
996
997 def count(self):
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in fold(self, zeroValue, op)
867 # zeroValue provided to each partition is unique from the one provided
868 # to the final reduce call
--> 869 vals = self.mapPartitions(func).collect()
870 return reduce(op, vals, zeroValue)
871
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in collect(self)
769 """
770 with SCCallSiteSync(self.context) as css:
--> 771 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
772 return list(_load_from_socket(port, self._jrdd_deserializer))
773
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
811 answer = self.gateway_client.send_command(command)
812 return_value = get_return_value(
--> 813 answer, self.gateway_client, self.target_id, self.name)
814
815 for temp_arg in temp_args:
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/utils.pyc in deco(*a, **kw)
43 def deco(*a, **kw):
44 try:
---> 45 return f(*a, **kw)
46 except py4j.protocol.Py4JJavaError as e:
47 s = e.java_exception.toString()
/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
306 raise Py4JJavaError(
307 "An error occurred while calling {0}{1}{2}.\n".
--> 308 format(target_id, ".", name), value)
309 else:
310 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.OutOfMemoryError: GC overhead limit exceeded
</code></pre>
<p>Update:</p>
<p>Part of the issue seems to have been solved through this <a href="https://stackoverflow.com/questions/34662953/spark-error-loading-files-from-s3-wildcard?rq=1">post</a>. Seems that spark was having difficulty munging so many files from S3. Updated the error so that it only reflects the memory issues now.</p>
| 3 | 2016-07-25T20:24:42Z | 38,731,548 | <p>The problem was that there were too many files. The solution seems to be to decrease the number of partitions by reading in a subset of files and coalescing them to a smaller number. You can't make the partitions too big though: 500 - 1000 MB files cause their own problems.</p>
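<p>To make the "read a subset and coalesce" idea concrete, here is one possible shape for it. This is only a sketch: the batch size of 10000 and the 200 partitions are made-up numbers you would have to tune, and the Spark calls are shown only in comments so the batching helper itself stays runnable anywhere:</p>

```python
def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage with the file list from the question:
# rdds = []
# for batch in chunks(cleaned_names, 10000):
#     rdd = sc.textFile(','.join(batch)).coalesce(200)  # tune 200 to taste
#     rdds.append(rdd)
# dat = sc.union(rdds)

fake_names = ['s3://my-bucket/file{}'.format(i) for i in range(25)]
batches = list(chunks(fake_names, 10))
print([len(b) for b in batches])  # → [10, 10, 5]
```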
| 0 | 2016-08-02T22:34:07Z | [
"python",
"apache-spark",
"amazon-s3",
"pyspark"
] |
Using dataframe in a class to filter results | 38,576,864 | <p>I've created a class to preprocess a document with pandas dataframes. However, I'm having trouble using filters within my class. My code is below:</p>
<pre><code>class Dataframe:
def __init__(self, my_dataframe):
self.my_dataframe = my_dataframe
self.my_dataframe = self.filter_priv()
def filter_priv(self):
df = self.my_dataframe.copy()
df = df[~(df.priv_id > -1) | ~(df.restriction_level > 0)]
df1 = Dataframe(df)
df
</code></pre>
<p>My output is always non filtered results. My input file has 262,000 records, and with the filter, when called outside my class it successfully filters my df down to 11,000 records. Any ideas why it does not filter in the class? </p>
| 1 | 2016-07-25T20:27:47Z | 38,576,945 | <p>Your problem might be that you're using the variable "df" to initialize a Dataframe class, but the variable df hasn't been defined yet...</p>
| 0 | 2016-07-25T20:32:32Z | [
"python",
"python-2.7",
"pandas"
] |
Using dataframe in a class to filter results | 38,576,864 | <p>I've created a class to preprocess a document with pandas dataframes. However, I'm having trouble using filters within my class. My code is below:</p>
<pre><code>class Dataframe:
def __init__(self, my_dataframe):
self.my_dataframe = my_dataframe
self.my_dataframe = self.filter_priv()
def filter_priv(self):
df = self.my_dataframe.copy()
df = df[~(df.priv_id > -1) | ~(df.restriction_level > 0)]
df1 = Dataframe(df)
df
</code></pre>
<p>My output is always non filtered results. My input file has 262,000 records, and with the filter, when called outside my class it successfully filters my df down to 11,000 records. Any ideas why it does not filter in the class? </p>
| 1 | 2016-07-25T20:27:47Z | 38,577,464 | <p>You're doing this <em>way</em> wrong. You're subclassing from dataframe, but then you're storing your data inside a special dataframe property. No bueno.</p>
<p>You should be subclassing if you want to have a class that quacks like a dataframe, but you want to add some other behavior that's not available.</p>
<p>If you want to subclass, you should be doing it like this:</p>
<pre><code>class Dataframe(DataFrame):
def __init__(self, *args, **kwargs):
super(Dataframe, self).__init__(*args, **kwargs)
def filter_priv(self):
return self[~(self.priv_id > -1) | ~(self.restriction_level > 0)]
# Not sure if you can create a dataframe from another
df1 = Dataframe(df)
</code></pre>
<p>But that's really probably not even what you want. It's probably better to just have:</p>
<pre><code>def filter_priv(df):
return df[~(df.priv_id > -1) | ~(df.restriction_level > 0)]
df1 = filter_priv(df)
</code></pre>
| 0 | 2016-07-25T21:09:31Z | [
"python",
"python-2.7",
"pandas"
] |
How to compare hours and minutes? | 38,576,920 | <p>I have four variables:</p>
<pre><code>start_hour = '12'
start_minute = '00'
end_hour = '22'
end_minute = '30'
</code></pre>
<p>and from datetime:</p>
<pre><code>current_hour = datetime.now().hour
current_minute = datetime.now().minute
</code></pre>
<p>And I want to compare if the current time is within the range:</p>
<pre><code>if int(start_hour) <= current_hour and int(end_hour) >= current_hour:
something
</code></pre>
<p>But how to implement this with minutes?</p>
| 1 | 2016-07-25T20:31:02Z | 38,576,952 | <p>You can use <a href="https://docs.python.org/2/library/datetime.html#timedelta-objects" rel="nofollow"><code>datetime.timedelta</code></a> to do the comparisons reliably. You can specify a delta in different units of time (hours, minutes, seconds, etc.) Then you don't have to worry about converting to hours, minutes, etc. explicitly.</p>
<p>For example, to check if the current time is more than an hour from the <code>start_time</code>:</p>
<pre><code>from datetime import datetime, timedelta

if abs(datetime.now() - start_time) > timedelta(hours=1):
# Do thing
</code></pre>
<p>You can also use <code>timedelta</code> to shift a time by a given amount:</p>
<pre><code>from datetime import datetime, timedelta

six_point_five_hours_from_now = datetime.now() + timedelta(hours=6, minutes=30)
</code></pre>
<p>The nice thing about <code>timedelta</code> apart from easy conversions between units is it will automatically handle time differences that span multiple days, etc.</p>
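<p>If the goal is only to check whether the clock time falls inside a daily window, <code>datetime.time</code> objects compare directly, which sidesteps the minute arithmetic entirely. A small sketch using the values from the question:</p>

```python
from datetime import datetime, time

start = time(int('12'), int('00'))   # start_hour, start_minute from the question
end = time(int('22'), int('30'))     # end_hour, end_minute
now = datetime.now().time()

if start <= now <= end:
    print("within the window")
```

<p>Note this breaks down if the window crosses midnight (e.g. 22:00 to 06:00); in that case you would test <code>now >= start or now <= end</code> instead.</p>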
| 7 | 2016-07-25T20:33:03Z | [
"python"
] |
How to compare hours and minutes? | 38,576,920 | <p>I have four variables:</p>
<pre><code>start_hour = '12'
start_minute = '00'
end_hour = '22'
end_minute = '30'
</code></pre>
<p>and from datetime:</p>
<pre><code>current_hour = datetime.now().hour
current_minute = datetime.now().minute
</code></pre>
<p>And I want to compare if the current time is within the range:</p>
<pre><code>if int(start_hour) <= current_hour and int(end_hour) >= current_hour:
something
</code></pre>
<p>But how to implement this with minutes?</p>
| 1 | 2016-07-25T20:31:02Z | 38,576,979 | <p>A much better way to go about this would be to convert both times to minutes:</p>
<pre><code>start_time = int(start_hour)*60 + int(start_minute)
end_time = int(end_hour)*60 + int(end_minute)
current_time = datetime.now().hour * 60 + datetime.now().minute
if start_time <= current_time and end_time >= current_time:
#doSomething
</code></pre>
<p>If you need to include seconds, convert everything to seconds.</p>
| 4 | 2016-07-25T20:34:51Z | [
"python"
] |
How to compare hours and minutes? | 38,576,920 | <p>I have four variables:</p>
<pre><code>start_hour = '12'
start_minute = '00'
end_hour = '22'
end_minute = '30'
</code></pre>
<p>and from datetime:</p>
<pre><code>current_hour = datetime.now().hour
current_minute = datetime.now().minute
</code></pre>
<p>And I want to compare if the current time is within the range:</p>
<pre><code>if int(start_hour) <= current_hour and int(end_hour) >= current_hour:
something
</code></pre>
<p>But how to implement this with minutes?</p>
| 1 | 2016-07-25T20:31:02Z | 38,577,112 | <p>A simple and clear way to do it all with just <code>datetime</code> objects is:</p>
<pre><code>now = datetime.now()
start = now.replace(hour = int(start_hour), minute = int(start_minute))
end = now.replace(hour = int(end_hour), minute = int(end_minute))
if start <= now <= end:
print('something')
</code></pre>
| 1 | 2016-07-25T20:44:08Z | [
"python"
] |
How to compare hours and minutes? | 38,576,920 | <p>I have four variables:</p>
<pre><code>start_hour = '12'
start_minute = '00'
end_hour = '22'
end_minute = '30'
</code></pre>
<p>and from datetime:</p>
<pre><code>current_hour = datetime.now().hour
current_minute = datetime.now().minute
</code></pre>
<p>And I want to compare if the current time is within the range:</p>
<pre><code>if int(start_hour) <= current_hour and int(end_hour) >= current_hour:
something
</code></pre>
<p>But how to implement this with minutes?</p>
| 1 | 2016-07-25T20:31:02Z | 38,577,215 | <p>What about:</p>
<pre><code>>>> import datetime
>>> now = datetime.datetime.now()
>>> breakfast_time = now.replace( hour=7, minute=30, second=0, microsecond=0 )
>>> lunch_time = now.replace( hour=12, minute=30, second=0, microsecond=0 )
>>> coffee_break = now.replace( hour=16, minute=00, second=0, microsecond=0 )
>>> breakfast_time <= lunch_time <= coffee_break
True
</code></pre>
| 2 | 2016-07-25T20:51:28Z | [
"python"
] |
Are there any Python NLP tools to figure out how many ways a sentence can be parsed? | 38,576,921 | <p>I want to be able to measure the ambiguity of a sentence, and my current idea to do so is by measuring how many ways a sentence can be parsed. For example, the sentence "Fruit flies like a banana" can have two interpretations.</p>
<p>So far I have tried using the Stanford Parser, but it only interpreted each sentence in one way. My other idea was to measure how many different parts of speech each word in a sentence could mean, but each POS tagger I found only marked each word with 1 tag even when it could be multiple.</p>
<p>Are there any tools to do either?</p>
| 0 | 2016-07-25T20:31:11Z | 38,593,679 | <p>From the Stanford Parser <a href="http://nlp.stanford.edu/software/parser-faq.shtml" rel="nofollow">FAQ page</a>, hope it helps:</p>
<blockquote>
<p><strong>Can I obtain multiple parse trees for a single input sentence?</strong></p>
<p>Yes, for the PCFG parser (only). With a PCFG parser, you can give the option <code>-printPCFGkBest n</code> and it will print the <code>n</code> highest-scoring parses for a sentence. They can be printed either as phrase structure trees or as typed dependencies in the usual way via the <code>-outputFormat</code> option, and each receives a score (log probability). The <code>k</code> best parses are extracted efficiently using the algorithm of Huang and Chiang (2005).</p>
</blockquote>
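<p>If you would rather stay in pure Python, the parse count itself is easy to get from a CYK chart once you have a grammar in Chomsky normal form. The grammar below is a toy one I made up just for the example sentence (a real grammar would come from a treebank or a parser), but it shows the counting mechanics:</p>

```python
from collections import defaultdict

# Toy CNF grammar (my own invention, for illustration only).
binary = [
    ('S',  'NP',  'VP'),
    ('NP', 'Det', 'N'),   # a banana
    ('NP', 'N',   'N'),   # fruit flies (compound noun)
    ('VP', 'V',   'NP'),  # like a banana
    ('VP', 'V',   'PP'),  # flies like a banana
    ('PP', 'P',   'NP'),  # like a banana
]
lexical = {
    'fruit':  ['N', 'NP'],
    'flies':  ['N', 'V'],
    'like':   ['V', 'P'],
    'a':      ['Det'],
    'banana': ['N'],
}

def count_parses(words, start='S'):
    n = len(words)
    # chart[(i, j)] maps nonterminal -> number of parse trees over words[i:j]
    chart = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(words):
        for nt in lexical.get(w, []):
            chart[(i, i + 1)][nt] += 1
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b, c in binary:
                    left = chart[(i, k)].get(b, 0)
                    right = chart[(k, j)].get(c, 0)
                    if left and right:
                        chart[(i, j)][a] += left * right
    return chart[(0, n)].get(start, 0)

print(count_parses('fruit flies like a banana'.split()))  # → 2 with this grammar
```

<p>The count of 2 corresponds exactly to the two readings: "[fruit flies] like [a banana]" and "[fruit] flies [like a banana]".</p>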
| 0 | 2016-07-26T15:11:23Z | [
"python",
"nlp"
] |
Python:Replace tab in double quotes | 38,576,959 | <p>Hi, I have a line where I want to replace tabs inside double quotes. I have written a script for that, but it is not working as I want.
My line:</p>
<pre><code>Q3U962 Mus musculus MRMP-mouse Optimization "MRMP-mouse "
</code></pre>
<p>My script:</p>
<pre><code> for repline in reppepdata:
findtorep=re.findall(r"['\"](.*?)['\"]", repline)
if len(findtorep) >0:
for repitem in findtorep:
repchar =repitem
repchar=repchar.replace('\t', '')
</code></pre>
<p>My output should be:</p>
<pre><code>Q3U962 Mus musculus MRMP-mouse Optimization "MRMP-mouse"
</code></pre>
<p>But I am getting like this:</p>
<pre><code>Q3U962 Mus musculus MRMP-mouseOptimization "MRMP-mouse"
</code></pre>
<p>Words are separated by tab delimiter here.</p>
<pre><code>Q3U962\tMus musculus\tMRMP-mouse\tOptimization \t"MRMP-mouse\t"
</code></pre>
<p>Does anyone have any idea how to do it?</p>
| 1 | 2016-07-25T20:33:22Z | 38,577,234 | <p><strong>NOTE</strong>: This answer assumes (it is <a href="http://stackoverflow.com/questions/38576959/pythonreplace-tab-in-double-quotes/38577234#comment64543584_38576959">confirmed by OP</a>) that there are no escaped quotes/sequences in the input.</p>
<p>You may match the quoted string with a simple <code>"[^"]+"</code> regex that matches a <code>"</code>, 1+ chars other than <code>"</code> and a <code>"</code>, and replace the tabs inside within a lambda:</p>
<pre><code>import re
s = 'Q3U962\tMus musculus\tMRMP-mouse\tOptimization \t"MRMP-mouse\t"'
res = re.sub(r'"[^"]+"', lambda m: m.group(0).replace("\t", ""), s)
print(res)
</code></pre>
<p>See the <a href="http://ideone.com/7kfJf1" rel="nofollow">Python demo</a></p>
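<p>For completeness, here is the same call run against the tab-delimited form of the line from the question (written with explicit <code>\t</code> escapes), so the expected result is visible:</p>

```python
import re

s = 'Q3U962\tMus musculus\tMRMP-mouse\tOptimization \t"MRMP-mouse\t"'
# Replace tabs only inside the double-quoted part; field delimiters stay intact.
res = re.sub(r'"[^"]+"', lambda m: m.group(0).replace("\t", ""), s)
print(repr(res))
# 'Q3U962\tMus musculus\tMRMP-mouse\tOptimization \t"MRMP-mouse"'
```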
| 1 | 2016-07-25T20:52:40Z | [
"python",
"regex",
"replace"
] |
Pct_change in python with missing data | 38,576,969 | <p>I have quarterly time series data that I am calculating derivatives for. The problem is, the raw data has gaps in the time series. Therefore, if I am trying to find the quarter-over-quarter percent change in a variable, there are times when it will not realize it's calculating a percent change for a period much longer than a quarter. How do I make sure the pct_change() is only being done if the preceding data point is from the previous quarter (not further back) </p>
<p>Related to this, I am looking to calculate Year-over-year percent changes, which would have to go back 4 periods. I could use pct_change and just have it look back 4 periods rather than 1, but again, that assumes all the data is present.</p>
<p>What would be the best approach for handling this situation?</p>
<p>Below is the code I would use if the data was perfect:</p>
<pre><code>dataRGQoQ = rawdata.groupby("ticker")['revenueusd'].pct_change()
</code></pre>
<p>I have included sample data below. There are 2 points in this data to focus on: (1) with ticker 'A', the gap between '2006-09-30' and '2007-12-31'; and (2) with ABV the gap (this time is slightly different because it has the dates and no data) between '2012-12-31' and '2013-12-31'.</p>
<pre><code>ticker,calendardate,revenueusd
A,2005-12-31,5139000000
A,2006-03-31,4817000000
A,2006-06-30,4560000000
A,2006-09-30,4325000000
A,2007-12-31,5420000000
A,2008-03-31,5533000000
A,2008-06-30,5669000000
A,2008-09-30,5739000000
AA,2005-12-31,26159000000
AA,2006-03-31,27242000000
AA,2006-06-30,28438000000
AA,2006-09-30,29503000000
AA,2006-12-31,30379000000
AA,2007-03-31,31338000000
AA,2007-06-30,31445000000
AA,2007-09-30,31201000000
AA,2007-12-31,30748000000
ABBV,2012-12-31,18380000000
ABBV,2013-03-31,
ABBV,2013-06-30,
ABBV,2013-09-30,
ABBV,2013-12-31,18790000000
ABBV,2014-03-31,19024000000
ABBV,2014-06-30,19258000000
ABBV,2014-09-30,19619000000
ABBV,2014-12-31,19960000000
ABBV,2015-03-31,20437000000
</code></pre>
| 2 | 2016-07-25T20:34:34Z | 38,577,310 | <p>I'm going to put <code>['calendardate', 'ticker']</code> in the index to facilitate pivoting. Then <code>unstack</code> to get ticker values in the columns.</p>
<pre><code>df.set_index(['calendardate', 'ticker']).unstack().head(10)
</code></pre>
<p><a href="http://i.stack.imgur.com/2jMrS.png" rel="nofollow"><img src="http://i.stack.imgur.com/2jMrS.png" alt="enter image description here"></a></p>
<p>With <code>calendardate</code> in the index, we can use <code>resample('Q')</code> to insert all quarters. This will ensure we get the proper <code>NaN</code>'s for missing quarters.</p>
<pre><code>df.set_index(['calendardate', 'ticker']).unstack().resample('Q').mean().head(10)
</code></pre>
<p>Assign this to <code>df1</code> and then we can do <code>pct_change</code>, <code>stack</code> back and <code>reset_index</code> to get columns back in the dataframe proper.</p>
<pre><code>df1 = df.set_index(['calendardate', 'ticker']).unstack().resample('Q').mean()
df1.pct_change().stack().reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/NMxjv.png" rel="nofollow"><img src="http://i.stack.imgur.com/NMxjv.png" alt="enter image description here"></a></p>
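<p>For the year-over-year part of the question: once the index contains every quarter (with NaN for the missing ones), <code>pct_change(periods=4)</code> compares each quarter to the same quarter a year earlier. A sketch on a single ticker, with the full quarter grid built by hand so it does not depend on resample frequency strings, and <code>fill_method=None</code> passed so gaps stay NaN instead of being forward-filled:</p>

```python
import numpy as np
import pandas as pd

# Revenue for one ticker, with 2013 Q1-Q3 missing (like ABBV in the sample data).
s = pd.Series(
    [18380, 18790, 19024, 19258, 19619, 19960],
    index=pd.to_datetime(['2012-12-31', '2013-12-31', '2014-03-31',
                          '2014-06-30', '2014-09-30', '2014-12-31']),
)

# Reindex onto the complete quarter-end grid so every gap becomes NaN.
full_quarters = pd.to_datetime([
    '2012-12-31', '2013-03-31', '2013-06-30', '2013-09-30', '2013-12-31',
    '2014-03-31', '2014-06-30', '2014-09-30', '2014-12-31',
])
s = s.reindex(full_quarters)

qoq = s.pct_change(fill_method=None)             # NaN when the previous quarter is missing
yoy = s.pct_change(periods=4, fill_method=None)  # same quarter one year back

print(yoy.loc['2013-12-31'])  # (18790 - 18380) / 18380, about 0.0223
```

<p>The same idea applies per ticker after the <code>unstack</code> above, since <code>pct_change</code> works column-wise.</p>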
| 2 | 2016-07-25T20:58:01Z | [
"python",
"numpy",
"pandas"
] |
How to use the data generated in a specific class in another class? (in python) | 38,576,981 | <p>I'm new to python and to OO programming, so I don't know if the following question makes sense (...but I think it's a nice way to learn): </p>
<p>I'm trying to have two classes. One that generates data (call it <code>DataGen()</code>) and another that is able to process this data and give me some statistics (call it <code>Stats()</code>). I want to put both in different py files to keep it cleaner and so I can add methods to Stats.py without touching DataGen.py. Something like this:<br>
In DataGen.py </p>
<pre><code>class DataGen(object):
def __init__(self, number_of_samples, arg2):
self.X = np.zeros([number_of_samples,1])
# etc..
def samples(self):
# do some sampling and save it in self.X
</code></pre>
<p>In Stats.py </p>
<pre><code>class Stats(object):
def __init__(self, something):
# something here to initialize
# etc..
def mean(self):
# calculate the mean using something like DataGen.X
</code></pre>
<p>Now, and here comes the part where I get lost. I want <code>Stats()</code> to work on the data belonging to an instance of <code>DataGen()</code>, but I don't know how to link the data contained in <code>DataGen.X</code> to <code>Stats</code>, so I can use the data every time I sample with <code>DataGen.samples()</code>.</p>
<p>I tried to construct an instance of <code>DG = DataGen(arg1,arg2)</code> and then pass this object to <code>S = Stats(DG)</code>. However, if I initialize this way, the data used to estimate the statistics didn't change after I sampled again with <code>DataGen.samples()</code>. I guess every time I sample, I have to create an instance <code>S = Stats(DG)</code> with the new data. This seems bad... is there any way I can attach this <code>Stats()</code> class to the data of <code>DataGen()</code>? Is this a bad idea/horrible construct?</p>
<p>I also don't know how I should think about this if I construct something where <code>DataGen</code> inherits the methods of Stats, or something similar. If <code>DataGen</code> inherits from <code>Stats</code>, but <code>Stats</code> needs the data from <code>DataGen</code>, how can I solve this loop? If <code>Stats</code> inherits from <code>DataGen</code>, do I need to create a single instance of Stats and then sample with it instead of <code>DataGen</code>, as <code>Stats.DataGen.samples()</code> or <code>Stats.samples()</code>. I wouldn't like that because the rest of my code uses <code>DataGen()</code> and I think it is better structured if I don't use <code>Stats()</code> to sample! </p>
<p>Does the above make sense? Any comment regarding this would be very helpful!</p>
| 0 | 2016-07-25T20:34:58Z | 38,578,064 | <p>For someone new to Python and OO you're getting the hang of encapsulation very well. You will want to follow the <a href="https://en.wikipedia.org/wiki/Observer_pattern" rel="nofollow">Observer pattern</a> here which is where one object owns some info (DataGen) and some other object is interested in that info (Stats) and wants to be updated when it changes.</p>
<p>The way to do this is to pass a function from the interested object to owner object which can then be called when that info changes. The function is called either a callback or a listener.</p>
<pre><code>class DataGen(object):
def __init__(self, number_of_samples, arg2):
self.X = np.zeros([number_of_samples,1])
self.listeners = list()
def samples(self):
# do some sampling and save it in self.X
# Call back to the listener and let it know that your samples changed.
for listener in self.listeners:
listener.samples_changed(self.X)
def add_listener(self, listener):
        self.listeners.append(listener)
class Stats(object):
def __init__(self, data_gen):
# Register this 'observer' with the 'observable'
        data_gen.add_listener(self)
def samples_changed(self, samples):
# Recalculate mean.
def mean(self):
# calculate the mean using something like DataGen.X
</code></pre>
<p>Here is an example of an object adding itself as a listener for another object. </p>
<ol>
<li>Stats registers itself as a listener with <code>DataGen</code></li>
<li>You call <code>samples()</code> on <code>DataGen</code></li>
<li><code>samples()</code> iterates through all of its listeners (there may be more than one stats) and calls <code>samples_changed(self.X)</code> on each one, passing the new set of samples. </li>
<li>One of those listeners (the only one in this case) is <code>Stats</code>, which can update its internal state to handle the new samples.</li>
</ol>
<p>If at some time you want to remove the <code>Stats</code> object you must make sure you remove it from the <code>DataGen.listeners</code> list, otherwise you will end up with a memory leak. </p>
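<p>A minimal, self-contained version of that wiring (plain lists instead of numpy, just to show the sequence above end to end):</p>

```python
class DataGen(object):
    def __init__(self):
        self.X = []
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def samples(self, values):
        self.X = list(values)
        for listener in self.listeners:   # notify every registered observer
            listener.samples_changed(self.X)

class Stats(object):
    def __init__(self, data_gen):
        self._mean = None
        data_gen.add_listener(self)       # register with the observable

    def samples_changed(self, samples):
        self._mean = sum(samples) / float(len(samples))

    def mean(self):
        return self._mean

gen = DataGen()
stats = Stats(gen)
gen.samples([1, 2, 3, 4])
print(stats.mean())  # → 2.5
```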
| 0 | 2016-07-25T21:58:19Z | [
"python",
"oop"
] |
How would I collapse rows based on a value in a column? | 38,577,007 | <p>I'll describe what I mean in more detail here.
Suppose I have a data sheet that looks like this:</p>
<pre><code>+-----------+---------+---------+---------+---------+---------+---------+--------------+
| | Person1 | Person2 | Person4 | Person4 | Person5 | Person6 | City |
+-----------+---------+---------+---------+---------+---------+---------+--------------+
| January | - | - | Yes | - | Yes | - | SanFrancisco |
| Febuary | Yes | - | - | - | - | - | SanFrancisco |
| March | - | - | - | - | - | - | SanFrancisco |
| April | - | - | - | - | - | - | NewYork |
| May | Yes | - | - | - | - | - | NewYork |
| June | - | - | - | - | - | - | NewYork |
| July | - | - | - | - | Yes | - | NewYork |
| August | - | - | - | - | - | - | NewYork |
| September | - | - | - | - | - | - | Miami |
| November | - | - | - | - | - | Yes | Miami |
| December | - | - | - | - | - | - | Miami |
+-----------+---------+---------+---------+---------+---------+---------+--------------+
</code></pre>
<p>Ignoring the ascii for stackoverflow formatting, it's a simple spreadsheet that tracks 6 people based on what city they've been to in which months.</p>
<p>What I want to only know is, which people have visited which cities. Effectively condensing the list to look like this:</p>
<pre><code>+---------+---------+---------+---------+---------+---------+--------------+
| Person1 | Person2 | Person4 | Person4 | Person5 | Person6 | City |
+---------+---------+---------+---------+---------+---------+--------------+
| Yes | - | Yes | - | Yes | - | SanFrancisco |
| Yes | - | - | - | Yes | - | NewYork |
| - | - | - | - | - | Yes | Miami |
+---------+---------+---------+---------+---------+---------+--------------+
</code></pre>
<p>Each row is only ONE city, and contains which people have visited it. Is there an optimum way to do this, or rather, is there some sort of tr(squeeze)/sed tool that already does this? If I had to code this, what would the optimum logic be?</p>
| 0 | 2016-07-25T20:36:43Z | 38,579,593 | <p>The proper term for what you're trying to do here is <em>aggregation</em>. The word <em>collapse</em> is not commonly used for this operation, in my experience.</p>
<p>I'm sort of learning python on-the-fly here, so there might be a better way, but I've gotten this to work using the <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code></a> module, specifically the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow"><code>DataFrame</code></a> type:</p>
<pre><code>import pandas;
import re;
df = pandas.DataFrame({
'Date':['January','Febuary','March','April','May','June','July','August','September','November','December'],
'Person1':['-','Yes','-','-','Yes','-','-','-','-','-','-'],
'Person2':['-','-','-','-','-','-','-','-','-','-','-'],
'Person3':['Yes','-','-','-','-','-','-','-','-','-','-'],
'Person4':['-','-','-','-','-','-','-','-','-','-','-'],
'Person5':['Yes','-','-','-','-','-','Yes','-','-','-','-'],
'Person6':['-','-','-','-','-','-','-','-','-','Yes','-'],
'City':['SanFrancisco','SanFrancisco','SanFrancisco','NewYork','NewYork','NewYork','NewYork','NewYork','Miami','Miami','Miami']
});
df.groupby('City').agg({k:lambda x: 'Yes' if 'Yes' in x.values else '-' for k in filter(lambda x:re.search(r'^Person',x),df.keys())});
## Person2 Person3 Person1 Person6 Person4 Person5
## City
## Miami - - - Yes - -
## NewYork - - Yes - - Yes
## SanFrancisco - Yes Yes - - Yes
</code></pre>
<hr>
<p>Also, I would highly recommend looking into the <a href="https://www.r-project.org/" rel="nofollow">R programming language</a>, which is an excellent and increasingly ubiquitous statistical, graphics, and general data analysis platform, which is perfect for working with Excel-style tabular data. These kinds of data format transformations are definitely more natural in R, although the learning curve is rather steep. Here's the R implementation:</p>
<pre><code>df <- read.csv(stringsAsFactors=F,text=
'Date,Person1,Person2,Person3,Person4,Person5,Person6,City
January,-,-,Yes,-,Yes,-,SanFrancisco
Febuary,Yes,-,-,-,-,-,SanFrancisco
March,-,-,-,-,-,-,SanFrancisco
April,-,-,-,-,-,-,NewYork
May,Yes,-,-,-,-,-,NewYork
June,-,-,-,-,-,-,NewYork
July,-,-,-,-,Yes,-,NewYork
August,-,-,-,-,-,-,NewYork
September,-,-,-,-,-,-,Miami
November,-,-,-,-,-,Yes,Miami
December,-,-,-,-,-,-,Miami'
);
aggregate(.~City,df[-1L],function(x) if (any(x=='Yes')) 'Yes' else '-');
## City Person1 Person2 Person3 Person4 Person5 Person6
## 1 Miami - - - - - Yes
## 2 NewYork Yes - - - Yes -
## 3 SanFrancisco Yes - Yes - Yes -
</code></pre>
| 2 | 2016-07-26T01:00:16Z | [
"python",
"bash",
"tr"
] |
How would I collapse rows based on a value in a column? | 38,577,007 | <p>I'll describe what I mean in more detail here.
Suppose I have a data sheet that looks like this:</p>
<pre><code>+-----------+---------+---------+---------+---------+---------+---------+--------------+
| | Person1 | Person2 | Person4 | Person4 | Person5 | Person6 | City |
+-----------+---------+---------+---------+---------+---------+---------+--------------+
| January | - | - | Yes | - | Yes | - | SanFrancisco |
| Febuary | Yes | - | - | - | - | - | SanFrancisco |
| March | - | - | - | - | - | - | SanFrancisco |
| April | - | - | - | - | - | - | NewYork |
| May | Yes | - | - | - | - | - | NewYork |
| June | - | - | - | - | - | - | NewYork |
| July | - | - | - | - | Yes | - | NewYork |
| August | - | - | - | - | - | - | NewYork |
| September | - | - | - | - | - | - | Miami |
| November | - | - | - | - | - | Yes | Miami |
| December | - | - | - | - | - | - | Miami |
+-----------+---------+---------+---------+---------+---------+---------+--------------+
</code></pre>
<p>Ignoring the ascii for stackoverflow formatting, it's a simple spreadsheet that tracks 6 people based on what city they've been to in which months.</p>
<p>What I want to only know is, which people have visited which cities. Effectively condensing the list to look like this:</p>
<pre><code>+---------+---------+---------+---------+---------+---------+--------------+
| Person1 | Person2 | Person4 | Person4 | Person5 | Person6 | City |
+---------+---------+---------+---------+---------+---------+--------------+
| Yes | - | Yes | - | Yes | - | SanFrancisco |
| Yes | - | - | - | Yes | - | NewYork |
| - | - | - | - | - | Yes | Miami |
+---------+---------+---------+---------+---------+---------+--------------+
</code></pre>
<p>Each row is only ONE city, and contains which people have visited it. Is there an optimum way to do this, or rather, is there some sort of tr(squeeze)/sed tool that already does this? If I had to code this, what would the optimum logic be?</p>
| 0 | 2016-07-25T20:36:43Z | 38,581,967 | <pre><code>$ cat tst.awk
function prt() {
if ( prev != "" ) {
for (i=2;i<=NF;i++) {
printf "%s%s", vals[i], (i<NF ? OFS : ORS)
}
}
delete vals
}
BEGIN { FS=OFS="," }
$NF != prev { prt() }
{
for (i=1;i<=NF;i++) {
vals[i] = (vals[i] ~ /[[:alpha:]]/ ? vals[i] : $i)
}
prev = $NF
}
END { prt() }
$ awk -f tst.awk file
Person1,Person2,Person4,Person4,Person5,Person6,City
Yes,-,Yes,-,Yes,-,SanFrancisco
Yes,-,-,-,Yes,-,NewYork
-,-,-,-,-,Yes,Miami
</code></pre>
<p>The above assumes your input format is really a CSV like this:</p>
<pre><code>$ cat file
Month,Person1,Person2,Person4,Person4,Person5,Person6,City
January,-,-,Yes,-,Yes,-,SanFrancisco
Febuary,Yes,-,-,-,-,-,SanFrancisco
March,-,-,-,-,-,-,SanFrancisco
April,-,-,-,-,-,-,NewYork
May,Yes,-,-,-,-,-,NewYork
June,-,-,-,-,-,-,NewYork
July,-,-,-,-,Yes,-,NewYork
August,-,-,-,-,-,-,NewYork
September,-,-,-,-,-,-,Miami
November,-,-,-,-,-,Yes,Miami
December,-,-,-,-,-,-,Miami
</code></pre>
<p>and you want a CSV output.</p>
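<p>Since the question is also tagged python, here is how the same aggregation could look in plain-stdlib Python (a sketch with an abbreviated copy of the data inlined; it assumes the same CSV layout as above, month in the first column and city in the last):</p>

```python
import csv
import io
from collections import OrderedDict

text = """Month,Person1,Person2,Person3,Person4,Person5,Person6,City
January,-,-,Yes,-,Yes,-,SanFrancisco
Febuary,Yes,-,-,-,-,-,SanFrancisco
May,Yes,-,-,-,-,-,NewYork
July,-,-,-,-,Yes,-,NewYork
November,-,-,-,-,-,Yes,Miami
"""

reader = csv.reader(io.StringIO(text))
header = next(reader)
cities = OrderedDict()  # city -> merged Yes/- flags, in first-appearance order
for row in reader:
    flags, city = row[1:-1], row[-1]
    merged = cities.setdefault(city, ['-'] * len(flags))
    for i, flag in enumerate(flags):
        if flag == 'Yes':
            merged[i] = 'Yes'

print(','.join(header[1:]))
for city, flags in cities.items():
    print(','.join(flags + [city]))
```

<p>This prints one row per city with a "Yes" wherever any month for that city had one.</p>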
| 1 | 2016-07-26T05:54:03Z | [
"python",
"bash",
"tr"
] |
Issue running uWsgi using uid/gid | 38,577,013 | <p>I'm trying to run uWsgi using <code>uid</code>/<code>gid</code> parameters in my wsgi ini file, so that it drops privileged access after starting.</p>
<p>Note: Everything works fine as expected when I remove these two parameters from my ini file. Also, there are no issues with my socket. However, when I run with a specified <code>uid</code> and <code>gid</code> (nginx user and group), I get an error that is indicative of having a problem with my virtual env loading, </p>
<p><code>Traceback (most recent call last):
File "wsgi.py", line 14, in <module>
from app import app as application
File "/var/www/wsgi/flask-appbuilder/peds_registry/app/__init__.py", line 1, in <module>
import logging
ImportError: No module named logging</code></p>
<p>Again, this work fine when running without gid/pid. Also, note that the user and group nginx both exist and both have ownership on the python project's directory structure.</p>
<p>My Nginx config's server/location directives are as follows:</p>
<pre><code>server {
listen 80;
server_name hostname.domain;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name hostname.domain;
ssl_certificate /etc/ssl/certs/host.chained.crt;
ssl_certificate_key /etc/ssl/certs/host.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
location /test {
include uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi.sock;
}
}
</code></pre>
<p>My uwsgi startup is:</p>
<pre><code>#!/bin/sh
# chkconfig: - 99 10
FLASK_HOME=/var/www/wsgi/flask-appbuilder
export PEDS_HOME
ACTIVATE_CMD=/var/www/wsgi/flask-appbuilder/venv/bin/activate
case "$1" in
start)
cd $FLASK_HOME
source $ACTIVATE_CMD
uwsgi -s /tmp/uwsgi.sock -H ./venv/ --ini /var/www/wsgi/flask-appbuilder/test.ini --virtualenv /var/www/wsgi/flask-appbuilder/venv --chmod-socket=666 --manage-script-name --mount /test=run:app --wsgi-file wsgi.py --logto test.log &
;;
stop)
pkill uwsgi
;;
restart)
$0 stop
$0 start
;;
*)
echo "usage: $0 (start|stop|restart|help)"
esac
</code></pre>
<p>And my uWsgi startup ini is:</p>
<pre><code>[uwsgi]
socket = /tmp/uwsgi.sock
chdir = /var/www/wsgi/flask-appbuilder/peds_registry
wsgi-file = wsgi.py
pyhome = /var/www/wsgi/flask-appbuilder/venv
callable = app
manage-script-name = true
mount: /test=run.py
</code></pre>
<p>As stated, this loads fine without the gid/uid parameters, but when I add</p>
<pre><code>uid = nginx
gid = nginx
</code></pre>
<p>to the ini file, I get the error noted above. </p>
<p>All my searches turn up permission issues with the socket, but my problem seems to be loading modules from within the virtual environment. </p>
<p>On a side note: I am using uWsgi installed from pip into my virtual environment.</p>
| 0 | 2016-07-25T20:37:09Z | 38,597,595 | <p>This was completely not obvious. As a test, I tried using my own <code>uid</code>/<code>gid</code> to run the app, and lo and behold, it worked!</p>
<p>So, with "I must have ownership on something that the app <code>uid</code>/<code>gid</code> does not have permission to run" in mind, I grepped the venv on my username, and voila the answer appeared: One of the requirements for the app was that I needed to run python 2.7.6, which I had installed as per this Gist: <a href="https://gist.github.com/reorx/4067217" rel="nofollow">Python Deployment</a>. So, changing ownership of the <code>DEPLOY</code> directory structure (which is outside of the venv's directory structure) to the app's user/group was the ticket.</p>
| 0 | 2016-07-26T18:42:06Z | [
"python",
"nginx",
"flask",
"uwsgi",
"python-venv"
] |
Find and modify python nested dictionary (key, value) | 38,577,049 | <p>I have a json file that I need to update. I am converting it to a python dict (nested) to update it. Here is the input, but it could be any depth. I'm sure there is a better way to do this, but I don't know it.</p>
<p>Ultimately I want to be able to perform Create/Delete actions in addition to the update. </p>
<hr>
<h2>Here is the script and input file.</h2>
<pre><code># Now find TARGET value in nested key value chain
# Replace old value with NEWVALUE
import json
from pprint import pprint

d1 = open('jinputstack.json', 'r')
d1 = json.load(d1)

def traverse(obj, path=None, callback=None):
    """
    Traverse Python object structure, calling a callback function for every element in the structure,
    and inserting the return value of the callback as the new value.
    """
    if path is None:
        path = []
    if isinstance(obj, dict):
        value = {k: traverse(v, path + [k], callback)
                 for k, v in obj.items()}
    elif isinstance(obj, list):
        value = [traverse(elem, path + [[]], callback)
                 for elem in obj]
    else:
        value = obj
    if callback is None:
        # print("Starting value Found-----------------------------------------------------")
        print(value)
        return value
    else:
        print(path, value)
        return callback(path, value)

def traverse_modify(obj, target_path, action):
    """
    Traverses any arbitrary object structure and performs the given action on the value,
    replacing the node with the
    action's return value.
    """
    target_path = to_path(target_path)
    pprint(value)
    pprint(target_path)

    def transformer(path, value):
        if path == target_path:
            print(action)
            d2 = data["groups"][0]["properties"][1]["value"]["data"][2]["object"]["name"].update(action)
            return d2
        else:
            return value

    return traverse(obj, callback=transformer)

def to_path(path):
    """
    Helper function, converting path strings into path lists.
    >>> to_path('foo')
    ['foo']
    >>> to_path('foo.bar')
    ['foo', 'bar']
    >>> to_path('foo.bar[]')
    ['foo', 'bar', []]
    """
    if isinstance(path, list):
        return path  # already in list format

    def _iter_path(path):
        # pprint(path.split)
        for parts in path.split('[]'):
            for part in parts.strip('.').split('.'):
                yield part
            yield []

    return list(_iter_path(path))[:-1]

def updateit(newvalue):
    data["groups"][0]["properties"][1]["value"]["data"][2]["object"]["name"] = newvalue
    print(data["groups"][0]["properties"][1]["value"]["data"][2]["object"]["name"])
    return data["groups"][0]["properties"][1]["value"]["data"][2]["object"]["name"]

traverse_modify(d1, d1["groups"][0]["properties"][1]["value"]["data"][1]["object"]["name"], updateit("XXXXXXXXXXXXXX"))

json_data = json.dumps(data)
f = open("jinputstack.json","w")
f.write(json_data)
f.close()
</code></pre>
<hr>
<pre><code>jinputstack.json = {
"groups": [
{
"name": "group1",
"properties": [
{
"name": "Test-Key-String",
"value": {
"type": "String",
"encoding": "utf-8",
"data": "value1"
}
},
{
"name": "Test-Key-ValueArray",
"value": {
"type": "ValueArray",
"data": [
{
"data": true
},
{
"type": "Blob",
"object": {
"name": "John Su",
"age": 25,
"salary": 104000.45,
"married": false,
"gender": "Male"
}
}
]
}
}
],
"groups": [
{
"name": "group-child",
"properties": [
{
"name": "Test-Key-String"
},
{
"name": "Test-Key-List",
"value": {
"type": "List",
"data": [
"String1",
"String2",
"String3"
]
}
}
]
}
]
},
{
"name": "group2",
"properties": [
{
"name": "Test-Key2-String",
"value": {
"type": "String",
"encoding": "utf-8",
"data": "value2"
}
},
{
"name": "MicroBox"
}
]
}
]
}
</code></pre>
<p>Credit goes to original author: Vincent Driessen</p>
| 1 | 2016-07-25T20:39:36Z | 38,579,309 | <p>I think the best way would be to convert the JSON object to XML and use ElementTree and XPath to parse and modify your object. Later you can convert back to JSON if you need to:</p>
<pre><code>import json
from xmljson import parker
from lxml.etree import Element
dataxml = parker.etree(datajson, root=Element('root'))
print(dataxml.find('.//data//name').text) # John Su
dataxml.find('.//data//name').text = "Joan d'Arc"
print(dataxml.find('.//data//name').text) # Joan d'Arc
print(json.dumps(parker.data(dataxml)))
</code></pre>
<p>There are some packages that do something like XPath on a JSON string directly. One of them, <code>jsonpath-rw</code>, changes the syntax. I prefer to stick with the standard XPath syntax.</p>
<pre><code>from jsonpath_rw import jsonpath, parse
expr = parse('$..data..name') # Notice that . is now $ and / is now .
# Confusing enough?
matches = expr.find(datajson)        # find() returns match objects, so
matches[0] = 'yyyy'                  # rebinding this list slot does NOT modify datajson
print(expr.find(datajson)[0].value)  # still John Su
</code></pre>
<p>Another one, <code>xjpath</code>, is very simple and perhaps easier to learn, but is not much different from what you are doing now.</p>
<pre><code>import xjpath
xj = xjpath.XJPath(datajson)
print(xj['groups.@0.properties.@1.value.data.@1.object.name'])
# Not much different than your code:
print(data["groups"][0]["properties"][1]["value"]["data"][1]["object"]["name"])
</code></pre>
<p>I hope this helps.</p>
| 1 | 2016-07-26T00:22:47Z | [
"python",
"json",
"dictionary"
] |
customize dateutil.parser century inference logic | 38,577,076 | <p>I am working on old text files with 2-digit years where the default century logic in <code>dateutil.parser</code> doesn't seem to work well. For example, the attack on Pearl Harbor was not on <code>dparser.parse("12/7/41")</code> (which returns 2041-12-7). </p>
<p>The built-in century "threshold" to roll back into the 1900s seems to happen at 66:</p>
<pre><code>import dateutil.parser as dparser
print(dparser.parse("12/31/65")) # goes forward to 2065-12-31 00:00:00
print(dparser.parse("1/1/66")) # goes back to 1966-01-01 00:00:00
</code></pre>
<p>For my purposes I would like to set this "threshold" at 17, so that:</p>
<ul>
<li><code>"12/31/16"</code> parses to 2016-12-31 (<code>yyyy-mm-dd</code>)</li>
<li><code>"1/1/17"</code> parses to 1917-01-01</li>
</ul>
<p>But I would like to continue to use this module as its fuzzy match seems to be working well.</p>
<p>The <a href="http://dateutil.readthedocs.io/en/stable/examples.html" rel="nofollow">documentation</a> doesn't identify a parameter for doing this... is there an argument I'm overlooking?</p>
| 4 | 2016-07-25T20:41:37Z | 38,577,198 | <p>You can also <em>post-process the extracted dates</em> manually changing the century if the extracted year is more than a specified threshold, in your case - 2016:</p>
<pre><code>import dateutil.parser as dparser

THRESHOLD = 2016
date_strings = ["12/31/65", "1/1/66", "12/31/16", "1/1/17"]

for date_string in date_strings:
    dt = dparser.parse(date_string)
    if dt.year > THRESHOLD:
        dt = dt.replace(year=dt.year - 100)
    print(dt)
</code></pre>
<p>Prints:</p>
<pre><code>1965-12-31 00:00:00
1966-01-01 00:00:00
2016-12-31 00:00:00
1917-01-01 00:00:00
</code></pre>
| 1 | 2016-07-25T20:50:00Z | [
"python",
"python-dateutil"
] |
customize dateutil.parser century inference logic | 38,577,076 | <p>I am working on old text files with 2-digit years where the default century logic in <code>dateutil.parser</code> doesn't seem to work well. For example, the attack on Pearl Harbor was not on <code>dparser.parse("12/7/41")</code> (which returns 2041-12-7). </p>
<p>The built-in century "threshold" to roll back into the 1900s seems to happen at 66:</p>
<pre><code>import dateutil.parser as dparser
print(dparser.parse("12/31/65")) # goes forward to 2065-12-31 00:00:00
print(dparser.parse("1/1/66")) # goes back to 1966-01-01 00:00:00
</code></pre>
<p>For my purposes I would like to set this "threshold" at 17, so that:</p>
<ul>
<li><code>"12/31/16"</code> parses to 2016-12-31 (<code>yyyy-mm-dd</code>)</li>
<li><code>"1/1/17"</code> parses to 1917-01-01</li>
</ul>
<p>But I would like to continue to use this module as its fuzzy match seems to be working well.</p>
<p>The <a href="http://dateutil.readthedocs.io/en/stable/examples.html" rel="nofollow">documentation</a> doesn't identify a parameter for doing this... is there an argument I'm overlooking?</p>
| 4 | 2016-07-25T20:41:37Z | 38,577,299 | <p>This isn't particularly well documented but you can actually override this using <code>dateutil.parser</code>. The second argument is a <code>parserinfo</code> object, and the method you'll be concerned with is <code>convertyear</code>. The <a href="http://dateutil.readthedocs.io/en/stable/_modules/dateutil/parser.html#parserinfo.convertyear" rel="nofollow">default implementation</a> is what's causing you problems. You can see that it is basing its interpretation of the century on the current year, plus or minus fifty years. That's why you're seeing the transition at 1966. Next year it will be 1967. :)</p>
<p>Since you are using this personally and may have very specific needs, you don't have to be super-generic. You could do something as simple as this if it works for you:</p>
<pre><code>from dateutil.parser import parse, parserinfo

class MyParserInfo(parserinfo):
    def convertyear(self, year, *args, **kwargs):
        if year < 100:
            year += 1900
        return year

parse('1/21/47', MyParserInfo())
# datetime.datetime(1947, 1, 21, 0, 0)
</code></pre>
| 4 | 2016-07-25T20:57:18Z | [
"python",
"python-dateutil"
] |
customize dateutil.parser century inference logic | 38,577,076 | <p>I am working on old text files with 2-digit years where the default century logic in <code>dateutil.parser</code> doesn't seem to work well. For example, the attack on Pearl Harbor was not on <code>dparser.parse("12/7/41")</code> (which returns 2041-12-7). </p>
<p>The built-in century "threshold" to roll back into the 1900s seems to happen at 66:</p>
<pre><code>import dateutil.parser as dparser
print(dparser.parse("12/31/65")) # goes forward to 2065-12-31 00:00:00
print(dparser.parse("1/1/66")) # goes back to 1966-01-01 00:00:00
</code></pre>
<p>For my purposes I would like to set this "threshold" at 17, so that:</p>
<ul>
<li><code>"12/31/16"</code> parses to 2016-12-31 (<code>yyyy-mm-dd</code>)</li>
<li><code>"1/1/17"</code> parses to 1917-01-01</li>
</ul>
<p>But I would like to continue to use this module as its fuzzy match seems to be working well.</p>
<p>The <a href="http://dateutil.readthedocs.io/en/stable/examples.html" rel="nofollow">documentation</a> doesn't identify a parameter for doing this... is there an argument I'm overlooking?</p>
| 4 | 2016-07-25T20:41:37Z | 38,577,890 | <p>Other than writing your own <code>parserinfo.convertyear</code> method, you can customize this by passing a standard <code>parserinfo</code> object with changed <code>_century</code> and <code>_year</code> settings *):</p>
<pre><code>from dateutil.parser import parse, parserinfo
info = parserinfo()
info._century = 1900
info._year = 1965
parse('12/31/65', parserinfo=info)
=> 1965-12-31 00:00:00
</code></pre>
<p><code>_century</code> specifies the default years added to whatever year number is parsed, i.e. <code>65 + 1900 = 1965</code>. </p>
<p><code>_year</code> specifies the cut-off year +- 50. Any year at least 50 years off of <code>_year</code>, i.e. where the difference is</p>
<ul>
<li><code>< _year</code> will be switched to the next century</li>
<li><code>>= _year</code> will be switched to the previous century</li>
</ul>
<p>Think of this as a timeline:</p>
<pre><code>1900           1916           1965           2015
 +--- (...) ----+--- (...) ----+--- (...) ----+
 ^              ^              ^              ^
_century    _year - 49       _year       _year + 50

parsed years:   16,17, ...  99,00, ...  15
</code></pre>
<p>In other words, the years <code>00, 01, ..., 99</code> are mapped to the time range <code>_year - 49</code> .. <code>_year + 50</code> with <code>_year</code> set to the middle of this 100-year period. Using these two settings you can thus specify any cut off you like.</p>
<p>*) Note these two variables are undocumented; however, they are used in the default implementation for <a href="http://dateutil.readthedocs.io/en/2.5.3/_modules/dateutil/parser.html#parserinfo.convertyear" rel="nofollow"><code>parserinfo.convertyear</code></a> in the newest stable version at the time of writing, 2.5.3. IMHO the default implementation is quite smart.</p>
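<p>The window arithmetic described above is small enough to sketch in pure Python. This is an illustration only, not dateutil's actual implementation; <code>pivot_year</code> plays the role of <code>_year</code> and its century that of <code>_century</code>:</p>

```python
def convert_two_digit_year(year, pivot_year=1965):
    """Map a 2-digit year into the 100-year window
    pivot_year - 49 .. pivot_year + 50."""
    candidate = pivot_year // 100 * 100 + year
    if candidate < pivot_year - 49:
        candidate += 100
    elif candidate > pivot_year + 50:
        candidate -= 100
    return candidate
```

<p>With the defaults from the timeline, 16..99 map to 1916..1999 and 00..15 map to 2000..2015.</p>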
| 1 | 2016-07-25T21:42:50Z | [
"python",
"python-dateutil"
] |
How to fix "unexpected keyword argument 'useChardet'" in html5lib | 38,577,080 | <p>I'm using html5lib and after updating it to the latest version, I keep getting this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/travis/build/freelawproject/juriscraper/tests/test_everything.py", line 119, in test_scrape_all_example_files
site.parse()
File "/home/travis/build/freelawproject/juriscraper/juriscraper/AbstractSite.py", line 95, in parse
self.html = self._download()
File "/home/travis/build/freelawproject/juriscraper/juriscraper/AbstractSite.py", line 384, in _download
html_tree = self._make_html_tree(text)
File "/home/travis/build/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_appellate/ca11_u.py", line 26, in _make_html_tree
e = html5parser.document_fromstring(text)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/lxml/html/html5parser.py", line 64, in document_fromstring
return parser.parse(html, useChardet=guess_charset).getroot()
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/html5lib/html5parser.py", line 235, in parse
self._parse(stream, False, None, *args, **kwargs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/html5lib/html5parser.py", line 85, in _parse
self.tokenizer = _tokenizer.HTMLTokenizer(stream, parser=self, **kwargs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/html5lib/_tokenizer.py", line 36, in __init__
self.stream = HTMLInputStream(stream, **kwargs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/html5lib/_inputstream.py", line 149, in HTMLInputStream
return HTMLUnicodeInputStream(source, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'useChardet'
</code></pre>
<p>The code I'm using is very simple:</p>
<pre><code>from lxml.html import html5parser
html5parser.document_fromstring(u'<html></html')
</code></pre>
<p>Any ideas?</p>
| 4 | 2016-07-25T20:41:56Z | 38,577,081 | <p>Turns out that if you feed a unicode object to the <code>document_fromstring</code> method, it barfs. It didn't use to; this only started happening when I updated my dependencies.</p>
<p>Anyway, the fix is easy:</p>
<pre><code>html5parser.document_fromstring(u'<html></html'.encode('utf-8'))
</code></pre>
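<p>If the input's type varies at runtime, a tiny defensive wrapper (my own sketch, not part of html5lib or lxml) keeps the encoding decision in one place:</p>

```python
def to_bytes(markup, encoding="utf-8"):
    """Return markup as bytes: encode text input, pass bytes through untouched."""
    if isinstance(markup, bytes):
        return markup
    return markup.encode(encoding)

# html5parser.document_fromstring(to_bytes(markup))
```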
| 4 | 2016-07-25T20:41:56Z | [
"python",
"html5lib"
] |
Convert Column Name from int to string in pandas | 38,577,126 | <p>I have a pandas dataframe with mixed column names:</p>
<p>1,2,3,4,5, 'Class'</p>
<p>When I save this dataframe to an HDF5 file, it warns that performance will be affected due to mixed types. How do I convert the integer column names to strings in pandas?</p>
| 2 | 2016-07-25T20:45:02Z | 38,577,252 | <p>You can simply use <code>df.columns = df.columns.astype(str)</code>:</p>
<pre><code>In [26]: df = pd.DataFrame(np.random.random((3,6)), columns=[1,2,3,4,5,'Class'])
In [27]: df
Out[27]:
1 2 3 4 5 Class
0 0.773423 0.865091 0.614956 0.219458 0.837748 0.862177
1 0.544805 0.535341 0.323215 0.929041 0.042705 0.759294
2 0.215638 0.251063 0.648350 0.353999 0.986773 0.483313
In [28]: df.columns.map(type)
Out[28]:
array([<class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>,
<class 'int'>, <class 'str'>], dtype=object)
In [29]: df.to_hdf("out.h5", "d1")
C:\Anaconda3\lib\site-packages\pandas\io\pytables.py:260: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed-integer,key->axis0] [items->None]
f(store)
C:\Anaconda3\lib\site-packages\pandas\io\pytables.py:260: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed-integer,key->block0_items] [items->None]
f(store)
In [30]: df.columns = df.columns.astype(str)
In [31]: df.columns.map(type)
Out[31]:
array([<class 'str'>, <class 'str'>, <class 'str'>, <class 'str'>,
<class 'str'>, <class 'str'>], dtype=object)
In [32]: df.to_hdf("out.h5", "d1")
In [33]:
</code></pre>
| 3 | 2016-07-25T20:53:53Z | [
"python",
"pandas"
] |
Converting SVG lines to arcs | 38,577,196 | <p>I'm using Python's svgwrite to draw an SVG diagram. I have a bunch of horizontal lines that I'm currently drawing with Paths. It's pretty simple, point A to point B. I'd like to convert these lines into arcs that bend slightly upward (the bulge height will be some distance, say 10). I can't figure out how to convert this though.</p>
<p>In other words, if I have a straight line and I want to convert it into an arc going from (100,100)->(200,100) with a slight bend up, what would the corresponding arc command be? Or a bezier curve if that's easier?</p>
| 0 | 2016-07-25T20:49:54Z | 38,580,065 | <p>I figured it out. I was confusing A and a in my tests. It's simply:</p>
<pre><code>M100,100 A100,50 0 0 0 200,100
</code></pre>
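<p>If you generate many of these, the path string is easy to build programmatically. The helper below is a sketch of my own (not part of svgwrite); the arc flags are copied from the command above:</p>

```python
def shallow_arc(x1, y, x2, ry=50):
    """SVG path data for a shallow upward arc from (x1, y) to (x2, y):
    an x-radius equal to the span plus a small y-radius keeps the bulge gentle."""
    rx = x2 - x1
    return "M{},{} A{},{} 0 0 0 {},{}".format(x1, y, rx, ry, x2, y)

# shallow_arc(100, 100, 200) reproduces the path above.
```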
| 0 | 2016-07-26T02:13:48Z | [
"python",
"svg",
"svgwrite"
] |
Django REST Framework - View with serialization without model | 38,577,222 | <p>I just recently started developing for Django and am building an API using Django REST Framework and class based views. I am looking for a way to <strong>combine models, sort them based on time and then return a subset of the fields to an API with the table name appended.</strong></p>
<p>Currently I have the following:</p>
<p>views.py</p>
<pre><code>class RunLog(APIView):
    """
    List log for a specific run sorted in reverse chronological order
    """
    def get(self, request, run_id, format=None):
        # Combine and sort based on time (decreasing)
        result_list = sorted(chain(Output.objects.filter(run=run_id),
                                   Downtime.objects.filter(run=run_id)),
                             key=attrgetter('start_time'), reverse=True)
        # Replace this with serializer??
        response = Response(serializers.serialize('json', result_list), status=status.HTTP_200_OK)
        return response
</code></pre>
<p>models.py</p>
<pre><code>class Output(models.Model):
    start_time = models.DateTimeField(default=datetime.now)
    value = models.FloatField()
    run = models.ForeignKey(Run, blank=True, null=True)

    def __unicode__(self):
        return str(self.id)

class Downtime(models.Model):
    start_time = models.DateTimeField(default=datetime.now)
    end_time = models.DateTimeField(null=True, blank=True)
    reason = models.CharField(max_length=500)
    run = models.ForeignKey(Run, blank=True, null=True)
</code></pre>
<p>I get the following JSON:</p>
<pre><code>"[{\"model\": \"app.downtime\", \"pk\": 91, \"fields\": {\"start_time\": \"2016-07-20T14:46:21Z\", \"end_time\": null, \"reason\": \"reason1\", \"run\": 71}}, {\"model\": \"app.downtime\", \"pk\": 101, \"fields\": {\"start_time\": \"2016-07-20T14:46:21Z\", \"end_time\": null, \"reason\": \"reason2\", \"run\": 71}}]"
</code></pre>
<p>I would like to serialize this data in the following JSON format:</p>
<pre><code> [
{
"id": 231,
"type": "speed",
"description": "Some description",
"time": "2016-07-21T21:26:26Z"
}
]
**Where type is the database table and description is concatenated columns from a model.
</code></pre>
<p>I have looked at the docs and <a href="http://stackoverflow.com/questions/13603027/django-rest-framework-non-model-serializer">this similar question</a> without any luck. </p>
| 0 | 2016-07-25T20:51:45Z | 38,594,008 | <p>As IanAuld suggested in the comments - ModelObj._meta.db_table got the name of the table. I then created a sorted list of dictionaries in views.py:</p>
<pre><code>speedList = Speed.objects.filter(run=run_id)
type = Speed._meta.db_table.split('_', 1)[1]
type = type[0].upper() + type[1:]
for speed in speedList:
description = "Speed change to %.2f (units)" % speed.value
logList.append({'id':speed.id, 'type':type, 'description':description, 'time':speed.start_time})
# Sort list by decreasing time
resultList= sorted(logList, key=itemgetter('time'), reverse=True)
serializer = LogSerializer(resultist, many=True)
return Response(serializer.data)
</code></pre>
<p>serializers.py:</p>
<pre><code>class LogSerializer(serializers.Serializer):
    id = serializers.IntegerField()
    type = serializers.CharField(max_length=100)
    description = serializers.CharField(max_length=500)
    time = serializers.DateTimeField()
</code></pre>
| 0 | 2016-07-26T15:25:27Z | [
"python",
"json",
"django",
"django-rest-framework"
] |
formatting data from a new source in pyalgotrade | 38,577,241 | <p>I am trying to adapt a <a href="http://gbeced.github.io/pyalgotrade/docs/v0.17/html/tutorial.html" rel="nofollow">pyalgotrade</a> feed to use data streamed from another source. I am getting an error inside of the <code>run_strategy</code> method when I try to point the feed to data obtained using the function <code>getdata()</code>. </p>
<p>NOTE that <code>getdata</code> returns data of the format: <code>Date Close</code> and pyalgotrade apparently looks for <code>Date Open High Low Close</code>. </p>
<p>How do I correctly format my data for input into a feed?</p>
<p>Error: </p>
<pre><code>barFeed.getNewValuesEvent().subscribe(self.onBars)
AttributeError: 'list' object has no attribute 'getNewValuesEvent'
</code></pre>
<p>Code </p>
<pre><code>#http://gbeced.github.io/pyalgotrade/docs/v0.17/html/tutorial.html
#run this first in cmd prompt to download data:
#python -c "from pyalgotrade.tools import yahoofinance; yahoofinance.download_daily_bars('orcl', 2000, 'orcl-2000.csv')"
import httplib
import urllib
import json
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.technical import ma
class MyStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument, smaPeriod):
        strategy.BacktestingStrategy.__init__(self, feed, 1000)
        self.__position = None
        self.__instrument = instrument
        # We'll use adjusted close values instead of regular close values.
        self.setUseAdjustedValues(True)
        self.__sma = ma.SMA(feed[instrument].getPriceDataSeries(), smaPeriod)

    def onEnterOk(self, position):
        execInfo = position.getEntryOrder().getExecutionInfo()
        #self.info("BUY at $%.2f" % (execInfo.getPrice()))

    def onEnterCanceled(self, position):
        self.__position = None

    def onExitOk(self, position):
        execInfo = position.getExitOrder().getExecutionInfo()
        #self.info("SELL at $%.2f" % (execInfo.getPrice()))
        self.__position = None

    def onExitCanceled(self, position):
        # If the exit was canceled, re-submit it.
        self.__position.exitMarket()

    def onBars(self, bars):
        # Wait for enough bars to be available to calculate a SMA.
        if self.__sma[-1] is None:
            return
        bar = bars[self.__instrument]
        # If a position was not opened, check if we should enter a long position.
        if self.__position is None:
            if bar.getPrice() > self.__sma[-1]:
                # Enter a buy market order for 10 shares. The order is good till canceled.
                self.__position = self.enterLong(self.__instrument, 10, True)
        # Check if we have to exit the position.
        elif bar.getPrice() < self.__sma[-1] and not self.__position.exitActive():
            self.__position.exitMarket()

def getdata(period,pair,granularity):
    conn = httplib.HTTPSConnection("api-fxpractice.oanda.com")
    url = ''.join(["/v1/candles?count=", str(period + 1), "&instrument=", pair, "&granularity=", str(granularity), "&candleFormat=bidask"])  #defines URL as what??
    print url
    conn.request("GET", url)
    response = conn.getresponse().read()
    candles = json.loads(response)['candles']
    print candles
    return(candles)

def run_strategy(smaPeriod):
    # Load the yahoo feed from the CSV file
    #feed = yahoofeed.Feed()
    #feed.addBarsFromCSV("orcl", "orcl-2000.csv")
    ###########attempting to add data feed from another source
    feed=getdata(50,"EUR_USD","H1")
    #________________
    # Evaluate the strategy with the feed.
    myStrategy = MyStrategy(feed, "orcl", smaPeriod)
    myStrategy.run()
    print "SMA: {} Final portfolio value: {}".format(smaPeriod, myStrategy.getBroker().getEquity())

run_strategy(15)
#for i in range(10, 30):
#    run_strategy(i)
</code></pre>
| 0 | 2016-07-25T20:53:06Z | 39,030,203 | <p>You're trying to use a list as a BarFeed and that won't work. Use this as a reference to implement a BarFeed class for your datasource:
<a href="https://github.com/gbeced/pyalgotrade/blob/master/pyalgotrade/barfeed/yahoofeed.py" rel="nofollow">https://github.com/gbeced/pyalgotrade/blob/master/pyalgotrade/barfeed/yahoofeed.py</a></p>
| 0 | 2016-08-19T02:24:16Z | [
"python",
"pyalgotrade"
] |
Optimizing by translation to map one x,y set of points onto another | 38,577,286 | <p>I have a list of x,y ideal points, and a second list of x,y measured points. The latter has some offset and some noise.</p>
<p>I am trying to "fit" the latter to the former. So, extract the x,y offset of the latter relative to the former. </p>
<p>I'm following some examples of <code>scipy.optimize.leastsq</code>, but having trouble getting it working. Here is my code:</p>
<pre><code>import random
import numpy as np
from scipy import optimize
# Generate fake data. Goal: Get back dx=0.1, dy=0.2 at the end of this exercise
dx = 0.1
dy = 0.2
# "Actual" (ideal) data.
xa = np.array([0,0,0,1,1,1])
ya = np.array([0,1,2,0,1,2])
# "Measured" (non-ideal) data. Add the offset and some randomness.
xm = map(lambda x: x + dx + random.uniform(0,0.01), xa)
ym = map(lambda y: y + dy + random.uniform(0,0.01), ya)
# Plot each
plt.figure()
plt.plot(xa, ya, 'b.', xm, ym, 'r.')
# The error function.
#
# Args:
# translations: A list of xy tuples, each xy tuple holding the xy offset
# between 'coords' and the ideal positions.
# coords: A list of xy tuples, each xy tuple holding the measured (non-ideal)
# coordinates.
def errfunc(translations, coords):
sum = 0
for t, xy in zip(translations, coords):
dx = t[0] + xy[0]
dy = t[1] + xy[1]
sum += np.sqrt(dx**2 + dy**2)
return sum
translations, coords = [], []
for xxa, yya, xxm, yym in zip(xa, ya, xm, ym):
t = (xxm-xxa, yym-yya)
c = (xxm, yym)
translations.append(t)
coords.append(c)
translation_guess = [0.05, 0.1]
out = optimize.leastsq(errfunc, translation_guess, args=(translations, coords), full_output=1)
print out
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>errfunc() takes exactly 2 arguments (3 given)"</p>
</blockquote>
<p>I'm not sure why it says 3 arguments as I only gave it two. Can anyone help?</p>
<p>====</p>
<p>ANSWER: </p>
<p>I was thinking about this wrong. All I have to do is to take the average of the dx and dy's -- that gives the correct result.</p>
<pre><code>n = xa.shape[0]
dx = -np.sum(xa - xm) / n
dy = -np.sum(ya - ym) / n
print dx, dy
</code></pre>
| 0 | 2016-07-25T20:56:21Z | 38,577,717 | <p>The scipy.optimize.leastsq assumes that the function you are using already has one input, x0, the initial guess. Any other <strong>additional</strong> inputs are then listed in args.</p>
<p>So you are sending three arguments: translation_guess, translations, and coords.</p>
<p>Note that <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html" rel="nofollow">here</a> it specifies that args are "<strong>extra</strong> arguments."</p>
<hr>
<p>Okay, I think I understand now. You have the actual locations and the measured locations and you want to figure out the constant offset, but there is noise on each pair. Correct me if I'm wrong:</p>
<p>xy = tuple with coordinates of measured point</p>
<p>t = tuple with measured offset (constant + noise)</p>
<p>The actual coordinates of a point are (xy - t) then?</p>
<p>If so, then we think it should be measured at (xy - t + guess).</p>
<p>If so, then our error is (xy - t + guess - xy) = (guess - t)</p>
<p>Where it is measured doesn't even matter! We just want to find the guess that is closest to all of the measured translations:</p>
<pre><code>def errfunc(guess, translations):
    errx = 0
    erry = 0
    for t in translations:
        errx += guess[0] - t[0]
        erry += guess[1] - t[1]
    return errx, erry
</code></pre>
<p>What do you think? Does that make sense or did I miss something?</p>
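<p>Since the per-point error reduces to <code>guess - t</code>, the least-squares minimizer has a closed form: the mean of the measured translations, which matches the asker's own follow-up in the question. A pure-Python sketch:</p>

```python
def fit_offset(xa, ya, xm, ym):
    """Best translation mapping (xa, ya) onto (xm, ym) in the least-squares
    sense: simply the mean of the per-point differences."""
    n = float(len(xa))
    dx = sum(m - a for m, a in zip(xm, xa)) / n
    dy = sum(m - a for m, a in zip(ym, ya)) / n
    return dx, dy
```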
| 0 | 2016-07-25T21:27:27Z | [
"python",
"numpy",
"scipy",
"coordinates",
"least-squares"
] |
plotly/python- how to embed newest plot on your website | 38,577,287 | <p>I want to include a plot from plotly on my website.</p>
<p>Now I have this: (as from their website)</p>
<pre><code> <iframe width=750 height=500 frameborder="0" seamless="seamless" scrolling="no" src='https://plot.ly/~zfrancica/66/machine-worn-percenatage/'> </iframe>
</code></pre>
<p>But this only embeds one specific plot.
I call the backend to generate a new plot every time I refresh the webpage, and the data may change.</p>
<p>How should I get the newest plot in my account?</p>
| 0 | 2016-07-25T20:56:31Z | 38,577,688 | <p>If I understand your question correctly, you probably need a JavaScript timer that periodically checks the backend for a new plot.</p>
| 0 | 2016-07-25T21:24:51Z | [
"python",
"plot",
"plotly"
] |
Django - reverse_lazy in DeleteView yields NoReverseMatch | 38,577,308 | <p>New to Django and html; I want to add the possibility to delete objects from the database.</p>
<p>When I get to the delete confirmation template and click "Confirm", the object gets deleted but I get this error: </p>
<blockquote>
<p>"Reverse for 'assets' with arguments '()' and keyword arguments '{}'
not found. 0 pattern(s) tried: []"</p>
</blockquote>
<p>My DeleteView includes <code>success_url = reverse_lazy("assets")</code>. I get no error if I change that to <code>success_url = "/appname/assets/"</code>, the user gets redirected to the assets list as wanted, but I'd rather not use a hard-coded url.</p>
<p>Relevant code:</p>
<p>Models:</p>
<pre><code>class Asset(models.Model):
    filename = models.CharField(max_length=250, default="")
    file_location = models.CharField(max_length=1000, default="")
    file_size = models.PositiveIntegerField(default=0, blank=True, null=True)
    file_md5 = models.CharField(max_length=32, default="", blank=True)
    project = models.ForeignKey(Project, blank=True, null=True)
    provider = models.ForeignKey(Provider)

    def get_absolute_url(self):
        return reverse("appname:asset_details", kwargs={"pk": self.pk})

    def __str__(self):
        return self.filename
</code></pre>
<p>Views:</p>
<pre><code>class AssetsView(generic.ListView):
template_name = "assets.html"
context_object_name = "assets_list"
def get_queryset(self):
return Asset.objects.all()
class AssetDetailsView(generic.DetailView):
model = Asset
template_name = "asset_details.html"
class AssetDelete(DeleteView):
model = Asset
success_url = reverse_lazy("assets")
</code></pre>
<p>Urls:</p>
<pre><code># /tams/assets/
url(r'^assets/$', views.AssetsView.as_view(), name="assets"),
# /tams/asset/1/
url(r'^asset/(?P<pk>[0-9]+)/$', views.AssetDetailsView.as_view(), name="asset_details"),
# /tams/asset/1/delete/
url(r'^asset/(?P<pk>[0-9]+)/delete/$', views.AssetDelete.as_view(), name="asset_delete"),
</code></pre>
<p>Asset Delete Template:</p>
<pre><code>{% block body %}
<form action="" method="post">{% csrf_token %}
<p>Are you sure you want to delete "{{ asset.filename }}"?</p>
<input class="btn btn-link"
type="button" value="Cancel"
onclick="window.history.go(-1);"/>
<input class="btn btn-danger" type="submit" value="Confirm"/>
</form>
{% endblock %}
</code></pre>
<p>I'm using Django 1.9.8 with Python 3.5</p>
| 1 | 2016-07-25T20:58:00Z | 38,577,383 | <p>You were just specifying the url name <code>assets</code>, but I think it might be missing <code>appname</code> as a prefix (I saw you have other urls that contain the app name), so maybe try:</p>
<pre><code>success_url = reverse_lazy("appname:assets")
</code></pre>
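<p>For the namespaced name to resolve, the project-level URLconf also has to attach that namespace when it includes the app's urls. A minimal sketch — the module path <code>tams.urls</code> and the namespace <code>appname</code> are assumptions taken from the question's own snippets, not verified project settings:</p>

```python
# Project-level urls.py (sketch; adjust module path and namespace to your project)
from django.conf.urls import include, url

urlpatterns = [
    url(r'^tams/', include('tams.urls', namespace='appname')),
]
```

<p>With the include namespaced like this, <code>reverse_lazy("appname:assets")</code> in the view and <code>{% url 'appname:asset_delete' asset.pk %}</code> in templates resolve against the same patterns.</p>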
| 3 | 2016-07-25T21:02:53Z | [
"python",
"django",
"python-3.x"
] |
Qt (QOpenGLShaderProgram) - setting a uniform boolean value? | 38,577,319 | <p>How could I pass a boolean value to my fragment shader, using QOpenGLShaderProgram? (Python)</p>
<p>If my program is called <code>_program</code>, could I just use something like this:</p>
<pre><code>self._program.setUniformValue('myVar', 1)
</code></pre>
<p>In order to pass a value of <code>true</code>, and then use it in my fragment shader using:</p>
<pre><code>uniform bool myVar;
</code></pre>
<p>Would that be okay? Or should I do it another way?
Thanks!</p>
| 0 | 2016-07-25T20:58:39Z | 38,578,354 | <p>It is ok. <a href="https://www.opengl.org/sdk/docs/man/html/glUniform.xhtml" rel="nofollow"><code>glUniform</code> documentation</a> says:</p>
<blockquote>
<p>Either the i, ui or f variants may be used to provide values for uniform variables of type bool, bvec2, bvec3, bvec4, or arrays of these. The uniform variable will be set to false if the input value is 0 or 0.0f, and it will be set to true otherwise.</p>
</blockquote>
| 1 | 2016-07-25T22:26:52Z | [
"python",
"qt",
"opengl"
] |
Apply fuzzy matching across a dataframe column and save results in a new column | 38,577,332 | <p>I have two data frames with each having a different number of rows. Below is a couple rows from each data set</p>
<pre><code>df1 =
Company City State ZIP
FREDDIE LEES AMERICAN GOURMET SAUCE St. Louis MO 63101
CITYARCHRIVER 2015 FOUNDATION St. Louis MO 63102
GLAXOSMITHKLINE CONSUMER HEALTHCARE St. Louis MO 63102
LACKEY SHEET METAL St. Louis MO 63102
</code></pre>
<p>and</p>
<pre><code>df2 =
FDA Company FDA City FDA State FDA ZIP
LACKEY SHEET METAL St. Louis MO 63102
PRIMUS STERILIZER COMPANY LLC Great Bend KS 67530
HELGET GAS PRODUCTS INC Omaha NE 68127
ORTHOQUEST LLC La Vista NE 68128
</code></pre>
<p>I joined them side by side using <code>combined_data = pandas.concat([df1, df2], axis = 1)</code>. My next goal is to compare each string under <code>df1['Company']</code> to each string under in <code>df2['FDA Company']</code> using several different matching commands from the <code>fuzzy wuzzy</code> module and return the value of the best match and its name. I want to store that in a new column. For example if I did the <code>fuzz.ratio</code> and <code>fuzz.token_sort_ratio</code> on <code>LACKY SHEET METAL</code> in <code>df1['Company']</code> to <code>df2['FDA Company']</code> it would return that the best match was <code>LACKY SHEET METAL</code> with a score of <code>100</code> and this would then be saved under a new column in <code>combined data</code>. It results would look like</p>
<pre><code>combined_data =
Company City State ZIP FDA Company FDA City FDA State FDA ZIP fuzzy.token_sort_ratio match fuzzy.ratio match
FREDDIE LEES AMERICAN GOURMET SAUCE St. Louis MO 63101 LACKEY SHEET METAL St. Louis MO 63102 LACKEY SHEET METAL 100 LACKEY SHEET METAL 100
CITYARCHRIVER 2015 FOUNDATION St. Louis MO 63102 PRIMUS STERILIZER COMPANY LLC Great Bend KS 67530
GLAXOSMITHKLINE CONSUMER HEALTHCARE St. Louis MO 63102 HELGET GAS PRODUCTS INC Omaha NE 68127
LACKEY SHEET METAL St. Louis MO 63102 ORTHOQUEST LLC La Vista NE 68128
</code></pre>
<p>I tried doing </p>
<pre><code>combined_data['name_ratio'] = combined_data.apply(lambda x: fuzz.ratio(x['Company'], x['FDA Company']), axis = 1)
</code></pre>
<p>But got an error because the lengths of the columns are different.</p>
<p>I am stumped. How can I accomplish this?</p>
| 2 | 2016-07-25T20:59:28Z | 38,578,687 | <p>I couldn't tell what you were doing. This is how I would do it.</p>
<pre><code>from fuzzywuzzy import fuzz
from fuzzywuzzy import process
</code></pre>
<p>Create a series of tuples to compare:</p>
<pre><code>compare = pd.MultiIndex.from_product([df1['Company'],
df2['FDA Company']]).to_series()
</code></pre>
<p>Create a special function to calculate fuzzy metrics and return a series.</p>
<pre><code>def metrics(tup):
return pd.Series([fuzz.ratio(*tup),
fuzz.token_sort_ratio(*tup)],
['ratio', 'token'])
</code></pre>
<p>Apply <code>metrics</code> to the <code>compare</code> series</p>
<pre><code>compare.apply(metrics)
</code></pre>
<p><a href="http://i.stack.imgur.com/6GzKB.png" rel="nofollow"><img src="http://i.stack.imgur.com/6GzKB.png" alt="enter image description here"></a></p>
<p>There are bunch of ways to do this next part:</p>
<p>Get closest matches to each row of <code>df1</code></p>
<pre><code>compare.apply(metrics).unstack().idxmax().unstack(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/EJ0kT.png" rel="nofollow"><img src="http://i.stack.imgur.com/EJ0kT.png" alt="enter image description here"></a></p>
<p>Get closest matches to each row of <code>df2</code></p>
<pre><code>compare.apply(metrics).unstack(0).idxmax().unstack(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/n5AVZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/n5AVZ.png" alt="enter image description here"></a></p>
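<p>If you want to sanity-check the matching idea without the fuzzywuzzy dependency, the standard library's <code>difflib.SequenceMatcher</code> gives an analogous 0–1 similarity ratio (<code>fuzz.ratio</code> is built on the same underlying idea, though its 0–100 scores will differ). A hedged stand-in sketch:</p>

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # 0.0-1.0 similarity, analogous in spirit to fuzz.ratio's 0-100 score
    return SequenceMatcher(None, a, b).ratio()

same = ratio("LACKEY SHEET METAL", "LACKEY SHEET METAL")
diff = ratio("LACKEY SHEET METAL", "HELGET GAS PRODUCTS INC")

print(same)         # 1.0 for an exact match
print(diff < same)  # True
```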
| 2 | 2016-07-25T23:03:38Z | [
"python",
"pandas",
"fuzzy-search",
"fuzzywuzzy"
] |
JSONDecodeError: Extra data: line 1 column 228 (char 227) | 38,577,399 | <p>I am using IPython to do some data analysis, but I can't load the JSON file. Please help me load this JSON file in IPython.
I also want to skip some words in the first line to make it a clean format; I want each record to look like this:</p>
<pre><code>{"station_id":"72","num_bikes_available":18,"num_bikes_disabled":0,"num_docks_available":20,"num_docks_disabled":1,"is_installed":1,"is_renting":1,"is_returning":1,"last_reported":"1467164372","eightd_has_available_keys":false},
</code></pre>
<p>Here is my code:</p>
<pre><code>In [9]: path = 'stationstatus.json'
In [10]: records = [json.loads(line) for line in open(path)]
</code></pre>
<p>Here is the error:</p>
<pre><code>JSONDecodeError Traceback (most recent call last)
<ipython-input-10-b1e0b494454a> in <module>()
----> 1 records = [json.loads(line) for line in open(path)]
<ipython-input-10-b1e0b494454a> in <listcomp>(.0)
----> 1 records = [json.loads(line) for line in open(path)]
//anaconda/lib/python3.5/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
317 parse_int is None and parse_float is None and
318 parse_constant is None and object_pairs_hook is None and not kw):
--> 319 return _default_decoder.decode(s)
320 if cls is None:
321 cls = JSONDecoder
//anaconda/lib/python3.5/json/decoder.py in decode(self, s, _w)
340 end = _w(s, end).end()
341 if end != len(s):
--> 342 raise JSONDecodeError("Extra data", s, end)
343 return obj
344
</code></pre>
<p>Here is part of my JSON file:</p>
<pre><code>{
"last_updated": 1467164806,
"ttl": 10,
"data": {
"stations": [{
"station_id": "72",
"num_bikes_available": 18,
"num_bikes_disabled": 0,
"num_docks_available": 20,
"num_docks_disabled": 1,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467164372",
"eightd_has_available_keys": false
}, {
"station_id": "79",
"num_bikes_available": 1,
"num_bikes_disabled": 2,
"num_docks_available": 30,
"num_docks_disabled": 0,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467163375",
"eightd_has_available_keys": false
}, {
"station_id": "82",
"num_bikes_available": 3,
"num_bikes_disabled": 3,
"num_docks_available": 21,
"num_docks_disabled": 0,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467161631",
"eightd_has_available_keys": false
}, {
"station_id": "83",
"num_bikes_available": 36,
"num_bikes_disabled": 0,
"num_docks_available": 26,
"num_docks_disabled": 0,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467163453",
"eightd_has_available_keys": false
}, {
"station_id": "116",
"num_bikes_available": 5,
"num_bikes_disabled": 3,
"num_docks_available": 31,
"num_docks_disabled": 0,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467164693",
"eightd_has_available_keys": false
}, {
"station_id": "119",
"num_bikes_available": 15,
"num_bikes_disabled": 0,
"num_docks_available": 4,
"num_docks_disabled": 0,
"is_installed": 1,
"is_renting": 1,
"is_returning": 1,
"last_reported": "1467160413",
"eightd_has_available_keys": false
}]
}
}
</code></pre>
| 0 | 2016-07-25T21:04:36Z | 38,577,508 | <p>Here is a suggestion for loading the file: </p>
<pre><code>with open('Path/to/file', 'r') as content_file:
content = content_file.read()
records = json.loads(content)
</code></pre>
<p>The root object in your json will be in your <code>records</code> variable</p>
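<p>For the document in the question, which is a single JSON object rather than one object per line, one <code>json.loads</code> call is enough, and the per-station records the asker wants live under <code>data["stations"]</code>. A small self-contained sketch with a trimmed inline copy of that document:</p>

```python
import json

# Trimmed version of the question's file, inlined for illustration
raw = """
{
  "last_updated": 1467164806,
  "ttl": 10,
  "data": {
    "stations": [
      {"station_id": "72", "num_bikes_available": 18},
      {"station_id": "79", "num_bikes_available": 1}
    ]
  }
}
"""

records = json.loads(raw)

# The clean per-station records are nested under data -> stations
stations = records["data"]["stations"]
print(len(stations))              # 2
print(stations[0]["station_id"])  # 72
```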
| 0 | 2016-07-25T21:12:20Z | [
"python",
"json",
"load"
] |
How can I perform a rebase with pygit2? | 38,577,476 | <p><a href="http://stackoverflow.com/q/21661211">This question</a> touches on how to perform a merge with <code>pygit2</code>, but, to the best of my understanding, that will result in a new commit. Is there a way to perform a rebase, which will not result in a new commit and will simply fast-forward the branch reference to correspond to the latest from a given remote?</p>
| 0 | 2016-07-25T21:10:23Z | 38,577,631 | <p>You can fast-forward with <a href="http://www.pygit2.org/references.html#pygit2.Reference.set_target" rel="nofollow">Reference.set_target()</a>.</p>
<p>Example (fast-forwarding <code>master</code> to <code>origin/master</code>, assuming that the script starts from checked out <code>master</code> branch in clean state):</p>
<pre><code>repo.remotes['origin'].fetch()
origin_master = repo.lookup_branch('origin/master', pygit2.GIT_BRANCH_REMOTE)
master = repo.lookup_branch('master')
master.set_target(origin_master.target)
# Fast-forwarding with set_target() leaves the index and the working tree
# in their old state. That's why we need to checkout() and reset()
repo.checkout('refs/heads/master')
repo.reset(master.target, pygit2.GIT_RESET_HARD)
</code></pre>
| 1 | 2016-07-25T21:21:18Z | [
"python",
"git",
"pygit2"
] |
Python for loop only goes through once | 38,577,629 | <p>I'm writing a script to search through multiple text files with mac addresses in them to find what port they are associated with. I need to do this for several hundred mac addresses. The function runs the first time through fine. After that though the new mac address doesn't get passed to the function it remains as the same one it already used and the functions for loop only seems to run once.</p>
<pre><code>import re
import csv
f = open('all_switches.csv','U')
source_file = csv.reader(f)
m = open('macaddress.csv','wb')
macaddress = csv.writer(m)
s = open('test.txt','r')
source_mac = s.read().splitlines()
count = 0
countMac = 0
countFor = 0
def find_mac(sneaky):
global count
global countFor
count = count +1
for switches in source_file:
countFor = countFor + 1
# print sneaky only goes through the loop once
switch = switches[4]
source_switch = open(switch + '.txt', 'r')
switch_read = source_switch.readlines()
for mac in switch_read:
# print mac does search through all the switches
found_mac = re.search(sneaky, mac)
if found_mac is not None:
interface = re.search("(Gi|Eth|Te)(\S+)", mac)
if interface is not None:
port = interface.group()
macaddress.writerow([sneaky, switch, port])
print sneaky + ' ' + switch + ' ' + port
source_switch.close()
for macs in source_mac:
match = re.search(r'[a-fA-F0-9]{4}[.][a-fA-F0-9]{4}[.][a-fA-F0-9]{4}', macs)
if match is not None:
sneaky = match.group()
find_mac(sneaky)
countMac = countMac + 1
print count
print countMac
print countFor
</code></pre>
<p>I've added the count countFor and countMac to see how many times the loops and functions run. Here is the output.</p>
<p>549f.3507.7674 the name of the switch Eth100/1/11
677
677
353</p>
<p>Any insight would be appreciated. </p>
| 0 | 2016-07-25T21:21:15Z | 38,577,764 | <p><code>source_file</code> is opened globally only once, so the first time you execute call <code>find_mac()</code>, the <code>for switches in source_file:</code> loop will exhaust the file. Since the file wasn't closed and reopened, the next time <code>find_mac()</code> is called the file pointer is at the end of the file and reads nothing.</p>
<p>Moving the following to the beginning of <code>find_mac</code> should fix it:</p>
<pre><code>f = open('all_switches.csv','U')
source_file = csv.reader(f)
</code></pre>
<p>Consider using <code>with</code> statements to ensure your files are closed as well.</p>
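<p>The exhausted-file behaviour is easy to reproduce on its own: once a file object has been read to the end, iterating it again yields nothing until it is reopened or rewound. A minimal sketch using <code>io.StringIO</code> as a stand-in for the open CSV file:</p>

```python
import io

f = io.StringIO("switch1\nswitch2\nswitch3\n")  # stands in for open('all_switches.csv')

first_pass = [line for line in f]   # reads all three rows
second_pass = [line for line in f]  # pointer is at EOF, so nothing is read

f.seek(0)                           # rewinding (or reopening) restores the rows
rewound = [line for line in f]

print(len(first_pass), len(second_pass), len(rewound))  # 3 0 3
```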
| 1 | 2016-07-25T21:30:43Z | [
"python",
"python-2.7"
] |
Saving variable size part of string to another variable | 38,577,657 | <p>I am messing around with strings on an imageboard, and I want to be able to grab the board name from the end of the URL. I was wondering if I could possibly use %s to save it to another variable, but I only know how to use that to assign. Here is an example of the end of the URL.</p>
<pre><code>url = '/a/thread/144681013/'
board = #Where I want to save the 'a'
</code></pre>
<p>I can't just say it's at position <code>url[1]</code>, because the board could be more than 1 letter, like:</p>
<pre><code>url = '/fdf/thread/144681013/'
board = #Where I want to save the 'fdf'
</code></pre>
<p>I can't find an example in documentation anywhere, is there any placeholder I could put in the board that would automatically extract the board part of the url?</p>
| 1 | 2016-07-25T21:22:59Z | 38,577,675 | <p>You can just <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow">split your string</a> at the <code>/</code> characters and grab the second element of the resulting list (the first one is empty because of the leading <code>/</code>)</p>
<pre><code>board = url.split('/')[1]
</code></pre>
<p>You could also use regular expressions if you wanted.</p>
<pre><code>import re
board = re.search('(?<=^\/).*?(?=\/)', url).group()
</code></pre>
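<p>Both approaches handle board names of any length, for example:</p>

```python
# The board name is always the second '/'-delimited piece of the URL path
url_short = '/a/thread/144681013/'
url_long = '/fdf/thread/144681013/'

board_short = url_short.split('/')[1]
board_long = url_long.split('/')[1]

print(board_short)  # a
print(board_long)   # fdf
```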
| 1 | 2016-07-25T21:24:01Z | [
"python",
"string",
"variables",
"url"
] |
Biopython bootstrapping phylogenetic trees with custom distance matrix | 38,577,681 | <p>I am trying to create a bootstrapped phylogenetic tree but instead of using raw multiple sequence alignment data and a standard scoring system, I want to use my own custom distance matrix that I have created. I have currently looked at <a href="http://biopython.org/wiki/Phylo" rel="nofollow">http://biopython.org/wiki/Phylo</a> and have been able to create a single tree using my own custom distance matrix using the following code:</p>
<pre><code>dm = TreeConstruction._DistanceMatrix(tfs,dmat)
treeConstructor = DistanceTreeConstructor(method = 'upgma')
upgmaTree = treeConstructor.upgma(dm)
Phylo.draw(upgmaTree)
</code></pre>
<p>where dmat is a lower triangle distance matrix, and tfs is a list of names that are used for columns/rows. When looking at the bootstrapping examples, it seems like all of the input needs to be raw sequence data and not a distance matrix like I used above, does anyone know a workaround for this problem?
Thanks! </p>
| 0 | 2016-07-25T21:24:21Z | 38,594,097 | <p><strong>Short answer:</strong> No, you cannot use a distance matrix to bootstrap a phylogeny.</p>
<p><strong>Long answer:</strong>
The first step in bootstrapping a phylogeny calls for creating a set of data pseudoreplicates. For DNA sequences, nucleotide positions are randomly drawn from the alignment (the whole column) with repetitions up to the total length of the alignment.</p>
<p>Let's assume a 10 bp long alignment with two sequences differing by two mutations. For simplicity sake, their distance is <em>d</em> = 0.2.</p>
<pre><code>AATTCCGGGG
AACTCCGGAG
</code></pre>
<p>Bootstrapping such a dataset would call for positions 3, 8, 5, 9, 10, 1, 6, 9, 6, 5 to represent the pseudoreplicate.</p>
<pre><code>set.seed(123)
sample(1:10, 10, replace = TRUE)
[1] 3 8 5 9 10 1 6 9 6 5
TGCGGACGCC
CGCAGACACC
</code></pre>
<p>We obtained a dataset with variables (columns) identical to the original alignment, but occurring at different frequencies. Note that <em>d</em> = 0.3 in the bootstrapped alignment.</p>
<p>Using this approach, we can bootstrap any variable or a dataset containing multiple variables. A distance matrix cannot be used in this way, because it represents already processed information. </p>
<p><strong>Solution:</strong></p>
<p>Repeat the process for calculating the custom distance matrix on your own data pseudoreplications.</p>
<pre><code># Your function to calculate a custom distance matrix
calc.dist <- function(dat) { ... }
nrep <- 100
reps <- lapply(1:nrep, FUN=function(i) calc.dist(dat[,sample(1:ncol(dat), ncol(dat), replace = TRUE)]))
</code></pre>
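<p>The column-resampling step itself needs nothing beyond the standard library, so the same idea can be sketched in Python before plugging in a real distance function. The Hamming-style proportion below is a toy stand-in for whatever custom metric produced the original matrix:</p>

```python
import random

# The two aligned 10 bp sequences from the example above
seq1 = "AATTCCGGGG"
seq2 = "AACTCCGGAG"

def distance(a, b):
    # Toy stand-in for a custom metric: proportion of differing columns
    return sum(x != y for x, y in zip(a, b)) / len(a)

def bootstrap_columns(a, b, rng):
    # Draw alignment columns with replacement, up to the original length
    idx = [rng.randrange(len(a)) for _ in range(len(a))]
    return "".join(a[i] for i in idx), "".join(b[i] for i in idx)

rng = random.Random(123)
print(distance(seq1, seq2))  # 0.2, i.e. 2 differences over 10 columns

# 100 pseudoreplicate distances; each one varies around the original 0.2
reps = [distance(*bootstrap_columns(seq1, seq2, rng)) for _ in range(100)]
print(all(0.0 <= r <= 1.0 for r in reps))  # True
```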
| 0 | 2016-07-26T15:29:46Z | [
"python",
"biopython",
"phylogeny"
] |
DBF to CSV Convertion | 38,577,708 | <p>I have been trying all day to convert dbf files to CSV and cannot seem to get it. I have looked at various options and cannot seem to get one that will work. Here is one that I have been trying.</p>
<pre><code> import arcpy
import dbf
from arcpy import env
import os
def DBFtoCSV(path):
'''Convert every DBF table into CSV table.
'''
env.workspace = path
tablelist = arcpy.ListTables() # list tables in file
for table in tablelist: # iterate through every table
#make sure you are just working with .dbf tables
if table.endswith('.dbf'):
with dbf.Table(os.path.join(path, table)) as current_table:
print current_table
dbf.export(current_table)
print "\n Processing ",table[:-4]+".csv table complete."
if __name__ == '__main__':
path=r'path'
DBFtoCSV(path)
</code></pre>
<p>The error I am getting now is:</p>
<pre><code> Processing name.csv table complete.
Table: F:/name.dbf
Type: Visual Foxpro
Codepage: cp1252 (Windows ANSI)
Status: read-write
Last updated: 2014-02-24
Record count: 4887170
Field count: 23
Record length: 235
--Fields--
0) respondent I binary
1) report_yr I binary
2) report_prd I binary
3) sys_key I binary
4) tr_id C(24)
5) tr_contrac I binary null
6) tr_begin_d T binary null
7) tr_end_dat T binary null
8) tr_timezon C(2) null
9) tr_delv_ct C(4) null
10) tr_delv_sp C(48) null
11) tr_class_n C(4) null
12) tr_term_na C(4) null
13) tr_inc_nam C(4) null
14) tr_inc_pea C(4) null
15) tr_prod_na C(49) null
16) tr_quantit B binary null
17) tr_price B binary
18) tr_units C(9) null
19) tr_tot_tra B binary null
20) tr_tot_tr2 B binary null
21) tr_other M
22) tr_revised T binary
array('c', '\x00\x00')
16
(2, 0)
(235, array('c', ' \x8f\x04\x00\x00\xd9\x07\x00\x00\x03\x00\x00\x00\x01\x00\x00\
x001Q09 \x04\x00\x00\x001u%\x00\xe5\x03\x00\x00\x8au%\x00\x18
X&\x05MPPNM PNM Switchyard F LT M FP CAPA
CITY \x00\x00\x00\x00\x80+\x18A\xba\xda\
x8a\xfdew\x0f@$/KW-MO \x00\x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc\xccR\xc47A\x
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'))
('0', 233, 2, 235, 0, 5, <function none at 0x110DF9B0>, <function none at 0x110D
F9B0>)
array('c', '\x00\x00')
Traceback (most recent call last):
File "dbf_convert_stack.py", line 20, in <module>
DBFtoCSV(path)
File "dbf_convert_stack.py", line 16, in DBFtoCSV
dbf.export(current_table)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 7859, in ex
port
data = record[fieldname]
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2541, in __
getitem__
return self.__getattr__(item)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2508, in __
getattr__
value = self._retrieve_field_value(index, name)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2693, in _r
etrieve_field_value
if ord(null_data[byte]) >> bit & 1:
IndexError: array index out of range
</code></pre>
| 0 | 2016-07-25T21:26:35Z | 38,578,126 | <p>Instead of <code>dbfpy</code>, use <a href="https://pypi.python.org/pypi/dbf" rel="nofollow">my dbf module</a>:</p>
<pre><code>import dbf # instead of dbfpy
def DBFtoCSV(path):
'''Convert every DBF table into CSV table. '''
env.workspace = path
tablelist = arcpy.ListTables() # list tables in file
for table in tablelist: # iterate through every table
#make sure you are just working with .dbf tables
if table.endswith('.dbf'):
with dbf.Table(table) as current_table:
dbf.export(current_table)
#keep track of processing
print "\n Processing ",table[:-4]+".csv table complete."
</code></pre>
| 0 | 2016-07-25T22:03:43Z | [
"python",
"csv",
"dbf",
"arcpy"
] |
DBF to CSV Convertion | 38,577,708 | <p>I have been trying all day to convert dbf files to CSV and cannot seem to get it. I have looked at various options and cannot seem to get one that will work. Here is one that I have been trying.</p>
<pre><code> import arcpy
import dbf
from arcpy import env
import os
def DBFtoCSV(path):
'''Convert every DBF table into CSV table.
'''
env.workspace = path
tablelist = arcpy.ListTables() # list tables in file
for table in tablelist: # iterate through every table
#make sure you are just working with .dbf tables
if table.endswith('.dbf'):
with dbf.Table(os.path.join(path, table)) as current_table:
print current_table
dbf.export(current_table)
print "\n Processing ",table[:-4]+".csv table complete."
if __name__ == '__main__':
path=r'path'
DBFtoCSV(path)
</code></pre>
<p>The error I am getting now is:</p>
<pre><code> Processing name.csv table complete.
Table: F:/name.dbf
Type: Visual Foxpro
Codepage: cp1252 (Windows ANSI)
Status: read-write
Last updated: 2014-02-24
Record count: 4887170
Field count: 23
Record length: 235
--Fields--
0) respondent I binary
1) report_yr I binary
2) report_prd I binary
3) sys_key I binary
4) tr_id C(24)
5) tr_contrac I binary null
6) tr_begin_d T binary null
7) tr_end_dat T binary null
8) tr_timezon C(2) null
9) tr_delv_ct C(4) null
10) tr_delv_sp C(48) null
11) tr_class_n C(4) null
12) tr_term_na C(4) null
13) tr_inc_nam C(4) null
14) tr_inc_pea C(4) null
15) tr_prod_na C(49) null
16) tr_quantit B binary null
17) tr_price B binary
18) tr_units C(9) null
19) tr_tot_tra B binary null
20) tr_tot_tr2 B binary null
21) tr_other M
22) tr_revised T binary
array('c', '\x00\x00')
16
(2, 0)
(235, array('c', ' \x8f\x04\x00\x00\xd9\x07\x00\x00\x03\x00\x00\x00\x01\x00\x00\
x001Q09 \x04\x00\x00\x001u%\x00\xe5\x03\x00\x00\x8au%\x00\x18
X&\x05MPPNM PNM Switchyard F LT M FP CAPA
CITY \x00\x00\x00\x00\x80+\x18A\xba\xda\
x8a\xfdew\x0f@$/KW-MO \x00\x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc\xccR\xc47A\x
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'))
('0', 233, 2, 235, 0, 5, <function none at 0x110DF9B0>, <function none at 0x110D
F9B0>)
array('c', '\x00\x00')
Traceback (most recent call last):
File "dbf_convert_stack.py", line 20, in <module>
DBFtoCSV(path)
File "dbf_convert_stack.py", line 16, in DBFtoCSV
dbf.export(current_table)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 7859, in ex
port
data = record[fieldname]
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2541, in __
getitem__
return self.__getattr__(item)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2508, in __
getattr__
value = self._retrieve_field_value(index, name)
File "C:\Python27\ArcGIS10.4\lib\site-packages\dbf\ver_2.py", line 2693, in _r
etrieve_field_value
if ord(null_data[byte]) >> bit & 1:
IndexError: array index out of range
</code></pre>
| 0 | 2016-07-25T21:26:35Z | 38,595,255 | <p>This can be fairly straightforward with SearchCursor. All you really need to do is get the field names, pass that into the cursor, then write the complete row to a csv with Python's csv module.</p>
<pre><code>import arcpy
import csv
dbf = table_name # Pass in the table you've identified
outputFile = '{}.csv'.format(dbf.split('.dbf')[0])
# Get the fields in the dbf to use for the cursor and csv header row.
fields = []
for field in arcpy.ListFields(dbf):
fields.append(field.name)
# Make the csv.
with open(outputFile, 'wb') as output:
dataWriter = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
# Write header row.
dataWriter.writerow(fields)
# Write each row of data to the csv.
with arcpy.da.SearchCursor(dbf, fields) as cursor:
for row in cursor:
dataWriter.writerow(row)
print('Finished creating {}'.format(outputFile))
</code></pre>
| 0 | 2016-07-26T16:27:57Z | [
"python",
"csv",
"dbf",
"arcpy"
] |
How to pass a new variable to python script using 'for' loop | 38,577,726 | <p>I am trying to write a python script that will traverse a list and then pass a new variable to another Python script. Here is my code:</p>
<h1>Script: ENCViewer.py</h1>
<pre><code> # import necessary modules
import os
# list of all ENCS
root = os.listdir('/home/cassandra/desktop/file_formats/ENC_ROOT')
root.sort()
root = root[2:]
for ENC in root:
# pass new instance of variable 'ENC' to ENCReader.py
import ENCReader.py
</code></pre>
<h1>Script: ENCReader.py</h1>
<pre><code>from __main__ import *
print ENC
.... # remaining script
</code></pre>
<p>Currently, when executing the first code, ENCViewer.py, it will only execute once then exit. How can I pass new instance variables of 'ENC' to ENCReader.py so that it executes throughout the entire 'for' loop seen in the first snippet of code?</p>
<p>Thanks.</p>
| 0 | 2016-07-25T21:28:14Z | 38,577,786 | <p>I don't know if what you are asking is possible, but I think that you misunderstood the idea of creating modules and importing code. The "standard" way of achieving the same result is the following:</p>
<p>ENCReader.py</p>
<pre><code>def printer(var):
print(var)
# your code..
</code></pre>
<p>ENCViewer.py</p>
<pre><code> import os
from ENCReader import printer
# list of all ENCS
root = os.listdir('/home/cassandra/desktop/file_formats/ENC_ROOT')
root.sort()
root = root[2:]
for ENC in root:
printer(ENC)
</code></pre>
| 2 | 2016-07-25T21:32:51Z | [
"python",
"variables",
"for-loop",
"main"
] |
Pandas - unflatten data frame with columns containing array | 38,577,737 | <p>I have a data frame which has been flattened on a specific property:</p>
<pre><code>id property_a properties_b
id_1 property_a_1 [property_b_11, property_b_12]
id_2 property_a_2 [property_b_21, property_b_22, property_b_23]
..................
</code></pre>
<p>I'd like to expand the column <code>properties_b</code> to go back to a data frame looking like this:</p>
<pre><code>id property_a property_b
id_1 property_a_1 property_b_11
id_1 property_a_1 property_b_12
id_2 property_a_2 property_b_21
id_2 property_a_2 property_b_22
id_2 property_a_2 property_b_23
..................
</code></pre>
<p>I suspect this is very simple with Pandas, but being new to Python, I struggle to find an elegant way to do so.</p>
| 2 | 2016-07-25T21:29:02Z | 38,577,988 | <p>This question was addressed <a href="http://stackoverflow.com/a/38432346/2336654">here</a> and <a href="http://stackoverflow.com/questions/32468402/how-to-explode-a-list-inside-a-dataframe-cell-into-separate-rows">here</a>. If you find these questions and answers useful, feel free to up vote them as well.</p>
<h3>Setup</h3>
<pre><code>df = pd.DataFrame([
['id_1', 'property_a_1', ['property_b_11', 'property_b_12']],
['id_2', 'property_a_2', ['property_b_21', 'property_b_22', 'property_b_23']],
], columns=['id', 'property_a', 'properties_b'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/nHYB5.png" rel="nofollow"><img src="http://i.stack.imgur.com/nHYB5.png" alt="enter image description here"></a></p>
<pre><code>rows = []
for i, row in df.iterrows():
for a in row.properties_b:
row.properties_b = a
        rows.append(row.copy())  # copy, or every appended entry points at the same Series
pd.DataFrame(rows, columns=df.columns)
</code></pre>
<p><a href="http://i.stack.imgur.com/QL9dC.png" rel="nofollow"><img src="http://i.stack.imgur.com/QL9dC.png" alt="enter image description here"></a></p>
<h3>Handy functions</h3>
<pre><code>def loc_expand(df, loc):
    rows = []
    for i, row in df.iterrows():
        for v in row.at[loc]:
            new = row.copy()   # copy per value, or every appended
            new.at[loc] = v    # entry points at the same Series
            rows.append(new)
    return pd.DataFrame(rows)

def iloc_expand(df, iloc):
    rows = []
    for i, row in df.iterrows():
        for v in row.iat[iloc]:
            new = row.copy()
            new.iat[iloc] = v
            rows.append(new)
    return pd.DataFrame(rows)
</code></pre>
<hr>
<p>These should both return the same result as above.</p>
<pre><code>loc_expand(df, 'properties_b')
iloc_expand(df, 2)
</code></pre>
| 3 | 2016-07-25T21:51:33Z | [
"python",
"pandas"
] |
Pandas - unflatten data frame with columns containing array | 38,577,737 | <p>I have a data frame which has been flattened on a specific property:</p>
<pre><code>id property_a properties_b
id_1 property_a_1 [property_b_11, property_b_12]
id_2 property_a_2 [property_b_21, property_b_22, property_b_23]
..................
</code></pre>
<p>I'd like to expand the column <code>properties_b</code> to go back to a data frame looking like this:</p>
<pre><code>id property_a property_b
id_1 property_a_1 property_b_11
id_1 property_a_1 property_b_12
id_2 property_a_2 property_b_21
id_2 property_a_2 property_b_22
id_2 property_a_2 property_b_23
..................
</code></pre>
<p>I suspect this is very simple with Pandas, but being new to Python, I struggle to find an elegant way to do so.</p>
| 2 | 2016-07-25T21:29:02Z | 38,578,177 | <p>Here is another approach using <code>to_records</code>, some tuple-mapping and <code>from_records</code>.</p>
<pre><code>import pandas as pd
import itertools
def expand_column(df, col_id):
records = map(lambda r: [r[1:col_id] + (l,) + r[col_id + 1:] for l in r[col_id]], map(tuple, df.to_records()))
return pd.DataFrame.from_records(itertools.chain.from_iterable(records), columns=df.columns)
df = pd.DataFrame([['a', [1,2,3], 'a'],['b', [4,5], 'b']], columns=['C1', 'L', 'C2'])
print(df)
print(expand_column(df, 2))
# C1 L C2
# 0 a [1, 2, 3] a
# 1 b [4, 5] b
#
# C1 L C2
# 0 a 1 a
# 1 a 2 a
# 2 a 3 a
# 3 b 4 b
# 4 b 5 b
</code></pre>
| 2 | 2016-07-25T22:08:32Z | [
"python",
"pandas"
] |
converting to a padas dataframe with numpy dtype of string | 38,577,841 | <p>This is a super basic question but I couldn't find a solution anywhere. I've looked at the docs and on Stack Overflow. I need to get a numpy array of strings which I will then save as a csv. I have a line like this:</p>
<pre><code>np.savetxt(path + 'section.csv', np.asarray(seclist, dtype=np.dtype('a2')), fmt='%s', delimiter=',')
</code></pre>
<p>and it returns an error:</p>
<blockquote>
<p>ValueError: cannot set an array element with a sequence.</p>
</blockquote>
<p>If I remove <code>dtype</code>, it works fine except it prints objects formatted as strings so each item has a bracket and <code>''</code> around it. When I try <code>dtype=string</code> that does not work it gives me an error:</p>
<blockquote>
<p>TypeError: data type not understood.</p>
</blockquote>
<p>I've tried <code>dtype='string'</code>, <code>dtype = np.str</code>, <code>dtype = np.str_</code> and quite a few more but none of them get the <code>dtype</code> to be string. Any help would be appreciated,</p>
<p>Cameron</p>
| -2 | 2016-07-25T21:38:34Z | 38,590,863 | <p>I had a python list of lists. However, some lists were length 0 and some were length 1, so numpy couldn't accept the string dtype. When I padded the lists so they were all the same length, that fixed everything.</p>
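<p>To make the fix concrete, here is a minimal sketch (the ragged <code>seclist</code> below is made up for illustration). Padding every inner list to a common length gives numpy a rectangular array it can hold with a fixed-width string dtype:</p>

```python
import numpy as np

# Hypothetical ragged data: rows of length 0, 1 and 2
seclist = [['ab'], [], ['cd', 'ef'], ['gh']]

# Pad each row with empty strings up to the longest row
width = max(len(row) for row in seclist)
padded = [row + [''] * (width - len(row)) for row in seclist]

# With equal-length rows, a fixed-width string dtype now works
arr = np.asarray(padded, dtype='U2')  # 'U2' is the unicode analogue of 'a2'
print(arr.shape)  # (4, 2)
```

<p>After that, <code>np.savetxt(path + 'section.csv', arr, fmt='%s', delimiter=',')</code> writes the file without the <code>ValueError</code>.</p>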
| 0 | 2016-07-26T13:12:23Z | [
"python",
"arrays",
"string",
"csv",
"numpy"
] |
ImportError: No module named setuptools.command on Mac OS X within virtualenv | 38,577,959 | <p>Trying to install a Python pip package (Django Rest Framework docs, <a href="https://pypi.python.org/pypi/drfdocs/" rel="nofollow"><code>drfdocs</code></a>) on Mac OSX within a virtualenv:</p>
<p>Here are the relevant versions of pip, python, easy_install:</p>
<pre><code>$ virtualenv --version
1.11.4
$ mkvirtualenv test
New python executable in test/bin/python
Installing setuptools, pip...done.
(test)$ python --version; which python
Python 2.7.10
/Users/me/.virtualenvs/test/bin/python
(test)$ pip --version; which pip
pip 1.5.4 from /Users/me/.virtualenvs/test/lib/python2.7/site-packages (python 2.7)
/Users/me/.virtualenvs/test/bin/pip
(test)$ easy_install --version; which easy_install
setuptools 2.2
/Users/me/.virtualenvs/test/bin/easy_install
(test)$ python -c "import setuptools.command; print setuptools.command"
<module 'setuptools.command' from '/Users/me/.virtualenvs/test/lib/python2.7/site-packages/setuptools/command/__init__.pyc'>
</code></pre>
<p>And here's the error:</p>
<pre><code>$ pip install drfdocs
Downloading/unpacking drfdocs
Downloading drfdocs-0.0.11.tar.gz (771kB): 771kB downloaded
Running setup.py (path:/Users/me/.virtualenvs/test/build/drfdocs/setup.py) egg_info for package drfdocs
Traceback (most recent call last):
File "<string>", line 3, in <module>
ImportError: No module named setuptools.command
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 3, in <module>
ImportError: No module named setuptools.command
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /Users/me/.virtualenvs/test/build/drfdocs
Storing debug log for failure in /Users/me/.pip/pip.log
</code></pre>
<p>I tried a variety of fixes from other Stack Overflow answers <a href="http://stackoverflow.com/questions/17892071/pip-install-error-setuptools-command-not-found">here</a>, but none worked. </p>
<pre><code>$ pip install -U setuptools
</code></pre>
<p>did not help either. </p>
<hr>
<p><strong>EDIT</strong>: As requested:</p>
<pre><code>(test)$ python -c "import setuptools; print setuptools.__file__; print setuptools.__version__"
/Users/me/.virtualenvs/test/lib/python2.7/site-packages/setuptools/__init__.pyc
2.2
</code></pre>
<hr>
<p><strong>EDIT #2</strong>: I tried <code>pip install pip --upgrade</code> so now I'm on <code>pip==8.1.2</code>. </p>
<p>Now when I try to install I get a slightly different error:</p>
<pre><code>(test)$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import setuptools
>>> exit()
(test)$ pip --version
pip 8.1.2 from /Users/me/.virtualenvs/test/lib/python2.7/site-packages (python 2.7)
(test)$ pip install drfdocs
Collecting drfdocs
Using cached drfdocs-0.0.11.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named setuptools
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/kc/00fkv5q91vz815b2jycc8cv40000gn/T/pip-build-olnkAD/drfdocs/
</code></pre>
<p>Still doesn't make sense why this is happening though.</p>
| 7 | 2016-07-25T21:49:13Z | 38,682,395 | <p>This is a bug in the package. See <a href="https://github.com/ekonstantinidis/django-rest-framework-docs/issues/120" rel="nofollow">issue #120</a>.</p>
<p>Wait for <a href="https://github.com/ekonstantinidis/django-rest-framework-docs/milestone/3" rel="nofollow">v0.1.0</a> or download <a href="https://pypi.python.org/packages/e5/9e/3a9aa6908ad7bd95b46f7fe05256681f4101de9a7769b6928159a986ef61/drfdocs-0.0.11.tar.gz" rel="nofollow">package from PyPI</a>, remove <code>site</code> directory from this package, install patched package.</p>
| 3 | 2016-07-31T08:59:58Z | [
"python",
"django",
"osx",
"virtualenv",
"virtualenvwrapper"
] |
Matplotlib- Any way to use integers AND decimals in colorbar ticks? | 38,577,978 | <p><strong>The issue</strong></p>
<p>I have a plot and I need the colorbar ticks to be larger since I'm adding this plot to a multi-panel plot & the tick labels will be hard to read otherwise with everything so condensed. My problem is that if I make the labels bigger, they start running into each other near the end of the colorbar because of the .0 decimals. I can't make all the ticks integers though since I need the decimal labels near the left side & center of the colorbar.</p>
<p><a href="http://i.stack.imgur.com/ZF6pe.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZF6pe.png" alt="enter image description here"></a></p>
<p><strong>The code</strong></p>
<p>Here's the code I used to make the plot.</p>
<pre><code>#Set variables
lonlabels = ['0','45E','90E','135E','180','135W','90W','45W','0']
latlabels = ['90S','60S','30S','Eq.','30N','60N','90N']
#Set cmap properties
bounds = np.array([0.001,0.01,0.1,1,5,10,25,50])
norm = colors.LogNorm(vmin=0.0001,vmax=50) #creates logarithmic scale
cmap = plt.get_cmap('jet')
cmap.set_over('#660000') #everything above range of colormap
#Create basemap
fig,ax = plt.subplots(figsize=(15.,10.))
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c')
m.drawcoastlines(linewidth=1)
m.drawcountries(linewidth=1)
m.drawparallels(np.arange(-90,90,30.),linewidth=0.3)
m.drawmeridians(np.arange(-180.,180.,45.),linewidth=0.3)
meshlon,meshlat = np.meshgrid(lon,lat)
x,y = m(meshlon,meshlat)
#Plot variables
trend = m.pcolormesh(x,y,lintrends_120,cmap=cmap, norm=norm, shading='gouraud',vmin=0.0001,vmax=50)
#Set plot properties
#Colorbar
cbar=m.colorbar(trend, size='8%',ticks=bounds,extend="max",location='bottom',pad=0.8)
cbar.set_label(label='Linear Trend (mm/day/decade)',size=25)
cbar.set_ticklabels(bounds)
for t in cbar.ax.get_xticklabels():
t.set_fontsize(20)
#Titles & labels
ax.set_title('c) 1979-2098',fontsize=35)
ax.set_xlabel('Longitude',fontsize=25)
ax.set_xticks(np.arange(0,405,45))
ax.set_xticklabels(lonlabels,fontsize=20)
ax.set_ylabel('Latitude',fontsize=25)
ax.set_yticks(np.arange(-90,120,30))
ax.set_yticklabels(latlabels,fontsize=20)
</code></pre>
<p>The colorbar block in the last chunk of code near the bottom.</p>
<p><strong>The question</strong></p>
<p>Is there a way to combine decimals (floats) AND integers in the colorbar tick labels so 1.0, 2.0, etc. will show up as 1, 2, etc. instead? I tried making two separate np.array() instances with decimals in one and integers in the other, but when I append them, they all turn into floats.</p>
| 0 | 2016-07-25T21:50:30Z | 38,578,231 | <p>It's hard to test, since your example isn't directly runnable, but you should just be able to set the labels to whatever string you like. Something like so:</p>
<pre><code>bound_labels = [str(v) if v <=1 else str(int(v)) for v in bounds]
cbar.set_ticklabels(bound_labels)
</code></pre>
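<p>A runnable illustration with plain Python floats (no plot needed). This variant checks <code>is_integer()</code> instead of <code>v <= 1</code>, so the tick at <code>1</code> also loses its <code>.0</code>:</p>

```python
# Hypothetical bounds mirroring the colorbar ticks in the question
bounds = [0.001, 0.01, 0.1, 1.0, 5.0, 10.0, 25.0, 50.0]

# Keep real decimals as-is, drop the trailing ".0" from whole numbers
bound_labels = [str(int(v)) if v.is_integer() else str(v) for v in bounds]
print(bound_labels)
# ['0.001', '0.01', '0.1', '1', '5', '10', '25', '50']
```

<p>Passing <code>bound_labels</code> to <code>cbar.set_ticklabels(...)</code> then mixes decimal and integer labels on one colorbar.</p>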
| 0 | 2016-07-25T22:13:30Z | [
"python",
"matplotlib",
"colorbar"
] |
Matplotlib- Any way to use integers AND decimals in colorbar ticks? | 38,577,978 | <p><strong>The issue</strong></p>
<p>I have a plot and I need the colorbar ticks to be larger since I'm adding this plot to a multi-panel plot & the tick labels will be hard to read otherwise with everything so condensed. My problem is that if I make the labels bigger, they start running into each other near the end of the colorbar because of the .0 decimals. I can't make all the ticks integers though since I need the decimal labels near the left side & center of the colorbar.</p>
<p><a href="http://i.stack.imgur.com/ZF6pe.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZF6pe.png" alt="enter image description here"></a></p>
<p><strong>The code</strong></p>
<p>Here's the code I used to make the plot.</p>
<pre><code>#Set variables
lonlabels = ['0','45E','90E','135E','180','135W','90W','45W','0']
latlabels = ['90S','60S','30S','Eq.','30N','60N','90N']
#Set cmap properties
bounds = np.array([0.001,0.01,0.1,1,5,10,25,50])
norm = colors.LogNorm(vmin=0.0001,vmax=50) #creates logarithmic scale
cmap = plt.get_cmap('jet')
cmap.set_over('#660000') #everything above range of colormap
#Create basemap
fig,ax = plt.subplots(figsize=(15.,10.))
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c')
m.drawcoastlines(linewidth=1)
m.drawcountries(linewidth=1)
m.drawparallels(np.arange(-90,90,30.),linewidth=0.3)
m.drawmeridians(np.arange(-180.,180.,45.),linewidth=0.3)
meshlon,meshlat = np.meshgrid(lon,lat)
x,y = m(meshlon,meshlat)
#Plot variables
trend = m.pcolormesh(x,y,lintrends_120,cmap=cmap, norm=norm, shading='gouraud',vmin=0.0001,vmax=50)
#Set plot properties
#Colorbar
cbar=m.colorbar(trend, size='8%',ticks=bounds,extend="max",location='bottom',pad=0.8)
cbar.set_label(label='Linear Trend (mm/day/decade)',size=25)
cbar.set_ticklabels(bounds)
for t in cbar.ax.get_xticklabels():
t.set_fontsize(20)
#Titles & labels
ax.set_title('c) 1979-2098',fontsize=35)
ax.set_xlabel('Longitude',fontsize=25)
ax.set_xticks(np.arange(0,405,45))
ax.set_xticklabels(lonlabels,fontsize=20)
ax.set_ylabel('Latitude',fontsize=25)
ax.set_yticks(np.arange(-90,120,30))
ax.set_yticklabels(latlabels,fontsize=20)
</code></pre>
<p>The colorbar block in the last chunk of code near the bottom.</p>
<p><strong>The question</strong></p>
<p>Is there a way to combine decimals (floats) AND integers in the colorbar tick labels so 1.0, 2.0, etc. will show up as 1, 2, etc. instead? I tried making two separate np.array() instances with decimals in one and integers in the other, but when I append them, they all turn into floats.</p>
| 0 | 2016-07-25T21:50:30Z | 38,578,243 | <p>Try </p>
<pre><code>bounds = ['0.001','0.01','0.1','1','5','10','25','50']
cbar.set_ticklabels(bounds)
</code></pre>
| 2 | 2016-07-25T22:14:35Z | [
"python",
"matplotlib",
"colorbar"
] |
KIVY - Python Keep doing while button is pressed | 38,578,280 | <p>A few days ago (yesterday actually :D) I started programming in KIVY (which is a python module -- if I can call it that).</p>
<p>I'm pretty familiar with python itself but I find KIVY pretty problematic and difficult in some solutions.</p>
<p>I'm currently trying to make a platformer game for IOS/Android but I'm stuck on a problem. I've created two buttons and a character. I want the character to keep moving until the button is released. By that I mean: I can move the character once, when the button is pressed, but I want it to keep moving until the button is released.</p>
<p>I've tried multiple solutions, for example I used python's time module:</p>
<pre><code>class Level1(Screen):
posx = NumericProperty(0)
posy = NumericProperty(0)
moving = True
i = 0
def __init__(self, **kwargs):
super(Level1, self).__init__(**kwargs)
def rightmove(self):
self.posx = self.posx+1
time.sleep(10)
def goright(self):
while self.moving == True:
self.rightmove()
i += 1
if i == 10:
break
def stopright(self):
self.moving == False
</code></pre>
<p>but it doesn't work.
I think that it somehow is put in an endless loop, because when I press the button the app stops working ("app stopped working..." error).</p>
<p>I have pretty much no idea how I can fix this. I've been trying for the last few hours and haven't found a solution yet.
Here's my .py file:</p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition, SlideTransition
from kivy.config import Config
from kivy.core.window import Window
from kivy.uix.label import Label
from kivy.uix.image import Image
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty, NumericProperty
from kivy.clock import Clock
from kivy.uix.floatlayout import FloatLayout
import time
Config.set('graphics','resizable',0) #don't make the app re-sizeable
#Graphics fix
#this fixes drawing issues on some phones
Window.clearcolor = (0,0,0,1.)
language = "english"
curr1msg = 1
class HomeScreen(Screen):
pass
class OptionsScreen(Screen):
pass
class GameScreen(Screen):
pass
class LevelScreen(Screen):
pass
class Level1intro(Screen):
global language
global curr1msg
if language == "english" and curr1msg == 1:
pName = "Pedro"
msg1 = """Hello my friend!
My name is Pedro and I have a problem. Will you help me?
My spanish studens have a spanish test tomorrow, but I lost the exams!
You are the only one who can help me!"""
cont = "Press anywhere to continue..."
elif language == "swedish" and curr1msg == 1:
pName = "Pedro"
msg1 = """Hejsan!
Jag är Pedro och jag har ett problem. Kan du hjälpa mig?
Mina spanska-elever har ett spanskaprov imorgon men jag har tappat bort proven!
Du är den enda som kan hjälpa mig!"""
cont = "Tryck på skärmen för att fortsätta..."
class Level1(Screen):
posx = NumericProperty(0)
posy = NumericProperty(0)
moving = True
i = 0
def __init__(self, **kwargs):
super(Level1, self).__init__(**kwargs)
def rightmove(self):
self.posx = self.posx+1
time.sleep(10)
def goright(self):
while self.moving == True:
self.rightmove()
i += 1
if i == 10:
break
def stopright(self):
self.moving == False
class ScreenManagement(ScreenManager):
pass
presentation = Builder.load_file("main.kv")
class MainApp(App):
def build(self):
return presentation
if __name__ == "__main__":
MainApp().run()
</code></pre>
<p>And here is my .kv file:</p>
<pre><code>#: import FadeTransition kivy.uix.screenmanager.FadeTransition
#: import SlideTransition kivy.uix.screenmanager.SlideTransition
ScreenManagement:
transition: FadeTransition()
HomeScreen:
OptionsScreen:
LevelScreen:
Level1intro:
Level1:
<HomeScreen>:
name: 'home'
FloatLayout:
canvas:
Rectangle:
source:"images/home_background.jpg"
size: self.size
Image:
source:"images/logo.png"
allow_stretch: False
keep_ratio: False
opacity: 1.0
size_hint: 0.7, 0.8
pos_hint: {'center_x': 0.5, 'center_y': 0.9}
Button:
size_hint: 0.32,0.32
pos_hint: {"x":0.34, "y":0.4}
on_press:
app.root.transition = SlideTransition(direction="left")
app.root.current = "level"
background_normal: "images/play_button.png"
allow_stretch: False
Button:
size_hint: 0.25,0.25
pos_hint: {"x":0.38, "y":0.15}
on_press:
app.root.transition = SlideTransition(direction="left")
app.root.current = 'options'
background_normal: "images/settings_button.png"
<OptionsScreen>:
name: 'options'
<LevelScreen>
name: "level"
FloatLayout:
canvas:
Rectangle:
source:"images/home_background.jpg"
size: self.size
Label:
text: "[b]Choose Level[/b]"
markup: 1
font_size: 40
color: 1,0.5,0,1
pos: 0,250
Button:
size_hint: 0.1,0.1
pos_hint: {"x": 0.1, "y": 0.8}
on_press:
app.root.current = "level1intro"
Image:
source:"images/level1.png"
allow_stretch: True
y: self.parent.y + self.parent.height - 70
x: self.parent.x
height: 80
width: 80
Button:
background_normal: "images/menu_button.png"
pos_hint: {"x": 0.4, "y": 0}
size_hint: 0.3,0.3
pos_hint: {"x": 0.35}
on_press:
app.root.transition = SlideTransition(direction="right")
app.root.current = "home"
<Level1intro>
name: "level1intro"
canvas:
Rectangle:
source: "images/background.png"
size: self.size
Image:
source: "images/dialog.png"
pos_hint: {"y": -0.35}
size_hint: 0.7,1.0
Label:
font_size: 20
color: 1,1,1,1
pos_hint: {"x": -0.385, "y": -0.285}
text: root.pName
Label:
font_size: 15
color: 1,1,1,1
pos_hint: {"x": -0.15, "y": -0.4}
text: root.msg1
Label:
font_size: 15
color: 0.7,0.8,1,1
pos_hint: {"x": 0.025, "y": -0.449}
text: root.cont
on_touch_down:
app.root.transition = FadeTransition()
app.root.current = "level1"
<Level1>
name: "level1"
canvas:
Rectangle:
source: "images/background.png"
size: self.size
Button:
text: ">"
size_hint: 0.1,0.1
pos_hint: {"x":0.9, "y":0.0}
on_press:
root.goright()
on_release:
root.stopright()
Button:
text: "<"
size_hint: 0.1,0.1
pos_hint: {"x": 0.0, "y": 0.0}
on_press:
root.posx = root.posx-1
Image:
id: char
source: "images/idle1.png"
size: self.size
pos: root.posx,root.posy
</code></pre>
<p>Thank you for your time and help.
GryTrean</p>
<p>//I changed "i" to "self.i" and it doesn't fix the problem.</p>
| 1 | 2016-07-25T22:17:37Z | 38,578,488 | <p><a href="https://kivy.org/docs/api-kivy.uix.behaviors.button.html#kivy.uix.behaviors.button.ButtonBehavior" rel="nofollow">Here</a> is the button API in kivy. The two bindings that are applicable to your problem are the <code>on_press</code> and <code>on_release</code> bindings. You would use these with the <code>Button.bind()</code> method. An example of binding a function to a button event is available <a href="https://kivy.org/docs/api-kivy.uix.button.html#" rel="nofollow">here</a>. </p>
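<p>The core pattern — set a flag and start a repeating callback on press, stop it on release — can be sketched without Kivy at all. Here <code>tick()</code> stands in for what a <code>Clock.schedule_interval</code> callback would do; the blocking while-loop from the question is what freezes the app:</p>

```python
# Kivy-free sketch of the press/release movement pattern
class Mover:
    def __init__(self):
        self.posx = 0
        self.moving = False

    def on_press(self):
        self.moving = True   # note '=', not '==' as in the question's stopright

    def on_release(self):
        self.moving = False

    def tick(self):
        # In Kivy this would run via Clock.schedule_interval, never blocking
        if self.moving:
            self.posx += 1

m = Mover()
m.on_press()
for _ in range(10):  # ten simulated clock ticks while the button is held
    m.tick()
m.on_release()
m.tick()             # ticks after release no longer move the character
print(m.posx)  # 10
```
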
| 0 | 2016-07-25T22:41:30Z | [
"android",
"python",
"ios",
"kivy"
] |
KIVY - Python Keep doing while button is pressed | 38,578,280 | <p>A few days ago (yesterday actually :D) I started programming in KIVY (which is a python module -- if I can call it that).</p>
<p>I'm pretty familiar with python itself but I find KIVY pretty problematic and difficult in some solutions.</p>
<p>I'm currently trying to make a platformer game for IOS/Android but I'm stuck on a problem. I've created two buttons and a character. I want the character to keep moving until the button is released. By that I mean: I can move the character once, when the button is pressed, but I want it to keep moving until the button is released.</p>
<p>I've tried multiple solutions, for example I used python's time module:</p>
<pre><code>class Level1(Screen):
posx = NumericProperty(0)
posy = NumericProperty(0)
moving = True
i = 0
def __init__(self, **kwargs):
super(Level1, self).__init__(**kwargs)
def rightmove(self):
self.posx = self.posx+1
time.sleep(10)
def goright(self):
while self.moving == True:
self.rightmove()
i += 1
if i == 10:
break
def stopright(self):
self.moving == False
</code></pre>
<p>but it doesn't work.
I think that it somehow is put in an endless loop, because when I press the button the app stops working ("app stopped working..." error).</p>
<p>I have pretty much no idea how I can fix this. I've been trying for the last few hours and haven't found a solution yet.
Here's my .py file:</p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition, SlideTransition
from kivy.config import Config
from kivy.core.window import Window
from kivy.uix.label import Label
from kivy.uix.image import Image
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty, NumericProperty
from kivy.clock import Clock
from kivy.uix.floatlayout import FloatLayout
import time
Config.set('graphics','resizable',0) #don't make the app re-sizeable
#Graphics fix
#this fixes drawing issues on some phones
Window.clearcolor = (0,0,0,1.)
language = "english"
curr1msg = 1
class HomeScreen(Screen):
pass
class OptionsScreen(Screen):
pass
class GameScreen(Screen):
pass
class LevelScreen(Screen):
pass
class Level1intro(Screen):
global language
global curr1msg
if language == "english" and curr1msg == 1:
pName = "Pedro"
msg1 = """Hello my friend!
My name is Pedro and I have a problem. Will you help me?
My spanish studens have a spanish test tomorrow, but I lost the exams!
You are the only one who can help me!"""
cont = "Press anywhere to continue..."
elif language == "swedish" and curr1msg == 1:
pName = "Pedro"
msg1 = """Hejsan!
Jag är Pedro och jag har ett problem. Kan du hjälpa mig?
Mina spanska-elever har ett spanskaprov imorgon men jag har tappat bort proven!
Du är den enda som kan hjälpa mig!"""
cont = "Tryck på skärmen för att fortsätta..."
class Level1(Screen):
posx = NumericProperty(0)
posy = NumericProperty(0)
moving = True
i = 0
def __init__(self, **kwargs):
super(Level1, self).__init__(**kwargs)
def rightmove(self):
self.posx = self.posx+1
time.sleep(10)
def goright(self):
while self.moving == True:
self.rightmove()
i += 1
if i == 10:
break
def stopright(self):
self.moving == False
class ScreenManagement(ScreenManager):
pass
presentation = Builder.load_file("main.kv")
class MainApp(App):
def build(self):
return presentation
if __name__ == "__main__":
MainApp().run()
</code></pre>
<p>And here is my .kv file:</p>
<pre><code>#: import FadeTransition kivy.uix.screenmanager.FadeTransition
#: import SlideTransition kivy.uix.screenmanager.SlideTransition
ScreenManagement:
transition: FadeTransition()
HomeScreen:
OptionsScreen:
LevelScreen:
Level1intro:
Level1:
<HomeScreen>:
name: 'home'
FloatLayout:
canvas:
Rectangle:
source:"images/home_background.jpg"
size: self.size
Image:
source:"images/logo.png"
allow_stretch: False
keep_ratio: False
opacity: 1.0
size_hint: 0.7, 0.8
pos_hint: {'center_x': 0.5, 'center_y': 0.9}
Button:
size_hint: 0.32,0.32
pos_hint: {"x":0.34, "y":0.4}
on_press:
app.root.transition = SlideTransition(direction="left")
app.root.current = "level"
background_normal: "images/play_button.png"
allow_stretch: False
Button:
size_hint: 0.25,0.25
pos_hint: {"x":0.38, "y":0.15}
on_press:
app.root.transition = SlideTransition(direction="left")
app.root.current = 'options'
background_normal: "images/settings_button.png"
<OptionsScreen>:
name: 'options'
<LevelScreen>
name: "level"
FloatLayout:
canvas:
Rectangle:
source:"images/home_background.jpg"
size: self.size
Label:
text: "[b]Choose Level[/b]"
markup: 1
font_size: 40
color: 1,0.5,0,1
pos: 0,250
Button:
size_hint: 0.1,0.1
pos_hint: {"x": 0.1, "y": 0.8}
on_press:
app.root.current = "level1intro"
Image:
source:"images/level1.png"
allow_stretch: True
y: self.parent.y + self.parent.height - 70
x: self.parent.x
height: 80
width: 80
Button:
background_normal: "images/menu_button.png"
pos_hint: {"x": 0.4, "y": 0}
size_hint: 0.3,0.3
pos_hint: {"x": 0.35}
on_press:
app.root.transition = SlideTransition(direction="right")
app.root.current = "home"
<Level1intro>
name: "level1intro"
canvas:
Rectangle:
source: "images/background.png"
size: self.size
Image:
source: "images/dialog.png"
pos_hint: {"y": -0.35}
size_hint: 0.7,1.0
Label:
font_size: 20
color: 1,1,1,1
pos_hint: {"x": -0.385, "y": -0.285}
text: root.pName
Label:
font_size: 15
color: 1,1,1,1
pos_hint: {"x": -0.15, "y": -0.4}
text: root.msg1
Label:
font_size: 15
color: 0.7,0.8,1,1
pos_hint: {"x": 0.025, "y": -0.449}
text: root.cont
on_touch_down:
app.root.transition = FadeTransition()
app.root.current = "level1"
<Level1>
name: "level1"
canvas:
Rectangle:
source: "images/background.png"
size: self.size
Button:
text: ">"
size_hint: 0.1,0.1
pos_hint: {"x":0.9, "y":0.0}
on_press:
root.goright()
on_release:
root.stopright()
Button:
text: "<"
size_hint: 0.1,0.1
pos_hint: {"x": 0.0, "y": 0.0}
on_press:
root.posx = root.posx-1
Image:
id: char
source: "images/idle1.png"
size: self.size
pos: root.posx,root.posy
</code></pre>
<p>Thank you for your time and help.
GryTrean</p>
<p>//I changed "i" to "self.i" and it doesn't fix the problem.</p>
| 1 | 2016-07-25T22:17:37Z | 38,672,054 | <p>I have created a simple example for you, featuring how to move a character (in this case, <em>elf warrior level 1</em>) with a button press:</p>
<pre><code>#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.clock import mainthread, Clock
gui = '''
Root:
orientation: 'vertical'
arena: arena
control_button: control_button
Arena:
id: arena
Button
id: control_button
size_hint_y: None
height: dp(50)
text: 'move'
<Arena@FloatLayout>:
player: player
Button:
id: player
pos: 150, 300
text: 'elf warrior\\nlevel 1'
size_hint: None, None
size: 100, 100
'''
class Root(BoxLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
@mainthread
def job():
self.control_button.bind(on_press=self._on_press)
self.control_button.bind(on_release=self._on_release)
job()
def _on_press(self, button):
self.arena.start_movement()
def _on_release(self, button):
self.arena.stop_movement()
class Arena(FloatLayout):
def start_movement(self):
Clock.schedule_interval(self._move_right, 0.01)
def stop_movement(self):
Clock.unschedule(self._move_right)
def _move_right(self, dt):
self.player.x += 1
class Test(App):
def build(self):
return Builder.load_string(gui)
Test().run()
</code></pre>
| 0 | 2016-07-30T08:45:18Z | [
"android",
"python",
"ios",
"kivy"
] |
Flask Rest API Versioning - Returned URLs | 38,578,378 | <h2>The FooBar Api is Published</h2>
<p>Let's say there's a team of developers working on an API, called the <code>FooBar api</code> with a few endpoints:</p>
<pre><code>GET /foo
# returns {'bar': '/bar', 'data': 'foo'}
GET /bar
# returns {'data': 'hello world!'}
</code></pre>
<p>Now, let's say the <code>FooBar api</code> exploded and became hugely popular. Developers from all around the world are using the <code>FooBar api</code> and now thousands of projects are completely dependent on it.</p>
<hr>
<h1>The Problem</h1>
<p>The <code>FooBar api</code> recently got a new project manager. He says, that it is now desired that the responses return a <code>message</code> instead of <code>data</code>, because <code>message</code> is "more descriptive." Unfortunately, any change to the <code>FooBar</code> API could break thousands of these projects. Even though all of these developers whose projects were broken would mostly be patient and understanding about the change, the <code>FooBar</code> team doesn't want to break their own dependent projects and decide it's best to keep the api backwards compatible. </p>
<hr>
<h2>The Solution</h2>
<p>The <code>FooBar api</code> needs to be versioned. Unfortunately <a href="https://www.troyhunt.com/your-api-versioning-is-wrong-which-is/" rel="nofollow">there is no good way to do this</a>. Luckily for the <code>FooBar</code> team, their project manager knows best and decided that the versioning should be accomplished by placing a version number in the url since "that's the part he can see." So, once the second version of the <code>FooBar api</code> is complete, the two versions should look something like this:</p>
<pre><code>FooBar v1
GET /foo
# returns {'bar': '/bar', 'data': 'foo'}
GET /bar
# returns {'data': 'hello world!'}
FooBar v2
GET /v2/foo
# returns {'bar': '<url to bar>', 'message': 'foo'}
GET /v2/bar
# returns {'message': 'hello world!'}
</code></pre>
<hr>
<h2>The Second Problem</h2>
<p>The <code>FooBar</code> team has another problem now; they don't know what should go in <code><url to bar></code>. Of the roughly infinite possible permutations of characters, they were impressively able to get it down to two choices - <code>/v2/bar</code> and <code>/bar</code>.</p>
<hr>
<h2>The Question</h2>
<p>What are the pros and cons of using <code>/v2/bar</code> vs <code>/bar</code>?</p>
| 1 | 2016-07-25T22:29:05Z | 38,578,432 | <p>Do neither. It's not in keeping with REST or HTTP to have two different representations of the same resource with different URLs. They are the same resource, they should have the same URL.</p>
<p>Whether the clients are using version 1 of the API or version 2 of the API, they are still referring to the same resource, they just want different representations of it. So get rid of <code>/v2/</code> in your URLs altogether and have your clients ask for the version they want as a media type parameter:</p>
<pre><code>GET /foo HTTP/1.1
Accept: application/vnd.whatever+json;version=2
Connection: close
</code></pre>
<p>Your existing clients won't provide the version parameter and you can default to version 1. Newer clients that support version 2 of the API will know to ask for the version 2 representation of that resource. Your links can refer to the same resource properly regardless of which version of the API the client is using.</p>
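<p>A minimal, framework-agnostic sketch of that defaulting behaviour (a real application would use a proper Accept-header parser; the media type is the hypothetical one from the request above):</p>

```python
def accept_version(accept_header, default=1):
    """Pull ';version=N' out of an Accept header, defaulting for old clients."""
    for part in accept_header.split(';'):
        key, _, value = part.strip().partition('=')
        if key == 'version' and value.isdigit():
            return int(value)
    return default

print(accept_version('application/vnd.whatever+json;version=2'))  # 2
print(accept_version('application/json'))                         # 1
```

<p>A view can then branch on the parsed version to choose between the <code>data</code> and <code>message</code> representations of the same resource.</p>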
| 1 | 2016-07-25T22:35:40Z | [
"python",
"api",
"rest",
"flask",
"api-versioning"
] |
python3 unotools connection error failed to connect | 38,578,454 | <p>I have searched for an answer, but nothing has helped so far. I have a method that I want to use to create an odt file and fill it with text. I also want the user to view the file when it is created. I am using python 3.4.3 unotools 0.3.3 LinuxMint 17.1 LibreOffice 4.2.8.2</p>
<p>The issue:</p>
<pre><code>unotools.errors.ConnectionError: failed to connect: ('socket,host=localhost,port=8100', {})
</code></pre>
<p>The unotools sample worked fine from terminal - created and saved a sample.odt without errors. My draft code:</p>
<pre><code>def writer_report(self):
subprocess.Popen(["soffice", "--accept='socket,host=localhost,port=8100;urp;StarOffice.Service'"])
time.sleep(5) # using this to give time for LibreOffice to open - temporary
context = connect(Socket('localhost', '8100'))
writer = Writer(context)
writer.set_string_to_end('world\n')
writer.set_string_to_start('hello\n')
writer.store_to_url('output.odt','FilterName','writer8')
writer.close(True)
</code></pre>
<p>The LibreOffice application opens and remains open. However, the connection seems to be lost.<br />I hope someone can give me assistance, thank you.</p>
| 0 | 2016-07-25T22:37:47Z | 38,595,717 | <p>I do not recommend code like this:</p>
<pre><code>subprocess.Popen(...)
time.sleep(...)
</code></pre>
<p>It is better to use a shell script to start <code>soffice</code> and then call the python script.</p>
<p>However if you are determined to run <code>soffice</code> in a subprocess, then I recommend increasing the sleep time to at least 15 seconds.</p>
<p>See <a href="https://forum.openoffice.org/en/forum/viewtopic.php?t=1014" rel="nofollow">https://forum.openoffice.org/en/forum/viewtopic.php?t=1014</a>.</p>
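<p>If the subprocess route is kept, one hedged alternative to a fixed <code>time.sleep</code> is to poll until the socket actually accepts connections (a sketch; host and port are the ones from the question):</p>

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Return True once a TCP connection to host:port succeeds, else False."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

<p>Calling <code>wait_for_port('localhost', 8100)</code> before <code>connect(Socket(...))</code> avoids guessing a sleep length, although it does not help if soffice never opens the port at all.</p>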
| 0 | 2016-07-26T16:51:44Z | [
"python",
"python-3.x",
"connection",
"libreoffice-writer"
] |
python3 unotools connection error failed to connect | 38,578,454 | <p>I have searched for an answer, but nothing has helped so far. I have a method that I want to use to create an odt file and fill it with text. I also want the user to view the file when it is created. I am using python 3.4.3 unotools 0.3.3 LinuxMint 17.1 LibreOffice 4.2.8.2</p>
<p>The issue:</p>
<pre><code>unotools.errors.ConnectionError: failed to connect: ('socket,host=localhost,port=8100', {})
</code></pre>
<p>The unotools sample worked fine from terminal - created and saved a sample.odt without errors. My draft code:</p>
<pre><code>def writer_report(self):
subprocess.Popen(["soffice", "--accept='socket,host=localhost,port=8100;urp;StarOffice.Service'"])
time.sleep(5) # using this to give time for LibreOffice to open - temporary
context = connect(Socket('localhost', '8100'))
writer = Writer(context)
writer.set_string_to_end('world\n')
writer.set_string_to_start('hello\n')
writer.store_to_url('output.odt','FilterName','writer8')
writer.close(True)
</code></pre>
<p>The LibreOffice application opens and remains open. However, the connection seems to be lost.<br />I hope someone can give me assistance, thank you.</p>
| 0 | 2016-07-25T22:37:47Z | 38,624,281 | <p>Thanks for the advice. I did want this run as a subprocess. I tried extending the time but still no joy.<br />I am now looking at using the Python odfpy 1.3.3 package, which, after using it for a day or two, I am already having more success with.</p>
| 0 | 2016-07-27T22:32:35Z | [
"python",
"python-3.x",
"connection",
"libreoffice-writer"
] |
how to split the last element in the string | 38,578,493 | <p>I am trying to split the last element from the build_location if it is the same as the previous element; however, the expected output is not the same. Can you help me fix it?</p>
<pre><code>build_location = "\\data\builds797\PROD\client.1.8-01180-STD.PROD-1\client.1.8-01180-STD.PROD-1"
buildid =build_location.split("\\")
if buildid[-1] == buildid[-2]:
#split the last element after "\"
build_location = build_location.split("\\")[-1]
print build_location
OUTPUT:-
client.1.8-01180-STD.PROD-1
EXPECTED OUTPUT:-
\\data\builds797\PROD\client.1.8-01180-STD.PROD-1
</code></pre>
| 0 | 2016-07-25T22:42:18Z | 38,578,506 | <p>That's because you're only indexing the last element. You should use a <em>slice</em> to exclude the last item not an <em>index</em> on the last item:</p>
<pre><code>if buildid[-1] == buildid[-2]:
#split the last element after "\"
build_location = build_location.split("\\")[:-1]
# ^^^^^
</code></pre>
<p>Or better, perform the <em>slicing</em> on the already split <code>buildid</code> to avoid re-splitting:</p>
<pre><code>if buildid[-1] == buildid[-2]:
#split the last element after "\"
build_location = buildid[:-1]
</code></pre>
<hr>
<p>Then, to rebuild the original string from the <em>slice</em>, use <code>join</code>:</p>
<pre><code>build_location = "\\".join(build_location)
</code></pre>
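<p>Putting the slice and the <code>join</code> together, a complete runnable sketch (using the path from the question, written as a raw string so the backslashes survive as typed):</p>

```python
# Example path from the question; the raw string keeps the backslashes literal.
build_location = r"\\data\builds797\PROD\client.1.8-01180-STD.PROD-1\client.1.8-01180-STD.PROD-1"

buildid = build_location.split("\\")
if buildid[-1] == buildid[-2]:
    # Drop the duplicated last element, then rebuild the path.
    build_location = "\\".join(buildid[:-1])

print(build_location)  # \\data\builds797\PROD\client.1.8-01180-STD.PROD-1
```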
| 0 | 2016-07-25T22:43:59Z | [
"python"
] |
how to split the last element in the string | 38,578,493 | <p>I am trying to split the last element from the build_location if it is the same as the previous element; however, the expected output is not the same. Can you help on how to fix it?</p>
<pre><code>build_location = "\\data\builds797\PROD\client.1.8-01180-STD.PROD-1\client.1.8-01180-STD.PROD-1"
buildid =build_location.split("\\")
if buildid[-1] == buildid[-2]:
#split the last element after "\"
build_location = build_location.split("\\")[-1]
print build_location
OUTPUT:-
client.1.8-01180-STD.PROD-1
EXPECTED OUTPUT:-
\\data\builds797\PROD\client.1.8-01180-STD.PROD-1
</code></pre>
| 0 | 2016-07-25T22:42:18Z | 38,578,509 | <p>Change:</p>
<pre><code>build_location = build_location.split("\\")[-1]
</code></pre>
<p>To:</p>
<pre><code>build_location = build_location.split("\\")[:-1]
# ---^---
</code></pre>
<p>You want to take all the elements except the last one, rather than just the last one.</p>
<p>That's called <strong>slicing</strong> and you can learn about it and see more examples <a class='doc-link' href="http://stackoverflow.com/documentation/python/1494/list-slicing-selecting-parts-of-lists#t=201607252247267486025">here</a></p>
<p>After that you should merge the list back to one string and add the extra <code>\</code> with:</p>
<pre><code>'\\'+'\\'.join(build_location)
</code></pre>
| 1 | 2016-07-25T22:44:20Z | [
"python"
] |
How do I convert all python operations involving None to None values? | 38,578,494 | <p>I want all mathematical operations involving one or more None variables to return None. </p>
<p>Example: </p>
<pre><code>a = None
b = 7
a*b
</code></pre>
<p>I want this last line to return None, but it instead gives me an error: </p>
<blockquote>
<p>TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'</p>
</blockquote>
<p>I understand why this error exists and all that, but is there any way to force the result to be just a <code>None</code>? </p>
<p><strong>Background</strong>: I have a few functions that mine data and return a value, called <code>diff</code>. Later on I multiply or add <code>diff</code> to a few things to get meaningful information, but not all of my original data contains a useful <code>diff</code>, so I have it set to return <code>diff = None</code> in these cases. I want to be able to skip over these points when I'm plotting results. Python seems to have no trouble skipping over <code>None</code> elements in an array when I'm plotting, so I'd like to just have the results of the operations be <code>None</code>.</p>
| 2 | 2016-07-25T22:42:22Z | 38,578,510 | <p>You can check for presence of <code>None</code> in arguments and then return early:</p>
<pre><code>def calc(*args, **kwargs):
    if None in list(args) + list(kwargs.values()):  # check argument values, not key names
return None
#rest
</code></pre>
<p>You can also write a decorator and wrap the existing functions; this would be more useful if you want to apply this operation to many existing functions (or even just one, for clarity):</p>
<pre><code>def return_none_for_none(f):
def wrapped(*args, **kwargs):
        return None if None in list(args) + list(kwargs.values()) else f(*args, **kwargs)
return wrapped
@return_none_for_none
def some_math_exp(a, b, c=2):
return a + b*c
</code></pre>
| 0 | 2016-07-25T22:44:50Z | [
"python",
"nonetype"
] |
How do I convert all python operations involving None to None values? | 38,578,494 | <p>I want all mathematical operations involving one or more None variables to return None. </p>
<p>Example: </p>
<pre><code>a = None
b = 7
a*b
</code></pre>
<p>I want this last line to return None, but it instead gives me an error: </p>
<blockquote>
<p>TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'</p>
</blockquote>
<p>I understand why this error exists and all that, but is there any way to force the result to be just a <code>None</code>? </p>
<p><strong>Background</strong>: I have a few functions that mine data and return a value, called <code>diff</code>. Later on I multiply or add <code>diff</code> to a few things to get meaningful information, but not all of my original data contains a useful <code>diff</code>, so I have it set to return <code>diff = None</code> in these cases. I want to be able to skip over these points when I'm plotting results. Python seems to have no trouble skipping over <code>None</code> elements in an array when I'm plotting, so I'd like to just have the results of the operations be <code>None</code>.</p>
| 2 | 2016-07-25T22:42:22Z | 38,578,556 | <p>Instead of trying to force all mathematical operations that contain some arbitrary other value to return that arbitrary other value, you could simply use a <a href="https://en.wikipedia.org/wiki/NaN" rel="nofollow"><code>NaN</code></a>, which is designed for exactly this sort of purpose:</p>
<pre><code>>>> nan = float("NaN")
>>> 7 * nan
nan
>>> 7 + nan
nan
</code></pre>
<p>A <code>nan</code> will correctly cascade throughout your mathematical operations. </p>
<p>A good plotting library will also understand <code>nan</code> and omit them at plot time. But if not, simply do your replacement of <code>NaN</code> to <code>None</code> immediately prior to plotting.</p>
| 2 | 2016-07-25T22:49:34Z | [
"python",
"nonetype"
] |
How do I convert all python operations involving None to None values? | 38,578,494 | <p>I want all mathematical operations involving one or more None variables to return None. </p>
<p>Example: </p>
<pre><code>a = None
b = 7
a*b
</code></pre>
<p>I want this last line to return None, but it instead gives me an error: </p>
<blockquote>
<p>TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'</p>
</blockquote>
<p>I understand why this error exists and all that, but is there any way to force the result to be just a <code>None</code>? </p>
<p><strong>Background</strong>: I have a few functions that mine data and return a value, called <code>diff</code>. Later on I multiply or add <code>diff</code> to a few things to get meaningful information, but not all of my original data contains a useful <code>diff</code>, so I have it set to return <code>diff = None</code> in these cases. I want to be able to skip over these points when I'm plotting results. Python seems to have no trouble skipping over <code>None</code> elements in an array when I'm plotting, so I'd like to just have the results of the operations be <code>None</code>.</p>
| 2 | 2016-07-25T22:42:22Z | 38,578,598 | <p>Without much context on your code structure, I assume you just want to try an operation and return None if it's not possible.</p>
<p>Perhaps a try/except could work here:</p>
<pre><code>try:
# operation here
except TypeError:
return None
</code></pre>
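<p>Concretely, wrapping the question's multiplication in such a guard might look like this (a hedged sketch; <code>safe_mul</code> is a made-up name):</p>

```python
def safe_mul(a, b):
    try:
        return a * b
    except TypeError:  # raised when a or b is None (or otherwise incompatible)
        return None

print(safe_mul(None, 7))  # None
print(safe_mul(6, 7))     # 42
```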
<p>However, I do agree with <strong>@donkopotamus</strong>: using a <code>NaN</code> float value accomplishes exactly what you're looking for. If the expression you're trying to evaluate contains a <code>NaN</code>, then the entire expression evaluates to <code>NaN</code>. You could always return <code>None</code> if <code>NaN</code> is found, or convert your data when passing it to the math library.</p>
<pre><code>>>> nan = float('NaN')
>>> nan + 15 ** 4
nan
>>> 1000 - nan /6%3
nan
>>>
</code></pre>
| 1 | 2016-07-25T22:53:36Z | [
"python",
"nonetype"
] |
Scrape webpage no ajax calls made but data not in DOM | 38,578,497 | <p>I'm doing an exercise in scraping data from a website. For example, <a href="https://zocdoc.com" rel="nofollow">ZocDoc</a>. I'm trying to get a list of all insurance providers and their plans (You can access this information on their homepage in the insurance dropdown). </p>
<p>It appears that all data is loaded via a <code><script></code> tag when the page loads. Looking in the network tab, there don't appear to be any network calls that return JSON including the plan names. I am able to get all the insurance plans with the following (it's messy, but it works).</p>
<pre><code>import requests
import json
from bs4 import BeautifulSoup as bs

resp = requests.get('https://zocdoc.com')
soup = bs(resp.text, 'html.parser')
long_str = str(soup.findAll('script')[17].string)
pop = long_str.split("Popular Insurances")[1]
json.loads(pop[pop.find("[["):pop.find("]]")+2])
</code></pre>
<p>In the HTML returned there are no insurance plans. I also don't see any requests in the network tab where the plans are sent back (there are a few backbone files). One URL looks encoded, but I'm not sure it's the right one; I may just be overthinking this <a href="https://www.zocdoc.com/_Incapsula_Resource?SWJIYLWA=2977d8d74f63d7f8fedbea018b7a1d05&ns=12" rel="nofollow">url</a>.</p>
<p>I've also tried waiting for all the JS to load so the data is in the DOM using <a href="http://dryscrape.readthedocs.io/en/latest/index.html" rel="nofollow">dryscrape</a> but still no plans in the HTML. </p>
<p>Is there a way to gather this information without having a crawler click on every insurance provider to get their plans? </p>
| 1 | 2016-07-25T22:42:48Z | 38,578,740 | <p>Yes, the list of insurances is kept deep inside the <code>script</code> tag:</p>
<pre><code>insuranceModel = new gs.CarrierGroupedSelect(gs.CarrierGroupedSelect.prototype.parse({
...
primary_options: {
name: "Popular Insurances",
group: "primary",
options: [[300,"Aetna",2,0,1,0],[304,"Blue Cross Blue Shield",2,1,1,0],[307,"Cigna",2,0,1,0],[369,"Coventry Health Care",2,0,1,0],[358,"Medicaid",2,0,1,0],[322,"UniCare",2,0,1,0],[323,"UnitedHealthcare",2,0,1,0]]
},
secondary_options: {
name: "All Insurances",
group: "secondary",
options: [[440,"1199SEIU",2,0,1,0],[876,"20/20 Eyecare Plan",2,0,1,1],...]
}
...
</code></pre>
<p>You can, of course, dive into the wonderful world of JavaScript code parsing in Python, either with regular expressions or with JavaScript parsers like <code>slimit</code> (<a href="http://stackoverflow.com/a/25112034/771848">example here</a>), but this might result in less hair on your head. Plus, the resulting solution would be quite fragile.</p>
<p>In this particular case, I think <a href="http://selenium-python.readthedocs.io/" rel="nofollow"><code>selenium</code></a> is a <em>much better fit</em>. Complete working example - getting the popular insurances:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.PhantomJS()
driver.maximize_window()
wait = WebDriverWait(driver, 10)
insurance_dropdown = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "I'll choose my insurance later")))
insurance_dropdown.click()
for option in driver.find_elements_by_css_selector("[data-group=primary] + .ui-gs-option-set > .ui-gs-option"):
print(option.get_attribute("data-value"))
driver.close()
</code></pre>
<p>Prints:</p>
<pre><code>Aetna
Blue Cross Blue Shield
Cigna
Coventry Health Care
Medicaid
UniCare
UnitedHealthcare
</code></pre>
<p>Note that in this case the headless <code>PhantomJS</code> browser is used, but you can use Chrome or Firefox or other browsers that selenium has an available driver for.</p>
| 2 | 2016-07-25T23:10:10Z | [
"javascript",
"python",
"web-scraping"
] |
Tensorflow: Convert Tensor to numpy array then pass into a feed_dict | 38,578,505 | <p>I'm trying to build a softmax regression model for CIFAR classification. At first, when I tried to pass my images and labels into the feed dictionary, I got an error saying that feed dictionaries do not accept Tensors. I then converted them into numpy arrays using .eval(), but the program hangs at the .eval() line and does not continue any further. How can I pass this data into the feed_dict? </p>
<h1>CIFARIMAGELOADING.PY</h1>
<pre><code>import tensorflow as tf
import os
import tensorflow.models.image.cifar10 as cf
IMAGE_SIZE = 24
BATCH_SIZE = 128
def loadimagesandlabels(size):
# Load the images from the CIFAR data directory
FLAGS = tf.app.flags.FLAGS
data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)]
filename_queue = tf.train.string_input_producer(filenames)
read_input = cf.cifar10_input.read_cifar10(filename_queue)
# Reshape and crop the image
height = IMAGE_SIZE
width = IMAGE_SIZE
reshaped_image = tf.cast(read_input.uint8image, tf.float32)
cropped_image = tf.random_crop(reshaped_image, [height, width, 3])
# Generate a batch of images and labels by building up a queue of examples
print('Filling queue with CIFAR images')
num_preprocess_threads = 16
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(BATCH_SIZE*min_fraction_of_examples_in_queue)
images, label_batch = tf.train.batch([cropped_image,read_input.label],batch_size=BATCH_SIZE, num_threads=num_preprocess_threads, capacity=min_queue_examples+3*BATCH_SIZE)
print(images)
print(label_batch)
return images, tf.reshape(label_batch, [BATCH_SIZE])
</code></pre>
<h1>CIFAR.PY</h1>
<pre><code>#Set up placeholder vectors for image and labels
x = tf.placeholder(tf.float32, shape = [None, 1728])
y_ = tf.placeholder(tf.float32, shape = [None,10])
W = tf.Variable(tf.zeros([1728,10]))
b = tf.Variable(tf.zeros([10]))
#Implement regression model. Multiply input images x by weight matrix W, add the bias b
#Compute the softmax probabilities that are assigned to each class
y = tf.nn.softmax(tf.matmul(x,W) + b)
#Define cross entropy
#tf.reduce sum sums across all classes and tf.reduce_mean takes the average over these sums
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_*tf.log(y), reduction_indices = [1]))
#Train the model
#Each training iteration we load 128 training examples. We then run the train_step operation
#using feed_dict to replace the placeholder tensors x and y_ with the training examples
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#Open up a Session
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000) :
images, labels = CIFARImageLoading.loadimagesandlabels(size=BATCH_SIZE)
unrolled_images = tf.reshape(images, (1728, BATCH_SIZE))
#convert labels to their one_hot representations
# should produce [[1,0,0,...],[0,1,0...],...]
one_hot_labels = tf.one_hot(indices= labels, depth=NUM_CLASSES, on_value=1.0, off_value= 0.0, axis=-1)
print(unrolled_images)
print(one_hot_labels)
images_numpy, labels_numpy = unrolled_images.eval(session=sess), one_hot_labels.eval(session=sess)
sess.run(train_step, feed_dict = {x: images_numpy, y_:labels_numpy})
#Evaluate the model
#.equal returns a tensor of booleans, we want to cast these as floats and then take their mean
#to get percent correctness (accuracy)
print("evaluating")
test_images, test_labels = CIFARImageLoading.loadimagesandlabels(TEST_SIZE)
test_images_unrolled = tf.reshape(test_images, (1728, TEST_SIZE))
test_images_one_hot = tf.one_hot(indices= test_labels, depth=NUM_CLASSES, on_value=1.0, off_value= 0.0, axis=-1)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict = {x: unrolled_images.eval(), y_ : test_images_one_hot.eval()}))
</code></pre>
| 0 | 2016-07-25T22:43:51Z | 38,578,658 | <p>There are a couple of things that you are not understanding really well. Throughout your graph you will work with Tensors. You define Tensors either by using tf.placeholder and feeding them in session.run() with a feed_dict, or with tf.Variable and initializing it with session.run(tf.initialize_all_variables()). You must feed your input this way, and it should be numpy arrays with the same shape as you declared in the placeholders. Here's a simple example:</p>
<pre><code>images = tf.placeholder(tf.float32, [1728, BATCH_SIZE])
labels = tf.placeholder(tf.float32, [BATCH_SIZE])

'''
Build your network here so you have the variable: Output
'''

images_feed, labels_feed = CIFARImageLoading.loadimagesandlabels(size=BATCH_SIZE)
# here you can see your output
print sess.run(Output, feed_dict={images: images_feed, labels: labels_feed})
</code></pre>
<p>You do not feed tf.functions with numpy arrays, you always feed them with Tensors. And the feed_dict is always fed with numpy arrays. The thing is: you will never have to convert tensors to numpy arrays for the input, that does not make sense. Your input must be numpy arrays: if it's a list, you can use <code>np.asarray(list)</code>; if it's a tensor, you are doing this wrong.</p>
<p>I do not know what CIFARImageLoading.loadimagesandlabels returns to you, but I imagine it's not a Tensor; it's probably a numpy array already, so just get rid of this <code>.eval()</code>.</p>
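<p>For instance, a small framework-free sketch of preparing such a feed value (assuming NumPy is available; the sample numbers are made up):</p>

```python
import numpy as np

batch = [[0.1, 0.2], [0.3, 0.4]]            # a plain Python list of rows
feed = np.asarray(batch, dtype=np.float32)  # an ndarray like this belongs in feed_dict
print(feed.shape)  # (2, 2)
```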
| 0 | 2016-07-25T23:00:11Z | [
"python",
"neural-network",
"tensorflow",
"mnist",
"softmax"
] |
list where multiple values have to be found and replaced with two new values | 38,578,574 | <p>The basis is:<br>
I want to iterate a certain number of times; each time I take a list, find a certain value, replace that value with two new values, and make that the new list that goes into the next iteration. </p>
<p>Here is what I have:</p>
<pre><code>list1 = ['ad', 'sbe', 'k3', 'lm0']
list2 = ['sb', 'e', 'lm', '0']
list3 = [1, 3]
</code></pre>
<p><code>list1</code> is the list I want to change<br>
<code>list2</code> are the elements I want to replace<br>
<code>list3</code> are the locations in the original list that values I want to replace are</p>
<p>The output would ideally look like this:</p>
<pre><code>list1 = ['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
<p>This is what I had, but it ended up getting too complicated and filled with errors; I figure there has to be a simpler way to do this. I have also looked at itertools, but couldn't get that to do what I wanted.</p>
<pre><code>list4 = list1
count = 0
for i in range(int(len(list2)/2)):
del list4[int(list3[i]) + count]
for j in range(2):
if j == 1:
j = j + 1
list4.insert(int(list3[i]) + count, list2[(i + j)])
else:
list4.insert(int(list3[i]) + count, list2[(i + j)])
</code></pre>
<p>Any help is much appreciated.</p>
| 2 | 2016-07-25T22:50:46Z | 38,578,738 | <p>You could start from the end and do extends, or the following will work: add the next two elements from <em>list2</em> if the current index is in a <em>set</em> of indexes we create from <em>list3</em>, or just keep the original element; then <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><em>chain</em></a> all the elements into one list:</p>
<pre><code> In [1]: from itertools import chain
...: list1 = ['ad', 'sbe', 'k3', 'lm0']
...: list2 = ['sb', 'e', 'lm', '0']
...: list3 = [1, 3]
...: it = iter(list2) # create iterator so we can pull pairs
...: inds = set(list3) # create set from indexes list
   ...: list1[:] = chain(*([next(it), next(it)] if ind in inds else [ele] for ind, ele in enumerate(list1)))
...:
In [2]: list1
Out[2]: ['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
<p>Or create a simple generator function:</p>
<pre><code>def inserts(l1, l2, indexes):
it, index_set = iter(l2), set(indexes)
for ind, ele in enumerate(l1):
if ind in index_set:
yield next(it)
yield next(it)
else:
yield ele
</code></pre>
<p>Same output:</p>
<pre><code>In [5]: list1 = ['ad', 'sbe', 'k3', 'lm0']
...: list2 = ['sb', 'e', 'lm', '0']
...: list3 = [1, 3]
...:
In [6]: list1[:] = inserts(list1, list2, list3)
...:
...: print(list1)
...:
['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
| 1 | 2016-07-25T23:10:00Z | [
"python",
"list",
"replace"
] |
list where multiple values have to be found and replaced with two new values | 38,578,574 | <p>The basis is:<br>
I want to iterate a certain number of times; each time I take a list, find a certain value, replace that value with two new values, and make that the new list that goes into the next iteration. </p>
<p>Here is what I have:</p>
<pre><code>list1 = ['ad', 'sbe', 'k3', 'lm0']
list2 = ['sb', 'e', 'lm', '0']
list3 = [1, 3]
</code></pre>
<p><code>list1</code> is the list I want to change<br>
<code>list2</code> are the elements I want to replace<br>
<code>list3</code> are the locations in the original list that values I want to replace are</p>
<p>The output would ideally look like this:</p>
<pre><code>list1 = ['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
<p>This is what I had, but it ended up getting too complicated and filled with errors; I figure there has to be a simpler way to do this. I have also looked at itertools, but couldn't get that to do what I wanted.</p>
<pre><code>list4 = list1
count = 0
for i in range(int(len(list2)/2)):
del list4[int(list3[i]) + count]
for j in range(2):
if j == 1:
j = j + 1
list4.insert(int(list3[i]) + count, list2[(i + j)])
else:
list4.insert(int(list3[i]) + count, list2[(i + j)])
</code></pre>
<p>Any help is much appreciated.</p>
| 2 | 2016-07-25T22:50:46Z | 38,578,760 | <p>You can use a list comprehension with a conditional expression; the slice into <code>list2</code> is computed from the position of <code>i</code> in <code>list3</code>, so each replaced element pulls in its own pair of replacements:</p>
<pre><code>sum([list2[2*list3.index(i):2*list3.index(i)+2] if i in list3 else [v] for i, v in enumerate(list1)], [])
</code></pre>
<p>You can also loop through and check, an imperative version:</p>
<pre><code>res = []
for i, v in enumerate(list1):
    if i in list3:
        k = list3.index(i)      # which replacement pair to use
        res += list2[2*k:2*k + 2]
    else:
        res.append(v)
</code></pre>
| -1 | 2016-07-25T23:12:00Z | [
"python",
"list",
"replace"
] |
list where multiple values have to be found and replaced with two new values | 38,578,574 | <p>The basis is:<br>
I want to iterate a certain number of times; each time I take a list, find a certain value, replace that value with two new values, and make that the new list that goes into the next iteration. </p>
<p>Here is what I have:</p>
<pre><code>list1 = ['ad', 'sbe', 'k3', 'lm0']
list2 = ['sb', 'e', 'lm', '0']
list3 = [1, 3]
</code></pre>
<p><code>list1</code> is the list I want to change<br>
<code>list2</code> are the elements I want to replace<br>
<code>list3</code> are the locations in the original list that values I want to replace are</p>
<p>The output would ideally look like this:</p>
<pre><code>list1 = ['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
<p>This is what I had, but it ended up getting too complicated and filled with errors; I figure there has to be a simpler way to do this. I have also looked at itertools, but couldn't get that to do what I wanted.</p>
<pre><code>list4 = list1
count = 0
for i in range(int(len(list2)/2)):
del list4[int(list3[i]) + count]
for j in range(2):
if j == 1:
j = j + 1
list4.insert(int(list3[i]) + count, list2[(i + j)])
else:
list4.insert(int(list3[i]) + count, list2[(i + j)])
</code></pre>
<p>Any help is much appreciated.</p>
| 1 | 2016-07-25T22:50:46Z | 38,578,951 | <p>The way I've approached this problem is by first creating your list by zipping together the lists, and then, with some simple arithmetic, deciding at which index to delete elements from. This <em>does not</em>
account for the possibility of <strong>duplicate</strong> values in the indices list (<em>this input can be sanitized with a <code>set</code> or edge case error handling</em>) <br><strong>Possible solution:</strong></p>
<pre><code>def f(list1, list2, list3):
result=[]
for item_one, item_two in zip(list1,list2):
result.extend((item_one, item_two))
len_factor = float(len(result))/len(list1)
    len_factor = int(len_factor) if len_factor.is_integer() else int(len_factor) + 1  # keep an int for indexing
for index, item in enumerate(set(list3)):
del result[item * len_factor - index]
return result
</code></pre>
<p>Calling the method <code>f()</code>:</p>
<pre><code>list1 = ['ad', 'sbe', 'k3', 'lm0']
list2 = ['sb', 'e', 'lm', '0']
list3 = [1, 3]
print f(list1, list2, list3)
>>> ['ad', 'sb', 'e', 'k3', 'lm', '0']
</code></pre>
| 1 | 2016-07-25T23:35:53Z | [
"python",
"list",
"replace"
] |
Python regular expression with or and re.search | 38,578,608 | <p>Say I have two types of strings:</p>
<pre><code>str1 = 'NUM-140 A Thing: Foobar Analysis NUM-140'
str2 = 'NUM-140 Foobar Analysis NUM-140'
</code></pre>
<p>For both of these, I want to match <code>'Foobar'</code> (which could be anything). I have tried the following:</p>
<pre><code>m = re.compile('((?<=Thing: ).+(?= Analysis))|((?<=\d ).+(?= Analysis))')
ind1 = m.search(str1).span()
match1 = str1[ind1[0]:ind1[1]]
ind2 = m.search(str2).span()
match2 = str2[ind2[0]:ind2[1]]
</code></pre>
<p>However, match1 comes out to <code>'A Thing: Foobar'</code>, which seems to be the match for the second pattern, not the first. Applied individually, (pattern 1 to <code>str1</code> and pattern 2 to <code>str2</code>, without the <code>|</code>), both patterns match <code>'Foobar'</code>. I expected this, then, to stop when matched by the first pattern. This doesn't seem to be the case. What am I missing?</p>
| 2 | 2016-07-25T22:54:56Z | 38,578,739 | <p>If you use named groups, eg <code>(?P<name>...)</code> you'll be able to debug easier. But note the docs for span. </p>
<p><a href="https://docs.python.org/2/library/re.html#re.MatchObject.span" rel="nofollow">https://docs.python.org/2/library/re.html#re.MatchObject.span</a></p>
<blockquote>
<p>span([group]) For MatchObject m, return the 2-tuple (m.start(group),
m.end(group)). Note that if group did not contribute to the match,
this is (-1, -1). group defaults to zero, the entire match.</p>
</blockquote>
<p>You're not passing in the group number.</p>
<p>Why are you using span anyway? Just use <code>m.search(str1).groups()</code> or similar</p>
| 0 | 2016-07-25T23:10:00Z | [
"python",
"regex"
] |
Python regular expression with or and re.search | 38,578,608 | <p>Say I have two types of strings:</p>
<pre><code>str1 = 'NUM-140 A Thing: Foobar Analysis NUM-140'
str2 = 'NUM-140 Foobar Analysis NUM-140'
</code></pre>
<p>For both of these, I want to match <code>'Foobar'</code> (which could be anything). I have tried the following:</p>
<pre><code>m = re.compile('((?<=Thing: ).+(?= Analysis))|((?<=\d ).+(?= Analysis))')
ind1 = m.search(str1).span()
match1 = str1[ind1[0]:ind1[1]]
ind2 = m.search(str2).span()
match2 = str2[ind2[0]:ind2[1]]
</code></pre>
<p>However, match1 comes out to <code>'A Thing: Foobar'</code>, which seems to be the match for the second pattern, not the first. Applied individually, (pattern 1 to <code>str1</code> and pattern 2 to <code>str2</code>, without the <code>|</code>), both patterns match <code>'Foobar'</code>. I expected this, then, to stop when matched by the first pattern. This doesn't seem to be the case. What am I missing?</p>
| 2 | 2016-07-25T22:54:56Z | 38,580,212 | <p>According to the documentation, </p>
<blockquote>
<p>As the target string is scanned, REs separated by '|' are tried from left to right. When one pattern completely matches, that branch is accepted. This means that once A matches, B will not be tested further, even if it would produce a longer overall match. In other words, the '|' operator is never greedy.</p>
</blockquote>
<p>But the behavior seems to be different:</p>
<pre><code>import re
THING = r'(?<=Thing: )(?P<THING>.+)(?= Analysis)'
NUM = r'(?<=\d )(?P<NUM>.+)(?= Analysis)'
MIXED = THING + '|' + NUM
str1 = 'NUM-140 A Thing: Foobar Analysis NUM-140'
str2 = 'NUM-140 Foobar Analysis NUM-140'
print(re.match(THING, str1))
# <... match='Foobar'>
print(re.match(NUM, str1))
# <... match='A Thing: Foobar'>
print(re.match(MIXED, str1))
# <... match='A Thing: Foobar'>
</code></pre>
<p>We might expect that because THING matches 'Foobar', the MIXED pattern would get that 'Foobar' and quit searching. But the alternatives are only tried left to right <em>at each starting position</em>: as the engine scans the string, the NUM alternative already succeeds at an earlier position (right after 'NUM-140 '), so that branch wins before THING is ever attempted at the later position of 'Foobar'.</p>
<p>To give the more specific pattern priority, the solution can rely on Python's <code>or</code> short-circuiting:</p>
<pre><code>print(re.search(THING, str1) or re.search(NUM, str1))
# <_sre.SRE_Match object; span=(17, 23), match='Foobar'>
print(re.search(THING, str2) or re.search(NUM, str2))
# <_sre.SRE_Match object; span=(8, 14), match='Foobar'>
</code></pre>
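<p>Alternatively, if the target is always the single word immediately before ' Analysis', a one-alternative pattern sidesteps the position issue entirely (a sketch that assumes the word never contains whitespace):</p>

```python
import re

str1 = 'NUM-140 A Thing: Foobar Analysis NUM-140'
str2 = 'NUM-140 Foobar Analysis NUM-140'

# Match the whitespace-free token right before ' Analysis'.
pat = re.compile(r'(\S+)(?= Analysis)')
print(pat.search(str1).group(1))  # Foobar
print(pat.search(str2).group(1))  # Foobar
```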
| 1 | 2016-07-26T02:35:47Z | [
"python",
"regex"
] |
MultiValueDictKeyError - Passing GET parameter | 38,578,637 | <p>I'm using django allauth.</p>
<p>All my users should have access to a URL that is generated dynamically. Ex: www.example.com/uuid/</p>
<p>From this page they should be able to login with Soundcloud and should be redirected to this page after connecting.</p>
<p>I am using the following to get the previous link; the URL looks correct in the HTML, but <code>next</code> comes through empty on the Django side.</p>
<pre><code>#Html
<a href="/accounts/soundcloud/login?process=login?next={{request.path}}" name="next" value="next" class="waves-effect waves-light btn-large" style="margin-bottom: 10px;">Download</a>
#adapter.py
class AccountAdapter(DefaultAccountAdapter):
def get_login_redirect_url(self, request):
#assert request.user.is_authenticated()
#pass
return request.GET['next']
</code></pre>
| 1 | 2016-07-25T22:57:54Z | 38,579,175 | <p>You have a typo in your url - it should be:</p>
<pre><code>href="/accounts/soundcloud/login?process=login&next={{request.path}}"
</code></pre>
<p>Notice the <code>&</code> instead of the second <code>?</code>.</p>
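<p>When building such a query string in Python (for example in a view or a test), <code>urllib.parse.urlencode</code> takes care of the <code>&</code> separators and the percent-escaping for you; a sketch with a made-up path:</p>

```python
from urllib.parse import urlencode

# Made-up path; urlencode inserts the & separators and escapes the values.
params = {'process': 'login', 'next': '/uuid-1234/'}
url = '/accounts/soundcloud/login?' + urlencode(params)
print(url)  # /accounts/soundcloud/login?process=login&next=%2Fuuid-1234%2F
```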
| 1 | 2016-07-26T00:04:24Z | [
"python",
"html",
"django"
] |
Memory error when using Pandas built-in divide, but looping works? | 38,578,662 | <p>I have two DataFrames, each having 100,000 rows. I am trying to do the following:</p>
<pre><code>new = dataframeA['mykey']/dataframeB['mykey']
</code></pre>
<p>and I get an 'Out of Memory' error. I get the same error if I try:</p>
<pre><code>new = dataframeA['mykey'].divide(dataframeB['mykey'])
</code></pre>
<p>But if I loop through each element, like this, it works:</p>
<pre><code>result = []
for idx in range(0,dataframeA.shape[0]):
result.append(dataframeA.ix[idx,'mykey']/dataframeB.ix[idx,'mykey'])
</code></pre>
<p>What's going on here? I'd think that the built-in Pandas functions would be much more memory efficient.</p>
| 0 | 2016-07-25T23:00:32Z | 38,580,681 | <p>@ayhan got it right off the bat.</p>
<p>My two dataframes were not using the same indices. Resetting them worked.</p>
| 0 | 2016-07-26T03:44:06Z | [
"python",
"performance",
"pandas",
"memory"
] |
Error installing flask-mysql python module | 38,578,685 | <p>I'm trying to install the flask-mysql module and am running into an error. It looks like a problem with vcvarsall.bat, but I'm not really sure what that hints at.</p>
<p>Any ideas from someone more experienced than myself?</p>
<pre><code>C:\eb-virt\bucketlist>pip install flask-mysql
Collecting flask-mysql
Using cached Flask_MySQL-1.3-py2.py3-none-any.whl
Collecting MySQL-python (from flask-mysql)
Using cached MySQL-python-1.2.5.zip
Requirement already satisfied (use --upgrade to upgrade): Flask in c:\python27\lib\site-packages (from flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): itsdangerous>=0.21 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): click>=2.0 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in c:\python27\lib\site-packages (from Jinja2>=2.4->Flask->flask-mysql)
Installing collected packages: MySQL-python, flask-mysql
Running setup.py install for MySQL-python ... error
Complete output from command c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\tonype~1\\appdata\\local\\temp\\pip-build-3xn7it\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\tonype~1\appdata\local\temp\pip-vtdlrx-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
copying _mysql_exceptions.py -> build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\converters.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\connections.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\release.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\times.py -> build\lib.win-amd64-2.7\MySQLdb
creating build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\CR.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\ER.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-2.7\MySQLdb\constants
running build_ext
building '_mysql' extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\tonype~1\\appdata\\local\\temp\\pip-build-3xn7it\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\tonype~1\appdata\local\temp\pip-vtdlrx-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\tonype~1\appdata\local\temp\pip-build-3xn7it\MySQL-python\
</code></pre>
| 0 | 2016-07-25T23:03:38Z | 38,587,824 | <p>You could try using these <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python" rel="nofollow">binaries</a> for <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">windows</a> distributions. Flask-mysql uses mysql-python which has <a href="http://stackoverflow.com/questions/21440230/install-mysql-python-windows">issues</a> when trying to install on windows. See <a href="http://stackoverflow.com/a/8047236/4675937">this</a>.</p>
| 1 | 2016-07-26T10:49:21Z | [
"python",
"flask",
"flask-mysql"
] |
Error installing flask-mysql python module | 38,578,685 | <p>I'm trying to install the flask-mysql module and am running into an error. It looks like a problem with vcvarsall.bat, but I'm not really sure what that hints at.</p>
<p>Any ideas from someone more experienced than myself?</p>
<pre><code>C:\eb-virt\bucketlist>pip install flask-mysql
Collecting flask-mysql
Using cached Flask_MySQL-1.3-py2.py3-none-any.whl
Collecting MySQL-python (from flask-mysql)
Using cached MySQL-python-1.2.5.zip
Requirement already satisfied (use --upgrade to upgrade): Flask in c:\python27\lib\site-packages (from flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): itsdangerous>=0.21 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): click>=2.0 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in c:\python27\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in c:\python27\lib\site-packages (from Jinja2>=2.4->Flask->flask-mysql)
Installing collected packages: MySQL-python, flask-mysql
Running setup.py install for MySQL-python ... error
Complete output from command c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\tonype~1\\appdata\\local\\temp\\pip-build-3xn7it\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\tonype~1\appdata\local\temp\pip-vtdlrx-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
copying _mysql_exceptions.py -> build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\converters.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\connections.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\release.py -> build\lib.win-amd64-2.7\MySQLdb
copying MySQLdb\times.py -> build\lib.win-amd64-2.7\MySQLdb
creating build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\CR.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\ER.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-2.7\MySQLdb\constants
copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-2.7\MySQLdb\constants
running build_ext
building '_mysql' extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\tonype~1\\appdata\\local\\temp\\pip-build-3xn7it\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\tonype~1\appdata\local\temp\pip-vtdlrx-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\tonype~1\appdata\local\temp\pip-build-3xn7it\MySQL-python\
</code></pre>
| 0 | 2016-07-25T23:03:38Z | 38,588,111 | <p>Often this type of error occurred when you have installed python <strong>32bit</strong> and mysql <strong>64bit</strong> or vice versa. Try out same installations. I have experienced this error with <strong>postgresSql</strong> and python on this command <strong>pip install psycopg2</strong>
May be this will solve you issue.</p>
| 0 | 2016-07-26T11:03:00Z | [
"python",
"flask",
"flask-mysql"
] |
Django not creating tables " Table django_session does not exist" | 38,578,746 | <p>Whenever I run <code>python manage.py migrate</code> I cannot get the required tables. The only tables that I get are <code>django_content_type</code> and <code>django_migrations</code>. Because of this I cannot log into the admin page or create a super user. I am using <code>MySQL-connector-python-rf</code>, python 3.4 and Django 1.9.8 I have followed instructions of deleting the tables to creating a new database and the problem still persist. </p>
<p>This is my database and apps setup in <code>settings.py</code></p>
<pre><code># Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'usermie',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
DATABASES = {
'default': {
'ENGINE': 'mysql.connector.django',
'NAME': 'testerdb',
'USER': 'root',
'PASSWORD': '',
}
}
</code></pre>
<p>I was using django before and working on a project on this same computer, but now I do not know what has happened and why I am not getting the tables.</p>
<p>Below is what I get when I run <code>python manage.py migrate</code></p>
<pre><code>(mie) C:\Users\dane_\DjangoProjects\pagemie>python manage.py makemigrations
Migrations for 'usermie':
0001_initial.py:
- Create model Usermie
(mie) C:\Users\dane_\DjangoProjects\pagemie>python manage.py migrate
Operations to perform:
Apply all migrations: contenttypes, auth, sessions, admin, usermie
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial...Traceback (most recent call last):
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 177, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\cursor.py", line 515, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 488, in cmd_query
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 395, in _handle_result
mysql.connector.errors.ProgrammingError: 1050 (42S01): Table 'django_content_type' already exists
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\operations\models.py", line 59, in database_forwards
schema_editor.create_model(model)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\base\schema.py", line 284, in create_model
self.execute(sql, params or None)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\utils.py", line 62, in execute
return self.cursor.execute(sql)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 227, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 180, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 177, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\cursor.py", line 515, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 488, in cmd_query
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 395, in _handle_result
django.db.utils.ProgrammingError: Table 'django_content_type' already exists
</code></pre>
| 0 | 2016-07-25T23:10:50Z | 38,581,000 | <p>MySQL does not accept timezone info in a datetime value such as <code>'2016-07-25 22:45:16.507552+00:00'</code>; the problem is the <code>+00:00</code> part. By default, Django should use naive datetime objects, as you can see here: <a href="https://docs.djangoproject.com/en/1.9/topics/i18n/timezones/" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/i18n/timezones/</a> so normally this should not be a problem, but it seems that your data contains timezones.
Maybe you could check that <code>USE_TZ</code> is <code>False</code> in your settings.py and then try to run the migration again.</p>
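To make the aware-vs-naive distinction concrete, here is a small Python 3 sketch (the timestamp is copied from the datetime literal in the answer) showing the two string forms:

```python
from datetime import datetime, timezone

# An "aware" datetime carries a UTC offset, which serializes with the
# +00:00 suffix this MySQL setup rejects.
aware = datetime(2016, 7, 25, 22, 45, 16, 507552, tzinfo=timezone.utc)
print(str(aware))   # 2016-07-25 22:45:16.507552+00:00

# A "naive" datetime (what Django uses when USE_TZ is False) has no offset.
naive = aware.replace(tzinfo=None)
print(str(naive))   # 2016-07-25 22:45:16.507552
```

With `USE_TZ = False`, Django stores values in the naive form on the right, which MySQL accepts.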
| 1 | 2016-07-26T04:26:22Z | [
"python",
"mysql",
"django"
] |
Django not creating tables " Table django_session does not exist" | 38,578,746 | <p>Whenever I run <code>python manage.py migrate</code> I cannot get the required tables. The only tables that I get are <code>django_content_type</code> and <code>django_migrations</code>. Because of this I cannot log into the admin page or create a super user. I am using <code>MySQL-connector-python-rf</code>, python 3.4 and Django 1.9.8 I have followed instructions of deleting the tables to creating a new database and the problem still persist. </p>
<p>This is my database and apps setup in <code>settings.py</code></p>
<pre><code># Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'usermie',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
DATABASES = {
'default': {
'ENGINE': 'mysql.connector.django',
'NAME': 'testerdb',
'USER': 'root',
'PASSWORD': '',
}
}
</code></pre>
<p>I was using django before and working on a project on this same computer, but now I do not know what has happened and why I am not getting the tables.</p>
<p>Below is what I get when I run <code>python manage.py migrate</code></p>
<pre><code>(mie) C:\Users\dane_\DjangoProjects\pagemie>python manage.py makemigrations
Migrations for 'usermie':
0001_initial.py:
- Create model Usermie
(mie) C:\Users\dane_\DjangoProjects\pagemie>python manage.py migrate
Operations to perform:
Apply all migrations: contenttypes, auth, sessions, admin, usermie
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial...Traceback (most recent call last):
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 177, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\cursor.py", line 515, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 488, in cmd_query
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 395, in _handle_result
mysql.connector.errors.ProgrammingError: 1050 (42S01): Table 'django_content_type' already exists
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\migrations\operations\models.py", line 59, in database_forwards
schema_editor.create_model(model)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\base\schema.py", line 284, in create_model
self.execute(sql, params or None)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\backends\utils.py", line 62, in execute
return self.cursor.execute(sql)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 227, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 180, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\base.py", line 177, in _execute_wrapper
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\cursor.py", line 515, in execute
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 488, in cmd_query
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\connection.py", line 395, in _handle_result
django.db.utils.ProgrammingError: Table 'django_content_type' already exists
</code></pre>
| 0 | 2016-07-25T23:10:50Z | 38,581,460 | <p>All the tables I should have are here now. This was what I did. I deleted all tables created in the database. I also removed <code>usermie</code> from <code>INSTALLED_APPS</code>. After that I ran <code>python manage.py migrate</code>. I am wondering if changing <code>USE_TZ</code> to <code>False</code> did anything as I have deleted the tables multiple times in the database before with no success. I also removed <code>MySQL-connector-python-rf</code>.</p>
<p>I still get these messages but all seems fine now:</p>
<pre><code>(mie) C:\Users\dane_\DjangoProjects\pagemie>python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying sessions.0001_initial... OK
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\commands\migrate.py", line 204, in handle
emit_post_migrate_signal(self.verbosity, self.interactive, connection.alias)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\core\management\sql.py", line 50, in emit_post_migrate_signal
using=db)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\dispatch\dispatcher.py", line 192, in send
response = receiver(signal=self, sender=sender, **named)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\contrib\auth\management\__init__.py", line 126, in create_permissions
Permission.objects.using(using).bulk_create(perms)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\query.py", line 450, in bulk_create
self._batched_insert(objs_without_pk, fields, batch_size)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\query.py", line 1056, in _batched_insert
using=self.db)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\query.py", line 1039, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\sql\compiler.py", line 1059, in execute_sql
for sql, params in self.as_sql():
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\django\db\models\sql\compiler.py", line 1047, in as_sql
result.append(self.connection.ops.bulk_insert_sql(fields, placeholder_rows))
File "C:\Users\dane_\DjangoVirtualEnvs\mie\lib\site-packages\mysql_connector_python_rf-2.1.3-py3.4-win-amd64.egg\mysql\connector\django\operations.py", line 223, in bulk_insert_sql
TypeError: can't multiply sequence by non-int of type 'tuple'
</code></pre>
| -1 | 2016-07-26T05:12:22Z | [
"python",
"mysql",
"django"
] |
How solve import error in Python when trying to import Numpy | 38,578,754 | <p>This is the error I get when trying to import numpy on opening python (2.7.8):</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
</code></pre>
<p>This is the path of my python binary <code>/usr/local/bin/python</code></p>
<p>This is the path of pip <code>/usr/local/bin/pip</code></p>
<p>Also, when I put in <code>pip freeze</code> I found the numpy package <code>numpy==1.8.0rc1</code></p>
<p>I have looked at other relevant questions, but I'm not able to diagnose the cause. I'm guessing it might be some problem in PATHS. Where do I start?</p>
| 0 | 2016-07-25T23:11:36Z | 38,579,188 | <p>As Akshat pointed out in the comments above, I had multiple versions of Python installed. This could have been the effect of using homebrew and/or macports in the past. I followed the steps detailed in <a href="http://stackoverflow.com/questions/13654756/too-many-pythons-on-my-mac-os-x-mountain-lion">Too many pythons on my Mac OS X Mountain Lion</a>
and did a fresh install of Python 2.7.12 I was then able to reinstall pip and the packages subsequently.</p>
| 0 | 2016-07-26T00:06:09Z | [
"python",
"python-2.7",
"numpy"
] |
How can I use the window while tkinter is working | 38,578,773 | <p>A function running under tkinter freezes the GUI.
I want to keep using the Tkinter window while that process runs. While the progress bar
is running I want to use the tkinter window, but I cannot,
because it freezes tkinter. <strong>How can I use the root window while time.sleep(10) or another function is working?</strong></p>
<pre><code>import tkinter.ttk as ttk
import tkinter as tk
import time
progress = 0
def loading(window=None):
mpb = ttk.Progressbar(window, orient="horizontal", length=200, mode="determinate")
mpb.place(y=0, x=0)
mpb["maximum"] = 100
mpb["value"] = progress
print(progress)
def incrase():
global progress
print(progress)
progress += 1
time.sleep(10) # for example, a function works here and tkinter freezes
loading() # i don't want tkinter freezes
root = tk.Tk()
loading(root)
ttk.Button(root, text='increase', command=incrase).place(x=0, y=25, width=90)
root.mainloop()
</code></pre>
<p>thanks for answers</p>
| 1 | 2016-07-25T23:13:52Z | 38,578,920 | <p>You should use <a href="http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.after-method" rel="nofollow"><code>after()</code></a> with which you can schedule the <code>loading()</code> function to be called after some period of time. </p>
<h1>Program</h1>
<p>Here is how you can use it in your program:</p>
<pre><code>import tkinter.ttk as ttk
import tkinter as tk
import time
progress = 0
def loading(window):
mpb = ttk.Progressbar(window, orient="horizontal", length=200, mode="determinate")
mpb.place(y=0, x=0)
mpb["maximum"] = 100
mpb["value"] = progress
print(progress)
def incrase():
global root
global progress
print(progress)
progress += 1
    root.after(10, loading, root)  # schedule loading(root) to run in 10 ms; writing loading(root) inline would call it immediately
root = tk.Tk()
loading(root)
ttk.Button(root, text='increase', command=incrase).place(x=0, y=25, width=90)
root.mainloop()
</code></pre>
<h1>Demo</h1>
<p>Screenshot of the above running program:</p>
<p><a href="http://i.stack.imgur.com/xqsvE.png" rel="nofollow"><img src="http://i.stack.imgur.com/xqsvE.png" alt="enter image description here"></a></p>
| 1 | 2016-07-25T23:32:41Z | [
"python",
"python-3.x",
"tkinter",
"progress-bar",
"freeze"
] |
How to get x and y coordinates from matplotlib scatter plot using plt.gca()? | 38,578,873 | <p>Is there a way to get the x and y coordinates of scatter plot points from a Matplotlib Axes object? For <code>plt.plot()</code>, there is an attribute called <code>data</code>, but the following code does not work:</p>
<pre><code>x = [1, 2, 6, 3, 11]
y = [2, 4, 10, 3, 2]
plt.scatter(x, y)
print(plt.gca().data)
plt.show()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-30-9346ca31279c> in <module>()
41 y = [2, 4, 10, 3, 2]
42 plt.scatter(x, y)
---> 43 print(plt.gca().data)
44 plt.show()
AttributeError: 'AxesSubplot' object has no attribute 'data'
</code></pre>
| 2 | 2016-07-25T23:26:39Z | 38,579,106 | <pre><code>import matplotlib.pylab as plt
x = [1, 2, 6, 3, 11]
y = [2, 4, 10, 3, 2]
plt.scatter(x, y)
ax = plt.gca()
cs = ax.collections[0]
cs.set_offset_position('data')
print(cs.get_offsets())
</code></pre>
<p>Output is</p>
<pre><code>[[ 1 2]
[ 2 4]
[ 6 10]
[ 3 3]
[11 2]]
</code></pre>
| 1 | 2016-07-25T23:55:40Z | [
"python",
"matplotlib",
"plot"
] |
Why does python and my web browser show different codes for the same link? | 38,578,875 | <p>Let's use the url <a href="https://www.google.cl/#q=stackoverflow" rel="nofollow">https://www.google.cl/#q=stackoverflow</a> as an example. Using Chrome Developer Tools on the first link given by the search we see this html code:</p>
<p><a href="http://i.stack.imgur.com/YzLub.png" rel="nofollow"><img src="http://i.stack.imgur.com/YzLub.png" alt="inspecting google search first result"></a></p>
<p>Now, if I run this code:</p>
<pre><code>from urllib.request import urlopen
from bs4 import BeautifulSoup
url = urlopen("https://www.google.cl/#q=stackoverflow")
soup = BeautifulSoup(url)
print(soup.prettify())
</code></pre>
<p>I wont find the same elements. In fact, I wont find any link from the results given by the google search. Same goes if I use the <code>requests</code> module. Why does this happen? Can I do something to get the same results as if I was requesting from a web browser?</p>
| 0 | 2016-07-25T23:26:50Z | 38,579,174 | <p>Since the html is generated dynamically, likely from a modern single page javascript framework like Angular or React (or even just plain JavaScript), you will need to actually drive a browser to the site using selenium or phantomjs before parsing the dom.</p>
<p>Here is some skeleton code.</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get("http://google.com")
html = driver.execute_script("return document.documentElement.innerHTML")
soup = BeautifulSoup(html, "html.parser")
</code></pre>
<p>Here is the selenium documentation for more info on running selenium, configurations, etc.:</p>
<p><a href="http://selenium-python.readthedocs.io/" rel="nofollow">http://selenium-python.readthedocs.io/</a></p>
<p>edit:
you will likely need to add a <code>wait</code> before grabbing the html, since it may take a second or so to load certain elements of the page. See below for a reference to the explicit-wait documentation for Python Selenium:</p>
<p><a href="http://selenium-python.readthedocs.io/waits.html" rel="nofollow">http://selenium-python.readthedocs.io/waits.html</a></p>
<p>Another source of complication is that certain parts of the page might be hidden until AFTER user interaction. In this case you will need to code your selenium script to interact with the page in certain ways before grabbing the html.</p>
| 2 | 2016-07-26T00:04:22Z | [
"python",
"html"
] |
comparison ex of list comprehension and lambda | 38,578,908 | <pre><code>Celsius = [66.5,45.2,33.5,55.5]
Fahrenheit = [((float(9)/5)*x + 32) for x in Celsius]
</code></pre>
<p>How would I write this in a lambda function? Ex: lambda x,y:x+y</p>
| 0 | 2016-07-25T23:31:20Z | 38,578,931 | <p>Do you mean this?</p>
<pre><code>Fahrenheit = list(map(lambda x: x * 9.0 / 5 + 32, Celsius))
</code></pre>
<p>In general, list comprehension (what your example does) can be converted to a combination of <code>map</code> and a <code>lambda</code> (or other function).</p>
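<p>To make that equivalence concrete, here is a small self-contained check (not part of the original answer; the values follow the question's Celsius list). Floating-point rounding can differ by an ulp between the two orderings, so the comparison is approximate:</p>

```python
celsius = [66.5, 45.2, 33.5, 55.5]

# The list comprehension from the question...
via_comprehension = [(float(9) / 5) * x + 32 for x in celsius]

# ...and the map+lambda form from this answer give the same results.
via_map = list(map(lambda x: x * 9.0 / 5 + 32, celsius))

close = all(abs(a - b) < 1e-9 for a, b in zip(via_comprehension, via_map))
print(close)  # True
```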
<p><strong>EDIT</strong></p>
<p>You could also use <code>lambda x: (float(9)/5)*x + 32</code>; I was just trying to simplify the expression. :-)</p>
| 0 | 2016-07-25T23:33:54Z | [
"python",
"list",
"lambda",
"list-comprehension"
] |
comparison ex of list comprehension and lambda | 38,578,908 | <pre><code>Celsius = [66.5,45.2,33.5,55.5]
Fahrenheit = [((float(9)/5)*x + 32) for x in Celsius]
</code></pre>
<p>How would I write this in a lambda function? Ex: lambda x,y:x+y</p>
 | 0 | 2016-07-25T23:31:20Z | 38,578,987 | <pre><code>TempCtoF = lambda c: 9.0/5 * c + 32
TempFtoC = lambda f: 5.0/9 * (f - 32)
Celsius = [66.5,45.2,33.5,55.5]
Fahrenheit = [TempCtoF(c) for c in Celsius]
</code></pre>
<p>or </p>
<pre><code>Fahrenheit = list(map(TempCtoF, Celsius))
</code></pre>
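<p>A quick self-check of the two helpers (float literals are used so the arithmetic also stays correct under Python 2's integer division):</p>

```python
temp_c_to_f = lambda c: 9.0 / 5 * c + 32
temp_f_to_c = lambda f: 5.0 / 9 * (f - 32)

celsius = [66.5, 45.2, 33.5, 55.5]
fahrenheit = list(map(temp_c_to_f, celsius))

# Converting back should recover the original temperatures.
round_trip = [temp_f_to_c(f) for f in fahrenheit]
print(all(abs(a - b) < 1e-9 for a, b in zip(celsius, round_trip)))  # True
```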
| 0 | 2016-07-25T23:41:46Z | [
"python",
"list",
"lambda",
"list-comprehension"
] |
Dictionary Different Data Types Apache Thrift | 38,578,974 | <p>I have a dictionary in Python. From what I understand Thrift only allows a strictly typed map <code>map<type1, type2></code>. However, in Python values are not always of the same type.</p>
<pre><code>dict = {'id':1,
'text': 'some text',
'active': None}
</code></pre>
<p>I want to pass this structure into my .thrift file</p>
<pre><code>void submit_record(1: i32 id, 2: i32 time, 3: map<string, varying>)
</code></pre>
<p>Is there any way of doing this?</p>
| 0 | 2016-07-25T23:39:51Z | 38,586,448 | <p>Use a Thrift <code>union</code>, or a <code>struct</code>:</p>
<pre><code>union varying {
1 : double dbl
2 : i32 int // or maybe i64
3 : string str
}
</code></pre>
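<p>On the Python side, the client then has to decide which union field each dictionary value belongs in. This is a hedged, plain-Python sketch of that dispatch (no Thrift runtime involved; the field names simply mirror the hypothetical <code>varying</code> union above):</p>

```python
def varying_field(value):
    """Pick the union field name (dbl/int/str) for a Python value."""
    if isinstance(value, bool):
        # bool is a subclass of int in Python; decide explicitly here,
        # or add a dedicated bool field to the union.
        return 'int'
    if isinstance(value, int):
        return 'int'
    if isinstance(value, float):
        return 'dbl'
    if isinstance(value, str):
        return 'str'
    # None (like the question's 'active' key) maps to an unset union.
    raise TypeError('no union field for %r' % (type(value),))

record = {'id': 1, 'text': 'some text'}
print({k: varying_field(v) for k, v in record.items()})
```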
<p>Further reading:</p>
<ul>
<li><a href="http://stackoverflow.com/questions/18923120/how-do-you-say-in-a-thrift-idl-that-a-client-should-include-exactly-one-of-a-set">How do you say in a Thrift IDL that a client should include exactly one of a set of fields in a struct?</a></li>
<li><a href="https://issues.apache.org/jira/browse/THRIFT-409" rel="nofollow">THRIFT-409</a></li>
</ul>
| 0 | 2016-07-26T09:46:20Z | [
"python",
"dictionary",
"types",
"thrift"
] |
How to Find the Main Topic of a Body of Text | 38,579,081 | <p>I know that in NLP it is a challenge to determine the topic of a sentence or possibly a paragraph. However, I am trying to determine what the title may be for something like a Wikipedia article (of course without using other methods). My only though is finding the most frequent words. For the article on New York City these were the top results:</p>
<pre><code>[('new', 429), ('city', 380), ('york', 361), ("'s", 177), ('manhattan', 90), ('world', 84), ('united', 78), ('states', 74), ('===', 70), ('island', 68), ('largest', 66), ('park', 64), ('also', 56), ('area', 52), ('american', 49)]
</code></pre>
<p>From this I can see some sort of statistical significance is the sharp drop from 361 to 177. Regardless, I am neither a statistics or NLP expert (in fact I'm a complete noob at both) so <strong>is this a viable way of determining the topic of a longer body of text. If so, what math am I looking for to calculate this? If not is there some other way in NLP to determine the topic or title for a larger body of text?</strong> For reference, I am using nltk and Python 3. </p>
 | -2 | 2016-07-25T23:52:53Z | 38,579,234 | <p>You might consider using the algorithms below. These are keyword-extraction algorithms:</p>
<p><a href="http://www.tfidf.com/" rel="nofollow">TF-IDF</a></p>
<p><a href="https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf" rel="nofollow">TextRank</a></p>
<p><a href="https://yasserebrahim.wordpress.com/2012/10/25/tf-idf-with-pythons-nltk/" rel="nofollow">Here</a> is a tutorial to get you started on using TF-IDF in NLTK.</p>
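<p>For intuition, here is a minimal, dependency-free sketch of the TF-IDF idea (a simplified weighting; NLTK and other libraries use more refined variants):</p>

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score terms per document: high if frequent here, rare elsewhere."""
    n_docs = len(docs)
    # Document frequency: how many documents contain each term.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

docs = [
    'new york city new'.split(),
    'new buildings in the city'.split(),
    'york has a park'.split(),
]
scores = tf_idf(docs)
print(max(scores[0], key=scores[0].get))  # new
```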
| 2 | 2016-07-26T00:13:32Z | [
"python",
"python-3.x",
"nlp",
"nltk"
] |
How to Find the Main Topic of a Body of Text | 38,579,081 | <p>I know that in NLP it is a challenge to determine the topic of a sentence or possibly a paragraph. However, I am trying to determine what the title may be for something like a Wikipedia article (of course without using other methods). My only though is finding the most frequent words. For the article on New York City these were the top results:</p>
<pre><code>[('new', 429), ('city', 380), ('york', 361), ("'s", 177), ('manhattan', 90), ('world', 84), ('united', 78), ('states', 74), ('===', 70), ('island', 68), ('largest', 66), ('park', 64), ('also', 56), ('area', 52), ('american', 49)]
</code></pre>
<p>From this I can see some sort of statistical significance is the sharp drop from 361 to 177. Regardless, I am neither a statistics or NLP expert (in fact I'm a complete noob at both) so <strong>is this a viable way of determining the topic of a longer body of text. If so, what math am I looking for to calculate this? If not is there some other way in NLP to determine the topic or title for a larger body of text?</strong> For reference, I am using nltk and Python 3. </p>
 | -2 | 2016-07-25T23:52:53Z | 38,597,654 | <p>If you have enough data and would like to find topics for a larger body of text, such as a paragraph or an article, you can use topic modelling methods like <a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation" rel="nofollow">LDA</a>.</p>
<p><a href="https://radimrehurek.com/gensim/" rel="nofollow">Gensim</a> has an easy-to-use implementation of LDA.</p>
| 1 | 2016-07-26T18:46:17Z | [
"python",
"python-3.x",
"nlp",
"nltk"
] |
how to show subscript characters in a qlabel | 38,579,133 | <p>I would like to show subscripts in a QtGui.QLabel using python 3.4, qt 4.8, and pyqt 4.11. In the code sample below i have a function <code>_subscripter</code> that takes an integer and returns a string subscript i.e.</p>
<pre><code>_subscripter(13)
Out[8]: '₁₃'
</code></pre>
<p>I want the label to just show the subscript '₁₃', however it does not recognize that (see image below). Any help appreciated.</p>
<p><a href="http://i.stack.imgur.com/ZMtmz.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZMtmz.png" alt="enter image description here"></a></p>
<pre><code>from PyQt4 import QtGui
from sys import argv, exit
def _subscripter(n):
digits = len(str(n))
s = ''
for i in range(digits):
s += chr(0x2080 + int(str(n)[i]))
return s
def start_app():
app = QtGui.QApplication(argv)
window = QtGui.QLabel(_subscripter(13))
window.show()
window.activateWindow()
exit(app.exec_())
if __name__ == '__main__': start_app()
</code></pre>
| 0 | 2016-07-25T23:58:48Z | 38,587,458 | <p>Using <code>QChar</code> in place of <code>chr</code> should work.</p>
<pre><code>def _subscripter(n):
digits = len(str(n))
s = QtCore.QChar()
for i in range(digits):
s += QtCore.QChar(0x2080 + int(str(n)[i]))
return s
</code></pre>
| 0 | 2016-07-26T10:33:39Z | [
"python",
"pyqt"
] |
how to show subscript characters in a qlabel | 38,579,133 | <p>I would like to show subscripts in a QtGui.QLabel using python 3.4, qt 4.8, and pyqt 4.11. In the code sample below i have a function <code>_subscripter</code> that takes an integer and returns a string subscript i.e.</p>
<pre><code>_subscripter(13)
Out[8]: '₁₃'
</code></pre>
<p>I want the label to just show the subscript '₁₃', however it does not recognize that (see image below). Any help appreciated.</p>
<p><a href="http://i.stack.imgur.com/ZMtmz.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZMtmz.png" alt="enter image description here"></a></p>
<pre><code>from PyQt4 import QtGui
from sys import argv, exit
def _subscripter(n):
digits = len(str(n))
s = ''
for i in range(digits):
s += chr(0x2080 + int(str(n)[i]))
return s
def start_app():
app = QtGui.QApplication(argv)
window = QtGui.QLabel(_subscripter(13))
window.show()
window.activateWindow()
exit(app.exec_())
if __name__ == '__main__': start_app()
</code></pre>
| 0 | 2016-07-25T23:58:48Z | 38,594,899 | <p>Have you tried using a rich text label instead?</p>
<p>You could do this:</p>
<pre><code>from PyQt4 import QtGui
from sys import argv, exit
def start_app():
app = QtGui.QApplication(argv)
window = QtGui.QLabel('Some text<sub>13</sub>')
window.show()
window.activateWindow()
exit(app.exec_())
if __name__ == '__main__':
start_app()
</code></pre>
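<p>Combining this with the question's <code>_subscripter</code> idea, the string building can be factored out and exercised without a running Qt application (a hedged sketch; only the returned strings are shown here, either of which can be passed to a <code>QLabel</code>):</p>

```python
def subscript_unicode(n):
    """Integer -> Unicode subscript digits (U+2080..U+2089)."""
    return ''.join(chr(0x2080 + int(d)) for d in str(n))

def subscript_richtext(prefix, n):
    """Rich-text markup a QLabel can render with <sub> tags."""
    return '%s<sub>%d</sub>' % (prefix, n)

print(subscript_unicode(13))        # ₁₃
print(subscript_richtext('x', 13))  # x<sub>13</sub>
```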
| 1 | 2016-07-26T16:09:16Z | [
"python",
"pyqt"
] |
Invalid ELF header tensorflow | 38,579,190 | <p>I first tried installing tensorflow via the following:</p>
<pre><code>user@WS1:~/July 2016$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl
user@WS1:~/July 2016$ pip install --upgrade $TF_BINARY_URL
</code></pre>
<p>Then I tried using a (slightly modified version for linux and tensorflow 0.9.0) solution from <a href="https://github.com/tensorflow/tensorflow/issues/135" rel="nofollow">iRapha here</a>:</p>
<pre><code>user@WS1:~/July 2016$ wget https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl
user@WS1:~/July 2016$ pip install tensorflow-0.9.0-cp27-none-linux_x86_64.whl
</code></pre>
<p>Then I tried to test whether tensorflow was successfully installed. The following output shows that there is an 'invalid ELF header' error. </p>
<pre><code>user@WS1:~/July 2016$ python
Python 2.7.12 |Anaconda 2.5.0 (64-bit)| (default, Jul 2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/export/mlrg/caugusta/anaconda2/lib/python2.7/site-packages /tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/export/mlrg/caugusta/anaconda2/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 48, in <module>
from tensorflow.python import pywrap_tensorflow
File "/export/mlrg/caugusta/anaconda2/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/export/mlrg/caugusta/anaconda2/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: /export/mlrg/caugusta/anaconda2/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: invalid ELF header
</code></pre>
<p>I've checked <a href="http://stackoverflow.com/questions/5713731/what-does-this-error-mean-invalid-elf-header">here</a>. Based on that answer, I tried:</p>
<pre><code>user@WS1:~July 2016$ pip install tensorflow
</code></pre>
<p>Everything says that tensorflow installed successfully, but when I import it in python I get that invalid ELF header error. Anyone know how I can resolve this?</p>
 | 1 | 2016-07-26T00:06:27Z | 39,047,303 | <p>This can happen if you have installed a TensorFlow package which does not match your platform.</p>
<p>An extreme example would be installing a package built for the wrong platform, for instance using a Linux wheel such as the one below on a Mac.</p>
<pre><code>export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp35-cp35m-linux_x86_64.whl
</code></pre>
<p>Just make sure you use the correct platform.</p>
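<p>A quick way to see what your interpreter actually is before choosing a wheel (standard library only; the printed values are examples and depend on the machine):</p>

```python
import platform
import struct

# A wheel tagged linux_x86_64 needs a 64-bit Linux interpreter;
# compare these values against the wheel's platform tag.
print(platform.system())         # e.g. Linux
print(platform.machine())        # e.g. x86_64
print(struct.calcsize('P') * 8)  # pointer size in bits, e.g. 64
```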
| 0 | 2016-08-19T20:36:49Z | [
"python",
"tensorflow",
"anaconda"
] |
How to edit and save existing HTML using Python? | 38,579,244 | <p>I'm trying to write a program that enables someone to edit html from python 'input()' questions. For example: change a paragraph from the command line in python. Is there some sort of library I can use to read html then edit and save it?</p>
| 0 | 2016-07-26T00:14:55Z | 38,579,274 | <p>Since an HTML file is just a plain text file it can be opened by python without the need for any extra libraries and such. Just open the file, edit what you need and write it.</p>
<p>Check out the following link:
<a href="http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python" rel="nofollow">http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python</a></p>
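<p>A minimal sketch of that read-edit-write cycle (the file name and replacement text are made up for illustration):</p>

```python
import os
import tempfile

# Create a demo file in a temporary directory so this is self-contained.
path = os.path.join(tempfile.mkdtemp(), 'page.html')
with open(path, 'w') as f:
    f.write('<html><body><p>old paragraph</p></body></html>')

# Read, edit, and save: plain string operations on the text.
with open(path) as f:
    html = f.read()

html = html.replace('old paragraph', 'new paragraph')

with open(path, 'w') as f:
    f.write(html)

print(open(path).read())  # <html><body><p>new paragraph</p></body></html>
```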
| 0 | 2016-07-26T00:18:54Z | [
"python",
"html",
"html-parsing"
] |
my program dose not run because of confuse between integer value and string | 38,579,256 | <pre><code>my program confuse between integer value and string
import random
def test():
num=random.randrange(100,1000,45)
while True:
ans=run(num)
print ans
ss=raw_input("if you want to exit press t: ")
if ss=='t':
break
def run(userinput2):
first = int(userinput2/20)
print "i will send it in %s big pack"%str(first)
userinput2=userinput2%20
second =int(userinput2/10)
print "i will send it in %s med pack"%str(second)
third =userinput2%10
print "i will send it in %s med pack"%str(third)
def main():
print "the began of pro"
print "@"*20
userinput=raw_input("test or run: ")
if userinput.lower()=='test':
test()
else:
while True:
userinput2=int(raw_input("press t to exit or chose a number:"))
if userinput2 =='t':
break
else:
answer=run(userinput2)
if __name__ == "__main__":
main()
</code></pre>
<p>this piece of code i has error in it </p>
<p>userinput2=int(raw_input("press t to exit or chose a number:"))
if userinput2 =='t':</p>
<p>if i change it to string i had it not accept string and if make it string it not accept integers </p>
| 1 | 2016-07-26T00:16:10Z | 38,579,346 | <p>I think that this covers the cases you need:</p>
<pre><code>while True:
userinput2=raw_input("press t to exit or chose a number:")
if userinput2 =='t':
break
try:
userinput2 = int(userinput2)
except ValueError:
print('That was neither a number nor a "t". Try again.')
continue
answer=run(userinput2)
</code></pre>
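<p>The same accept/retry rules can also be factored into a function that takes its "inputs" as a list, which makes the logic easy to exercise without typing anything (a hedged sketch using Python 3 naming):</p>

```python
def parse_commands(entries):
    """Collect integers until a 't' sentinel; skip anything else."""
    accepted = []
    for entry in entries:
        if entry == 't':
            break
        try:
            accepted.append(int(entry))
        except ValueError:
            # Neither a number nor 't': ask again on the next loop.
            continue
    return accepted

print(parse_commands(['12', 'oops', '7', 't', '99']))  # [12, 7]
```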
| 2 | 2016-07-26T00:27:59Z | [
"python"
] |
Variable considered NoneType at run time, but logging shows the correct values | 38,579,275 | <p>I have a flask route where i'm posting JSON data to. Requests are converted to python dictionaries through this function.</p>
<pre><code>def request_handler(request_data):
try:
data = json.loads(request_data)
except ValueError:
return jsonres("error", "bad json")
return data
</code></pre>
<p>The route i'm having trouble with looks like:</p>
<pre><code>@mod_api.route('/thing/create', methods=['POST'])
@api_token_required
def create_thing():
if request.method == 'POST':
data = request_handler(request.data)
</code></pre>
<p>In all other routes I can grab values from keys and work with them using <code>data.get('key-name')</code> without troubles. Though on this particular route I get errors because the values of data are considered NoneType. Though if I log the value, type, or any other info about values from the data variable it shows up as it should in the logging.</p>
<p><code>logging.debug(type(data))</code> results in <code><dict></code><br>
<code>logging.debug(data.get('starttime')</code> results in <code>123123</code> (made up number but it shows the epoch integer i'm looking for properly)</p>
<pre><code>starttime = data.get('starttime') + 1
</code></pre>
<p>results in </p>
<blockquote>
<p>Python Error: unsupported operand type(s) for +: 'int' and 'NoneType'</p>
</blockquote>
<p>Logging will show me everything correctly about the value of keys in data. As soon as I try to do anything with that value it considers it a NoneType resulting in server error.</p>
<p>This is happening with all of the data in various keys and types on this route.</p>
| 0 | 2016-07-26T00:19:01Z | 38,582,111 | <p>Try:</p>
<p><code>data = request.get_json(force=True, silent=True)</code></p>
| 0 | 2016-07-26T06:04:06Z | [
"python",
"flask"
] |
Why doesn't groupby sum convert boolean to int or float? | 38,579,297 | <p>I'll start with 3 simple examples:</p>
<pre><code>pd.DataFrame([[True]]).sum()
0 1
dtype: int64
</code></pre>
<hr>
<pre><code>pd.DataFrame([True]).sum()
0 1
dtype: int64
</code></pre>
<hr>
<pre><code>pd.Series([True]).sum()
1
</code></pre>
<hr>
<p>All of these are as expected. Here is a more complicated example.</p>
<pre><code>df = pd.DataFrame([
['a', 'A', True],
['a', 'B', False],
['a', 'C', True],
['b', 'A', True],
['b', 'B', True],
['b', 'C', False],
], columns=list('XYZ'))
df.Z.sum()
4
</code></pre>
<p>Also as expected. However, if I <code>groupby(['X', 'Y']).sum()</code></p>
<p><a href="http://i.stack.imgur.com/UlT3T.png" rel="nofollow"><img src="http://i.stack.imgur.com/UlT3T.png" alt="enter image description here"></a></p>
<p>I expected it to look like:</p>
<p><a href="http://i.stack.imgur.com/uGlNm.png" rel="nofollow"><img src="http://i.stack.imgur.com/uGlNm.png" alt="enter image description here"></a></p>
<p>I'm thinking bug. Is there another explanation?</p>
<hr>
<p>Per @unutbu's answer</p>
<p>pandas is trying to recast as original dtypes. I had thought that maybe the group by I'd performed didn't really groupby anything. So I tried this example to test out the idea.</p>
<pre><code>df = pd.DataFrame([
['a', 'A', False],
['a', 'B', False],
['a', 'C', True],
['b', 'A', False],
['b', 'B', False],
['b', 'C', False],
], columns=list('XYZ'))
</code></pre>
<p>I'll <code>groupby('X')</code> and <code>sum</code>. If @unutbu is correct, these sums should be <code>1</code> and <code>0</code> and are castable to <code>bool</code>, therefore we should see <code>bool</code></p>
<pre><code>df.groupby('X').sum()
</code></pre>
<p><a href="http://i.stack.imgur.com/Tunph.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tunph.png" alt="enter image description here"></a></p>
<p>Sure enough... <code>bool</code></p>
<p>But if the process is the same but the values are slightly different.</p>
<pre><code>df = pd.DataFrame([
['a', 'A', True],
['a', 'B', False],
['a', 'C', True],
['b', 'A', False],
['b', 'B', False],
['b', 'C', False],
], columns=list('XYZ'))
df.groupby('X').sum()
</code></pre>
<p><a href="http://i.stack.imgur.com/5wh3K.png" rel="nofollow"><img src="http://i.stack.imgur.com/5wh3K.png" alt="enter image description here"></a></p>
<p>lesson learned. Always use <code>astype(int)</code> or something similar when doing this.</p>
<pre><code>df.groupby('X').sum().astype(int)
</code></pre>
<p>gives consistent results for either scenario.</p>
| 5 | 2016-07-26T00:21:38Z | 38,579,754 | <p>This occurs because <a href="https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L3128" rel="nofollow"><code>_cython_agg_blocks</code></a> calls <code>_try_coerce_and_cast_result</code> which calls <a href="https://github.com/pydata/pandas/blob/master/pandas/core/internals.py#L536" rel="nofollow"><code>_try_cast_result</code></a> which tries to return a result <em>of the same dtype</em> as the original values (in this case, <code>bool</code>). </p>
<p>This returns something a little peculiar when <code>Z</code> has dtype bool (and all the groups have no more than one True value). If any of the groups have 2 or more True values, then the resulting values are floats since <code>_try_cast_result</code> does not convert 2.0 back to a boolean.</p>
<p><code>_try_cast_result</code> does something more useful when <code>Z</code> has dtype <code>int</code>: Internally, the Cython aggregator used by
<code>df.groupby(['X', 'Y']).sum()</code> returns a <code>result</code> of dtype <code>float</code>. Here then, <code>_try_cast_result</code> returns the result to dtype <code>int</code>.</p>
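<p>Since the dtype coming back from the aggregation has varied across pandas versions, the question's <code>.astype(int)</code> cast is the portable way to pin it down:</p>

```python
import pandas as pd

df = pd.DataFrame([
    ['a', 'A', True],
    ['a', 'B', False],
    ['a', 'C', True],
    ['b', 'A', False],
    ['b', 'B', False],
    ['b', 'C', False],
], columns=list('XYZ'))

# Whatever the groupby returns (bool, int, or float depending on the
# pandas version), the explicit cast yields a plain integer count.
counts = df.groupby('X')['Z'].sum().astype(int)
print(counts.to_dict())  # {'a': 2, 'b': 0}
```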
| 4 | 2016-07-26T01:26:21Z | [
"python",
"pandas"
] |
How to restart flask server? | 38,579,329 | <p>I am using Flask to create a web service. I hope to restart it like every 30 minutes. Is that possible to do that and how can it be realized?</p>
<p>Meanwhile, I tried to use subprocess (popen) to start the flask web service, terminate it and start it again, but the server could not be shut down unless the whole program, where I call subprocess, is also down.</p>
<p>I would very appreciate if you could share your knowledge and experience regarding to this issue.</p>
 | 0 | 2016-07-26T00:25:10Z | 38,579,444 | <p>The Flask development server is not supposed to serve production traffic. You should consider using some other server on top of the WSGI protocol, such as <a href="http://gunicorn.org/" rel="nofollow">gunicorn</a>, <a href="https://uwsgi-docs.readthedocs.io/en/latest/" rel="nofollow">uwsgi</a>, or <a href="http://flask.pocoo.org/docs/0.11/deploying/mod_wsgi/" rel="nofollow">apache mod_wsgi</a>; these servers can also recycle their worker processes for you.</p>
| 2 | 2016-07-26T00:40:56Z | [
"python",
"flask",
"subprocess"
] |
Processing groups of point-matrix multiplications with numpy | 38,579,358 | <p>Given two parallel arrays, one an array of rotation matrices, and the other an array of groups of 3D points, I'm looking for the fastest way to multiply each subgroup by the its corresponding matrix.</p>
<p>I was able to achieve what I want by looping over each group with numpy.einsum. I'm hoping there is a way to do this without the loop. This is the code i have so far:</p>
<pre><code>import numpy as np
N_GROUPS = 10
N_SUBGROUPS = 4
p = np.random.random((N_SUBGROUPS,N_GROUPS,3,)) # N_GROUPS of N_SUBGROUPS of 3D points
M = np.random.random((N_GROUPS,3,3,)) # N_GROUPS of rotation matrices
I = np.linalg.inv(M) # Inverse of M for testing purposes
# Use a loop to transform every subgroup.
for i in xrange(N_SUBGROUPS):
p_ = np.einsum('ij,ijk->ik', p[i], M)
# test
p__= np.einsum('ij,ijk->ik', p_, I)
print np.allclose(p[i],p__)# Returns True
</code></pre>
<p>Any help to rewrite the einsum expression to deal with my situation, or suggestions on how to use another method would be greatly appreciated.</p>
| 2 | 2016-07-26T00:29:12Z | 38,579,458 | <p>It's really straightforward: you've done most of the work yourself!</p>
<p>Just take the index corresponding to subgroups and put it on both sides of the einsum equation: that'll give you the desired array of dimension <code>(N_SUBGROUPS, N_GROUPS, 3)</code>.</p>
<p>Suppose we call the subgroup index <code>l</code>, then:</p>
<pre><code>p_ = np.einsum('lij,ijk->lik', p, M)
# I've changed the subgroup range index for clarity
for l in range(N_SUBGROUPS):
# test
p__= np.einsum('ij,ijk->ik', p_[l], I)
print(np.allclose(p[l],p__)) # Returns True
</code></pre>
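<p>A self-contained check that the batched expression agrees with the original per-subgroup loop (array sizes follow the question):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_subgroups, n_groups = 4, 10
p = rng.random((n_subgroups, n_groups, 3))
M = rng.random((n_groups, 3, 3))

# One einsum over every subgroup at once...
batched = np.einsum('lij,ijk->lik', p, M)

# ...matches the question's per-subgroup expression applied in a loop.
looped = np.stack([np.einsum('ij,ijk->ik', p[l], M)
                   for l in range(n_subgroups)])

print(batched.shape, np.allclose(batched, looped))  # (4, 10, 3) True
```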
| 1 | 2016-07-26T00:42:59Z | [
"python",
"arrays",
"numpy",
"matrix"
] |
How can I insert an XML element between text in the parent element using ElementTree | 38,579,478 | <p>I want to generate XML like this: </p>
<pre><code><Element>some text <Child>middle text</Child> some more text</Element>.
</code></pre>
<p>How can I do this using ElementTree?</p>
<p>I couldn't find it in <a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow">the docs</a>. I thought <a href="http://stackoverflow.com/questions/25824920/python-elementtree-how-to-add-subelement-at-very-specific-position"><code>element#insert</code></a> would work, but that's for inserting a child in a specific position relative to other children.</p>
| 1 | 2016-07-26T00:45:13Z | 38,579,508 | <p>You need to define the child element and set it's <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.tail" rel="nofollow"><code>.tail</code></a>, then <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.append" rel="nofollow">append</a> it to the parent:</p>
<pre><code>import xml.etree.ElementTree as ET
parent = ET.Element("Element")
parent.text = "some text "
child = ET.Element("Child")
child.text = "middle text"
child.tail = " some more text"
parent.append(child)
print(ET.tostring(parent))
</code></pre>
<p>Prints:</p>
<pre><code><Element>some text <Child>middle text</Child> some more text</Element>
</code></pre>
| 2 | 2016-07-26T00:49:37Z | [
"python",
"xml",
"elementtree"
] |
angular js infinite scrolling with django rest framework | 38,579,513 | <p>I've been following this tutorial here for the infinite scrolling.</p>
<p><a href="http://en.proft.me/2015/09/4/how-make-infinity-scroll-loading-bar-angularjs/" rel="nofollow">http://en.proft.me/2015/09/4/how-make-infinity-scroll-loading-bar-angularjs/</a></p>
<p>but for some reason it's throwing me this error</p>
<p>"angular.js:38Uncaught Error: [$injector:modulerr"</p>
<p>am i doing something wrong here?</p>
<p>im a newbie to django rest framework and angular js.</p>
<p>what i want to achieve here is having the API json data loaded and injected into the html (which i did) and have it scrollable and clickable. (with hyperlinks, but without refreshing the page)</p>
<p>could anyone take a look at the code?</p>
<p>index.html</p>
<pre><code><body>
<div class="pinGridWrapper" ng-app="PinApp" ng-controller="PinCtrl">
<div class="pinGrid" infinite-scroll='pins.more()' infinite-scroll-disabled='pins.busy' infinite-scroll-distance='1'>
<div class="pin" ng-repeat="pin in pins.items">
<img ng-src="{$ pin.photo $}">
<div ng-app="myApp" class="app">
<div ng-controller="appCtrl as vm" class="main-container">
<h1>Post List</h1>
{% verbatim %}
<div ng-repeat="post in vm.posts | limitTo: 10" class="post">
<a href="{{ post.url}}">
<h2>{{ post.title }}</h2>
<p>{{ post.content }}</p>
</a>
<p ng-if="vm.loadingPosts">Loading...</p>
</div>
{% endverbatim %}
</div>
</div>
<p ng-bind-html="pin.text"></p>
</div>
</div>
<div ng-show='pins.busy'><i class="fa fa-spinner"></i></div>
</div>
<!-- Latest compiled and minified JavaScript -->
<script src="http://code.jquery.com/jquery-1.12.2.min.js" integrity="sha256-lZFHibXzMHo3GGeehn1hudTAP3Sc0uKXBXAzHX1sjtk=" crossorigin="anonymous"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular-animate.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular-route.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
<script src='https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.5/marked.min.js'></script>
<script>
/* ng-infinite-scroll - v1.0.0 - 2013-02-23 */
var mod;mod=angular.module("infinite-scroll",[]),mod.directive("infiniteScroll",["$rootScope","$window","$timeout",function(i,n,e){return{link:function(t,l,o){var r,c,f,a;return n=angular.element(n),f=0,null!=o.infiniteScrollDistance&&t.$watch(o.infiniteScrollDistance,function(i){return f=parseInt(i,10)}),a=!0,r=!1,null!=o.infiniteScrollDisabled&&t.$watch(o.infiniteScrollDisabled,function(i){return a=!i,a&&r?(r=!1,c()):void 0}),c=function(){var e,c,u,d;return d=n.height()+n.scrollTop(),e=l.offset().top+l.height(),c=e-d,u=n.height()*f>=c,u&&a?i.$$phase?t.$eval(o.infiniteScroll):t.$apply(o.infiniteScroll):u?r=!0:void 0},n.on("scroll",c),t.$on("$destroy",function(){return n.off("scroll",c)}),e(function(){return o.infiniteScrollImmediateCheck?t.$eval(o.infiniteScrollImmediateCheck)?c():void 0:c()},0)}}}]);
var app = angular.module('PinApp', ['ngAnimate', 'ngSanitize', 'ngResource', 'infinite-scroll']);
app.config(function($interpolateProvider, $httpProvider, cfpLoadingBarProvider) {
$interpolateProvider.startSymbol('{$');
$interpolateProvider.endSymbol('$}');
cfpLoadingBarProvider.includeSpinner = false;
});
app.factory('Pin', function($http, cfpLoadingBar){
var Pin = function() {
this.items = [];
this.busy = false;
this.url = "/api/posts/?limit=2&offset=0";
}
Pin.prototype.more = function() {
if (this.busy) return;
if (this.url) {
this.busy = true;
cfpLoadingBar.start();
$http.get(this.url).success(function(data) {
var items = data.results;
for (var i = 0; i < items.length; i++) {
this.items.push(items[i]);
}
this.url = data.next;
this.busy = false;
cfpLoadingBar.complete();
}.bind(this));
}
};
return Pin;
})
app.controller('PinCtrl', function($scope, Pin){
$scope.pins = new Pin();
$scope.pins.more();
});
</script>
</code></pre>
<p>urls.py</p>
<pre><code> from django.conf.urls import url
from django.contrib import admin
from .views import (
PostListAPIView,
PostDetailAPIView,
PostUpdateAPIView,
PostDeleteAPIView,
PostCreateAPIView,
)
from django.conf.urls import patterns, url, include
from .views import PostListView
urlpatterns = [
url(r'^$', PostListAPIView.as_view(), name='list'),
url(r'^create/$', PostCreateAPIView.as_view(), name='create'),
#url(r'^create/$', post_create),
url(r'^(?P<slug>[\w-]+)/$', PostDetailAPIView.as_view(), name='detail'),
url(r'^(?P<slug>[\w-]+)/edit/$', PostUpdateAPIView.as_view(), name='update'),
url(r'^(?P<slug>[\w-]+)/delete/$', PostDetailAPIView.as_view(), name='delete'),
#url(r'^(?P<slug>[\w-]+)/edit/$', post_update, name ='update'),
#url(r'^(?P<slug>[\w-]+)/delete/$', post_delete),
#url(r'^posts/$', "<appname>.views.<function_name>"),
url(r'^$', PostListView.as_view(), name='list2')
]
</code></pre>
<p>serializers.py</p>
<pre><code> from rest_framework.serializers import ModelSerializer, HyperlinkedIdentityField
from posts.models import Post
class PostCreateUpdateSerializer(ModelSerializer):
class Meta:
model = Post
fields = [
#'id',
'title',
#'slug',
'content',
'publish'
]
class PostDetailSerializer(ModelSerializer):
class Meta:
model = Post
fields = [
'id',
'title',
'slug',
'content',
'publish'
]
class PostListSerializer(ModelSerializer):
url = HyperlinkedIdentityField(
view_name='posts-api:detail',
lookup_field='slug'
)
delete_url = HyperlinkedIdentityField(
view_name='posts-api:delete',
lookup_field='slug'
)
class Meta:
model = Post
fields = [
'url',
'id',
'title',
'content',
'publish',
'delete_url'
]
from rest_framework import serializers
class PinSerializer(serializers.ModelSerializer):
class Meta:
model = Post
</code></pre>
<p>views.py</p>
<pre><code> from rest_framework.generics import CreateAPIView, ListAPIView, RetrieveAPIView, UpdateAPIView, DestroyAPIView
from posts.models import Post
from .serializers import PostCreateUpdateSerializer, PostListSerializer, PostDetailSerializer
from rest_framework.pagination import LimitOffsetPagination, PageNumberPagination
from .pagination import PostLimitOffsetPagination, PostPageNumberPagination
class PostCreateAPIView(CreateAPIView):
queryset = Post.objects.all()
serializer_class = PostCreateUpdateSerializer
class PostDetailAPIView(RetrieveAPIView):
queryset = Post.objects.all()
serializer_class = PostDetailSerializer
lookup_field = 'slug'
class PostUpdateAPIView(UpdateAPIView):
queryset = Post.objects.all()
serializer_class = PostCreateUpdateSerializer
lookup_field = 'slug'
class PostDeleteAPIView(DestroyAPIView):
queryset = Post.objects.all()
serializer_class = PostDetailSerializer
lookup_field = 'slug'
class PostListAPIView(ListAPIView):
queryset = Post.objects.all()
serializer_class = PostListSerializer
# pagination_class = PostLimitOffsetPagination
from rest_framework import generics
from rest_framework import filters
from rest_framework.pagination import LimitOffsetPagination
from .serializers import PostListSerializer
class PostListView(generics.ListAPIView):
queryset = Post.objects.all()
serializer_class = PostListSerializer
filter_backends = (filters.DjangoFilterBackend,)
filter_fields = ('category',)
pagination_class = LimitOffsetPagination
</code></pre>
<p>thanks. </p>
<p>the live site is here : <a href="http://192.241.153.25:8000/" rel="nofollow">http://192.241.153.25:8000/</a></p>
| 0 | 2016-07-26T00:50:10Z | 38,580,397 | <pre><code> <div ng-app="myApp" class="app">
<div ng-controller="appCtrl as vm" class="main-container">
<h1>Post List</h1>
{% verbatim %}
<div ng-repeat="post in vm.posts | limitTo: 10" class="post">
<a href="{{ post.url}}">
<h2>{{ post.title }}</h2> <!-- this is django template language and expects a django context variable named post that has an attribute of title -->
<p>{{ post.content }}</p>
</a>
<p ng-if="vm.loadingPosts">Loading...</p>
</div>
{% endverbatim %}
</div>
</div>
</code></pre>
<p>Instead, you must escape your <code>{{</code> and <code>}}</code> for Angular to see them:</p>
<pre><code> {% templatetag openvariable %} angular_variable {% templatetag closevariable %}
</code></pre>
<p>or use something like <a href="https://github.com/jrief/django-angular" rel="nofollow">https://github.com/jrief/django-angular</a></p>
| 0 | 2016-07-26T03:04:02Z | [
"python",
"angularjs",
"django"
] |
angular js infinite scrolling with django rest framework | 38,579,513 | <p>I've been following this tutorial here for the infinite scrolling.</p>
<p><a href="http://en.proft.me/2015/09/4/how-make-infinity-scroll-loading-bar-angularjs/" rel="nofollow">http://en.proft.me/2015/09/4/how-make-infinity-scroll-loading-bar-angularjs/</a></p>
<p>But for some reason it's throwing this error:</p>
<p>"angular.js:38 Uncaught Error: [$injector:modulerr"</p>
<p>Am I doing something wrong here?</p>
<p>I'm a newbie to Django REST framework and AngularJS.</p>
<p>What I want to achieve here is to load the API JSON data and inject it into the HTML (which I did), and to make it scrollable and clickable (with hyperlinks, but without refreshing the page).</p>
<p>Could anyone take a look at the code?</p>
<p>index.html</p>
<pre><code><body>
<div class="pinGridWrapper" ng-app="PinApp" ng-controller="PinCtrl">
<div class="pinGrid" infinite-scroll='pins.more()' infinite-scroll-disabled='pins.busy' infinite-scroll-distance='1'>
<div class="pin" ng-repeat="pin in pins.items">
<img ng-src="{$ pin.photo $}">
<div ng-app="myApp" class="app">
<div ng-controller="appCtrl as vm" class="main-container">
<h1>Post List</h1>
{% verbatim %}
<div ng-repeat="post in vm.posts | limitTo: 10" class="post">
<a href="{{ post.url}}">
<h2>{{ post.title }}</h2>
<p>{{ post.content }}</p>
</a>
<p ng-if="vm.loadingPosts">Loading...</p>
</div>
{% endverbatim %}
</div>
</div>
<p ng-bind-html="pin.text"></p>
</div>
</div>
<div ng-show='pins.busy'><i class="fa fa-spinner"></i></div>
</div>
<!-- Latest compiled and minified JavaScript -->
<script src="http://code.jquery.com/jquery-1.12.2.min.js" integrity="sha256-lZFHibXzMHo3GGeehn1hudTAP3Sc0uKXBXAzHX1sjtk=" crossorigin="anonymous"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular-animate.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular-route.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
<script src='https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.5/marked.min.js'></script>
<script>
/* ng-infinite-scroll - v1.0.0 - 2013-02-23 */
var mod;mod=angular.module("infinite-scroll",[]),mod.directive("infiniteScroll",["$rootScope","$window","$timeout",function(i,n,e){return{link:function(t,l,o){var r,c,f,a;return n=angular.element(n),f=0,null!=o.infiniteScrollDistance&&t.$watch(o.infiniteScrollDistance,function(i){return f=parseInt(i,10)}),a=!0,r=!1,null!=o.infiniteScrollDisabled&&t.$watch(o.infiniteScrollDisabled,function(i){return a=!i,a&&r?(r=!1,c()):void 0}),c=function(){var e,c,u,d;return d=n.height()+n.scrollTop(),e=l.offset().top+l.height(),c=e-d,u=n.height()*f>=c,u&&a?i.$$phase?t.$eval(o.infiniteScroll):t.$apply(o.infiniteScroll):u?r=!0:void 0},n.on("scroll",c),t.$on("$destroy",function(){return n.off("scroll",c)}),e(function(){return o.infiniteScrollImmediateCheck?t.$eval(o.infiniteScrollImmediateCheck)?c():void 0:c()},0)}}}]);
var app = angular.module('PinApp', ['ngAnimate', 'ngSanitize', 'ngResource', 'infinite-scroll']);
app.config(function($interpolateProvider, $httpProvider, cfpLoadingBarProvider) {
$interpolateProvider.startSymbol('{$');
$interpolateProvider.endSymbol('$}');
cfpLoadingBarProvider.includeSpinner = false;
});
app.factory('Pin', function($http, cfpLoadingBar){
var Pin = function() {
this.items = [];
this.busy = false;
this.url = "/api/posts/?limit=2&offset=0";
}
Pin.prototype.more = function() {
if (this.busy) return;
if (this.url) {
this.busy = true;
cfpLoadingBar.start();
$http.get(this.url).success(function(data) {
var items = data.results;
for (var i = 0; i < items.length; i++) {
this.items.push(items[i]);
}
this.url = data.next;
this.busy = false;
cfpLoadingBar.complete();
}.bind(this));
}
};
return Pin;
})
app.controller('PinCtrl', function($scope, Pin){
$scope.pins = new Pin();
$scope.pins.more();
});
</script>
</code></pre>
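<p>The paging loop in <code>Pin.prototype.more</code> above (fetch a page, append its <code>results</code>, then follow <code>data.next</code> until it is null) can be sketched in plain Python; the pages dict below is a hypothetical in-memory stand-in for the live <code>$http.get</code> responses:</p>

```python
# Hypothetical "API": each page URL maps to a results/next payload, the way a
# paginated DRF endpoint would respond.
PAGES = {
    "/api/posts/?limit=2&offset=0": {"results": ["post-1", "post-2"],
                                     "next": "/api/posts/?limit=2&offset=2"},
    "/api/posts/?limit=2&offset=2": {"results": ["post-3"], "next": None},
}

def fetch_all(url, pages=PAGES):
    """Accumulate items page by page, following `next` links like more() does."""
    items = []
    while url:
        data = pages[url]             # stands in for $http.get(url)
        items.extend(data["results"])
        url = data["next"]
    return items
```

<p>Note this loop only works against an API whose list responses actually carry <code>results</code> and <code>next</code> keys, i.e. one with pagination enabled.</p>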
<p>urls.py</p>
<pre><code> from django.conf.urls import url
from django.contrib import admin
from .views import (
PostListAPIView,
PostDetailAPIView,
PostUpdateAPIView,
PostDeleteAPIView,
PostCreateAPIView,
)
from django.conf.urls import patterns, url, include
from .views import PostListView
urlpatterns = [
url(r'^$', PostListAPIView.as_view(), name='list'),
url(r'^create/$', PostCreateAPIView.as_view(), name='create'),
#url(r'^create/$', post_create),
url(r'^(?P<slug>[\w-]+)/$', PostDetailAPIView.as_view(), name='detail'),
url(r'^(?P<slug>[\w-]+)/edit/$', PostUpdateAPIView.as_view(), name='update'),
url(r'^(?P<slug>[\w-]+)/delete/$', PostDetailAPIView.as_view(), name='delete'),
#url(r'^(?P<slug>[\w-]+)/edit/$', post_update, name ='update'),
#url(r'^(?P<slug>[\w-]+)/delete/$', post_delete),
#url(r'^posts/$', "<appname>.views.<function_name>"),
url(r'^$', PostListView.as_view(), name='list2')
]
</code></pre>
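<p>The detail/edit/delete routes above all hinge on the <code>(?P&lt;slug&gt;[\w-]+)</code> capture group. A quick stand-alone check of that pattern (note it would also match <code>create/</code>, which is why the literal <code>create/</code> route is listed before the slug routes):</p>

```python
import re

# The slug pattern from urls.py above, checked in isolation.
SLUG_URL = re.compile(r'^(?P<slug>[\w-]+)/$')

match = SLUG_URL.match('test-2/')
slug = match.group('slug') if match else None
```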
<p>serializers.py</p>
<pre><code> from rest_framework.serializers import ModelSerializer, HyperlinkedIdentityField
from posts.models import Post
class PostCreateUpdateSerializer(ModelSerializer):
class Meta:
model = Post
fields = [
#'id',
'title',
#'slug',
'content',
'publish'
]
class PostDetailSerializer(ModelSerializer):
class Meta:
model = Post
fields = [
'id',
'title',
'slug',
'content',
'publish'
]
class PostListSerializer(ModelSerializer):
url = HyperlinkedIdentityField(
view_name='posts-api:detail',
lookup_field='slug'
)
delete_url = HyperlinkedIdentityField(
view_name='posts-api:delete',
lookup_field='slug'
)
class Meta:
model = Post
fields = [
'url',
'id',
'title',
'content',
'publish',
'delete_url'
]
from rest_framework import serializers
class PinSerializer(serializers.ModelSerializer):
class Meta:
model = Post
</code></pre>
<p>views.py</p>
<pre><code> from rest_framework.generics import CreateAPIView, ListAPIView, RetrieveAPIView, UpdateAPIView, DestroyAPIView
from posts.models import Post
from .serializers import PostCreateUpdateSerializer, PostListSerializer, PostDetailSerializer
from rest_framework.pagination import LimitOffsetPagination, PageNumberPagination
from .pagination import PostLimitOffsetPagination, PostPageNumberPagination
class PostCreateAPIView(CreateAPIView):
queryset = Post.objects.all()
serializer_class = PostCreateUpdateSerializer
class PostDetailAPIView(RetrieveAPIView):
queryset = Post.objects.all()
serializer_class = PostDetailSerializer
lookup_field = 'slug'
class PostUpdateAPIView(UpdateAPIView):
queryset = Post.objects.all()
serializer_class = PostCreateUpdateSerializer
lookup_field = 'slug'
class PostDeleteAPIView(DestroyAPIView):
queryset = Post.objects.all()
serializer_class = PostDetailSerializer
lookup_field = 'slug'
class PostListAPIView(ListAPIView):
queryset = Post.objects.all()
serializer_class = PostListSerializer
# pagination_class = PostLimitOffsetPagination
from rest_framework import generics
from rest_framework import filters
from rest_framework.pagination import LimitOffsetPagination
from .serializers import PostListSerializer
class PostListView(generics.ListAPIView):
queryset = Post.objects.all()
serializer_class = PostListSerializer
filter_backends = (filters.DjangoFilterBackend,)
filter_fields = ('category',)
pagination_class = LimitOffsetPagination
</code></pre>
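<p>The commented-out <code>pagination_class</code> is the crux of the infinite-scroll setup: with <code>LimitOffsetPagination</code> enabled, DRF slices the queryset by the <code>limit</code>/<code>offset</code> query parameters and wraps the page in a <code>count</code>/<code>next</code>/<code>results</code> envelope. In rough plain-Python terms (a sketch, not DRF's actual implementation):</p>

```python
def paginate(items, limit, offset):
    """Rough model of DRF's LimitOffsetPagination response envelope."""
    page = items[offset:offset + limit]
    end = offset + limit
    # DRF returns a full URL in "next"; a plain offset stands in for it here.
    next_offset = end if end < len(items) else None
    return {"count": len(items), "next": next_offset, "results": page}
```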
<p>Thanks.</p>
<p>The live site is here: <a href="http://192.241.153.25:8000/" rel="nofollow">http://192.241.153.25:8000/</a></p>
| 0 | 2016-07-26T00:50:10Z | 38,580,934 | <p>Your first error was the missing sanitize module as I referenced in my comment. Adding the missing include:</p>
<pre><code><script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular-sanitize.js"></script>
</code></pre>
<p>Will fix that.</p>
<p>Your current error is really a new question, but it has to do with this code in your index page (line 498 of the rendered index page).</p>
<pre><code>$http.get(this.url).success(function(data) {
var items = data.results;
for (var i = 0; i < items.length; i++) {
this.items.push(items[i]);
}
...
</code></pre>
<p>The data object that your code is receiving looks like this:</p>
<pre>
[ {
"url":"http://192.241.153.25:8000/api/posts/test-2/",
"id":3,
"title":"test",
"content":"test",
"publish":"2016-01-01",
"delete_url":"http://192.241.153.25:8000/api/posts/test-2/delete/"
},
...
]
</pre>
<p>It doesn't have a <code>results</code> property. I didn't look closely into what you were doing with it, but you probably want this (notice the absence of <code>"results"</code>):</p>
<pre><code>$http.get(this.url).success(function(data) {
var items = data;
for (var i = 0; i < items.length; i++) {
this.items.push(items[i]);
}
...
</code></pre>
<p>Or you want to change up what you're returning so that it has a results property.</p>
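<p>A sketch of a client-side parse that tolerates both shapes (the bare list the API currently returns, and the paginated <code>results</code>/<code>next</code> envelope it would return once a pagination class is enabled):</p>

```python
import json

def extract_items(payload):
    """Return (items, next_url) for either a bare list or a paginated dict."""
    data = json.loads(payload)
    if isinstance(data, dict) and "results" in data:
        return data["results"], data.get("next")
    return data, None   # unpaginated: the whole body is the item list
```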
| 1 | 2016-07-26T04:18:16Z | [
"python",
"angularjs",
"django"
] |