title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to rearrange a multidimensional array | 38,673,268 | <p>So for various reasons that are outside the scope of this post, I am writing a file that transfers data from one place to another. I have some of the data held in a series of multidimensional arrays</p>
<p>Let's pretend I have a 4-dimensional array with the following shape/dimensions:
[x, y, z, n]</p>
<p>How would I rearrange it into these dimensions:
[n, z, y, x]
OR
[z, y, n, x]</p>
<p>I am NOT looking for a short and quick answer or piece of code. I want to understand the answer so that in the future I can do it on my own.</p>
<p>My idea:
Flatten the array out with a series of nested for loops</p>
<p><code>for n in [n, :,:,:]
for x in [:, x, :,:]</code></p>
<p>so on and so forth until I unravel the whole thing into a one dimensional array. But I am not sure how exactly I would get it back in the form I would like</p>
| 0 | 2016-07-30T11:12:32Z | 38,673,357 | <p>For the horizontal flipping you only need one loop.
You need to go from the first position to the middle position, swapping each element with its counterpart on the opposite side. For example:</p>
<pre><code>let's say that we have an int length. and then:
for(int i=0;i<length/2;i++)
{
switch array[i] with array[length-1-i]
}
</code></pre>
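<p>Since the question asks for understanding rather than a ready-made snippet: reordering axes [x, y, z, n] into [n, z, y, x] just means the element at index (i, j, k, m) ends up at index (m, k, j, i). A pure-Python sketch of that index remapping (with NumPy, <code>numpy.transpose</code> does the same job without copying; the function and variable names here are my own):</p>

```python
from itertools import product

def reorder_axes(arr, dims, perm):
    """Return a new nested list with the axes of `arr` permuted by `perm`.

    `dims` is the shape of `arr`; `perm[k]` names which old axis becomes
    new axis k, e.g. perm=(3, 2, 1, 0) turns [x, y, z, n] into [n, z, y, x].
    """
    new_dims = [dims[p] for p in perm]

    # Allocate the target nested-list structure, filled with None.
    def empty(shape):
        if not shape:
            return None
        return [empty(shape[1:]) for _ in range(shape[0])]

    out = empty(new_dims)

    # Walk every old index tuple and write the value to its permuted position.
    for idx in product(*(range(d) for d in dims)):
        val = arr
        for i in idx:
            val = val[i]
        new_idx = [idx[p] for p in perm]
        tgt = out
        for i in new_idx[:-1]:
            tgt = tgt[i]
        tgt[new_idx[-1]] = val
    return out

# 2x1x1x3 array where each element records its own original index.
a = [[[[(x, y, z, n) for n in range(3)] for z in range(1)]
      for y in range(1)] for x in range(2)]
b = reorder_axes(a, (2, 1, 1, 3), (3, 2, 1, 0))  # shape becomes (3, 1, 1, 2)
print(b[2][0][0][1])  # (1, 0, 0, 2): b[n][z][y][x] == a[x][y][z][n]
```

<p>No flattening step is needed: the remapping writes each element directly to its new position.</p>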
| 0 | 2016-07-30T11:20:29Z | [
"python",
"arrays",
"multidimensional-array"
] |
Custom JSONField in Django | 38,673,401 | <p>I am trying to implement a custom JSON Field for my models using Django + MySQL. This is what my <strong>models.py</strong> looks like:</p>
<pre><code>from __future__ import unicode_literals
from django.db import models
from django.core.serializers.json import DjangoJSONEncoder
import json


class JSONField(models.TextField):
    """JSONField is a generic textfield that neatly serializes/unserializes
    JSON objects seamlessly"""

    # Used so to_python() is called
    __metaclass__ = models.SubfieldBase

    def to_python(self, value):
        """Convert our string value to JSON after we load it from the DB"""
        if value == "":
            return None
        try:
            if isinstance(value, basestring):
                return json.loads(value)
        except ValueError:
            pass
        return value

    def get_db_prep_save(self, value):
        """Convert our JSON object to a string before we save"""
        if value == "":
            return None
        if isinstance(value, dict):
            value = json.dumps(value, cls=DjangoJSONEncoder)
        return super(JSONField, self).get_db_prep_save(value)


# Articles / Content
class Content(models.Model):
    title = models.CharField(max_length=255)
    body = models.TextField()
    data = JSONField(blank=True, null=True)

    def __unicode__(self):
        return self.title

    def save(self, *args, **kwargs):
        self.data = {
            "name1": {
                "image_url": 'https://photosite.com/image1.jpg',
                "views": 0
            },
            "name2": {
                "image_url": 'https://photosite.com/image2.jpg',
                "views": 0
            }
        }
        super(Content, self).save(*args, **kwargs)
</code></pre>
<p>Basically, when a content is saved, I am trying to initialize the data field. However, I get this error right now: </p>
<pre><code>get_db_prep_save() got an unexpected keyword argument 'connection'
</code></pre>
<p>What am I exactly doing wrong? And how can I fix this? Any help would be appreciated.</p>
| 0 | 2016-07-30T11:26:03Z | 38,673,441 | <p>According to exception and <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.Field.get_db_prep_save" rel="nofollow">django docs</a>, your <code>get_db_prep_save</code> method should take one more argument, called <code>connection</code>, so this:</p>
<pre><code> def get_db_prep_save(self, value, connection):
"""Convert our JSON object to a string before we save"""
if value == "":
return None
if isinstance(value, dict):
value = json.dumps(value, cls=DjangoJSONEncoder)
return super(JSONField, self).get_db_prep_save(value, connection)
</code></pre>
<p>should be okay.</p>
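<p>For what it's worth, the conversion the two methods perform is just a JSON round trip, which can be sanity-checked without any Django machinery. A minimal stdlib sketch of the same contract (Python 3 here, so <code>str</code> stands in for <code>basestring</code>; the function names are mine):</p>

```python
import json

def to_python(value):
    """DB -> Python: parse JSON strings, pass everything else through."""
    if value == "":
        return None
    if isinstance(value, str):
        try:
            return json.loads(value)
        except ValueError:
            pass
    return value

def to_db(value):
    """Python -> DB: serialize dicts to a JSON string."""
    if value == "":
        return None
    if isinstance(value, dict):
        return json.dumps(value)
    return value

data = {"name1": {"views": 0}}
stored = to_db(data)          # a JSON string, ready for a TEXT column
print(to_python(stored) == data)  # True: the round trip is lossless
```

<p>The Django-specific part is only the <code>connection</code> argument threading; the serialization logic itself is this simple.</p>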
| 0 | 2016-07-30T11:29:14Z | [
"python",
"mysql",
"json",
"django",
"django-jsonfield"
] |
lxml is not detecting empty div as expected | 38,673,474 | <p>For the below input, <code>lxml</code> modifies the <code>div</code> as if it understands that <code>div</code> can't be inside <code>p</code>.</p>
<p>Can anyone tell me how to just get the <code><div></div></code> for this type of input? I want to correct the input HTML.</p>
<p>Do I need to switch to <code>BeautifulSoup</code>?</p>
<pre><code>from lxml import etree
html_string = """
<html>
<head>
<title></title>
</head>
<body>
<p align="center">
<div></div>
This line should be centered.
</p>
<table>
<tbody>
<tr>
<td>
<div></div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
"""
html_element = etree.fromstring(html_string)
page_break_elements = html_element.xpath("//div")
(Pdb) etree.tostring(html_element[1][0][0])
b'<div/>\n This line should be centered.\n '
</code></pre>
<p>I just want the below element to move it around.</p>
<pre><code><div></div>
</code></pre>
<p>For anyone curious, these are page-break <code>div</code>s used for PDF generation <code><div style="page-break-after:always"></div></code> that specify page-breaks. I get input from TinyMCE which doesn't position it correctly so I am trying to move it to the <code>body</code> element.</p>
<p>Output Desired</p>
<pre><code>from lxml import etree
html_string = """
<html>
<head>
<title></title>
</head>
<body>
<div></div>
<p align="center">
This line should be centered.
</p>
<div></div>
<table>
<tbody>
<tr>
<td>
</td>
</tr>
</tbody>
</table>
</body>
</html>
"""
</code></pre>
| 0 | 2016-07-30T11:33:12Z | 38,674,670 | <p>You need to pass some additional arguments to change the behavior of tostring():</p>
<pre><code>etree.tostring(page_break_elements[0], method="html", with_tail=False)
b'<div></div>'
</code></pre>
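<p>The trailing text shows up because ElementTree-style APIs store the text that follows an element's closing tag in that element's <code>.tail</code> attribute rather than in the parent; <code>with_tail=False</code> simply tells lxml's <code>tostring</code> to omit it. The same storage model can be seen with the stdlib parser:</p>

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<p><div></div>This line should be centered.</p>")
div = root.find("div")

# The text after </div> belongs to the div element itself, as its tail.
print(div.tail)  # This line should be centered.

# Serializing the element alone therefore includes the tail by default.
print(ET.tostring(div))

# Blanking the tail is the stdlib equivalent of lxml's with_tail=False.
div.tail = None
print(ET.tostring(div))  # b'<div />'
```

<p>So the <code>div</code> was never merged with the text; the text just travels with it during serialization unless told otherwise.</p>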
| 0 | 2016-07-30T13:51:20Z | [
"python",
"lxml"
] |
lxml is not detecting empty div as expected | 38,673,474 | <p>For the below input, <code>lxml</code> modifies the <code>div</code> as if it understands that <code>div</code> can't be inside <code>p</code>.</p>
<p>Can anyone tell me how to just get the <code><div></div></code> for this type of input? I want to correct the input HTML.</p>
<p>Do I need to switch to <code>BeautifulSoup</code>?</p>
<pre><code>from lxml import etree
html_string = """
<html>
<head>
<title></title>
</head>
<body>
<p align="center">
<div></div>
This line should be centered.
</p>
<table>
<tbody>
<tr>
<td>
<div></div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
"""
html_element = etree.fromstring(html_string)
page_break_elements = html_element.xpath("//div")
(Pdb) etree.tostring(html_element[1][0][0])
b'<div/>\n This line should be centered.\n '
</code></pre>
<p>I just want the below element to move it around.</p>
<pre><code><div></div>
</code></pre>
<p>For anyone curious, these are page-break <code>div</code>s used for PDF generation <code><div style="page-break-after:always"></div></code> that specify page-breaks. I get input from TinyMCE which doesn't position it correctly so I am trying to move it to the <code>body</code> element.</p>
<p>Output Desired</p>
<pre><code>from lxml import etree
html_string = """
<html>
<head>
<title></title>
</head>
<body>
<div></div>
<p align="center">
This line should be centered.
</p>
<div></div>
<table>
<tbody>
<tr>
<td>
</td>
</tr>
</tbody>
</table>
</body>
</html>
"""
</code></pre>
| 0 | 2016-07-30T11:33:12Z | 38,678,456 | <p>You can use the <a href="http://lxml.de/elementsoup.html" rel="nofollow">soupparser</a> in <em>lxml</em> and still process the data with xpaths etc..:</p>
<pre><code>from lxml.html.soupparser import fromstring
html_element = fromstring(html_string)
</code></pre>
<p>That will maintain <code><div></div></code> inside the p.</p>
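<p>Once the document is parsed leniently, the actual fix the question wants (hoisting each page-break <code>div</code> up to <code>body</code>) is plain tree surgery: detach the <code>div</code> from its parent and insert it before that parent. A stdlib ElementTree sketch of the operation (lxml elements support the same <code>remove</code>/<code>insert</code> calls, plus conveniences like <code>getparent()</code> and <code>addprevious()</code>; the sample markup is mine):</p>

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<html><body><p><div></div>text</p></body></html>")
body = root.find("body")
p = body.find("p")
div = p.find("div")

# Keep the text that trailed the div inside the paragraph,
# since removing an element takes its .tail with it.
p.text = (p.text or "") + (div.tail or "")
div.tail = None
p.remove(div)

# Reinsert the div in <body>, just before the paragraph it came from.
body.insert(list(body).index(p), div)

print(ET.tostring(body))  # b'<body><div /><p>text</p></body>'
```

<p>Applied to every page-break <code>div</code>, this produces the "Output Desired" layout from the question.</p>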
| 1 | 2016-07-30T21:01:09Z | [
"python",
"lxml"
] |
Don't show me again checkbox | 38,673,481 | <p>I want a window with a "don't show me again" checkbox (a Toplevel window). If this box gets checked, I don't want it to show me this window another time.</p>
<pre><code>import configparser
from tkinter import *

config = configparser.RawConfigParser()  # my ini file
config.add_section('Section1')
config.set('Section1', 'a_bool', 'False')
with open('settings.ini', 'w') as configfile:
    config.write(configfile)

root = Tk()

def var_states():  # write to ini file
    global mt
    print(config.read('ayrlar.ini'))
    if var1 == True:
        config.set('Section1', 'a_bool', 'True')
        with open('settings.ini', 'w') as configfile:
            config.write(configfile)
        global window
        window.destroy()
    elif var1 == False:
        config.set('Section1', 'a_bool', 'False')
        with open('settings.ini', 'w') as configfile:
            config.write(configfile)
        global window
        window.destroy()

var1 = config.getboolean('Section1', 'a_bool')

def show():  # if checkbox is true
    global window  # dont show
    window = Toplevel(root)
    Checkbutton(window, text="Don't show me again", variable=var1).place(x=0, y=0)
    Button(window, text='Okey', command=var_states).place(x=0, y=25)

root.after(10, show)
root.mainloop()
</code></pre>
<p>When the program is run another time, I don't want it to show this window. How can I do this with <code>ConfigParser</code>?</p>
| -1 | 2016-07-30T11:34:05Z | 38,673,937 | <p>For the variable of a <code>Checkbutton</code> to work, it should be an <code>IntVar</code> or a <code>BooleanVar</code>. So I replaced the <code>var1</code> in your code by a <code>BooleanVar</code>:</p>
<pre><code>def var_states():  # write to ini file
    global mt
    print(config.read('ayrlar.ini'))
    if var1.get():
        config.set('Section1', 'a_bool', 'True')
        with open('settings.ini', 'w') as configfile:
            config.write(configfile)
        global window
        window.destroy()
    else:
        config.set('Section1', 'a_bool', 'False')
        with open('settings.ini', 'w') as configfile:
            config.write(configfile)
        global window
        window.destroy()

var1 = BooleanVar(root, value=config.getboolean('Section1', 'a_bool'))
</code></pre>
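<p>The persistence half of the problem is independent of Tk and can be tested on its own: write the flag, re-read it with a fresh parser (simulating a program restart), and branch on it. A minimal sketch with the stdlib <code>configparser</code> (the helper names and temp-file path are mine):</p>

```python
import configparser
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "settings_demo.ini")

def save_flag(dont_show):
    """Persist the checkbox state to the ini file."""
    config = configparser.RawConfigParser()
    config.add_section("Section1")
    config.set("Section1", "a_bool", str(dont_show))
    with open(path, "w") as f:
        config.write(f)

def load_flag():
    """Read the flag with a fresh parser, as a restarted program would."""
    config = configparser.RawConfigParser()
    config.read(path)  # a missing file just leaves the section absent
    if config.has_option("Section1", "a_bool"):
        return config.getboolean("Section1", "a_bool")
    return False

save_flag(True)            # user ticked "don't show me again"
print(load_flag())         # True; next launch: skip opening the Toplevel
```

<p>In the Tk code, <code>show()</code> would then only create the <code>Toplevel</code> when <code>load_flag()</code> returns <code>False</code>.</p>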
| 0 | 2016-07-30T12:25:08Z | [
"python",
"tkinter",
"configparser"
] |
Subset in Python output error - HackerRank | 38,673,491 | <p><strong><em>This is a question from <a href="https://www.hackerrank.com/challenges/py-check-subset" rel="nofollow">HackerRank</a></em></strong></p>
<p>You are given two sets <code>A</code> and <code>B</code>.</p>
<p>Your job is to find whether set <code>A</code> is a subset of set <code>B</code>.</p>
<p>If set <code>A</code> is subset of set <code>B</code> print True.</p>
<p>If set <code>A</code> is not a subset of set <code>B</code> print False.</p>
<hr>
<p>Input Format:</p>
<p>The first line will contain the number of test cases <code>T</code>.</p>
<p>The first line of each test case contains the number of elements in set <code>A</code>.</p>
<p>The second line of each test case contains the space separated elements of set <code>A</code>.</p>
<p>The third line of each test case contains the number of elements in set <code>B</code>.</p>
<p>The fourth line of each test case contains the space separated elements of set <code>B</code>.</p>
<hr>
<p>Output Format:</p>
<p>Output True or False for each test case on separate lines.</p>
<hr>
<p>Sample Input:</p>
<pre><code>3
5
1 2 3 5 6
9
9 8 5 6 3 2 1 4 7
1
2
5
3 6 5 4 1
7
1 2 3 5 6 8 9
3
9 8 2
</code></pre>
<hr>
<p>Sample Output:</p>
<pre><code>True
False
False
</code></pre>
<p>I coded this and it worked fine. The output and expected output matches but the output is claimed to be wrong. I even checked if it was because of any trailing whitespace characters. Where am I going wrong ?</p>
<pre><code>for i in range(int(raw_input())):
    a = int(raw_input()); A = set(raw_input().split())
    b = int(raw_input()); B = set(raw_input().split())
    if(b<a):
        print "False"
    else:
        print A.issubset(B)
</code></pre>
<p><a href="http://i.stack.imgur.com/f0p8K.png" rel="nofollow"><img src="http://i.stack.imgur.com/f0p8K.png" alt="enter image description here"></a></p>
| -1 | 2016-07-30T11:34:41Z | 38,673,509 | <p>The problem specification says this:</p>
<blockquote>
<p>Note: More than 4 lines will result in a score of zero. Blank lines won't be counted.</p>
</blockquote>
<p>Your solution uses 7 lines, so it counts as a failure.</p>
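<p>Independent of the line-count rule, the <code>b&lt;a</code> guard is both redundant (<code>issubset</code> already returns <code>False</code> in that case) and potentially wrong, because <code>set()</code> deduplicates the input, so comparing the raw counts <code>a</code> and <code>b</code> is not meaningful. A compact Python 3 sketch of the per-test-case check, factored as a function (the function name is mine; the original is Python 2):</p>

```python
def a_is_subset_of_b(a_line, b_line):
    """Each argument is one space-separated line of set elements."""
    return set(a_line.split()) <= set(b_line.split())

# Two test cases from the sample input:
print(a_is_subset_of_b("1 2 3 5 6", "9 8 5 6 3 2 1 4 7"))  # True
print(a_is_subset_of_b("1 2 3 5 6 8 9", "9 8 2"))          # False
```

<p>Dropping the guard also shortens the loop body enough to fit the challenge's line limit.</p>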
| 1 | 2016-07-30T11:36:35Z | [
"python",
"python-2.7",
"set",
"subset"
] |
How to store prices for every pair of objects | 38,673,512 | <p>I'm working on a web app. This app has model <code>Location</code> which can be New York, Wienna, Paris etc. </p>
<p>Now I want to have some data structure/table which holds <strong>price</strong> for any tuple from set of <code>Location</code> objects. </p>
<p>So if there are 3 objects yet - NY, PA, WI, I have to store <strong>price</strong> for:</p>
<pre><code>NY - PA #(since there is NY - PA price, I don't have to store PA - NY which is the same)
NY - WI
PA - WI
</code></pre>
<p>And I want admin to be able to add/change prices for any tuple. </p>
<p>What should I do? I thought about some grid which would hold information about prices but I don't know how to simulate such grid in <code>Django admin</code> and <code>Django ORM</code>.</p>
<p>What I've done so far is creation of a model <code>CityPrice</code> which looks like this:</p>
<pre><code>class CityPrice(models.Model):
    city_one = models.ForeignKey(City, related_name='city_tuple')
    city_two = models.ForeignKey(City)
    price = models.DecimalField(max_digits=8, decimal_places=2, blank=True, null=True)

    class Meta:
        unique_together = (('city_one', 'city_two'),)
</code></pre>
<p>But as you can see there are multiple problems here. One problem is that the cities are not "equal": one has to be <code>city_one</code> and the other <code>city_two</code>. Another problem is that the admin would have to open a new tab in <code>Django-Admin</code> for each tuple and change the price there, which is very inconvenient.</p>
| 0 | 2016-07-30T11:36:54Z | 38,673,681 | <p>What I think would be a nice solution is to give each <code>Location</code> model a <code>ManyToManyField</code> which links to the <code>CityPrice</code> model. That way many <code>Location</code> models can share the same price and you will be able to have multiple prices assigned to a single city.</p>
<pre><code>class Location(models.Model):
    # your stuff
    price = models.ManyToManyField(CityPrice, related_name='locations')
</code></pre>
<p>You can then get the cities which share the same price by:</p>
<pre><code>price_object.locations.all()
</code></pre>
<p>More information and examples about it here: <a href="https://docs.djangoproject.com/en/1.9/topics/db/examples/many_to_many/" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/db/examples/many_to_many/</a></p>
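<p>An alternative that keeps the original <code>CityPrice</code> model is to remove the asymmetry by normalizing every pair to a canonical order before reading or writing, so (NY, PA) and (PA, NY) always hit the same row. The idea in plain Python, with a dict standing in for the table (all names are mine):</p>

```python
prices = {}  # stands in for the CityPrice table

def canonical(city_a, city_b):
    """Order the pair deterministically, e.g. alphabetically (or by pk)."""
    return tuple(sorted((city_a, city_b)))

def set_price(city_a, city_b, price):
    prices[canonical(city_a, city_b)] = price

def get_price(city_a, city_b):
    return prices[canonical(city_a, city_b)]

set_price("NY", "PA", 120)
print(get_price("PA", "NY"))  # 120: either order finds the same entry
```

<p>In Django terms the same normalization can live in a <code>save()</code> override or a custom manager, so that <code>city_one</code> is always the smaller key; that also lets the <code>unique_together</code> constraint cover both orders.</p>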
| 0 | 2016-07-30T11:58:41Z | [
"python",
"django",
"django-models",
"django-admin",
"django-orm"
] |
Multiply numpy int and float arrays | 38,673,531 | <p>I'd like to multiply an <code>int16</code> array by a <code>float</code> array, with auto rounding, but this fails:</p>
<pre><code>import numpy
A = numpy.array([1, 2, 3, 4], dtype=numpy.int16)
B = numpy.array([0.5, 2.1, 3, 4], dtype=numpy.float64)
A *= B
</code></pre>
<p>I get: </p>
<blockquote>
<p>TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int16') with casting rule 'same_kind'</p>
</blockquote>
| 2 | 2016-07-30T11:39:30Z | 38,677,336 | <p>You could multiply the two arrays elementwise (see NumPy <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a>) and keep only the integer part; note that <code>astype(int)</code> truncates toward zero rather than rounding:</p>
<pre><code>In [2]: (A*B).astype(int)
Out[2]: array([ 0, 4, 9, 16])
</code></pre>
<p><strong>Timing comparison:</strong></p>
<pre><code>In [8]: %timeit (A*B).astype(int)
1000000 loops, best of 3: 1.65 µs per loop
In [9]: %timeit np.multiply(A, B, out=A, casting='unsafe')
100000 loops, best of 3: 2.01 µs per loop
</code></pre>
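<p>If the "auto rounding" in the question means round-to-nearest rather than truncation, <code>np.rint</code> before the cast does that; and the in-place form from the question works once the float-to-int cast is explicitly declared unsafe. A sketch, assuming NumPy is available:</p>

```python
import numpy as np

A = np.array([1, 2, 3, 4], dtype=np.int16)
B = np.array([0.5, 2.1, 3, 4], dtype=np.float64)

# Round to nearest (half to even, so 0.5 -> 0), then cast back to int16.
rounded = np.rint(A * B).astype(np.int16)   # [0, 4, 9, 16]

# In-place variant: tell NumPy the float64 -> int16 cast is intentional.
# This truncates (0.5 -> 0, 4.2 -> 4), matching the astype(int) answer.
A2 = np.array([1, 2, 3, 4], dtype=np.int16)
np.multiply(A2, B, out=A2, casting='unsafe')
print(A2)  # [ 0  4  9 16]
```

<p>The error in the question is NumPy refusing to do that narrowing cast silently; both variants just make the intent explicit.</p>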
| 1 | 2016-07-30T18:40:33Z | [
"python",
"arrays",
"numpy",
"floating-point",
"int"
] |
Multiply numpy int and float arrays | 38,673,531 | <p>I'd like to multiply an <code>int16</code> array by a <code>float</code> array, with auto rounding, but this fails:</p>
<pre><code>import numpy
A = numpy.array([1, 2, 3, 4], dtype=numpy.int16)
B = numpy.array([0.5, 2.1, 3, 4], dtype=numpy.float64)
A *= B
</code></pre>
<p>I get: </p>
<blockquote>
<p>TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int16') with casting rule 'same_kind'</p>
</blockquote>
| 2 | 2016-07-30T11:39:30Z | 38,677,658 | <pre><code>import numpy as np
A = np.float_(A)
A *= B
</code></pre>
<p>Try this. I think the multiplication fails because the arrays have different dtypes; casting <code>A</code> to float first avoids the in-place downcast.</p>
| 1 | 2016-07-30T19:18:09Z | [
"python",
"arrays",
"numpy",
"floating-point",
"int"
] |
Can I use Hendrix to run Falcon app? | 38,673,550 | <p>Hendrix is a WSGI-compatible server written in Tornado. I was wondering if it can be used to run an app written in Falcon?</p>
| 1 | 2016-07-30T11:42:18Z | 38,676,247 | <p>So I found the solution: I created a Python file according to Hendrix's docs and imported my app's WSGI callable there.</p>
| 1 | 2016-07-30T16:44:26Z | [
"python",
"tornado",
"wsgi",
"falcon",
"hendrix"
] |
How to implement python script to run on "N" number of CPU CORES? | 38,673,565 | <p>I have made a script to optimize a particular part of a structure (scientific terms, you can ignore them). The main purpose of the script is optimization, and most of the time is spent in the two functions <code>optimize()</code> and <code>refine()</code>, where it uses only one CPU out of the 4 in my local system. I want to make this script use all 4 CPUs, especially for these two functions.</p>
<p>I don't know much about multiprocessing/multicore, but I tried the multiprocessing module and it fails to use all the CPUs. If someone knows how to make the script run on all available CPUs, any suggestion would be really helpful.</p>
<p>MY SCRIPT:</p>
<pre><code>import sys
import os

from modeller import *
from modeller.optimizers import molecular_dynamics, conjugate_gradients
from modeller.automodel import autosched


def optimize(atmsel, sched):
    for step in sched:
        step.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)
    refine(atmsel)
    cg = conjugate_gradients()
    cg.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)


def refine(atmsel):
    md = molecular_dynamics(cap_atom_shift=0.39, md_time_step=4.0,
                            md_return='FINAL')
    init_vel = True
    for (its, equil, temps) in ((200, 20, (150.0, 250.0, 400.0, 700.0, 1000.0)),
                                (200, 600,
                                 (1000.0, 800.0, 600.0, 500.0, 400.0, 300.0))):
        for temp in temps:
            md.optimize(atmsel, init_velocities=init_vel, temperature=temp,
                        max_iterations=its, equilibrate=equil)
        init_vel = False


def make_restraints(mdl1, aln):
    rsr = mdl1.restraints
    rsr.clear()
    s = selection(mdl1)
    for typ in ('stereo', 'phi-psi_binormal'):
        rsr.make(s, restraint_type=typ, aln=aln, spline_on_site=True)
    for typ in ('omega', 'chi1', 'chi2', 'chi3', 'chi4'):
        rsr.make(s, restraint_type=typ+'_dihedral', spline_range=4.0,
                 spline_dx=0.3, spline_min_points=5, aln=aln,
                 spline_on_site=True)


log.verbose()
env = environ(rand_seed=int(-4243))
env.io.hetatm = True
env.edat.dynamic_sphere = False
env.edat.dynamic_lennard = True
env.edat.contact_shell = 4.0
env.edat.update_dynamic = 0.39
env.libs.topology.read(file='$(LIB)/top_heav.lib')
env.libs.parameters.read(file='$(LIB)/par.lib')

mdl1 = model(env, file="3O26")
ali = alignment(env)
ali.append_model(mdl1, atom_files="3O26.pdb", align_codes="3O26")

s = selection(mdl1.chains["A"].residues["275"])
s.mutate(residue_type="ALA")
ali.append_model(mdl1, align_codes="3O26")
mdl1.clear_topology()
mdl1.generate_topology(ali[-1])
mdl1.transfer_xyz(ali)
mdl1.build(initialize_xyz=False, build_method='INTERNAL_COORDINATES')

mdl2 = model(env, file="3O26.pdb")
mdl1.res_num_from(mdl2, ali)
mdl1.write(file="3O26"+"ALA"+"275"+"A"+'.tmp')
mdl1.read(file="3O26"+"ALA"+"275"+"A"+'.tmp')

make_restraints(mdl1, ali)
mdl1.env.edat.nonbonded_sel_atoms = 1
sched = autosched.loop.make_for_model(mdl1)

s = selection(mdl1.atoms['CA:'+"275"+':'+"A"].select_sphere(5)).by_residue()
mdl1.restraints.unpick_all()
mdl1.restraints.pick(s)
s.energy()
s.randomize_xyz(deviation=4.0)

mdl1.env.edat.nonbonded_sel_atoms = 2
optimize(s, sched)
mdl1.env.edat.nonbonded_sel_atoms = 1
optimize(s, sched)

s.energy()
atmsel = selection(mdl1.chains["A"])
score = atmsel.assess_dope()
mdl1.write(file="hi.pdb")
os.remove("3O26"+"ALA"+"275"+"A"+'.tmp')

from multiprocessing import Process

if __name__ == '__main__':
    p = Process(target=optimize, args=(atmsel, sched))
    p.start()
    p.join()
</code></pre>
<p>For a demo, kindly save this ( <a href="http://files.rcsb.org/view/3o26.pdb" rel="nofollow">http://files.rcsb.org/view/3o26.pdb</a>) as a file named 3O26.pdb and keep it in the same directory.</p>
<p>Thanks in advance.</p>
<p>Based on @Dinesh's suggestion, I have modified the code to use the pp module. It now uses all the cores, but I am getting some errors that I couldn't figure out.</p>
<p>Modified script:</p>
<pre><code>import sys
import os
import pp

from modeller import *
from modeller.optimizers import molecular_dynamics, conjugate_gradients
from modeller.automodel import autosched


def optimize(atmsel, sched):
    for step in sched:
        step.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)
    refine(atmsel)
    cg = conjugate_gradients()
    cg.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)


def refine(atmsel):
    md = molecular_dynamics(cap_atom_shift=0.39, md_time_step=4.0,
                            md_return='FINAL')
    init_vel = True
    for (its, equil, temps) in ((200, 20, (150.0, 250.0, 400.0, 700.0, 1000.0)),
                                (200, 600,
                                 (1000.0, 800.0, 600.0, 500.0, 400.0, 300.0))):
        for temp in temps:
            md.optimize(atmsel, init_velocities=init_vel, temperature=temp,
                        max_iterations=its, equilibrate=equil)
        init_vel = False


def make_restraints(mdl1, aln):
    rsr = mdl1.restraints
    rsr.clear()
    s = selection(mdl1)
    for typ in ('stereo', 'phi-psi_binormal'):
        rsr.make(s, restraint_type=typ, aln=aln, spline_on_site=True)
    for typ in ('omega', 'chi1', 'chi2', 'chi3', 'chi4'):
        rsr.make(s, restraint_type=typ + '_dihedral', spline_range=4.0,
                 spline_dx=0.3, spline_min_points=5, aln=aln,
                 spline_on_site=True)


################################### PPMODULE ############################
def main(s, sched):
    print s, "*************************************************************************"
    ppservers = ()
    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        job_server = pp.Server(ppservers=ppservers)
    print "Starting pp with", job_server.get_ncpus(), "workers"
    job_server.submit(optimize, (s, sched,), (refine,),
                      ("from modeller.optimizers import molecular_dynamics, conjugate_gradients",))()
#################################### PPMODULE ############################


if __name__ == "__main__":
    log.verbose()
    env = environ(rand_seed=int(-4345))
    env.io.hetatm = True
    env.edat.dynamic_sphere = False
    env.edat.dynamic_lennard = True
    env.edat.contact_shell = 4.0
    env.edat.update_dynamic = 0.39
    env.libs.topology.read(file='$(LIB)/top_heav.lib')
    env.libs.parameters.read(file='$(LIB)/par.lib')

    mdl1 = model(env, file="3O26")
    ali = alignment(env)
    ali.append_model(mdl1, atom_files="3O26.pdb", align_codes="3O26")

    s = selection(mdl1.chains["A"].residues["275"])
    s.mutate(residue_type="ALA")
    ali.append_model(mdl1, align_codes="3O26")
    mdl1.clear_topology()
    mdl1.generate_topology(ali[-1])
    mdl1.transfer_xyz(ali)
    mdl1.build(initialize_xyz=False, build_method='INTERNAL_COORDINATES')

    mdl2 = model(env, file="3O26.pdb")
    mdl1.res_num_from(mdl2, ali)
    mdl1.write(file="3O26" + "ALA" + "275" + "A" + '.tmp')
    mdl1.read(file="3O26" + "ALA" + "275" + "A" + '.tmp')

    make_restraints(mdl1, ali)
    mdl1.env.edat.nonbonded_sel_atoms = 1
    sched = autosched.loop.make_for_model(mdl1)

    s = selection(mdl1.atoms['CA:' + "275" + ':' + "A"].select_sphere(15)).by_residue()
    mdl1.restraints.unpick_all()
    mdl1.restraints.pick(s)
    s.energy()
    s.randomize_xyz(deviation=4.0)

    mdl1.env.edat.nonbonded_sel_atoms = 2
    main(s, sched)
    mdl1.env.edat.nonbonded_sel_atoms = 1
    main(s, sched)

    s.energy()
    atmsel = selection(mdl1.chains["A"])
    score = atmsel.assess_dope()
    mdl1.write(file="current.pdb")
    os.remove("3O26" + "ALA" + "275" + "A" + '.tmp')
</code></pre>
<p>ERROR:</p>
<pre><code>randomi_498_> Atoms,selected atoms,random_seed,amplitude: 2302 558 1 4.0000
randomi_496_> Amplitude is > 0; randomization is done.
<Selection of 558 atoms> *************************************************************************
Starting pp with 4 workers
Traceback (most recent call last):
  File "mutate_model.py", line 88, in <module>
    main(s, sched)
  File "m_m.py", line 52, in main
    job_server.submit(optimize,(s,sched,),(refine,),("from modeller.optimizers import molecular_dynamics, conjugate_gradients",))()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 460, in submit
    sfunc = self.__dumpsfunc((func, ) + depfuncs, modules)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 638, in __dumpsfunc
    sources = [self.__get_source(func) for func in funcs]
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 705, in __get_source
    sourcelines = inspect.getsourcelines(func)[0]
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 690, in getsourcelines
    lines, lnum = findsource(object)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 526, in findsource
    file = getfile(object)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 408, in getfile
    raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <module '__builtin__' (built-in)> is a built-in class
</code></pre>
| 0 | 2016-07-30T11:44:07Z | 38,677,305 | <p>Please check the <code>pp</code> (Parallel Python) module: <a href="http://www.parallelpython.com/" rel="nofollow">Parallelpython.com</a></p>
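<p>As a general pattern, independent of Modeller (whose objects may not serialize cleanly across workers, which is one reason naive attempts fail), fanning independent optimization runs across workers looks like this with the stdlib <code>concurrent.futures</code> API. The sketch uses <code>ThreadPoolExecutor</code> so it runs anywhere; for CPU-bound work you would swap in <code>ProcessPoolExecutor</code>, which has the same interface (the worker function here is a made-up stand-in, not Modeller's):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def optimize_one(seed):
    # Stand-in for one independent optimization run (e.g. one random seed).
    total = 0
    for i in range(1000):
        total += (i * seed) % 7
    return seed, total

seeds = [1, 2, 3, 4]

# Submit one job per seed; the workers run them concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(optimize_one, seeds))

print(sorted(results))  # [1, 2, 3, 4]
```

<p>The key design point is that each job must be self-contained: it takes picklable inputs and returns picklable outputs, rather than mutating shared library state the way the Modeller script does.</p>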
| 0 | 2016-07-30T18:36:42Z | [
"python",
"multithreading",
"parallel-processing",
"multiprocessing",
"multicore"
] |
How to implement python script to run on "N" number of CPU CORES? | 38,673,565 | <p>I have made a script to optimize a particular part of a structure (scientific terms, you can ignore them). The main purpose of the script is optimization, and most of the time is spent in the two functions <code>optimize()</code> and <code>refine()</code>, where it uses only one CPU out of the 4 in my local system. I want to make this script use all 4 CPUs, especially for these two functions.</p>
<p>I don't know much about multiprocessing/multicore, but I tried the multiprocessing module and it fails to use all the CPUs. If someone knows how to make the script run on all available CPUs, any suggestion would be really helpful.</p>
<p>MY SCRIPT:</p>
<pre><code>import sys
import os

from modeller import *
from modeller.optimizers import molecular_dynamics, conjugate_gradients
from modeller.automodel import autosched


def optimize(atmsel, sched):
    for step in sched:
        step.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)
    refine(atmsel)
    cg = conjugate_gradients()
    cg.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)


def refine(atmsel):
    md = molecular_dynamics(cap_atom_shift=0.39, md_time_step=4.0,
                            md_return='FINAL')
    init_vel = True
    for (its, equil, temps) in ((200, 20, (150.0, 250.0, 400.0, 700.0, 1000.0)),
                                (200, 600,
                                 (1000.0, 800.0, 600.0, 500.0, 400.0, 300.0))):
        for temp in temps:
            md.optimize(atmsel, init_velocities=init_vel, temperature=temp,
                        max_iterations=its, equilibrate=equil)
        init_vel = False


def make_restraints(mdl1, aln):
    rsr = mdl1.restraints
    rsr.clear()
    s = selection(mdl1)
    for typ in ('stereo', 'phi-psi_binormal'):
        rsr.make(s, restraint_type=typ, aln=aln, spline_on_site=True)
    for typ in ('omega', 'chi1', 'chi2', 'chi3', 'chi4'):
        rsr.make(s, restraint_type=typ+'_dihedral', spline_range=4.0,
                 spline_dx=0.3, spline_min_points=5, aln=aln,
                 spline_on_site=True)


log.verbose()
env = environ(rand_seed=int(-4243))
env.io.hetatm = True
env.edat.dynamic_sphere = False
env.edat.dynamic_lennard = True
env.edat.contact_shell = 4.0
env.edat.update_dynamic = 0.39
env.libs.topology.read(file='$(LIB)/top_heav.lib')
env.libs.parameters.read(file='$(LIB)/par.lib')

mdl1 = model(env, file="3O26")
ali = alignment(env)
ali.append_model(mdl1, atom_files="3O26.pdb", align_codes="3O26")

s = selection(mdl1.chains["A"].residues["275"])
s.mutate(residue_type="ALA")
ali.append_model(mdl1, align_codes="3O26")
mdl1.clear_topology()
mdl1.generate_topology(ali[-1])
mdl1.transfer_xyz(ali)
mdl1.build(initialize_xyz=False, build_method='INTERNAL_COORDINATES')

mdl2 = model(env, file="3O26.pdb")
mdl1.res_num_from(mdl2, ali)
mdl1.write(file="3O26"+"ALA"+"275"+"A"+'.tmp')
mdl1.read(file="3O26"+"ALA"+"275"+"A"+'.tmp')

make_restraints(mdl1, ali)
mdl1.env.edat.nonbonded_sel_atoms = 1
sched = autosched.loop.make_for_model(mdl1)

s = selection(mdl1.atoms['CA:'+"275"+':'+"A"].select_sphere(5)).by_residue()
mdl1.restraints.unpick_all()
mdl1.restraints.pick(s)
s.energy()
s.randomize_xyz(deviation=4.0)

mdl1.env.edat.nonbonded_sel_atoms = 2
optimize(s, sched)
mdl1.env.edat.nonbonded_sel_atoms = 1
optimize(s, sched)

s.energy()
atmsel = selection(mdl1.chains["A"])
score = atmsel.assess_dope()
mdl1.write(file="hi.pdb")
os.remove("3O26"+"ALA"+"275"+"A"+'.tmp')

from multiprocessing import Process

if __name__ == '__main__':
    p = Process(target=optimize, args=(atmsel, sched))
    p.start()
    p.join()
</code></pre>
<p>For a demo, kindly save this ( <a href="http://files.rcsb.org/view/3o26.pdb" rel="nofollow">http://files.rcsb.org/view/3o26.pdb</a>) as a file named 3O26.pdb and keep it in the same directory.</p>
<p>Thanks in advance.</p>
<p>Based on @Dinesh's suggestion, I have modified the code to use the pp module. It now uses all the cores, but I am getting some errors that I couldn't figure out.</p>
<p>Modified script:</p>
<pre><code>import sys
import os
import pp

from modeller import *
from modeller.optimizers import molecular_dynamics, conjugate_gradients
from modeller.automodel import autosched


def optimize(atmsel, sched):
    for step in sched:
        step.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)
    refine(atmsel)
    cg = conjugate_gradients()
    cg.optimize(atmsel, max_iterations=200, min_atom_shift=0.001)


def refine(atmsel):
    md = molecular_dynamics(cap_atom_shift=0.39, md_time_step=4.0,
                            md_return='FINAL')
    init_vel = True
    for (its, equil, temps) in ((200, 20, (150.0, 250.0, 400.0, 700.0, 1000.0)),
                                (200, 600,
                                 (1000.0, 800.0, 600.0, 500.0, 400.0, 300.0))):
        for temp in temps:
            md.optimize(atmsel, init_velocities=init_vel, temperature=temp,
                        max_iterations=its, equilibrate=equil)
        init_vel = False


def make_restraints(mdl1, aln):
    rsr = mdl1.restraints
    rsr.clear()
    s = selection(mdl1)
    for typ in ('stereo', 'phi-psi_binormal'):
        rsr.make(s, restraint_type=typ, aln=aln, spline_on_site=True)
    for typ in ('omega', 'chi1', 'chi2', 'chi3', 'chi4'):
        rsr.make(s, restraint_type=typ + '_dihedral', spline_range=4.0,
                 spline_dx=0.3, spline_min_points=5, aln=aln,
                 spline_on_site=True)


################################### PPMODULE ############################
def main(s, sched):
    print s, "*************************************************************************"
    ppservers = ()
    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        job_server = pp.Server(ppservers=ppservers)
    print "Starting pp with", job_server.get_ncpus(), "workers"
    job_server.submit(optimize, (s, sched,), (refine,),
                      ("from modeller.optimizers import molecular_dynamics, conjugate_gradients",))()
#################################### PPMODULE ############################


if __name__ == "__main__":
    log.verbose()
    env = environ(rand_seed=int(-4345))
    env.io.hetatm = True
    env.edat.dynamic_sphere = False
    env.edat.dynamic_lennard = True
    env.edat.contact_shell = 4.0
    env.edat.update_dynamic = 0.39
    env.libs.topology.read(file='$(LIB)/top_heav.lib')
    env.libs.parameters.read(file='$(LIB)/par.lib')

    mdl1 = model(env, file="3O26")
    ali = alignment(env)
    ali.append_model(mdl1, atom_files="3O26.pdb", align_codes="3O26")

    s = selection(mdl1.chains["A"].residues["275"])
    s.mutate(residue_type="ALA")
    ali.append_model(mdl1, align_codes="3O26")
    mdl1.clear_topology()
    mdl1.generate_topology(ali[-1])
    mdl1.transfer_xyz(ali)
    mdl1.build(initialize_xyz=False, build_method='INTERNAL_COORDINATES')

    mdl2 = model(env, file="3O26.pdb")
    mdl1.res_num_from(mdl2, ali)
    mdl1.write(file="3O26" + "ALA" + "275" + "A" + '.tmp')
    mdl1.read(file="3O26" + "ALA" + "275" + "A" + '.tmp')

    make_restraints(mdl1, ali)
    mdl1.env.edat.nonbonded_sel_atoms = 1
    sched = autosched.loop.make_for_model(mdl1)

    s = selection(mdl1.atoms['CA:' + "275" + ':' + "A"].select_sphere(15)).by_residue()
    mdl1.restraints.unpick_all()
    mdl1.restraints.pick(s)
    s.energy()
    s.randomize_xyz(deviation=4.0)

    mdl1.env.edat.nonbonded_sel_atoms = 2
    main(s, sched)
    mdl1.env.edat.nonbonded_sel_atoms = 1
    main(s, sched)

    s.energy()
    atmsel = selection(mdl1.chains["A"])
    score = atmsel.assess_dope()
    mdl1.write(file="current.pdb")
    os.remove("3O26" + "ALA" + "275" + "A" + '.tmp')
</code></pre>
<p>ERROR:</p>
<pre><code>randomi_498_> Atoms,selected atoms,random_seed,amplitude: 2302 558 1 4.0000
randomi_496_> Amplitude is > 0; randomization is done.
<Selection of 558 atoms> *************************************************************************
Starting pp with 4 workers
Traceback (most recent call last):
File "mutate_model.py", line 88, in <module>
main(s, sched)
File "m_m.py", line 52, in main
job_server.submit(optimize,(s,sched,),(refine,),("from modeller.optimizers import molecular_dynamics, conjugate_gradients",))()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 460, in submit
sfunc = self.__dumpsfunc((func, ) + depfuncs, modules)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 638, in __dumpsfunc
sources = [self.__get_source(func) for func in funcs]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pp.py", line 705, in __get_source
sourcelines = inspect.getsourcelines(func)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 690, in getsourcelines
lines, lnum = findsource(object)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 526, in findsource
file = getfile(object)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 408, in getfile
raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <module '__builtin__' (built-in)> is a built-in class
</code></pre>
 | 0 | 2016-07-30T11:44:07Z | 38,786,046 | <p>Finally I solved it myself with another method, <code>multiprocessing.Pool</code>, based on the blog <a href="http://chriskiehl.com/article/parallelism-in-one-line/" rel="nofollow">http://chriskiehl.com/article/parallelism-in-one-line/</a> and <a href="https://pymotw.com/2/multiprocessing/basics.html" rel="nofollow">https://pymotw.com/2/multiprocessing/basics.html</a></p>
<p>Here is my Pseudo CODE:</p>
<pre><code>from multiprocessing import Pool
def get_mm_script(scripts):
    # I just created all my mm.py scripts from a string template
    return scripts
def run(filename):
    # here I use a system command to run each of my scripts
    return
if __name__ == '__main__':
scripts=get_mm_script(f)
pool = Pool(4)
pool.map(run, scripts)
pool.close()
pool.join()
</code></pre>
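The pseudo code above can be fleshed out into a runnable sketch. Note this is only a sketch: the script names and the body of `run` are hypothetical stand-ins, since the real `run` would invoke each generated MODELLER script with a system command.

```python
from multiprocessing import Pool

def get_mm_scripts():
    # hypothetical helper: the answer generates its mm.py scripts
    # from a string template, one script per mutation
    return ["mm_{}.py".format(i) for i in range(4)]

def run(script):
    # stand-in for "run one script with a system command"; here we
    # only build the command string instead of executing it
    return "python {}".format(script)

if __name__ == "__main__":
    scripts = get_mm_scripts()
    pool = Pool(4)                     # 4 worker processes
    results = pool.map(run, scripts)   # distributes the scripts, blocks until done
    pool.close()
    pool.join()
    print(results)
```

`Pool.map` handles the chunking and result ordering, which is why it replaces the manual `pp.Server.submit` calls from the question.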
| 1 | 2016-08-05T09:37:00Z | [
"python",
"multithreading",
"parallel-processing",
"multiprocessing",
"multicore"
] |
python: list index out of range error in code | 38,673,583 | <p>I'm pretty new to python, and really stuck at this.</p>
<p>Basically, I'm supposed to make a check code to check the last alphabet of the NRIC. My code works fine so long as there are 7 numbers (like there are supposed to be). However, my teacher just helped me find out that my code doesn't work whenever the number starts with 0. Below is my code.</p>
<pre><code>def check_code():
nricno = int(input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail."))
NRIC = [ int(x) for x in str(nricno) ]
a = NRIC[0]*2
b = NRIC[1]*7
c = NRIC[2]*6
d = NRIC[3]*5
e = NRIC[4]*4
f = NRIC[5]*3
g = NRIC[6]*2
SUM = int(a + b + c + d + e + f +g)
remainder = int(SUM % 11)
leftovers = int(11 - remainder)
rightovers = leftovers - 1
Alphabet = "ABCDEFGHIZJ"
checkcode = chr(ord('a') + rightovers)
print(checkcode)
check_code()
</code></pre>
<p>This is the way the NRIC is supposed to be calculated, in the image below.</p>
<p><a href="http://i.stack.imgur.com/lT0Dz.png" rel="nofollow">NRIC calculation help.</a></p>
| -1 | 2016-07-30T11:46:35Z | 38,673,616 | <p>When you convert the string input into an <code>int</code>, the leading zero is stripped away (e.g. <code>"0153444"</code> -> <code>153444</code>). When you convert back to a string again in the list comprehension, you won't get the zero back, so you end up with an NRIC list of [1, 5, 3, 4, 4, 4] instead of [0, 1, 5, 3, 4, 4, 4]. If you remove the <code>int</code> call, like this, you won't lose the leading zero.</p>
<pre><code># Change this:
nricno = int(input("Please enter your NRIC(numbers only)..."))
# To this:
nricno = input("Please enter your NRIC(numbers only)...")
</code></pre>
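To see concretely why the <code>int</code> round-trip breaks, here is a small sketch (the sample NRIC number is made up):

```python
nric = "0153444"  # made-up 7-digit NRIC with a leading zero

# int() drops the leading zero, and str() cannot bring it back
as_int = int(nric)
digits = [int(x) for x in str(as_int)]
print(as_int, len(digits))   # 153444 6  -> NRIC[6] raises IndexError

# keeping the input as a string preserves all 7 digits
digits = [int(x) for x in nric]
print(len(digits))           # 7
```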
| 2 | 2016-07-30T11:50:41Z | [
"python"
] |
python: list index out of range error in code | 38,673,583 | <p>I'm pretty new to python, and really stuck at this.</p>
<p>Basically, I'm supposed to make a check code to check the last alphabet of the NRIC. My code works fine so long as there are 7 numbers (like there are supposed to be). However, my teacher just helped me find out that my code doesn't work whenever the number starts with 0. Below is my code.</p>
<pre><code>def check_code():
nricno = int(input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail."))
NRIC = [ int(x) for x in str(nricno) ]
a = NRIC[0]*2
b = NRIC[1]*7
c = NRIC[2]*6
d = NRIC[3]*5
e = NRIC[4]*4
f = NRIC[5]*3
g = NRIC[6]*2
SUM = int(a + b + c + d + e + f +g)
remainder = int(SUM % 11)
leftovers = int(11 - remainder)
rightovers = leftovers - 1
Alphabet = "ABCDEFGHIZJ"
checkcode = chr(ord('a') + rightovers)
print(checkcode)
check_code()
</code></pre>
<p>This is the way the NRIC is supposed to be calculated, in the image below.</p>
<p><a href="http://i.stack.imgur.com/lT0Dz.png" rel="nofollow">NRIC calculation help.</a></p>
| -1 | 2016-07-30T11:46:35Z | 38,673,856 | <p>Edit: This code verifies if the input is made of 7 digits.</p>
<pre><code>def check_code():
while True:
nricno = input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will restart.")
        if len(nricno) == 7 and nricno.isdigit():
            print("Works")
            NRIC = [int(x) for x in nricno]
            break
else:
print("Error, 7 digit number was not inputted and/or letters and other characters were inputted.")
a = NRIC[0]*2
b = NRIC[1]*7
c = NRIC[2]*6
d = NRIC[3]*5
e = NRIC[4]*4
f = NRIC[5]*3
g = NRIC[6]*2
SUM = int(a + b + c + d + e + f +g)
remainder = int(SUM % 11)
print(remainder)
leftovers = int(11 - remainder)
rightovers = leftovers - 1
Alphabet = "ABCDEFGHIZJ"
checkcode = chr(ord('a') + rightovers)
print(checkcode.upper())
check_code()
</code></pre>
| 0 | 2016-07-30T12:16:12Z | [
"python"
] |
python: list index out of range error in code | 38,673,583 | <p>I'm pretty new to python, and really stuck at this.</p>
<p>Basically, I'm supposed to make a check code to check the last alphabet of the NRIC. My code works fine so long as there are 7 numbers (like there are supposed to be). However, my teacher just helped me find out that my code doesn't work whenever the number starts with 0. Below is my code.</p>
<pre><code>def check_code():
nricno = int(input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail."))
NRIC = [ int(x) for x in str(nricno) ]
a = NRIC[0]*2
b = NRIC[1]*7
c = NRIC[2]*6
d = NRIC[3]*5
e = NRIC[4]*4
f = NRIC[5]*3
g = NRIC[6]*2
SUM = int(a + b + c + d + e + f +g)
remainder = int(SUM % 11)
leftovers = int(11 - remainder)
rightovers = leftovers - 1
Alphabet = "ABCDEFGHIZJ"
checkcode = chr(ord('a') + rightovers)
print(checkcode)
check_code()
</code></pre>
<p>This is the way the NRIC is supposed to be calculated, in the image below.</p>
<p><a href="http://i.stack.imgur.com/lT0Dz.png" rel="nofollow">NRIC calculation help.</a></p>
| -1 | 2016-07-30T11:46:35Z | 38,674,235 | <p>Here's a compact way to calculate the NRIC check code. If an invalid string is passed to the function a ValueError exception is raised, which will cause the program to crash. And if a non-string is passed TypeError will be raised. You can catch exceptions using the <code>try:... except</code> syntax.</p>
<pre><code>def check_code(nric):
if len(nric) != 7 or not nric.isdigit():
raise ValueError("Bad NRIC: {!r}".format(nric))
weights = (2, 7, 6, 5, 4, 3, 2)
n = sum(int(c) * w for c, w in zip(nric, weights))
return "ABCDEFGHIZJ"[10 - n % 11]
# Test
nric = "9300007"
print(check_code(nric))
</code></pre>
<p><strong>output</strong></p>
<pre><code>B
</code></pre>
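A quick exercise of the error path (the function is reproduced here so it can be run standalone; the bad input is just an example):

```python
def check_code(nric):
    # same validation and checksum logic as the answer above
    if len(nric) != 7 or not nric.isdigit():
        raise ValueError("Bad NRIC: {!r}".format(nric))
    weights = (2, 7, 6, 5, 4, 3, 2)
    n = sum(int(c) * w for c, w in zip(nric, weights))
    return "ABCDEFGHIZJ"[10 - n % 11]

try:
    check_code("93000")        # too short
except ValueError as exc:
    print(exc)                 # Bad NRIC: '93000'
```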
| 1 | 2016-07-30T12:59:05Z | [
"python"
] |
python: list index out of range error in code | 38,673,583 | <p>I'm pretty new to python, and really stuck at this.</p>
<p>Basically, I'm supposed to make a check code to check the last alphabet of the NRIC. My code works fine so long as there are 7 numbers (like there are supposed to be). However, my teacher just helped me find out that my code doesn't work whenever the number starts with 0. Below is my code.</p>
<pre><code>def check_code():
nricno = int(input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail."))
NRIC = [ int(x) for x in str(nricno) ]
a = NRIC[0]*2
b = NRIC[1]*7
c = NRIC[2]*6
d = NRIC[3]*5
e = NRIC[4]*4
f = NRIC[5]*3
g = NRIC[6]*2
SUM = int(a + b + c + d + e + f +g)
remainder = int(SUM % 11)
leftovers = int(11 - remainder)
rightovers = leftovers - 1
Alphabet = "ABCDEFGHIZJ"
checkcode = chr(ord('a') + rightovers)
print(checkcode)
check_code()
</code></pre>
<p>This is the way the NRIC is supposed to be calculated, in the image below.</p>
<p><a href="http://i.stack.imgur.com/lT0Dz.png" rel="nofollow">NRIC calculation help.</a></p>
 | -1 | 2016-07-30T11:46:35Z | 39,585,944 | <p>When you force your input to an int, the leading 0 is lost. In Python 3 a bare literal such as <code>0351</code> is a syntax error (<code>invalid token</code>), and <code>int("0351")</code> silently returns <code>351</code>; either way the leading zero does not survive.</p>
<p>You should not force the input to be an int, but instead add an assert statement stating that the value inputted must be a 7 digit integer (or a regular statement as you've done if you prefer).</p>
<h1>Change this:</h1>
<p><code>nricno = int(input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail."))</code></p>
<h1>To this:</h1>
<p><code>nricno = input("Please enter your NRIC(numbers only). If you don't type an nric number, this code will fail.")</code></p>
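The assert-style check this answer suggests could look like the sketch below (prompt handling left out; the sample value is made up):

```python
def validate_nric(nricno):
    # keep the value as a string so a leading zero survives,
    # but insist on exactly 7 digits
    assert len(nricno) == 7 and nricno.isdigit(), "NRIC must be exactly 7 digits"
    return nricno

print(validate_nric("0153444"))   # passes, leading zero preserved
```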
| 0 | 2016-09-20T04:54:30Z | [
"python"
] |
Why does uploaded image permissions depends on it's size? | 38,673,607 | <p>Web application built using the Django Framework</p>
<ul>
<li>Server: <code>CentOS Linux release 7.1.1503 (Core)</code></li>
<li>Nginx: <code>nginx version: nginx/1.6.3</code></li>
<li>Gunicorn: <code>gunicorn==19.3.0</code></li>
<li>Django: <code>Django==1.8.4</code></li>
</ul>
<p>When uploading image which is less than <code>2.7M</code> everything works fine.</p>
<ol>
<li>Image is created on server where it should be (<code>MEDIA_ROOT</code>) </li>
<li>user group and read permissions: <code>-rw-r--r-- 1 tb360 tb360</code> </li>
<li>Image is served by the web server without issue</li>
</ol>
<p>But when image size exceeds <code>2.7M</code> </p>
<ol>
<li>Image is created on server where it should be (<code>MEDIA_ROOT</code>) </li>
<li>Image is without read permission: <code>-rw------- 1 tb360 tb360</code></li>
<li>Image is not served by the web server
reason: no read permission on file</li>
</ol>
<p>After I simply add read permission to such an image, it is served by the web server without issue.</p>
<p>When testing on a local development machine there is no similar problem.</p>
<p><a href="http://pastebin.com/wwLZ5V0i" rel="nofollow">nginx configuration</a></p>
 | 0 | 2016-07-30T11:49:12Z | 38,673,672 | <p>Django uses 2 upload handlers: <code>MemoryFileUploadHandler</code> and <code>TemporaryFileUploadHandler</code>. The first keeps the uploaded file in RAM before deciding what to do with it. The second puts the file in a temp directory and later moves it to its proper location.</p>
<p>The problem probably occurs because the memory handler only takes files up to a certain size, and your system has different default file permissions for the temp directory. Those permissions are kept when the file is moved from temp to your <code>MEDIA_ROOT</code>.</p>
<p>You can fix that issue by setting <a href="https://docs.djangoproject.com/en/1.9/ref/settings/#file-upload-permissions" rel="nofollow"><code>FILE_UPLOAD_PERMISSIONS</code></a> (so files will always have proper permissions) or <a href="https://docs.djangoproject.com/en/1.9/ref/settings/#file-upload-temp-dir" rel="nofollow"><code>FILE_UPLOAD_TEMP_DIR</code></a> (to store temp files on different location, which doesn't set more restrictive file permissions). </p>
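In settings terms, the two options could look like the fragment below (the octal mode and the temp path are example values, not the only correct ones):

```python
# settings.py (fragment)

# Always normalize uploaded files to world-readable permissions,
# no matter which upload handler (memory vs. temporary file) was used.
FILE_UPLOAD_PERMISSIONS = 0o644

# Alternatively, keep temporary uploads on a path whose default
# permissions are not as restrictive as the system temp directory.
FILE_UPLOAD_TEMP_DIR = '/var/tmp/django_uploads'
```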
| 0 | 2016-07-30T11:58:15Z | [
"python",
"django",
"nginx",
"centos",
"gunicorn"
] |
Using subprocess in python with multiple values and same argument | 38,673,624 | <p>This is my command</p>
<pre><code>subprocess.call(["wine","MP4Box.exe","-add",outputdir+"tmp.m4a","-itags",'name=a',"-itags", "artist=b","-itags", "album_artist=c","-itags", "album=d","-itags", "created=2034","-itags", "genre=e","-new", "tmp23.m4a"])
</code></pre>
<p>In the output file I get only the genre; subprocess sends only the last "-itags" value. Any way to make this work?</p>
<p>thanks</p>
| 1 | 2016-07-30T11:51:51Z | 38,673,720 | <p>According to their <a href="https://gpac.wp.mines-telecom.fr/mp4box/mp4box-documentation/" rel="nofollow">documentation</a> the parameter should be passed like this </p>
<blockquote>
<p>-itags tag1[:tag2] </p>
</blockquote>
<p>So you may try to do it like so </p>
<pre><code>subprocess.call(["wine","MP4Box.exe","-add",outputdir+"tmp.m4a","-itags","name=a:artist=b" ...
</code></pre>
| 0 | 2016-07-30T12:02:19Z | [
"python",
"subprocess",
"xargs",
"mp4box"
] |
Using subprocess in python with multiple values and same argument | 38,673,624 | <p>This is my command</p>
<pre><code>subprocess.call(["wine","MP4Box.exe","-add",outputdir+"tmp.m4a","-itags",'name=a',"-itags", "artist=b","-itags", "album_artist=c","-itags", "album=d","-itags", "created=2034","-itags", "genre=e","-new", "tmp23.m4a"])
</code></pre>
<p>In the output file I get only the genre; subprocess sends only the last "-itags" value. Any way to make this work?</p>
<p>thanks</p>
| 1 | 2016-07-30T11:51:51Z | 38,673,732 | <pre><code>outputdir = "output"
subprocess.call([
"wine", "MP4Box.exe", "-add", outputdir + "/tmp.m4a",
"-itags", "name={name}:artist={artist}:album_artist={album_artist}:album={album}:created={created}:genre={genre}".format(
name="a",
artist="b",
album_artist="c",
album="d",
created=2034,
genre="e"
),
"-new", "tmp23.m4a"
])
</code></pre>
<p>From the <a href="https://gpac.wp.mines-telecom.fr/mp4box/mp4box-documentation/" rel="nofollow">docs</a>. </p>
<p>Note that the tags cannot contain the <code>:</code> character as this will break the command.</p>
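Since all tags collapse into one colon-joined argument, a small helper (hypothetical, not part of MP4Box) can build and sanity-check that string:

```python
def build_itags(tags):
    # refuse values containing ':', which would break the tag1:tag2 format
    for value in tags.values():
        if ":" in str(value):
            raise ValueError("tag value may not contain ':': {!r}".format(value))
    return ":".join("{}={}".format(k, v) for k, v in tags.items())

itags = build_itags({"name": "a", "artist": "b", "album": "d"})
print(itags)   # name=a:artist=b:album=d
```

The result can then be passed as the single value after `"-itags"` in the `subprocess.call` list.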
| 1 | 2016-07-30T12:02:55Z | [
"python",
"subprocess",
"xargs",
"mp4box"
] |
Efficient method for matching data in multiple csv files | 38,673,647 | <p>Here is a snippet of my code:</p>
<pre><code>for i,r1 in enumerate(Solution.values):
h1=ProjectedRevenue.index[i]
District_ID,Instrument_ID,Buy_or_not,Revenue=r1
listSol=[]
listSol.append(h1)
listSol.append(list(r1)[0])
listSol.append(list(r1)[1])
for j,r2 in enumerate(ProjectedRevenue.values):
h2=ProjectedRevenue.index[j]
if h2 == listSol[0]:
District_ID,Instrument_ID,Annual_Projected_Revenue= r2
listPR=list(r2)
if listSol[1] == listPR[1] & listSol[2] == listPR[2]:
if(listPR[2]>0):
#do stuff
continue
else:
#do stuff
continue
</code></pre>
<p>I need some help regarding this code. I'm new to python, and I have to search and compare data entries from multiple .csv files. I have seen itertuples but it is not able to recognise the function. So I just made this to search and perform operations via pandas but this has a complexity of $O(n^2)$ and is very slow. Any help regarding this?</p>
<p><strong>EDIT</strong>: So I am using pandas and numpy in the code to manipulate data. And to make it more clear, I need to do a search such that the elements of the same index columns in two different files are the same. Consider the example below:</p>
<p>Solution.csv:</p>
<pre><code>Hospital_ID,District_ID,Instrument_ID
1,4,6
2,5,4
7,8,5
</code></pre>
<p>ProjectedRevenue.csv:</p>
<pre><code>Hospital_ID,District_ID,Instrument_ID
9,3,5
7,8,5
1,2,6
</code></pre>
<p>So here the common entry is 7,8,5 in the two files. I want to know the fastest way to search and match them.</p>
<p><strong>UPDATE</strong>: The previous question was about a syntax error which is resolved.</p>
| -1 | 2016-07-30T11:54:28Z | 38,673,655 | <p>use <code>:</code> at the end of <code>if</code> check.</p>
<pre><code>if h2 == listSol[0]:
</code></pre>
| 0 | 2016-07-30T11:55:27Z | [
"python",
"python-3.x",
"numpy",
"pandas"
] |
Efficient method for matching data in multiple csv files | 38,673,647 | <p>Here is a snippet of my code:</p>
<pre><code>for i,r1 in enumerate(Solution.values):
h1=ProjectedRevenue.index[i]
District_ID,Instrument_ID,Buy_or_not,Revenue=r1
listSol=[]
listSol.append(h1)
listSol.append(list(r1)[0])
listSol.append(list(r1)[1])
for j,r2 in enumerate(ProjectedRevenue.values):
h2=ProjectedRevenue.index[j]
if h2 == listSol[0]:
District_ID,Instrument_ID,Annual_Projected_Revenue= r2
listPR=list(r2)
if listSol[1] == listPR[1] & listSol[2] == listPR[2]:
if(listPR[2]>0):
#do stuff
continue
else:
#do stuff
continue
</code></pre>
<p>I need some help regarding this code. I'm new to python, and I have to search and compare data entries from multiple .csv files. I have seen itertuples but it is not able to recognise the function. So I just made this to search and perform operations via pandas but this has a complexity of $O(n^2)$ and is very slow. Any help regarding this?</p>
<p><strong>EDIT</strong>: So I am using pandas and numpy in the code to manipulate data. And to make it more clear, I need to do a search such that the elements of the same index columns in two different files are the same. Consider the example below:</p>
<p>Solution.csv:</p>
<pre><code>Hospital_ID,District_ID,Instrument_ID
1,4,6
2,5,4
7,8,5
</code></pre>
<p>ProjectedRevenue.csv:</p>
<pre><code>Hospital_ID,District_ID,Instrument_ID
9,3,5
7,8,5
1,2,6
</code></pre>
<p>So here the common entry is 7,8,5 in the two files. I want to know the fastest way to search and match them.</p>
<p><strong>UPDATE</strong>: The previous question was about a syntax error which is resolved.</p>
| -1 | 2016-07-30T11:54:28Z | 38,675,126 | <p>try this vectorized pandas approach:</p>
<pre><code>In [22]: fn1 = r'D:\temp\.data\38673647\Solution.csv'
In [23]: fn2 = r'D:\temp\.data\38673647\ProjectedRevenue.csv'
In [24]: df1 = pd.read_csv(fn1)
In [25]: df2 = pd.read_csv(fn2)
In [26]: df1
Out[26]:
Hospital_ID District_ID Instrument_ID
0 1 4 6
1 2 5 4
2 7 8 5
In [27]: df2
Out[27]:
Hospital_ID District_ID Instrument_ID
0 9 3 5
1 7 8 5
2 1 2 6
In [28]: pd.merge(df1, df2, on=df1.columns.tolist())
Out[28]:
Hospital_ID District_ID Instrument_ID
0 7 8 5
</code></pre>
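Outside IPython, the same inner join can be written as below (column values taken from the question; `merge` defaults to `how='inner'`, so only rows present in both frames survive):

```python
import pandas as pd

# the two CSV files from the question, built inline for the example
df1 = pd.DataFrame({"Hospital_ID": [1, 2, 7],
                    "District_ID": [4, 5, 8],
                    "Instrument_ID": [6, 4, 5]})
df2 = pd.DataFrame({"Hospital_ID": [9, 7, 1],
                    "District_ID": [3, 8, 2],
                    "Instrument_ID": [5, 5, 6]})

# joining on every column keeps only rows common to both files
common = df1.merge(df2, on=list(df1.columns))
print(common)   # single row: 7, 8, 5
```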
| 1 | 2016-07-30T14:42:07Z | [
"python",
"python-3.x",
"numpy",
"pandas"
] |
How can create Android app using Django REST API | 38,673,676 | <p>How can I create an Android app with a Django back-end?
I have created the REST API but am unable to communicate between Android and the API. Any luck... Thanks in advance.</p>
| -1 | 2016-07-30T11:58:19Z | 38,676,081 | <p>Communicating to your REST-API should be made via SSL-encrypted calls to your specified endpoints like <a href="https://myurl.com/api/call_method" rel="nofollow">https://myurl.com/api/call_method</a></p>
<p>You can use Android's <code>HttpURLConnection</code> to access your server. Preprocess your user input in a task-specific handler and start an AsyncTask to your server. You can authenticate the user, e.g. with a Google or Facebook OAuth2 login with backend verification, if your data should be protected. You can return JSON to your app and process the data from there on.</p>
<p>All those single steps have great documentation or SO posts already. When you have a specific problem, come back with the exact question.</p>
| 0 | 2016-07-30T16:27:34Z | [
"android",
"python",
"django"
] |
How can I call Python script from URL and get parameters | 38,673,764 | <p>I call my script with <a href="http://domain/script.py" rel="nofollow">http://domain/script.py</a>
How can I call the script with a parameter and get the parameter in the python script when I call it like <a href="http://domain/script.py?parmameter=value" rel="nofollow">http://domain/script.py?parmameter=value</a>
I can't find a solution or even find out if this is possible or not.
Would be very happy for a solution.
Thanks to all.</p>
<hr>
<p><strong>edit</strong>:
I run it now from php:
<a href="http://domain/script.php?parmameter=value" rel="nofollow">http://domain/script.php?parmameter=value</a></p>
<pre><code><?php
if (isset($_GET['parmameter'])){
$myval = $_GET['parmameter'];
$command = escapeshellcmd("python script.py $myval" );
...
?>
</code></pre>
| 0 | 2016-07-30T12:07:11Z | 38,673,846 | <p>Tell us more about your setup and what is working. What is serving your Python script? Are you using a Web server like nginx or Apache?<br>
You can not normally run a Python script via a Web browser without some kind of interface - otherwise you would just get the source of the Python file served as text.<br>
If you were using Flask or Django as a Web framework, then these have built-in ways to access the query strings.</p>
<p>EDIT: since you're calling it as a shell script, you want to get the <em>command line arguments</em>.</p>
<p>Have a look at this answer: <a href="http://stackoverflow.com/questions/14596694/calling-command-line-arguments-in-python-3">Calling command line arguments in Python 3</a></p>
<p>You can get the value of your <code>$myval</code> variable with: <code>import sys</code> then <code>sys.argv[1]</code> (but best to check that it exists before trying to access it, or use exception handling).</p>
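A guarded version of that lookup could look like this (a sketch; the argument list is shown explicitly instead of the live `sys.argv`):

```python
def get_param(argv):
    # "python script.py <value>" puts the value at index 1;
    # guard against the PHP side passing nothing
    if len(argv) > 1:
        return argv[1]
    return None

print(get_param(["script.py", "value"]))   # value
```

In the real script you would call `get_param(sys.argv)` and handle the `None` case explicitly.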
| 0 | 2016-07-30T12:15:01Z | [
"python",
"url",
"server"
] |
Spark 2.0 Possible Bug on DataFrame Initialization | 38,673,826 | <p>There is possible bug that is produced by the following code :</p>
<pre><code>_struct = [
types.StructField('string_field', types.StringType(), True),
types.StructField('long_field', types.LongType(), True),
types.StructField('double_field', types.DoubleType(), True)
]
_rdd = sc.parallelize([Row(string_field='1', long_field=1, double_field=1.1)])
_schema = types.StructType(_struct)
_df = sqlContext.createDataFrame(_rdd, schema=_schema)
_df.take(1)
</code></pre>
<p>The expected output is that a DataFrame with 1 row should be created.</p>
<p>But with the current behavior I receive the following error:</p>
<pre><code>DoubleType can not accept object '1' in type <type 'str'>
</code></pre>
<p><strong>PS:</strong> I'm using Spark 2.0 compiled against Scala 2.10</p>
<p><strong>Edit</strong></p>
<p>Thanks to the answerer's suggestion, I can properly understand this now. To simplify, make sure that the struct is sorted. The following code explains this:</p>
<pre><code># This doesn't work:
_struct = [
SparkTypes.StructField('string_field', SparkTypes.StringType(), True),
SparkTypes.StructField('long_field', SparkTypes.LongType(), True),
SparkTypes.StructField('double_field', SparkTypes.DoubleType(), True)
]
_rdd = sc.parallelize([Row(string_field='1', long_field=1, double_field=1.1)])
# But this will work, since schema is sorted:
_struct = sorted([
SparkTypes.StructField('string_field', SparkTypes.StringType(), True),
SparkTypes.StructField('long_field', SparkTypes.LongType(), True),
SparkTypes.StructField('double_field', SparkTypes.DoubleType(), True)
], key=lambda x: x.name)
params = {'string_field':'1', 'long_field':1, 'double_field':1.1}
_rdd = sc.parallelize([Row(**params)])
_schema = SparkTypes.StructType(_struct)
_df = sqlContext.createDataFrame(_rdd, schema=_schema)
_df.take(1)
</code></pre>
| 1 | 2016-07-30T12:13:40Z | 38,674,803 | <p>This looks like a change of behavior between 1.x and 2.x but I doubt it is a bug. In particular when you create <code>Row</code> object with <code>kwargs</code> (named arguments) <a href="https://github.com/apache/spark/blob/274f3b9ec86e4109c7678eef60f990d41dc3899f/python/pyspark/sql/types.py#L1375-L1376" rel="nofollow">the fields are sorted by names</a>. Let's illustrate that with a simple example:</p>
<pre class="lang-py prettyprint-override"><code>Row(string_field='1', long_field=1, double_field=1.1)
## Row(double_field=1.1, long_field=1, string_field='1')
</code></pre>
<p>As you can see, the order of the fields changes and is no longer reflected in the schema.</p>
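That reordering can be reproduced without a Spark installation, since `Row` simply sorts its keyword arguments by name:

```python
# mimic pyspark.sql.Row's kwargs handling: field names are sorted,
# so the positional order no longer matches the declared schema
kwargs = dict(string_field="1", long_field=1, double_field=1.1)
print(sorted(kwargs))   # ['double_field', 'long_field', 'string_field']
```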
<p>Prior to 2.0.0 Spark verified types <a href="https://github.com/apache/spark/blob/branch-1.6/python/pyspark/sql/context.py#L348-L350" rel="nofollow">only if the <code>data</code> argument for <code>createDataFrame</code> was a local data structure</a>. So the following code:</p>
<pre><code>sqlContext.createDataFrame(
data=[Row(string_field='1', long_field=1, double_field=1.1)],
schema=_schema
)
</code></pre>
<p>would fail in 1.6 as well</p>
<p>Spark 2.0.0 introduced <a href="https://github.com/apache/spark/blob/274f3b9ec86e4109c7678eef60f990d41dc3899f/python/pyspark/sql/session.py#L525" rel="nofollow">verification for <code>RDDs</code></a> and provides consistent behavior between local and distributed inputs.</p>
| 4 | 2016-07-30T14:06:19Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
Redefine *= operator in numpy | 38,673,878 | <p>As <a href="http://stackoverflow.com/questions/38673531/multiply-numpy-int-and-float-arrays">mentioned here</a> and <a href="https://github.com/numpy/numpy/pull/6499/files" rel="nofollow">here</a>, this doesn't work anymore in numpy 1.7+ :</p>
<pre><code>import numpy
A = numpy.array([1, 2, 3, 4], dtype=numpy.int16)
B = numpy.array([0.5, 2.1, 3, 4], dtype=numpy.float64)
A *= B
</code></pre>
<p>A workaround is to do:</p>
<pre><code>def mult(a,b):
numpy.multiply(a, b, out=a, casting="unsafe")
def add(a,b):
numpy.add(a, b, out=a, casting="unsafe")
mult(A,B)
</code></pre>
<p>but that's way too long to write for each matrix operation!</p>
<p><strong>How can I override the numpy <code>*=</code> operator to do this by default?</strong></p>
<p>Should I subclass something?</p>
| 4 | 2016-07-30T12:18:32Z | 38,673,941 | <p>You can create a general function and pass the intended attribute to it:</p>
<pre><code>def calX(a,b, attr):
try:
return getattr(numpy, attr)(a, b, out=a, casting="unsafe")
except AttributeError:
raise Exception("Please enter a valid attribute")
</code></pre>
<p>Demo:</p>
<pre><code>>>> import numpy
>>> A = numpy.array([1, 2, 3, 4], dtype=numpy.int16)
>>> B = numpy.array([0.5, 2.1, 3, 4], dtype=numpy.float64)
>>> calX(A, B, 'multiply')
array([ 0, 4, 9, 16], dtype=int16)
>>> calX(A, B, 'subtract')
array([ 0, 1, 6, 12], dtype=int16)
</code></pre>
<p>Note that if you want to overwrite the first matrix you can just assign the function's return value to it.</p>
<pre><code>A = calX(A, B, 'multiply')
</code></pre>
| 1 | 2016-07-30T12:25:34Z | [
"python",
"arrays",
"numpy",
"subclass"
] |
Redefine *= operator in numpy | 38,673,878 | <p>As <a href="http://stackoverflow.com/questions/38673531/multiply-numpy-int-and-float-arrays">mentioned here</a> and <a href="https://github.com/numpy/numpy/pull/6499/files" rel="nofollow">here</a>, this doesn't work anymore in numpy 1.7+ :</p>
<pre><code>import numpy
A = numpy.array([1, 2, 3, 4], dtype=numpy.int16)
B = numpy.array([0.5, 2.1, 3, 4], dtype=numpy.float64)
A *= B
</code></pre>
<p>A workaround is to do:</p>
<pre><code>def mult(a,b):
numpy.multiply(a, b, out=a, casting="unsafe")
def add(a,b):
numpy.add(a, b, out=a, casting="unsafe")
mult(A,B)
</code></pre>
<p>but that's way too long to write for each matrix operation!</p>
<p><strong>How can I override the numpy <code>*=</code> operator to do this by default?</strong></p>
<p>Should I subclass something?</p>
| 4 | 2016-07-30T12:18:32Z | 38,674,413 | <p>You can use <code>np.set_numeric_ops</code> to override array arithmetic methods:</p>
<pre><code>import numpy as np
def unsafe_multiply(a, b, out=None):
return np.multiply(a, b, out=out, casting="unsafe")
np.set_numeric_ops(multiply=unsafe_multiply)
A = np.array([1, 2, 3, 4], dtype=np.int16)
B = np.array([0.5, 2.1, 3, 4], dtype=np.float64)
A *= B
print(repr(A))
# array([ 0, 4, 9, 16], dtype=int16)
</code></pre>
| 6 | 2016-07-30T13:22:53Z | [
"python",
"arrays",
"numpy",
"subclass"
] |
Find the row indexes of several values in a numpy array | 38,674,027 | <p>I have an array X:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>And I wish to find the index of the row of several values in this array:</p>
<pre><code>searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
</code></pre>
<p>For this example I would like a result like:</p>
<pre><code>[0,3,4]
</code></pre>
<p>I have a code doing this, but I think it is overly complicated:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
result = []
for s in searched_values:
idx = np.argwhere([np.all((X-s)==0, axis=1)])[0][1]
result.append(idx)
print(result)
</code></pre>
<p>I found <a href="http://stackoverflow.com/a/32191125/4876550">this answer</a> for a similar question but it works only for 1d arrays.</p>
<p>Is there a way to do what I want in a simpler way?</p>
| 4 | 2016-07-30T12:34:59Z | 38,674,038 | <p><strong>Approach #1</strong></p>
<p>One approach would be to use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a>, like so -</p>
<pre><code>np.where((X==searched_values[:,None]).all(-1))[1]
</code></pre>
<p><strong>Approach #2</strong></p>
<p>A memory efficient approach would be to convert each row to its linear index equivalent and then use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow"><code>np.in1d</code></a>, like so -</p>
<pre><code>dims = X.max(0)+1
out = np.where(np.in1d(np.ravel_multi_index(X.T,dims),\
np.ravel_multi_index(searched_values.T,dims)))[0]
</code></pre>
<p><strong>Approach #3</strong></p>
<p>Another memory efficient approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow"><code>np.searchsorted</code></a> and with that same philosophy of converting to linear index equivalents would be like so -</p>
<pre><code>dims = X.max(0)+1
X1D = np.ravel_multi_index(X.T,dims)
searched_valuesID = np.ravel_multi_index(searched_values.T,dims)
sidx = X1D.argsort()
out = sidx[np.searchsorted(X1D,searched_valuesID,sorter=sidx)]
</code></pre>
<p>Please note that this <code>np.searchsorted</code> method assumes there is a match for each row from <code>searched_values</code> in <code>X</code>.</p>
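A quick check that this returns the expected row indexes for the question's arrays (Approach #1 shown; NumPy required):

```python
import numpy as np

X = np.array([[4, 2], [9, 3], [8, 5], [3, 3], [5, 6]])
searched_values = np.array([[4, 2], [3, 3], [5, 6]])

# broadcasting: compare every searched row against every row of X,
# then keep the column index (the row of X) where all elements match
idx = np.where((X == searched_values[:, None]).all(-1))[1]
print(idx)   # [0 3 4]
```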
<hr>
<h2>How does <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel_multi_index.html" rel="nofollow"><code>np.ravel_multi_index</code></a> work?</h2>
<p>This function gives us the linear index equivalent numbers. It accepts a <code>2D</code> array of <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing" rel="nofollow"><code>n-dimensional indices</code></a>, set as columns and the shape of that n-dimensional grid itself onto which those indices are to be mapped and equivalent linear indices are to be computed.</p>
<p>Let's use the inputs we have for the problem at hand. Take the case of input <code>X</code> and note its first row. Since we are trying to convert each row of <code>X</code> into its linear index equivalent, and since <code>np.ravel_multi_index</code> assumes each column as one indexing tuple, we need to transpose <code>X</code> before feeding it into the function. Since the number of elements per row in <code>X</code> in this case is <code>2</code>, the n-dimensional grid to be mapped onto would be <code>2D</code>. With 3 elements per row in <code>X</code>, it would have been a <code>3D</code> grid for mapping, and so on.</p>
<p>To see how this function would compute linear indices, consider the first row of <code>X</code> -</p>
<pre><code>In [77]: X
Out[77]:
array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>We have the shape of the n-dimensional grid as <code>dims</code> -</p>
<pre><code>In [78]: dims
Out[78]: array([10, 7])
</code></pre>
<p>Let's create the 2-dimensional grid to see how that mapping works and linear indices get computed with <code>np.ravel_multi_index</code> -</p>
<pre><code>In [79]: out = np.zeros(dims,dtype=int)
In [80]: out
Out[80]:
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>Let's set the first indexing tuple from <code>X</code>, i.e. the first row from <code>X</code> into the grid -</p>
<pre><code>In [81]: out[4,2] = 1
In [82]: out
Out[82]:
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>Now, to see the linear index equivalent of the element just set, let's flatten and use <code>np.where</code> to detect that <code>1</code>.</p>
<pre><code>In [83]: np.where(out.ravel())[0]
Out[83]: array([30])
</code></pre>
<p>This could also be computed if row-major ordering is taken into account.</p>
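<p>Spelled out for row-major ("C") order — my own arithmetic aside, not part of the original answer — the linear index of an indexing tuple (r, c) on a grid of shape (R, C) is simply r*C + c:</p>

```python
import numpy as np

dims = (10, 7)            # grid shape
r, c = 4, 2               # first row of X as an indexing tuple

linear = r * dims[1] + c  # row-major formula: row index times row length, plus column
print(linear)  # 30

# np.ravel_multi_index agrees with the hand computation:
assert np.ravel_multi_index((r, c), dims) == 30
```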
<p>Let's use <code>np.ravel_multi_index</code> and verify those linear indices -</p>
<pre><code>In [84]: np.ravel_multi_index(X.T,dims)
Out[84]: array([30, 66, 61, 24, 41])
</code></pre>
<p>Thus, we would have linear indices corresponding to each indexing tuple from <code>X</code>, i.e. each row from <code>X</code>.</p>
<p><strong>Choosing dimensions for <code>np.ravel_multi_index</code> to form unique linear indices</strong></p>
<p>Now, the idea behind considering each row of <code>X</code> as indexing tuple of a n-dimensional grid and converting each such tuple to a scalar is to have unique scalars corresponding to unique tuples, i.e. unique rows in <code>X</code>.</p>
<p>Let's take another look at <code>X</code> -</p>
<pre><code>In [77]: X
Out[77]:
array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>Now, as discussed in the previous section, we are considering each row as indexing tuple. Within each such indexing tuple, the first element would represent the first axis of the n-dim grid, second element would be the second axis of the grid and so on until the last element of each row in <code>X</code>. In essence, each column would represent one dimension or axis of the grid. If we are to map all elements from <code>X</code> onto the same n-dim grid, we need to consider the maximum stretch of each axis of such a proposed n-dim grid. Assuming we are dealing with positive numbers in <code>X</code>, such a stretch would be the maximum of each column in <code>X</code> + 1. That <code>+ 1</code> is because Python follows <code>0-based</code> indexing. So, for example <strong><code>X[1,0] == 9</code> would map to the 10th row</strong> of the proposed grid. Similarly, <strong><code>X[4,1] == 6</code> would go to the <code>7th</code> column</strong> of that grid.</p>
<p>So, for our sample case, we had -</p>
<pre><code>In [7]: dims = X.max(axis=0) + 1 # Or simply X.max(0) + 1
In [8]: dims
Out[8]: array([10, 7])
</code></pre>
<p>Thus, we would need a grid of at least a shape of <code>(10,7)</code> for our sample case. More lengths along the dimensions won't hurt and would give us unique linear indices too. </p>
<p>Concluding remarks: One important thing to be noted here is that if we have negative numbers in <code>X</code>, we need to add proper offsets along each column in <code>X</code> to make those indexing tuples non-negative before using <code>np.ravel_multi_index</code>.</p>
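<p>A hedged sketch of that offsetting step (my own illustration; shifting each column by its minimum is one way to do it, not the only one) — subtract the per-column minimum across both arrays so every index becomes non-negative before building the linear indices:</p>

```python
import numpy as np

X = np.array([[ 4, -2],
              [-9,  3],
              [ 8,  5],
              [ 3,  3],
              [ 5, -6]])
searched_values = np.array([[-9, 3], [5, -6]])

offset = np.minimum(X.min(0), searched_values.min(0))  # per-column minimum
Xp = X - offset                   # all entries are now >= 0
Sp = searched_values - offset

dims = np.maximum(Xp.max(0), Sp.max(0)) + 1
out = np.where(np.in1d(np.ravel_multi_index(Xp.T, dims),
                       np.ravel_multi_index(Sp.T, dims)))[0]
print(out)  # [1 4]
```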
| 5 | 2016-07-30T12:36:15Z | [
"python",
"arrays",
"numpy"
] |
Find the row indexes of several values in a numpy array | 38,674,027 | <p>I have an array X:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>And I wish to find the index of the row of several values in this array:</p>
<pre><code>searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
</code></pre>
<p>For this example I would like a result like:</p>
<pre><code>[0,3,4]
</code></pre>
<p>I have a code doing this, but I think it is overly complicated:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
result = []
for s in searched_values:
idx = np.argwhere([np.all((X-s)==0, axis=1)])[0][1]
result.append(idx)
print(result)
</code></pre>
<p>I found <a href="http://stackoverflow.com/a/32191125/4876550">this answer</a> for a similar question but it works only for 1d arrays.</p>
<p>Is there a way to do what I want in a simpler way?</p>
| 4 | 2016-07-30T12:34:59Z | 38,674,119 | <pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
S = np.array([[4, 2],
[3, 3],
[5, 6]])
result = [[i for i,row in enumerate(X) if (s==row).all()] for s in S]
</code></pre>
<p>or</p>
<pre><code>result = [i for s in S for i,row in enumerate(X) if (s==row).all()]
</code></pre>
<p>if you want a flat list (assuming there is exactly one match per searched value).</p>
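<p>As a complementary sketch (my own addition, not from the answer above): for repeated lookups it can pay to build a plain-Python dict from row tuples to indices once, giving O(1) lookups per searched row; note that a repeated row in <code>X</code> would keep its last index under this scheme:</p>

```python
import numpy as np

X = np.array([[4, 2], [9, 3], [8, 5], [3, 3], [5, 6]])
S = np.array([[4, 2], [3, 3], [5, 6]])

# One pass over X builds the lookup table; tuples are hashable, ndarray rows are not.
index_of = {tuple(row): i for i, row in enumerate(X.tolist())}
result = [index_of[tuple(row)] for row in S.tolist()]
print(result)  # [0, 3, 4]
```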
| 2 | 2016-07-30T12:45:41Z | [
"python",
"arrays",
"numpy"
] |
Find the row indexes of several values in a numpy array | 38,674,027 | <p>I have an array X:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>And I wish to find the index of the row of several values in this array:</p>
<pre><code>searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
</code></pre>
<p>For this example I would like a result like:</p>
<pre><code>[0,3,4]
</code></pre>
<p>I have a code doing this, but I think it is overly complicated:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
result = []
for s in searched_values:
idx = np.argwhere([np.all((X-s)==0, axis=1)])[0][1]
result.append(idx)
print(result)
</code></pre>
<p>I found <a href="http://stackoverflow.com/a/32191125/4876550">this answer</a> for a similar question but it works only for 1d arrays.</p>
<p>Is there a way to do what I want in a simpler way?</p>
| 4 | 2016-07-30T12:34:59Z | 38,675,193 | <p>Another alternative is to use <code>asvoid</code> (below) to <code>view</code> each row as a <em>single</em>
value of <code>void</code> dtype. This reduces a 2D array to a 1D array, thus allowing you to use <code>np.in1d</code> as usual:</p>
<pre><code>import numpy as np
def asvoid(arr):
"""
Based on http://stackoverflow.com/a/16973510/190597 (Jaime, 2013-06)
View the array as dtype np.void (bytes). The items along the last axis are
viewed as one value. This allows comparisons to be performed which treat
entire rows as one value.
"""
arr = np.ascontiguousarray(arr)
if np.issubdtype(arr.dtype, np.floating):
""" Care needs to be taken here since
np.array([-0.]).view(np.void) != np.array([0.]).view(np.void)
Adding 0. converts -0. to 0.
"""
arr += 0.
return arr.view(np.dtype((np.void, arr.dtype.itemsize * arr.shape[-1])))
X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
idx = np.flatnonzero(np.in1d(asvoid(X), asvoid(searched_values)))
print(idx)
# [0 3 4]
</code></pre>
| 5 | 2016-07-30T14:49:06Z | [
"python",
"arrays",
"numpy"
] |
Find the row indexes of several values in a numpy array | 38,674,027 | <p>I have an array X:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>And I wish to find the index of the row of several values in this array:</p>
<pre><code>searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
</code></pre>
<p>For this example I would like a result like:</p>
<pre><code>[0,3,4]
</code></pre>
<p>I have a code doing this, but I think it is overly complicated:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
result = []
for s in searched_values:
idx = np.argwhere([np.all((X-s)==0, axis=1)])[0][1]
result.append(idx)
print(result)
</code></pre>
<p>I found <a href="http://stackoverflow.com/a/32191125/4876550">this answer</a> for a similar question but it works only for 1d arrays.</p>
<p>Is there a way to do what I want in a simpler way?</p>
| 4 | 2016-07-30T12:34:59Z | 39,587,388 | <p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author) contains functionality for performing such operations efficiently (also uses searchsorted under the hood). In terms of functionality, it acts as a vectorized equivalent of list.index:</p>
<pre><code>import numpy_indexed as npi
result = npi.indices(X, searched_values)
</code></pre>
<p>Note that using the 'missing' kwarg, you have full control over behavior of missing items, and it works for nd-arrays (fi; stacks of images) as well. </p>
<p>Update: using the same shapes as Rik X=[520000,28,28] and searched_values=[20000,28,28], it runs in 0.8064 secs, using missing=-1 to detect and denote entries not present in X.</p>
| 0 | 2016-09-20T06:45:20Z | [
"python",
"arrays",
"numpy"
] |
Find the row indexes of several values in a numpy array | 38,674,027 | <p>I have an array X:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
</code></pre>
<p>And I wish to find the index of the row of several values in this array:</p>
<pre><code>searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
</code></pre>
<p>For this example I would like a result like:</p>
<pre><code>[0,3,4]
</code></pre>
<p>I have a code doing this, but I think it is overly complicated:</p>
<pre><code>X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
result = []
for s in searched_values:
idx = np.argwhere([np.all((X-s)==0, axis=1)])[0][1]
result.append(idx)
print(result)
</code></pre>
<p>I found <a href="http://stackoverflow.com/a/32191125/4876550">this answer</a> for a similar question but it works only for 1d arrays.</p>
<p>Is there a way to do what I want in a simpler way?</p>
| 4 | 2016-07-30T12:34:59Z | 39,594,398 | <p>Here is a pretty fast solution that scales up well using numpy and hashlib. It can handle large dimensional matrices or images in seconds. I used it on 520000 X (28 X 28) array and 20000 X (28 X 28) in 2 seconds on my CPU</p>
<p>Code:</p>
<pre><code>import numpy as np
import hashlib
X = np.array([[4, 2],
[9, 3],
[8, 5],
[3, 3],
[5, 6]])
searched_values = np.array([[4, 2],
[3, 3],
[5, 6]])
#hash using sha1 appears to be efficient
xhash=[hashlib.sha1(row).digest() for row in X]
yhash=[hashlib.sha1(row).digest() for row in searched_values]
z=np.in1d(xhash,yhash)
##Use unique to get unique indices to ind1 results
_,unique=np.unique(np.array(xhash)[z],return_index=True)
##Compute unique indices by indexing an array of indices
idx=np.array(range(len(xhash)))
unique_idx=idx[z][unique]
print('unique_idx=',unique_idx)
print('X[unique_idx]=',X[unique_idx])
</code></pre>
<p>Output:</p>
<pre><code>unique_idx= [4 3 0]
X[unique_idx]= [[5 6]
[3 3]
[4 2]]
</code></pre>
| 0 | 2016-09-20T12:40:41Z | [
"python",
"arrays",
"numpy"
] |
Nested relative imports don't seem to work | 38,674,106 | <h2>Description</h2>
<p>I have a package structure where various modules need to obtain information from different ones and therefore I use relative imports. It happens that those relative imports are nested in some way.<br>
I'll just present you the package structure I have:</p>
<pre><code>.
├── core
│   ├── __init__.py
│   ├── sub1
│   │   ├── __init__.py
│   │   └── mod1.py
│   └── sub2
│       ├── __init__.py
│       ├── mod1.py
│       └── sub1
│           ├── __init__.py
│           └── mod1.py
└── main.py
</code></pre>
<p>The files contain the following statements:</p>
<h3>main.py:</h3>
<pre><code>print __name__
import core.sub2.mod1
</code></pre>
<h3>core/sub2/mod1.py</h3>
<pre><code>print __name__
import sub1.mod1
</code></pre>
<h3>core/sub2/sub1/mod1.py</h3>
<pre><code>print __name__
from ...sub1 import mod1
</code></pre>
<h3>core/sub1/mod1.py</h3>
<pre><code>print __name__
from ..sub2 import mod1
</code></pre>
<h3>Visualization</h3>
<p>A visualization of the imports:</p>
<p><a href="http://i.stack.imgur.com/Gltl7.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gltl7.png" alt="Visualization"></a></p>
<h2>Problem</h2>
<p>When I run <code>python main.py</code> I get the following error (I substituted the absolute file paths with <code>./<path-to-file></code>):</p>
<pre><code>__main__
core.sub2.mod1
core.sub2.sub1.mod1
core.sub1.mod1
Traceback (most recent call last):
File "main.py", line 2, in <module>
import core.sub2.mod1
File "./core/sub2/mod1.py", line 2, in <module>
import sub1.mod1
File "./core/sub2/sub1/mod1.py", line 2, in <module>
from ...sub1 import mod1
File "./core/sub1/mod1.py", line 2, in <module>
from ..sub2 import mod1
ImportError: cannot import name mod1
</code></pre>
<p>From <a href="http://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time" title="this question">this question</a> I learned that python uses the <code>__name__</code> attribute of a module to resolve its location within the package. So I printed all the names of the modules and they seem to be alright! Why do I get this <code>ImportError</code> then? And how can I make all the imports work?</p>
| 2 | 2016-07-30T12:44:26Z | 38,675,544 | <p>As @Blckknght pointed out, you have an import cycle. You can eliminate this annoying cycle by reorganizing your code.</p>
<p>But if you don't have to invoke <code>core.sub2.mod1</code> immediately when importing <code>core.sub1.mod1</code>, there is a simple way to fix it.</p>
<p>You can import <code>core.sub2</code> and invoke <code>core.sub2.mod1</code> later.</p>
<h3>core/sub1/mod1.py</h3>
<pre><code>print __name__
from .. import sub2
def foo():
t = sub2.mod1
</code></pre>
| 0 | 2016-07-30T15:24:40Z | [
"python",
"python-2.7",
"import"
] |
How to write Django class-based view for form displaying and submission from other template? | 38,674,150 | <p>I have a ModelForm that I'd like to display in multiple places. For instance, in a ListView, underneath the list of articles. I can do this by putting it in <code>get_context_data()</code> in the ListView. I'd also like to display the form in its own template.</p>
<p>I've created a view for the form, but am not sure how to actually write it.</p>
<p>I've defined a <code>get_absolute_url()</code> in my model:</p>
<pre><code>class Article(models.Model):
author = models.ForeignKey('auth.User')
title = models.CharField(max_length=200)
text = models.TextField()
categories = models.ManyToManyField(Category)
city = models.ForeignKey(City)
def __str__(self):
return self.title
def get_absolute_url(self):
return reverse_lazy('article', args=self.id)
</code></pre>
<p>The view for the form itself is:</p>
<p><code>Views.py</code></p>
<pre><code>class ArticleSubmitView(CreateView):
model = Article
form_class = ArticleSubmitForm
# is initial necessary?
    initial = {'title': '', 'text': '', 'categories': '', 'city': ''}
# success url not necessary because model has get_absolute_url
# however it does not redirect to the article
template_name = 'articles/article-submit.html'
# handle post data from other template/view
# ???
</code></pre>
<p>The template includes the form (same thing for the ListView template).</p>
<p><code>article-submit.html</code></p>
<pre><code>{% extends 'articles/base.html' %}
{% block article-submit %}
{% include 'articles/article-form.html' %}
{% endblock article-submit %}
</code></pre>
<p>The form submits to the url that calls the CreateView:</p>
<p><code>article-form.html</code></p>
<pre><code><form action="{% url 'article-submit' %}" method="POST">
{% csrf_token %}
{% for field in form %}
<!--- etc. --->
{% endfor %}
</form>
</code></pre>
<p><code>urls.py</code></p>
<pre><code>from django.conf.urls import url
from .views import ArticlesView, ArticleSubmitView
urlpatterns = [
url(r'^$', ArticlesView.as_view(), name='articles'),
# some urls left out for brevity
url(r'^article-submit/$', ArticleSubmitView.as_view(), name='article-submit'),
]
</code></pre>
<p>However, the form does not submit from the list template, nor does it submit from the form template itself. It also doesn't redirect, or show any error messages.</p>
<p>What am I doing wrong?</p>
<p><a href="https://gist.github.com/Flobin/56bdaf52094dd7b99b255a3cae458227#file-views-py" rel="nofollow">Full code is available here</a>.</p>
<p>edit:</p>
<p>Checking to see if the form is valid or not like this shows me that the form is actually not valid:</p>
<pre><code>class ArticleSubmitView(CreateView):
model = Article
form_class = ArticleSubmitForm
# success url not necessary because model has get_absolute_url
# however it does not redirect to the article
template_name = 'articles/article-submit.html'
# handle post data from other template/view
# ???
def form_valid(self, form):
print('form is valid')
def form_invalid(self, form):
print('form is invalid')
print(form.errors)
</code></pre>
<p>However I get:<br>
<code>AttributeError at /article-submit/</code><br>
<code>'ArticleSubmitForm' object has no attribute 'errors'</code></p>
<p>Same thing happens when rendering the form as just {{ form }}</p>
| 0 | 2016-07-30T12:49:24Z | 38,681,482 | <p>As it turns out, I don't need django-betterforms. A regular modelform works just fine. There were some other mistakes as well.</p>
<p>This is the code.</p>
<p><code>models.py</code></p>
<pre><code>class Article(models.Model):
author = models.ForeignKey('auth.User')
title = models.CharField(max_length=200)
text = models.TextField()
categories = models.ManyToManyField(Category)
city = models.ForeignKey(City)
def __str__(self):
return self.title
def get_absolute_url(self):
return reverse_lazy('article', kwargs={'pk': self.id})
</code></pre>
<p><code>views.py</code></p>
<pre><code>class ArticleSubmitView(CreateView):
model = Article
form_class = ArticleForm
template_name = 'articles/article-submit.html'
def form_valid(self, form):
print('form is valid')
print(form.data)
obj = form.save(commit=False)
obj.author = self.request.user
obj.save()
return HttpResponseRedirect(reverse('article', kwargs={'pk': obj.id}))
</code></pre>
<p>The urls and templates remain (largely) as above.</p>
| 0 | 2016-07-31T06:44:14Z | [
"python",
"django",
"django-forms",
"django-templates",
"django-class-based-views"
] |
Python 2.7 converting list of strings to dictionary | 38,674,163 | <p>I am trying to transform a list of the type</p>
<pre><code>list = ['a=1','b=2','c=3','d=4']
</code></pre>
<p>to a dictionary of the type</p>
<pre><code>dictionary = {'a':'1', 'b':'2', 'c':'3', 'd':'4'}
</code></pre>
<p>How can I do this?</p>
| -5 | 2016-07-30T12:51:14Z | 38,674,188 | <p>This should work for you:</p>
<pre><code>items = ['a=1','b=2','c=3','d=4']
d = {}
for item in items:
a, b = item.split('=')
d.update({a:b})
print d # {'a': '1', 'c': '3', 'b': '2', 'd': '4'}
</code></pre>
<p>Good Luck!</p>
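<p>The same split can also be written as a one-line dict comprehension (an equivalent sketch of my own; it assumes every item contains at least one <code>=</code>, and the <code>print()</code> call is Python 3 syntax — the comprehension itself works in Python 2.7 as well):</p>

```python
items = ['a=1', 'b=2', 'c=3', 'd=4']

# split('=', 1) splits on the first '=' only, so values may themselves contain '='
d = {k: v for k, v in (item.split('=', 1) for item in items)}
print(d)  # d == {'a': '1', 'b': '2', 'c': '3', 'd': '4'}
```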
| 0 | 2016-07-30T12:54:07Z | [
"python",
"data-type-conversion"
] |
Keras VGG extract features | 38,674,189 | <p>I have loaded a pre-trained VGG face CNN and have run it successfully. I want to extract the hyper-column average from layers 3 and 8. I was following the section about extracting hyper-columns from <a href="http://blog.christianperone.com/2016/01/convolutional-hypercolumns-in-python/">here</a>. However, since the get_output function was not working, I had to make a few changes:</p>
<p>Imports:</p>
<pre><code>import matplotlib.pyplot as plt
import theano
from scipy import misc
import scipy as sp
from PIL import Image
import PIL.ImageOps
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD
import numpy as np
from keras import backend as K
</code></pre>
<p>Main function:</p>
<pre><code>#after necessary processing of input to get im
layers_extract = [3, 8]
hc = extract_hypercolumn(model, layers_extract, im)
ave = np.average(hc.transpose(1, 2, 0), axis=2)
print(ave.shape)
plt.imshow(ave)
plt.show()
</code></pre>
<p>Get features function:(I followed <a href="https://github.com/fchollet/keras/issues/3166#issuecomment-231168207">this</a>)</p>
<pre><code>def get_features(model, layer, X_batch):
get_features = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
features = get_features([X_batch,0])
return features
</code></pre>
<p>Hyper-column extraction:</p>
<pre><code>def extract_hypercolumn(model, layer_indexes, instance):
layers = [K.function([model.layers[0].input],[model.layers[li].output])([instance])[0] for li in layer_indexes]
feature_maps = get_features(model,layers,instance)
hypercolumns = []
for convmap in feature_maps:
for fmap in convmap[0]:
upscaled = sp.misc.imresize(fmap, size=(224, 224),mode="F", interp='bilinear')
hypercolumns.append(upscaled)
return np.asarray(hypercolumns)
</code></pre>
<p>However, when I run the code, I'm getting the following error:</p>
<pre><code>get_features = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
TypeError: list indices must be integers, not list
</code></pre>
<p>How can I fix this?</p>
<p>NOTE:</p>
<p>In the hyper-column extraction function, when I use <code>feature_maps = get_features(model,1,instance)</code> or any integer in place of 1, it works fine. But I want to extract the average from layers 3 to 8.</p>
| 5 | 2016-07-30T12:54:12Z | 38,760,402 | <p>It confused me a lot:</p>
<ol>
<li>After <code>layers = [K.function([model.layers[0].input],[model.layers[li].output])([instance])[0] for li in layer_indexes]</code>, <code>layers</code> is a list of extracted features.</li>
<li>And then you send that list into <code>feature_maps = get_features(model,layers,instance)</code>.</li>
<li>In <code>def get_features(model, layer, X_batch):</code>, the second parameter, namely <code>layer</code>, is used to index into <code>model.layers[layer].output</code>.</li>
</ol>
<p>What you want is:</p>
<ol>
<li><code>feature_maps = get_features(model,</code><strong>layer_indexes</strong><code>,instance)</code>: passing layer indices rather than extracted features.</li>
<li><code>get_features = K.function([model.layers[0].input, K.learning_phase()], [</code><strong>model.layers[l].output for l in layer</strong><code>])</code>: a list cannot be used to index a list.</li>
</ol>
<p>Still, your feature extraction function is horribly written. I suggest you rewrite everything rather than mixing code.</p>
| 0 | 2016-08-04T06:54:35Z | [
"python",
"deep-learning",
"theano",
"keras"
] |
Python/Matplotlib: 2d random walk with kde joint density contour in a 3d plot | 38,674,204 | <p>I'm struggling with creating a quite complex 3d figure in python, specifically using iPython notebook. I can partition the content of the graph into two sections:</p>
<p><strong>The (x,y) plane:</strong> Here a two-dimensional random walk is bobbing around, let's call it G(). I would like to plot part of this trajectory on the (x,y) plane. Say, 10% of all the data points of G(). As G() bobs around, it visits some (x,y) pairs more frequently than others. I would like to estimate this density of G() using a kernel estimation approach and draw it as contour lines on the (x,y) plane. </p>
<p><strong>The (z) plane:</strong> Here, I would like to draw a mesh or (transparent) surface plot of the <em>information theoretical surprise of a bivariate normal</em>. Surprise is simply -log(p(i)) or the negative (base 2) logarithm of outcome i. Given the bivariate normal, each (x,y) pair has some probability p(x,y) and the surprise of this is simply -log(p(x,y)). </p>
<p>Essentially these two graphs are independent. Assume the interval of the random walk G() is [xmin,xmax],[ymin,ymax] and of size N. The bivariate normal in the z-plane should be drawn from the same interval, such that for each (x,y) pair in the random walk, I can draw a (dashed) line from some subset of the random walk n < N to the bivariate normal. Assume that G(10) = (5,5) then I would like to draw a dashed line from (5,5) up the Z-axes, until it hits the bivariate normal. </p>
<p>So far, I've managed to plot G() in a 3-d space, and estimate the density f(X,Y) using scipy.stats.gaussian_kde. In another (2d) graph, I have the sort of contour lines I want. What I don't have, is the contour lines in the 3d-plot using the estimated KDE density. I also don't have the bivariate normal plot, or the projection of a few random points from the random walk, to the surface of the bivariate normal. I've added a hand drawn figure, which might ease intuition (ignore the label on the z-axis and the fact that there is no mesh.. difficult to draw!) </p>
<p>Any input, even just partial, such as how to draw the contour lines in the (x,y) plane of the 3d graph, or a mesh of a bivariate normal would be much appreciated. </p>
<p>Thanks!</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import seaborn as sns
import scipy
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def randomwalk():
mpl.rcParams['legend.fontsize'] = 10
xyz = []
cur = [0, 0]
for _ in range(400):
axis = random.randrange(0, 2)
cur[axis] += random.choice([-1, 1])
xyz.append(cur[:])
x, y = zip(*xyz)
data = np.vstack([x,y])
kde = scipy.stats.gaussian_kde(data)
density = kde(data)
fig1 = plt.figure()
ax = fig1.gca(projection='3d')
ax.plot(x, y, label='Random walk')
sns.kdeplot(data[0,:], data[1,:], 0)
ax.scatter(x[-1], y[-1], c='b', marker='o') # End point
ax.legend()
fig2 = plt.figure()
sns.kdeplot(data[0,:], data[1,:])
</code></pre>
<p>Calling randomwalk() initialises and plots this: </p>
<p><a href="http://i.stack.imgur.com/cSFSl.png" rel="nofollow"><img src="http://i.stack.imgur.com/cSFSl.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/jGRes.png" rel="nofollow"><img src="http://i.stack.imgur.com/jGRes.png" alt="enter image description here"></a></p>
<p><strong>Edit #1:</strong> </p>
<p>Made some progress, actually the only thing I need is to restrict the height of the dashed vertical lines to the bivariate. Any ideas?</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import seaborn as sns
import scipy
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.mlab import bivariate_normal
%matplotlib inline
# Data for random walk
def randomwalk():
mpl.rcParams['legend.fontsize'] = 10
xyz = []
cur = [0, 0]
for _ in range(40):
axis = random.randrange(0, 2)
cur[axis] += random.choice([-1, 1])
xyz.append(cur[:])
# Get density
x, y = zip(*xyz)
data = np.vstack([x,y])
kde = scipy.stats.gaussian_kde(data)
density = kde(data)
# Data for bivariate gaussian
a = np.linspace(-7.5, 7.5, 20)
b = a
X,Y = np.meshgrid(a, b)
Z = bivariate_normal(X, Y)
surprise_Z = -np.log(Z)
# Get random points from walker and plot up z-axis to the gaussian
M = data[:,np.random.choice(20,5)].T
# Plot figure
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
ax.plot(x, y, 'grey', label='Random walk') # Walker
ax.scatter(x[-1], y[-1], c='k', marker='o') # End point
ax.legend()
surf = ax.plot_surface(X, Y, surprise_Z, rstride=1, cstride=1,
cmap = plt.cm.gist_heat_r, alpha=0.1, linewidth=0.1)
#fig.colorbar(surf, shrink=0.5, aspect=7, cmap=plt.cm.gray_r)
for i in range(5):
ax.plot([M[i,0], M[i,0]],[M[i,1], M[i,1]], [0,10],'k--',alpha=0.8, linewidth=0.5)
ax.set_zlim(0, 50)
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
</code></pre>
<p><a href="http://i.stack.imgur.com/4vJSw.png" rel="nofollow"><img src="http://i.stack.imgur.com/4vJSw.png" alt="enter image description here"></a></p>
| 2 | 2016-07-30T12:55:27Z | 38,714,154 | <p>Final code, </p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import seaborn as sns
import scipy
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.mlab import bivariate_normal
%matplotlib inline
# Data for random walk
def randomwalk():
mpl.rcParams['legend.fontsize'] = 10
xyz = []
cur = [0, 0]
for _ in range(50):
axis = random.randrange(0, 2)
cur[axis] += random.choice([-1, 1])
xyz.append(cur[:])
# Get density
x, y = zip(*xyz)
data = np.vstack([x,y])
kde = scipy.stats.gaussian_kde(data)
density = kde(data)
# Data for bivariate gaussian
a = np.linspace(-7.5, 7.5, 100)
b = a
X,Y = np.meshgrid(a, b)
Z = bivariate_normal(X, Y)
surprise_Z = -np.log(Z)
# Get random points from walker and plot up z-axis to the gaussian
M = data[:,np.random.choice(50,10)].T
# Plot figure
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
ax.plot(x, y, 'grey', label='Random walk') # Walker
ax.legend()
surf = ax.plot_surface(X, Y, surprise_Z, rstride=1, cstride=1,
cmap = plt.cm.gist_heat_r, alpha=0.1, linewidth=0.1)
#fig.colorbar(surf, shrink=0.5, aspect=7, cmap=plt.cm.gray_r)
for i in range(10):
x = [M[i,0], M[i,0]]
y = [M[i,1], M[i,1]]
z = [0,-np.log(bivariate_normal(M[i,0],M[i,1]))]
ax.plot(x,y,z,'k--',alpha=0.8, linewidth=0.5)
ax.scatter(x, y, z, c='k', marker='o')
</code></pre>
<p><a href="http://i.stack.imgur.com/u2uwG.png" rel="nofollow"><img src="http://i.stack.imgur.com/u2uwG.png" alt="enter image description here"></a></p>
| 0 | 2016-08-02T07:22:21Z | [
"python",
"matplotlib",
"plot",
"3d"
] |
Python __super black magic failed | 38,674,218 | <p>I want to add an attribute for every class created by a metaclass. For example, when a class named <code>C</code> is created, I want add an attribute <code>C._C__sup</code> whose value is the descriptor <code>super(C)</code>.</p>
<p>Here is what I've tried:</p>
<pre><code>class Meta(type):
def __init__(cls, name, bases, dict): # Not overriding type.__new__
cls.__dict__['_' + name + '__sup'] = super(cls)
# Not calling type.__init__; do I need it?
class C(object):
__metaclass__ = Meta
c = C()
print c._C__sup
</code></pre>
<p>This gives me:</p>
<pre><code>TypeError: Error when calling the metaclass bases
'dictproxy' object does not support item assignment
</code></pre>
<hr>
<p>Some background information:<br>
<sub>(You don't have to read this part)</sub></p>
<p>Inspired by <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=236278" rel="nofollow">this article</a>, what I'm doing is trying to avoid "hardcoding" the class name when using <code>super</code>:</p>
<blockquote>
<p>The idea there is to use the unbound super objects as private
attributes. For instance, in our example, we could define the private
attribute <code>__sup</code> in the class <code>C</code> as the unbound super object
<code>super(C)</code>:</p>
<pre><code>>>> C._C__sup = super(C)
</code></pre>
<p>With this definition inside the methods the syntax <code>self.__sup.meth</code>
can be used as an alternative to <code>super(C, self).meth</code>. The advantage
is that you avoid to repeat the name of the class in the calling
syntax, since that name is hidden in the mangling mechanism of private
names. <strong>The creation of the <code>__sup</code> attributes can be hidden in a
metaclass and made automatic.</strong> So, all this seems to work: but
actually this not the case.</p>
</blockquote>
| 0 | 2016-07-30T12:57:24Z | 38,674,382 | <p>Use <code>setattr</code> instead of assignment to <code>cls.__dict__</code>:</p>
<pre><code>class Meta(type):
def __init__(cls, name, bases, clsdict): # Not overriding type.__new__
setattr(cls, '_' + name + '__sup', super(cls))
super(Meta, cls).__init__(name, bases, clsdict)
class C(object):
__metaclass__ = Meta
def say(self):
return 'wow'
class D(C):
def say(self):
return 'bow' + self.__sup.say()
c = C()
print(c._C__sup)
# <super: <class 'C'>, <C object>>
d = D()
print(d.say())
</code></pre>
<p>prints</p>
<pre><code>bowwow
</code></pre>
<hr>
<p>By the way, it is a good idea to call</p>
<pre><code> super(Meta, cls).__init__(name, bases, clsdict)
</code></pre>
<p>inside <code>Meta.__init__</code> to allow <code>Meta</code> to participate in class hierarchies which
might need <code>super</code> to properly call a chain of <code>__init__</code>s. This seems
particularly appropriate since you are building a metaclass to assist with the
use of <code>super</code>.</p>
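<p>(Hedged aside, not part of the original answer.) The same idea carries over to Python 3, where unbound <code>super</code> objects still act as descriptors; only the metaclass hookup syntax changes. A minimal sketch, reusing the answer's class names:</p>

```python
# Python 3 version of the metaclass above.
class Meta(type):
    def __init__(cls, name, bases, clsdict):
        # Store super(cls) under the name-mangled attribute _<name>__sup.
        setattr(cls, '_' + name + '__sup', super(cls))
        super().__init__(name, bases, clsdict)  # stay cooperative in metaclass hierarchies

class C(metaclass=Meta):
    def say(self):
        return 'wow'

class D(C):
    def say(self):
        # self.__sup mangles to self._D__sup; its descriptor __get__
        # yields super(D, self), so the class name is never repeated here.
        return 'bow' + self.__sup.say()

print(D().say())  # prints: bowwow
```

<p>Accessing <code>self.__sup</code> triggers the descriptor protocol on the stored unbound <code>super</code> object, binding it to the instance at lookup time.</p>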
| 2 | 2016-07-30T13:20:11Z | [
"python",
"python-2.7",
"attributes",
"metaclass",
"descriptor"
] |
How to get the token with django rest framework and ajax | 38,674,266 | <p>I want to build a rest api where users can authenticate with tokens. I have included <code>rest_framework.authtoken</code> in the installed apps list. Also added the required configuration in the settings.py:</p>
<pre><code>INSTALLED_APPS = (
...
'rest_framework.authtoken'
)
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.TokenAuthentication',
'rest_framework.authentication.SessionAuthentication',
)
}
</code></pre>
<p>Defined a method to listen to <code>post_save</code> signal and create new token for the newly created user. Then I did migration. After creating new user I can see the token for that user.</p>
<p>Also if I do</p>
<pre><code>http POST 0.0.0.0:80/api-token-auth/ username='user@gmail.com' password='secure123'
</code></pre>
<p>I get this response back</p>
<pre><code>HTTP/1.0 200 OK
Allow: POST, OPTIONS
Content-Type: application/json
Date: Sat, 30 Jul 2016 12:05:30 GMT
Server: WSGIServer/0.1 Python/2.7.3
Vary: Cookie
X-Frame-Options: SAMEORIGIN
{
"token": "4aecfb249265064c55300d782e4c7e66b8b77063"
}
</code></pre>
<p>So I suppose it's working. But if I try to log in with ajax:</p>
<pre><code>$.ajax({
url: 'http://test.com/api-token-auth/ username=' + email + ' password='+ password,
dataType: 'json',
cache: false,
success: function(data) {
console.log(data);
}.bind(this),
error: function(xhr, status, err) {
console.log(err);
}.bind(this)
});
</code></pre>
<p>I get this error in the browser console:</p>
<blockquote>
<p>jquery-3.1.0.min.js:4 GET <a href="http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618" rel="nofollow">http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618</a> 405 (Method Not Allowed)</p>
<p>bundle.js:27453 Method Not Allowed</p>
</blockquote>
<p>How do I get the token for authenticated user so that I can use it to post as authenticated user?</p>
<p><strong>update</strong></p>
<p>Also I am using <a href="https://github.com/ottoyiu/django-cors-headers" rel="nofollow">django-cors-headers</a> to deal with CORS related problem.</p>
<p><strong>update</strong></p>
<pre><code>%20username=user@gmail.com%20password=secure123?_=1469885103431 405 xhr jquery-3.1.0.min.js:4 278 B 29 ms
</code></pre>
<p><strong>update: added response header</strong></p>
<p><a href="http://i.stack.imgur.com/Qx7lE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qx7lE.jpg" alt="enter image description here"></a></p>
| 1 | 2016-07-30T13:04:09Z | 38,674,510 | <p>By default the <code>type</code> (or <code>method</code>) value is set to 'GET' in the jQuery.ajax() method. It looks like the response you get allows only 'POST' and 'OPTIONS'.</p>
<p>How about trying to set the <code>type</code> or <code>method</code> value to 'POST' in your jQuery.ajax() call?</p>
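<p>(Illustrative aside, not part of the original answer.) In Python terms, the fix amounts to sending the credentials in a request body rather than in the URL; a request that carries a body defaults to POST. A sketch with the standard library (the endpoint and field names are taken from the question; the request is built but never sent):</p>

```python
import json
import urllib.request

# Hypothetical sketch: put the credentials in the body, not the URL.
payload = json.dumps({'username': 'user@gmail.com', 'password': 'secure123'}).encode()
req = urllib.request.Request(
    'http://test.com/api-token-auth/',   # no credentials appended to the URL
    data=payload,
    headers={'Content-Type': 'application/json'},
)
print(req.get_method())  # a Request with a body defaults to POST
```

<p>The jQuery call needs to produce the same shape of request: an explicit POST with a JSON body.</p>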
| 0 | 2016-07-30T13:34:39Z | [
"python",
"ajax",
"django",
"rest",
"django-rest-framework"
] |
How to get the token with django rest framework and ajax | 38,674,266 | <p>I want to build a rest api where users can authenticate with tokens. I have included <code>rest_framework.authtoken</code> in the installed apps list. Also added the required configuration in the settings.py:</p>
<pre><code>INSTALLED_APPS = (
...
'rest_framework.authtoken'
)
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.TokenAuthentication',
'rest_framework.authentication.SessionAuthentication',
)
}
</code></pre>
<p>Defined a method to listen to <code>post_save</code> signal and create new token for the newly created user. Then I did migration. After creating new user I can see the token for that user.</p>
<p>Also if I do</p>
<pre><code>http POST 0.0.0.0:80/api-token-auth/ username='user@gmail.com' password='secure123'
</code></pre>
<p>I get this response back</p>
<pre><code>HTTP/1.0 200 OK
Allow: POST, OPTIONS
Content-Type: application/json
Date: Sat, 30 Jul 2016 12:05:30 GMT
Server: WSGIServer/0.1 Python/2.7.3
Vary: Cookie
X-Frame-Options: SAMEORIGIN
{
"token": "4aecfb249265064c55300d782e4c7e66b8b77063"
}
</code></pre>
<p>So I suppose it's working. But if I try to log in with ajax:</p>
<pre><code>$.ajax({
url: 'http://test.com/api-token-auth/ username=' + email + ' password='+ password,
dataType: 'json',
cache: false,
success: function(data) {
console.log(data);
}.bind(this),
error: function(xhr, status, err) {
console.log(err);
}.bind(this)
});
</code></pre>
<p>I get this error in the browser console:</p>
<blockquote>
<p>jquery-3.1.0.min.js:4 GET <a href="http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618" rel="nofollow">http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618</a> 405 (Method Not Allowed)</p>
<p>bundle.js:27453 Method Not Allowed</p>
</blockquote>
<p>How do I get the token for authenticated user so that I can use it to post as authenticated user?</p>
<p><strong>update</strong></p>
<p>Also I am using <a href="https://github.com/ottoyiu/django-cors-headers" rel="nofollow">django-cors-headers</a> to deal with CORS related problem.</p>
<p><strong>update</strong></p>
<pre><code>%20username=user@gmail.com%20password=secure123?_=1469885103431 405 xhr jquery-3.1.0.min.js:4 278 B 29 ms
</code></pre>
<p><strong>update: added response header</strong></p>
<p><a href="http://i.stack.imgur.com/Qx7lE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qx7lE.jpg" alt="enter image description here"></a></p>
| 1 | 2016-07-30T13:04:09Z | 38,674,669 | <p>This is a CORS issue. To test it, try Chromium started with <code>chromium-browser --disable-web-security</code>.</p>
| 0 | 2016-07-30T13:51:19Z | [
"python",
"ajax",
"django",
"rest",
"django-rest-framework"
] |
How to get the token with django rest framework and ajax | 38,674,266 | <p>I want to build a rest api where users can authenticate with tokens. I have included <code>rest_framework.authtoken</code> in the installed apps list. Also added the required configuration in the settings.py:</p>
<pre><code>INSTALLED_APPS = (
...
'rest_framework.authtoken'
)
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.TokenAuthentication',
'rest_framework.authentication.SessionAuthentication',
)
}
</code></pre>
<p>Defined a method to listen to <code>post_save</code> signal and create new token for the newly created user. Then I did migration. After creating new user I can see the token for that user.</p>
<p>Also if I do</p>
<pre><code>http POST 0.0.0.0:80/api-token-auth/ username='user@gmail.com' password='secure123'
</code></pre>
<p>I get this response back</p>
<pre><code>HTTP/1.0 200 OK
Allow: POST, OPTIONS
Content-Type: application/json
Date: Sat, 30 Jul 2016 12:05:30 GMT
Server: WSGIServer/0.1 Python/2.7.3
Vary: Cookie
X-Frame-Options: SAMEORIGIN
{
"token": "4aecfb249265064c55300d782e4c7e66b8b77063"
}
</code></pre>
<p>So I suppose it's working. But if I try to log in with ajax:</p>
<pre><code>$.ajax({
url: 'http://test.com/api-token-auth/ username=' + email + ' password='+ password,
dataType: 'json',
cache: false,
success: function(data) {
console.log(data);
}.bind(this),
error: function(xhr, status, err) {
console.log(err);
}.bind(this)
});
</code></pre>
<p>I get this error in the browser console:</p>
<blockquote>
<p>jquery-3.1.0.min.js:4 GET <a href="http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618" rel="nofollow">http://test.com/api-token-auth/%20username=user@gmail.com%20password=secure123?_=1469883569618</a> 405 (Method Not Allowed)</p>
<p>bundle.js:27453 Method Not Allowed</p>
</blockquote>
<p>How do I get the token for authenticated user so that I can use it to post as authenticated user?</p>
<p><strong>update</strong></p>
<p>Also I am using <a href="https://github.com/ottoyiu/django-cors-headers" rel="nofollow">django-cors-headers</a> to deal with CORS related problem.</p>
<p><strong>update</strong></p>
<pre><code>%20username=user@gmail.com%20password=secure123?_=1469885103431 405 xhr jquery-3.1.0.min.js:4 278 B 29 ms
</code></pre>
<p><strong>update: added response header</strong></p>
<p><a href="http://i.stack.imgur.com/Qx7lE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qx7lE.jpg" alt="enter image description here"></a></p>
| 1 | 2016-07-30T13:04:09Z | 38,681,334 | <p>The problem here is that instead of sending the data in the request payload, you are passing it through the URL, as in a GET request.</p>
<p>The request you generate using <code>httpie</code> works because it correctly generates the request, as you can see:</p>
<p><strong>Request:</strong></p>
<pre><code>http POST 0.0.0.0:80/api-token-auth/ username='user@gmail.com' password='secure123'
</code></pre>
<p><strong>Request (verbose mode):</strong></p>
<pre><code>POST /api-token-auth/ HTTP/1.1
Accept: application/json
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 55
Content-Type: application/json
Host: 0.0.0.0:8000
User-Agent: HTTPie/0.9.3
{
"password": "secure123",
"username": "user@gmail.com"
}
</code></pre>
<p>Which is different from:</p>
<p><strong>Request:</strong></p>
<pre><code>http POST "0.0.0.0:8000/api-token-auth/ username='user@gmail.com' password='secure123'"
</code></pre>
<p><strong>Request (verbose mode):</strong></p>
<pre><code>POST /api-token-auth/%20username='user@gmail.com'%20password='secure123' HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: 0.0.0.0:8000
User-Agent: HTTPie/0.9.3
</code></pre>
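<p>(Illustrative aside, not part of the original answer.) The browser does the same thing with the concatenated string from the jQuery call: everything after the path is percent-encoded and shipped as part of the URL, which is why the 405 error shows <code>%20username=...</code>. Python's <code>urllib.parse.quote</code> is stricter than a browser about which characters it escapes, but it shows the effect:</p>

```python
from urllib.parse import quote

# The whole concatenated string becomes part of the URL path;
# each space turns into %20, as seen in the 405 error message.
path = "/api-token-auth/ username=user@gmail.com password=secure123"
print(quote(path))
```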
<p>You can try this example I found <a href="http://stackoverflow.com/questions/23149151/jquery-and-django-rest-framework-bulk-send-list">here</a> of how to send the data the correct way using jQuery. (I didn't test it.)</p>
<pre><code>$.ajax({
type: "POST",
url: "/api/articles/",
data: JSON.stringify(data),
    success: function() { console.log("Success!"); },
contentType: "application/json; charset=utf-8",
dataType: "json",
crossDomain:false,
beforeSend: function(xhr, settings) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
});
</code></pre>
| 0 | 2016-07-31T06:20:54Z | [
"python",
"ajax",
"django",
"rest",
"django-rest-framework"
] |
How to properly wrap std::vector<std::size_t> with SWIG for Python? Problems with std::size_t | 38,674,268 | <p>I'm trying to get <code>std::vector<std::size_t></code> to work with SWIG. I need to provide a python interface to a c++ library. <code>std::vector</code>s of primitive types and objects are working fine but there is a problem with <code>std::size_t</code>.</p>
<p>I provide an MCVE on GitHub <a href="https://github.com/paulbible/swig_std_size_t" rel="nofollow">here</a>.</p>
<h2>Main issue</h2>
<p>Basically the problem is that <code>std::size_t</code> is not recognized and <code>std::vector<std::size_t></code> is treated as <code>std::vector< int,std::allocator< int > > *</code>. When I try to specify the template, I get the following.</p>
<p>Using <code>%template(VecSize) std::vector<std::size_t>;</code> gives:</p>
<pre><code>swig -c++ -python c_swig_vec_std_size.i
:0: Warning(490): Fragment 'SWIG_AsVal_std_size_t' not found.
:0: Warning(490): Fragment 'SWIG_From_std_size_t' not found.
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
c_swig_vec_std_size_wrap.cxx: In static member function ‘static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)’:
c_swig_vec_std_size_wrap.cxx:4289: error: ‘SWIG_AsVal_std_size_t’ was not declared in this scope
c_swig_vec_std_size_wrap.cxx: In static member function ‘static PyObject* swig::traits_from<long unsigned int>::from(const swig::traits_from::value_type&)’:
c_swig_vec_std_size_wrap.cxx:4295: error: ‘SWIG_From_std_size_t’ was not declared in this scope
make: *** [c] Error 1
</code></pre>
<h2>Minimum Example</h2>
<h3>Example c++ class</h3>
<p>The following class is enough to show the functionality that I need. The <code>std::vector<int></code> is included to show the intended behavior.</p>
<p>class_vec_std_size.hpp</p>
<pre><code>#ifndef STD_SIZE_VEC
#define STD_SIZE_VEC
#include <vector>
class StdSizeVec{
public:
StdSizeVec(){
_myVec = std::vector<std::size_t>();
_myVec.push_back(1);
_myVec.push_back(2);
_myInts = std::vector<int>();
_myInts.push_back(1);
_myInts.push_back(2);
}
~StdSizeVec(){
_myVec.clear();
}
inline std::vector<std::size_t> getValues(){
return _myVec;
}
inline std::vector<int> getInts(){
return _myInts;
}
private:
std::vector<std::size_t> _myVec;
std::vector<int> _myInts;
};
#endif
</code></pre>
<h2>Various attempts at the interface</h2>
<h3>a_swig_vec_std_size.i</h3>
<pre><code>%module a_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from a_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t > *' at 0x2ad7047be330>
>>> ints = ssv.getInts()
>>> ints
<Swig Object of type 'std::vector< int > *' at 0x2ad7047be780>
>>> exit()
swig/python detected a memory leak of type 'std::vector< int > *', no destructor found.
swig/python detected a memory leak of type 'std::vector< std::size_t > *', no destructor found.
[paul@login-0-0 stack_swig]$
</code></pre>
<p>This is the basic naive approach. The pointers are not useful in python and there are memory leak messages that we can not expose to the users of the interface.</p>
<h3>b_swig_vec_std_size.i</h3>
<pre><code>%module b_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from b_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2aee17458330>
>>> ints = ssv.getInts()
>>> ints
<Swig Object of type 'std::vector< int,std::allocator< int > > *' at 0x2aee17458930>
>>> exit()
swig/python detected a memory leak of type 'std::vector< int,std::allocator< int > > *', no destructor found.
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
</code></pre>
<p>Using the correct "std_vector.i", SWIG knows more about the vector and allocators but still these pointers are not useful to client code in python and there are memory leak error messages.</p>
<h3>c_swig_vec_std_size.i</h3>
<p>This interface uses the correct <code>%template</code> directives like <a href="http://stackoverflow.com/questions/13587791/swig-and-c-memory-leak-with-vector-of-pointers">this answer</a>. Here SWIG does not understand <code>std::size_t</code> as a template argument.</p>
<pre><code>%module c_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
// Does not compile
//%template(VecSize) std::vector<std::size_t>;
//
// Gives the following errors
//swig -c++ -python c_swig_vec_std_size.i
// :0: Warning(490): Fragment 'SWIG_AsVal_std_size_t' not found.
// :0: Warning(490): Fragment 'SWIG_From_std_size_t' not found.
// g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
// c_swig_vec_std_size_wrap.cxx: In static member function ‘static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)’:
// c_swig_vec_std_size_wrap.cxx:4289: error: ‘SWIG_AsVal_std_size_t’ was not declared in this scope
// c_swig_vec_std_size_wrap.cxx: In static member function ‘static PyObject* swig::traits_from<long unsigned int>::from(const swig::traits_from::value_type&)’:
// c_swig_vec_std_size_wrap.cxx:4295: error: ‘SWIG_From_std_size_t’ was not declared in this scope
// make: *** [c] Error 1
//The following compiles but does not work
%template(VecSize) std::vector<size_t>;
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from c_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2b286104bd80>
>>> ints = ssv.getInts()
>>> ints
(1, 2)
>>> exit()
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
</code></pre>
<p>Now the <code>std::vector<int></code> is working properly but SWIG's <code>%template(VecSize) std::vector<size_t>;</code> (without <code>std::</code>) does not do the job.</p>
<h3>Some internet digging</h3>
<p>I found a few posts which offer some clues.</p>
<p>Feeling like <a href="https://xkcd.com/979/" rel="nofollow">this</a>, I found a <a href="http://swig.10945.n7.nabble.com/Bug-with-1-3-29-td2034.html" rel="nofollow">2006 post with the same problem</a>.</p>
<p>The <a href="http://swig.10945.n7.nabble.com/std-vector-size-type-wrapped-as-a-pointer-not-an-integer-td7790.html" rel="nofollow">std::vector::size_type wrapped as a pointer not an integer</a> link had some helpful info but the problem is not exactly the same.</p>
<p>I found this <a href="https://github.com/c-abird/magnum.fe/blob/master/magnumfe/swig/typemaps/primitives.i" rel="nofollow">primitives.i</a> from the magnum.fe project, but thinking wishfully and importing <strong>primitives.i</strong> did not work for me. </p>
<p>After that I tried to implement the <code>SWIG_AsVal_std_size_t</code> and <code>SWIG_From_std_size_t</code> similar to their approach, but no luck.</p>
<h3>hand rolled std_size_t.i</h3>
<pre><code>%fragment("SWIG_From_std_size_t", "header", fragment=SWIG_From_frag(std::size_t))
{
SWIGINTERNINLINE PyObject * SWIG_From_std_size_t(std::size_t value)
{
return PyInt_FromSize_t(value);
}
}
%fragment("SWIG_AsVal_std_size_t", "header")
{
SWIGINTERNINLINE bool SWIG_AsVal_std_size_t(PyObject* in, std::size_t& value)
{
// Get integer type
if(PyInt_Check(in)){
long unsigned int long_uint = PyLong_AsLong(in);
value = static_cast<std::size_t>(long_uint);
return true;
}else{
return false;
}
}
}
%fragment(SWIG_From_frag(std::size_t));
%fragment("SWIG_AsVal_std_size_t");
</code></pre>
<p>This was imported in <strong>d_swig_vec_std_size.i</strong>, but it does not compile.</p>
<pre><code>%module d_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
%include "std_size_t.i"
%template(VecSize) std::vector<std::size_t>;
%include "class_vec_std_size.hpp"
</code></pre>
<p>Here I get this.</p>
<pre><code>swig -c++ -python d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
d_swig_vec_std_size_wrap.cxx: In static member function ‘static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)’:
d_swig_vec_std_size_wrap.cxx:4311: error: invalid initialization of reference of type ‘size_t&’ from expression of type ‘swig::traits_asval::value_type*’
d_swig_vec_std_size_wrap.cxx:4288: error: in passing argument 2 of ‘bool SWIG_AsVal_std_size_t(PyObject*, size_t&)’
make: *** [d] Error 1
</code></pre>
<h2>makefile</h2>
<pre><code>PYTHON=/public/users/paul/dev/software/Python-2.7.11
all: a b c d
a:
swig -c++ -python a_swig_vec_std_size.i
g++ -fpic -c a_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared a_swig_vec_std_size_wrap.o -o _a_swig_vec_std_size.so
b:
swig -c++ -python b_swig_vec_std_size.i
g++ -fpic -c b_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared b_swig_vec_std_size_wrap.o -o _b_swig_vec_std_size.so
c:
swig -c++ -python c_swig_vec_std_size.i
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared c_swig_vec_std_size_wrap.o -o _c_swig_vec_std_size.so
d:
swig -c++ -python d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared d_swig_vec_std_size_wrap.o -o _d_swig_vec_std_size.so
clean: clean_a clean_b clean_c clean_d
clean_a:
rm a_swig_vec_std_size_wrap.cxx a_swig_vec_std_size.py a_swig_vec_std_size_wrap.o _a_swig_vec_std_size.so
clean_b:
rm b_swig_vec_std_size_wrap.cxx b_swig_vec_std_size.py b_swig_vec_std_size_wrap.o _b_swig_vec_std_size.so
clean_c:
rm c_swig_vec_std_size_wrap.cxx c_swig_vec_std_size.py c_swig_vec_std_size_wrap.o _c_swig_vec_std_size.so
clean_d:
rm d_swig_vec_std_size_wrap.cxx d_swig_vec_std_size.py d_swig_vec_std_size_wrap.o _d_swig_vec_std_size.so
</code></pre>
<h2>Program Versions</h2>
<pre><code>python version: Python 2.7.11
g++ version: g++ (GCC) 4.4.7
swig version: SWIG Version 1.3.40
</code></pre>
<p>Using a newer swig version (swig-3.0.10) gives the same result for me.</p>
<h1>Summary</h1>
<p>I suspect the answer may be along the lines of interface <strong>d</strong> somewhere, but I have had no luck so far. There could be a problem with how <code>std::size_t</code> is implemented differently depending on the architecture instead of having a fixed size. In any case, I would expect SWIG to be able to handle it. Am I missing something? I would like to find a solution that does not involve making changes to the C++ library (such as encapsulating std::size_t in a <code>struct</code> or using <code>int</code> instead).</p>
<h2>Trying Jens Monk's Solution</h2>
<pre><code>namespace std {
%template(VecSize) vector<size_t>;
}
</code></pre>
<p>I get this:</p>
<pre><code>[paul@login-0-0 stack_swig]$ make clean
rm a_swig_vec_std_size_wrap.cxx a_swig_vec_std_size.py a_swig_vec_std_size_wrap.o _a_swig_vec_std_size.so
rm b_swig_vec_std_size_wrap.cxx b_swig_vec_std_size.py b_swig_vec_std_size_wrap.o _b_swig_vec_std_size.so
rm c_swig_vec_std_size_wrap.cxx c_swig_vec_std_size.py c_swig_vec_std_size_wrap.o _c_swig_vec_std_size.so
rm d_swig_vec_std_size_wrap.cxx d_swig_vec_std_size.py d_swig_vec_std_size_wrap.o _d_swig_vec_std_size.so
[paul@login-0-0 stack_swig]$ make
swig -c++ -python a_swig_vec_std_size.i
g++ -fpic -c a_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared a_swig_vec_std_size_wrap.o -o _a_swig_vec_std_size.so
swig -c++ -python b_swig_vec_std_size.i
g++ -fpic -c b_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared b_swig_vec_std_size_wrap.o -o _b_swig_vec_std_size.so
swig -c++ -python c_swig_vec_std_size.i
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared c_swig_vec_std_size_wrap.o -o _c_swig_vec_std_size.so
swig -c++ -python -I/public/users/paul/dev/software/swig-3.0.10 d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared d_swig_vec_std_size_wrap.o -o _d_swig_vec_std_size.so
[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from d_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2aba7dd8bd80>
>>> ints - ssv.getInts()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ints' is not defined
>>> ints = ssv.getInts()
>>> ints
(1, 2)
>>> exit()
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
[paul@login-0-0 stack_swig]$ cat d_swig_vec_std_size.i
%module d_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
%include "std_size_t.i"
namespace std {
%template(VecSize) vector<size_t>;
}
%include "class_vec_std_size.hpp"
</code></pre>
| 6 | 2016-07-30T13:04:31Z | 38,706,357 | <p>Instantiate your templates as follows</p>
<pre><code>namespace std {
%template(VecSize) vector<size_t>;
}
</code></pre>
<p>It works here with this change - out of the box. I am using SWIG 3.0.2, g++ 4.9.2 and Python 2.7.9. </p>
<p>I have changed <code>d_swig_vec_std_size.i</code> in your project and the include path to <code>/usr/include/python2.7</code> in your makefile</p>
<pre><code>%module d_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
%include "std_size_t.i"
namespace std {
%template(VecSize) vector<size_t>;
}
%include "class_vec_std_size.hpp"
</code></pre>
| 2 | 2016-08-01T19:08:38Z | [
"python",
"c++",
"vector",
"swig"
] |
How to properly wrap std::vector<std::size_t> with SWIG for Python? Problems with std::size_t | 38,674,268 | <p>I'm trying to get <code>std::vector<std::size_t></code> to work with SWIG. I need to provide a python interface to a c++ library. <code>std::vector</code>s of primitive types and objects are working fine but there is a problem with <code>std::size_t</code>.</p>
<p>I provide an MCVE on GitHub <a href="https://github.com/paulbible/swig_std_size_t" rel="nofollow">here</a>.</p>
<h2>Main issue</h2>
<p>Basically the problem is that <code>std::size_t</code> is not recognized and <code>std::vector<std::size_t></code> is treated as <code>std::vector< int,std::allocator< int > > *</code>. When I try to specify the template, I get the following.</p>
<p>Using <code>%template(VecSize) std::vector<std::size_t>;</code> gives:</p>
<pre><code>swig -c++ -python c_swig_vec_std_size.i
:0: Warning(490): Fragment 'SWIG_AsVal_std_size_t' not found.
:0: Warning(490): Fragment 'SWIG_From_std_size_t' not found.
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
c_swig_vec_std_size_wrap.cxx: In static member function ‘static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)’:
c_swig_vec_std_size_wrap.cxx:4289: error: ‘SWIG_AsVal_std_size_t’ was not declared in this scope
c_swig_vec_std_size_wrap.cxx: In static member function ‘static PyObject* swig::traits_from<long unsigned int>::from(const swig::traits_from::value_type&)’:
c_swig_vec_std_size_wrap.cxx:4295: error: ‘SWIG_From_std_size_t’ was not declared in this scope
make: *** [c] Error 1
</code></pre>
<h2>Minimum Example</h2>
<h3>Example c++ class</h3>
<p>The following class is enough to show the functionality that I need. The <code>std::vector<int></code> is included to show the intended behavior.</p>
<p>class_vec_std_size.hpp</p>
<pre><code>#ifndef STD_SIZE_VEC
#define STD_SIZE_VEC
#include <vector>
class StdSizeVec{
public:
StdSizeVec(){
_myVec = std::vector<std::size_t>();
_myVec.push_back(1);
_myVec.push_back(2);
_myInts = std::vector<int>();
_myInts.push_back(1);
_myInts.push_back(2);
}
~StdSizeVec(){
_myVec.clear();
}
inline std::vector<std::size_t> getValues(){
return _myVec;
}
inline std::vector<int> getInts(){
return _myInts;
}
private:
std::vector<std::size_t> _myVec;
std::vector<int> _myInts;
};
#endif
</code></pre>
<h2>Various attempts at the interface</h2>
<h3>a_swig_vec_std_size.i</h3>
<pre><code>%module a_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from a_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t > *' at 0x2ad7047be330>
>>> ints = ssv.getInts()
>>> ints
<Swig Object of type 'std::vector< int > *' at 0x2ad7047be780>
>>> exit()
swig/python detected a memory leak of type 'std::vector< int > *', no destructor found.
swig/python detected a memory leak of type 'std::vector< std::size_t > *', no destructor found.
[paul@login-0-0 stack_swig]$
</code></pre>
<p>This is the basic naive approach. The pointers are not useful in python and there are memory leak messages that we can not expose to the users of the interface.</p>
<h3>b_swig_vec_std_size.i</h3>
<pre><code>%module b_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from b_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2aee17458330>
>>> ints = ssv.getInts()
>>> ints
<Swig Object of type 'std::vector< int,std::allocator< int > > *' at 0x2aee17458930>
>>> exit()
swig/python detected a memory leak of type 'std::vector< int,std::allocator< int > > *', no destructor found.
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
</code></pre>
<p>Using the correct "std_vector.i", SWIG knows more about the vector and allocators but still these pointers are not useful to client code in python and there are memory leak error messages.</p>
<h3>c_swig_vec_std_size.i</h3>
<p>This interface uses the correct <code>%template</code> directives like <a href="http://stackoverflow.com/questions/13587791/swig-and-c-memory-leak-with-vector-of-pointers">this answer</a>. Here SWIG does not understand <code>std::size_t</code> as a template argument.</p>
<pre><code>%module c_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
// Does not compile
//%template(VecSize) std::vector<std::size_t>;
//
// Gives the following errors
//swig -c++ -python c_swig_vec_std_size.i
// :0: Warning(490): Fragment 'SWIG_AsVal_std_size_t' not found.
// :0: Warning(490): Fragment 'SWIG_From_std_size_t' not found.
// g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
// c_swig_vec_std_size_wrap.cxx: In static member function 'static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)':
// c_swig_vec_std_size_wrap.cxx:4289: error: 'SWIG_AsVal_std_size_t' was not declared in this scope
// c_swig_vec_std_size_wrap.cxx: In static member function 'static PyObject* swig::traits_from<long unsigned int>::from(const swig::traits_from::value_type&)':
// c_swig_vec_std_size_wrap.cxx:4295: error: 'SWIG_From_std_size_t' was not declared in this scope
// make: *** [c] Error 1
//The following compiles but does not work
%template(VecSize) std::vector<size_t>;
%include "class_vec_std_size.hpp"
</code></pre>
<p>Output</p>
<pre><code>[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from c_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2b286104bd80>
>>> ints = ssv.getInts()
>>> ints
(1, 2)
>>> exit()
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
</code></pre>
<p>Now the <code>std::vector<int></code> is working properly but SWIG's <code>%template(VecSize) std::vector<size_t>;</code> (without <code>std::</code>) does not do the job.</p>
<h3>Some internet digging</h3>
<p>I found a few posts which offer some clues.</p>
<p>Feeling like <a href="https://xkcd.com/979/" rel="nofollow">this</a>, I found a <a href="http://swig.10945.n7.nabble.com/Bug-with-1-3-29-td2034.html" rel="nofollow">2006 post with the same problem</a>.</p>
<p>The <a href="http://swig.10945.n7.nabble.com/std-vector-size-type-wrapped-as-a-pointer-not-an-integer-td7790.html" rel="nofollow">std::vector::size_type wrapped as a pointer not an integer</a> link had some helpful info but the problem is not exactly the same.</p>
<p>I found this <a href="https://github.com/c-abird/magnum.fe/blob/master/magnumfe/swig/typemaps/primitives.i" rel="nofollow">primitives.i</a> from the magnum.fe project, but thinking wishfully and importing <strong>primitives.i</strong> did not work for me. </p>
<p>After that I tried to implement the <code>SWIG_AsVal_std_size_t</code> and <code>SWIG_From_std_size_t</code> similar to their approach, but no luck.</p>
<h3>hand rolled std_size_t.i</h3>
<pre><code>%fragment("SWIG_From_std_size_t", "header", fragment=SWIG_From_frag(std::size_t))
{
SWIGINTERNINLINE PyObject * SWIG_From_std_size_t(std::size_t value)
{
return PyInt_FromSize_t(value);
}
}
%fragment("SWIG_AsVal_std_size_t", "header")
{
SWIGINTERNINLINE bool SWIG_AsVal_std_size_t(PyObject* in, std::size_t& value)
{
// Get integer type
if(PyInt_Check(in)){
long unsigned int long_uint = PyLong_AsLong(in);
value = static_cast<std::size_t>(long_uint);
return true;
}else{
return false;
}
}
}
%fragment(SWIG_From_frag(std::size_t));
%fragment("SWIG_AsVal_std_size_t");
</code></pre>
<p>This was imported in <strong>d_swig_vec_std_size.i</strong>, but it does not compile.</p>
<pre><code>%module d_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
%include "std_size_t.i"
%template(VecSize) std::vector<std::size_t>;
%include "class_vec_std_size.hpp"
</code></pre>
<p>Here I get this.</p>
<pre><code>swig -c++ -python d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
d_swig_vec_std_size_wrap.cxx: In static member function 'static int swig::traits_asval<long unsigned int>::asval(PyObject*, swig::traits_asval::value_type*)':
d_swig_vec_std_size_wrap.cxx:4311: error: invalid initialization of reference of type 'size_t&' from expression of type 'swig::traits_asval::value_type*'
d_swig_vec_std_size_wrap.cxx:4288: error: in passing argument 2 of 'bool SWIG_AsVal_std_size_t(PyObject*, size_t&)'
make: *** [d] Error 1
</code></pre>
<h2>makefile</h2>
<pre><code>PYTHON=/public/users/paul/dev/software/Python-2.7.11
all: a b c d
a:
swig -c++ -python a_swig_vec_std_size.i
g++ -fpic -c a_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared a_swig_vec_std_size_wrap.o -o _a_swig_vec_std_size.so
b:
swig -c++ -python b_swig_vec_std_size.i
g++ -fpic -c b_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared b_swig_vec_std_size_wrap.o -o _b_swig_vec_std_size.so
c:
swig -c++ -python c_swig_vec_std_size.i
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared c_swig_vec_std_size_wrap.o -o _c_swig_vec_std_size.so
d:
swig -c++ -python d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I${PYTHON}/Include -I${PYTHON}
g++ -g -fpic -shared d_swig_vec_std_size_wrap.o -o _d_swig_vec_std_size.so
clean: clean_a clean_b clean_c clean_d
clean_a:
rm a_swig_vec_std_size_wrap.cxx a_swig_vec_std_size.py a_swig_vec_std_size_wrap.o _a_swig_vec_std_size.so
clean_b:
rm b_swig_vec_std_size_wrap.cxx b_swig_vec_std_size.py b_swig_vec_std_size_wrap.o _b_swig_vec_std_size.so
clean_c:
rm c_swig_vec_std_size_wrap.cxx c_swig_vec_std_size.py c_swig_vec_std_size_wrap.o _c_swig_vec_std_size.so
clean_d:
rm d_swig_vec_std_size_wrap.cxx d_swig_vec_std_size.py d_swig_vec_std_size_wrap.o _d_swig_vec_std_size.so
</code></pre>
<h2>Program Versions</h2>
<pre><code>python version: Python 2.7.11
g++ version: g++ (GCC) 4.4.7
swig version: SWIG Version 1.3.40
</code></pre>
<p>Using a newer swig version (swig-3.0.10) gives the same result for me.</p>
<h1>Summary</h1>
<p>I suspect the answer may be along the lines of interface <strong>d</strong> somewhere, but I have had no luck so far. There could be a problem with how <code>std::size_t</code> is implemented differently depending on the architecture instead of having a fixed size. In any case, I would expect SWIG to be able to handle it. Am I missing something? I would like to find a solution that does not involve making changes to the C++ library (such as encapsulating std::size_t in a <code>struct</code> or using <code>int</code> instead).</p>
<h2>Trying Jens Monk's Solution</h2>
<pre><code>namespace std {
%template(VecSize) vector<size_t>;
}
</code></pre>
<p>I get this:</p>
<pre><code>[paul@login-0-0 stack_swig]$ make clean
rm a_swig_vec_std_size_wrap.cxx a_swig_vec_std_size.py a_swig_vec_std_size_wrap.o _a_swig_vec_std_size.so
rm b_swig_vec_std_size_wrap.cxx b_swig_vec_std_size.py b_swig_vec_std_size_wrap.o _b_swig_vec_std_size.so
rm c_swig_vec_std_size_wrap.cxx c_swig_vec_std_size.py c_swig_vec_std_size_wrap.o _c_swig_vec_std_size.so
rm d_swig_vec_std_size_wrap.cxx d_swig_vec_std_size.py d_swig_vec_std_size_wrap.o _d_swig_vec_std_size.so
[paul@login-0-0 stack_swig]$ make
swig -c++ -python a_swig_vec_std_size.i
g++ -fpic -c a_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared a_swig_vec_std_size_wrap.o -o _a_swig_vec_std_size.so
swig -c++ -python b_swig_vec_std_size.i
g++ -fpic -c b_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared b_swig_vec_std_size_wrap.o -o _b_swig_vec_std_size.so
swig -c++ -python c_swig_vec_std_size.i
g++ -fpic -c c_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared c_swig_vec_std_size_wrap.o -o _c_swig_vec_std_size.so
swig -c++ -python -I/public/users/paul/dev/software/swig-3.0.10 d_swig_vec_std_size.i
g++ -fpic -c d_swig_vec_std_size_wrap.cxx -I/public/users/paul/dev/software/Python-2.7.11/Include -I/public/users/paul/dev/software/Python-2.7.11
g++ -g -fpic -shared d_swig_vec_std_size_wrap.o -o _d_swig_vec_std_size.so
[paul@login-0-0 stack_swig]$ python
Python 2.7.11 (default, May 7 2016, 23:37:19)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from d_swig_vec_std_size import StdSizeVec
>>> ssv = StdSizeVec()
>>> vals = ssv.getValues()
>>> vals
<Swig Object of type 'std::vector< std::size_t,std::allocator< std::size_t > > *' at 0x2aba7dd8bd80>
>>> ints - ssv.getInts()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ints' is not defined
>>> ints = ssv.getInts()
>>> ints
(1, 2)
>>> exit()
swig/python detected a memory leak of type 'std::vector< std::size_t,std::allocator< std::size_t > > *', no destructor found.
[paul@login-0-0 stack_swig]$ cat d_swig_vec_std_size.i
%module d_swig_vec_std_size
%{
#include "class_vec_std_size.hpp"
%}
%include "std_vector.i"
%template(VecInt) std::vector<int>;
%include "std_size_t.i"
namespace std {
%template(VecSize) vector<size_t>;
}
%include "class_vec_std_size.hpp"
</code></pre>
| 6 | 2016-07-30T13:04:31Z | 38,828,544 | <p>Try defining size_t for swig as shown here - <a href="http://www.swig.org/Doc1.3/SWIG.html#SWIG_nn20" rel="nofollow">http://www.swig.org/Doc1.3/SWIG.html#SWIG_nn20</a></p>
<pre><code>%inline %{
typedef long unsigned int size_t;
%}
namespace std {
%template(VecSize) vector<size_t>;
}
</code></pre>
| 1 | 2016-08-08T11:53:23Z | [
"python",
"c++",
"vector",
"swig"
] |
How to compile kivy and python files to apk | 38,674,286 | <p>I want to convert my Python and Kivy files into a signed and an unsigned APK. I work on Windows, but for compiling I use python-for-android in Ubuntu in VMware, after installing all necessary modules like Kivy, python-for-android, and Android Studio.</p>
<p>When I compile, it shows an error that the SDK was not found.
Is there any option for generating the APK on Windows, Ubuntu, or Linux?
I have also heard about Buildozer, and that it also works on Windows.
Please suggest something on this; I'm new to Kivy.
Thanks.
<a href="http://i.stack.imgur.com/WmT3P.png" rel="nofollow">1</a></p>
| 0 | 2016-07-30T13:08:08Z | 38,676,403 | <p>I suggest using Buildozer instead of p4a; Buildozer uses p4a internally. It can also automatically download the specified Android SDK for you.</p>
<p>Install it using pip: <code>sudo pip install buildozer</code></p>
<p>Once you have it installed, go to your project directory and type:</p>
<pre><code>buildozer init
# edit the buildozer.spec, then
buildozer android_new debug deploy run
</code></pre>
<p>Find more info <a href="https://github.com/kivy/buildozer" rel="nofollow">on GitHub</a>.</p>
| 1 | 2016-07-30T17:01:17Z | [
"android",
"python",
"apk",
"kivy"
] |
Store plotted data for later use | 38,674,287 | <p>I do much (practically all) of my data analysis in Jupyter/iPython-notebooks. For convenience, I obviously also plot my data in those notebooks using <code>matplotlib</code>/<code>pyplot</code>.
Some of those plots I need to recreate externally later on, for example to use them in <code>latex</code>. For this I save the corresponding data as text files to the hard drive. Right now, I manually create a numpy array by stacking all the data needed for the plot, and save this using <code>numpy.savetxt</code>.</p>
<p>What I would like to have is a way to save <em>all</em> data needed for a specific plot written to the same file in a (semi)automatic way, but I am at a loss when it comes to a smart way of doing so.</p>
<p>Thus I have two questions:</p>
<ul>
<li><p>Is it possible (and safe to do) to create something like a plot-memory object, that stores all data plotted per figure, and has a method similar to <code>Memoryobject.save_plot_to_file(figname)</code>? This object would need to know which figure I am working on, so I would need to create a layer above matplotlib, or get this information from the matplotlib objects.</p></li>
<li><p>Is there a simpler way? The python universe is huge, and I do not know half of it. Maybe something like this already exists?</p></li>
</ul>
<p><strong>Edit</strong>: Clarification: I do <em>not</em> want to save the figure object. What I want to do is something like this:</p>
<pre><code>fig = plt.figure()
fig.plot(x1, y1)
fig.plot(x2, y2 + y3)
# and at a later point
arrays = get_data_from_plot(fig)
data = process(arrays)
np.savetxt('textfile', data)
</code></pre>
| 0 | 2016-07-30T13:08:17Z | 38,674,397 | <p>You could pickle the object (using the <code>cPickle</code> module). See this question <a href="http://stackoverflow.com/questions/7290370/store-and-reload-matplotlib-pyplot-object">here</a>.</p>
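<p>Alternatively, the <code>get_data_from_plot</code> helper sketched in the question can be approximated with matplotlib's public line API. This is only a sketch, under the assumption that the figure contains plain <code>plot()</code> lines (images, scatter collections, etc. would need extra handling):</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is required
import matplotlib.pyplot as plt
import numpy as np

def get_data_from_plot(fig):
    """Collect the (x, y) data of every line in every axes of a figure."""
    arrays = []
    for ax in fig.get_axes():
        for line in ax.get_lines():
            arrays.append(line.get_xydata())  # array of shape (n_points, 2)
    return arrays

x = np.arange(5)
fig, ax = plt.subplots()
ax.plot(x, x ** 2)
ax.plot(x, x + 3)
data = get_data_from_plot(fig)
np.savetxt("plot_data.txt", np.hstack(data))  # one (x, y) column pair per line
```

<p>Because this walks the live figure object, it can be called at any later point in the notebook session, as long as the figure is still referenced.</p>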
| 0 | 2016-07-30T13:22:08Z | [
"python",
"matplotlib",
"ipython",
"jupyter-notebook"
] |
Missing dll files when using pyinstaller | 38,674,400 | <p>Good day!</p>
<p>I'm using python 3.5.2 with qt5, pyqt5 and sip14.8.
I'm also using the latest pyinstaller branch (3.3.dev0+g501ad40).</p>
<p>I'm trying to create an exe file for a basic hello world program.</p>
<pre><code>from PyQt5 import QtWidgets
import sys
class newPingDialog(QtWidgets.QMainWindow):
def __init__(self):
super(newPingDialog, self).__init__()
self.setGeometry(50, 50, 500, 300)
self.setWindowTitle("hello!")
self.show()
app = QtWidgets.QApplication(sys.argv)
GUI = newPingDialog()
sys.exit(app.exec_())
</code></pre>
<p>At first, I used to get some errors regarding crt-msi. So I've reinstalled SDK and c++ runtime and added them to my environment.
But now I keep getting errors about missing dlls (qsvg, Qt5PrintSupport)</p>
<pre><code>6296 WARNING: lib not found: Qt5Svg.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib\site-pac
kages\PyQt5\Qt\plugins\imageformats\qsvg.dll
6584 WARNING: lib not found: Qt5Svg.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib\site-pac
kages\PyQt5\Qt\plugins\iconengines\qsvgicon.dll
6992 WARNING: lib not found: Qt5PrintSupport.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib
\site-packages\PyQt5\Qt\plugins\printsupport\windowsprintersupport.dll
7535 WARNING: lib not found: Qt5PrintSupport.dll dependency of c:\users\me\appdata\local\programs\python\python35\lib
\site-packages\PyQt5\QtPrintSupport.pyd
8245 INFO: Looking for eggs
8245 INFO: Using Python library c:\users\me\appdata\local\programs\python\python35\python35.dll
8246 INFO: Found binding redirects:
</code></pre>
<p>I've checked, and both DLLs exist and are on my PATH. I also tried to manually add them to my dist folder, but it didn't help.</p>
<p>I'll highly appreciate any advice you might have!</p>
| 1 | 2016-07-30T13:22:15Z | 38,682,416 | <p>This may be more of a workaround, and PyInstaller might need fixing.</p>
<p>I found out that a <code>--paths</code> argument pointing to the directory containing <em>Qt5Core.dll</em>, <em>Qt5Gui.dll</em>, etc. helped:</p>
<pre><code>pyinstaller --paths C:\Python35\Lib\site-packages\PyQt5\Qt\bin hello.py
</code></pre>
| 3 | 2016-07-31T09:04:04Z | [
"python",
"qt",
"dll",
"pyinstaller"
] |
Problems with python + json vs. curl | 38,674,459 | <p>So when I run the Python code, the server (Google) gives me a different response than when I run the curl command. Can someone tell me where I'm wrong, please?</p>
<p>code:</p>
<pre><code>import urllib2, simplejson
def MapsWIFI(card):
req = urllib2.Request("https://www.googleapis.com/geolocation/v1/geolocate?key=AI...")
jWifi = """
{
"wifiAccessPoints": [
{
"macAddress": "64:D1:A3:0A:11:65",
"channel": 6,
},
... #some AP here
]
}
"""
print jWifi
req.add_header("Content-Type", "application/json")
jWifiReport = urllib2.urlopen(req,simplejson.dumps(jWifi)).read()
print jWifiReport
APdetected = str(len(wifiCell))
mapsDict = simplejson.loads(jWifiReport)
location = str(mapsDict.get("location",{}))[1:-1]
accuracy = "Accuracy: "+str(mapsDict.get("accuracy",{}))[1:-1]
mapMe = "|---"+location.split(",")[0]+"\n|---"+location.split(",")[1][1:]+"\n|---$
return mapMe
MapsWIFI("wlp8s0")
</code></pre>
<p>And the command is:</p>
<pre><code>curl -d @file2.json -H "Content-Type: application/json" -i "https://www.googleapis.com/geolocation/v1/geolocate?key=AI..."
</code></pre>
<p>where file2.json contains exactly jWifi in that format.
The problem is that, as said, the location returned by the code is different from the location returned by curl. I don't get an error code, so I think that the syntax is correct.</p>
| 1 | 2016-07-30T13:28:25Z | 38,674,495 | <p>The data is <em>already</em> a JSON encoded string, you don't want to encode it twice.</p>
<p>Pass it in <em>without</em> encoding it again:</p>
<pre><code>jWifiReport = urllib2.urlopen(req, jWifi).read()
</code></pre>
<p>You only need to encode if you have a Python data structure (a dictionary in this case).</p>
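<p>The double-encoding mistake is easy to demonstrate with the <code>json</code> module alone. This is an illustrative sketch; the payload is abbreviated from the question:</p>

```python
import json

payload = '{"wifiAccessPoints": [{"macAddress": "64:D1:A3:0A:11:65", "channel": 6}]}'

double = json.dumps(payload)        # encodes the *string*, adding quotes/escapes
assert isinstance(json.loads(payload), dict)  # sent as-is: a JSON object
assert isinstance(json.loads(double), str)    # sent re-encoded: a JSON string
```

<p>So <code>urlopen(req, jWifi)</code> posts a JSON object, while <code>urlopen(req, simplejson.dumps(jWifi))</code> posts a JSON string literal; the API then presumably falls back to IP-based geolocation, which would explain the different location.</p>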
| 2 | 2016-07-30T13:33:16Z | [
"python",
"json",
"google-maps",
"curl",
"google-geolocation"
] |
InterfaceError for a custom JSONField for Django | 38,674,469 | <p>I am trying to build a custom JSON Field for Django projects that support MySQL. This is my <strong>models</strong>:</p>
<pre><code>from __future__ import unicode_literals
from django.db import models
from django.db import models
from django.core.serializers.json import DjangoJSONEncoder
import json
name1 = 'name1'
name2 = 'name2'
class JSONField(models.TextField):
"""JSONField is a generic textfield that neatly serializes/unserializes
JSON objects seamlessly"""
# Used so to_python() is called
__metaclass__ = models.SubfieldBase
def to_python(self, value):
"""Convert our string value to JSON after we load it from the DB"""
if value == "":
return None
try:
if isinstance(value, basestring):
return json.loads(value)
except ValueError:
pass
return value
def get_db_prep_save(self, value, connection):
"""Convert our JSON object to a string before we save"""
if value == "":
return None
if isinstance(value, dict):
value = json.dumps(value, cls=DjangoJSONEncoder)
return super(JSONField, self).get_db_prep_save(value, connection)
# Articles / Content
class Content(models.Model):
title = models.CharField(max_length=255)
body = models.TextField()
data = JSONField(blank=True, null=True)
def __unicode__(self):
return self.title
def save(self, *args, **kwargs):
self.data = {
name1 : {
"image_url" : 'https://photosite.com/image1.jpg',
"views" : 0
},
name2 : {
"image_url" : 'https://photosite.com/image2.jpg',
"views" : 0
}
}
super(Content, self).save(*args, **kwargs)
</code></pre>
<p>Please notice the custom save method for the Content model. When I try to save a new Content object, I get this error:</p>
<p><strong>InterfaceError at /admin/myapp/content/add/</strong></p>
<p>Error binding parameter 2 - probably unsupported type.</p>
<p>What exactly am I doing wrong? What does the error even mean? It says 'probably', as if it's not even sure whether there is an error. Any help?</p>
<p>If you want the full traceback, you can find it here:
<a href="http://pastebin.com/B15hZpbu" rel="nofollow">http://pastebin.com/B15hZpbu</a></p>
| 1 | 2016-07-30T13:29:23Z | 38,674,937 | <p>This code will produce an undefined-variable error before you call your custom save method.</p>
<pre><code>data = {
name1 : {
"image_url" : 'https://photosite.com/image1.jpg',
"views" : 0
},
name2 : {
"image_url" : 'https://photosite.com/image2.jpg',
"views" : 0
}
}
</code></pre>
<p>name1 and name2 are clearly not defined in your code.</p>
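<p>Separately from the name1/name2 point, the InterfaceError text itself comes from the database driver being handed a Python dict that it cannot bind as a query parameter (the pasted traceback shows sqlite3, even though the project targets MySQL). A minimal reproduction outside Django, as an illustration:</p>

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE content (data TEXT)")
payload = {"name1": {"image_url": "https://photosite.com/image1.jpg", "views": 0}}

try:
    # Binding the dict directly fails; older Pythons report exactly
    # "InterfaceError: Error binding parameter 0 - probably unsupported type."
    conn.execute("INSERT INTO content (data) VALUES (?)", (payload,))
    bound_ok = True
except sqlite3.Error:
    bound_ok = False

# Serializing first, as get_db_prep_save() is supposed to do, works fine.
conn.execute("INSERT INTO content (data) VALUES (?)", (json.dumps(payload),))
```

<p>So whatever the custom field hands to the driver must already be a string; if the dict reaches the driver unserialized, this is exactly the error you get.</p>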
| 1 | 2016-07-30T14:19:56Z | [
"python",
"json",
"django",
"django-models",
"django-jsonfield"
] |
I/O Warning Non ASCII found | 38,674,480 | <p>This is my first question...</p>
<p>I just checked this <a href="http://stackoverflow.com/questions/16270174/how-to-deal-with-non-ascii-warning-when-performing-save-on-python-code-edited-wi">How to deal with Non-ASCII Warning when performing Save on Python code edited with IDLE?</a>
but didn't find the solution.</p>
<p>I wrote a very small piece of code with IDLE and saved it. When I call the file from the shell with Python 3, this message appears.</p>
<p>Sorry for my English, and sorry if I asked this question the wrong way.<a href="http://i.stack.imgur.com/TC79o.png" rel="nofollow">enter image description here</a> </p>
| -1 | 2016-07-30T13:31:35Z | 38,676,657 | <p>The image already mentions what to do.</p>
<p>You need to add this as the first line of your code file: "<strong># -*- coding: utf-8 -*-</strong>"</p>
<pre><code># -*- coding: utf-8 -*-
</code></pre>
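<p>A minimal illustrative file (per PEP 263 the declaration must appear on the first or second line; the variable here is just an example, not from the question):</p>

```python
# -*- coding: utf-8 -*-
# With this declaration on the first (or second) line, Python 2's parser
# accepts the non-ASCII characters below; Python 3 assumes UTF-8 by default.
greeting = u"héllo"
print(greeting)
```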
| 0 | 2016-07-30T17:31:15Z | [
"python",
"syntax"
] |
Replace leading zeros with spaces | 38,674,494 | <p>I have a text file of multiple records. Each record has a field which has some number of leading zeros that I need to replace with that number of spaces. A record will look like this:</p>
<pre><code>A206 000001204 X4609
</code></pre>
<p>I need the record to look like this:</p>
<pre><code>A206 1204 X4609
</code></pre>
<p>I'm extremely unfamiliar with regex but the following regex seems to find the matches that I need:</p>
<pre><code>\b0+
</code></pre>
<p>However, I have no idea how to do the replacement. A ReplaceAll for Notepad++ would be awesome but I can also create a quick program in C#, Powershell, or Python if needed. Can anyone give me some pointers on the regex for this?</p>
| -2 | 2016-07-30T13:33:06Z | 38,674,521 | <p>Yes, <code>\b0+</code> would probably work.</p>
<p>Here using the <a href="https://msdn.microsoft.com/en-us/library/ht1sxswy(v=vs.110).aspx" rel="nofollow"><code>Regex.Replace()</code> method</a> in <code>C#</code>:</p>
<pre><code>using System.Text.RegularExpressions;
Regex.Replace(inputString, @"\b0+", m => "".PadLeft(m.Value.Length,' '));
</code></pre>
<p>The last argument to <code>Replace()</code> is a simple lambda function that returns a string of the same length as the number of matched <code>0</code>s, but consisting only of spaces.</p>
<hr>
<p>You can do the same in <code>PowerShell</code>, substituting a <code>scriptblock</code> for the lambda function:</p>
<pre><code>PS C:\> $inputString = 'A206 000001204 X4609'
PS C:\> [regex]::Replace($inputString, '\b0+', {param($m) ' ' * $m.Value.Length})
A206 1204 X4609
</code></pre>
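<p>Since the question mentions Python as an option too, here is the same replacement-callback idea as a sketch using only the standard library <code>re</code> module:</p>

```python
import re

def blank_leading_zeros(record):
    """Replace each run of zeros at a word boundary with as many spaces."""
    return re.sub(r'\b0+', lambda m: ' ' * len(m.group(0)), record)

print(blank_leading_zeros('A206 000001204 X4609'))
```

<p>The lambda plays the same role as the C# lambda and the PowerShell scriptblock: it is called once per match and emits one space per matched zero, so the field width is preserved.</p>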
| 6 | 2016-07-30T13:35:13Z | [
"c#",
"python",
"regex",
"powershell",
"notepad++"
] |
Replace leading zeros with spaces | 38,674,494 | <p>I have a text file of multiple records. Each record has a field which has some number of leading zeros that I need to replace with that number of spaces. A record will look like this:</p>
<pre><code>A206 000001204 X4609
</code></pre>
<p>I need the record to look like this:</p>
<pre><code>A206 1204 X4609
</code></pre>
<p>I'm extremely unfamiliar with regex but the following regex seems to find the matches that I need:</p>
<pre><code>\b0+
</code></pre>
<p>However, I have no idea how to do the replacement. A ReplaceAll for Notepad++ would be awesome but I can also create a quick program in C#, Powershell, or Python if needed. Can anyone give me some pointers on the regex for this?</p>
| -2 | 2016-07-30T13:33:06Z | 38,674,539 | <p>Does this suffice?</p>
<pre><code>while (dataString.Contains(" 0")) // while data contains a zero after a space
    dataString = dataString.Replace(" 0", "  "); // Replace with two spaces
</code></pre>
<p>Though this doesn't use regex. </p>
<p>I hope this helps.</p>
| 3 | 2016-07-30T13:37:39Z | [
"c#",
"python",
"regex",
"powershell",
"notepad++"
] |
Replace leading zeros with spaces | 38,674,494 | <p>I have a text file of multiple records. Each record has a field which has some number of leading zeros that I need to replace with that number of spaces. A record will look like this:</p>
<pre><code>A206 000001204 X4609
</code></pre>
<p>I need the record to look like this:</p>
<pre><code>A206 1204 X4609
</code></pre>
<p>I'm extremely unfamiliar with regex but the following regex seems to find the matches that I need:</p>
<pre><code>\b0+
</code></pre>
<p>However, I have no idea how to do the replacement. A ReplaceAll for Notepad++ would be awesome but I can also create a quick program in C#, Powershell, or Python if needed. Can anyone give me some pointers on the regex for this?</p>
| -2 | 2016-07-30T13:33:06Z | 38,674,859 | <p>Using Npp:</p>
<ul>
<li><kbd>Ctrl</kbd>+<kbd>H</kbd></li>
<li>Find what: <code>\b0</code></li>
<li>Replace with: <code> </code> (a space)</li>
<li><kbd>Replace All</kbd></li>
</ul>
| 1 | 2016-07-30T14:11:56Z | [
"c#",
"python",
"regex",
"powershell",
"notepad++"
] |
Replace leading zeros with spaces | 38,674,494 | <p>I have a text file of multiple records. Each record has a field which has some number of leading zeros that I need to replace with that number of spaces. A record will look like this:</p>
<pre><code>A206 000001204 X4609
</code></pre>
<p>I need the record to look like this:</p>
<pre><code>A206 1204 X4609
</code></pre>
<p>I'm extremely unfamiliar with regex but the following regex seems to find the matches that I need:</p>
<pre><code>\b0+
</code></pre>
<p>However, I have no idea how to do the replacement. A ReplaceAll for Notepad++ would be awesome but I can also create a quick program in C#, Powershell, or Python if needed. Can anyone give me some pointers on the regex for this?</p>
| -2 | 2016-07-30T13:33:06Z | 38,676,466 | <p>As an alternative to <a href="http://stackoverflow.com/a/38674521/1630171">Mathias'</a> lambda expression solution you could also use a more "conventional" approach like this:</p>
<pre><code>$str = 'A206 000001204 X4609'
$re = '\b0+'
if ($str -match $re) {
$str -replace $re, (' ' * $matches[0].Length)
}
</code></pre>
| 0 | 2016-07-30T17:07:42Z | [
"c#",
"python",
"regex",
"powershell",
"notepad++"
] |
Select values of one array based on a boolean expression applied to another array | 38,674,562 | <p>Starting with the following array</p>
<pre><code>array([ nan, nan, nan, 1., nan, nan, 0., nan, nan])
</code></pre>
<p>which is generated like so:</p>
<pre><code>import numpy as np
row = np.array([ np.nan, np.nan, np.nan, 1., np.nan, np.nan, 0., np.nan, np.nan])
</code></pre>
<p>I'd like to get the indices of the sorted array and then exclude the <code>nans</code>. In this case, I'd like to get <code>[6,3]</code>. </p>
<p>I've come up with the following way to do this:</p>
<pre><code>vals = np.sort(row)
inds = np.argsort(row)
def select_index_by_value(indices, values):
selected_indices = []
for i in range(len(indices)):
if not np.isnan(values[i]):
selected_indices.append(indices[i])
return selected_indices
selected_inds = select_index_by_value(inds, vals)
</code></pre>
<p>Now <code>selected_inds</code> is <code>[6,3]</code>. However, this seems like quite a few lines of code to achieve something simple. Is there perhaps a shorter way of doing this? </p>
| 1 | 2016-07-30T13:40:00Z | 38,674,582 | <p>You could do something like this -</p>
<pre><code># Store non-NaN indices
idx = np.where(~np.isnan(row))[0]
# Select non-NaN elements, perform argsort and use those argsort
# indices to re-order non-NaN indices as final output
out = idx[row[idx].argsort()]
</code></pre>
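<p>Walking through the two steps on the sample row from the question makes the mechanics clear (a quick illustrative run):</p>

```python
import numpy as np

row = np.array([np.nan, np.nan, np.nan, 1., np.nan, np.nan, 0., np.nan, np.nan])

idx = np.where(~np.isnan(row))[0]   # indices of the non-NaN entries: [3, 6]
vals = row[idx]                     # their values:                   [1., 0.]
order = vals.argsort()              # sort order of those values:     [1, 0]
out = idx[order]                    # non-NaN indices, value-sorted:  [6, 3]
print(out)
```

<p>The key point is that <code>argsort</code> runs only on the NaN-free values, so no special NaN ordering rules get involved.</p>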
| 3 | 2016-07-30T13:41:54Z | [
"python",
"arrays",
"numpy"
] |
Select values of one array based on a boolean expression applied to another array | 38,674,562 | <p>Starting with the following array</p>
<pre><code>array([ nan, nan, nan, 1., nan, nan, 0., nan, nan])
</code></pre>
<p>which is generated like so:</p>
<pre><code>import numpy as np
row = np.array([ np.nan, np.nan, np.nan, 1., np.nan, np.nan, 0., np.nan, np.nan])
</code></pre>
<p>I'd like to get the indices of the sorted array and then exclude the <code>nans</code>. In this case, I'd like to get <code>[6,3]</code>. </p>
<p>I've come up with the following way to do this:</p>
<pre><code>vals = np.sort(row)
inds = np.argsort(row)
def select_index_by_value(indices, values):
selected_indices = []
for i in range(len(indices)):
if not np.isnan(values[i]):
selected_indices.append(indices[i])
return selected_indices
selected_inds = select_index_by_value(inds, vals)
</code></pre>
<p>Now <code>selected_inds</code> is <code>[6,3]</code>. However, this seems like quite a few lines of code to achieve something simple. Is there perhaps a shorter way of doing this? </p>
| 1 | 2016-07-30T13:40:00Z | 38,674,628 | <p>Another option:</p>
<pre><code>row.argsort()[~np.isnan(np.sort(row))]
# array([6, 3])
</code></pre>
| 1 | 2016-07-30T13:47:02Z | [
"python",
"arrays",
"numpy"
] |
Select values of one array based on a boolean expression applied to another array | 38,674,562 | <p>Starting with the following array</p>
<pre><code>array([ nan, nan, nan, 1., nan, nan, 0., nan, nan])
</code></pre>
<p>which is generated like so:</p>
<pre><code>import numpy as np
row = np.array([ np.nan, np.nan, np.nan, 1., np.nan, np.nan, 0., np.nan, np.nan])
</code></pre>
<p>I'd like to get the indices of the sorted array and then exclude the <code>nans</code>. In this case, I'd like to get <code>[6,3]</code>. </p>
<p>I've come up with the following way to do this:</p>
<pre><code>vals = np.sort(row)
inds = np.argsort(row)
def select_index_by_value(indices, values):
selected_indices = []
for i in range(len(indices)):
if not np.isnan(values[i]):
selected_indices.append(indices[i])
return selected_indices
selected_inds = select_index_by_value(inds, vals)
</code></pre>
<p>Now <code>selected_inds</code> is <code>[6,3]</code>. However, this seems like quite a few lines of code to achieve something simple. Is there perhaps a shorter way of doing this? </p>
| 1 | 2016-07-30T13:40:00Z | 38,703,141 | <p>There is another, faster solution (for the OP's data).</p>
<p>Psidom's Solution</p>
<pre><code>%timeit row.argsort()[~np.isnan(np.sort(row))]
The slowest run took 31.23 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8.16 µs per loop
</code></pre>
<p>Divakar's Solution</p>
<pre><code>%timeit idx = np.where(~np.isnan(row))[0]; idx[row[idx].argsort()]
The slowest run took 35.11 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 4.73 µs per loop
</code></pre>
<p>Based on Divakar's Solution</p>
<pre><code>%timeit np.where(~np.isnan(row))[0][::-1]
The slowest run took 9.42 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.86 µs per loop
</code></pre>
<p>I think this works because <code>np.where(~np.isnan(row))</code> retains order. </p>
| 0 | 2016-08-01T15:53:14Z | [
"python",
"arrays",
"numpy"
] |
Can not start elasticsearch as a service in ubuntu 16.04 | 38,674,711 | <p>I have recently upgraded my machine from Ubuntu <code>14.04</code> to <code>16.04</code>. I am facing a problem using <code>elasticsearch</code> as a service. I <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html" rel="nofollow">installed</a> <code>elasticsearch</code> using:</p>
<pre><code>sudo apt-get install elasticsearch
</code></pre>
<p>Now <code>sudo service elasticsearch status</code> command shows me this result:</p>
<pre><code>elasticsearch.service - LSB: Starts elasticsearch
Loaded: loaded (/etc/init.d/elasticsearch; bad; vendor preset: enabled)
Active: active (exited) since Sat 2016-07-30 18:28:13 BDT; 1h 19min ago
Docs: man:systemd-sysv-generator(8)
Main PID: 7988 (code=exited, status=1/FAILURE)
CGroup: /system.slice/elasticsearch.service
Jul 30 18:28:13 dimik elasticsearch[10266]: [warning] /etc/init.d/elasticsearch: No java runtime was found
Jul 30 18:28:13 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 18:28:46 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 18:35:30 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 19:04:36 dimik systemd[1]: Started A search engine.
Jul 30 19:07:48 dimik systemd[1]: Started A search engine.
Jul 30 19:27:01 dimik systemd[1]: Started A search engine.
Jul 30 19:27:51 dimik systemd[1]: Started A search engine.
Jul 30 19:28:54 dimik systemd[1]: Started A search engine.
Jul 30 19:29:18 dimik systemd[1]: Started LSB: Starts elasticsearch.
</code></pre>
<p>Although Java is installed in my machine and I can start the server using this command.</p>
<pre><code>sudo /usr/share/elasticsearch/bin/elasticsearch
</code></pre>
<p>I am kind of stuck here. Any help will be appreciated. </p>
<p><strong>Edit</strong></p>
<p>After setting up <code>JAVA_HOME</code> for root, the error becomes:</p>
<pre><code>elasticsearch.service - LSB: Starts elasticsearch
Loaded: loaded (/etc/init.d/elasticsearch; bad; vendor preset: enabled)
Active: active (exited) since Sat 2016-07-30 18:28:13 BDT; 3h 32min ago
Docs: man:systemd-sysv-generator(8)
Main PID: 7988 (code=exited, status=1/FAILURE)
CGroup: /system.slice/elasticsearch.service
Jul 30 18:35:30 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 19:04:36 dimik systemd[1]: Started A search engine.
Jul 30 19:07:48 dimik systemd[1]: Started A search engine.
Jul 30 19:27:01 dimik systemd[1]: Started A search engine.
Jul 30 19:27:51 dimik systemd[1]: Started A search engine.
Jul 30 19:28:54 dimik systemd[1]: Started A search engine.
Jul 30 19:29:18 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 20:02:07 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 20:20:21 dimik systemd[1]: Started LSB: Starts elasticsearch.
Jul 30 21:59:21 dimik systemd[1]: Started LSB: Starts elasticsearch.
</code></pre>
| 3 | 2016-07-30T13:55:14Z | 38,676,036 | <p>I found the solution for this issue. The solution comes form this discussion thread- <a href="https://discuss.elastic.co/t/cant-start-elasticsearch-with-ubuntu-16-04/48730/9">Canât start elasticsearch with Ubuntu 16.04</a> on elastic's website.</p>
<blockquote>
<p>It seems that to get Elasticsearch to run on <code>16.04</code> you have to set <code>START_DAEMON</code> to true on <code>/etc/default/elasticsearch</code>. It comes commented out by default, and uncommenting it makes Elasticsearch start again just fine.</p>
<p>Be sure to use <code>systemctl restart</code> instead of just <code>start</code> because the
service is started right after installation, and apparently there's
some <code>socket/pidfile/something</code> that <code>systemd</code> keeps that must be released
before being able to start the service again.</p>
</blockquote>
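Translated into a concrete edit, the fix amounts to uncommenting that one line in `/etc/default/elasticsearch` and then restarting through systemd (`sudo systemctl restart elasticsearch`). A minimal sketch of the edit itself, shown on an in-memory sample rather than the real file (which needs root):

```python
import re

def enable_start_daemon(text):
    """Uncomment a '#START_DAEMON=true' line in an elasticsearch defaults file."""
    return re.sub(r"(?m)^#\s*START_DAEMON=true", "START_DAEMON=true", text)

sample = "# Run Elasticsearch as a daemon\n#START_DAEMON=true\n"
print(enable_start_daemon(sample))
# Afterwards, on the real machine: sudo systemctl restart elasticsearch
```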
| 5 | 2016-07-30T16:23:02Z | [
"java",
"python",
"elasticsearch",
"ubuntu-16.04"
] |
Pillow Attribute Error | 38,674,729 | <p>I want to set up an image stream from my Raspberry Pi to my server.</p>
<p>So I would like to set up a network stream as described in <a href="http://picamera.readthedocs.io/en/release-1.12/recipes1.html#streaming-capture" rel="nofollow">http://picamera.readthedocs.io/en/release-1.12/recipes1.html#streaming-capture</a>.</p>
<p>This worked well, but now I want to save the captured Image.</p>
<p>-> (modified the server script)</p>
<pre><code>import io
import socket
import struct
from PIL import Image
# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)
# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
while True:
# Read the length of the image as a 32-bit unsigned int. If the
# length is zero, quit the loop
image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
if not image_len:
break
# Construct a stream to hold the image data and read the image
# data from the connection
image_stream = io.BytesIO()
image_stream.write(connection.read(image_len))
# Rewind the stream, open it as an image with PIL and do some
# processing on it
image_stream.seek(0)
image = Image.open(image_stream)
print('Image is %dx%d' % image.size)
image.verify()
print('Image is verified')
im = Image.new("RGB", (640,480), "black") #the saving part
im = image.copy()
im.save("./img/test.jpg","JPEG")
finally:
connection.close()
server_socket.close()
</code></pre>
<p>But it returns me following errorcode:</p>
<pre><code>Traceback (most recent call last):
File "stream.py", line 33, in <module>
im = image.copy()
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 781, in copy
self.load()
File "/usr/lib/python2.7/dist-packages/PIL/ImageFile.py", line 172, in load
read = self.fp.read
AttributeError: 'NoneType' object has no attribute 'read'
</code></pre>
<p>How can I fix this?</p>
| 3 | 2016-07-30T13:57:17Z | 38,705,927 | <p>I don't have a raspberry-pi, but decided to see if I could reproduce the problem anyway. Also, for input I just created an image file on disk to eliminate all the socket stuff. Sure enough I got exactly the same error as you encountered. (<strong>Note:</strong> IMO you should have done this simplification yourself and posted an MCVE illustrating the problem (see <a href="https://stackoverflow.com/help/mcve"><em>How to create a Minimal, Complete, and Verifiable example</em></a> in the SO Help Center).</p>
<p>To get the problem to go away I added a call to the <code>image.load()</code> method immediately after the <code>Image.open()</code> statement and things started working. Not only was the error gone, but the output file seemed fine, too.</p>
<p>Here's my simple test code with the fix indicated:</p>
<pre><code>import io
import os
from PIL import Image
image_filename = 'pillow_test.jpg'
image_len = os.stat(image_filename).st_size
image_stream = io.BytesIO()
with open(image_filename, 'rb') as imagefile:
image_stream.write(imagefile.read(image_len))
image_stream.seek(0)
image = Image.open(image_stream)
image.load() ########## ADDED LINE ##########
print('Image is %dx%d' % image.size)
image.verify()
print('Image is verified')
im = Image.new("RGB", (640,480), "black") #the saving part
im = image.copy()
im.save("pillow_test_out.jpg","JPEG")
print('image written')
</code></pre>
<p>The clue was this passage from the <a href="http://pillow.readthedocs.io/en/3.3.x/reference/Image.html" rel="nofollow">pillow documentation</a> for the <code>PIL.Image.open()</code> function:</p>
<blockquote>
<p>This is a lazy operation; this function identifies the file, but the file
remains open and the actual image data is not read from the file until you try
to process the data <strong>(or call the load() method)</strong>.</p>
</blockquote>
<p>(emphasis mine) You would think the <code>image.verify()</code> would make this unnecessary since it seems like verifying the "file" would require loading the data in order to check it. My guess is this is likely a bug and you should report it.</p>
| 1 | 2016-08-01T18:42:42Z | [
"python",
"raspberry-pi",
"pillow"
] |
Matplotlib - plot line merging with plot frame | 38,674,784 | <p>How can I avoid the plot line merging with the plot frame in matplotlib? I attached a screenshot. As you can see, the purple line at the bottom is barely visible.</p>
<p><a href="http://i.stack.imgur.com/zf67i.png" rel="nofollow">Graph</a></p>
<p>I am plotting like this:</p>
<pre><code>plt.subplot2grid((4,4), (1, 0), colspan=2)
plt.plot(np.array(graph_time), np.array(graph1_data), label="graph1", color='#a42102')
plt.plot(np.array(graph_time), np.array(graph2_data), label="graph2", color='#da7701')
if len(errortime) > 0:
[plt.axvline(_x, linestyle="dashed", color='r', label='error' if not i else None, zorder=5) for i, _x in enumerate(errortime)]
lgd = plt.legend(ncol=2, loc='best')
lgd.get_frame().set_alpha(0)
plt.xticks(rotation=30)
</code></pre>
<p>Any help is much appreciated...thanks!</p>
| 0 | 2016-07-30T14:03:47Z | 38,675,289 | <p>The easiest thing to do would be to alter the axes. If you move the y-axis down to about -5 or even -1 it will show the whole line. Use the ylim function:</p>
<pre><code>ymin, ymax = plt.ylim()   # get the current limits
plt.ylim(ymin - 5, ymax)  # push the lower limit down by 5
</code></pre>
<p>This will move the y-axis down by 5. If you want to do this in such a way that it scales well to larger graphs you could do something like this:</p>
<pre><code>ymin, ymax = plt.ylim()
ymin = ymin - (ymax - ymin) * 0.1   # pad by 10% of the axis range
plt.ylim(ymin, ymax)
</code></pre>
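The padding arithmetic itself is plain Python and easy to check in isolation (a minimal sketch; `pad_ymin` is a made-up helper name, and the 0.1 factor is the same 10%-of-range margin used above):

```python
def pad_ymin(ymin, ymax, frac=0.1):
    """Return a new lower y-limit pushed down by `frac` of the axis range."""
    return ymin - (ymax - ymin) * frac

# e.g. an axis spanning 0..50 gets its floor moved down to -5
print(pad_ymin(0, 50))   # -5.0
```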
| 0 | 2016-07-30T14:58:24Z | [
"python",
"matplotlib"
] |
How and when to initialise configuration in Python? | 38,674,793 | <p>I'm getting pretty confused as to how and where to initialise application configuration in Python 3.</p>
<p>I have configuration that consists of application specific config (db connection strings, url endpoints etc.) and logging configuration.</p>
<p>Before my application performs its intended function I want to initialise the application and logging config.</p>
<p>After a few different attempts, I eventually ended up with something like the code below in my main entry module. It has the nice effect of all imports being grouped at the top of the file (<a href="https://www.python.org/dev/peps/pep-0008/#imports" rel="nofollow">https://www.python.org/dev/peps/pep-0008/#imports</a>), but it doesn't feel right since the config modules are being imported for side effects alone which is pretty non-intuitive.</p>
<pre><code>import config.app_config # sets up the app config
import config.logging_config # sets up the logging config
...
if __name__ == "__main__":
...
</code></pre>
<p><code>config.app_config</code> looks something like follows:</p>
<pre><code>_config = {
'DB_URL': None
}
_config['DB_URL'] = _get_db_url()
def db_url():
return _config['DB_URL']
def _get_db_url():
#somehow get the db url
</code></pre>
<p>and <code>config.logging_config</code> looks like:</p>
<pre><code>if not os.path.isdir('./logs'):
    os.makedirs('./logs')

config_path = 'logging_config.json'
if os.path.exists(config_path):
    with open(config_path, 'rt') as f:
        config = json.load(f)
    logging.config.dictConfig(config)
else:
    logging.basicConfig(level=log_level)
</code></pre>
<p>What is the common way to set up application configuration in Python? Bear in mind that I will have multiple applications, each using the <code>config.app_config</code> and <code>config.logging_config</code> modules, but with different connection strings, possibly read from a file.</p>
| 0 | 2016-07-30T14:05:06Z | 38,713,360 | <p>I ended up with a cut down version of the Django approach: <a href="https://github.com/django/django/blob/master/django/conf/__init__.py" rel="nofollow">https://github.com/django/django/blob/master/django/conf/__init__.py</a></p>
<p>It seems pretty elegant and has the nice benefit of working regardless of which module imports settings first.</p>
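For anyone curious what "a cut down version of the Django approach" might look like: the core idea is a proxy object that imports a settings module lazily on first attribute access, so it works regardless of which module touches settings first. A minimal sketch (the `AppSettings` class name and the `MYAPP_SETTINGS_MODULE` environment variable are my own inventions, not Django's):

```python
import importlib
import os


class AppSettings:
    """Lazily resolve attributes from a settings module on first access."""

    def __init__(self, env_var="MYAPP_SETTINGS_MODULE", default="settings_default"):
        self._env_var = env_var
        self._default = default
        self._wrapped = None

    def _setup(self):
        # Which settings module to use can be chosen per application/deployment.
        module_name = os.environ.get(self._env_var, self._default)
        self._wrapped = importlib.import_module(module_name)

    def __getattr__(self, name):
        # Only called for attributes not found normally, i.e. actual settings.
        if self._wrapped is None:
            self._setup()
        return getattr(self._wrapped, name)


settings = AppSettings()
# Any module can now import `settings` and read e.g. settings.DB_URL;
# the underlying settings module is imported on the first attribute read.
```

Each application then ships its own settings module (or points the environment variable at one), which covers the "same config modules, different connection strings" requirement.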
| 0 | 2016-08-02T06:40:21Z | [
"python"
] |
Python Pandas flattening a calendar with overlapping meetings to get actual time in meetings | 38,674,805 | <p>I have the details of my weekly calendar (obviously changed the subjects to protect the innocent) read into a pandas dataframe. One of my goals is to get the total time in meetings. I would like to have a dataframe indexed by date_range with hourly frequencies for the week showing how many total minutes I was in meetings during those hours. My first challenge is that meetings overlap, and as much as I would like to be in two places at once, I am surely not. I do hop out of one and into another though. So for example, rows at index 8 and 9 should be a total meeting time of 90 minutes and not 120 minutes, as would be the case if I just <code>df['Duration'].sum()</code>'d the column. How do I flatten the time periods in the dataframe to only count the overlap once? It seems like there is an answer somewhere using date_range and periods, but I can't wrap my head around it. Below is my dataframe df.</p>
<pre class="lang-none prettyprint-override"><code> Start End Duration Subject
0 07/04/16 10:30:00 07/04/16 11:00:00 30 Inspirational Poster Design Session
1 07/04/16 15:00:00 07/04/16 15:30:00 30 Corporate Speak Do's and Don'ts
2 07/04/16 09:00:00 07/04/16 12:00:00 180 Metrics or Matrix -Panel Discussion
3 07/04/16 13:30:00 07/04/16 15:00:00 90 "Do More with Less" kickoff party
4 07/05/16 09:00:00 07/05/16 10:00:00 60 Fiscal or Physical -Panel Discussion
5 07/05/16 14:00:00 07/05/16 14:30:00 30 "Why we can't have nice thing" training video
6 07/06/16 15:00:00 07/06/16 16:00:00 60 One-on-One with manager -Panel Discussion
7 07/06/16 09:00:00 07/06/16 10:00:00 60 Fireing for Performance leadership session
8 07/06/16 13:00:00 07/06/16 14:00:00 60 Birthday Cake in the conference room *MANDATORY*
9 07/06/16 12:30:00 07/06/16 13:30:00 60 Obligatory lunchtime meeting because it was the only time everyone had avaiable
</code></pre>
<p>Any help would be greatly appreciated.</p>
<p>EDIT:
This is the output I would be hoping for with the above data set.</p>
<pre class="lang-none prettyprint-override"><code>2016-07-04 00:00:00 0
2016-07-04 01:00:00 0
2016-07-04 02:00:00 0
2016-07-04 03:00:00 0
2016-07-04 04:00:00 0
2016-07-04 05:00:00 0
2016-07-04 06:00:00 0
2016-07-04 07:00:00 0
2016-07-04 08:00:00 0
2016-07-04 09:00:00 60
2016-07-04 10:00:00 60
2016-07-04 11:00:00 60
2016-07-04 12:00:00 0
2016-07-04 13:00:00 30
2016-07-04 14:00:00 60
2016-07-04 15:00:00 30
2016-07-04 16:00:00 0
2016-07-04 17:00:00 0
2016-07-04 18:00:00 0
2016-07-04 19:00:00 0
2016-07-04 20:00:00 0
2016-07-04 21:00:00 0
2016-07-04 22:00:00 0
2016-07-04 23:00:00 0
2016-07-05 00:00:00 0
2016-07-05 01:00:00 0
2016-07-05 02:00:00 0
2016-07-05 03:00:00 0
2016-07-05 04:00:00 0
2016-07-05 05:00:00 0
2016-07-05 06:00:00 0
2016-07-05 07:00:00 0
2016-07-05 08:00:00 0
2016-07-05 09:00:00 60
2016-07-05 10:00:00 0
2016-07-05 11:00:00 0
2016-07-05 12:00:00 0
2016-07-05 13:00:00 0
2016-07-05 14:00:00 30
2016-07-05 15:00:00 0
2016-07-05 16:00:00 0
2016-07-05 17:00:00 0
2016-07-05 18:00:00 0
2016-07-05 19:00:00 0
2016-07-05 20:00:00 0
2016-07-05 21:00:00 0
2016-07-05 22:00:00 0
2016-07-05 23:00:00 0
2016-07-06 00:00:00 0
2016-07-06 01:00:00 0
2016-07-06 02:00:00 0
2016-07-06 03:00:00 0
2016-07-06 04:00:00 0
2016-07-06 05:00:00 0
2016-07-06 06:00:00 0
2016-07-06 07:00:00 0
2016-07-06 08:00:00 0
2016-07-06 09:00:00 60
2016-07-06 10:00:00 0
2016-07-06 11:00:00 0
2016-07-06 12:00:00 30
2016-07-06 13:00:00 60
2016-07-06 14:00:00 0
2016-07-06 15:00:00 60
2016-07-06 16:00:00 0
2016-07-06 17:00:00 0
2016-07-06 18:00:00 0
2016-07-06 19:00:00 0
2016-07-06 20:00:00 0
2016-07-06 21:00:00 0
2016-07-06 22:00:00 0
2016-07-06 23:00:00 0
2016-07-07 00:00:00 0
</code></pre>
| 0 | 2016-07-30T14:06:21Z | 38,675,588 | <p>One possibility is creating a time series (<code>s</code> below) indexed by minute that keeps track of whether you are in a meeting during that minute or not, and then resampling that by hour. To match your desired output, you may adjust the start and end times of the index of <code>s</code>.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
Start,End,Duration,Subject
0,07/04/16 10:30:00,07/04/16 11:00:00,30,Inspirational Poster Design Session
1,07/04/16 15:00:00,07/04/16 15:30:00,30,Corporate Speak Do's and Don'ts
2,07/04/16 09:00:00,07/04/16 12:00:00,180,Metrics or Matrix -Panel Discussion
3,07/04/16 13:30:00,07/04/16 15:00:00,90,"Do More with Less" kickoff party
4,07/05/16 09:00:00,07/05/16 10:00:00,60,Fiscal or Physical -Panel Discussion
5,07/05/16 14:00:00,07/05/16 14:30:00,30,"Why we can't have nice thing" training video
6,07/06/16 15:00:00,07/06/16 16:00:00,60,One-on-One with manager -Panel Discussion
7,07/06/16 09:00:00,07/06/16 10:00:00,60,Fireing for Performance leadership session
8,07/06/16 13:00:00,07/06/16 14:00:00,60,Birthday Cake in the conference room *MANDATORY*
9,07/06/16 12:30:00,07/06/16 13:30:00,60,Obligatory lunchtime meeting because it was the only time everyone
''')
df = pd.read_csv(data, usecols=['Start', 'End', 'Subject'])
df['Start'] = pd.to_datetime(df['Start'])
df['End'] = pd.to_datetime(df['End'])
# Ranges in datetime indices include the right endpoint
tdel = pd.Timedelta('1min')
s = pd.Series(False, index=pd.date_range(start=df['Start'].min(),
end=df['End'].max()-tdel,
freq='min'))
for _, meeting in df.iterrows():
s[meeting['Start'] : meeting['End']-tdel] = True
result = s.resample('1H').sum().astype(int)
print(result)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>2016-07-04 09:00:00 60
2016-07-04 10:00:00 60
2016-07-04 11:00:00 60
2016-07-04 12:00:00 0
2016-07-04 13:00:00 30
2016-07-04 14:00:00 60
2016-07-04 15:00:00 30
2016-07-04 16:00:00 0
2016-07-04 17:00:00 0
2016-07-04 18:00:00 0
2016-07-04 19:00:00 0
2016-07-04 20:00:00 0
2016-07-04 21:00:00 0
2016-07-04 22:00:00 0
2016-07-04 23:00:00 0
2016-07-05 00:00:00 0
2016-07-05 01:00:00 0
2016-07-05 02:00:00 0
2016-07-05 03:00:00 0
2016-07-05 04:00:00 0
2016-07-05 05:00:00 0
2016-07-05 06:00:00 0
2016-07-05 07:00:00 0
2016-07-05 08:00:00 0
2016-07-05 09:00:00 60
2016-07-05 10:00:00 0
2016-07-05 11:00:00 0
2016-07-05 12:00:00 0
2016-07-05 13:00:00 0
2016-07-05 14:00:00 30
2016-07-05 15:00:00 0
2016-07-05 16:00:00 0
2016-07-05 17:00:00 0
2016-07-05 18:00:00 0
2016-07-05 19:00:00 0
2016-07-05 20:00:00 0
2016-07-05 21:00:00 0
2016-07-05 22:00:00 0
2016-07-05 23:00:00 0
2016-07-06 00:00:00 0
2016-07-06 01:00:00 0
2016-07-06 02:00:00 0
2016-07-06 03:00:00 0
2016-07-06 04:00:00 0
2016-07-06 05:00:00 0
2016-07-06 06:00:00 0
2016-07-06 07:00:00 0
2016-07-06 08:00:00 0
2016-07-06 09:00:00 60
2016-07-06 10:00:00 0
2016-07-06 11:00:00 0
2016-07-06 12:00:00 30
2016-07-06 13:00:00 60
2016-07-06 14:00:00 0
2016-07-06 15:00:00 60
Freq: H, dtype: int64
</code></pre>
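One sanity check worth running on the overlap handling: the two Wednesday lunchtime meetings (12:30-13:30 and 13:00-14:00) overlap by 30 minutes, and because the minute-resolution series marks each minute at most once, they total 90 minutes rather than 120. A minimal sketch of just that piece:

```python
import pandas as pd

# Two overlapping meetings from the question's Wednesday data (rows 8 and 9)
meetings = [("2016-07-06 12:30", "2016-07-06 13:30"),
            ("2016-07-06 13:00", "2016-07-06 14:00")]

tdel = pd.Timedelta("1min")
s = pd.Series(False, index=pd.date_range("2016-07-06 12:00",
                                         "2016-07-06 15:00", freq="min"))
for start, end in meetings:
    # Label slicing includes both endpoints, hence the "end minus one minute"
    s[pd.Timestamp(start):pd.Timestamp(end) - tdel] = True

print(int(s.sum()))  # 90, not 120 -- each overlapping minute counts once
```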
| 1 | 2016-07-30T15:29:50Z | [
"python",
"datetime",
"pandas",
"dataframe"
] |
split elements in array using python | 38,674,808 | <p>I have a big array and a part of it is shown below. In each list, the first number is the start and the second number is the end (so there is a range). What I want to do is:</p>
<p>1:
filter out those lists (ranges) which are smaller than 300 (e.g. the 18th list in the following array must be removed)</p>
<p>2:
get smaller ranges (lists) in this way: (start+100) to (start+200). E.g. the first list would be [569, 669].</p>
<p>I tried to use different split functions in numpy but none of them gives what I am looking for.</p>
<pre><code>array([[ 469, 1300],
[ 171, 1440],
[ 187, 1564],
[ 204, 1740],
[ 40, 1363],
[ 56, 1457],
[ 132, 606],
[1175, 2096],
[ 484, 2839],
[ 132, 4572],
[ 166, 1693],
[ 69, 3300],
[ 142, 1003],
[2118, 2118],
[ 715, 1687],
[ 301, 1006],
[ 48, 2142],
[ 63, 330],
[ 479, 2411]], dtype=uint32)
</code></pre>
<p>do you guys know how to do that in python?</p>
<p>thanks</p>
| 2 | 2016-07-30T14:06:40Z | 38,674,887 | <pre><code>data = [[ 469, 1300],
# ...
[ 63, 330],
[ 479, 2411]]
print(
list(filter(lambda v: v[1] - v[0] >= 300, data))
)
print(
[[v[0] + 100, v[0] + 200] for v in data]
)
</code></pre>
<p>Explanation:</p>
<p>The first command uses the builtin <a href="https://docs.python.org/3.5/library/functions.html#filter" rel="nofollow">filter</a> method to filter the remaining elements based on a <a href="https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions" rel="nofollow">lambda</a> expression.</p>
<p>The second iterates over the list and generates a new one while doing so.</p>
<p>If the input and output should be numpy arrays, try the following. Note: there is no way to filter a numpy array in place; boolean or fancy indexing always creates a new one.</p>
<pre><code>import numpy as np

data = np.array([
    (469, 1300),
    (171, 1440),
    # ...
    (63, 330),
    (479, 2411)], dtype=np.uint32)

print(
    np.array(list(filter(lambda v: v[1] - v[0] >= 300, data)), dtype=np.uint32)
)

print(
    np.array([[v[0] + 100, v[0] + 200] for v in data], dtype=np.uint32)
)
</code></pre>
| 0 | 2016-07-30T14:15:10Z | [
"python",
"arrays",
"numpy"
] |
split elements in array using python | 38,674,808 | <p>I have a big array and a part of it is shown below. In each list, the first number is the start and the second number is the end (so there is a range). What I want to do is:</p>
<p>1:
filter out those lists (ranges) which are smaller than 300 (e.g. the 18th list in the following array must be removed)</p>
<p>2:
get smaller ranges (lists) in this way: (start+100) to (start+200). E.g. the first list would be [569, 669].</p>
<p>I tried to use different split functions in numpy but none of them gives what I am looking for.</p>
<pre><code>array([[ 469, 1300],
[ 171, 1440],
[ 187, 1564],
[ 204, 1740],
[ 40, 1363],
[ 56, 1457],
[ 132, 606],
[1175, 2096],
[ 484, 2839],
[ 132, 4572],
[ 166, 1693],
[ 69, 3300],
[ 142, 1003],
[2118, 2118],
[ 715, 1687],
[ 301, 1006],
[ 48, 2142],
[ 63, 330],
[ 479, 2411]], dtype=uint32)
</code></pre>
<p>do you guys know how to do that in python?</p>
<p>thanks</p>
| 2 | 2016-07-30T14:06:40Z | 38,675,005 | <p>A general note before:
You should use <a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="nofollow">tuples</a> to represent such ranges, not lists. They are immutable data types where the order of items carries meaning.</p>
<p>As for 1, it is pretty easy to filter in python:</p>
<pre><code>filter(lambda single_range: single_range[1] - single_range[0] > 300, ranges)
</code></pre>
<p>A clearer way (in my opinion) to do this is with a list comprehension:</p>
<pre><code>[(start, end) for start, end in ranges if end - start > 300]
</code></pre>
<p>As for 2, I don't fully understand what you mean, but if you mean creating a new list of ranges, where each range is changed using a single function, you want a map (or, my preferred way, a list comprehension, which is equivalent but more descriptive):</p>
<pre><code>[(start + 100, start + 200) for start, end in ranges]
</code></pre>
| 0 | 2016-07-30T14:28:32Z | [
"python",
"arrays",
"numpy"
] |
split elements in array using python | 38,674,808 | <p>I have a big array and a part of it is shown below. In each list, the first number is the start and the second number is the end (so there is a range). What I want to do is:</p>
<p>1:
filter out those lists (ranges) which are smaller than 300 (e.g. the 18th list in the following array must be removed)</p>
<p>2:
get smaller ranges (lists) in this way: (start+100) to (start+200). E.g. the first list would be [569, 669].</p>
<p>I tried to use different split functions in numpy but none of them gives what I am looking for.</p>
<pre><code>array([[ 469, 1300],
[ 171, 1440],
[ 187, 1564],
[ 204, 1740],
[ 40, 1363],
[ 56, 1457],
[ 132, 606],
[1175, 2096],
[ 484, 2839],
[ 132, 4572],
[ 166, 1693],
[ 69, 3300],
[ 142, 1003],
[2118, 2118],
[ 715, 1687],
[ 301, 1006],
[ 48, 2142],
[ 63, 330],
[ 479, 2411]], dtype=uint32)
</code></pre>
<p>do you guys know how to do that in python?</p>
<p>thanks</p>
| 2 | 2016-07-30T14:06:40Z | 38,675,113 | <p>Assuming your array is called <code>A</code>, then:</p>
<pre><code>import numpy as np
# Filter out differences not wanted
gt300 = A[(np.diff(A) >= 300).flatten()]
# Set new value of first column
gt300[:,0] += 100
# Set value of second column
gt300[:,1] = gt300[:,0] + 100
</code></pre>
<p>Or maybe something like:</p>
<pre><code>B = A[:,0][(np.diff(A) >= 300).flatten()]
C = np.repeat(B, 2).reshape((len(B), 2)) + [100, 200]
</code></pre>
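Putting the two steps together on a small runnable sample (a sketch of the same mask-then-broadcast idea; the four rows are taken from the question's array):

```python
import numpy as np

A = np.array([[469, 1300],
              [2118, 2118],
              [63, 330],
              [479, 2411]], dtype=np.uint32)

# 1: keep only the rows whose range spans at least 300
mask = (np.diff(A, axis=1) >= 300).flatten()
kept = A[mask]

# 2: build the (start+100, start+200) ranges from the surviving starts
small = kept[:, [0]] + [100, 200]

print(kept.tolist())   # [[469, 1300], [479, 2411]]
print(small.tolist())  # [[569, 669], [579, 679]]
```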
| 2 | 2016-07-30T14:40:46Z | [
"python",
"arrays",
"numpy"
] |
split elements in array using python | 38,674,808 | <p>I have a big array and a part of it is shown below. In each list, the first number is the start and the second number is the end (so there is a range). What I want to do is:</p>
<p>1:
filter out those lists (ranges) which are smaller than 300 (e.g. the 18th list in the following array must be removed)</p>
<p>2:
get smaller ranges (lists) in this way: (start+100) to (start+200). E.g. the first list would be [569, 669].</p>
<p>I tried to use different split functions in numpy but none of them gives what I am looking for.</p>
<pre><code>array([[ 469, 1300],
[ 171, 1440],
[ 187, 1564],
[ 204, 1740],
[ 40, 1363],
[ 56, 1457],
[ 132, 606],
[1175, 2096],
[ 484, 2839],
[ 132, 4572],
[ 166, 1693],
[ 69, 3300],
[ 142, 1003],
[2118, 2118],
[ 715, 1687],
[ 301, 1006],
[ 48, 2142],
[ 63, 330],
[ 479, 2411]], dtype=uint32)
</code></pre>
<p>do you guys know how to do that in python?</p>
<p>thanks</p>
| 2 | 2016-07-30T14:06:40Z | 38,676,451 | <p>We can find which rows have the small difference with:</p>
<pre><code>In [745]: mask=(x[:,1]-x[:,0])<300
In [746]: mask
Out[746]:
array([False, False, False, False, False, False, False, False, False,
False, False, False, False, True, False, False, False, True, False], dtype=bool)
</code></pre>
<p>We can use that <code>mask</code> to select those rows, or to deselect them</p>
<pre><code>In [747]: x[mask,:]
Out[747]:
array([[2118, 2118],
[ 63, 330]], dtype=uint32)
In [748]: x[~mask,:]
Out[748]:
array([[ 469, 1300],
[ 171, 1440],
[ 187, 1564],
[ 204, 1740],
...
[ 479, 2411]], dtype=uint32)
</code></pre>
<p>To make a new set of ranges; get the first column; here I am using <code>[0]</code> so the selection remains a column array:</p>
<pre><code>In [750]: x[:,[0]]
Out[750]:
array([[ 469],
[ 171],
[ 187],
...
[ 48],
[ 63],
[ 479]], dtype=uint32)
</code></pre>
<p>Add to that the desired offsets. This takes advantage of broadcasting.</p>
<pre><code>In [751]: x[:,[0]]+[100,200]
Out[751]:
array([[ 569, 669],
[ 271, 371],
[ 287, 387],
[ 304, 404],
[ 140, 240],
[ 156, 256],
...
[ 401, 501],
[ 148, 248],
[ 163, 263],
[ 579, 679]], dtype=int64)
</code></pre>
<p>There are other ways of constructing such an array</p>
<pre><code>np.column_stack([x[:,0]+100,x[:,0]+200])
np.array([x[:,0]+100, x[:,0]+200]).T # or vstack
</code></pre>
<p>Other answers have suggested the <code>Python</code> list <code>filter</code>. I'm partial to list comprehensions in this kind of use, for example:</p>
<pre><code>In [756]: np.array([i for i in x if (i[1]-i[0])<300])
Out[756]:
array([[2118, 2118],
[ 63, 330]], dtype=uint32)
</code></pre>
<p>For small lists of lists, the pure Python approach tends to be faster. But if the object is already a <code>numpy</code> array, it is faster to use the <code>numpy</code> operations that work on the whole array at once (i.e. do the iteration in compiled code). Hence my suggestion to use the boolean mask.</p>
| 0 | 2016-07-30T17:05:57Z | [
"python",
"arrays",
"numpy"
] |
Append a series of strings to a Pandas column | 38,674,830 | <p>I'm a Pandas newbie and have written some code that should append a dictionary to the last column in a row.
The last column is named "Holder".</p>
<p>Part of my code, which offends the pandas engine is shown below</p>
<pre><code>df.loc[df[innercat] == -1, 'Holder'] += str(odata)
</code></pre>
<p>I get the error message</p>
<pre><code>TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S75') dtype('S75') dtype('S75')
</code></pre>
<p>When I run my code replacing the "+=" with "=" the code runs just fine although I only get part of the data I want.
What am I doing wrong? I've tried removing the str() cast and it still works as an assignment, not an append.</p>
<p><strong>Further clarification</strong>:</p>
<pre><code>Math1 Math1_Notes Physics1 Physics1_Notes Chem1 Chem1_Notes Bio1 Bio1_Notes French1 French1_Notes Spanish1 Spanish1_Notes Holder
-1 Gr8 student 0 0 0 0 -1 Foo NaN
0 0 0 0 0 -1 Good student NaN
0 0 -1 So so 0 0 0 NaN
0 -1 Not serious -1 Hooray -1 Voila 0 NaN
</code></pre>
<p>My original dataset contains over 300 columns of data, but I've created an example that captures the spirit of what I'm trying to do. Imagine a college with 300 departments each offering 1(or more) courses. The above data is a micro-sample of that data. So for each student, next to their name or admission number, there is a "-1" indicating that they took a certain course. And in addition, the next column USUALLY contains notes from that department about that student.</p>
<p>Looking at the 1st row of the data above, we have a student who took Math & Spanish and each department added some comments about the student. For each row, I want to add a dict that summarises the data for each student. Basically a JSON summary of each departments entry. Assuming a string of the general form</p>
<pre><code>json_string = {"student name": a, "data": {"notes": b, "Course name": c}}
</code></pre>
<p>I intend my code to read my csv, form a dict for each department and APPEND it to Holder column. Thus for the above student(1st row), there will be 2 dicts namely</p>
<pre><code>{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}
{"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}
</code></pre>
<p>and the final contents of Holder for row 1 will be</p>
<pre><code>{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}
</code></pre>
<p>when I can successfully append the data, I will probably add a comma or '|' in between the seperate dicts. The line of code that I have written is </p>
<pre><code>df.loc[df[innercat] == -1, 'Holder'] = str(odata)
</code></pre>
<p>whether or not I cast the above line as str(), writing the assignment instead of the append operator appears to overwrite all the previous values and only write the last value into Holder, something like</p>
<pre><code>-1 Gr8 student 0 0 0 0 -1 Foo {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}
</code></pre>
<p>while I want</p>
<pre><code>-1 Gr8 student 0 0 0 0 -1 Foo {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}
</code></pre>
<p>For anyone interested in reproducing what I have done, the main part of my code is shown below</p>
<pre><code>count = 0
substrategy = 0
for cat in col_array:
count += 1
for innercat in cat:
if "Notes" in innercat:
#b = str(df[innercat])
continue
substrategy += 1
c = count
a = substrategy
odata = {}
odata['did'] = a
odata['id'] = a
odata['data'] = {}
odata['data']['notes'] = b
odata['data']['substrategy'] = a
odata['data']['strategy'] = c
df.loc[df[innercat] == -1, 'Holder'] += str(odata)
</code></pre>
| 0 | 2016-07-30T14:08:36Z | 38,675,477 | <p>is that what you want?</p>
<pre><code>In [190]: d1 = {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}
In [191]: d2 = {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}
In [192]: import json
In [193]: json.dumps(d1)
Out[193]: '{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}'
In [194]: df
Out[194]:
Investments_Cash Holder
0 0 NaN
1 0 NaN
2 -1 NaN
In [196]: df.Holder = ''
In [197]: df.ix[df.Investments_Cash == -1, 'Holder'] += json.dumps(d1)
In [198]: df.ix[df.Investments_Cash == -1, 'Holder'] += ' ' + json.dumps(d2)
In [199]: df
Out[199]:
Investments_Cash
Holder
0 0
1 0
2 -1 {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course nam...
</code></pre>
<p>NOTE: <strong>it will be really painful to work / parse your <code>Holder</code> column in future, because it's not standard - you won't be able to parse it back without additional preprocessing (for example splitting using complex RegEx'es, etc.)</strong></p>
<p>So i would strongly recommend you to convert <strong>a list of dicts to JSON</strong> - you'll be able to read it back using <a href="https://docs.python.org/3/library/json.html" rel="nofollow">json.loads()</a> method:</p>
<pre><code>In [201]: df.ix[df.Investments_Cash == -1, 'Holder'] = json.dumps([d1, d2])
In [202]: df
Out[202]:
Investments_Cash
Holder
0 0
1 0
2 -1 [{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}, {"student name": "Peter", "data": {"notes": "Foo", "Course n...
</code></pre>
<p>parse it back:</p>
<pre><code>In [204]: lst = json.loads(df.ix[2, 'Holder'])
In [205]: lst
Out[205]:
[{'data': {'Course name': 'Math1', 'notes': 'Gr8 student'},
'student name': 'Peter'},
{'data': {'Course name': 'Spanish1', 'notes': 'Foo'},
'student name': 'Peter'}]
In [206]: lst[0]
Out[206]:
{'data': {'Course name': 'Math1', 'notes': 'Gr8 student'},
'student name': 'Peter'}
In [207]: lst[1]
Out[207]: {'data': {'Course name': 'Spanish1', 'notes': 'Foo'}, 'student name': 'Peter'}
</code></pre>
| 1 | 2016-07-30T15:17:20Z | [
"python",
"pandas"
] |
How to "promote" an instance to a subclass? | 38,674,880 | <p>Suppose I have the following classes:</p>
<pre><code>class Plain(object):
def speak(self):
print 'ho-hum'
class Fancy(Plain):
def exult(self):
print 'huzzah!'
plain = Plain()
plain.speak()
# ho-hum
plain.exult()
# ---------------------------------------------------------------------------
# AttributeError Traceback (most recent call last)
# <ipython-input-585-5f782c9ea88b> in <module>()
# ----> 1 plain.exult()
fancy = Fancy()
fancy.speak()
# ho-hum
fancy.exult()
# huzzah!
</code></pre>
<p>... and suppose that, for some reason, I want to "promote" an instance of <code>Plain</code> so that it becomes an instance of <code>Fancy</code>.</p>
<p>I know that I can always modify the instance's <code>__class__</code> attribute:</p>
<pre><code>plain.__class__ = Fancy
plain.exult()
# huzzah!
</code></pre>
<p>...but I gather from various SO posts (e.g., here, here) that this is not a good thing to do (at least not in production code).</p>
<p>Is there a more "production-code-worthy" way to promote an instance to a subclass?</p>
<hr>
<p><sub>
FWIW, the real-world use-case that brings me to this problem is the following. I'm working with a 3rd-party library that implements a client for a web service. Among the things this client can do is return instances of various classes. These classes provide very few methods, and the few they provide are very weak. I can easily write subclasses for these classes with more, and more powerful, methods, but in order for them to be useful, I'd also need to write wrappers for those API functions that currently return instances of the API's original classes. Invariably, this wrappers would only have to promote the instances returned by the original ("wrappee") functions to instances of my enhanced subclasses.</sub></p>
| 0 | 2016-07-30T14:14:00Z | 38,674,928 | <p>You could of course use</p>
<pre><code>fancy = Fancy(plain)
</code></pre>
<p>and define in the constructor of Fancy what should happen if an instance of Plain is given as an argument.</p>
<p><strong>Example</strong>:</p>
<pre><code>class Plain:
def __init__(self, foo, bar):
self.foo = foo
self.bar = bar
self.status = 'plain'
class Fancy:
def __init__(self, something):
if isinstance(something, Plain):
self.foo = something.foo
self.bar = something.bar
self.status = 'fancy'
</code></pre>
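For illustration, here is a runnable sketch of how this constructor-based promotion is used (it repeats the answer's classes; the attribute names <code>foo</code>/<code>bar</code> are just the example's):

```python
class Plain:
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar
        self.status = 'plain'

class Fancy:
    def __init__(self, something):
        # "promote" by copying the Plain instance's attributes
        if isinstance(something, Plain):
            self.foo = something.foo
            self.bar = something.bar
            self.status = 'fancy'

plain = Plain(1, 2)
fancy = Fancy(plain)   # promote the existing instance
print(fancy.status)    # fancy
```

The original <code>plain</code> object is left untouched; the promotion produces a new object with the same data.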
| 3 | 2016-07-30T14:19:10Z | [
"python",
"class"
] |
Ignoring empty arguments for certain flags | 38,674,919 | <p>I have a command that takes 1 argument and can take several flags.</p>
<pre><code>@click.command()
@click.argument('item')
@click.option('--list-items', help='list items', is_flag=True)
def cli(item, list_items):
if list_items:
click.echo(ITEMS)
return
</code></pre>
<p>currently it returns:</p>
<pre><code>Error: Missing argument "item".
</code></pre>
<p>How can I make so that I could access the functionality of --list-items even if I don't provide an argument? Just like --help flag does.</p>
| -1 | 2016-07-30T14:18:23Z | 38,675,052 | <p>You'd have to make <code>item</code> optional:</p>
<pre><code>@click.argument('item', required=False)
</code></pre>
<p>then do error handling in the function (e.g. raise a <a href="http://click.pocoo.org/5/api/#click.BadParameter" rel="nofollow"><code>BadParameter()</code> exception</a>).</p>
| 1 | 2016-07-30T14:32:58Z | [
"python",
"click",
"command-line-interface"
] |
Ignoring empty arguments for certain flags | 38,674,919 | <p>I have a command that takes 1 argument and can take several flags.</p>
<pre><code>@click.command()
@click.argument('item')
@click.option('--list-items', help='list items', is_flag=True)
def cli(item, list_items):
if list_items:
click.echo(ITEMS)
return
</code></pre>
<p>currently it returns:</p>
<pre><code>Error: Missing argument "item".
</code></pre>
<p>How can I make so that I could access the functionality of --list-items even if I don't provide an argument? Just like --help flag does.</p>
| -1 | 2016-07-30T14:18:23Z | 38,675,084 | <p>You made it a mandatory argument; make it optional by either adding <code>required=False</code> or by giving it a default value.</p>
| 0 | 2016-07-30T14:36:08Z | [
"python",
"click",
"command-line-interface"
] |
Define a function using string variable | 38,674,954 | <p>I am trying to create a function "a21" that takes a parameter x and adds 4 to it.</p>
<pre><code>eq = 'x+4'
b=21
new='a'+str(b)+'(x)'
def eval(new):
return eval(eq)
c=5
print(a21(c))
</code></pre>
<p>The desired output is 9 but it's not recognizing a21 as a function. How do I write this to create a the function a21 that also takes a parameter x?</p>
| -3 | 2016-07-30T14:22:38Z | 38,674,997 | <p>Write a fully-featured function definition:</p>
<pre><code>new = '''
def a21(x):
return x + 4
'''
</code></pre>
<p>And then <code>exec</code>ute it: <code>exec(new)</code> and run: <code>a21(678)</code>.</p>
<p>If you want to construct a function during runtime, use string formatting.</p>
<pre><code>new = '''
def {}({}):
return {}
'''
exec(new.format('test', 'x', 'x+4'))
test(123)
</code></pre>
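As a runnable check of the <code>exec</code> approach (a sketch, not from the original answer: passing an explicit namespace dict to <code>exec</code> keeps the generated function easy to find; the name <code>a21</code> comes from the question):

```python
template = '''
def {}({}):
    return {}
'''
namespace = {}
exec(template.format('a21', 'x', 'x+4'), namespace)  # defines a21 in namespace
print(namespace['a21'](5))   # 9
```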
| 2 | 2016-07-30T14:27:42Z | [
"python"
] |
Define a function using string variable | 38,674,954 | <p>I am trying to create a function "a21" that takes a parameter x and adds 4 to it.</p>
<pre><code>eq = 'x+4'
b=21
new='a'+str(b)+'(x)'
def eval(new):
return eval(eq)
c=5
print(a21(c))
</code></pre>
<p>The desired output is 9 but it's not recognizing a21 as a function. How do I write this to create a the function a21 that also takes a parameter x?</p>
| -3 | 2016-07-30T14:22:38Z | 38,675,437 | <p>The following is possible and does almost the same thing:
you can bind the function inside another function, like below.</p>
<pre><code>eq = 'x+4'
def bindfunc(name):
def dynamicfunc(x):
return eval(eq)
dynamicfunc.__name__ = name
return dynamicfunc
</code></pre>
<p>The way you would use this would be a little different:</p>
<pre><code>b=21
new='a'+str(b) #your function name
c=5
print(bindfunc(new)(c))
</code></pre>
<p>What the last line does is that it first runs bindfunc which returns a function with the given name. It then runs that function with the input c as needed and prints output.</p>
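A quick, runnable sanity check of the closure approach with the question's inputs (the expected value 9 is implied by the question, not stated in the original answer):

```python
eq = 'x+4'

def bindfunc(name):
    def dynamicfunc(x):
        return eval(eq)          # evaluates 'x+4' with the local x in scope
    dynamicfunc.__name__ = name
    return dynamicfunc

a21 = bindfunc('a21')
print(a21(5), a21.__name__)      # 9 a21
```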
<p>Hope this helps!</p>
| 0 | 2016-07-30T15:12:38Z | [
"python"
] |
Python error while attempting to call cocos from the command line | 38,675,224 | <p>To be clear, this is not an issue about creating a new project. I have a sneaking suspicion that it may have something to do with my initial setup using setup.py.</p>
<p><strong>The Problem:</strong></p>
<p>Calling "cocos" from the command line generates the following error:</p>
<pre><code>*C:\Users\pixelhacker>cocos
File "C:\cocos2d-x-3.12\tools\cocos2d-console\bin\/cocos.py", line 198
except subprocess.CalledProcessError as e:
^
SyntaxError: invalid syntax*
</code></pre>
<p><strong>Additional Info:</strong>
During the setup process the only issue I had was setting up the ANT_SDK. In order to get that part of the setup to work I created an environment variable that included the root folder \bin.</p>
<p><strong>Environment Info:</strong></p>
<p><strong>Cocos Version:</strong> cocos2d-x-3.12</p>
<p><strong>NDK Version:</strong> android-ndk-r12b</p>
<p><strong>Android SDK Version:</strong> Installed back on 5/5/2016</p>
<p><strong>Ant Version:</strong> apache-ant-1.9.7</p>
<p><strong>Operating System:</strong> Windows 8.1 x64</p>
<p>If I've missed anything let me know. Thanks for your time.</p>
| 0 | 2016-07-30T14:52:06Z | 38,687,364 | <p>Originally I was running an older version of Python, version 2.5. I upgraded to the latest version but the issue persisted. After some research, I decided to use version 2.7.</p>
<p>Problem resolved.</p>
| 0 | 2016-07-31T18:52:58Z | [
"python",
"cocos2d-x-3.x"
] |
Flask HTML forms- UnboundLocalError | 38,675,377 | <p>I am new to the Flask application development. I want to make a Flask application that receives the details from a user. After the successful registration, then a new page will be displayed showing the entered details. The application is built using HTML forms and not by using WTF.</p>
<p><strong>Following is the code for app.py</strong></p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/register',methods = ['POST', 'GET'])
def user():
if request.method == 'POST':
result = request.form
name = result['name']
phoneno = result['phoneno']
email = result['email']
password = result['password']
if name or phoneno or email or password is not None:
print name,phoneno,email,password
return render_template("register.html",result=result)
if __name__ == '__main__':
app.run(debug = True)
</code></pre>
<p><strong>This is the HTML page of register.html</strong></p>
<pre><code><html>
<body>
<form action = "/success" method = "POST">
<p>Name <input type = "text" name = "name" /></p>
<p>Phone no <input type = "text" name = "phoneno" /></p>
<p>Email <input type = "text" name = "email" /></p>
<p>Password <input type ="text" name = "password" /></p>
<p><input type = "submit" value = "submit" /></p>
</form>
</body>
</html>
</code></pre>
<p>What now I am getting is an error</p>
<pre><code>Full Traceback
**UnboundLocalError**
UnboundLocalError: local variable 'result' referenced before assignment.
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/protocol/PycharmProjects/tut/app.py", line 17, in user
return render_template("register.html",result=result)
UnboundLocalError: local variable 'result' referenced before assignment
</code></pre>
<p>I tried to print the values. But nothing is shown in the console I couldn't rectify this error.Thanks in advance.</p>
| 0 | 2016-07-30T15:08:01Z | 38,676,803 | <p><code>result</code> only exists if the <code>request</code> method is <code>POST</code>, so if the request method was <code>GET</code> (the default, i.e. when the page is requested before the user submits the form), then <code>result</code> won't exist, hence the error message. Plus, you need to render the <code>result</code> object into another template, for example <code>user.html</code>:</p>
<pre><code>@app.route('/register',methods = ['POST', 'GET'])
def user():
if request.method == 'POST':
result = request.form
name = result['name']
phoneno = result['phoneno']
email = result['email']
password = result['password']
if name or phoneno or email or password is not None:
print name,phoneno,email,password
return render_template("user.html",result=result)
return render_template("register.html")
</code></pre>
<p>Another thing, if the above view function is for <code>register.html</code> form, then you also need to change in <code>form</code> block, from: </p>
<pre><code><form action = "/success" method = "POST">
</code></pre>
<p>to</p>
<pre><code><form action = "/register" method = "POST">
</code></pre>
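The Python behaviour behind the error can be reproduced without Flask; this minimal stdlib sketch (not part of the original answer) shows why the GET path fails:

```python
def view(method):
    if method == 'POST':
        result = {'name': 'Peter'}
    return result   # 'result' is a local that is never assigned on the GET path

print(view('POST'))             # {'name': 'Peter'}
try:
    view('GET')
except UnboundLocalError as e:
    print('raised:', e)
```

Because <code>result</code> is assigned somewhere in the function, Python treats it as a local everywhere in the function, so reading it before assignment raises <code>UnboundLocalError</code> rather than looking it up globally.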
| 1 | 2016-07-30T17:46:55Z | [
"python",
"flask"
] |
Flask HTML forms- UnboundLocalError | 38,675,377 | <p>I am new to the Flask application development. I want to make a Flask application that receives the details from a user. After the successful registration, then a new page will be displayed showing the entered details. The application is built using HTML forms and not by using WTF.</p>
<p><strong>Following is the code for app.py</strong></p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/register',methods = ['POST', 'GET'])
def user():
if request.method == 'POST':
result = request.form
name = result['name']
phoneno = result['phoneno']
email = result['email']
password = result['password']
if name or phoneno or email or password is not None:
print name,phoneno,email,password
return render_template("register.html",result=result)
if __name__ == '__main__':
app.run(debug = True)
</code></pre>
<p><strong>This is the HTML page of register.html</strong></p>
<pre><code><html>
<body>
<form action = "/success" method = "POST">
<p>Name <input type = "text" name = "name" /></p>
<p>Phone no <input type = "text" name = "phoneno" /></p>
<p>Email <input type = "text" name = "email" /></p>
<p>Password <input type ="text" name = "password" /></p>
<p><input type = "submit" value = "submit" /></p>
</form>
</body>
</html>
</code></pre>
<p>What now I am getting is an error</p>
<pre><code>Full Traceback
**UnboundLocalError**
UnboundLocalError: local variable 'result' referenced before assignment.
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/protocol/PycharmProjects/tut/app.py", line 17, in user
return render_template("register.html",result=result)
UnboundLocalError: local variable 'result' referenced before assignment
</code></pre>
<p>I tried to print the values. But nothing is shown in the console I couldn't rectify this error.Thanks in advance.</p>
| 0 | 2016-07-30T15:08:01Z | 38,676,930 | <p>Try this:<br>
app.py</p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/')
def form():
return render_template('register.html')
@app.route('/success',methods = ['POST', 'GET'])
def user():
if request.method == 'POST':
results = request.form
return render_template('new.html',results=results)
if __name__ == '__main__':
app.run(debug = True)
</code></pre>
<p>new.html</p>
<pre><code>{% for k,v in results.iteritems() %}
<p>{{v}}</p>
{% endfor %}
</code></pre>
<p>register.html is the same as yours.<br>
You are missing an HTML template to display your form input data; the new.html file will render the values you enter in your form.</p>
| 0 | 2016-07-30T18:00:08Z | [
"python",
"flask"
] |
Python/OpenCV - how to load all images from folder in alphabetical order | 38,675,389 | <p>how to load all images from given folder in alphabetical order?</p>
<p>Code like this:</p>
<pre><code>images = []
for img in glob.glob("images/*.jpg"):
n= cv2.imread(img)
images.append(n)
print (img)
</code></pre>
<p>...return:</p>
<pre><code>...
images/IMG_9409.jpg
images/IMG_9425.jpg
images/IMG_9419.jpg
images/IMG_9376.jpg
images/IMG_9368.jpg
images/IMG_9417.jpg
...
</code></pre>
<p>Is there a way to get all images but in correct order?</p>
| 1 | 2016-07-30T15:08:55Z | 38,675,461 | <p>Luckily, python lists have a built-in sort function that can sort strings using ASCII values. It is as simple as putting this before your loop:</p>
<pre><code>filenames = [img for img in glob.glob("images/*.jpg")]
filenames.sort() # ADD THIS LINE
images = []
for img in filenames:
n= cv2.imread(img)
images.append(n)
print (img)
</code></pre>
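As a side note (not from the original answer), <code>sorted()</code> does the same without mutating the list; illustrated here on the question's filenames so it runs without any image files:

```python
filenames = [
    "images/IMG_9409.jpg", "images/IMG_9425.jpg", "images/IMG_9419.jpg",
    "images/IMG_9376.jpg", "images/IMG_9368.jpg", "images/IMG_9417.jpg",
]
for name in sorted(filenames):   # lexicographic (ASCII) order
    print(name)
```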
| 0 | 2016-07-30T15:14:44Z | [
"python",
"opencv"
] |
fft (fast Fourier transform) to speed up opencv in python? | 38,675,473 | <p>i have a raspberry pi with opencv and python installed. I want is to do a simple frontal face haarcascade using opencv. It works however i only have about 2 fps. So i searched through the internet and found this: <code>https://www.raspberrypi.org/blog/accelerating-fourier-transforms-using-the-gpu/</code></p>
<p>I think it's quite interesting, but how do i implement it into python?</p>
<p>Can you help me?</p>
| 0 | 2016-07-30T15:16:57Z | 38,927,016 | <p>You can highly improve the performance of the classification when you specify the parameters correctly.
Just set the min/max frame size to reasonable values and maybe set a scale factor.</p>
<p>For the FFT there are already some Python packages available for that.
I would not recommend writing your own FFT function, since the library functions are mostly optimized and you are unlikely to be able to write a faster version yourself.
There is another issue posted for that topic:
<a href="https://github.com/numpy/numpy/issues/5348" rel="nofollow">https://github.com/numpy/numpy/issues/5348</a></p>
<p>They linked a github repo for this as well:
<a href="https://github.com/raspberrypi/userland/tree/master/host_applications/linux/apps/hello_pi/hello_fft" rel="nofollow">https://github.com/raspberrypi/userland/tree/master/host_applications/linux/apps/hello_pi/hello_fft</a></p>
| 0 | 2016-08-12T21:57:17Z | [
"python",
"opencv",
"fft"
] |
robot framework with pabot : is it possible to pass two different values to a variable in two tests | 38,675,475 | <p>Example, I have <code>file1.robot</code> and <code>file2.robot</code> and each has <code>${var}</code> as the variable. Can I pass 2 different values to this same <code>${var}</code> in the command line? Something like <code>pabot -v var:one:two file1.robot file2.robot</code> where <code>-v var:one:two</code> would follow the order of the robot files; not by name but by how they were introduced in the command line?</p>
| 2 | 2016-07-30T15:17:14Z | 38,740,020 | <p>This solution is not 100% what you've asked for, but maybe you can make it work.</p>
<p>The <a href="https://github.com/mkorpela/pabot/blob/master/README.md" rel="nofollow">pabot readme file</a> mentions a shared set of variables and <a href="https://github.com/mkorpela/pabot/blob/master/README.md#pabotlib" rel="nofollow">acquiring a set</a> for each running process. The documentation was a bit unclear to me, but if you try the following example, you'll see for yourself. It's basically a pool of variable sets: each process can take a set from the pool and, when it's done with it, return that set back to the pool.</p>
<p>Create your value set <code>valueset.dat</code></p>
<pre><code>[Set1]
USERNAME=user1
PASSWORD=password1
[Set2]
USERNAME=user2
PASSWORD=password2
</code></pre>
<p>create <code>suite1.robot</code> and <code>suite2.robot</code>. I've created 2 suites that are exactly the same. I just wanted to try to run 2 suites in parallel.</p>
<pre><code>*** Settings ***
Library pabot.PabotLib
*** Test Cases ***
Foobar
${valuesetname}= Acquire Value Set
Log ${valuesetname}
${username}= Get Value From Set username
Log ${username}
# Release Value Set
</code></pre>
<p>And then run command <code>pabot --pabotlib --resourcefile valueset.dat tests</code>. If you check html report, you'll see that one suite used set1 and other used set2.</p>
<p>Hope this helps.<br>
Cheers!</p>
| 1 | 2016-08-03T09:45:03Z | [
"python",
"robotframework"
] |
robot framework with pabot : is it possible to pass two different values to a variable in two tests | 38,675,475 | <p>Example, I have <code>file1.robot</code> and <code>file2.robot</code> and each has <code>${var}</code> as the variable. Can I pass 2 different values to this same <code>${var}</code> in the command line? Something like <code>pabot -v var:one:two file1.robot file2.robot</code> where <code>-v var:one:two</code> would follow the order of the robot files; not by name but by how they were introduced in the command line?</p>
| 2 | 2016-07-30T15:17:14Z | 39,305,280 | <p>Another way is to use multiple argument files. One containing the first value for ${var} and the other containing the other.</p>
<p>This will execute the same test suite for both argument files.</p>
<pre><code>pabot --argumentfile1 varone.args --argumentfile2 vartwo.args file.robot
=>
file.robot executed with varone.args
file.robot executed with vartwo.args
</code></pre>
| 0 | 2016-09-03T09:17:47Z | [
"python",
"robotframework"
] |
Is a break statement required or is the return statement enough? | 38,675,529 | <p>In my Python 3(.5) script I have a simple <code>for</code> loop, that looks like this:</p>
<pre><code>request = "simple string"
ignore = (
# Tuple that contains regex's to ignore
)
for (i, regex) in enumerate(ignore):
if re.search(regex, request):
print("Found regex {0}.".format(i))
return False
</code></pre>
<p>Now, this works as expected and the loop stops on the first match that is found.<br />
I understand that the <code>break</code> statement is what is used to break loops in Python.<br />
Knowing this led to the question: <em><strong>Must</strong> I use the <code>break</code> statement to break a loop or could I get away with using the <code>return</code> statement instead?</em></p>
<p>Keeping this question in mind, would it be better off for my code to look like this:</p>
<pre><code>request = "simple string"
ignore = (
# Tuple that contains regex's to ignore
)
for (i, regex) in enumerate(ignore):
if re.search(regex, request):
print("Found regex {0}.".format(i))
break
</code></pre>
| 2 | 2016-07-30T15:23:13Z | 38,675,546 | <p><code>return</code> exits a function immediately.</p>
<p>If you are in a loop, that breaks out of the loop and no <code>break</code> is required first.</p>
<p>So no, you are not required to use <code>break</code> if <code>return</code> suits your needs.</p>
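Applied to the question's loop, a minimal runnable sketch (the function name and patterns here are illustrative, not from the question):

```python
import re

def first_ignored(ignore, request):
    for i, regex in enumerate(ignore):
        if re.search(regex, request):
            return i          # leaves the loop and the function; no break needed
    return None               # no pattern matched

print(first_ignored((r"^x", r"simple"), "simple string"))   # 1
```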
| 5 | 2016-07-30T15:24:55Z | [
"python",
"python-3.x",
"for-loop",
"return",
"break"
] |
How do i extract values with beautifulsoup? | 38,675,693 | <p>Hi, I am using beautiful soup to extract the euro to US dollar value, this is what i got so far: </p>
<pre><code>import requests
from bs4 import BeautifulSoup
def Euro_spider():
url = 'http://fx-rate.net/USD/'
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
</code></pre>
<p>what should i do next?</p>
| -1 | 2016-07-30T15:43:10Z | 38,675,724 | <p>Now you need to locate the correct element containing the rate:</p>
<pre><code><a href="/USD/EUR/" class="1rate" title="Dollar to Euro">0.895</a>
</code></pre>
<p>You can locate it, for example, by <code>title</code>:</p>
<pre><code>usd_to_euro = soup.find(title="Dollar to Euro").get_text()
print(usd_to_euro) # prints 0.895
</code></pre>
| 0 | 2016-07-30T15:46:22Z | [
"python",
"web-scraping",
"beautifulsoup",
"web-crawler"
] |
** operator TypeError | 38,675,717 | <p>I keep getting this error message:</p>
<pre class="lang-none prettyprint-override"><code>File "/Users/SalamonCreamcheese/Documents/4.py", line 31, in <module>
testFindRoot()
File "/Users/SalamonCreamcheese/Documents/4.py", line 29, in testFindRoot
print " ", result**power, " ~= ", x
TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
</code></pre>
<p>I don't understand why it's saying
that <code>result**power</code> is of wrong type(s), I'm assuming it means string, and why that's an error.</p>
<pre><code>def findRoot(x, power, epsilon):
"""Assumes x and epsilon int or float,power an int,
epsilon > 0 and power >= 1
Returns float y such that y**power is within epsilon of x
If such a float does not exist, returns None"""
if x < 0 and power % 2 == 0:
return None
low = min(-1.0, x)
high = max(1,.0 ,x)
ans = (high + low) / 2.0
while abs(ans**power - x) > epsilon:
if ans**power < x:
low = ans
else:
high = ans
ans = (high +low) / 2.0
return ans
def testFindRoot():
for x in (0.25, -0.25, 2, -2, 8, -8):
epsilon = 0.0001
for power in range(1, 4):
print 'Testing x = ' + str(x) +\
' and power = ' + str(power)
result = (x, power, epsilon)
if result == None:
print 'No result was found!'
else:
print " ", result**power, " ~= ", x
testFindRoot()
</code></pre>
| 1 | 2016-07-30T15:45:10Z | 38,675,824 | <p>You can't apply a power operation to a tuple. If you need to raise all of the values to that power, try operating on them separately.</p>
<p>You might need:
<code>[n**power for n in result]</code></p>
| 1 | 2016-07-30T15:58:44Z | [
"python",
"typeerror"
] |
** operator TypeError | 38,675,717 | <p>I keep getting this error message:</p>
<pre class="lang-none prettyprint-override"><code>File "/Users/SalamonCreamcheese/Documents/4.py", line 31, in <module>
testFindRoot()
File "/Users/SalamonCreamcheese/Documents/4.py", line 29, in testFindRoot
print " ", result**power, " ~= ", x
TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
</code></pre>
<p>I don't understand why it's saying
that <code>result**power</code> is of wrong type(s), I'm assuming it means string, and why that's an error.</p>
<pre><code>def findRoot(x, power, epsilon):
"""Assumes x and epsilon int or float,power an int,
epsilon > 0 and power >= 1
Returns float y such that y**power is within epsilon of x
If such a float does not exist, returns None"""
if x < 0 and power % 2 == 0:
return None
low = min(-1.0, x)
high = max(1,.0 ,x)
ans = (high + low) / 2.0
while abs(ans**power - x) > epsilon:
if ans**power < x:
low = ans
else:
high = ans
ans = (high +low) / 2.0
return ans
def testFindRoot():
for x in (0.25, -0.25, 2, -2, 8, -8):
epsilon = 0.0001
for power in range(1, 4):
print 'Testing x = ' + str(x) +\
' and power = ' + str(power)
result = (x, power, epsilon)
if result == None:
print 'No result was found!'
else:
print " ", result**power, " ~= ", x
testFindRoot()
</code></pre>
| 1 | 2016-07-30T15:45:10Z | 38,675,882 | <p><code>result**power</code> is trying to find <code>x to the y</code> where <code>x = result</code> and <code>y = power</code>.</p>
<p>Your problem is that <code>result</code> is a <strong>tuple.</strong> You can't raise a tuple to a power. It makes no sense...</p>
<p>You need to access the value inside the tuple that is supposed to be exponentiated and exponentiate that.</p>
<p>For instance, <code>result[0] ** power</code>, <code>result[1] ** power</code>, etc.</p>
| 2 | 2016-07-30T16:04:13Z | [
"python",
"typeerror"
] |
** operator TypeError | 38,675,717 | <p>I keep getting this error message:</p>
<pre class="lang-none prettyprint-override"><code>File "/Users/SalamonCreamcheese/Documents/4.py", line 31, in <module>
testFindRoot()
File "/Users/SalamonCreamcheese/Documents/4.py", line 29, in testFindRoot
print " ", result**power, " ~= ", x
TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
</code></pre>
<p>I don't understand why it's saying
that <code>result**power</code> is of wrong type(s), I'm assuming it means string, and why that's an error.</p>
<pre><code>def findRoot(x, power, epsilon):
"""Assumes x and epsilon int or float,power an int,
epsilon > 0 and power >= 1
Returns float y such that y**power is within epsilon of x
If such a float does not exist, returns None"""
if x < 0 and power % 2 == 0:
return None
low = min(-1.0, x)
high = max(1,.0 ,x)
ans = (high + low) / 2.0
while abs(ans**power - x) > epsilon:
if ans**power < x:
low = ans
else:
high = ans
ans = (high +low) / 2.0
return ans
def testFindRoot():
for x in (0.25, -0.25, 2, -2, 8, -8):
epsilon = 0.0001
for power in range(1, 4):
print 'Testing x = ' + str(x) +\
' and power = ' + str(power)
result = (x, power, epsilon)
if result == None:
print 'No result was found!'
else:
print " ", result**power, " ~= ", x
testFindRoot()
</code></pre>
| 1 | 2016-07-30T15:45:10Z | 38,680,370 | <p>I think you have a mistake on this line:</p>
<pre><code>result = (x, power, epsilon)
</code></pre>
<p>I suspect you want to be calling the <code>findRoot</code> function with those three values as arguments rather than creating a tuple out of them. Try changing it to:</p>
<pre><code>result = findRoot(x, power, epsilon)
</code></pre>
| 2 | 2016-07-31T03:03:39Z | [
"python",
"typeerror"
] |
** operator TypeError | 38,675,717 | <p>I keep getting this error message:</p>
<pre class="lang-none prettyprint-override"><code>File "/Users/SalamonCreamcheese/Documents/4.py", line 31, in <module>
testFindRoot()
File "/Users/SalamonCreamcheese/Documents/4.py", line 29, in testFindRoot
print " ", result**power, " ~= ", x
TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
</code></pre>
<p>I don't understand why it's saying
that <code>result**power</code> is of wrong type(s), I'm assuming it means string, and why that's an error.</p>
<pre><code>def findRoot(x, power, epsilon):
"""Assumes x and epsilon int or float,power an int,
epsilon > 0 and power >= 1
Returns float y such that y**power is within epsilon of x
If such a float does not exist, returns None"""
if x < 0 and power % 2 == 0:
return None
low = min(-1.0, x)
high = max(1,.0 ,x)
ans = (high + low) / 2.0
while abs(ans**power - x) > epsilon:
if ans**power < x:
low = ans
else:
high = ans
ans = (high +low) / 2.0
return ans
def testFindRoot():
for x in (0.25, -0.25, 2, -2, 8, -8):
epsilon = 0.0001
for power in range(1, 4):
print 'Testing x = ' + str(x) +\
' and power = ' + str(power)
result = (x, power, epsilon)
if result == None:
print 'No result was found!'
else:
print " ", result**power, " ~= ", x
testFindRoot()
</code></pre>
| 1 | 2016-07-30T15:45:10Z | 38,680,393 | <p>It looks like you meant to call <code>findRoot</code> with the three arguments <code>x</code>, <code>power</code>, and <code>epsilon</code>. Try editing the line </p>
<pre><code>result = (x, power, epsilon)
</code></pre>
<p>to be </p>
<pre><code>result = findRoot(x, power, epsilon)
</code></pre>
<p>As that line presently is, <code>result</code> is not a number (which you'd want for the <code>**</code> operator). <code>result</code> is a tuple that has three different objects in it: <code>x</code>, <code>power</code>, and <code>epsilon</code>. You can use the <code>**</code> operator on any two of the items in <code>result</code> but it's not defined for the <code>tuple</code> type. </p>
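For completeness, a runnable sketch combining the corrected call with the question's <code>findRoot</code> (the question's <code>max(1,.0 ,x)</code> line is written as <code>max(1.0, x)</code> here, which is presumably what was meant):

```python
def findRoot(x, power, epsilon):
    """Bisection search for y such that y**power is within epsilon of x."""
    if x < 0 and power % 2 == 0:
        return None
    low = min(-1.0, x)
    high = max(1.0, x)
    ans = (high + low) / 2.0
    while abs(ans**power - x) > epsilon:
        if ans**power < x:
            low = ans
        else:
            high = ans
        ans = (high + low) / 2.0
    return ans

result = findRoot(2, 2, 0.0001)       # the corrected line in use
print(abs(result**2 - 2) <= 0.0001)   # True
```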
| 2 | 2016-07-31T03:09:12Z | [
"python",
"typeerror"
] |
Random Number following a certain distribution | 38,675,845 | <p>I'm stuck with a problem. I have to implement an algorithm in python which needs a random number X such as Pr[X ⥠k] = 1/k. I don't know if already exists a distribution which can give me this exact value or if there is a way to implement this random generator using the simple random python library. Is there a way to do this? Thank you in advance for your help!</p>
| 2 | 2016-07-30T16:01:15Z | 38,676,004 | <p>The easiest attempt is to make</p>
<pre><code>X = 1.0 / random.random()
</code></pre>
<p>However, <code>random.random()</code> can have a value of zero, so this may result in a divide-by-zero error. The value can never be 1.0, according to the documentation, so use</p>
<pre><code>X = 1.0 / (1.0 - random.random())
</code></pre>
<p>For this distribution,</p>
<p>Pr[X ≥ k] = Pr[0 < 1/X ≤ 1/k]</p>
<p>= Pr[0 < 1 - random.random() ≤ 1/k]</p>
<p>= Pr[1 - 1/k ≤ random.random() < 1]</p>
<p>= 1 - (1 - 1/k) {since random() is uniform in [0,1) and [1-1/k, 1) is a subinterval}</p>
<p>= 1/k</p>
<p>(I wish I could use MathJax here!) Of course, all this assumes that k ≥ 1, since your condition makes no sense otherwise. I also assumed that X was to be a continuous random variable, from 1 to plus infinity. If X is to be a positive integer (thus k is also a positive integer), just take the floor of the formula I gave.</p>
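<p>A quick Monte Carlo sanity check of that claim (the seed and sample size here are arbitrary demo choices):</p>

```python
import random

random.seed(42)  # fixed seed so the check is reproducible

def sample():
    # X = 1 / (1 - U), with U uniform on [0, 1)
    return 1.0 / (1.0 - random.random())

n = 200000
for k in (1, 2, 4, 10):
    frac = sum(1 for _ in range(n) if sample() >= k) / float(n)
    print(k, frac)  # the fraction should come out close to 1/k
```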
| 6 | 2016-07-30T16:18:50Z | [
"python",
"random",
"probability"
] |
Random Number following a certain distribution | 38,675,845 | <p>I'm stuck with a problem. I have to implement an algorithm in python which needs a random number X such that Pr[X ≥ k] = 1/k. I don't know if there already exists a distribution which can give me this exact value, or if there is a way to implement this random generator using the simple random python library. Is there a way to do this? Thank you in advance for your help!</p>
| 2 | 2016-07-30T16:01:15Z | 38,676,015 | <p>What you need is a uniform random number generator for generating random numbers.
These numbers can then be transformed into a distribution (e.g. multiplication)</p>
<p><a href="https://docs.python.org/2/library/random.html" rel="nofollow">https://docs.python.org/2/library/random.html</a><br>
<a href="https://docs.python.org/3.5/library/random.html" rel="nofollow">https://docs.python.org/3.5/library/random.html</a></p>
<p>These can generate random numbers.
What I am not getting is your distribution. How does it look graphically? If you have a function representing it, use it to multiply with the random number.</p>
| -2 | 2016-07-30T16:19:45Z | [
"python",
"random",
"probability"
] |
Random Number following a certain distribution | 38,675,845 | <p>I'm stuck with a problem. I have to implement an algorithm in python which needs a random number X such that Pr[X ≥ k] = 1/k. I don't know if there already exists a distribution which can give me this exact value, or if there is a way to implement this random generator using the simple random python library. Is there a way to do this? Thank you in advance for your help!</p>
| 2 | 2016-07-30T16:01:15Z | 38,676,339 | <p>Rory ended up at the correct answer, but his math justifying it is not constructive—it doesn't show how to get to the answer, it only shows that his assertion is correct. The following uses basic rules of probability to derive the answer.</p>
<pre><code>Pr{X ≥ k} = 1 - Pr{X < k}
</code></pre>
<p>If <code>X</code> is a continuous random variable,</p>
<pre><code>Pr{X < k} = Pr{X ≤ k}
</code></pre>
<p>The right hand side is the definition of the cumulative distribution function F<sub>X</sub>(k), so</p>
<pre><code>Pr{X ≥ k} = 1 - F(k) = 1/k
F(k) = 1 - 1/k
</code></pre>
<p>Then <a href="https://en.wikipedia.org/wiki/Inverse_transform_sampling" rel="nofollow">by the inversion theorem</a> we can set that equal to <code>U</code>, a uniform(0,1) RV, and solve for k:</p>
<pre><code>U = 1 - 1/k
1 - U = 1/k
k = 1 / (1 - U)
</code></pre>
<p>Use your random number generator for U, and you're done. As Rory pointed out this is only valid for k ≥ 1, otherwise it would drive the CDF out of bounds.</p>
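<p>Turned into code, with <code>math.floor</code> giving the integer-valued variant mentioned in the other answer (the seed and sample size are arbitrary demo choices):</p>

```python
import math
import random

random.seed(7)

def sample_k():
    # k = 1 / (1 - U); flooring preserves Pr[X >= k] = 1/k for integer k >= 1
    u = random.random()
    return int(math.floor(1.0 / (1.0 - u)))

draws = [sample_k() for _ in range(100000)]
for k in (1, 2, 5):
    frac = sum(1 for d in draws if d >= k) / float(len(draws))
    print(k, frac)  # roughly 1/k
```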
| 1 | 2016-07-30T16:53:14Z | [
"python",
"random",
"probability"
] |
Print has "No Output" in STDOUT | 38,675,965 | <p>I have been writing code in HackerRank to count how many valleys that a hiker (Gary) walked through in a hike. Right now, it looks something like this.</p>
<pre><code>#Defining variables for later use and a list of Gary's steps
steps = ["U", "D", "D", "D", "D", "D", "U", "U"]
sea_level = 0
valleys = 0
#For loop to calculate how many valleys Gary hiked through
for step in steps:
step_ud = step
if step_ud == "U":
sea_level += 1
elif sea_level == 0:
valleys += 1
elif step_ud == "D":
sea_level -= 1
elif sea_level == 0:
valleys += 1
print(valleys)
</code></pre>
<p>When I run the code, however I receive no output. My expected output was 1, knowing that Gary only walked through 1 valley.</p>
<p>The term valley was defined as:
"A valley is a non-empty sequence of consecutive steps below sea level, starting with a step down from sea level and ending with a step up to sea level."</p>
<p>The question was written as:
"Given Gary's sequence of up and down steps during his last hike, find and print the number of valleys he walked through."</p>
<p>I have taken a look at these 3 posts:</p>
<p><a href="http://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print">How to flush output of Python print?</a></p>
<p><a href="http://stackoverflow.com/questions/25368786/python-print-does-not-work-in-loop">python `print` does not work in loop</a></p>
<p>I have also tried some other methods, but they haven't helped. These are things I've tried.</p>
<p>I imported the sys module and used the sys.stdout.flush() function to flush the stdout.</p>
<pre><code>import sys
...
#Loop with lines to determine whether it's a valley.
...
print(valleys)
sys.stdout.flush()
</code></pre>
<p>I've also tried making my own function of flushing the stdout, but that didn't work either.</p>
<pre><code>def my_print(text):
sys.stdout.write(str(text))
sys.stdout.flush()
</code></pre>
<p>Then I used the function after printing to flush.</p>
<pre><code>import sys
...
#Loop with lines to determine whether it's a valley.
...
print(valleys)
my_print(text)
</code></pre>
<p>Currently I'm pretty lost in knowing what I have to fix. Thanks for the help.</p>
| 1 | 2016-07-30T16:14:43Z | 38,676,213 | <p>Let's try some debugging here:<br>
I know it's a bad idea when you are working on big solutions, but when you <br>run this code you will see that a "D" step taken at sea level never reaches the
<code>elif step_ud == "D"</code> branch, because the earlier <code>elif sea_level == 0</code> check matches first and the loop continues. </p>
<pre><code>steps = ["U", "D", "D", "D", "D", "D", "U", "U"]
sea_level = 0
valleys = 0
#For loop to calculate how many valleys Gary hiked through
for step in steps:
step_ud = step
if step_ud == "U":
print "U"
sea_level += 1
elif sea_level == 0:
print "0 sea"
valleys += 1
elif step_ud == "D":
print "D"
sea_level -= 1
elif sea_level == 0:
print "0 sea 2"
valleys += 1
print sea_level
</code></pre>
<p>Here is my solution which I submitted during the contest:</p>
<pre><code>n = int(raw_input())
stra = raw_input()
lev = 0
arr = []
valleys = 0
for i in stra:
if i=='U':
lev +=1
arr.append(lev)
elif i=='D':
lev -=1
arr.append(lev)
#print arr
for i in range(len(arr)):
if arr[i]==0 and arr[i-1]<0:
valleys +=1
print valleys
</code></pre>
<p><br>
My idea behind this program:</p>
<p><br><strong>Output</strong></p>
<pre><code>C:\Users\bhansa\Desktop\Stack>python valley.py
8
DDUUDDUDUUUD
[-1, -2, -1, 0, -1, -2, -1, -2, -1, 0, 1, 0]
2
</code></pre>
<p>See the list above: if there is a <code>0</code>, that means Gary is at sea level again; <br>if the value just before that <code>0</code> is negative, then it is certain he <br>came up out of a valley.</p>
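<p>For reference, the same sea-level idea also works in a single pass, without building the intermediate list; this is just a sketch, not the contest submission:</p>

```python
def count_valleys(steps):
    level = 0
    valleys = 0
    for step in steps:
        if step == "U":
            level += 1
            if level == 0:  # just stepped back up to sea level: a valley ended
                valleys += 1
        else:
            level -= 1
    return valleys

print(count_valleys("DDUUDDUDUUUD"))  # 2, matching the output above
```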
<p>I hope it helped.</p>
| 2 | 2016-07-30T16:41:05Z | [
"python",
"printing",
"output"
] |
Loop needs to reference list returned by terminal in notebook | 38,676,041 | <p>I have a folder structure with 3 folder X,Y and Z. Each of these folders contains 3 files X1.csv, X2.csv, X3.csv, Y1.csv, Y2.csv, Y3.csv, Z1.csv, Z2.csv, Z3.csv.</p>
<p>In my jupyter notebook I can do the following:</p>
<pre><code>folders = ['X','Y','Z']
for f in folders:
a = !ls data/X
print(a)
</code></pre>
<p>This returns the following:</p>
<pre><code>['X1.csv', 'X2.csv', 'X3.csv']
['X1.csv', 'X2.csv', 'X3.csv']
['X1.csv', 'X2.csv', 'X3.csv']
</code></pre>
<p>I want to change <code>!ls data/X</code> so that it uses <code>f</code> instead of the hard-coded <code>X</code> but if I change data/X to a string 'data/' and concatenate f to the end of it then I does not return anything.</p>
<hr>
<p>note </p>
<p>I could import os and then use listdir but I'm wondering if the above is possible.</p>
| 2 | 2016-07-30T16:23:31Z | 38,676,408 | <p>Simply do not use magical notebook commands to do something that <a href="http://www.tutorialspoint.com/python/os_listdir.htm" rel="nofollow">can be easily done with pure python</a>. Magic notebook commands are supposed to be used for quick hacks and supporting non-python functionality, not to be actually placed as a part of your code. </p>
<p>However, if for some odd reason you really <strong>have to</strong>, you need to reference your variables in the bash-like way (<code>$variable</code> instead of <code>variable</code>):</p>
<pre><code>folders = ['X','Y','Z']
for f in folders:
a = !ls data/$f
print(a)
</code></pre>
| 3 | 2016-07-30T17:01:55Z | [
"python",
"ipython"
] |
List and maximum numbers | 38,676,297 | <p>Here's the question:
Using a <strong>for loop</strong>, write a <strong>function</strong> called <strong><em>getMax4</em></strong> that takes in a list of numbers. It determines and <strong>returns</strong> the <strong>maximum</strong> number that is <strong>divisible by 4.</strong>
The function returns -999 if the argument is an empty list. The function returns 0 if no number in the argument list is divisible by 4.
The following shows sample outputs when the function is called in the python shell:</p>
<p><img src="http://i.stack.imgur.com/9rt4g.png" alt="This is the sample output"></p>
<p>My code:</p>
<pre><code># What im trying to do is e.g. let's say:
List=[1,2,3]
maximum=List[0]
for num in List:
if num > maximum:
maximum = num
print(maximum)
</code></pre>
<p>by doing the for loop, it first compares with List[0] which is 1, with the "1" in the list. After comparing 1 with 1, there is no differnce, so the max is still 1. Now it moves to the second iteration, maximum=List[0] (which is 1 in the list), now compares with 2 in the list. Now 2 is higher than the maximum, so it updates the new maximum as 2. (sorry for the bad english)
So the problem is that when i try to do it with empty set, it gives me index out of range.</p>
<p>Another problem is that when i input the values given in the sample output, all i get is 0. </p>
<pre><code>List=[]
def getMax4 (List):
highest=List[0]
for num in List:
if num % 4 == 0:
if num > highest:
highest = num
return highest
elif num == [] :
return -999
else:
return 0
</code></pre>
| -5 | 2016-07-30T16:49:05Z | 38,680,662 | <p>Your first problem is at <code>highest = List[0]</code>. Since <code>List</code> is not guaranteed to have at least one item, you will have an error if <code>List</code> is empty. You should add a check:</p>
<pre><code>def getMax4(List):
if not List: # or if len(List) == 0:
return -999
...
</code></pre>
<p>Another problem is that you define <code>highest</code> as the first item. Why? The first item could possibly be the highest only if it is divisible by four. You should set it to <code>0</code> so that even if there are no numbers divisible by four, it will still have the default of <code>0</code> that is required in the examples.</p>
<p>The next problem is that you are returning inside the loop. Wait until the loop is done before you try to return.</p>
<p>Once we fix all those, the problem is that <code>-4</code> is better than <code>0</code> even though it is lower. A bare <code>not highest</code> check is not quite enough, though: once a genuine <code>0</code> has been recorded, <code>not highest</code> becomes true again, and a later negative multiple of four would overwrite it. Tracking whether any multiple of four has been seen avoids that. The new code looks like this:</p>
<pre><code>def getMax4(List):
    if not List:
        return -999
    highest = 0
    found = False  # becomes True once any multiple of 4 is seen
    for num in List:
        if num % 4 == 0 and (not found or num > highest):
            highest = num
            found = True
    return highest
</code></pre>
| 0 | 2016-07-31T04:05:49Z | [
"python",
"python-3.x"
] |
List and maximum numbers | 38,676,297 | <p>Here's the question:
Using a <strong>for loop</strong>, write a <strong>function</strong> called <strong><em>getMax4</em></strong> that takes in a list of numbers. It determines and <strong>returns</strong> the <strong>maximum</strong> number that is <strong>divisible by 4.</strong>
The function returns -999 if the argument is an empty list. The function returns 0 if no number in the argument list is divisible by 4.
The following shows sample outputs when the function is called in the python shell:</p>
<p><img src="http://i.stack.imgur.com/9rt4g.png" alt="This is the sample output"></p>
<p>My code:</p>
<pre><code># What im trying to do is e.g. let's say:
List=[1,2,3]
maximum=List[0]
for num in List:
if num > maximum:
maximum = num
print(maximum)
</code></pre>
<p>by doing the for loop, it first compares with List[0] which is 1, with the "1" in the list. After comparing 1 with 1, there is no differnce, so the max is still 1. Now it moves to the second iteration, maximum=List[0] (which is 1 in the list), now compares with 2 in the list. Now 2 is higher than the maximum, so it updates the new maximum as 2. (sorry for the bad english)
So the problem is that when i try to do it with empty set, it gives me index out of range.</p>
<p>Another problem is that when i input the values given in the sample output, all i get is 0. </p>
<pre><code>List=[]
def getMax4 (List):
highest=List[0]
for num in List:
if num % 4 == 0:
if num > highest:
highest = num
return highest
elif num == [] :
return -999
else:
return 0
</code></pre>
| -5 | 2016-07-30T16:49:05Z | 38,681,901 | <p>In regards to your 2nd piece of code you need to be mindful of where you place your returns. A return statement will exit the function with the value. Therefore these checks should be done before and after the loop as your function requires it to go through the entire list.</p>
<p>I'm not too fond of setting your initial value to something that could be a valid output. Therefore here is my code where I use <code>None</code> for the case of no current max. However it requires an extra check at the end since 0 is expected.</p>
<pre><code>def getMax4(lst):
if not lst:
return -999
highest = None
for num in lst:
if num % 4 == 0 and (highest is None or num > highest):
highest = num
return 0 if highest is None else highest
>>> getMax4([])
-999
>>> getMax4([1, 3, 9])
0
>>> getMax4([-4, 3, -12, -8, 13])
-4
>>> getMax4([1, 16, 18, 12])
16
>>> getMax4([-4, 0, -8])
0
</code></pre>
<p>In the last example 0 is the max value, if 0 should not be counted as a valid maximum divisible by 4 then an extra condition is required for the if statement.</p>
| 0 | 2016-07-31T07:49:36Z | [
"python",
"python-3.x"
] |
Is it possible to get monthly historical stock prices in python? | 38,676,323 | <p>I know using pandas this is how you normally get daily stock price quotes. But I'm wondering if its possible to get monthly or weekly quotes, is there maybe a parameter I can pass through to get monthly quotes?</p>
<pre><code> from pandas.io.data import DataReader
from datetime import datetime
ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2012,1,1))
print(ibm['Adj Close'])
</code></pre>
| -1 | 2016-07-30T16:51:49Z | 38,676,389 | <p>try this:</p>
<pre><code>In [175]: from pandas_datareader.data import DataReader
In [176]: ibm = DataReader('IBM', 'yahoo', '2001-01-01', '2012-01-01')
</code></pre>
<p><strong>UPDATE:</strong> show average for <code>Adj Close</code> only (month start)</p>
<pre><code>In [12]: ibm.groupby(pd.TimeGrouper(freq='MS'))['Adj Close'].mean()
Out[12]:
Date
2001-01-01 79.430605
2001-02-01 86.625519
2001-03-01 75.938913
2001-04-01 81.134375
2001-05-01 90.460754
2001-06-01 89.705042
2001-07-01 83.350254
2001-08-01 82.100543
2001-09-01 74.335789
2001-10-01 79.937451
...
2011-03-01 141.628553
2011-04-01 146.530774
2011-05-01 150.298053
2011-06-01 146.844772
2011-07-01 158.716834
2011-08-01 150.690990
2011-09-01 151.627555
2011-10-01 162.365699
2011-11-01 164.596963
2011-12-01 167.924676
Freq: MS, Name: Adj Close, dtype: float64
</code></pre>
<p>show average for Adj Close only (month end)</p>
<pre><code>In [13]: ibm.groupby(pd.TimeGrouper(freq='M'))['Adj Close'].mean()
Out[13]:
Date
2001-01-31 79.430605
2001-02-28 86.625519
2001-03-31 75.938913
2001-04-30 81.134375
2001-05-31 90.460754
2001-06-30 89.705042
2001-07-31 83.350254
2001-08-31 82.100543
2001-09-30 74.335789
2001-10-31 79.937451
...
2011-03-31 141.628553
2011-04-30 146.530774
2011-05-31 150.298053
2011-06-30 146.844772
2011-07-31 158.716834
2011-08-31 150.690990
2011-09-30 151.627555
2011-10-31 162.365699
2011-11-30 164.596963
2011-12-31 167.924676
Freq: M, Name: Adj Close, dtype: float64
</code></pre>
<p>monthly averages (all columns):</p>
<pre><code>In [179]: ibm.groupby(pd.TimeGrouper(freq='M')).mean()
Out[179]:
Open High Low Close Volume Adj Close
Date
2001-01-31 100.767857 103.553571 99.428333 101.870357 9474409 79.430605
2001-02-28 111.193160 113.304210 108.967368 110.998422 8233626 86.625519
2001-03-31 97.366364 99.423637 95.252272 97.281364 11570454 75.938913
2001-04-30 103.990500 106.112500 102.229501 103.936999 11310545 81.134375
2001-05-31 115.781363 117.104091 114.349091 115.776364 7243463 90.460754
2001-06-30 114.689524 116.199048 113.739523 114.777618 6806176 89.705042
2001-07-31 106.717143 108.028095 105.332857 106.646666 7667447 83.350254
2001-08-31 105.093912 106.196521 103.856522 104.939999 6234847 82.100543
2001-09-30 95.138667 96.740000 93.471334 94.987333 12620833 74.335789
2001-10-31 101.400870 103.140000 100.327827 102.145217 9754413 79.937451
2001-11-30 113.449047 114.875715 112.510952 113.938095 6435061 89.256046
2001-12-31 120.651001 122.076000 119.790500 121.087999 6669690 94.878736
2002-01-31 116.483334 117.509524 114.613334 115.994762 9217280 90.887920
2002-02-28 103.194210 104.389474 101.646316 102.961579 9069526 80.764672
2002-03-31 105.246500 106.764499 104.312999 105.478499 7563425 82.756873
... ... ... ... ... ... ...
2010-10-31 138.956188 140.259048 138.427142 139.631905 6537366 122.241844
2010-11-30 144.281429 145.164762 143.385241 144.439524 4956985 126.878319
2010-12-31 145.155909 145.959545 144.567273 145.251819 4245127 127.726929
2011-01-31 152.595000 153.950499 151.861000 153.181501 5941580 134.699880
2011-02-28 163.217895 164.089474 162.510002 163.339473 4687763 144.050847
2011-03-31 160.433912 161.745652 159.154349 160.425651 5639752 141.628553
2011-04-30 165.437501 166.587500 164.760500 165.978500 5038475 146.530774
2011-05-31 169.657144 170.679046 168.442858 169.632857 5276390 150.298053
2011-06-30 165.450455 166.559093 164.691819 165.593635 4792836 146.844772
2011-07-31 178.124998 179.866502 177.574998 178.981500 5679660 158.716834
2011-08-31 169.734350 171.690435 166.749567 169.360434 8480613 150.690990
2011-09-30 169.752858 172.034761 168.109999 170.245714 6566428 151.627555
2011-10-31 181.529525 183.597145 180.172379 182.302381 6883985 162.365699
2011-11-30 184.536668 185.950952 182.780477 184.244287 4619719 164.596963
2011-12-31 188.151428 189.373809 186.421905 187.789047 4925547 167.924676
[132 rows x 6 columns]
</code></pre>
<p>weekly averages (all columns):</p>
<pre><code>In [180]: ibm.groupby(pd.TimeGrouper(freq='W')).mean()
Out[180]:
Open High Low Close Volume Adj Close
Date
2001-01-07 89.234375 94.234375 87.890625 91.656250 11060200 71.466436
2001-01-14 93.412500 95.062500 91.662500 93.412500 7470200 72.835824
2001-01-21 100.250000 103.921875 99.218750 102.250000 13851500 79.726621
2001-01-28 109.575000 111.537500 108.675000 110.600000 8056720 86.237303
2001-02-04 113.680000 115.465999 111.734000 113.582001 6538080 88.562436
2001-02-11 113.194002 115.815999 111.639999 113.884001 7269320 88.858876
2001-02-18 113.960002 116.731999 113.238000 115.106000 7225420 89.853021
2001-02-25 109.525002 111.375000 105.424999 107.977501 10722700 84.288436
2001-03-04 103.390001 106.052002 100.386000 103.228001 11982540 80.580924
2001-03-11 105.735999 106.920000 103.364002 104.844002 9226900 81.842391
2001-03-18 95.660001 97.502002 93.185997 94.899998 13863740 74.079992
2001-03-25 90.734000 92.484000 88.598000 90.518001 11382280 70.659356
2001-04-01 95.622000 97.748000 94.274000 96.106001 10467580 75.021411
2001-04-08 95.259999 97.360001 93.132001 94.642000 12312580 73.878595
2001-04-15 98.350000 99.520000 95.327502 97.170000 10218625 75.851980
... ... ... ... ... ... ...
2011-09-25 170.678003 173.695996 169.401996 171.766000 6358100 152.981582
2011-10-02 176.290002 178.850000 174.729999 176.762000 7373680 157.431216
2011-10-09 175.920001 179.200003 174.379999 177.792001 7623560 158.348576
2011-10-16 185.366000 187.732001 184.977997 187.017999 5244180 166.565614
2011-10-23 180.926001 182.052002 178.815997 180.351999 9359200 160.628611
2011-10-30 183.094003 184.742001 181.623996 183.582001 5743800 163.505379
2011-11-06 184.508002 186.067999 183.432004 184.716003 4583780 164.515366
2011-11-13 185.350000 186.690002 183.685999 185.508005 4180620 165.750791
2011-11-20 187.600003 189.101999 185.368002 186.738000 5104420 166.984809
2011-11-27 181.067497 181.997501 178.717499 179.449997 4089350 160.467733
2011-12-04 185.246002 187.182001 184.388000 186.052002 5168720 166.371376
2011-12-11 191.841998 194.141998 191.090002 192.794000 4828580 172.400204
2011-12-18 191.085999 191.537998 187.732001 188.619998 6037220 168.667729
2011-12-25 183.810001 184.634003 181.787997 183.678000 5433360 164.248496
2012-01-01 185.140003 185.989998 183.897499 184.750000 3029925 165.207100
[574 rows x 6 columns]
</code></pre>
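<p><code>pd.TimeGrouper</code> was later folded into <code>pd.Grouper</code> and <code>resample</code>; here is a version-agnostic sketch of the same monthly/weekly averaging on a small synthetic series (the prices are made up for the demo):</p>

```python
import numpy as np
import pandas as pd

# synthetic daily "Adj Close" series standing in for the Yahoo! data
idx = pd.date_range('2001-01-01', periods=60, freq='D')
prices = pd.Series(np.arange(60, dtype=float), index=idx, name='Adj Close')

# monthly means, grouped by (year, month) so it works across pandas versions
monthly = prices.groupby([prices.index.year, prices.index.month]).mean()
# weekly means via resample
weekly = prices.resample('W').mean()
print(monthly)
print(weekly.head())
```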
| 0 | 2016-07-30T16:58:49Z | [
"python",
"pandas",
"quantitative-finance"
] |
Is it possible to get monthly historical stock prices in python? | 38,676,323 | <p>I know using pandas this is how you normally get daily stock price quotes. But I'm wondering if its possible to get monthly or weekly quotes, is there maybe a parameter I can pass through to get monthly quotes?</p>
<pre><code> from pandas.io.data import DataReader
from datetime import datetime
ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2012,1,1))
print(ibm['Adj Close'])
</code></pre>
| -1 | 2016-07-30T16:51:49Z | 38,773,260 | <p>Get it from Quandl:</p>
<pre><code>import pandas as pd
import quandl
quandl.ApiConfig.api_key = 'xxxxxxxxxxxx' # Optional
quandl.ApiConfig.api_version = '2015-04-09' # Optional
ibm = quandl.get("WIKI/IBM", start_date="2000-01-01", end_date="2012-01-01", collapse="monthly", returns="pandas")
</code></pre>
| 0 | 2016-08-04T16:55:11Z | [
"python",
"pandas",
"quantitative-finance"
] |
Is it possible to get monthly historical stock prices in python? | 38,676,323 | <p>I know using pandas this is how you normally get daily stock price quotes. But I'm wondering if its possible to get monthly or weekly quotes, is there maybe a parameter I can pass through to get monthly quotes?</p>
<pre><code> from pandas.io.data import DataReader
from datetime import datetime
ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2012,1,1))
print(ibm['Adj Close'])
</code></pre>
| -1 | 2016-07-30T16:51:49Z | 38,833,988 | <p>Monthly closing prices from Yahoo! Finance... </p>
<pre><code>import pandas_datareader.data as web
data = web.get_data_yahoo('IBM','01/01/2015',interval='m')
</code></pre>
<p>where you can replace the interval input as required ('d', 'w', 'm', etc).</p>
| 1 | 2016-08-08T16:12:34Z | [
"python",
"pandas",
"quantitative-finance"
] |
Beautiful Soup is not selecting any element | 38,676,348 | <p>This is the code I am using to iterate over all elements:</p>
<pre><code>soup_top = bs4.BeautifulSoup(r_top.text, 'html.parser')
selector = '#ContentPlaceHolder1_gvDisplay table tr td:nth-of-type(3) a'
for link in soup_top.select(selector):
print(link)
</code></pre>
<p>The same selector gives a length of 57 when used in JavaScript:</p>
<pre><code>document.querySelectorAll("#ContentPlaceHolder1_gvDisplay table tr td:nth-of-type(3) a").length;
</code></pre>
<p>I thought that maybe I am not getting the contents of the webpage correctly. I then saved a local copy of the webpage but the selector in Beautiful Soup still did not select anything. What is going on here?</p>
<p>This is the <a href="http://www.swapnilpatni.com/law_charts_final.php" rel="nofollow">website</a> I am using the code on.</p>
| 0 | 2016-07-30T16:54:06Z | 38,683,486 | <p>It seems that this is due to the <a href="http://beautiful-soup-4.readthedocs.io/en/latest/#installing-a-parser" rel="nofollow">parser</a> you used (i.e. <code>html.parser</code>). If I try the same thing with <code>lxml</code> as parser:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = 'http://www.swapnilpatni.com/law_charts_final.php'
r = requests.get(url)
r.raise_for_status()
soup = BeautifulSoup(r.text, 'lxml')
css_select = '#ContentPlaceHolder1_gvDisplay table tr td:nth-of-type(3) a'
links = soup.select(css_select)
print('{} link(s) found'.format(len(links)))
>> 1 link(s) found
for link in links:
print(link['href'])
>> spadmin/doc/Company Law amendment 1.1.png
</code></pre>
<p>The <code>html.parser</code> will return a result up until <code>#ContentPlaceHolder1_gvDisplay table tr</code>, and even then it only returns the first <code>tr</code>.</p>
<p>When running the url through <a href="http://validator.w3.org" rel="nofollow">W3 Markup Validation Service</a>, this is the error that is returned:</p>
<blockquote>
<p>Sorry, I am unable to validate this document because on line 1212 it contained one or more bytes that I cannot interpret as utf-8 (in other words, the bytes found are not valid values in the specified Character Encoding). Please check both the content of the file and the character encoding indication.
The error was: utf8 "\xA0" does not map to Unicode</p>
</blockquote>
<p>It's likely that the <code>html.parser</code> chokes on this as well, while <code>lxml</code> is more fault-tolerant.</p>
| 0 | 2016-07-31T11:27:42Z | [
"python",
"python-3.x",
"web-scraping",
"python-3.5",
"bs4"
] |
highcharts not showing on heroku using python flask app | 38,676,388 | <p>i have created an html which visualizes some data using highcharts. When using this html on <strong>localhost</strong> i can successfully see my charts. But when i use it on heroku i do not get my charts. Any ideas?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<base href="https://www.highcharts.com" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<script src="/lib/jquery-1.7.2.js" type="text/javascript"></script>
<script type="text/javascript">
</head>
<body >
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/modules/exporting.js"></script>
<script src="https://code.highcharts.com/modules/data.js"></script>
<script src="https://code.highcharts.com/modules/drilldown.js"></script>
<!--<div id="container" style="min-width: 310px; height: 0 auto; max-width: 600px; margin: 0 auto"></div>-->
<!--<div id="container2" style="min-width: 310px; height: 0 auto; max-width: 600px; margin: 0 auto"></div>-->
<div id="container6" class="text">
<p>info:about,category,location,website,founded</p>
</div>
<div id="container" class="chart">
<p></p>
</div>
<div id="container2" class="chart">
<p></p>
</div>
<div id="container3" class="chart">
<p></p>
</div>
<div id="container4" class="chart">
<p></p>
</div>
<div id="container5" class="chart">
</div>
<div id="container7" class="chart">
<p>post message,video,photo etc.</p>
</div>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>I tried several solutions, like copying the modules locally or imposing https: instead of http: on the links.
I suppose that the issue has to do with loading the Highcharts .js files, but I cannot figure out why.</p>
| 0 | 2016-07-30T16:58:48Z | 38,697,898 | <p>A few things that I noticed and corrected:</p>
<ul>
<li>Your code snippet did not have a starting <code><head></code> tag.</li>
<li>You had an unclosed instance of <code><script type="text/javascript"></code> right before your <code></head></code> tag. This was causing an <code>Uncaught SyntaxError: Unexpected token <</code> error.</li>
<li>I moved all of your script calls in between the <code><head></code> tags and gave the jQuery library an absolute URL (in order to get this to work in the snippet).</li>
</ul>
<p>When you run the code snippet now, you'll see the expected text in the <code><p></code> tags. I don't see a chart, but I also don't see the code with the options to create them.</p>
<p>An edited version of your code snippet is below.</p>
<p>I hope this is helpful for you.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<base href="https://www.highcharts.com" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<!-- <script src="/lib/jquery-1.7.2.js" type="text/javascript"></script> -->
<script src="https://code.jquery.com/jquery-1.7.2.js" type="text/javascript"></script>
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/modules/exporting.js"></script>
<script src="https://code.highcharts.com/modules/data.js"></script>
<script src="https://code.highcharts.com/modules/drilldown.js"></script>
</head>
<body >
<!--<div id="container" style="min-width: 310px; height: 0 auto; max-width: 600px; margin: 0 auto"></div>-->
<!--<div id="container2" style="min-width: 310px; height: 0 auto; max-width: 600px; margin: 0 auto"></div>-->
<div id="container6" class="text">
<p>info:about,category,location,website,founded</p>
</div>
<div id="container" class="chart">
<p></p>
</div>
<div id="container2" class="chart">
<p></p>
</div>
<div id="container3" class="chart">
<p></p>
</div>
<div id="container4" class="chart">
<p></p>
</div>
<div id="container5" class="chart">
</div>
<div id="container7" class="chart">
<p>post message,video,photo etc.</p>
</div>
</body>
</html></code></pre>
</div>
</div>
</p>
| 1 | 2016-08-01T11:38:49Z | [
"javascript",
"python",
"heroku",
"highcharts",
"flask"
] |
Pandas DatetimeIndex indexing dtype: datetime64 vs Timestamp | 38,676,418 | <p>Indexing a pandas DatetimeIndex (with dtype numpy datetime64[ns]) returns either:</p>
<ul>
<li>another DatetimeIndex for multiple indices</li>
<li>a pandas Timestamp for single index</li>
</ul>
<p>The confusing part is that Timestamps do not equal np.datetime64, so that:</p>
<pre><code>import numpy as np
import pandas as pd
a_datetimeindex = pd.date_range('1/1/2016', '1/2/2016', freq = 'D')
print np.in1d(a_datetimeindex[0], a_datetimeindex)
</code></pre>
<p>Returns false. But:</p>
<pre><code>print np.in1d(a_datetimeindex[0:1], a_datetimeindex)
print np.in1d(np.datetime64(a_datetimeindex[0]), a_datetimeindex)
</code></pre>
<p>Returns the right results.</p>
<p>I guess that is because np.datetime64[ns] has accuracy to the nanosecond, but the Timestamp is truncated?</p>
<p>My question is, is there a way to create the DatetimeIndex so that it always indexes to the same (or comparable) data type?</p>
| 1 | 2016-07-30T17:02:56Z | 38,679,935 | <p>You are using numpy functions to manipulate pandas types. They are not always compatible. </p>
<p>The function <code>np.in1d</code> first converts its both arguments to ndarrays. A <code>DatetimeIndex</code> has a built-in conversion and an array of dtype <code>np.datetime64</code> is returned (it's <code>DatetimIndex.values</code>). But a <code>Timestamp</code> doesn't have such a facility and it's not converted.</p>
<p>Instead, you can use for example a python keyword <code>in</code> (the most natural way):</p>
<pre><code>a_datetimeindex[0] in a_datetimeindex
</code></pre>
<p>or an <code>Index.isin</code> method for a collection of elements</p>
<pre><code>a_datetimeindex.isin(a_list_or_index)
</code></pre>
<p>If you want to use <code>np.in1d</code>, explicitly convert both arguments to numpy types. Or call it on the underlying numpy arrays:</p>
<pre><code>np.in1d(a_datetimeindex.values[0], a_datetimeindex.values)
</code></pre>
<p>Alternatively, it's probably safe to use <code>np.in1d</code> with two collections of the same type:</p>
<pre><code>np.in1d(a_datetimeindex, another_datetimeindex)
</code></pre>
<p>or even</p>
<pre><code>np.in1d(a_datetimeindex[[0]], a_datetimeindex)
</code></pre>
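<p>Putting the options above together, here is a minimal runnable comparison of the membership checks on the question's index (note: <code>np.isin</code> is the newer, recommended spelling of <code>np.in1d</code> in modern NumPy):</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range('1/1/2016', '1/2/2016', freq='D')

# `in` and Index.isin handle the Timestamp <-> datetime64 conversion for you
print(idx[0] in idx)                       # True
print(idx.isin([idx[0]]))                  # [ True False]

# np.isin works once both sides are plain numpy arrays (DatetimeIndex.values)
print(np.isin(idx.values[0], idx.values))  # True
```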
| 1 | 2016-07-31T01:19:50Z | [
"python",
"pandas",
"datetimeindex"
] |
A transition from CountVectorizer to TfidfTransformer in sklearn | 38,676,436 | <p>I am processing a huge amount of text data in sklearn. First I need to vectorize the text context (word counts) and then perform a TfidfTransformer. I have the following code that doesn't seem to take the output from CountVectorizer to the input of TfidfTransformer. </p>
<pre><code>TEXT = [data[i].values()[3] for i in range(len(data))]
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
vectorizer = CountVectorizer(min_df=0.01,max_df = 2.5, lowercase = False, stop_words = 'english')
X = vectorizer(TEXT)
transformer = TfidfTransformer(X)
X = transformer.fit_transform()
</code></pre>
<p>As I run this code, I obtain this error:</p>
<pre><code>Traceback (most recent call last):
File "nlpQ2.py", line 27, in <module>
X = vectorizer(TEXT)
TypeError: 'CountVectorizer' object is not callable
</code></pre>
<p>I thought I had vectorized the text and now it's in a matrix -- is there a transition step that I have missed? Thank you!!</p>
| 2 | 2016-07-30T17:04:36Z | 38,676,483 | <p>You're probably looking for a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow"><em>pipeline</em></a>, perhaps something like this:</p>
<pre><code>pipeline = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
])
</code></pre>
<p>or </p>
<pre><code>pipeline = make_pipeline(CountVectorizer(), TfidfTransformer())
</code></pre>
<p>On this pipeline, perform the regular operations (e.g., <code>fit</code>, <code>fit_transform</code>, and so forth).</p>
<p>See <a href="http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_text_feature_extraction.html#example-model-selection-grid-search-text-feature-extraction-py" rel="nofollow">this example</a> also.</p>
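<p>A quick usage sketch of the pipeline on a toy corpus (the documents here are made up purely for illustration):</p>

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ["the cat sat", "the dog barked", "the cat and the dog"]

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
])

# fit_transform runs both steps in order: word counts first, then tf-idf weighting
X = pipeline.fit_transform(docs)
print(X.shape)  # (3, 6) -- 3 documents, 6 vocabulary terms
```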
| 1 | 2016-07-30T17:10:09Z | [
"python",
"scikit-learn",
"vectorization",
"tf-idf"
] |
A transition from CountVectorizer to TfidfTransformer in sklearn | 38,676,436 | <p>I am processing a huge amount of text data in sklearn. First I need to vectorize the text context (word counts) and then perform a TfidfTransformer. I have the following code that doesn't seem to take the output from CountVectorizer to the input of TfidfTransformer. </p>
<pre><code>TEXT = [data[i].values()[3] for i in range(len(data))]
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
vectorizer = CountVectorizer(min_df=0.01,max_df = 2.5, lowercase = False, stop_words = 'english')
X = vectorizer(TEXT)
transformer = TfidfTransformer(X)
X = transformer.fit_transform()
</code></pre>
<p>As I run this code, I obtain this error:</p>
<pre><code>Traceback (most recent call last):
File "nlpQ2.py", line 27, in <module>
X = vectorizer(TEXT)
TypeError: 'CountVectorizer' object is not callable
</code></pre>
<p>I thought I had vectorized the text and now it's in a matrix -- is there a transition step that I have missed? Thank you!!</p>
| 2 | 2016-07-30T17:04:36Z | 38,676,485 | <p>This line </p>
<pre><code>X = vectorizer(TEXT)
</code></pre>
<p>does not produce the vectorizer's output; it is the line raising the exception, and it has nothing to do with tf-idf itself. You are supposed to call <code>fit_transform</code>. Furthermore, your next call is also wrong: you have to pass the data as an argument to <code>fit_transform</code>, not to the constructor.</p>
<pre><code>X = vectorizer.fit_transform(TEXT)
transformer = TfidfTransformer()
X = transformer.fit_transform(X)
</code></pre>
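<p>With those fixes the flow runs end to end. Note also that scikit-learn ships <code>TfidfVectorizer</code>, which combines both steps with default-equivalent behavior; a small sketch on a made-up corpus:</p>

```python
from sklearn.feature_extraction.text import (
    CountVectorizer, TfidfTransformer, TfidfVectorizer,
)

docs = ["the cat sat", "the dog barked"]  # toy corpus for illustration

# Two-step version, as corrected above
counts = CountVectorizer().fit_transform(docs)
tfidf = TfidfTransformer().fit_transform(counts)

# One-step equivalent
tfidf2 = TfidfVectorizer().fit_transform(docs)

print(tfidf.shape == tfidf2.shape)  # True
```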
| 1 | 2016-07-30T17:10:37Z | [
"python",
"scikit-learn",
"vectorization",
"tf-idf"
] |
Python how to make class object shared across functions | 38,676,443 | <p>Let's say I created a class object someClassObject in function A, and threw that object into a function B:</p>
<pre><code>functionB(someClassObject)
</code></pre>
<p>How do I retain all the modifications I've made in function B to someClassObject so I can continue using someClassObject in function A if my function B cannot return anything?</p>
<p>My function B is a recursive function and I can't think of any way to have it return my someClassObject</p>
| 0 | 2016-07-30T17:05:15Z | 38,676,488 | <p>This happens by default: Python passes object references into functions, so attribute changes made inside a function are visible to the caller:</p>
<pre><code>Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... def __init__(self, x):
... self.x = x
...
>>> def modA(objA):
... objA.x = 9
...
>>> def createA():
... a = A(2)
... print(a.x)
... modA(a)
... print(a.x)
...
>>> createA
<function createA at 0x7ff7acee2a60>
>>> createA()
2
9
</code></pre>
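<p>The one caveat worth knowing: <em>mutating</em> the object's attributes is shared, but <em>rebinding</em> the parameter name inside the function is not. A small sketch (the names here are made up for illustration):</p>

```python
class Box:
    def __init__(self, value):
        self.value = value

def mutate(box):
    box.value = 9      # changes the shared object: the caller sees this

def rebind(box):
    box = Box(100)     # rebinds the local name only: the caller is unaffected

b = Box(2)
mutate(b)
print(b.value)  # 9
rebind(b)
print(b.value)  # still 9
```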
| 0 | 2016-07-30T17:10:43Z | [
"python",
"oop"
] |
Python how to make class object shared across functions | 38,676,443 | <p>Let's say I created a class object someClassObject in function A, and threw that object into a function B:</p>
<pre><code>functionB(someClassObject)
</code></pre>
<p>How do I retain all the modifications I've made in function B to someClassObject so I can continue using someClassObject in function A if my function B cannot return anything?</p>
<p>My function B is a recursive function and I can't think of any way to have it return my someClassObject</p>
| 0 | 2016-07-30T17:05:15Z | 38,676,517 | <p>If you pass an object around (a class object, or generally any object), it remains one single object, so all changes made to it are visible through every reference to it.</p>
<p>for example in</p>
<pre><code>def B(cls):
cls.value=2
def A():
class C(object):
value=1
B(C)
return C.value
</code></pre>
<p><code>A()</code> returns 2.</p>
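<p>The same principle answers the asker's recursion concern: each recursive call receives a reference to the same object, so results can be accumulated in place without ever returning anything. A hypothetical sketch (the class and function names are made up):</p>

```python
class Collector:
    def __init__(self):
        self.seen = []

def walk(node, collector):
    """Recursively visit a nested list, recording leaves on the shared object."""
    if isinstance(node, list):
        for child in node:
            walk(child, collector)
    else:
        collector.seen.append(node)  # the mutation survives after every call returns

c = Collector()
walk([1, [2, [3, 4]], 5], c)
print(c.seen)  # [1, 2, 3, 4, 5]
```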
| 0 | 2016-07-30T17:13:37Z | [
"python",
"oop"
] |