title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
google wave OnBlipSubmitted | 1,584,406 | <p>I'm trying to create a Wave robot, and I have the basic stuff working. I'm trying to create a new blip with help text when someone types @help, but for some reason it doesn't create it. I'm getting no errors in the log console, and I'm seeing the info log 'in @help'.</p>
<pre><code>def OnBlipSubmitted(properties, context):
    # Get the blip that was just submitted.
    blip = context.GetBlipById(properties['blipId'])
    text = blip.GetDocument().GetText()
    if text.startswith('@help'):
        logging.info('in @help')
        blip.CreateChild().GetDocument().SetText('help text')
</code></pre>
| 1 | 2009-10-18T08:53:41Z | 1,586,989 | <p>If it just started working, I have two suggestions...</p>
<p>-->Have you been updating the robot version in the constructor? You should bump the value each time you deploy changes so that the caches can be refreshed.</p>
<pre><code>if __name__ == '__main__':
    myRobot = robot.Robot('waverobotdev',
                          image_url=baseurl + 'assets/wave_robot_icon.png',
                          version='61',  # <-------------HERE
                          profile_url=baseurl)
</code></pre>
<p>-->The server connection between Wave and AppSpot has recently been extremely variable. Sometimes it takes 10+ minutes for the AppSpot server to receive my event, other times a few seconds. Verify you're receiving the events you expect.</p>
<p>Edit:
The code you provided looks good, so I wouldn't expect you're doing anything wrong in that respect.</p>
| 1 | 2009-10-19T04:42:34Z | [
"python",
"google-app-engine",
"google-wave"
] |
Return value while using cProfile | 1,584,425 | <p>I'm trying to profile an instance method, so I've done something like:</p>
<pre><code>import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print "Run"

if __name__ == "__main__":
    Test().method()
</code></pre>
<p>But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.</p>
<p>Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)</p>
| 12 | 2009-10-18T09:04:26Z | 1,584,468 | <p>I discovered that you can do this:</p>
<pre><code>prof = cProfile.Profile()
retval = prof.runcall(self.method_actual, *args, **kwargs)
prof.dump_stats(datafn)
</code></pre>
<p>The downside is that it's undocumented.</p>
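For illustration, here is that pattern end to end in modern Python 3; the function name `slow_add` and the in-memory stats buffer are my own stand-ins, not part of the original answer:

```python
import cProfile
import io
import pstats

def slow_add(a, b):
    # stand-in for the real method being profiled
    return a + b

prof = cProfile.Profile()
retval = prof.runcall(slow_add, 2, 3)  # the return value is passed straight through

# prof.dump_stats(datafn) would write to a file; print to a buffer here instead
buf = io.StringIO()
pstats.Stats(prof, stream=buf).print_stats()
```

The key point is that `runcall` both profiles the call and returns whatever the callee returned, so no wrapper or out-parameter is needed.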
| 24 | 2009-10-18T09:39:37Z | [
"python"
] |
Return value while using cProfile | 1,584,425 | <p>I'm trying to profile an instance method, so I've done something like:</p>
<pre><code>import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print "Run"

if __name__ == "__main__":
    Test().method()
</code></pre>
<p>But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.</p>
<p>Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)</p>
| 12 | 2009-10-18T09:04:26Z | 3,840,678 | <p>I was struggling with the same problem and used a wrapper function to get around the missing direct return value. Instead of</p>
<pre><code>cP.runctx("a=foo()", globals(), locals())
</code></pre>
<p>I create a wrapper function</p>
<pre><code>def wrapper(b):
    b.append(foo())
</code></pre>
<p>and profile the call to the wrapper function</p>
<pre><code>b = []
cP.runctx("wrapper(b)", globals(), locals())
a = b[0]
</code></pre>
<p>extracting the result of foo's computation from the out param (b) afterwards.</p>
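A self-contained sketch of this out-parameter trick in Python 3 syntax; `foo` here is a placeholder computation I invented so the snippet can run:

```python
import cProfile

def foo():
    # placeholder for the real computation being profiled
    return sum(range(10))

def wrapper(b):
    b.append(foo())

b = []
cProfile.runctx("wrapper(b)", globals(), locals())  # prints stats to stdout
a = b[0]  # foo's return value, recovered from the out-parameter
```

The stats still go to stdout as with any `runctx` call; only the return value travels through the list.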
| 5 | 2010-10-01T15:34:05Z | [
"python"
] |
Return value while using cProfile | 1,584,425 | <p>I'm trying to profile an instance method, so I've done something like:</p>
<pre><code>import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print "Run"

if __name__ == "__main__":
    Test().method()
</code></pre>
<p>But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.</p>
<p>Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)</p>
| 12 | 2009-10-18T09:04:26Z | 17,259,420 | <p>An option for any arbitrary code:</p>
<pre><code>import cProfile, pstats, sys

pr = cProfile.Profile()
pr.enable()
...
my_return_val = my_func(my_arg)
...
pr.disable()
ps = pstats.Stats(pr, stream=sys.stdout)
ps.print_stats()
</code></pre>
<p>Taken from <a href="https://docs.python.org/2/library/profile.html#profile.Profile">https://docs.python.org/2/library/profile.html#profile.Profile</a></p>
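Filled in with a concrete (invented) function so it can actually run, the pattern looks like this in Python 3, with the stats captured in a buffer rather than `sys.stdout`:

```python
import cProfile
import io
import pstats

def my_func(n):
    # invented workload so the snippet is runnable
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()
my_return_val = my_func(100)   # the profiled call; its return value is kept as usual
pr.disable()

buf = io.StringIO()            # swap in sys.stdout to print directly
ps = pstats.Stats(pr, stream=buf)
ps.print_stats()
```

Because nothing wraps the call itself, any number of return values, exceptions, and local variables behave exactly as they would unprofiled.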
| 10 | 2013-06-23T09:35:45Z | [
"python"
] |
Return value while using cProfile | 1,584,425 | <p>I'm trying to profile an instance method, so I've done something like:</p>
<pre><code>import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print "Run"

if __name__ == "__main__":
    Test().method()
</code></pre>
<p>But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.</p>
<p>Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)</p>
| 12 | 2009-10-18T09:04:26Z | 21,734,916 | <p>I think @detly's <code>.runcall()</code> is basically the best answer, but for completeness, I just wanted to take @ThomasH's answer and make it function independent:</p>
<pre><code>def wrapper(b, f, *myargs, **mykwargs):
    try:
        b.append(f(*myargs, **mykwargs))
    except TypeError:
        print 'bad args passed to func.'

# Example run
def func(a, n):
    return n*a + 1

b = []
cProfile.runctx("wrapper(b, func, 3, n=1)", globals(), locals())
a = b[0]
print 'a, ', a
</code></pre>
| 1 | 2014-02-12T17:16:51Z | [
"python"
] |
Return value while using cProfile | 1,584,425 | <p>I'm trying to profile an instance method, so I've done something like:</p>
<pre><code>import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print "Run"

if __name__ == "__main__":
    Test().method()
</code></pre>
<p>But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.</p>
<p>Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)</p>
| 12 | 2009-10-18T09:04:26Z | 32,715,174 | <p>I created a decorator:</p>
<pre><code>import cProfile
import functools
import pstats

def profile(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            retval = func(*args, **kwargs)
        finally:
            profiler.disable()
            with open('profile.out', 'w') as profile_file:
                stats = pstats.Stats(profiler, stream=profile_file)
                stats.print_stats()
        return retval
    return inner
</code></pre>
<p>Decorate your function or method with it:</p>
<pre><code>@profile
def somefunc(...):
    ...
</code></pre>
<p>Now that function will be profiled.</p>
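To sanity-check that the wrapper hands the return value back, here is the same decorator with one tweak of my own for this demo: the stats file goes to the system temp directory instead of a hard-coded <code>profile.out</code>:

```python
import cProfile
import functools
import os
import pstats
import tempfile

def profile(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            retval = func(*args, **kwargs)
        finally:
            profiler.disable()
            # write the stats somewhere harmless for this demo
            out_path = os.path.join(tempfile.gettempdir(), 'profile.out')
            with open(out_path, 'w') as profile_file:
                stats = pstats.Stats(profiler, stream=profile_file)
                stats.print_stats()
        return retval
    return inner

@profile
def somefunc(x):
    # trivial example body; the real function would be the one being profiled
    return x * 2
```

Thanks to `functools.wraps`, the decorated function also keeps its original name and docstring.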
<p>Alternatively, if you'd like the raw, unprocessed profile data (e.g. because you want to run the excellent graphical viewer RunSnakeRun on it), then:</p>
<pre><code>import cProfile
import functools

def profile(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            retval = func(*args, **kwargs)
        finally:
            profiler.disable()
            profiler.dump_stats('profile.out')
        return retval
    return inner
</code></pre>
<p>This is a minor improvement on several of the other answers on this page.</p>
| 0 | 2015-09-22T11:02:38Z | [
"python"
] |
Stochastic calculus library in python | 1,584,584 | <p>I am looking for a Python library that would allow me to compute stochastic calculus quantities, like the (conditional) expectation of a random process whose diffusion I define. I had a look at SimPy (simpy.sourceforge.net), but it does not seem to cover my needs.</p>
<p>This is for quick prototyping and experimentation.
In java, I used with some success the (now inactive) <a href="http://martingale.berlios.de/Martingale.html">http://martingale.berlios.de/Martingale.html</a> library.</p>
<p>The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on).</p>
<p>Ideally, I would be able to write something like this (just illustrative):</p>
<pre>
def my_diffusion(t, dt, past_values, world, **kwargs):
    W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
    X = past_values[-1]
    sigma_1 = kwargs['sigma1']
    sigma_2 = kwargs['sigma2']
    dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
    return X + dX

X = RandomProcess(diffusion=my_diffusion, x0=1.0)
print X.expectancy(T=252, dt=1./252., N_simul=50000, world=World(random_generator='sobol'), sigma1=0.3, sigma2=0.01, rho=-0.1)
</pre>
<p>Does anyone know of an alternative to reimplementing it in NumPy, for example?</p>
| 15 | 2009-10-18T10:53:07Z | 1,584,618 | <p>I know someone who uses <a href="https://computation.llnl.gov/casc/sundials/download/whatsnew.html" rel="nofollow">Sundials</a> to solve stochastic ODE/PDE problems, though I don't know enough about the library to be sure that it's appropriate in your case. There are python bindings for it <a href="http://pysundials.sourceforge.net/" rel="nofollow">here</a>. </p>
| 0 | 2009-10-18T11:14:19Z | [
"python",
"random",
"simulation",
"stochastic-process"
] |
Stochastic calculus library in python | 1,584,584 | <p>I am looking for a Python library that would allow me to compute stochastic calculus quantities, like the (conditional) expectation of a random process whose diffusion I define. I had a look at SimPy (simpy.sourceforge.net), but it does not seem to cover my needs.</p>
<p>This is for quick prototyping and experimentation.
In java, I used with some success the (now inactive) <a href="http://martingale.berlios.de/Martingale.html">http://martingale.berlios.de/Martingale.html</a> library.</p>
<p>The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on).</p>
<p>Ideally, I would be able to write something like this (just illustrative):</p>
<pre>
def my_diffusion(t, dt, past_values, world, **kwargs):
    W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
    X = past_values[-1]
    sigma_1 = kwargs['sigma1']
    sigma_2 = kwargs['sigma2']
    dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
    return X + dX

X = RandomProcess(diffusion=my_diffusion, x0=1.0)
print X.expectancy(T=252, dt=1./252., N_simul=50000, world=World(random_generator='sobol'), sigma1=0.3, sigma2=0.01, rho=-0.1)
</pre>
<p>Does anyone know of an alternative to reimplementing it in NumPy, for example?</p>
| 15 | 2009-10-18T10:53:07Z | 1,584,635 | <p>Have you looked at <a href="http://www.sagemath.org" rel="nofollow">sage</a>?</p>
| 1 | 2009-10-18T11:21:57Z | [
"python",
"random",
"simulation",
"stochastic-process"
] |
Stochastic calculus library in python | 1,584,584 | <p>I am looking for a Python library that would allow me to compute stochastic calculus quantities, like the (conditional) expectation of a random process whose diffusion I define. I had a look at SimPy (simpy.sourceforge.net), but it does not seem to cover my needs.</p>
<p>This is for quick prototyping and experimentation.
In java, I used with some success the (now inactive) <a href="http://martingale.berlios.de/Martingale.html">http://martingale.berlios.de/Martingale.html</a> library.</p>
<p>The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on).</p>
<p>Ideally, I would be able to write something like this (just illustrative):</p>
<pre>
def my_diffusion(t, dt, past_values, world, **kwargs):
    W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
    X = past_values[-1]
    sigma_1 = kwargs['sigma1']
    sigma_2 = kwargs['sigma2']
    dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
    return X + dX

X = RandomProcess(diffusion=my_diffusion, x0=1.0)
print X.expectancy(T=252, dt=1./252., N_simul=50000, world=World(random_generator='sobol'), sigma1=0.3, sigma2=0.01, rho=-0.1)
</pre>
<p>Does anyone know of an alternative to reimplementing it in NumPy, for example?</p>
| 15 | 2009-10-18T10:53:07Z | 2,548,765 | <p>The closest I've seen to this in Python is <a href="https://github.com/pymc-devs/pymc" rel="nofollow">PyMC</a> - an implementation of various Markov Chain Monte Carlo algorithms.</p>
| 1 | 2010-03-30T21:09:43Z | [
"python",
"random",
"simulation",
"stochastic-process"
] |
Stochastic calculus library in python | 1,584,584 | <p>I am looking for a Python library that would allow me to compute stochastic calculus quantities, like the (conditional) expectation of a random process whose diffusion I define. I had a look at SimPy (simpy.sourceforge.net), but it does not seem to cover my needs.</p>
<p>This is for quick prototyping and experimentation.
In java, I used with some success the (now inactive) <a href="http://martingale.berlios.de/Martingale.html">http://martingale.berlios.de/Martingale.html</a> library.</p>
<p>The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on).</p>
<p>Ideally, I would be able to write something like this (just illustrative):</p>
<pre>
def my_diffusion(t, dt, past_values, world, **kwargs):
    W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
    X = past_values[-1]
    sigma_1 = kwargs['sigma1']
    sigma_2 = kwargs['sigma2']
    dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
    return X + dX

X = RandomProcess(diffusion=my_diffusion, x0=1.0)
print X.expectancy(T=252, dt=1./252., N_simul=50000, world=World(random_generator='sobol'), sigma1=0.3, sigma2=0.01, rho=-0.1)
</pre>
<p>Does anyone know of an alternative to reimplementing it in NumPy, for example?</p>
| 15 | 2009-10-18T10:53:07Z | 7,592,438 | <p>I'm working on a Python library for stochastic processes (including diffusion processes and some conditioning). Check out <a href="http://code.google.com/p/stochastic-processes/" rel="nofollow">this link</a> to the google-project homepage. Cheers!</p>
| 0 | 2011-09-29T04:24:30Z | [
"python",
"random",
"simulation",
"stochastic-process"
] |
Better way to do string filtering/manipulation | 1,584,639 | <pre><code>mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
</code></pre>
<p>So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:</p>
<pre><code>def get_msg(i):
    x = i.split("|")
    return x[1].strip().replace('"', '')

def get_level(i):
    x = i.split("|")
    return x[2].strip()

# testing
print get_msg(mystring)    # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring)  # minor
</code></pre>
<p>Right now it works well, but I feel like this is not a Pythonic way to solve it. How could these two functions be improved? A regular expression feels like a fit here, but I'm very naive with them, so I couldn't apply one.</p>
| 0 | 2009-10-18T11:23:22Z | 1,584,648 | <p>I think the best practice would be to actually have a better formatted string, or not use a string for that. Why is it a string? Where are you parsing this from? A database? Xml? Can the origin be altered? </p>
<pre><code>{ 'id': 14, 'message': 'foo', 'type': 'minor' }
</code></pre>
<p>A datatype like this I think would be a best practice, if it's stored in a database then split it up in multiple columns. </p>
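For example, the question's pipe-delimited record can be parsed straight into such a dict; this is only a sketch, and the field names (`id`, `message`, `type`) are the ones assumed above:

```python
mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'

# split once, then strip surrounding spaces/quotes from each field
record_id, message, level = (part.strip(' "') for part in mystring.split('|'))
record = {'id': int(record_id), 'message': message, 'type': level}
```

From here, each consumer reads a named key instead of re-splitting the string.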
<p>Edit: I'm probably going to get stoned for this because it's probably overkill/inefficient but if you add lots of sections later on you could store these in a nice hash map:</p>
<pre><code>>>> formatParts = {
... 'id': lambda x: x[0],
... 'message': lambda x: x[1].strip(' "'),
... 'level': lambda x: x[2].strip()
... }
>>> myList = mystring.split('|')
>>> formatParts['id'](myList)
'14'
>>> formatParts['message'](myList)
'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2'
>>> formatParts['level'](myList)
'minor'
</code></pre>
| 1 | 2009-10-18T11:27:14Z | [
"python",
"string"
] |
Better way to do string filtering/manipulation | 1,584,639 | <pre><code>mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
</code></pre>
<p>So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:</p>
<pre><code>def get_msg(i):
    x = i.split("|")
    return x[1].strip().replace('"', '')

def get_level(i):
    x = i.split("|")
    return x[2].strip()

# testing
print get_msg(mystring)    # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring)  # minor
</code></pre>
<p>Right now it works well, but I feel like this is not a Pythonic way to solve it. How could these two functions be improved? A regular expression feels like a fit here, but I'm very naive with them, so I couldn't apply one.</p>
| 0 | 2009-10-18T11:23:22Z | 1,584,651 | <pre><code>lst = msg.split('|')
level = lst[2].strip()
message = lst[1].strip(' "')
</code></pre>
<p>You're splitting your string twice, which is a bit of a waste; other than that, the modification is minor.</p>
| 2 | 2009-10-18T11:29:18Z | [
"python",
"string"
] |
Better way to do string filtering/manipulation | 1,584,639 | <pre><code>mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
</code></pre>
<p>So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:</p>
<pre><code>def get_msg(i):
    x = i.split("|")
    return x[1].strip().replace('"', '')

def get_level(i):
    x = i.split("|")
    return x[2].strip()

# testing
print get_msg(mystring)    # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring)  # minor
</code></pre>
<p>Right now it works well, but I feel like this is not a Pythonic way to solve it. How could these two functions be improved? A regular expression feels like a fit here, but I'm very naive with them, so I couldn't apply one.</p>
| 0 | 2009-10-18T11:23:22Z | 1,584,695 | <p>I think the most pythonic way is to use the csv module.
From <a href="http://blog.doughellmann.com/2007/08/pymotw-csv.html" rel="nofollow">PyMotW</a> with delimiter option:</p>
<pre><code>import csv
import sys

f = open(sys.argv[1], 'rt')
try:
    reader = csv.reader(f, delimiter='|')
    for row in reader:
        print row
finally:
    f.close()
</code></pre>
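Since the question's data is already in a string rather than a file, the same idea works with an in-memory buffer; note that `skipinitialspace=True` also lets `csv` honour the quoted middle field (Python 3 sketch):

```python
import csv
import io

mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'

# skipinitialspace drops the blank after '|', so the '"' lands at the
# start of the field and csv treats it as a proper quoted field
reader = csv.reader(io.StringIO(mystring), delimiter='|',
                    skipinitialspace=True)
row = next(reader)
```

This removes the surrounding quotes for free, which the hand-rolled `split`/`strip` code had to do itself.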
| 5 | 2009-10-18T11:52:12Z | [
"python",
"string"
] |
Better way to do string filtering/manipulation | 1,584,639 | <pre><code>mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
</code></pre>
<p>So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:</p>
<pre><code>def get_msg(i):
    x = i.split("|")
    return x[1].strip().replace('"', '')

def get_level(i):
    x = i.split("|")
    return x[2].strip()

# testing
print get_msg(mystring)    # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring)  # minor
</code></pre>
<p>Right now it works well, but I feel like this is not a Pythonic way to solve it. How could these two functions be improved? A regular expression feels like a fit here, but I'm very naive with them, so I couldn't apply one.</p>
| 0 | 2009-10-18T11:23:22Z | 1,584,697 | <pre><code>class MyParser(object):
def __init__(self, value):
self.lst = value.split('|')
def id(self):
return self.lst[0]
def level(self):
return self.lst[2].strip()
def message(self):
return self.lst[1].strip(' "')
</code></pre>
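Usage would then look like this (the class is repeated unchanged so the snippet is self-contained, with the question's example string):

```python
class MyParser(object):
    def __init__(self, value):
        self.lst = value.split('|')  # split once, reuse everywhere

    def id(self):
        return self.lst[0]

    def level(self):
        return self.lst[2].strip()

    def message(self):
        return self.lst[1].strip(' "')

mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
p = MyParser(mystring)
```

The split happens once in `__init__`, which addresses the double-split waste of the original two functions.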
| 1 | 2009-10-18T11:53:14Z | [
"python",
"string"
] |
Better way to do string filtering/manipulation | 1,584,639 | <pre><code>mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
</code></pre>
<p>So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:</p>
<pre><code>def get_msg(i):
    x = i.split("|")
    return x[1].strip().replace('"', '')

def get_level(i):
    x = i.split("|")
    return x[2].strip()

# testing
print get_msg(mystring)    # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring)  # minor
</code></pre>
<p>Right now it works well, but I feel like this is not a Pythonic way to solve it. How could these two functions be improved? A regular expression feels like a fit here, but I'm very naive with them, so I couldn't apply one.</p>
| 0 | 2009-10-18T11:23:22Z | 1,585,953 | <p>If you don't need the getter functions, this should work nicely:</p>
<pre><code>>>> m_id,msg,lvl = [s.strip(' "') for s in mystring.split('|')]
>>> m_id,msg,lvl
('14', 'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2',
'minor')
</code></pre>
<p>Note: avoid shadowing built-in function 'id'</p>
| 0 | 2009-10-18T20:31:32Z | [
"python",
"string"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,584,738 | <p>(None,) is a tuple that contains an element, it's not empty and therefore does not evaluate to False in that context.</p>
| 1 | 2009-10-18T12:11:48Z | [
"python",
"boolean"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,584,742 | <p>Because <code>a=(None,)</code> is a tuple containing a single element <code>None</code></p>
<p>Try again with <code>a=None</code> and you will see there is a different result.</p>
<p>Also try <code>a=()</code>, which is the empty tuple. This has a truth value of <code>False</code>.</p>
| 1 | 2009-10-18T12:13:14Z | [
"python",
"boolean"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,584,743 | <p><code>a</code> is a one-member tuple, which evaluates to <code>True</code> in a boolean context. <code>is</code> tests identity of the objects, therefore you get <code>False</code> in all those tests. <code>==</code> tests equality of the objects, therefore you get <code>False</code> again.</p>
<p>In an <code>if</code> statement, <code>__bool__</code> (or <code>__nonzero__</code>) is used to evaluate the object; for a non-empty tuple it returns <code>True</code>, therefore you get <code>True</code>. Hope that answers your question.</p>
<p><strong>Edit</strong>: the reason <code>True</code> and <code>False</code> are equal to <code>1</code> and <code>0</code> respectively is that the <code>bool</code> type is implemented as a subclass of the <code>int</code> type.</p>
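That subclass relationship is easy to verify directly:

```python
# bool is literally a subclass of int, so True/False compare equal to 1/0
assert issubclass(bool, int)
assert isinstance(True, int)
assert True == 1 and False == 0
assert True + True == 2   # arithmetic works because True is an int
```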
| 8 | 2009-10-18T12:13:37Z | [
"python",
"boolean"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,584,752 | <p>In Python every type can be converted to <code>bool</code> by using the <code>bool()</code> function or the <a href="http://docs.python.org/reference/datamodel.html#object.%5F%5Fnonzero%5F%5F" rel="nofollow"><code>__nonzero__</code> method</a>.</p>
<p>Examples:</p>
<ul>
<li>Sequences (lists, strings, ...) are converted to <code>False</code> when they are empty.</li>
<li>Integers are converted to <code>False</code> when they are equal to 0.</li>
<li>You can define this behavior in your own classes by overriding <code>__nonzero__()</code>.</li>
</ul>
<p>[Edit]</p>
<p>In your code, the tuple <code>(None,)</code> is converted using <code>bool()</code> in the <code>if</code> statements. Since it's non-empty, it evaluates to <code>True</code>.</p>
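A minimal illustration of overriding this behavior, using the Python 3 spelling (where <code>__bool__</code> replaced <code>__nonzero__</code>):

```python
class AlwaysFalse(object):
    """An object that evaluates to False despite not being empty or zero."""
    def __bool__(self):          # named __nonzero__ in Python 2
        return False

obj = AlwaysFalse()
if not obj:
    result = 'falsy'             # this branch runs
```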
| 0 | 2009-10-18T12:18:31Z | [
"python",
"boolean"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,584,865 | <p>I find almost all the explanations here unhelpful, so here is another try:</p>
<p>The confusion here stems from the fact that testing with "is", "==" and "if" are three different things.</p>
<ul>
<li>"is" tests identity, that is, if it's the same object. That is obviously not true in this case.</li>
<li>"==" tests value equality, and obviously the only built-in objects equal to True and False are the objects True and False themselves (with the exception of the numbers 1 and 0, of any numeric type).</li>
</ul>
<p>And here comes the important part:</p>
<ul>
<li>'if' tests on boolean values. That means that whatever expression you give it, it will be converted to either True or False. You can make the same with bool(). And bool((None,)) will return True. The things that will evaluate to False is listed in the docs (linked to by others here)</li>
</ul>
<p>Now maybe this is only more clear in my head, but at least I tried. :)</p>
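The three different kinds of test from the question, spelled out in Python 3:

```python
a = (None,)

assert (a is True) is False      # identity: a is not the object True
assert (a == True) is False      # equality: a does not equal True
assert bool(a) is True           # truth value: non-empty tuple -> True
assert bool(()) is False         # ...but an empty tuple -> False
```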
| 12 | 2009-10-18T13:12:54Z | [
"python",
"boolean"
] |
Python things which are neither True nor False | 1,584,733 | <p>I just found this:</p>
<pre><code>a = (None,)
print (a is True)
print (a is False)
print (a == True)
print (a == False)
print (a == None)
print (a is None)
if a : print "hello"
if not a : print "goodbye"
</code></pre>
<p>which produces :</p>
<pre><code>False
False
False
False
False
False
hello
</code></pre>
<p>So a neither is, nor equals, True or False, but acts as True in an if statement.</p>
<p>Why?</p>
<p>Update:</p>
<p>Actually, I've just realized that this isn't as obscure as I thought. I get the same result for a=2 as well (though not for a=0 or a=1, which are considered equal to False and True respectively).</p>
| 10 | 2009-10-18T12:09:00Z | 1,586,644 | <p>Things in Python don't have to be one of <code>True</code> or <code>False</code>.</p>
<p>When they're used as the test expression of an <code>if</code>/<code>while</code> statement, they're converted to booleans. You can't use <code>is</code> or <code>==</code> to find out what they evaluate to; use <code>bool( thing )</code>.</p>
<pre><code>>>> a = (None,)
>>> bool(a)
True
</code></pre>
<p>Also note:</p>
<pre><code>>>> 10 == True
False
>>> 10 is True
False
>>> bool(10)
True
</code></pre>
| 2 | 2009-10-19T01:48:44Z | [
"python",
"boolean"
] |
Installing python on 1and1 shared hosting | 1,584,857 | <p>I'm trying to install python to a 1and1.com shared linux hosting account.</p>
<p>There is a nice guide at this address:
<a href="http://www.jacksinner.com/wordpress/?p=3" rel="nofollow">http://www.jacksinner.com/wordpress/?p=3</a></p>
<p>However I get stuck at step 6 which is: "make install". The error I get is as follows:</p>
<pre><code>(uiserver):u58399657:~/bin/python > make install
Creating directory /~/bin/python/bin
/usr/bin/install: cannot create directory `/~': Permission denied
Creating directory /~/bin/python/lib
/usr/bin/install: cannot create directory `/~': Permission denied
make: *** [altbininstall] Error 1
</code></pre>
<p>I look forward to some suggestions.</p>
<p><strong>UPDATE:</strong></p>
<p>Here is an alternative version of the configure step to fix the above error, however this time I'm getting a different error:</p>
<pre><code>(uiserver):u58399657:~ > cd Python-2.6.3
(uiserver):u58399657:~/Python-2.6.3 > ./configure -prefix=~/bin/python
configure: error: expected an absolute directory name for --prefix: ~/bin/python
(uiserver):u58399657:~/Python-2.6.3 >
</code></pre>
| 1 | 2009-10-18T13:09:38Z | 1,584,870 | <p>The short version is, it looks like you've set the prefix to <code>/~/bin/python</code> instead of simply <code>~/bin/python</code>. This is typically done with a <code>--prefix=path</code> argument to <code>configure</code> or some other similar script. Try fixing this and it should then work. I'd suggest actual commands, but it's been a while (hence my request to see what you've been typing.)</p>
<p>Because of the above mistake, it is trying to install to a subdirectory called <code>~</code> of the root directory (<code>/</code>), instead of your home directory (<code>~</code>).</p>
<p><strong>EDIT:</strong> Looking at the linked tutorial, this step is incorrect:</p>
<pre><code>./configure --prefix=/~/bin/python
</code></pre>
<p>It should instead read:</p>
<pre><code>./configure --prefix=~/bin/python
</code></pre>
<p>Note, this is addressed in the <em>very first</em> comment to that post.</p>
<p><strong>EDIT 2:</strong> It seems that whatever shell you are using isn't expanding the path properly. Try this instead:</p>
<pre><code>./configure --prefix=$HOME/bin/python
</code></pre>
<p>Failing even that, run <code>echo $HOME</code> and substitute that for <code>$HOME</code> above. It should look something like <code>--prefix=/home/mscharley/bin/python</code></p>
| 4 | 2009-10-18T13:13:44Z | [
"python",
"linux"
] |
Installing python on 1and1 shared hosting | 1,584,857 | <p>I'm trying to install python to a 1and1.com shared linux hosting account.</p>
<p>There is a nice guide at this address:
<a href="http://www.jacksinner.com/wordpress/?p=3" rel="nofollow">http://www.jacksinner.com/wordpress/?p=3</a></p>
<p>However I get stuck at step 6 which is: "make install". The error I get is as follows:</p>
<pre><code>(uiserver):u58399657:~/bin/python > make install
Creating directory /~/bin/python/bin
/usr/bin/install: cannot create directory `/~': Permission denied
Creating directory /~/bin/python/lib
/usr/bin/install: cannot create directory `/~': Permission denied
make: *** [altbininstall] Error 1
</code></pre>
<p>I look forward to some suggestions.</p>
<p><strong>UPDATE:</strong></p>
<p>Here is an alternative version of the configure step to fix the above error, however this time I'm getting a different error:</p>
<pre><code>(uiserver):u58399657:~ > cd Python-2.6.3
(uiserver):u58399657:~/Python-2.6.3 > ./configure -prefix=~/bin/python
configure: error: expected an absolute directory name for --prefix: ~/bin/python
(uiserver):u58399657:~/Python-2.6.3 >
</code></pre>
| 1 | 2009-10-18T13:09:38Z | 1,586,134 | <p>You really should consider using the <a href="https://www.activestate.com/activepython/downloads/" rel="nofollow">AS binary package from Activestate</a> for this kind of thing. Download the .tar.gz file, unpack it, change to the python directory and run the install shell script. This installs a completely standalone version of python without touching any of the system stuff. You don't need root permissions and you don't need to mess around with make. </p>
<p>Of course, maybe you are a C/C++ developer, make is a familiar tool and you are experienced at building packages from source. But if any of those is not true then it is worth your while to try out the <a href="https://www.activestate.com/activepython/downloads/" rel="nofollow">Activestate AS binary package</a>.</p>
| 0 | 2009-10-18T21:54:50Z | [
"python",
"linux"
] |
Grep multi-layered iterable for strings that match (Python) | 1,584,864 | <p>Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:</p>
<pre><code>['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
</code></pre>
<p>Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is <code>data</code>):</p>
<pre><code>>>> grep('javascript', data)
</code></pre>
<p>and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings. </p>
<p>I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.</p>
<pre><code>import re
def rgrep(s, data):
''' given a iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
results.append( (list.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
</code></pre>
| 1 | 2009-10-18T13:12:49Z | 1,584,895 | <p>To get the position use <code>enumerate()</code></p>
<pre><code>>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]
>>>
>>> for l1, v1 in enumerate(data):
... for l2, v2 in enumerate(v1):
... if 'foo' in v2:
... print l1, l2, v2
...
0 0 foo
1 0 foo/bar
1 1 baz/foo
</code></pre>
<p>In this example I am using a simple match <code>'foo' in bar</code> yet you probably use regex for the job.</p>
<p>Obviously, <code>enumerate()</code> can provide support in more than 2 levels as in your edited post.</p>
| 0 | 2009-10-18T13:26:50Z | [
"python",
"regex",
"string",
"search",
"data-structures"
] |
Grep multi-layered iterable for strings that match (Python) | 1,584,864 | <p>Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:</p>
<pre><code>['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
</code></pre>
<p>Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is <code>data</code>):</p>
<pre><code>>>> grep('javascript', data)
</code></pre>
<p>and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings. </p>
<p>I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.</p>
<pre><code>import re
def rgrep(s, data):
''' given a iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
results.append( (list.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
</code></pre>
| 1 | 2009-10-18T13:12:49Z | 1,584,944 | <p>Here is a grep that uses recursion to search the data structure. </p>
<p>Note that good data structures lead the way to elegant solutions.
Bad data structures make you bend over backwards to accomodate.
This feels to me like one of those cases where a bad data structure is obstructing
rather than helping you.</p>
<p>Having a simple data structure with a more uniform structure
(instead of using this grep) might be worth investigating.</p>
<pre><code>#!/usr/bin/env python
data=['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch',['find','java deep','down'])),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
]
def grep(astr,data,prefix=[]):
result=[]
for idx,elt in enumerate(data):
if isinstance(elt,basestring):
if astr in elt:
result.append(tuple(prefix+[idx]))
else:
result.extend(grep(astr,elt,prefix+[idx]))
return result
def pick(data,idx):
if idx:
return pick(data[idx[0]],idx[1:])
else:
return data
idxs=grep('java',data)
print(idxs)
for idx in idxs:
print('data[%s] = %s'%(idx,pick(data,idx)))
</code></pre>
| 1 | 2009-10-18T13:53:39Z | [
"python",
"regex",
"string",
"search",
"data-structures"
] |
Grep multi-layered iterable for strings that match (Python) | 1,584,864 | <p>Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:</p>
<pre><code>['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
</code></pre>
<p>Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is <code>data</code>):</p>
<pre><code>>>> grep('javascript', data)
</code></pre>
<p>and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings. </p>
<p>I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.</p>
<pre><code>import re
def rgrep(s, data):
''' given a iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
results.append( (list.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
</code></pre>
| 1 | 2009-10-18T13:12:49Z | 1,584,979 | <p>I'd split recursive enumeration from grepping:</p>
<pre><code>def enumerate_recursive(iter, base=()):
for index, item in enumerate(iter):
if isinstance(item, basestring):
yield (base + (index,)), item
else:
for pair in enumerate_recursive(item, (base + (index,))):
yield pair
def grep_index(filt, iter):
return (index for index, text in iter if filt in text)
</code></pre>
<p>This way you can do both non-recursive and recursive grepping:</p>
<pre><code>l = list(grep_index('opt1', enumerate(sys.argv))) # non-recursive
r = list(grep_index('diff', enumerate_recursive(your_data))) # recursive
</code></pre>
<p>Also note that we're using iterators here, saving RAM for longer sequences if necessary.</p>
<p>An even more generic solution would be to give a callable instead of a string to grep_index. But that might not be necessary for you.</p>
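<p>That callable-predicate variant could be sketched as follows (a hypothetical <code>grep_if</code>; the <code>isinstance(item, str)</code> check would be <code>basestring</code> on Python 2):</p>

```python
def enumerate_recursive(iter_, base=()):
    # Yield (index-tuple, string) pairs for every string in a nested iterable.
    for index, item in enumerate(iter_):
        if isinstance(item, str):  # basestring on Python 2
            yield (base + (index,)), item
        else:
            for pair in enumerate_recursive(item, base + (index,)):
                yield pair

def grep_if(pred, pairs):
    # Keep the index tuples whose string satisfies an arbitrary predicate.
    return [index for index, text in pairs if pred(text)]

data = [('js+mako', 'javascript+mako'), ('diff', 'udiff')]
print(grep_if(lambda t: t.endswith('mako'), enumerate_recursive(data)))
# -> [(0, 0), (0, 1)]
```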
| 3 | 2009-10-18T14:06:27Z | [
"python",
"regex",
"string",
"search",
"data-structures"
] |
web2py - require selected dropdown values validate from db | 1,584,909 | <p>i have a table member that include <code>SQLField("year", db.All_years)</code></p>
<p>and All_years table as the following:</p>
<pre><code>db.define_table("All_years",
SQLField("fromY","integer"),
SQLField("toY","integer")
)
</code></pre>
<p>and constrains are : </p>
<pre><code>db.member.year.requires = IS_IN_DB(db, 'All_years.id','All_years.fromY')
</code></pre>
<p>The problem is when I select a year from dropdown the value of year column is the id of year, not the year value e.g: if year 2009 has db id=1 the value of year in db equal=1 not equal 2009.</p>
<p>I don't understand why.</p>
| 2 | 2009-10-18T13:38:17Z | 1,584,943 | <p>I see your project is progressing well!</p>
<p>The validator is <code>IS_IN_DB(dbset, field, label)</code>. So you should try:</p>
<pre><code>db.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d')
</code></pre>
<p>to have a correct label in your drop-down list.</p>
<p>Now from your table it looks like you would rather choose an interval rather than just the beginning year, in that case you can use this:</p>
<pre><code>db.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d to %(toY)d')
</code></pre>
<p>that will display, for example, "1980 to 1985", and so on.</p>
| 2 | 2009-10-18T13:53:35Z | [
"python",
"web2py"
] |
easy_install -f vs easy_install -i | 1,585,077 | <p>This is related to <a href="http://stackoverflow.com/questions/1519589/how-do-you-host-your-own-egg-repository">this</a> question I asked a while back. </p>
<p>The end game is I want to be able to install my package "identity.model" and all dependencies. like so...</p>
<pre><code>$ easy_install -f http://eggs.sadphaeton.com identity.model
Searching for identity.model
Reading http://eggs.sadphaeton.com
Reading http://pypi.python.org/simple/identity.model/
Couldn't find index page for 'identity.model' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for identity.model
error: Could not find suitable distribution for Requirement.parse('identity.model')
</code></pre>
<p>for whatever reason running this easy_install hits the home page which I laid out according to <a href="http://packages.python.org/distribute/easy%5Finstall.html#id31" rel="nofollow">this information</a></p>
<p>My index.html</p>
<pre><code><html>
<head>
<title>SadPhaeton Egg Repository</title>
</head>
<body>
<a rel="homepage" href="AlchemyExtra">AlchemyExtra</a>
<a rel="homepage" href="identity.model">identity.model</a>
<a rel="homepage" href="repoze.what.plugins.config">repoze.what.plugins.config</a>
</body>
</html>
</code></pre>
<p>if I run ...</p>
<pre><code>$ easy_install -i http://eggs.sadphaeton.com identity.model
</code></pre>
<p>it does find my package and the repoze.what.plugins.config I put up there as well since it's a dependency. however then when it goes to fetch tw.forms(external dependency hosted on pypi) It ends with a failure as it only searched <a href="http://eggs.sadphaeton.com" rel="nofollow">http://eggs.sadphaeton.com</a></p>
<p>Obviously I've misunderstood the "spec". Anyone have any idea what the trick is? </p>
| 2 | 2009-10-18T14:54:41Z | 1,585,203 | <p>-f will take the url you give it, and look there for packages, as well as on PyPI. An example of such a page is <a href="http://dist.plone.org/release/3.3.1/" rel="nofollow">http://dist.plone.org/release/3.3.1/</a> As you see, this is a list of distribution files.</p>
<p>With -i you define the main index page. It defaults to <a href="http://pypi.python.org/simple/" rel="nofollow">http://pypi.python.org/simple/</a> As you see, the index page is an index of packages, not of distribution files.</p>
<p>So in your case <code>easy_install -i <a href="http://eggs.sadphaeton.com" rel="nofollow">http://eggs.sadphaeton.com</a> identity.model</code> should work to download identity.model. And it did for me, like twice in the middle, but not the first time nor the second time. I don't know if you maybe are trying different formats? But in any case, it will then fail on tw.forms, as it's not on your index page.</p>
<p>So the solution should be to make a page like <a href="http://dist.plone.org/release/3.3.1/" rel="nofollow">http://dist.plone.org/release/3.3.1/</a> with your eggs on it. I don't know how exact the format has to be, but I think it's quite flexible.</p>
<p>Update:</p>
<p>Here is a step for step solution:</p>
<ol>
<li>Put all your distributions in a directory.</li>
<li>cd to that directory.</li>
<li>Type <code>python -c "from SimpleHTTPServer import test; test()"</code></li>
<li>Now type <code>easy_install -f <a href="http://localhost:8080/" rel="nofollow">http://localhost:8080/</a> &lt;modulename&gt;</code></li>
</ol>
<p>It will install the module.</p>
| 3 | 2009-10-18T15:42:23Z | [
"python",
"setuptools"
] |
easy_install -f vs easy_install -i | 1,585,077 | <p>This is related to <a href="http://stackoverflow.com/questions/1519589/how-do-you-host-your-own-egg-repository">this</a> question I asked a while back. </p>
<p>The end game is I want to be able to install my package "identity.model" and all dependencies. like so...</p>
<pre><code>$ easy_install -f http://eggs.sadphaeton.com identity.model
Searching for identity.model
Reading http://eggs.sadphaeton.com
Reading http://pypi.python.org/simple/identity.model/
Couldn't find index page for 'identity.model' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for identity.model
error: Could not find suitable distribution for Requirement.parse('identity.model')
</code></pre>
<p>for whatever reason running this easy_install hits the home page which I laid out according to <a href="http://packages.python.org/distribute/easy%5Finstall.html#id31" rel="nofollow">this information</a></p>
<p>My index.html</p>
<pre><code><html>
<head>
<title>SadPhaeton Egg Repository</title>
</head>
<body>
<a rel="homepage" href="AlchemyExtra">AlchemyExtra</a>
<a rel="homepage" href="identity.model">identity.model</a>
<a rel="homepage" href="repoze.what.plugins.config">repoze.what.plugins.config</a>
</body>
</html>
</code></pre>
<p>if I run ...</p>
<pre><code>$ easy_install -i http://eggs.sadphaeton.com identity.model
</code></pre>
<p>it does find my package and the repoze.what.plugins.config I put up there as well since it's a dependency. however then when it goes to fetch tw.forms(external dependency hosted on pypi) It ends with a failure as it only searched <a href="http://eggs.sadphaeton.com" rel="nofollow">http://eggs.sadphaeton.com</a></p>
<p>Obviously I've misunderstood the "spec". Anyone have any idea what the trick is? </p>
| 2 | 2009-10-18T14:54:41Z | 1,585,416 | <p>Well looks like the trick is in having the rel="download" links on the index.html of the root.</p>
<pre><code><html>
<head>
<title>SadPhaeton Egg Repository</title>
</head>
<body>
<a rel="homepage" href="AlchemyExtra">AlchemyExtra</a> <a rel="download" href="AlchemyExtra/AlchemyExtra-0.0dev-py2.6.egg">download</a><br>
<a rel="homepage" href="identity.model">identity.model</a> <a rel="download" href="identity.model/identity.model-0.0dev-py2.6.egg">download</a><br>
<a rel="homepage" href="repoze.what.plugins.config">repoze.what.plugins.config</a> <a rel="download" href="repoze.what.plugins.config/repoze.what.plugins.config-0.0.0-py2.6.egg">download</a><br>
</body>
</html>
</code></pre>
<p>That solves my immediate issue, though it would be nice if there were more details on this in the spec. I was expecting, based on what I read, that easy_install would consult the homepage for download links, but it doesn't seem to want to do that for me.</p>
<p>now to somehow automate this because doing this crap manually is a PITA.</p>
| 0 | 2009-10-18T16:52:34Z | [
"python",
"setuptools"
] |
easy_install -f vs easy_install -i | 1,585,077 | <p>This is related to <a href="http://stackoverflow.com/questions/1519589/how-do-you-host-your-own-egg-repository">this</a> question I asked a while back. </p>
<p>The end game is I want to be able to install my package "identity.model" and all dependencies. like so...</p>
<pre><code>$ easy_install -f http://eggs.sadphaeton.com identity.model
Searching for identity.model
Reading http://eggs.sadphaeton.com
Reading http://pypi.python.org/simple/identity.model/
Couldn't find index page for 'identity.model' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for identity.model
error: Could not find suitable distribution for Requirement.parse('identity.model')
</code></pre>
<p>for whatever reason running this easy_install hits the home page which I laid out according to <a href="http://packages.python.org/distribute/easy%5Finstall.html#id31" rel="nofollow">this information</a></p>
<p>My index.html</p>
<pre><code><html>
<head>
<title>SadPhaeton Egg Repository</title>
</head>
<body>
<a rel="homepage" href="AlchemyExtra">AlchemyExtra</a>
<a rel="homepage" href="identity.model">identity.model</a>
<a rel="homepage" href="repoze.what.plugins.config">repoze.what.plugins.config</a>
</body>
</html>
</code></pre>
<p>if I run ...</p>
<pre><code>$ easy_install -i http://eggs.sadphaeton.com identity.model
</code></pre>
<p>it does find my package and the repoze.what.plugins.config I put up there as well since it's a dependency. however then when it goes to fetch tw.forms(external dependency hosted on pypi) It ends with a failure as it only searched <a href="http://eggs.sadphaeton.com" rel="nofollow">http://eggs.sadphaeton.com</a></p>
<p>Obviously I've misunderstood the "spec". Anyone have any idea what the trick is? </p>
| 2 | 2009-10-18T14:54:41Z | 2,164,230 | <p>The problem is you're trying to mix -i and -f modes of making your page; you need to pick one or the other, as the <code>rel=""</code> stuff <em>only</em> works with -i.</p>
<p>If you want to use -f mode, then you just need a webserver directory with the eggs in it. If you want to use -i, then you must have a subdirectory for each project with an index.html in it, and it's those index.html files that would contain the <code>rel="homepage"</code> stuff.</p>
| 0 | 2010-01-29T18:34:59Z | [
"python",
"setuptools"
] |
Is the Python GIL really per interpreter? | 1,585,181 | <p>I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).</p>
<p>But what I see in the source code it seems to be that the GIL is a global variable and therefore there is one GIL for all Interpreters in each python process. I know they did this because there is no interpreter object passed around like lua or TCL does, it was just not designed well in the beginning. And thread local storage seems to be not portable for the python guys to use.</p>
<p>Is this correct? I had a short look at the 2.4 version I'm using in a project here. </p>
<p>Had this changed in later versions, especially in 3.0?</p>
| 8 | 2009-10-18T15:35:23Z | 1,585,260 | <p>I believe it is true (at least as of Python 2.6) that each process may have at most one CPython interpreter embedded (other runtimes may have different constraints). I'm not sure if this is an issue with the GIL per se, but it is likely due to global state, or to protect from conflicting global state in third-party C modules. From the <a href="http://docs.python.org/c-api/init.html#Py%5FInitialize" rel="nofollow">CPython API Docs</a>:</p>
<blockquote>
<p>[Py_Initialize()] is a no-op when called for a second time (without calling Py_Finalize() first). There is no return value; it is a fatal error if the initialization fails.</p>
</blockquote>
<p>You might be interested in the <a href="http://code.google.com/p/unladen-swallow/" rel="nofollow">Unladen Swallow</a> project, which aims eventually to remove the GIL entirely from CPython. Other Python runtimes don't have the GIL at all, like (I believe) <a href="http://www.stackless.com/" rel="nofollow">Stackless Python</a>, and certainly <a href="http://www.jython.org/" rel="nofollow">Jython</a>.</p>
<p>Also note that the GIL is <a href="http://docs.python.org/3.1/c-api/init.html#thread-state-and-the-global-interpreter-lock" rel="nofollow">still present in CPython 3.x</a>.</p>
| 0 | 2009-10-18T16:09:25Z | [
"python",
"multithreading",
"gil"
] |
Is the Python GIL really per interpreter? | 1,585,181 | <p>I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).</p>
<p>But what I see in the source code it seems to be that the GIL is a global variable and therefore there is one GIL for all Interpreters in each python process. I know they did this because there is no interpreter object passed around like lua or TCL does, it was just not designed well in the beginning. And thread local storage seems to be not portable for the python guys to use.</p>
<p>Is this correct? I had a short look at the 2.4 version I'm using in a project here. </p>
<p>Had this changed in later versions, especially in 3.0?</p>
| 8 | 2009-10-18T15:35:23Z | 1,585,641 | <p>Perhaps the confusion comes about because most people assume Python has one interpreter per process. I recall reading that the support for multiple interpreters via the C API was largely untested and hardly ever used. (And when I gave it a go, didn't work properly.)</p>
| 2 | 2009-10-18T18:24:19Z | [
"python",
"multithreading",
"gil"
] |
Is the Python GIL really per interpreter? | 1,585,181 | <p>I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).</p>
<p>But what I see in the source code it seems to be that the GIL is a global variable and therefore there is one GIL for all Interpreters in each python process. I know they did this because there is no interpreter object passed around like lua or TCL does, it was just not designed well in the beginning. And thread local storage seems to be not portable for the python guys to use.</p>
<p>Is this correct? I had a short look at the 2.4 version I'm using in a project here. </p>
<p>Had this changed in later versions, especially in 3.0?</p>
| 8 | 2009-10-18T15:35:23Z | 1,585,939 | <p>The GIL is indeed per-process, not per-interpreter. This is unchanged in 3.x.</p>
| 7 | 2009-10-18T20:25:55Z | [
"python",
"multithreading",
"gil"
] |
Python optional parameters | 1,585,247 | <p>Guys, I just started python recently and get confused with the optional parameters, say I have the program like this:</p>
<pre><code>class B:
pass
class A:
def __init__(self, builds = B()):
self.builds = builds
</code></pre>
<p>If I create A twice</p>
<pre><code>b = A()
c = A()
</code></pre>
<p>and print their builds</p>
<pre><code>print b.builds
print c.builds
</code></pre>
<p>I found they are using the exactly same object,</p>
<pre><code><__main__.B instance at 0x68ee0>
<__main__.B instance at 0x68ee0>
</code></pre>
<p>But it is not what I want, since if <code>b</code> changed some internal state of builds, the one in <code>c</code> object will also be changed.</p>
<p>Is it possible to recreate this optional parameters each time by using this optional parameters syntax?</p>
| 23 | 2009-10-18T16:05:44Z | 1,585,250 | <p>Yes; default parameters are evaluated only at the time when the function is defined.</p>
<p>One possible solution would be to have the parameter be a <strong>class</strong> rather than an instance, a la</p>
<pre><code>def foo(blah, klass = B):
b = klass()
# etc
</code></pre>
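<p>Filled out into something runnable (a sketch; <code>B</code> stands in for any class whose instances must not be shared between calls):</p>

```python
class B(object):
    pass

def foo(blah, klass=B):
    # A fresh instance is created on every call; only the class object
    # is stored as the default, and classes are safe to share.
    b = klass()
    return b

x = foo(1)
y = foo(2)
print(x is y)  # -> False: each call builds its own B
```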
| 6 | 2009-10-18T16:07:01Z | [
"python",
"optional-arguments"
] |
Python optional parameters | 1,585,247 | <p>Guys, I just started python recently and get confused with the optional parameters, say I have the program like this:</p>
<pre><code>class B:
pass
class A:
def __init__(self, builds = B()):
self.builds = builds
</code></pre>
<p>If I create A twice</p>
<pre><code>b = A()
c = A()
</code></pre>
<p>and print their builds</p>
<pre><code>print b.builds
print c.builds
</code></pre>
<p>I found they are using the exactly same object,</p>
<pre><code><__main__.B instance at 0x68ee0>
<__main__.B instance at 0x68ee0>
</code></pre>
<p>But it is not what I want, since if <code>b</code> changed some internal state of builds, the one in <code>c</code> object will also be changed.</p>
<p>Is it possible to recreate this optional parameters each time by using this optional parameters syntax?</p>
| 23 | 2009-10-18T16:05:44Z | 1,585,265 | <p>you need to do the following:</p>
<pre><code>class A:
def __init__(self, builds=None):
if builds is None:
builds = B()
self.builds = builds
</code></pre>
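<p>With that guard in place each instance gets its own <code>B</code>, which a quick check confirms (sketch):</p>

```python
class B(object):
    pass

class A(object):
    def __init__(self, builds=None):
        if builds is None:
            builds = B()  # evaluated on every call, not once at definition
        self.builds = builds

b = A()
c = A()
print(b.builds is c.builds)  # -> False: two distinct B instances
```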
<p>It's a very widespread error, using mutable objects as default arguments. There are probably plenty of dupes on SO.</p>
| 15 | 2009-10-18T16:11:36Z | [
"python",
"optional-arguments"
] |
Python optional parameters | 1,585,247 | <p>Guys, I just started python recently and get confused with the optional parameters, say I have the program like this:</p>
<pre><code>class B:
pass
class A:
def __init__(self, builds = B()):
self.builds = builds
</code></pre>
<p>If I create A twice</p>
<pre><code>b = A()
c = A()
</code></pre>
<p>and print their builds</p>
<pre><code>print b.builds
print c.builds
</code></pre>
<p>I found they are using the exactly same object,</p>
<pre><code><__main__.B instance at 0x68ee0>
<__main__.B instance at 0x68ee0>
</code></pre>
<p>But it is not what I want, since if <code>b</code> changed some internal state of builds, the one in <code>c</code> object will also be changed.</p>
<p>Is it possible to recreate this optional parameters each time by using this optional parameters syntax?</p>
| 23 | 2009-10-18T16:05:44Z | 1,585,451 | <p>You need to understand how default values work in order to use them effectively.</p>
<p>Functions are objects. As such, they have attributes. So, if I create this function:</p>
<pre><code>>>> def f(x, y=[]):
y.append(x)
return y
</code></pre>
<p>I've created an object. Here are its attributes:</p>
<pre><code>>>> dir(f)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__',
'__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__',
'__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__',
'__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',
'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals',
'func_name']
</code></pre>
<p>One of them is <code>func_defaults</code>. That sounds promising, what's in there?</p>
<pre><code>>>> f.func_defaults
([],)
</code></pre>
<p>That's a tuple that contains the function's default values. If a default value is an object, the tuple contains an instance of that object. </p>
<p>This leads to some fairly counterintuitive behavior if you're thinking that <code>f</code> adds an item to a list, returning a list containing only that item if no list is provided:</p>
<pre><code>>>> f(1)
[1]
>>> f(2)
[1, 2]
</code></pre>
<p>But if you know that the default value is an object instance that's stored in one of the function's attributes, it's much less counterintuitive:</p>
<pre><code>>>> x = f(3)
>>> y = f(4)
>>> x == y
True
>>> x
[1, 2, 3, 4]
>>> x.append(5)
>>> f(6)
[1, 2, 3, 4, 5, 6]
</code></pre>
<p>Knowing this, it's clear that if you want a default value of a function's parameter to be a new list (or any new object), you can't simply stash an instance of the object in <code>func_defaults</code>. You have to create a new one every time the function is called:</p>
<pre><code>>>> def g(x, y=None):
...     if y is None:
...         y = []
...     y.append(x)
...     return y
</code></pre>
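<p>The fixed function stores only the immutable <code>None</code> in its defaults tuple (<code>func_defaults</code> on Python 2, <code>__defaults__</code> on Python 3), so every call that omits <code>y</code> builds a fresh list:</p>

```python
def g(x, y=None):
    if y is None:
        y = []  # a new list per call, never shared via the defaults tuple
    y.append(x)
    return y

print(g.__defaults__)  # -> (None,)   (func_defaults on Python 2)
print(g(1), g(2))      # -> [1] [2] -- independent lists
```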
| 45 | 2009-10-18T17:10:36Z | [
"python",
"optional-arguments"
] |
union with sort in Google-App-Engine | 1,585,299 | <p>I have a class:</p>
<pre><code>class Transaction(db.Model):
accountDebit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoDuguje")
accountCredit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoPotrazuje")
amount = db.FloatProperty()
Tran_date = db.DateProperty()
comment = db.StringProperty()
</code></pre>
<p>Here is the method of the Account class by which I would like to get all the transactions for a particular account (transactions with accountDebit or accountCredit), sorted by date:</p>
<pre><code>def GetTransactions(self):
transactions = []
transactions_debit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountDebit=:1',self)
transactions_credit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountCredit=:1',self)
for x in transactions_debit:
x.amount = -x.amount
transactions.append(x)
for x in transactions_credit:
x.amount = x.amount
transactions.append(x)
return transactions
</code></pre>
<p>The aim is to make a sorted union of these two results, but with limit + offset. Consider the fact that you cannot fetch more than 1000 rows in a single query ...</p>
<p>Please help </p>
| 0 | 2009-10-18T16:21:55Z | 1,586,430 | <p>You can do an OR (Python laboriously synthesizes it for you at application level), which takes care of the "union with sort". However, if you need to worry about > 1000 transactions, that won't help (nor will offset and limit: the sum of offset + limit is what's limited to 1000!). You'll need to slice by something (presumably the same field you're sorting on, <code>tran_date</code> I imagine?) with a couple of <code><</code> conditions there, and that of course can't guarantee you the exact limit and offset you desire, so you'll have to exceed them a bit and slice off the excess at application level.</p>
<p><strong>Edit</strong>: OR is not actually synthesized at application level (IN and != are the two operations that are), so you need to synthesize it yourself (also at application level of course), e.g.:</p>
<pre><code>def GetTransactions(account):
transactions = list(db.GqlQuery(
'SELECT * FROM Transaction WHERE '
'accountDebit = :1 ORDER BY Tran_date', account))
transactions.extend(db.GqlQuery(
'SELECT * FROM Transaction WHERE '
'accountCredit = :1 ORDER BY Tran_date', account))
transactions.sort(key=operator.attrgetter('Tran_date'))
return transactions
</code></pre>
<p>But the big issues are still those outlined above.</p>
<p>So what are the numbers in play -- typical numbers of transactions for a user (say per week or per day), typical max total for a user, what order of magnitudes are you going to need in your offset and limit, etc, etc? Hard to suggest specific design choices without having any idea of the orders of magnitude of these numbers!-)</p>
<p><strong>Edit</strong>: there is no solution that will be optimal, or even reasonable, for ANY order of magnitude of each of these parameters -- how you deal efficiently with many millions of transactions per user per day is just going to be deeply different from how you deal with a few transactions per user per day; I can't even imagine an architecture that would make sense in both cases (I might, perhaps, in a relational context, but not in a non-relational one such as we have here -- e.g., to decently deal with the case of millions of transactions per day, you really want a finer-grained timestamp on a transaction than just recording its date can provide!-).</p>
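<p>Setting the datastore specifics aside, for illustration: since both queries above already return results in date order (the <code>ORDER BY Tran_date</code> clauses), the two streams can also be interleaved without a full re-sort, using the standard library's <code>heapq.merge</code>. A minimal sketch with plain (date, amount) tuples standing in for Transaction entities:</p>

```python
import heapq

# Hypothetical stand-ins for the two date-ordered query results.
debits = [("2009-01-02", -50.0), ("2009-01-05", -20.0)]
credits = [("2009-01-01", 30.0), ("2009-01-04", 10.0)]

# heapq.merge lazily interleaves already-sorted inputs; the tuples
# compare by their first element (the date) whenever dates differ.
merged = list(heapq.merge(debits, credits))
print(merged[0])  # ('2009-01-01', 30.0)
```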
| 2 | 2009-10-19T00:20:44Z | [
"python",
"google-app-engine"
] |
Is there a way to perform "if" in python's lambda | 1,585,322 | <p>In <strong>python 2.6</strong>, I want to do:</p>
<pre><code>f = lambda x: if x==2 print x else raise Exception()
f(2) #should print "2"
f(3) #should throw an exception
</code></pre>
<p>This clearly isn't the syntax. Is it possible to perform an <code>if</code> in <code>lambda</code> and if so how to do it?</p>
<p>thanks</p>
| 119 | 2009-10-18T16:28:03Z | 1,585,334 | <p>Lambdas in Python are fairly restrictive with regard to what you're allowed to use. Specifically, you can't have any keywords (except for operators like <code>and</code>, <code>not</code>, <code>or</code>, etc) in their body.</p>
<p>So, there's no way you could use a lambda for your example (because you can't use <code>raise</code>), but if you're willing to concede on that, you could use:</p>
<pre><code>f = lambda x: x == 2 and x or None
</code></pre>
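<p>For illustration, this evaluates as follows; note the classic caveat of the <code>and</code>/<code>or</code> idiom: it silently misbehaves when the "true" branch value is itself falsy (e.g. 0):</p>

```python
f = lambda x: x == 2 and x or None

print(f(2))  # 2
print(f(3))  # None

# The caveat: a falsy "true" value falls through to the or-branch,
# which is why the conditional expression is generally safer.
g = lambda x: x == 0 and x or None
print(g(0))  # None, even though x == 0 is True
```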
| 10 | 2009-10-18T16:31:18Z | [
"python",
"lambda"
] |
Is there a way to perform "if" in python's lambda | 1,585,322 | <p>In <strong>python 2.6</strong>, I want to do:</p>
<pre><code>f = lambda x: if x==2 print x else raise Exception()
f(2) #should print "2"
f(3) #should throw an exception
</code></pre>
<p>This clearly isn't the syntax. Is it possible to perform an <code>if</code> in <code>lambda</code> and if so how to do it?</p>
<p>thanks</p>
| 119 | 2009-10-18T16:28:03Z | 1,585,343 | <p>why don't you just define a function?</p>
<pre><code>def f(x):
if x == 2:
print(x)
else:
raise ValueError
</code></pre>
<p>there really is no justification to use lambda in this case.</p>
| 26 | 2009-10-18T16:33:17Z | [
"python",
"lambda"
] |
Is there a way to perform "if" in python's lambda | 1,585,322 | <p>In <strong>python 2.6</strong>, I want to do:</p>
<pre><code>f = lambda x: if x==2 print x else raise Exception()
f(2) #should print "2"
f(3) #should throw an exception
</code></pre>
<p>This clearly isn't the syntax. Is it possible to perform an <code>if</code> in <code>lambda</code> and if so how to do it?</p>
<p>thanks</p>
| 119 | 2009-10-18T16:28:03Z | 1,585,396 | <p>The syntax you're looking for:</p>
<pre><code>lambda x: True if x % 2 == 0 else False
</code></pre>
<p>But you can't use <code>print</code> or <code>raise</code> in a lambda.</p>
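<p>A short illustrative demo; note that this particular expression simplifies to the comparison itself, since <code>==</code> already returns a bool:</p>

```python
f = lambda x: True if x % 2 == 0 else False
g = lambda x: x % 2 == 0  # equivalent, and shorter

print(f(4), g(4))  # True True
print(f(3), g(3))  # False False
```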
| 268 | 2009-10-18T16:48:43Z | [
"python",
"lambda"
] |
Is there a way to perform "if" in python's lambda | 1,585,322 | <p>In <strong>python 2.6</strong>, I want to do:</p>
<pre><code>f = lambda x: if x==2 print x else raise Exception()
f(2) #should print "2"
f(3) #should throw an exception
</code></pre>
<p>This clearly isn't the syntax. Is it possible to perform an <code>if</code> in <code>lambda</code> and if so how to do it?</p>
<p>thanks</p>
| 119 | 2009-10-18T16:28:03Z | 1,586,124 | <p>You can easily raise an exception in a lambda, if that's what you really want to do.</p>
<pre><code>def Raise(exception):
raise exception
x = lambda y: 1 if y < 2 else Raise(ValueError("invalid value"))
</code></pre>
<p>Is this a good idea? My instinct in general is to leave the error reporting out of lambdas; let it have a value of None and raise the error in the caller. I don't think this is inherently evil, though--I consider the "y if x else z" syntax itself worse--just make sure you're not trying to stuff too much into a lambda body.</p>
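<p>Usage, for illustration:</p>

```python
def Raise(exception):
    raise exception

x = lambda y: 1 if y < 2 else Raise(ValueError("invalid value"))

print(x(1))  # 1
try:
    x(3)
except ValueError as e:
    print(e)  # invalid value
```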
| 13 | 2009-10-18T21:48:06Z | [
"python",
"lambda"
] |
Is there a way to perform "if" in python's lambda | 1,585,322 | <p>In <strong>python 2.6</strong>, I want to do:</p>
<pre><code>f = lambda x: if x==2 print x else raise Exception()
f(2) #should print "2"
f(3) #should throw an exception
</code></pre>
<p>This clearly isn't the syntax. Is it possible to perform an <code>if</code> in <code>lambda</code> and if so how to do it?</p>
<p>thanks</p>
| 119 | 2009-10-18T16:28:03Z | 33,251,758 | <p>Probably the worst python line I've written so far: </p>
<pre><code>f = lambda x: sys.stdout.write(["2\n",][2*(x==2)-2])
</code></pre>
<p>If x == 2 you print, </p>
<p>if x != 2 you raise. </p>
| 9 | 2015-10-21T05:43:22Z | [
"python",
"lambda"
] |
django Unicode GET Parameter Values | 1,585,439 | <p>I'm trying to get a GET parameter value that looks like this:
<a href="http://someurl/handler.json?&q=%E1%F8%E0%F1%F8%E9" rel="nofollow">http://someurl/handler.json?&q=%E1%F8%E0%F1%F8%E9</a></p>
<p>The q parameter in this case is Hebrew.
I'm trying to read the value using the following code:</p>
<pre><code>request.GET.get("q", None)
</code></pre>
<p>I'm getting gibberish instead of the correct text.
Any idea what's wrong here? Am I missing some setting?</p>
| 0 | 2009-10-18T17:05:09Z | 1,585,467 | <p>The query string is in ISO-8859-8, but Django's default encoding is UTF-8. You will have to change either <a href="http://docs.djangoproject.com/en/dev/ref/settings/#default-charset" rel="nofollow"><code>DEFAULT_CHARSET</code></a> or <a href="http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.encoding" rel="nofollow"><code>HttpRequest.encoding</code></a> to ISO-8859-8 to get the correct Unicode data.</p>
| 3 | 2009-10-18T17:16:32Z | [
"python",
"django",
"unicode"
] |
sending email from wave robot | 1,585,487 | <p>Anyone know how to send an email using google wave python api?</p>
<p>Thanks</p>
| 0 | 2009-10-18T17:24:04Z | 1,585,496 | <p>Being a wave robot is nothing special here - you've got to determine at what point you want to send email, but you haven't told us anything about that, so it's hard to advise you.</p>
<p>When you've worked out what you want to send, just follow the normal instructions for <a href="http://code.google.com/appengine/docs/python/mail/sendingmail.html" rel="nofollow">sending email from Python in AppEngine</a>.</p>
| 5 | 2009-10-18T17:27:38Z | [
"python",
"api",
"google-app-engine",
"google-wave"
] |
Python: Do relative imports mean you can't execute a subpackage by itself? | 1,585,756 | <p>I've recently ported my Python project to run on Python 3.1. For that I had to adopt the policy of relative imports within the submodules and subpackages of my project. I've done that and now the project itself works, but I noticed I can't execute any of the subpackages or submodules in it. If I try, I get "builtins.ValueError: Attempted relative import in non-package". I can only import the whole project.</p>
<p>Is this normal?</p>
| 7 | 2009-10-18T19:12:07Z | 1,585,801 | <p>Yes, it's normal. If you want to execute a module that is also a part of a package (in itself a strange thing to do) you need to have absolute imports. When you execute the module it is not, from the interpreters point of view, a part of a package, but the <code>__main__</code> module. So it wouldn't know where the relative packages are.</p>
<p>The standard way to do it is to have functions in the packages, and separate executable scripts that call the functions, as this enables you to put the executable scripts outside the module, for example in /usr/bin</p>
| 4 | 2009-10-18T19:31:10Z | [
"python",
"import",
"python-3.x"
] |
Python: Do relative imports mean you can't execute a subpackage by itself? | 1,585,756 | <p>I've recently ported my Python project to run on Python 3.1. For that I had to adopt the policy of relative imports within the submodules and subpackages of my project. I've done that and now the project itself works, but I noticed I can't execute any of the subpackages or submodules in it. If I try, I get "builtins.ValueError: Attempted relative import in non-package". I can only import the whole project.</p>
<p>Is this normal?</p>
| 7 | 2009-10-18T19:12:07Z | 1,586,005 | <p>You can use the <code>-m</code> flag of the Python interpreter to run modules in sub-packages (or even packages, in 3.1).</p>
| 3 | 2009-10-18T20:54:54Z | [
"python",
"import",
"python-3.x"
] |
Python: Do relative imports mean you can't execute a subpackage by itself? | 1,585,756 | <p>I've recently ported my Python project to run on Python 3.1. For that I had to adopt the policy of relative imports within the submodules and subpackages of my project. I've done that and now the project itself works, but I noticed I can't execute any of the subpackages or submodules in it. If I try, I get "builtins.ValueError: Attempted relative import in non-package". I can only import the whole project.</p>
<p>Is this normal?</p>
| 7 | 2009-10-18T19:12:07Z | 3,652,088 | <p>I had the <a href="http://stackoverflow.com/questions/3616952/how-to-properly-use-relative-or-absolute-imports-in-python-modules">same problem</a> and I considered the <code>-m</code> switch too hard. </p>
<p>Instead I use this:</p>
<pre><code>try:
from . import bar
except ValueError:
import bar
if __name__ == "__main__":
pass
</code></pre>
| -1 | 2010-09-06T14:05:12Z | [
"python",
"import",
"python-3.x"
] |
Human-readable binary data using Python | 1,585,950 | <p>My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such binary file without a problem.</p>
<p>However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to be able to do it.</p>
<p>I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.</p>
<p>What I know about the binary file:</p>
<p>It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:</p>
<pre><code>CENTRAL_BODY = 'SUN'
</code></pre>
<p>All the file is just a series of similar instructions. I could use that dump code, but I much rather have Python do that.</p>
<p>This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!</p>
<p>Many thanks.</p>
| 2 | 2009-10-18T20:30:28Z | 1,585,957 | <p>You can read the file's content into a string in memory:</p>
<pre><code>thedata = open(thefilename, 'rb').read()
</code></pre>
<p>and then locate a string in it:</p>
<pre><code>where = thedata.find('CENTRAL_BODY')
</code></pre>
<p>and finally slice off the part you care about:</p>
<pre><code>thepart = thedata[where:where+50] # or whatever length
</code></pre>
<p>and display it as you prefer (e.g. find the string value by locating within <code>thepart</code> an <code>=</code> sign, then the first following quote, then the next quote after that).</p>
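<p>Putting those steps together in a self-contained, illustrative sketch (the parameter name and single-quote format follow the dump shown in the question; the in-memory bytes below are made up for demonstration, and in Python 3 the search strings must be bytes too):</p>

```python
# Made-up binary content containing the parameter we care about.
thedata = b"\x00\x07junk CENTRAL_BODY = 'SUN' junk\xff\x03"

where = thedata.find(b"CENTRAL_BODY")
thepart = thedata[where:where + 50]  # or whatever length

# Locate the value: the '=' sign, then the pair of quotes after it.
eq = thepart.index(b"=")
q1 = thepart.index(b"'", eq)
q2 = thepart.index(b"'", q1 + 1)
value = thepart[q1 + 1:q2].decode("ascii")
print(value)  # SUN
```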
| 4 | 2009-10-18T20:34:38Z | [
"python",
"format",
"ascii",
"binary-data"
] |
Human-readable binary data using Python | 1,585,950 | <p>My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such binary file without a problem.</p>
<p>However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to be able to do it.</p>
<p>I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.</p>
<p>What I know about the binary file:</p>
<p>It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:</p>
<pre><code>CENTRAL_BODY = 'SUN'
</code></pre>
<p>All the file is just a series of similar instructions. I could use that dump code, but I much rather have Python do that.</p>
<p>This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!</p>
<p>Many thanks.</p>
| 2 | 2009-10-18T20:30:28Z | 1,585,996 | <p>If it's a binary file, you will need to use the struct module. You will need to know how the data is formatted in the file. If that is not documented, you will have to reverse engineer it. </p>
<p>Do you have source code of the other dumping program? You may be able to just port that to Python</p>
<p>We can probably help you better if we can see what the binary file and the corresponding dump looks like</p>
| 0 | 2009-10-18T20:50:07Z | [
"python",
"format",
"ascii",
"binary-data"
] |
Human-readable binary data using Python | 1,585,950 | <p>My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such binary file without a problem.</p>
<p>However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to be able to do it.</p>
<p>I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.</p>
<p>What I know about the binary file:</p>
<p>It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:</p>
<pre><code>CENTRAL_BODY = 'SUN'
</code></pre>
<p>All the file is just a series of similar instructions. I could use that dump code, but I much rather have Python do that.</p>
<p>This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!</p>
<p>Many thanks.</p>
| 2 | 2009-10-18T20:30:28Z | 1,586,242 | <p>It sounds like this 'dump' program already does what you need: interpreting the binary file. I guess my approach would be to write a python program that can take a dump'ed file, extract the parameters you want and display them.</p>
<p>Then parse it with something like this:</p>
<p>myparms.py:</p>
<pre><code>import sys
d = {}
for line in sys.stdin:
parts = line.split("=",2)
if len(parts) < 2:
continue
k = parts[0].strip()
v = parts[1].strip()
d[k] = v
print d['CENTRAL_BODY']
</code></pre>
<p>Use this like:</p>
<p>dump parameters.bin | python myparms.py</p>
<p>You didn't mention a platform or provide details about the dump'ed format, but this should be a place to start.</p>
| 1 | 2009-10-18T22:38:06Z | [
"python",
"format",
"ascii",
"binary-data"
] |
Human-readable binary data using Python | 1,585,950 | <p>My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such binary file without a problem.</p>
<p>However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to be able to do it.</p>
<p>I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.</p>
<p>What I know about the binary file:</p>
<p>It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:</p>
<pre><code>CENTRAL_BODY = 'SUN'
</code></pre>
<p>All the file is just a series of similar instructions. I could use that dump code, but I much rather have Python do that.</p>
<p>This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!</p>
<p>Many thanks.</p>
| 2 | 2009-10-18T20:30:28Z | 1,586,575 | <p>You have to know the format the data is stored in; there's simply no way around that.</p>
<p>If there's no written spec on it, try to open it in a hex editor and study the format, using the text-dump as a reference. If you can get the source code for the tool that creates the text-dumps, that would help you a lot.</p>
<p>Keep in mind that the data could be scrambled in some way or another, e.g. rot13.</p>
| 0 | 2009-10-19T01:24:08Z | [
"python",
"format",
"ascii",
"binary-data"
] |
PHP, Python, Ruby application with multiple RDBMS | 1,586,008 | <p>I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.</p>
<p>I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.</p>
<p>Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.</p>
<p>By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.</p>
<p>So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?</p>
<p>I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?</p>
<p>Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?</p>
| 2 | 2009-10-18T20:56:01Z | 1,586,035 | <p>It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can do.</p>
| 0 | 2009-10-18T21:04:32Z | [
"php",
"python",
"ruby-on-rails",
"database"
] |
PHP, Python, Ruby application with multiple RDBMS | 1,586,008 | <p>I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.</p>
<p>I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.</p>
<p>Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.</p>
<p>By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.</p>
<p>So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?</p>
<p>I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?</p>
<p>Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?</p>
| 2 | 2009-10-18T20:56:01Z | 1,586,105 | <p>If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide. </p>
<p>You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.</p>
<p>As for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are. </p>
<p>A good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends.</p>
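<p>A minimal sketch of how such isomorphic adapter classes might look (all class names, the table name, and the SQL strings here are hypothetical illustrations, not a definitive design):</p>

```python
from abc import ABC, abstractmethod

class PersistenceAdapter(ABC):
    """The API the model layer programs against; one subclass per back end."""
    @abstractmethod
    def page_query(self, offset, limit):
        """Return the SQL that fetches one page of results."""

class GenericSQLAdapter(PersistenceAdapter):
    """Lowest-common-denominator SQL, portable across many back ends."""
    def page_query(self, offset, limit):
        return ("SELECT * FROM items ORDER BY id "
                "LIMIT %d OFFSET %d" % (limit, offset))

class OracleAdapter(PersistenceAdapter):
    """Tuned variant using an Oracle-style FIRST_ROWS hint (illustrative)."""
    def page_query(self, offset, limit):
        return ("SELECT /*+ FIRST_ROWS(%d) */ * FROM "
                "(SELECT t.*, ROWNUM rn FROM items t WHERE ROWNUM <= %d) "
                "WHERE rn > %d" % (limit, offset + limit, offset))

def load_page(adapter, offset=0, limit=10):
    # Model code only ever sees the common interface.
    return adapter.page_query(offset, limit)

print(load_page(GenericSQLAdapter()))
```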
| 2 | 2009-10-18T21:39:02Z | [
"php",
"python",
"ruby-on-rails",
"database"
] |
PHP, Python, Ruby application with multiple RDBMS | 1,586,008 | <p>I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.</p>
<p>I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.</p>
<p>Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.</p>
<p>By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.</p>
<p>So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?</p>
<p>I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?</p>
<p>Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?</p>
| 2 | 2009-10-18T20:56:01Z | 1,587,287 | <p>It's even more "old fashioned" than modern ORMs, but doesn't <a href="http://en.wikipedia.org/wiki/Open%5FDatabase%5FConnectivity" rel="nofollow">ODBC</a> address this issue?</p>
| 0 | 2009-10-19T06:50:32Z | [
"php",
"python",
"ruby-on-rails",
"database"
] |
PHP, Python, Ruby application with multiple RDBMS | 1,586,008 | <p>I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.</p>
<p>I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.</p>
<p>Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.</p>
<p>By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.</p>
<p>So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?</p>
<p>I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?</p>
<p>Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?</p>
| 2 | 2009-10-18T20:56:01Z | 1,587,887 | <p>You can't have your cake and eat it too; choose one of the following options.</p>
<ul>
<li>Use your database abstraction layer whenever you can, and in the rare cases when you need a hand-made query (e.g. for performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that your database has to offer. In this case deploying the application on a different RDBMS should be trivial.</li>
<li>Use the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.</li>
</ul>
<p>In other words, you should consider how probable it is that your application will be deployed to multiple RDBMSes and make an informed choice.</p>
| 2 | 2009-10-19T10:09:53Z | [
"php",
"python",
"ruby-on-rails",
"database"
] |
django and executing a separate .py to manipulate a database | 1,586,041 | <p>I want to execute an arbitrary .py file, say foo.py in the myproject/myapp folder, using a cron job at certain intervals.</p>
<p>I have this basic model in my model.py for the app:</p>
<pre><code>class Mymodel(models.Model):
content = models.TextField()
</code></pre>
<p>Say I have this in my foo.py, I want to check if there is any Mymodel object that has a content field as same as mytext, if not make a new Mymodel with the mytext as content, if already existing do nothing.</p>
<pre><code><do django importings>
mytext = "something here"
if Mymodel.filter(content=mytext) == null:
newitem = Mymodel(content=mytext)
newitem.save()
else:
pass
</code></pre>
<p>So here is my question: what django imports shall I be doing? Also, how can I check if the query has no items (I don't know if <code>Mymodel.filter(content=mytext) == null</code> would work). Also I don't know if this is an efficient way to achieve my goal, as the number of Mymodel objects will be high. </p>
<p>Thanks</p>
| 1 | 2009-10-18T21:07:17Z | 1,586,115 | <p>You have two separate questions here - it would have been better to split them out.</p>
<p>To run a separate script, you're best off creating a <code>./manage.py</code> command. See <a href="http://docs.djangoproject.com/en/dev/howto/custom-management-commands/" rel="nofollow">the documentation</a> on how to do this.</p>
<p>For your second question, the code you give is not valid Python, since there is no 'null' value - you mean <code>None</code>. However even then the code will not work, as that isn't how you write that query using the Django ORM. You want something like this:</p>
<pre><code>if not MyModel.objects.filter(content=mytext).count():
</code></pre>
<p>which asks the database how many items there are with content=mytext, and is True if there are none.</p>
| 5 | 2009-10-18T21:43:28Z | [
"python",
"django",
"django-models"
] |
django and executing a separate .py to manipulate a database | 1,586,041 | <p>I want to execute an arbitrary .py file, say foo.py in the myproject/myapp folder, using a cron job at certain intervals.</p>
<p>I have this basic model in my model.py for the app:</p>
<pre><code>class Mymodel(models.Model):
content = models.TextField()
</code></pre>
<p>Say I have this in my foo.py, I want to check if there is any Mymodel object that has a content field as same as mytext, if not make a new Mymodel with the mytext as content, if already existing do nothing.</p>
<pre><code><do django importings>
mytext = "something here"
if Mymodel.filter(content=mytext) == null:
newitem = Mymodel(content=mytext)
newitem.save()
else:
pass
</code></pre>
<p>So here is my question: what django imports shall I be doing? Also, how can I check if the query has no items? (I don't know if <code>Mymodel.filter(content=mytext) == null</code> would work.) I also don't know if this is an efficient way to achieve my goal, as the number of Mymodel objects will be high.</p>
<p>Thanks</p>
| 1 | 2009-10-18T21:07:17Z | 1,586,561 | <p>You might also check out <a href="http://github.com/django-extensions/django-extensions" rel="nofollow">django-extensions</a>, which has a built-in manage.py extension called "runscript" that executes any python script in your django project's context.</p>
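<p>A sketch of what such a script might look like (the <code>scripts/</code> package location and the <code>run()</code> entry point follow django-extensions' runscript convention; the file name and model query here are illustrative, mirroring the question):</p>

```python
# Hypothetical myapp/scripts/ensure_item.py
def run():
    # Imported inside run() because runscript configures the Django
    # environment before calling this function.
    from myapp.models import Mymodel

    mytext = "something here"
    # count() == 0 means no existing row with this content
    if not Mymodel.objects.filter(content=mytext).count():
        Mymodel.objects.create(content=mytext)
```

<p>It would then be invoked with <code>python manage.py runscript ensure_item</code>, e.g. from a cron entry.</p>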
| 2 | 2009-10-19T01:19:40Z | [
"python",
"django",
"django-models"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,157 | <p>Simple solution: <strong>Use the <a href="http://en.wikipedia.org/wiki/HSL%5Fand%5FHSV">HSL or HSV</a> color space</strong> instead of rgb (convert it to RGB afterwards if you need this). The difference is the meaning of the tuple: Where RGB means values for Red, Green and Blue, in HSL the H is the color (120 degree or 0.33 meaning green for example) and the S is for saturation and the V for the brightness. So keep the H at a fixed value (or for even more random colors you could randomize it by add/sub a small random number) and randomize the S and the V. See the <a href="http://en.wikipedia.org/wiki/HSL%5Fand%5FHSV">wikipedia</a> article.</p>
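<p>For example, a minimal sketch of that idea using Python's standard <code>colorsys</code> module (the 0.33 hue and the saturation/value ranges here are arbitrary illustrative choices):</p>

```python
import random
import colorsys

def random_green():
    h = 0.33                      # hue fixed near green on colorsys's 0-1 wheel
    s = random.uniform(0.5, 1.0)  # random saturation
    v = random.uniform(0.3, 1.0)  # random brightness
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    # scale the 0-1 floats to the usual 0-255 integer channels
    return tuple(int(round(x * 255)) for x in (r, g, b))

color = random_green()
```

<p>Randomizing <code>h</code> by a small amount around 0.33, as suggested above, widens the variety without leaving the green band.</p>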
| 54 | 2009-10-18T22:04:37Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,163 | <p>Check out the <code>colorsys</code> module:</p>
<p><a href="http://docs.python.org/library/colorsys.html" rel="nofollow">http://docs.python.org/library/colorsys.html</a></p>
<p>Use the HSL or HSV color space. Randomize the hue to be close to green, then choose completely random stuff for the saturation and V (brightness).</p>
| 16 | 2009-10-18T22:09:00Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,165 | <p>So in this case you are lucky enough to want variations on a primary color, but for artistic uses like this it is better to specify color wheel coordinates rather than primary color magnitudes.</p>
<p>You probably want something from the <code>colorsys</code> module like:</p>
<pre><code>colorsys.hsv_to_rgb(h, s, v)
Convert the color from HSV coordinates to RGB coordinates.
</code></pre>
| 5 | 2009-10-18T22:09:31Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,181 | <p>The simplest way to do this is to make sure that the red and blue components are the same, like this: (Forgive my Python)</p>
<pre><code>rb = randint(100, 200)
color = (rb, randint(120, 255), rb)
</code></pre>
| 0 | 2009-10-18T22:19:17Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,206 | <p>What you want is to work in terms of <a href="http://en.wikipedia.org/wiki/HSL%5Fand%5FHSV" rel="nofollow">HSL</a> instead of RGB. You could find a range of hue that satisfies "greenish" and pick a random hue from it. You could also pick random saturation and lightness but you'll probably want to keep your saturation near 1 and your lightness around 0.5 but you can play with them.</p>
<p>Below is some actionscript code to convert HSL to RGB. I haven't touched python in a while or I'd post the python version.</p>
<p>I find that greenish is something like 0.47*PI to 0.8*PI.</p>
<pre><code> /**
@param h hue [0, 2PI]
@param s saturation [0,1]
@param l lightness [0,1]
@return object {r,g,b} {[0,1],[0,1][0,1]}
*/
public function hslToRGB(h:Number, s:Number, l:Number):Color
{
var q:Number = (l<0.5)?l*(1+s):l+s-l*s;
var p:Number = 2*l-q;
var h_k:Number = h/(Math.PI*2);
var t_r:Number = h_k+1/3;
var t_g:Number = h_k;
var t_b:Number = h_k-1/3;
if (t_r < 0) ++t_r; else if (t_r > 1) --t_r;
if (t_g < 0) ++t_g; else if (t_g > 1) --t_g;
if (t_b < 0) ++t_b; else if (t_b > 1) --t_b;
var c:Color = new Color();
if (t_r < 1/6) c.r = p+((q-p)*6*t_r);
else if (t_r < 1/2) c.r = q;
else if (t_r < 2/3) c.r = p+((q-p)*6*(2/3-t_r));
else c.r = p;
if (t_g < 1/6) c.g = p+((q-p)*6*t_g);
else if (t_g < 1/2) c.g = q;
else if (t_g < 2/3) c.g = p+((q-p)*6*(2/3-t_g));
else c.g = p;
if (t_b < 1/6) c.b = p+((q-p)*6*t_b);
else if (t_b < 1/2) c.b = q;
else if (t_b < 2/3) c.b = p+((q-p)*6*(2/3-t_b));
else c.b = p;
return c;
}
</code></pre>
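<p>For reference, a rough Python equivalent of the above idea, leaning on the standard library's <code>colorsys</code> instead of a hand-rolled converter (note that <code>colorsys</code> takes hue as a 0-1 fraction rather than radians, and its HLS argument order is hue, lightness, saturation):</p>

```python
import math
import random
import colorsys

def random_greenish():
    # the "greenish" hue band suggested above: 0.47*PI to 0.8*PI radians,
    # rescaled to colorsys's 0-1 hue wheel by dividing by 2*PI
    h = random.uniform(0.47 * math.pi, 0.8 * math.pi) / (2 * math.pi)
    l = 0.5  # lightness around 0.5
    s = 1.0  # saturation near 1
    return colorsys.hls_to_rgb(h, l, s)  # (r, g, b) floats in 0-1

r, g, b = random_greenish()
```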
| 1 | 2009-10-18T22:27:17Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,291 | <p>As others have suggested, generating random colours is much easier in the HSV colour space (or HSL, the difference is pretty irrelevant for this)</p>
<p>So, code to generate random "green'ish" colours, and (for demonstration purposes) display them as a series of simple coloured HTML span tags:</p>
<pre><code>#!/usr/bin/env python2.5
"""Random green colour generator, written by dbr, for
http://stackoverflow.com/questions/1586147/how-to-generate-random-greenish-colors
"""
def hsv_to_rgb(h, s, v):
"""Converts HSV value to RGB values
Hue is in range 0-359 (degrees), value/saturation are in range 0-1 (float)
Direct implementation of:
http://en.wikipedia.org/wiki/HSL_and_HSV#Conversion_from_HSV_to_RGB
"""
h, s, v = [float(x) for x in (h, s, v)]
    hi = int(h / 60) % 6
    f = (h / 60) - int(h / 60)
p = v * (1 - s)
q = v * (1 - f * s)
t = v * (1 - (1 - f) * s)
if hi == 0:
return v, t, p
elif hi == 1:
return q, v, p
elif hi == 2:
return p, v, t
elif hi == 3:
return p, q, v
elif hi == 4:
return t, p, v
elif hi == 5:
return v, p, q
def test():
"""Check examples on..
http://en.wikipedia.org/wiki/HSL_and_HSV#Examples
..work correctly
"""
def verify(got, expected):
if got != expected:
raise AssertionError("Got %s, expected %s" % (got, expected))
verify(hsv_to_rgb(0, 1, 1), (1, 0, 0))
verify(hsv_to_rgb(120, 0.5, 1.0), (0.5, 1, 0.5))
verify(hsv_to_rgb(240, 1, 0.5), (0, 0, 0.5))
def main():
"""Generate 50 random RGB colours, and create some simple coloured HTML
span tags to verify them.
"""
test() # Run simple test suite
from random import randint, uniform
for i in range(50):
# Tweak these values to change colours/variance
h = randint(90, 140) # Select random green'ish hue from hue wheel
s = uniform(0.2, 1)
v = uniform(0.3, 1)
r, g, b = hsv_to_rgb(h, s, v)
        # Convert to 0-255 range for HTML output
r, g, b = [x*255 for x in (r, g, b)]
print "<span style='background:rgb(%i, %i, %i)'>&nbsp;&nbsp;</span>" % (r, g, b)
if __name__ == '__main__':
main()
</code></pre>
<p>The output (when viewed in a web-browser) should look something along the lines of:</p>
<p><img src="http://i.stack.imgur.com/ae9V4.png" alt="Example output, showing random green colours"></p>
<p><em>Edit</em>: I didn't know about the colorsys module. Instead of the above <code>hsv_to_rgb</code> function, you could use <a href="http://docs.python.org/library/colorsys.html#colorsys.hsv_to_rgb" rel="nofollow">colorsys.hsv_to_rgb</a>, which makes the code much shorter (it's not quite a drop-in replacement, as my <code>hsv_to_rgb</code> function expects the hue to be in degrees instead of 0-1):</p>
<pre><code>#!/usr/bin/env python2.5
from colorsys import hsv_to_rgb
from random import randint, uniform
for x in range(50):
h = uniform(0.25, 0.38) # Select random green'ish hue from hue wheel
s = uniform(0.2, 1)
v = uniform(0.3, 1)
r, g, b = hsv_to_rgb(h, s, v)
    # Convert to 0-255 range for HTML output
r, g, b = [x*255 for x in (r, g, b)]
print "<span style='background:rgb(%i, %i, %i)'>&nbsp;&nbsp;</span>" % (r, g, b)
</code></pre>
| 20 | 2009-10-18T23:10:07Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,296 | <p>If you stick with RGB, you basically just need to make sure the G value is greater than the R and B, and try to keep the blue and red values similar so that the hue doesn't go too crazy. Extending from Slaks, maybe something like (I know next to nothing about Python):</p>
<pre><code>greenval = randint(100, 255)
redval = randint(20,(greenval - 60))
blueval = randint((redval - 20), (redval + 20))
color = (redval, greenval, blueval)
</code></pre>
| 9 | 2009-10-18T23:13:10Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,586,315 | <p>The solution with HSx color space is a very good one. However, if you need something extremely simplistic and have no specific requirements about the distribution of the colors (like uniformity), a simplistic RGB-based solution would be just to make sure that G value is greater than both R and B</p>
<pre><code>rr = randint(100, 200)
rb = randint(100, 200)
rg = randint(max(rr, rb) + 1, 255)
</code></pre>
<p>This will give you "greenish" colors. Some of them will be ever so slightly greenish. You can increase the guaranteed degree of greenishness by increasing (absolutely or relatively) the lower bound in the last <code>randint</code> call.</p>
| 3 | 2009-10-18T23:25:04Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
How to generate random 'greenish' colors | 1,586,147 | <p>Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:</p>
<pre><code>color = (randint(100, 200), randint(120, 255), randint(100, 200))
</code></pre>
<p>That mostly works, but I get brownish colors a lot. </p>
| 32 | 2009-10-18T22:00:37Z | 1,590,881 | <p>I'd go with the HSV approach everyone else mentioned. Another approach would be to get a nice high resolution photo with some greenery in it, crop out the non-green parts, and pick random pixels from it using <a href="http://www.pythonware.com/products/pil/" rel="nofollow">PIL</a>.</p>
| 0 | 2009-10-19T20:19:33Z | [
"python",
"language-agnostic",
"random",
"colors"
] |
gobject io monitoring + nonblocking reads | 1,586,342 | <p>I've got a problem with using the <code>io_add_watch</code> monitor in python (via gobject). I want to do a nonblocking read of the whole buffer after every notification. Here's the code (shortened a bit):</p>
<pre><code>class SomeApp(object):
def __init__(self):
# some other init that does a lot of stderr debug writes
fl = fcntl.fcntl(0, fcntl.F_GETFL, 0)
fcntl.fcntl(0, fcntl.F_SETFL, fl | os.O_NONBLOCK)
print "hooked", gobject.io_add_watch(0, gobject.IO_IN | gobject.IO_PRI, self.got_message, [""])
self.app = gobject.MainLoop()
def run(self):
print "ready"
self.app.run()
def got_message(self, fd, condition, data):
print "reading now"
data[0] += os.read(0, 1024)
print "got something", fd, condition, data
return True
gobject.threads_init()
SomeApp().run()
</code></pre>
<p>Here's the trick - when I run the program without debug output activated, I don't get the <code>got_message</code> calls. When I write a lot of stuff to the stderr first, the problem disappears. If I don't write anything apart from the prints visible in this code, I don't get the stdin message signals. Another interesting thing is that when I try to run the same app with stderr debug enabled but via <code>strace</code> (to check if there are any fcntl / ioctl calls I missed), the problem appears again.</p>
<p>So in short: if I write a lot to stderr first without strace, <code>io_watch</code> works. If I write a lot with strace, or don't write at all <code>io_watch</code> doesn't work.</p>
<p>The "some other init" part takes some time, so if I type some text before I see "hooked 2" output and then press "ctrl+c" after "ready", the <code>got_message</code> callback is called, but the read call throws EAGAIN, so the buffer seems to be empty.</p>
<p>Strace log related to the stdin:</p>
<pre><code>ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
fcntl(0, F_GETFL) = 0xa002 (flags O_RDWR|O_ASYNC|O_LARGEFILE)
fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) = 0
fcntl(0, F_GETFL) = 0xa802 (flags O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE)
</code></pre>
<p>Does anyone have some ideas on what's going on here?</p>
<p><hr /></p>
<p>EDIT: Another clue. I tried to refactor the app to do the reading in a different thread and pass it back via a pipe. It "kind of" works:</p>
<pre><code>...
rpipe, wpipe = os.pipe()
stopped = threading.Event()
self.stdreader = threading.Thread(name = "reader", target = self.std_read_loop, args = (wpipe, stopped))
self.stdreader.start()
new_data = ""
print "hooked", gobject.io_add_watch(rpipe, gobject.IO_IN | gobject.IO_PRI, self.got_message, [new_data])
def std_read_loop(self, wpipe, stop_event):
while True:
try:
new_data = os.read(0, 1024)
while len(new_data) > 0:
l = os.write(wpipe, new_data)
new_data = new_data[l:]
except OSError, e:
if stop_event.isSet():
break
time.sleep(0.1)
...
</code></pre>
<p>It's surprising that if I just put the same text in a new pipe, everything starts to work. The problem is that:</p>
<ul>
<li>the first line is not "noticed" at all - I get only the second and following lines</li>
<li>it's fugly</li>
</ul>
<p>Maybe that will give someone else a clue on why that's happening?</p>
| 3 | 2009-10-18T23:38:03Z | 1,588,171 | <p>The <a href="http://www.pygtk.org/pygtk2reference/gobject-functions.html#function-gobject--io-add-watch" rel="nofollow">documentation</a> says you should return <code>TRUE</code> from the callback or it will be removed from the list of event sources.</p>
| 0 | 2009-10-19T11:29:38Z | [
"python",
"input",
"glib",
"gobject"
] |
gobject io monitoring + nonblocking reads | 1,586,342 | <p>I've got a problem with using the <code>io_add_watch</code> monitor in python (via gobject). I want to do a nonblocking read of the whole buffer after every notification. Here's the code (shortened a bit):</p>
<pre><code>class SomeApp(object):
def __init__(self):
# some other init that does a lot of stderr debug writes
fl = fcntl.fcntl(0, fcntl.F_GETFL, 0)
fcntl.fcntl(0, fcntl.F_SETFL, fl | os.O_NONBLOCK)
print "hooked", gobject.io_add_watch(0, gobject.IO_IN | gobject.IO_PRI, self.got_message, [""])
self.app = gobject.MainLoop()
def run(self):
print "ready"
self.app.run()
def got_message(self, fd, condition, data):
print "reading now"
data[0] += os.read(0, 1024)
print "got something", fd, condition, data
return True
gobject.threads_init()
SomeApp().run()
</code></pre>
<p>Here's the trick - when I run the program without debug output activated, I don't get the <code>got_message</code> calls. When I write a lot of stuff to the stderr first, the problem disappears. If I don't write anything apart from the prints visible in this code, I don't get the stdin message signals. Another interesting thing is that when I try to run the same app with stderr debug enabled but via <code>strace</code> (to check if there are any fcntl / ioctl calls I missed), the problem appears again.</p>
<p>So in short: if I write a lot to stderr first without strace, <code>io_watch</code> works. If I write a lot with strace, or don't write at all <code>io_watch</code> doesn't work.</p>
<p>The "some other init" part takes some time, so if I type some text before I see "hooked 2" output and then press "ctrl+c" after "ready", the <code>got_message</code> callback is called, but the read call throws EAGAIN, so the buffer seems to be empty.</p>
<p>Strace log related to the stdin:</p>
<pre><code>ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
fcntl(0, F_GETFL) = 0xa002 (flags O_RDWR|O_ASYNC|O_LARGEFILE)
fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) = 0
fcntl(0, F_GETFL) = 0xa802 (flags O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE)
</code></pre>
<p>Does anyone have some ideas on what's going on here?</p>
<p><hr /></p>
<p>EDIT: Another clue. I tried to refactor the app to do the reading in a different thread and pass it back via a pipe. It "kind of" works:</p>
<pre><code>...
rpipe, wpipe = os.pipe()
stopped = threading.Event()
self.stdreader = threading.Thread(name = "reader", target = self.std_read_loop, args = (wpipe, stopped))
self.stdreader.start()
new_data = ""
print "hooked", gobject.io_add_watch(rpipe, gobject.IO_IN | gobject.IO_PRI, self.got_message, [new_data])
def std_read_loop(self, wpipe, stop_event):
while True:
try:
new_data = os.read(0, 1024)
while len(new_data) > 0:
l = os.write(wpipe, new_data)
new_data = new_data[l:]
except OSError, e:
if stop_event.isSet():
break
time.sleep(0.1)
...
</code></pre>
<p>It's surprising that if I just put the same text in a new pipe, everything starts to work. The problem is that:</p>
<ul>
<li>the first line is not "noticed" at all - I get only the second and following lines</li>
<li>it's fugly</li>
</ul>
<p>Maybe that will give someone else a clue on why that's happening?</p>
| 3 | 2009-10-18T23:38:03Z | 1,621,815 | <p>This sounds like a race condition in which there is some delay to setting your callback, or else there is a change in the environment which affects whether or not you can set the callback.</p>
<p>I would look carefully at what happens before you call <code>io_add_watch()</code>. For instance the Python fcntl docs say:</p>
<blockquote>
<p>All functions in this module take a
file descriptor fd as their first
argument. This can be an integer file
descriptor, such as returned by
sys.stdin.fileno(), or a file object,
such as sys.stdin itself, which
provides a fileno() which returns a
genuine file descriptor.</p>
</blockquote>
<p>Clearly that is not what you are doing when you assume that STDIN will have FD == 0. I would change that first and try again.</p>
<p>The other thing is that if the FD is already blocked, then your process could be waiting while other non-blocked processes are running, therefore there is a timing difference depending on what you do first. What happens if you refactor the fcntl stuff so that it is done soon after the program starts, even before importing the GTK modules?</p>
<p>I'm not sure that I understand why a program using the GTK GUI would want to read from the standard input in the first place. If you are actually trying to capture the output of another process, you should use the subprocess module to set up a pipe, then <code>io_add_watch()</code> on the pipe like so:</p>
<pre><code>proc = subprocess.Popen(command, stdout = subprocess.PIPE)
gobject.io_add_watch(proc.stdout, glib.IO_IN, self.write_to_buffer )
</code></pre>
<p>Again, in this example we make sure that we have a valid opened FD before calling <code>io_add_watch()</code>.</p>
<p>Normally, when <code>gobject.io_add_watch()</code> is used, it is called just before <code>gobject.MainLoop()</code>, with the watch set up to catch <code>IO_IN</code> in exactly this way.</p>
| 2 | 2009-10-25T19:27:24Z | [
"python",
"input",
"glib",
"gobject"
] |
gobject io monitoring + nonblocking reads | 1,586,342 | <p>I've got a problem with using the <code>io_add_watch</code> monitor in python (via gobject). I want to do a nonblocking read of the whole buffer after every notification. Here's the code (shortened a bit):</p>
<pre><code>class SomeApp(object):
def __init__(self):
# some other init that does a lot of stderr debug writes
fl = fcntl.fcntl(0, fcntl.F_GETFL, 0)
fcntl.fcntl(0, fcntl.F_SETFL, fl | os.O_NONBLOCK)
print "hooked", gobject.io_add_watch(0, gobject.IO_IN | gobject.IO_PRI, self.got_message, [""])
self.app = gobject.MainLoop()
def run(self):
print "ready"
self.app.run()
def got_message(self, fd, condition, data):
print "reading now"
data[0] += os.read(0, 1024)
print "got something", fd, condition, data
return True
gobject.threads_init()
SomeApp().run()
</code></pre>
<p>Here's the trick - when I run the program without debug output activated, I don't get the <code>got_message</code> calls. When I write a lot of stuff to the stderr first, the problem disappears. If I don't write anything apart from the prints visible in this code, I don't get the stdin message signals. Another interesting thing is that when I try to run the same app with stderr debug enabled but via <code>strace</code> (to check if there are any fcntl / ioctl calls I missed), the problem appears again.</p>
<p>So in short: if I write a lot to stderr first without strace, <code>io_watch</code> works. If I write a lot with strace, or don't write at all <code>io_watch</code> doesn't work.</p>
<p>The "some other init" part takes some time, so if I type some text before I see "hooked 2" output and then press "ctrl+c" after "ready", the <code>got_message</code> callback is called, but the read call throws EAGAIN, so the buffer seems to be empty.</p>
<p>Strace log related to the stdin:</p>
<pre><code>ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
fcntl(0, F_GETFL) = 0xa002 (flags O_RDWR|O_ASYNC|O_LARGEFILE)
fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) = 0
fcntl(0, F_GETFL) = 0xa802 (flags O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE)
</code></pre>
<p>Does anyone have some ideas on what's going on here?</p>
<p><hr /></p>
<p>EDIT: Another clue. I tried to refactor the app to do the reading in a different thread and pass it back via a pipe. It "kind of" works:</p>
<pre><code>...
rpipe, wpipe = os.pipe()
stopped = threading.Event()
self.stdreader = threading.Thread(name = "reader", target = self.std_read_loop, args = (wpipe, stopped))
self.stdreader.start()
new_data = ""
print "hooked", gobject.io_add_watch(rpipe, gobject.IO_IN | gobject.IO_PRI, self.got_message, [new_data])
def std_read_loop(self, wpipe, stop_event):
while True:
try:
new_data = os.read(0, 1024)
while len(new_data) > 0:
l = os.write(wpipe, new_data)
new_data = new_data[l:]
except OSError, e:
if stop_event.isSet():
break
time.sleep(0.1)
...
</code></pre>
<p>It's surprising that if I just put the same text in a new pipe, everything starts to work. The problem is that:</p>
<ul>
<li>the first line is not "noticed" at all - I get only the second and following lines</li>
<li>it's fugly</li>
</ul>
<p>Maybe that will give someone else a clue on why that's happening?</p>
| 3 | 2009-10-18T23:38:03Z | 1,638,863 | <p>What happens if you hook the callback first, prior to any stderr output? Does it still get called when you have debug output enabled?</p>
<p>Also, I suppose you should probably be repeatedly calling <code>os.read()</code> in your handler until it gives no data, in case >1024 bytes become ready between calls.</p>
<p>Have you tried using the <code>select</code> module in a background thread to emulate <code>gio</code> functionality? Does that work? What platform is this and what kind of FD are you dealing with? (file? socket? pipe?)</p>
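<p>A minimal sketch of that <code>select</code>-based idea (a pipe stands in here for the real descriptor being watched; the thread's original code is Python 2, so the bytes literal below would just be <code>"hello"</code> there):</p>

```python
import os
import select

r, w = os.pipe()       # stand-in for the descriptor being watched
os.write(w, b"hello")  # make the read end readable

# wait up to 1 second for the read end to become readable
readable, _, _ = select.select([r], [], [], 1.0)
if r in readable:
    data = os.read(r, 1024)  # descriptor is ready, so this won't block
os.close(r)
os.close(w)
```

<p>Note that on Windows <code>select.select()</code> only accepts sockets, so this particular fallback is Unix-only for pipes and stdin.</p>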
| 0 | 2009-10-28T17:39:19Z | [
"python",
"input",
"glib",
"gobject"
] |
Race-condition creating folder in Python | 1,586,648 | <p>I have a urllib2 caching module, which sporadically crashes because of the following code:</p>
<pre><code>if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
</code></pre>
<p>The problem is, by the time the second line is being executed, the folder may exist, and will error:</p>
<pre> File ".../cache.py", line 103, in __init__
os.mkdir(self.cache_location)
OSError: [Errno 17] File exists: '/tmp/examplecachedir/'</pre>
<p>This is because the script is simultaneously launched numerous times, by third-party code I have no control over.</p>
<p>The code (before I attempted to fix the bug) can be found <a href="http://github.com/dbr/tvdb%5Fapi/commit/e7429cce89fb2406efce6d81336e2ffa01479976">here, on github</a></p>
<p>I can't use the <a href="http://docs.python.org/library/tempfile.html#tempfile.mkdtemp">tempfile.mkstemp</a>, as it solves the race condition by using a randomly named directory (<a href="http://svn.python.org/projects/python/trunk/Lib/tempfile.py">tempfile.py source here</a>), which would defeat the purpose of the cache.</p>
<p>I don't want to simply discard the error, as the same Errno 17 error is raised if the folder name exists as a file (a different failure), for example:</p>
<pre>$ touch blah
$ python
>>> import os
>>> os.mkdir("blah")
Traceback (most recent call last):
File "", line 1, in
OSError: [Errno 17] File exists: 'blah'
>>></pre>
<p>I cannot use <code>threading.RLock</code> as the code is called from multiple processes.</p>
<p>So, I tried writing a simple file-based lock (<a href="http://github.com/dbr/tvdb%5Fapi/blob/343df228d519ac5c8895a1e6bdab8d259a64cdb9/cache.py">that version can be found here</a>), but this has a problem: it creates the lockfile one level up, so <code>/tmp/example.lock</code> for <code>/tmp/example/</code>, which breaks if you use <code>/tmp/</code> as a cache dir (as it tries to make <code>/tmp.lock</code>)..</p>
<p>In short, I need to cache <code>urllib2</code> responses to disc. To do this, I need to access a known directory (creating it, if required), in a multiprocess safe way. It needs to work on OS X, Linux and Windows.</p>
<p>Thoughts? The only alternative solution I can think of is to rewrite the cache module using SQLite3 storage, rather than files.</p>
| 13 | 2009-10-19T01:52:13Z | 1,586,663 | <p>Could you catch the exception and then test whether the file exists as a directory or not?</p>
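<p>A sketch of that idea: attempt the <code>mkdir</code> unconditionally, and only re-raise when the name does not end up being a directory (which distinguishes losing the race, which is fine, from the name being taken by a plain file, which is a real error):</p>

```python
import os

def ensure_dir(path):
    """Create path if needed, tolerating a concurrent mkdir of the same path."""
    try:
        os.mkdir(path)
    except OSError:
        # Either another process won the race (path is now a directory --
        # fine), or the name exists as a plain file (a real error).
        if not os.path.isdir(path):
            raise
```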
| 2 | 2009-10-19T01:57:54Z | [
"python",
"caching",
"race-condition"
] |
Race-condition creating folder in Python | 1,586,648 | <p>I have a urllib2 caching module, which sporadically crashes because of the following code:</p>
<pre><code>if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
</code></pre>
<p>The problem is, by the time the second line is being executed, the folder may exist, and will error:</p>
<pre> File ".../cache.py", line 103, in __init__
os.mkdir(self.cache_location)
OSError: [Errno 17] File exists: '/tmp/examplecachedir/'</pre>
<p>This is because the script is simultaneously launched numerous times, by third-party code I have no control over.</p>
<p>The code (before I attempted to fix the bug) can be found <a href="http://github.com/dbr/tvdb%5Fapi/commit/e7429cce89fb2406efce6d81336e2ffa01479976">here, on github</a></p>
<p>I can't use the <a href="http://docs.python.org/library/tempfile.html#tempfile.mkdtemp">tempfile.mkstemp</a>, as it solves the race condition by using a randomly named directory (<a href="http://svn.python.org/projects/python/trunk/Lib/tempfile.py">tempfile.py source here</a>), which would defeat the purpose of the cache.</p>
<p>I don't want to simply discard the error, as the same Errno 17 error is raised if the folder name exists as a file (a different failure), for example:</p>
<pre>$ touch blah
$ python
>>> import os
>>> os.mkdir("blah")
Traceback (most recent call last):
File "", line 1, in
OSError: [Errno 17] File exists: 'blah'
>>></pre>
<p>I cannot use <code>threading.RLock</code> as the code is called from multiple processes.</p>
<p>So, I tried writing a simple file-based lock (<a href="http://github.com/dbr/tvdb%5Fapi/blob/343df228d519ac5c8895a1e6bdab8d259a64cdb9/cache.py">that version can be found here</a>), but this has a problem: it creates the lockfile one level up, so <code>/tmp/example.lock</code> for <code>/tmp/example/</code>, which breaks if you use <code>/tmp/</code> as a cache dir (as it tries to make <code>/tmp.lock</code>)..</p>
<p>In short, I need to cache <code>urllib2</code> responses to disc. To do this, I need to access a known directory (creating it, if required), in a multiprocess safe way. It needs to work on OS X, Linux and Windows.</p>
<p>Thoughts? The only alternative solution I can think of is to rewrite the cache module using SQLite3 storage, rather than files.</p>
| 13 | 2009-10-19T01:52:13Z | 1,586,668 | <p>Instead of </p>
<pre><code>if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
</code></pre>
<p>you could do</p>
<pre><code>try:
os.makedirs(self.cache_location)
except OSError:
pass
</code></pre>
<p>You would end up with the same <em>functionality</em>.</p>
<p><sub>DISCLAIMER: I don't know how Pythonic this might be.</sub></p>
<p><hr /></p>
<p>Using <code>SQLite3</code>, <em>might</em> be a bit of overkill, but would add a <em>lot</em> of functionality and flexibility to your use case.</p>
<p>If you have to do a lot of "selecting", concurrent inserting and filtering, it's a great idea to use <code>SQLite3</code>, as it won't add too much complexity over simple files (it could be argued that it removes complexity).</p>
<p><hr /></p>
<p>Rereading your question (and comments) I can better understand your problem.</p>
<p>What is the possibility that a <strong>file</strong> could create the same race condition?</p>
<p>If it is small enough, then I'd do something like:</p>
<pre><code>if not os.path.isfile(self.cache_location):
try:
os.makedirs(self.cache_location)
except OSError:
pass
</code></pre>
<p>Also, reading your code, I'd change </p>
<pre><code>else:
# Our target dir is already a file, or different error,
# relay the error!
raise OSError(e)
</code></pre>
<p>to</p>
<pre><code>else:
# Our target dir is already a file, or different error,
# relay the error!
raise
</code></pre>
<p>as it's really what you want, Python to reraise the exact same exception <sub>(just nitpicking)</sub>.</p>
<p><hr /></p>
<p>One more thing: maybe <a href="http://stackoverflow.com/questions/1474052/synchronize-shell-script-execution" title="Synchronize shell script execution">this</a> could be of use to you (Unix-like only).</p>
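<p>As a side note for readers on later Python versions: from Python 3.2 this pattern is built in, since <code>os.makedirs</code> accepts an <code>exist_ok</code> flag, and it still raises if the path exists as a regular file, which is exactly the distinction the question cares about. A sketch (the helper name is made up):</p>

```python
import os

def ensure_cache_dir(path):
    # exist_ok=True swallows "already a directory", but a plain
    # file sitting at the same path still raises FileExistsError.
    os.makedirs(path, exist_ok=True)
```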
| 11 | 2009-10-19T02:00:27Z | [
"python",
"caching",
"race-condition"
] |
Race-condition creating folder in Python | 1,586,648 | <p>I have a urllib2 caching module, which sporadically crashes because of the following code:</p>
<pre><code>if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
</code></pre>
<p>The problem is, by the time the second line is being executed, the folder may exist, and will error:</p>
<pre> File ".../cache.py", line 103, in __init__
os.mkdir(self.cache_location)
OSError: [Errno 17] File exists: '/tmp/examplecachedir/'</pre>
<p>This is because the script is simultaneously launched numerous times, by third-party code I have no control over.</p>
<p>The code (before I attempted to fix the bug) can be found <a href="http://github.com/dbr/tvdb%5Fapi/commit/e7429cce89fb2406efce6d81336e2ffa01479976">here, on github</a></p>
<p>I can't use the <a href="http://docs.python.org/library/tempfile.html#tempfile.mkdtemp">tempfile.mkstemp</a>, as it solves the race condition by using a randomly named directory (<a href="http://svn.python.org/projects/python/trunk/Lib/tempfile.py">tempfile.py source here</a>), which would defeat the purpose of the cache.</p>
<p>I don't want to simply discard the error, as the same Errno 17 error is raised if the folder name exists as a file (a different error), for example:</p>
<pre>$ touch blah
$ python
>>> import os
>>> os.mkdir("blah")
Traceback (most recent call last):
File "", line 1, in
OSError: [Errno 17] File exists: 'blah'
>>></pre>
<p>I cannot use <code>threading.RLock</code>, as the code is called from multiple processes.</p>
<p>So, I tried writing a simple file-based lock (<a href="http://github.com/dbr/tvdb%5Fapi/blob/343df228d519ac5c8895a1e6bdab8d259a64cdb9/cache.py">that version can be found here</a>), but this has a problem: it creates the lockfile one level up, so <code>/tmp/example.lock</code> for <code>/tmp/example/</code>, which breaks if you use <code>/tmp/</code> as a cache dir (as it tries to make <code>/tmp.lock</code>)..</p>
<p>In short, I need to cache <code>urllib2</code> responses to disc. To do this, I need to access a known directory (creating it, if required), in a multiprocess safe way. It needs to work on OS X, Linux and Windows.</p>
<p>Thoughts? The only alternative solution I can think of is to rewrite the cache module using SQLite3 storage, rather than files.</p>
| 13 | 2009-10-19T01:52:13Z | 1,586,753 | <p>When you have race conditions, EAFP (easier to ask forgiveness than permission) often works better than LBYL (look before you leap).</p>
<p><a href="http://my.safaribooksonline.com/0596001886/pythonian-CHP-6-SECT-6" rel="nofollow">Error checking strategies</a></p>
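<p>As a concrete illustration of the difference for this question's directory-creation case (a sketch; the helper names are made up):</p>

```python
import errno
import os

def make_dir_lbyl(path):
    # LBYL: another process can create the directory between the
    # exists() check and the mkdir, so this can still raise OSError.
    if not os.path.exists(path):
        os.mkdir(path)

def make_dir_eafp(path):
    # EAFP: just try it, and treat "already a directory" as success.
    try:
        os.mkdir(path)
    except OSError as e:
        if e.errno != errno.EEXIST or not os.path.isdir(path):
            raise
```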
| 2 | 2009-10-19T02:49:29Z | [
"python",
"caching",
"race-condition"
] |
Race-condition creating folder in Python | 1,586,648 | <p>I have a urllib2 caching module, which sporadically crashes because of the following code:</p>
<pre><code>if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
</code></pre>
<p>The problem is, by the time the second line is being executed, the folder may exist, and will error:</p>
<pre> File ".../cache.py", line 103, in __init__
os.mkdir(self.cache_location)
OSError: [Errno 17] File exists: '/tmp/examplecachedir/'</pre>
<p>This is because the script is simultaneously launched numerous times, by third-party code I have no control over.</p>
<p>The code (before I attempted to fix the bug) can be found <a href="http://github.com/dbr/tvdb%5Fapi/commit/e7429cce89fb2406efce6d81336e2ffa01479976">here, on github</a></p>
<p>I can't use the <a href="http://docs.python.org/library/tempfile.html#tempfile.mkdtemp">tempfile.mkstemp</a>, as it solves the race condition by using a randomly named directory (<a href="http://svn.python.org/projects/python/trunk/Lib/tempfile.py">tempfile.py source here</a>), which would defeat the purpose of the cache.</p>
<p>I don't want to simply discard the error, as the same Errno 17 error is raised if the folder name exists as a file (a different error), for example:</p>
<pre>$ touch blah
$ python
>>> import os
>>> os.mkdir("blah")
Traceback (most recent call last):
File "", line 1, in
OSError: [Errno 17] File exists: 'blah'
>>></pre>
<p>I cannot use <code>threading.RLock</code>, as the code is called from multiple processes.</p>
<p>So, I tried writing a simple file-based lock (<a href="http://github.com/dbr/tvdb%5Fapi/blob/343df228d519ac5c8895a1e6bdab8d259a64cdb9/cache.py">that version can be found here</a>), but this has a problem: it creates the lockfile one level up, so <code>/tmp/example.lock</code> for <code>/tmp/example/</code>, which breaks if you use <code>/tmp/</code> as a cache dir (as it tries to make <code>/tmp.lock</code>)..</p>
<p>In short, I need to cache <code>urllib2</code> responses to disc. To do this, I need to access a known directory (creating it, if required), in a multiprocess safe way. It needs to work on OS X, Linux and Windows.</p>
<p>Thoughts? The only alternative solution I can think of is to rewrite the cache module using SQLite3 storage, rather than files.</p>
| 13 | 2009-10-19T01:52:13Z | 8,192,589 | <p>The code I ended up with was:</p>
<pre><code>import os
import errno
folder_location = "/tmp/example_dir"
try:
os.mkdir(folder_location)
except OSError, e:
if e.errno == errno.EEXIST and os.path.isdir(folder_location):
# File exists, and it's a directory,
# another process beat us to creating this dir, that's OK.
pass
else:
# Our target dir exists as a file, or different error,
# reraise the error!
raise
</code></pre>
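<p>To sanity-check the pattern, you can hammer one target path concurrently. The original problem involves multiple processes, but threads exercise the same code path; a quick sketch using a throwaway temp directory:</p>

```python
import errno
import os
import tempfile
import threading

def safe_mkdir(path):
    # Same pattern as above, wrapped in a function: try the mkdir
    # unconditionally and treat "already a directory" as success.
    try:
        os.mkdir(path)
    except OSError as e:
        if e.errno == errno.EEXIST and os.path.isdir(path):
            pass  # another caller beat us to it; that's fine
        else:
            raise

# Every concurrent call should succeed, whether or not the
# directory exists yet.
target = os.path.join(tempfile.mkdtemp(), 'cache')
threads = [threading.Thread(target=safe_mkdir, args=(target,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```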
| 4 | 2011-11-19T07:25:30Z | [
"python",
"caching",
"race-condition"
] |
Using multiprocessing pool of workers | 1,586,754 | <p>I have the following code written to put my lazy second CPU core to work. What the code basically does is first find the desired "sea" files in the directory hierarchy and later execute a set of external scripts to process these binary "sea" files, producing 50 to 100 text and binary files. As the title of the question suggests, it does this in a parallel fashion to increase the processing speed.</p>
<p>This question originates from the long discussion we have been having on the IPython users list, titled "<a href="http://article.gmane.org/gmane.comp.python.ipython.user/4765">Cannot start ipcluster</a>", which started with my experimentation with IPython's parallel processing functionalities.</p>
<p>The issue is I can't get this code running correctly. If the folders that contain "sea" files house only "sea" files, the script finishes its execution without fully performing the external script runs. (Say I have 30-50 external scripts to run, but my multiprocessing-enabled script exits after executing only the first script in the external script chain.) Interestingly, if I run this script on an already-processed folder (that is, where the "sea" files were processed beforehand and the output files are already in that folder) then it runs, but this time I get speed-ups of about 2.4 to 2.7X with respect to linear processing timings. That is higher than expected, since I only have a Core 2 Duo 2.5 GHz CPU in my laptop. Although I have a CUDA-powered GPU, it has nothing to do with my current parallel computing struggle :)</p>
<p>What do you think might be the source of this issue?</p>
<p>Thank you for all comments and suggestions.</p>
<pre><code>#!/usr/bin/env python
from multiprocessing import Pool
from subprocess import call
import os
def find_sea_files():
file_list, path_list = [], []
init = os.getcwd()
for root, dirs, files in os.walk('.'):
dirs.sort()
for file in files:
if file.endswith('.sea'):
file_list.append(file)
os.chdir(root)
path_list.append(os.getcwd())
os.chdir(init)
return file_list, path_list
def process_all(pf):
os.chdir(pf[0])
call(['postprocessing_saudi', pf[1]])
if __name__ == '__main__':
pool = Pool(processes=2) # start 2 worker processes
files, paths = find_sea_files()
pathfile = [[paths[i],files[i]] for i in range(len(files))]
pool.map(process_all, pathfile)
</code></pre>
| 10 | 2009-10-19T02:49:44Z | 1,830,785 | <p>There are several things I can think of:</p>
<p>1) Have you printed out the pathfiles? Are you sure that they are all properly generated?</p>
<p>a) I ask as your os.walk is a bit interesting; the dirs.sort() should be ok, but seems quite unnecessary. os.chdir() in general shouldn't be used; the restoration <strong>should</strong> be alright, but in general you should just be joining root onto init (e.g. with os.path.join) instead.</p>
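<p>For example, the walk could collect absolute paths with os.path.join and never change the working directory at all (a sketch of the same function):</p>

```python
import os

def find_sea_files(top='.'):
    # One pass over the tree; collects (filename, absolute directory)
    # pairs without ever calling os.chdir().
    file_list, path_list = [], []
    for root, dirs, files in os.walk(top):
        dirs.sort()
        for name in files:
            if name.endswith('.sea'):
                file_list.append(name)
                path_list.append(os.path.abspath(root))
    return file_list, path_list
```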
<p>2) I've seen multiprocessing on python2.6 have problems spawning subprocesses from pools. (I specifically had a script use multiprocessing to spawn subprocesses. Those subprocesses then could not correctly use multiprocessing — the pool locked up.) Try python2.5 with the multiprocessing backport.</p>
<p>3) Try <a href="http://www.picloud.com" rel="nofollow">picloud</a>'s cloud.mp module (which wraps multiprocessing, but handles pools a tad differently) and see if that works. </p>
<p>You would do</p>
<pre><code>cloud.mp.join(cloud.mp.map(process_all, pathfile))
</code></pre>
<p>(Disclaimer: I am one of the developers of PiCloud)</p>
| 3 | 2009-12-02T05:13:11Z | [
"python",
"multiprocessing"
] |
Using multiprocessing pool of workers | 1,586,754 | <p>I have the following code written to put my lazy second CPU core to work. What the code basically does is first find the desired "sea" files in the directory hierarchy and later execute a set of external scripts to process these binary "sea" files, producing 50 to 100 text and binary files. As the title of the question suggests, it does this in a parallel fashion to increase the processing speed.</p>
<p>This question originates from the long discussion we have been having on the IPython users list, titled "<a href="http://article.gmane.org/gmane.comp.python.ipython.user/4765">Cannot start ipcluster</a>", which started with my experimentation with IPython's parallel processing functionalities.</p>
<p>The issue is I can't get this code running correctly. If the folders that contain "sea" files house only "sea" files, the script finishes its execution without fully performing the external script runs. (Say I have 30-50 external scripts to run, but my multiprocessing-enabled script exits after executing only the first script in the external script chain.) Interestingly, if I run this script on an already-processed folder (that is, where the "sea" files were processed beforehand and the output files are already in that folder) then it runs, but this time I get speed-ups of about 2.4 to 2.7X with respect to linear processing timings. That is higher than expected, since I only have a Core 2 Duo 2.5 GHz CPU in my laptop. Although I have a CUDA-powered GPU, it has nothing to do with my current parallel computing struggle :)</p>
<p>What do you think might be the source of this issue?</p>
<p>Thank you for all comments and suggestions.</p>
<pre><code>#!/usr/bin/env python
from multiprocessing import Pool
from subprocess import call
import os
def find_sea_files():
file_list, path_list = [], []
init = os.getcwd()
for root, dirs, files in os.walk('.'):
dirs.sort()
for file in files:
if file.endswith('.sea'):
file_list.append(file)
os.chdir(root)
path_list.append(os.getcwd())
os.chdir(init)
return file_list, path_list
def process_all(pf):
os.chdir(pf[0])
call(['postprocessing_saudi', pf[1]])
if __name__ == '__main__':
pool = Pool(processes=2) # start 2 worker processes
files, paths = find_sea_files()
pathfile = [[paths[i],files[i]] for i in range(len(files))]
pool.map(process_all, pathfile)
</code></pre>
| 10 | 2009-10-19T02:49:44Z | 1,941,748 | <p>I would start with getting a better feeling for what is going on with the worker process. The multiprocessing module comes with logging for its subprocesses if you need it. Since you have simplified the code to narrow down the problem, I would just debug with a few print statements, like so (or you can pretty-print the <strong>pf</strong> array):</p>
<pre><code>
def process_all(pf):
print "PID: ", os.getpid()
print "Script Dir: ", pf[0]
print "Script: ", pf[1]
os.chdir(pf[0])
call(['postprocessing_saudi', pf[1]])
if __name__ == '__main__':
pool = Pool(processes=2)
files, paths = find_sea_files()
pathfile = [[paths[i],files[i]] for i in range(len(files))]
pool.map(process_all, pathfile, 1) # Ensure the chunk size is 1
pool.close()
pool.join()
</code></pre>
<p>The version of Python that I have accomplished this with is 2.6.4.</p>
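<p>The multiprocessing logging mentioned above can be switched on like this (a sketch; DEBUG is very chatty, but it shows workers starting, picking up tasks, and exiting):</p>

```python
import logging
import multiprocessing

# Route multiprocessing's internal diagnostics to stderr.
logger = multiprocessing.log_to_stderr()
logger.setLevel(logging.DEBUG)
```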
| 6 | 2009-12-21T18:21:30Z | [
"python",
"multiprocessing"
] |
How to write a program that will automatically generate sample exam questions from a file? | 1,587,248 | <p>How can I write a program that will automatically generate a sample examination?</p>
<p>For example, the user will be prompted to supply four categories of questions to be included in a 6-question exam from the following list:</p>
<ol>
<li>Loops</li>
<li>Functions</li>
<li>Decisions</li>
<li>Data Types</li>
<li>Built-in functions</li>
<li>Recursion</li>
<li>Algorithms</li>
<li>Top-down design</li>
<li>Objects</li>
</ol>
<p>I also need to prompt the user to supply the total marks of the exam, and also prompt the user for how many multiple-choice questions there are in the exam.</p>
<p>The sample questions, their category, their value (number of marks) and
whether they are multiple choice questions are stored in a <code>Questions</code> file
that I need to open to read all of the questions. Then the program should read the Question file and randomly select questions according to what the user has entered.</p>
<p>The file format is a text file in notepad, and looks like the following:</p>
<pre><code>Multiple Choice Questions
Loops Questions
1. Which of the following is not a part of the IPO pattern?
a)Input b)Program c)Process d)Output
2. In Python, getting user input is done with a special expression called.
a)for b)read c)simultaneous assignment d)input
Function Questions
3. A Python function definition begins with
a)def b)define c)function d)defun
4.A function with no return statement returns
a)nothing b)its parameters c)its variables d)None
Decision Questions
5. An expression that evaluates to either true or false is called
a)operational b)Boolean c)simple d)compound
6.The literals for type bool are
a)T,F b)True,False c)true,false d)procrastination
DataTypes Questions
7. Which of the following is not a Python type-conversion function?
a)float b)round c)int d)long
8.The number of distinct values that can be represented using 5 bits is
a)5 b)10 c)32 d)50
Built-in Functions
9.The part of a program that uses a function is called the
a)user b)caller c)callee d)statement
10.A function can send output back to the program with a(n)
a)return b)print c)assignment d)SASE
Recursion
11.Recursions on sequence often use this as a base case:
a)0 b)1 c)an empty sequence d)None
12.The recursive Fibonacci function is inefficient because
a)it does many repeated computations b)recursion is inherently inefficient compared to iteration
c)calculating Fibonacci numbers is intractable d)fibbing is morally wrong
Algorithms
13.An algorithm is like a
a)newspaper b)venus flytrap c)drum d)recipe
14.Which algorithm requires time directly proportional to the size of the input?
a)linear search b)binary search c)merge sort d)selection sort
Top-down design
15.Which of the following is not one of the fundamental characteristics of object-oriented design/programming?
a)inheritance b)polymorphism c)generally d)encapsulation
Objects
16.What graphics class would be best for drawing a square?
a)Square b)Polygon c)Line d)Rectangle
17.A user interface organized around visual elements and users actions is called a (n)
a)GUI b)application c)windower d)API
</code></pre>
<p>This is the code I have so far. How can I improve it?</p>
<pre><code>def main():
infile = open("30075165.txt","r")
categories = raw_input("Please enter the four categories that are in the exam: ")
totalmarks = input("Please enter the total marks in the exam: ")
mc = input("Please enter the amount of multiple choice questions in the exam: ")
main()
</code></pre>
| -2 | 2009-10-19T06:39:52Z | 1,587,337 | <p>In the absence of additional information required to answer this specific problem, I will outline the general approach I would use to solve this problem. My solution would involve using <a href="http://en.wikipedia.org/wiki/LaTeX" rel="nofollow">LaTeX</a> to typeset the exam and the <a href="http://www.ctan.org/tex-archive/help/Catalogue/entries/probsoln.html" rel="nofollow"><code>probsoln</code></a> package to define problems.</p>
<p>The <code>probsoln</code> package provides a format for defining and labeling problems and storing them in files. It also provides the command <code>\loadrandomproblems[dataset]{n}{filename}</code> to load <code>n</code> randomly-selected problems from <code>filename</code> into <code>dataset</code>. This suggests storing problems by topic in several external files, e.g. <code>loops.tex</code>, <code>functions.tex</code>, etc. Then you could write a Python script to programmatically create the LaTeX source for the exam (<code>exam.tex</code>) based on user input.</p>
<p>loops.tex</p>
<pre><code>\newproblem{IPOpattern}{Which of the following is not a part of the IPO pattern?
\\ a) Input \quad b) Program \quad c) Process \quad d) Output}{The correct
answer goes here.}
\newproblem{input}{In Python, getting user input is done with a special expression
called: \\ a) for \quad b) read \quad c) simultaneous assignment \quad
d) input}{The correct answer goes here.}
</code></pre>
<p>exam.tex</p>
<pre><code>\documentclass{report}
\usepackage{probsoln}
\begin{document}
\hideanswers
\chapter{Loops}
% randomly select 2 problems from loops.tex and add to
% the data set called 'loops'
\loadrandomproblems[loops]{2}{loops}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[loops]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here
\chapter{Functions}
% randomly select 2 problems from functions.tex and add to
% the data set called 'functions'
\loadrandomproblems[functions]{2}{functions}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[functions]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here
\appendix
\chapter{Solutions}
\showanswers
\begin{itemize}
\foreachdataset{\thisdataset}{%
\foreachproblem[\thisdataset]{\item[\ref{prob:\thisproblemlabel}]\thisproblem}
}
\end{itemize}
\end{document}
</code></pre>
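<p>The Python side of this could be a small script that assembles <code>exam.tex</code> from the user's category choices. Here is a minimal sketch; the function name and the category-to-filename convention (e.g. "loops" maps to loops.tex) are just this example's assumptions:</p>

```python
def build_exam_tex(categories, problems_per_category=2):
    # Each category name doubles as the problem-file name,
    # matching the loops.tex / functions.tex layout above.
    chapters = []
    for cat in categories:
        chapters.append(
            "\\chapter{%s}\n"
            "\\loadrandomproblems[%s]{%d}{%s}\n"
            "\\begin{enumerate}\n"
            "\\foreachproblem[%s]{\\item\\thisproblem}\n"
            "\\end{enumerate}\n"
            % (cat.title(), cat, problems_per_category, cat, cat))
    return ("\\documentclass{report}\n\\usepackage{probsoln}\n"
            "\\begin{document}\n\\hideanswers\n"
            + "".join(chapters)
            + "\\end{document}\n")
```

<p>Writing the returned string to <code>exam.tex</code> and running LaTeX over it then produces a freshly randomized exam.</p>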
| 3 | 2009-10-19T07:15:13Z | [
"python"
] |
How to write a program that will automatically generate sample exam questions from a file? | 1,587,248 | <p>How can I write a program that will automatically generate a sample examination?</p>
<p>For example, the user will be prompted to supply four categories of questions to be included in a 6-question exam from the following list:</p>
<ol>
<li>Loops</li>
<li>Functions</li>
<li>Decisions</li>
<li>Data Types</li>
<li>Built-in functions</li>
<li>Recursion</li>
<li>Algorithms</li>
<li>Top-down design</li>
<li>Objects</li>
</ol>
<p>I also need to prompt the user to supply the total marks of the exam, and also prompt the user for how many multiple-choice questions there are in the exam.</p>
<p>The sample questions, their category, their value (number of marks) and
whether they are multiple choice questions are stored in a <code>Questions</code> file
that I need to open to read all of the questions. Then the program should read the Question file and randomly select questions according to what the user has entered.</p>
<p>The file format is a text file in notepad, and looks like the following:</p>
<pre><code>Multiple Choice Questions
Loops Questions
1. Which of the following is not a part of the IPO pattern?
a)Input b)Program c)Process d)Output
2. In Python, getting user input is done with a special expression called.
a)for b)read c)simultaneous assignment d)input
Function Questions
3. A Python function definition begins with
a)def b)define c)function d)defun
4.A function with no return statement returns
a)nothing b)its parameters c)its variables d)None
Decision Questions
5. An expression that evaluates to either true or false is called
a)operational b)Boolean c)simple d)compound
6.The literals for type bool are
a)T,F b)True,False c)true,false d)procrastination
DataTypes Questions
7. Which of the following is not a Python type-conversion function?
a)float b)round c)int d)long
8.The number of distinct values that can be represented using 5 bits is
a)5 b)10 c)32 d)50
Built-in Functions
9.The part of a program that uses a function is called the
a)user b)caller c)callee d)statement
10.A function can send output back to the program with a(n)
a)return b)print c)assignment d)SASE
Recursion
11.Recursions on sequence often use this as a base case:
a)0 b)1 c)an empty sequence d)None
12.The recursive Fibonacci function is inefficient because
a)it does many repeated computations b)recursion is inherently inefficient compared to iteration
c)calculating Fibonacci numbers is intractable d)fibbing is morally wrong
Algorithms
13.An algorithm is like a
a)newspaper b)venus flytrap c)drum d)recipe
14.Which algorithm requires time directly proportional to the size of the input?
a)linear search b)binary search c)merge sort d)selection sort
Top-down design
15.Which of the following is not one of the fundamental characteristics of object-oriented design/programming?
a)inheritance b)polymorphism c)generally d)encapsulation
Objects
16.What graphics class would be best for drawing a square?
a)Square b)Polygon c)Line d)Rectangle
17.A user interface organized around visual elements and users actions is called a (n)
a)GUI b)application c)windower d)API
</code></pre>
<p>This is the code I have so far. How can I improve it?</p>
<pre><code>def main():
infile = open("30075165.txt","r")
categories = raw_input("Please enter the four categories that are in the exam: ")
totalmarks = input("Please enter the total marks in the exam: ")
mc = input("Please enter the amount of multiple choice questions in the exam: ")
main()
</code></pre>
| -2 | 2009-10-19T06:39:52Z | 1,587,341 | <p>las3rjock has a good solution.</p>
<p>You could also move your input file to a SQLite database, using a normalised structure: e.g. Question table, Answer table (with FK to QuestionID), and generate a random answer based on the Question ID. You'll need a third table to keep track of the correct answer per question too.</p>
| 2 | 2009-10-19T07:16:38Z | [
"python"
] |
Custom Django-admin command issue | 1,587,282 | <p>Trying to understand how custom admin commands work, I have a project named "mailing" and an app inside it named "msystem". I have written this retrieve.py in the mailing/msystem/management/commands/ folder, and I have placed an empty <code>__init__.py</code> in both the management and commands folders.</p>
<pre><code>from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
</code></pre>
<p>I am, weirdly, getting an "indentation" error for the handle function when I try "python manage.py retrieve", even though it looks fine to me. Can you suggest what to do, or point out the problem?</p>
<p>Thanks</p>
| 0 | 2009-10-19T06:49:30Z | 1,587,318 | <p>If you're getting an "indentation error" and everything looks aligned, this usually suggests that you're mixing tabs and spaces.</p>
<p>I suggest ensuring that your module is only using spaces.</p>
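<p>A quick throwaway check for that (pass it the path of the suspect retrieve.py); on Python 2 you can also run the interpreter with the <code>-tt</code> flag, which turns inconsistent tab usage into an outright error:</p>

```python
def find_tab_lines(path):
    # Report every line whose leading whitespace contains a tab.
    bad = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            indent = line[:len(line) - len(line.lstrip())]
            if '\t' in indent:
                bad.append(lineno)
    return bad
```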
| 2 | 2009-10-19T07:06:11Z | [
"python",
"django",
"django-admin"
] |
Custom Django-admin command issue | 1,587,282 | <p>Trying to understand how custom admin commands work, I have a project named "mailing" and an app inside it named "msystem". I have written this retrieve.py in the mailing/msystem/management/commands/ folder, and I have placed an empty <code>__init__.py</code> in both the management and commands folders.</p>
<pre><code>from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
</code></pre>
<p>I am, weirdly, getting an "indentation" error for the handle function when I try "python manage.py retrieve", even though it looks fine to me. Can you suggest what to do, or point out the problem?</p>
<p>Thanks</p>
| 0 | 2009-10-19T06:49:30Z | 1,587,323 | <p>Your indentation needs to be consistent through the entire file, which it isn't in the snippet you posted above.</p>
<p>The "help = " line is indented four spaces after "class" but then the "x =" line is indented many more than four.</p>
<p>Maybe you are mixing spaces and tabs and thus have two tabs before "x ="?</p>
<p>Your code should look like this:</p>
<pre><code>from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
</code></pre>
| 4 | 2009-10-19T07:07:37Z | [
"python",
"django",
"django-admin"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner, by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a) - 1):
    a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be; I just need the slice on the right-hand side to be continually updated as it updates the slice on the left-hand side.</p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions. I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it with loops you would copy the array, then update one array from the copied array. Gauss-Seidel, however, uses information you have already updated for each of the i-1 and j-1 entries, so no copy is needed; the loop should essentially 'know', since the array has been re-evaluated after each single-element update. That is to say, every time we look up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous iteration will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel; it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Right? Then every time numpy loops, that slice will 'update', or really just replicate whatever happened in the update. To do this I need slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really, really clever person out there, that's awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,587,418 | <p>Just use a loop. I can't immediately think of any way to make the slice operator behave the way you're saying you want it to, except <em>maybe</em> by subclassing numpy's <code>array</code> and overriding the appropriate method with some sort of Python voodoo... but more importantly, the idea that <code>a[1:] = a[0:3]</code> should copy the first value of <code>a</code> into the next three slots seems completely nonsensical to me. I imagine that it could easily confuse anyone else who looks at your code (at least the first few times).</p>
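<p>For the Gauss-Seidel case from the question's update, the plain loop looks like this when written out runnably (a sketch; the convergence test is omitted):</p>

```python
import numpy as np

def gauss_seidel_sweep(u):
    # One in-place Gauss-Seidel sweep over the interior points.
    # Because u is updated element by element, each step sees the
    # values already written earlier in the same sweep.
    rows, cols = u.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u
```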
| 1 | 2009-10-19T07:49:35Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner, by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a) - 1):
    a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be; I just need the slice on the right-hand side to be continually updated as it updates the slice on the left-hand side.</p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions. I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it with loops you would copy the array, then update one array from the copied array. Gauss-Seidel, however, uses information you have already updated for each of the i-1 and j-1 entries, so no copy is needed; the loop should essentially 'know', since the array has been re-evaluated after each single-element update. That is to say, every time we look up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous iteration will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,588,565 | <p>It must have something to do with assigning a slice. Operators, however, as you may already know, do follow your expected behavior:</p>
<pre><code>>>> a = numpy.array([1,0,0,0])
>>> a[1:]+=a[:3]
>>> a
array([1, 1, 1, 1])
</code></pre>
<p>If you already have zeros in your real-world problem where your example does, then this solves it. Otherwise, at added cost, set them to zero either by multiplying by zero or assigning to zero, (whichever is faster)</p>
<p>edit:
I had another thought. You may prefer this:</p>
<pre><code>numpy.put(a,[1,2,3],a[:3])
</code></pre>
| 1 | 2009-10-19T13:03:14Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,588,711 | <p>It is not the correct logic.
I'll try to use letters to explain it.</p>
<p>Imagine <code>array = abcd</code> with a, b, c, d as elements.<br />
Now, <code>array[1:]</code> means every element from position <code>1</code> (starting from <code>0</code>) onward.<br />
In this case: <code>bcd</code>. And <code>array[0:3]</code> means the elements from position <code>0</code> up to the third one (the one in position <code>3-1</code>), in this case <code>abc</code>.</p>
<p>Writing something like:<br />
<code>array[1:] = array[0:3]</code></p>
<p>means: replace <code>bcd</code> with <code>abc</code></p>
<p>To obtain the output you want, now in python, you should use something like:</p>
<pre><code>a[1:] = a[0]
</code></pre>
| 1 | 2009-10-19T13:37:10Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,589,600 | <p>Numpy must be checking if the target array is the same as the input array when doing the setkey call. Luckily, there are ways around it. First, I tried using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.put.html" rel="nofollow"><code>numpy.put</code></a> instead</p>
<pre><code>In [46]: a = numpy.array([1,0,0,0])
In [47]: numpy.put(a,[1,2,3],a[0:3])
In [48]: a
Out[48]: array([1, 1, 1, 1])
</code></pre>
<p>And then from the documentation of that, I gave using flatiters a try (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flat.html" rel="nofollow"><code>a.flat</code></a>)</p>
<pre><code>In [49]: a = numpy.array([1,0,0,0])
In [50]: a.flat[1:] = a[0:3]
In [51]: a
Out[51]: array([1, 1, 1, 1])
</code></pre>
<p>But this doesn't solve the problem you had in mind</p>
<pre><code>In [55]: a = np.array([1,0,0,0])
In [56]: a.flat[1:] = 2*a[0:3]
In [57]: a
Out[57]: array([1, 2, 0, 0])
</code></pre>
<p>This fails because the multiplication is done before the assignment, not in parallel as you would like.</p>
<p>Numpy is designed for repeated application of the exact same operation in parallel across an array. To do something more complicated, unless you can decompose it in terms of functions like <code>numpy.cumsum</code> and <code>numpy.cumprod</code>, you'll have to resort to something like scipy.weave or writing the function in C. (See the <a href="http://www.scipy.org/PerformancePython" rel="nofollow">PerformancePython</a> page for more details.) (Also, I've never used weave, so I can't guarantee it will do what you want.)</p>
| 1 | 2009-10-19T16:04:55Z | [
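<p>For example, the doubling recurrence from the question (<code>a[i+1] = 2*a[i]</code>, giving <code>[1, 2, 4, 8]</code>) can be decomposed with <code>numpy.cumprod</code>, along these lines (a hedged sketch, assuming the recurrence is a pure geometric progression):</p>

```python
import numpy as np

a = np.array([1, 0, 0, 0])
# cumprod([1, 2, 2, 2]) = [1, 2, 4, 8]: each step multiplies the running
# product by 2, which is exactly the sequential update a[i+1] = 2*a[i].
a = a[0] * np.cumprod(np.array([1, 2, 2, 2]))
print(a)  # -> [1 2 4 8]
```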
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,591,185 | <p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html" rel="nofollow"><strong>accumulate</strong></a> is designed to do what you seem to want; that is, to proprigate an operation along an array. Here's an example:</p>
<pre><code>from numpy import *
a = array([1,0,0,0])
a[1:] = add.accumulate(a[0:3])
# a = [1, 1, 1, 1]
b = array([1,1,1,1])
b[1:] = multiply.accumulate(2*b[0:3])
# b = [1 2 4 8]
</code></pre>
<p>Another way to do this is to explicitly specify the result array as the input array. Here's an example:</p>
<pre><code>c = array([2,0,0,0])
multiply(c[:3], c[:3], c[1:])
# c = [ 2 4 16 256]
</code></pre>
| 2 | 2009-10-19T21:13:02Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,603,553 | <p>I would suggest Cython instead of looping in C. There <em>might</em> be some fancy numpy way of getting your example to work using a lot of intermediate steps... but since you know how to write it in C already, just write that quick little bit as a Cython function and let Cython's magic make the rest of the work easy for you.</p>
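<p>For reference, the function you would hand to Cython is essentially the plain nested loop from the question. A minimal, untyped Python version (which Cython can compile as-is, before you add <code>cdef</code> type annotations for speed) might look like this — a sketch, not the answerer's code:</p>

```python
import numpy as np

def gauss_seidel_sweep(u):
    # One in-place Gauss-Seidel sweep over the interior points.
    # Because u is updated element by element, later entries already
    # see the values written earlier in the same sweep.
    rows, cols = u.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u
```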
| 0 | 2009-10-21T20:54:35Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 1,637,711 | <p>You could have a look at np.lib.stride_tricks.</p>
<p>There is some information in these excellent slides:
<a href="http://mentat.za.net/numpy/numpy%5Fadvanced%5Fslides/" rel="nofollow">http://mentat.za.net/numpy/numpy_advanced_slides/</a></p>
<p>with stride_tricks starting at slide 29.</p>
<p>I'm not completely clear on the question though so can't suggest anything more concrete - although I would probably do it in cython or fortran with f2py or with weave. I'm liking fortran more at the moment because by the time you add all the required type annotations in cython I think it ends up looking less clear than the fortran.</p>
<p>There is a comparison of these approaches here:</p>
<p>www. scipy. org/ PerformancePython</p>
<p>(can't post more links as I'm a new user)
with an example that looks similar to your case.</p>
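<p>As a concrete illustration of the stride tricks mentioned above (a sketch; note that <code>as_strided</code> performs no bounds checking, so the shape and strides must be chosen carefully):</p>

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(6)
step = a.strides[0]
# Four overlapping length-3 windows that *share* a's memory -- views, not copies.
windows = as_strided(a, shape=(4, 3), strides=(step, step))
print(windows)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]
#  [3 4 5]]
a[2] = 99  # writing through a is visible in every window that overlaps index 2
```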
| 1 | 2009-10-28T14:41:47Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 2,613,622 | <p>Late answer, but this turned up on Google, so I'll point to the doc the OP probably wanted. Your problem is clear: when using NumPy slices, temporaries are created. Wrap your code in a quick call to weave.blitz to get rid of the temporaries and get the behaviour you want.</p>
<p>Read the weave.blitz section of <a href="http://wiki.scipy.org/PerformancePython#head-cafc55bbf8fd74071b2c2ebcfb6f24ed1989d540" rel="nofollow">PerformancePython tutorial</a> for full details.</p>
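<p>Those temporaries are also why the OP's plain-slice version behaves like Jacobi. You can see the copy semantics directly (a small sketch, checked against a recent NumPy, where overlapping slice assignment behaves as if the right-hand side were copied first):</p>

```python
import numpy as np

a = np.array([1, 0, 0, 0])
a[1:] = a[:3]  # the RHS is evaluated (copied) before the assignment happens
print(a)       # -> [1 1 0 0], i.e. no element-to-element propagation
```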
| 4 | 2010-04-10T13:59:41Z | [
"python",
"numpy",
"slice"
] |
Python/numpy tricky slicing problem | 1,587,367 | <p>I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:</p>
<p>Say we have a simple array like this:</p>
<pre><code>a = array([1, 0, 0, 0])
</code></pre>
<p>I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:</p>
<pre><code>a[1:] = a[0:3]
</code></pre>
<p>This would get the following result:</p>
<pre><code>a = array([1, 1, 1, 1])
</code></pre>
<p>Or something like this:</p>
<pre><code>a[1:] = 2*a[:3]
# a = [1,2,4,8]
</code></pre>
<p>To illustrate further I want the following kind of behaviour:</p>
<pre><code>for i in range(len(a)):
if i == 0 or i+1 == len(a): continue
a[i+1] = a[i]
</code></pre>
<p>Except I want the speed of numpy.</p>
<p>The default behavior of numpy is to take a copy of the slice, so what I actually get is this:</p>
<pre><code>a = array([1, 1, 0, 0])
</code></pre>
<p>I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. </p>
<p>Am I dreaming or is this magic possible?</p>
<p>Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes.</p>
<p>The algorithm is this:</p>
<pre><code>while not converged:
for i in range(len(u[:,0])):
for j in range(len(u[0,:])):
# skip over boundary entries, i,j == 0 or len(u)
u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])
</code></pre>
<p>Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there.</p>
<p>I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:</p>
<pre><code>u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
</code></pre>
<p>But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate, whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.</p>
<p>Anyway, if there is some really clever person out there who can do this, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.</p>
| 3 | 2009-10-19T07:28:08Z | 14,805,047 | <p>In the end I came up with the same problem as you. I had to resort to use Jacobi iteration and weaver:</p>
<pre><code> while (iter_n < max_time_steps):
expr = "field[1:-1, 1:-1] = (field[2:, 1:-1] "\
"+ field[:-2, 1:-1]+"\
"field[1:-1, 2:] +"\
"field[1:-1, :-2] )/4."
weave.blitz(expr, check_size=0)
#Toroidal conditions
field[:,0] = field[:,self.flow.n_x - 2]
field[:,self.flow.n_x -1] = field[:,1]
iter_n = iter_n + 1
</code></pre>
<p>It works and is fast, but it is not Gauss-Seidel, so convergence can be a bit tricky. The only option for Gauss-Seidel is a traditional loop with indexes.</p>
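<p>For what it's worth, on a modern NumPy the same Jacobi step can be written without weave at all, since the sliced right-hand side is fully evaluated before the assignment (a sketch, not the original author's code):</p>

```python
import numpy as np

def jacobi_step(field):
    # Average of the four neighbours, computed from the *old* field values:
    # the RHS temporaries make this a Jacobi update, just like the
    # weave.blitz expression above.
    out = field.copy()
    out[1:-1, 1:-1] = 0.25 * (field[2:, 1:-1] + field[:-2, 1:-1]
                              + field[1:-1, 2:] + field[1:-1, :-2])
    return out
```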
| 1 | 2013-02-11T02:21:10Z | [
"python",
"numpy",
"slice"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,587,434 | <p>Think about how would you write lazy-cat with recur in clojure. </p>
| -1 | 2009-10-19T07:55:27Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,587,582 | <p>I agree with Pavel, what is intuitive is subjective. Because I'm (slowly) starting to grok Haskell, I can tell what the Clojure code does, even though I've never written a line of Clojure in my life. So I would consider the Clojure line fairly intuitive, because I've seen it before and I'm adapting to a more functional mindset.</p>
<p>Let's consider the mathematical definition, shall we?</p>
<pre><code> { 0 if x = 0 }
F(x) = { 1 if x = 1 }
{ F(x - 1) + F(x - 2) if x > 1 }
</code></pre>
<p>This is less than ideal, formatting wise - those three brackets lined up should be one giant bracket - but who's counting? This is a pretty clear definition of the Fibonacci sequence to most people with a mathematical background. Let's look at the same thing in Haskell, because I know it better than Clojure:</p>
<pre><code>fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
</code></pre>
<p>This is a function, <code>fib</code>, that returns the nth Fibonacci number. Not exactly what we had in Python or Clojure, so let's fix that:</p>
<pre><code>fibs = map fib [0..]
</code></pre>
<p>This makes <code>fibs</code> an infinite list of Fibonacci numbers. <code>fibs !! 1</code> is 1, <code>fibs !! 2</code> is 1, <code>fibs !! 10</code> is 55, and so on. However, this is probably quite inefficient, even in a language that relies on heavily optimized recursion such as Haskell. Let's look at the Clojure definition in Haskell:</p>
<pre><code>fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
</code></pre>
<p>The first couple of characters are pretty simple: <code>0 : 1 :</code> makes a list with elements 0 and 1, and then some more. But what's all the rest of that? Well, <code>fibs</code> is the list we've already got, and <code>tail fibs</code> calls the <code>tail</code> function on our list so far, which returns the list starting at the 2nd element (sort of like in Python saying <code>fibs[1:]</code>). So we take these two lists - <code>fibs</code> and <code>tail fibs</code> - and we zip them together with the <code>+</code> function (operator) - that is, we add the matching elements of each. Let's look:</p>
<pre><code>fibs = 0 : 1 : ...
tail fibs = 1 : ...
zip result = 1 : ...
</code></pre>
<p>So our next element is 1! But then we add that back onto our <code>fibs</code> list, and look what we get:</p>
<pre><code>fibs = 0 : 1 : 1 : ...
tail fibs = 1 : 1 : ...
zip result = 1 : 2 : ...
</code></pre>
<p>What we have here is a <em>recursive list definition</em>. As we add more elements to the end of <code>fibs</code> with our <code>zipWith (+) fibs (tail fibs)</code> bit, more elements become avaliable for us to work with when adding elements. Note that Haskell by default is lazy, so just making an infinite list like that won't crash anything (just don't try to print it).</p>
<p>So while this is perhaps theoretically the same as our mathematical definition before, it saves the results in our <code>fibs</code> list (sort of an auto-memoization) and we rarely have the problems that might be experienced in a naive solution. For completeness, let's define our <code>fib</code> function in terms of our new <code>fibs</code> list:</p>
<pre><code>fib n = fibs !! n
</code></pre>
<p>If I didn't lose you yet, that's good, because that means you understand the Clojure code. Look:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>We make a list, <code>fib-seq</code>. It starts with two elements, <code>[0 1]</code>, just like our Haskell example. We do a lazy concatenation of these two initial elements with <code>(map + fib-seq (rest fib-seq))</code> - assuming <code>rest</code> does the same thing that <code>tail</code> does in Haskell, we're just combining our list with itself at a lower offset, and then combining these two lists with the <code>+</code> operator/function.</p>
<p>After working this through your head a few times, and exploring some other examples, this method of generating fibonacci series becomes at least semi-intuitive. It's at least intuitive enough for me to spot it in a language I don't know.</p>
| 36 | 2009-10-19T08:45:49Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,588,260 | <p>The <a href="http://en.wikibooks.org/wiki/Clojure%5FProgramming/Examples/Lazy%5FFibonacci" rel="nofollow">wiki</a> has an in depth treatment of various Fibonacci implementations in Clojure which may interest you if you haven't seen it already.</p>
| 5 | 2009-10-19T11:53:41Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,588,457 | <p>You should always use a language that fits the problem<sup><code>*</code></sup>. Your Python example is just lower level than the Clojure one -- easier to understand for beginners, but more tedious to write and parse for someone who knows the fitting higher level concepts.</p>
<p><sup><code>*</code></sup> <em>By the way, this also means that it is always nice to have a language that you can grow to fit.</em></p>
| 2 | 2009-10-19T12:37:36Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,590,285 | <pre><code>(take 5 fibs)
</code></pre>
<p>Seems about as intuitive as it could possibly get. I mean, that is exactly what you're doing. You don't even need to understand anything about the language, or even know what language that is, in order to know what should happen.</p>
| -5 | 2009-10-19T18:23:07Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,590,295 | <p>The Clojure code is intuitive to me (because I know Clojure). If you want something that might look more like something you're familiar with, you can try this, an efficient and overly-verbose recursive version:</p>
<pre><code>(use 'clojure.contrib.def) ; SO's syntax-highlighting still sucks
(defn-memo fib [n]
(cond (= n 0) 0
(= n 1) 1
:else (+ (fib (- n 1))
(fib (- n 2)))))
(def natural-numbers (iterate inc 0))
(def all-fibs
(for [n natural-numbers]
(fib n)))
</code></pre>
<p>But to someone who doesn't know what recursion or memoization are, that isn't going to be intuitive either. The very idea of "infinite lazy sequences" probably isn't intuitive to the majority of programmers. I can't guess what's in your brain so I don't know what a more intuitive Clojure function would look like to you, other than "looks more like Python".</p>
<p>To someone who doesn't know programming, all of this stuff is going to look like gibberish. What's a loop? What's a function? What is this <code>yield</code> thing? That's where we all start. Intuitiveness is a function of what you've learned so far. Non-intuitive code is code you aren't familiar with yet. Extrapolating from "I know this" to "It's inherently more intuitive" is wrong.</p>
| 6 | 2009-10-19T18:26:49Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,590,548 | <p>If you didn't know any imperative languages, would this be intuitive for you?</p>
<pre><code>a = a + 5
</code></pre>
<p>WTF? <code>a</code> <em>clearly</em> isn't the same as <code>a + 5</code>.</p>
<p>if <code>a = a + 5</code>, is <code>a + 5 = a</code>?</p>
<p>Why doesn't this work???</p>
<pre><code>if (a = 5) { // should be == in most programming languages
// do something
}
</code></pre>
<p>There are a lot of things that aren't clear unless you've seen it before somewhere else and understood its purpose. For a long time I haven't known the <code>yield</code> keyword and in effect I couldn't figure out what it did.</p>
<p>You think the imperative approach is more legible because you are used to it.</p>
| 12 | 2009-10-19T19:16:12Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 1,701,420 | <p>Here is one solution.</p>
<pre><code>(defn fib-seq [a b]
(cons (+ a b) (lazy-seq (fib-seq (+ a b) a))))
(def fibs (concat [1 1] (fib-seq 1 1)))
user=> (take 10 fibs)
(1 1 2 3 5 8 13 21 34 55)
</code></pre>
| 2 | 2009-11-09T14:41:07Z | [
"python",
"clojure"
] |
Lazy infinite sequences in Clojure and Python | 1,587,412 | <p>Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:</p>
<p>Clojure:</p>
<pre><code>(def fib-seq (lazy-cat [0 1]
(map + fib-seq (rest fib-seq))))
</code></pre>
<p>sample usage:</p>
<pre><code> (take 5 fib-seq)
</code></pre>
<p>Python:</p>
<pre><code>def fib():
a = b = 1
while True:
yield a
a,b = b,a+b
</code></pre>
<p>sample usage:</p>
<pre><code>for i in fib():
if i > 100:
break
else:
print i
</code></pre>
<p>Obviously the Python code is much more intuitive.</p>
<p>My question is: Is there a better (more intuitive and simple) implementation in Clojure ?</p>
<h1> Edit </h1>
<p>I'm opening a follow up question at
<a href="http://stackoverflow.com/questions/1590716/clojure-prime-numbers-lazy-sequence">Clojure Prime Numbers</a></p>
| 15 | 2009-10-19T07:47:41Z | 3,547,483 | <p>I like:</p>
<pre><code>(def fibs
(map first
(iterate
(fn [[ a, b ]]
[ b, (+ a b) ])
[0, 1])))
</code></pre>
<p>Which seems to have some similarities to the python/generator version.</p>
| 14 | 2010-08-23T12:25:30Z | [
"python",
"clojure"
] |
what is the correct way to process 4 bits inside an octet in python | 1,587,496 | <p>I'm writing an application to parse certain network packets. A packet field contains the protocol version number in an octet, so that 4 high bits are the 'major' and low 4 are the 'minor' version. Currently I am parsing them as follows, but am wondering if there is a prettier or more 'pythonic' way of doing it:</p>
<pre><code> v = ord(data[17])
major = (v & int('11110000',2) ) >> 4
minor = v & int('00001111',2)
</code></pre>
| 1 | 2009-10-19T08:18:26Z | 1,587,501 | <p>You can write binary literals like this<code>0b1111000</code></p>
<p>For your example I would proabbly use hex though</p>
<pre><code>v = ord(data[17])
major = (v & 0xF0) >> 4
minor = (v & 0x0F)
</code></pre>
<p>You might also want to use the <a href="http://docs.python.org/library/struct.html" rel="nofollow">struct</a> module to break the packet into its components</p>
| 2 | 2009-10-19T08:21:03Z | [
"python",
"bit-manipulation"
] |
what is the correct way to process 4 bits inside an octet in python | 1,587,496 | <p>I'm writing an application to parse certain network packets. A packet field contains the protocol version number in an octet, so that 4 high bits are the 'major' and low 4 are the 'minor' version. Currently I am parsing them as follows, but am wondering if there is a prettier or more 'pythonic' way of doing it:</p>
<pre><code> v = ord(data[17])
major = (v & int('11110000',2) ) >> 4
minor = v & int('00001111',2)
</code></pre>
| 1 | 2009-10-19T08:18:26Z | 1,587,509 | <p>It would be neater to use literals instead of calling <code>int</code>. You can use binary literals or hex, for example:</p>
<pre><code>major = (v & 0xf0) >> 4
minor = (v & 0x0f)
</code></pre>
<p>Binary literals only work for Python 2.6 or later and are of the form <code>0b11110000</code>. If you are using Python 2.6 or later then you might want to look at the <code>bytearray</code> type as this will let you treat the data as binary and so not have to use the call to <code>ord</code>.</p>
<p>If you are parsing binary data and finding that you are having to do lots of bit wise manipulations then you might like to try a more general solution as there are some third-party module that specialise in this. One is <a href="http://hachoir.org/" rel="nofollow">hachoir</a>, and a lower level alternative is <a href="http://python-bitstring.googlecode.com" rel="nofollow">bitstring</a> (which I wrote). In this your parsing would become something like:</p>
<pre><code>major, minor = data.readlist('uint:4, uint:4')
</code></pre>
<p>which can be easier to manage if you're doing a lot of such reads.</p>
| 1 | 2009-10-19T08:23:11Z | [
"python",
"bit-manipulation"
] |
what is the correct way to process 4 bits inside an octet in python | 1,587,496 | <p>I'm writing an application to parse certain network packets. A packet field contains the protocol version number in an octet, so that 4 high bits are the 'major' and low 4 are the 'minor' version. Currently I am parsing them as follows, but am wondering if there is a prettier or more 'pythonic' way of doing it:</p>
<pre><code> v = ord(data[17])
major = (v & int('11110000',2) ) >> 4
minor = v & int('00001111',2)
</code></pre>
| 1 | 2009-10-19T08:18:26Z | 1,587,560 | <p>Just one hint, you'd better apply the mask for the major <em>after</em> shifting, in case it's a negative number and the sign is kept:</p>
<pre><code>major = (v >> 4) & 0x0f
minor = (v & 0x0f)
</code></pre>
| 0 | 2009-10-19T08:38:00Z | [
"python",
"bit-manipulation"
] |